Updates from: 09/24/2022 01:12:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
Previously updated : 06/20/2022 Last updated : 09/21/2022
The following sections cover network security groups and Inbound and Outbound po
### Inbound connectivity
-The following network security group Inbound rules are required for the managed domain to provide authentication and management services. Don't edit or delete these network security group rules for the virtual network subnet your managed domain is deployed into.
+The following network security group Inbound rules are required for the managed domain to provide authentication and management services. Don't edit or delete these network security group rules for the virtual network subnet for your managed domain.
| Inbound port number | Protocol | Source | Destination | Action | Required | Purpose |
|:--:|:--:|:-:|:--:|:--:|:--:|:--|
| 5986 | TCP | AzureActiveDirectoryDomainServices | Any | Allow | Yes | Management of your domain. |
| 3389 | TCP | CorpNetSaw | Any | Allow | Optional | Debugging for support. |
-An Azure standard load balancer is created that requires these rules to be place. This network security group secures Azure AD DS and is required for the managed domain to work correctly. Don't delete this network security group. The load balancer won't work correctly without it.
+Azure AD DS also relies on the Default Security rules AllowVnetInBound and AllowAzureLoadBalancerInBound.
++
+The AllowVnetInBound rule allows all traffic within the virtual network, which lets the domain controllers communicate and replicate properly and allows domain join and other domain services to work for domain members. For more information about required ports for Windows, see [Service overview and network port requirements for Windows](/troubleshoot/windows-server/networking/service-overview-and-network-port-requirements).
++
+The AllowAzureLoadBalancerInBound rule is also required so that the service can communicate over the load balancer to manage the domain controllers. This network security group secures Azure AD DS and is required for the managed domain to work correctly. Don't delete this network security group. The load balancer won't work correctly without it.
If needed, you can [create the required network security group and rules using Azure PowerShell](powershell-create-instance.md#create-a-network-security-group).
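For reference, a minimal PowerShell sketch of the two inbound rules from the table above, using the Az module, might look like the following. The rule names, priorities, resource group, and location are placeholders you would adjust to your environment:

```powershell
# Sketch: create the two required inbound rules and attach them to a new network security group.
$rulePSRemoting = New-AzNetworkSecurityRuleConfig -Name "AllowPSRemoting" -Description "Management of your domain" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 301 `
    -SourceAddressPrefix "AzureActiveDirectoryDomainServices" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "5986"

$ruleRD = New-AzNetworkSecurityRuleConfig -Name "AllowRD" -Description "Debugging for support" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 201 `
    -SourceAddressPrefix "CorpNetSaw" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "3389"

# Placeholder NSG name, resource group, and location.
New-AzNetworkSecurityGroup -Name "aadds-nsg" -ResourceGroupName "myResourceGroup" `
    -Location "westus2" -SecurityRules $rulePSRemoting, $ruleRD
```

You would then associate the network security group with the virtual network subnet that hosts the managed domain.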
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
Extending this scenario:
### Mapping employment status to account status
-By default, the Azure AD SuccessFactors connector uses the `activeEmploymentsCount` field of the `PersonEmpTerminationInfo` object to set account status. There is a known SAP SuccessFactors issue documented in [knowledge base article 3047486](https://userapps.support.sap.com/sap/support/knowledge/en/3047486) that at times this may disable the account of a terminated worker one day prior to the termination on the last day of work.
+By default, the Azure AD SuccessFactors connector uses the `activeEmploymentsCount` field of the `PersonEmpTerminationInfo` object to set account status. There is a known SAP SuccessFactors issue documented in [knowledge base article 3047486](https://launchpad.support.sap.com/#/notes/3047486) that at times this may disable the account of a terminated worker one day prior to the termination on the last day of work.
If you are running into this issue or prefer mapping employment status to account status, you can update the mapping to expand the `emplStatus` field and use the employment status code present in the field `emplStatus.externalCode`. Based on [SAP support note 2505526](https://launchpad.support.sap.com/#/notes/2505526), here is a list of employment status codes that you can retrieve in the provisioning app. * A = Active
active-directory Permissions Management Trial Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-trial-playbook.md
Use the **Activity triggers** dashboard to view information and set alerts and t
- **Group entitlements and Usage reports:** Provides guidance on cleaning up directly assigned permissions.
- **Access Key Entitlements and Usage reports**: Identifies high-risk service principals with old secrets that haven't been rotated every 90 days (best practice) or decommissioned due to lack of use (as recommended by the Cloud Security Alliance).
-## Next Steps
-For more information about Permissions Management, see:
-
-**Microsoft Docs**: [Visit Docs](../cloud-infrastructure-entitlement-management/index.yml).
+## Next steps
+
+For more information about Permissions Management, see:
+
+**Microsoft Learn**: [Permissions management](../cloud-infrastructure-entitlement-management/index.yml).
**Datasheet:** <https://aka.ms/PermissionsManagementDataSheet>
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps are confirmed to support this setting:
- Microsoft Cortana
- Microsoft Edge
- Microsoft Excel
+- Microsoft Launcher
- Microsoft Lists
- Microsoft Office
- Microsoft OneDrive
The following client apps are confirmed to support this setting:
- Microsoft Outlook
- Microsoft Planner
- Microsoft Power BI
+- Microsoft PowerApps
- Microsoft PowerPoint
- Microsoft SharePoint
- Microsoft Teams
The following client apps are confirmed to support this setting:
- MultiLine for Intune
- Nine Mail - Email and Calendar
- Notate for Intune
+- Yammer (iOS and iPadOS)
This list isn't all-encompassing. If your app isn't in this list, check with the application vendor to confirm support.
active-directory Require Tou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/require-tou.md
Title: Conditional Access require terms of use - Azure Active Directory
-description: In this quickstart, you learn how you can require that your terms of use are accepted before access to selected cloud apps is granted by Azure Active Directory Conditional Access.
+ Title: Quickstart require Terms of Use at sign-in
+description: Quickstart require terms of use acceptance before access to selected cloud apps is granted with Azure Active Directory Conditional Access.
+ Previously updated : 08/05/2022 Last updated : 09/22/2022
-#Customer intent: As an IT admin, I want to ensure that users have accepted my terms of use before accessing selected cloud apps, so that I have a consent from them.
# Quickstart: Require terms of use to be accepted before accessing cloud apps
-Before accessing certain cloud apps in your environment, you might want to get consent from users in form of accepting your terms of use (ToU). Azure Active Directory (Azure AD) Conditional Access provides you with:
--- A simple method to configure ToU-- The option to require accepting your terms of use through a Conditional Access policy -
-This quickstart shows how to configure an [Azure AD Conditional Access policy](./overview.md) that requires a ToU to be accepted for a selected cloud app in your environment.
--
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+In this quickstart, you'll configure a Conditional Access policy in Azure Active Directory (Azure AD) to require users to accept terms of use.
## Prerequisites

To complete the scenario in this quickstart, you need:

-- **Access to an Azure AD Premium edition** - Azure AD Conditional Access is an Azure AD Premium capability.
-- **A test account called Isabella Simonsen** - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users-azure-active-directory.md#add-a-new-user).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure AD Premium P1 or P2 - Azure AD Conditional Access is an Azure AD Premium capability. You can sign up for a trial in the Azure portal.
+- A test account to sign in with - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users-azure-active-directory.md#add-a-new-user).
-## Test your sign-in
+## Sign-in without terms of use
The goal of this step is to get an impression of the sign-in experience without a Conditional Access policy.
-**To test your sign-in:**
-
-1. Sign in to your [Azure portal](https://portal.azure.com/) as Isabella Simonsen.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as your test user.
1. Sign out.

## Create your terms of use

This section provides you with the steps to create a sample ToU. When you create a ToU, you select a value for **Enforce with Conditional Access policy templates**. Selecting **Custom policy** opens the dialog to create a new Conditional Access policy as soon as your ToU has been created.
-**To create your terms of use:**
-
1. In Microsoft Word, create a new document.
1. Type **My terms of use**, and then save the document on your computer as **mytou.pdf**.
-1. Sign in to your [Azure portal](https://portal.azure.com) as Global Administrator, Security Administrator, or a Conditional Access Administrator.
-1. Search for and select **Azure Active Directory**. From the menu on the left-hand side select **Security**.
-
- ![Azure Active Directory](./media/require-tou/02.png)
-
-1. Select **Conditional Access**.
-
- ![Conditional Access](./media/require-tou/03.png)
-
-1. In the **Manage** section, click **Terms of use**.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Conditional Access Administrator, Security Administrator, or a Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
- :::image type="content" source="./media/require-tou/04.png" alt-text="Screenshot of the Manage section of the Azure Active Directory page. The Terms of use item is highlighted." border="false":::
-1. In the menu on the top, click **New terms**.
+ :::image type="content" source="media/require-tou/terms-of-use-azure-ad-conditional-access.png" alt-text="Screenshot of terms of use shown in the Azure portal highlighting the new terms button." lightbox="media/require-tou/terms-of-use-azure-ad-conditional-access.png":::
- :::image type="content" source="./media/require-tou/05.png" alt-text="Screenshot of a menu in the Azure Active Directory page. The New terms item is highlighted." border="false":::
+1. In the menu on the top, select **New terms**.
-1. On the **New terms of use** page:
+ :::image type="content" source="media/require-tou/new-terms-of-use-creation.png" alt-text="Screenshot that shows creating a new terms of use policy in the Azure portal." lightbox="media/require-tou/new-terms-of-use-creation.png":::
- :::image type="content" source="./media/require-tou/112.png" alt-text="Screenshot of the New terms of use page, with the name, display name, document, language, conditional access, and expanding terms toggle highlighted." border="false":::
+1. In the **Name** textbox, type **My TOU**.
+1. Upload your terms of use PDF file.
+1. Select your default language.
+1. In the **Display name** textbox, type **My TOU**.
+1. As **Require users to expand the terms of use**, select **On**.
+1. As **Enforce with Conditional Access policy templates**, select **Custom policy**.
+1. Select **Create**.
- 1. In the **Name** textbox, type **My TOU**.
- 1. In the **Display name** textbox, type **My TOU**.
- 1. Upload your terms of use PDF file.
- 1. As **Language**, select **English**.
- 1. As **Require users to expand the terms of use**, select **On**.
- 1. As **Enforce with Conditional Access policy templates**, select **Custom policy**.
- 1. Click **Create**.
+## Create a Conditional Access policy
-## Create your Conditional Access policy
+This section shows how to create the required Conditional Access policy.
-This section shows how to create the required Conditional Access policy. The scenario in this quickstart uses:
+The scenario in this quickstart uses:
- The Azure portal as a placeholder for a cloud app that requires your ToU to be accepted.
- Your sample user to test the Conditional Access policy.
-In your policy, set:
-
-| Setting | Value |
-| | |
-| Users and groups | Isabella Simonsen |
-| Cloud apps | Microsoft Azure Management |
-| Grant access | My TOU |
-- **To configure your Conditional Access policy:**
-1. On the **New** page, in the **Name** textbox, type **Require TOU for Isabella**.
-
- ![Name](./media/require-tou/71.png)
-
-1. In the **Assignment** section, click **Users and groups**.
-
- :::image type="content" source="./media/require-tou/06.png" alt-text="Screenshot of the Assignments section of an Azure portal pane that defines a policy. The Users and groups item is visible, with none selected." border="false":::
-
-1. On the **Users and groups** page:
-
- :::image type="content" source="./media/require-tou/24.png" alt-text="Screenshot of the Include tab of the Users and groups page. Select users and groups is selected, as is Users and groups. Select is highlighted." border="false":::
-
- 1. Click **Select users and groups**, and then select **Users and groups**.
- 1. Click **Select**.
- 1. On the **Select** page, select **Isabella Simonsen**, and then click **Select**.
- 1. On the **Users and groups** page, click **Done**.
-1. Click **Cloud apps**.
-
- :::image type="content" source="./media/require-tou/08.png" alt-text="Screenshot of the Assignments section of an Azure portal pane that defines a policy. The Cloud apps item is visible, with none selected." border="false":::
-
-1. On the **Cloud apps** page:
-
- ![Select cloud apps](./media/require-tou/26.png)
-
- 1. Click **Select apps**.
- 1. Click **Select**.
- 1. On the **Select** page, select **Microsoft Azure Management**, and then click **Select**.
- 1. On the **Cloud apps** page, click **Done**.
-1. In the **Access controls** section, click **Grant**.
-
- ![Access controls](./media/require-tou/10.png)
-
-1. On the **Grant** page:
-
- ![Grant](./media/require-tou/111.png)
-
+1. On the **New** page, in the **Name** textbox, type **Require Terms of Use**.
+1. Under Assignments, select **Users or workload identities**.
+ 1. Under Include, choose **Select users and groups** > **Users and groups**.
+ 1. Choose your test user, and choose **Select**.
+1. Under Assignments, select **Cloud apps or actions**.
+ 1. Under Include, choose **Select apps**.
+ 1. Select **Microsoft Azure Management**, and then choose **Select**.
+1. Under **Access controls**, select **Grant**.
1. Select **Grant access**.
- 1. Select **My TOU**.
- 1. Click **Select**.
-1. In the **Enable policy** section, click **On**.
-
- ![Enable policy](./media/require-tou/18.png)
-
-1. Click **Create**.
-
-## Evaluate a simulated sign-in
-
-Now that you have configured your Conditional Access policy, you probably want to know whether it works as expected. As a first step, use the Conditional Access what if policy tool to simulate a sign-in of your test user. The simulation estimates the impact this sign-in has on your policies and generates a simulation report.
-
-To initialize the **What If** policy evaluation tool, set:
--- **Isabella Simonsen** as user-- **Microsoft Azure Management** as cloud app-
-Clicking **What If** creates a simulation report that shows:
--- **Require TOU for Isabella** under **Policies that will apply**-- **My TOU** as **Grant Controls**.-
-![What if policy tool](./media/require-tou/79.png)
-
-**To evaluate your Conditional Access policy:**
-
-1. On the [Conditional Access - Policies](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ConditionalAccessBlade/Policies) page, in the menu on the top, click **What If**.
-
- ![What If](./media/require-tou/14.png)
-
-1. Click **Users**, select **Isabella Simonsen**, and then click **Select**.
-
- ![User](./media/require-tou/15.png)
-
-1. To select a cloud app:
-
- :::image type="content" source="./media/require-tou/16.png" alt-text="Screenshot of the Cloud apps section. Text indicates that one app is selected." border="false":::
-
- 1. Click **Cloud apps**.
- 1. On the **Cloud apps page**, click **Select apps**.
- 1. Click **Select**.
- 1. On the **Select** page, select **Microsoft Azure Management**, and then click **Select**.
- 1. On the cloud apps page, click **Done**.
-1. Click **What If**.
+ 1. Select the terms of use you created previously called **My TOU** and choose **Select**.
+1. In the **Enable policy** section, select **On**.
+1. Select **Create**.
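If you'd rather script the policy than use the portal, a minimal sketch with the Microsoft Graph PowerShell SDK might look like the following. The test user object ID and the terms of use agreement ID are placeholders, and the application ID shown for Microsoft Azure Management is an assumption you should verify for your cloud:

```powershell
# Sketch: create the quickstart's Conditional Access policy with Microsoft Graph PowerShell.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$params = @{
    displayName   = "Require Terms of Use"
    state         = "enabledForReportingButNotEnforced"   # switch to "enabled" once you've verified the effect
    conditions    = @{
        users        = @{ includeUsers = @("<test-user-object-id>") }
        applications = @{ includeApplications = @("797f4846-ba00-4fd7-ba43-dac1f8f63013") }   # Microsoft Azure Management (assumed well-known app ID)
    }
    grantControls = @{
        operator   = "OR"
        termsOfUse = @("<my-tou-agreement-id>")            # ID of the terms of use created earlier
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```

Starting in report-only mode lets you confirm the policy's effect before enforcing it.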
## Test your Conditional Access policy
-In the previous section, you have learned how to evaluate a simulated sign-in. In addition to a simulation, you should also test your Conditional Access policy to ensure that it works as expected.
+In the previous section, you created a Conditional Access policy requiring terms of use be accepted.
-To test your policy, try to sign-in to your [Azure portal](https://portal.azure.com) using your **Isabella Simonsen** test account. You should see a dialog that requires you to accept your terms of use.
+To test your policy, try to sign in to the [Azure portal](https://portal.azure.com) using your test account. You should see a dialog that requires you to accept your terms of use.
## Clean up resources

When no longer needed, delete the test user and the Conditional Access policy:

- If you don't know how to delete an Azure AD user, see [Delete users from Azure AD](../fundamentals/add-users-azure-active-directory.md#delete-a-user).
-- To delete your policy, select your policy, and then click **Delete** in the quick access toolbar.
-
- :::image type="content" source="./media/require-tou/33.png" alt-text="Screenshot showing a policy named Require M F A for Azure portal users. The shortcut menu is visible, with Delete highlighted." border="false":::
--- To delete your terms of use, select it, and then click **Delete terms** in the toolbar on top.
+- To delete your policy, select the ellipsis (...) next to your policies name, then select **Delete**.
+- To delete your terms of use, select it, and then select **Delete terms**.
:::image type="content" source="./media/require-tou/29.png" alt-text="Screenshot showing part of a table listing terms of use documents. The My T O U document is visible. In the menu, Delete terms is highlighted." border="false":::
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
When these conditions are met, the app can extract the claims challenge from the
```javascript
const authenticateHeader = response.headers.get('www-authenticate');
-const claimsChallenge = authenticateHeader
- .split(' ')
- .find((entry) => entry.includes('claims='))
- .split('claims="')[1]
- .split('",')[0];
+const claimsChallenge = parseChallenges(authenticateHeader).claims;
+
+// ...
+
+function parseChallenges(header) {
+ const schemeSeparator = header.indexOf(' ');
+ const challenges = header.substring(schemeSeparator + 1).split(',');
+ const challengeMap = {};
+
+ challenges.forEach((challenge) => {
+ const [key, value] = challenge.split('=');
+ challengeMap[key.trim()] = window.decodeURI(value.replace(/['"]+/g, ''));
+ });
+
+ return challengeMap;
+}
```

Your app would then use the claims challenge to acquire a new access token for the resource.
Your app would then use the claims challenge to acquire a new access token for t
let tokenResponse;
try {
    tokenResponse = await msalInstance.acquireTokenSilent({
-        claims: window.atob(claimsChallenge), // decode the base64 string
-        scopes: scopes, // e.g ['User.Read', 'Contacts.Read']
-        account: account, // current active account
-    });
+        claims: window.atob(claimsChallenge), // decode the base64 string
+        scopes: scopes, // e.g ['User.Read', 'Contacts.Read']
+        account: account, // current active account
+    });
} catch (error) {
    if (error instanceof InteractionRequiredAuthError) {
        tokenResponse = await msalInstance.acquireTokenPopup({
-            claims: window.atob(claimsChallenge), // decode the base64 string
-            scopes: scopes, // e.g ['User.Read', 'Contacts.Read']
-            account: account, // current active account
-        });
+            claims: window.atob(claimsChallenge), // decode the base64 string
+            scopes: scopes, // e.g ['User.Read', 'Contacts.Read']
+            account: account, // current active account
+        });
    }
}
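For ad hoc testing outside the app, the same header parsing can be sketched in PowerShell. The header value below is a made-up placeholder, not a real challenge:

```powershell
# Sketch: extract the claims challenge from a WWW-Authenticate header value, mirroring the JavaScript above.
$authenticateHeader = 'Bearer realm="", error="insufficient_claims", claims="<base64-encoded-claims-challenge>"'

$schemeSeparator = $authenticateHeader.IndexOf(' ')
$challenges = $authenticateHeader.Substring($schemeSeparator + 1) -split ','
$challengeMap = @{}

foreach ($challenge in $challenges) {
    $key, $value = $challenge -split '=', 2
    $challengeMap[$key.Trim()] = $value.Trim().Trim('"')
}

# The app would base64-decode this value (window.atob in the sample) before passing it to the token request.
$challengeMap['claims']
```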
active-directory Msal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-overview.md
Previously updated : 07/22/2021 Last updated : 09/20/2022
# Overview of the Microsoft Authentication Library (MSAL)
-The Microsoft Authentication Library (MSAL) enables developers to acquire [tokens](developer-glossary.md#security-token) from the Microsoft identity platform in order to authenticate users and access secured web APIs. It can be used to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports many different application architectures and platforms including .NET, JavaScript, Java, Python, Android, and iOS.
+The Microsoft Authentication Library (MSAL) enables developers to acquire [security tokens](developer-glossary.md#security-token) from the Microsoft identity platform to authenticate users and access secured web APIs. It can be used to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports many different application architectures and platforms including .NET, JavaScript, Java, Python, Android, and iOS.
-MSAL gives you many ways to get tokens, with a consistent API for a number of platforms. Using MSAL provides the following benefits:
+MSAL gives you many ways to get tokens, with a consistent API for many platforms. Using MSAL provides the following benefits:
* No need to directly use the OAuth libraries or code against the protocol in your application.
-* Acquires tokens on behalf of a user or on behalf of an application (when applicable to the platform).
-* Maintains a token cache and refreshes tokens for you when they are close to expire. You don't need to handle token expiration on your own.
-* Helps you specify which audience you want your application to sign in (your org, several orgs, work, and school and Microsoft personal accounts, social identities with Azure AD B2C, users in sovereign, and national clouds).
+* Acquires tokens on behalf of a user or application (when applicable to the platform).
+* Maintains a token cache and refreshes tokens for you when they're close to expiring. You don't need to handle token expiration on your own.
+* Helps you specify which audience you want your application to sign in. The sign in audience can include personal Microsoft accounts, social identities with Azure AD B2C organizations, work, school, or users in sovereign and national clouds.
* Helps you set up your application from configuration files. * Helps you troubleshoot your app by exposing actionable exceptions, logging, and telemetry. > [!VIDEO https://www.youtube.com/embed/zufQ0QRUHUk] ## Application types and scenarios
-Using MSAL, a token can be acquired for a number of application types: web applications, web APIs, single-page apps (JavaScript), mobile and native applications, and daemons and server-side applications.
+Using MSAL, a token can be acquired for many application types: web applications, web APIs, single-page apps (JavaScript), mobile and native applications, and daemons and server-side applications.
MSAL can be used in many application scenarios, including the following:
active-directory Add User Without Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-user-without-invite.md
Previously updated : 08/05/2020 Last updated : 09/15/2022
You can now invite guest users by sending out a [direct link](redemption-experience.md#redemption-through-a-direct-link) to a shared app. With this method, guest users no longer need to use the invitation email, except in some special cases. A guest user clicks the app link, reviews and accepts the privacy terms, and then seamlessly accesses the app. For more information, see [B2B collaboration invitation redemption](redemption-experience.md).
-Before this new method was available, you could invite guest users without requiring the invitation email by adding an inviter (from your organization or from a partner organization) to the **Guest inviter** directory role, and then having the inviter add guest users to the directory, groups, or applications through the UI or through PowerShell. (If using PowerShell, you can suppress the invitation email altogether). For example:
+Before this new method was available, you could invite guest users without requiring the invitation email by adding an inviter (from your organization or from a partner organization) to the [**Guest inviter** directory role](external-collaboration-settings-configure.md#assign-the-guest-inviter-role-to-a-user), and then having the inviter add guest users to the directory, groups, or applications through the UI or through PowerShell. (If using PowerShell, you can suppress the invitation email altogether). For example:
1. A user in the host organization (for example, WoodGrove) invites one user from the partner organization (for example, Sam@litware.com) as Guest.
2. The administrator in the host organization [sets up policies](external-collaboration-settings-configure.md) that allow Sam to identify and add other users from the partner organization (Litware). (Sam must be added to the **Guest inviter** role.)
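As an illustration of the PowerShell option mentioned above, here's a minimal sketch using the Microsoft Graph PowerShell SDK. The email address comes from the example, and the redirect URL is a placeholder for the app or portal you're sharing:

```powershell
# Sketch: create the guest user without sending the invitation email.
Connect-MgGraph -Scopes "User.Invite.All"

New-MgInvitation `
    -InvitedUserEmailAddress "Sam@litware.com" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage:$false
```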
active-directory Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/identity-providers.md
Previously updated : 08/30/2021 Last updated : 09/14/2022
External Identities offers a variety of identity providers.
- **Azure Active Directory accounts**: Guest users can use their Azure AD work or school accounts to redeem your B2B collaboration invitations or complete your sign-up user flows. [Azure Active Directory](azure-ad-account.md) is one of the allowed identity providers by default. No additional configuration is needed to make this identity provider available for user flows. -- **Microsoft accounts**: Guest users can use their own personal Microsoft account (MSA) to redeem your B2B collaboration invitations. When setting up a self-service sign-up user flow, you can add [Microsoft Account](microsoft-account.md) as one of the allowed identity providers. No additional configuration is needed to make this identity provider available for user flows.
+- **Microsoft accounts**: Guest users can use their own personal Microsoft account (MSA) to redeem your B2B collaboration invitations. When setting up a [self-service sign-up](self-service-sign-up-overview.md) user flow, you can add [Microsoft Account](microsoft-account.md) as one of the allowed identity providers. No additional configuration is needed to make this identity provider available for user flows.
- **Email one-time passcode**: When redeeming an invitation or accessing a shared resource, a guest user can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in. The email one-time passcode feature authenticates B2B guest users when they can't be authenticated through other means. When setting up a self-service sign-up user flow, you can add **Email One-Time Passcode** as one of the allowed identity providers. Some setup is required; see [Email one-time passcode authentication](one-time-passcode.md).
active-directory Active Directory Troubleshooting Support Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
- Title: Find help and open a support ticket - Azure Active Directory | Microsoft Docs
-description: Instructions about how to get help and open a support ticket for Azure Active Directory.
- Previously updated : 08/17/2022
-# Find help and open a support ticket for Azure Active Directory
-
-Microsoft provides global technical, pre-sales, billing, and subscription support for Azure Active Directory (Azure AD). Support is available both online and by phone for Microsoft Azure paid and trial subscriptions. Phone support and online billing support are available in additional languages.
-
-## Find help without opening a support ticket
-
-Before creating a support ticket, check out the following resources for answers and information.
-
-* For content such as how-to information or code samples for IT professionals and developers, see the [technical documentation for Azure Active Directory](../index.yml).
-
-* The [Microsoft Technical Community](https://techcommunity.microsoft.com/) is the place for our IT pro partners and customers to collaborate, share, and learn. The [Microsoft Technical Community Info Center](https://techcommunity.microsoft.com/t5/Community-Info-Center/ct-p/Community-Info-Center) is used for announcements, blog posts, ask-me-anything (AMA) interactions with experts, and more. You can also [join the community to submit your ideas](https://techcommunity.microsoft.com/t5/Communities/ct-p/communities).
-
-## Open a support ticket
-
-If you are unable to find answers by using self-help resources, you can open an online support ticket. You should open each support ticket for only a single problem, so that we can connect you to the support engineers who are subject matter experts for your problem. Also, Azure Active Directory engineering teams prioritize their work based on incidents that are generated, so you're often contributing to service improvements.
-
-### How to open a support ticket for Azure AD in the Azure portal
-
-> [!NOTE]
-> If you're using Azure AD B2C, open a support ticket by first switching to an Azure AD tenant that has an Azure subscription associated with it. Typically, this is your employee tenant or the default tenant created for you when you signed up for an Azure subscription. To learn more, see [how an Azure subscription is related to Azure AD](active-directory-how-subscriptions-associated-directory.md).
-
-1. Sign in to [the Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
-
-1. Scroll down to **Troubleshooting + Support** and select **New support request**.
-
-1. On the **Basics** blade, for **Issue type**, select **Technical**.
-
-1. Select your **Subscription**.
-
-1. For **Service**, select **Azure Active Directory**.
-
-1. Create a **Summary** for the request. The summary must be under 140 characters.
-
-1. Select a **Problem type**, and then select a category for that type. At this point, you are also offered self-help information for your problem category.
-
-1. Add the rest of your problem information and click **Next**.
-
-1. At this point, you are offered self-help solutions and documentation in the **Solutions** blade. If none of the solutions there resolve your problem, click **Next**.
-
-1. On the **Details** blade, fill out the required details and select a [Severity](https://azure.microsoft.com/support/plans/response/).
-
- ![image](https://user-images.githubusercontent.com/13383753/76565580-1c284900-6468-11ea-8c0f-85af98097b6f.png)
-
-1. Provide your contact information and select **Next**.
-
-1. Provide your contact information and select **Create**.
- ![Problem category self-help screenshot](./media/active-directory-troubleshooting-support-howto/open-support-ticket.png)
-
-### How to open a support ticket for Azure AD in the Microsoft 365 admin center
-
-> [!NOTE]
-> Support for Azure AD in the [Microsoft 365 admin center](https://admin.microsoft.com) is offered for administrators only.
-
-1. Sign in to the [Microsoft 365 admin center](https://admin.microsoft.com) with an account that has an Enterprise Mobility + Security (EMS) license.
-
-1. On the **Support** tile, select **New service request**:
-
-1. On the **Support Overview** page, select **Identity management** or **User and domain management**:
-
-1. For **Feature**, select the Azure AD feature for which you want support.
-
-1. For **Symptom**, select an appropriate symptom, summarize your issue and provide relevant details, and then select **Next**.
-
-1. Select one of the offered self-help resources, or select **Yes, continue** or **No, cancel request**.
-
-1. If you continue, you are asked for more details. You can attach any files you have that represent the problem, and then select **Next**.
-
-1. Provide your contact information and select **Submit request**.
-
-## Get phone support
-
-See the [Contact Microsoft for support](https://portal.office.com/Support/ContactUs.aspx) page to obtain support phone numbers.
-
-## Next steps
-
-* [Microsoft Tech Community](https://techcommunity.microsoft.com/)
-
-* [Technical documentation for Azure Active Directory](../index.yml)
active-directory How To Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-get-support.md
+
+ Title: Find help and get support for Azure Active Directory - Azure Active Directory | Microsoft Docs
+description: Instructions about how to get help and open a support request for Azure Active Directory.
+ Last updated : 09/22/2022
+# Find help and get support for Azure Active Directory
+
+Microsoft documentation and learning content provide quality support and troubleshooting information, but if you have a problem not covered in our content, there are several options to get help and support for Azure Active Directory (Azure AD). This article provides the options to find support from the Microsoft community and how to submit a support request with Microsoft.
+
+## Ask the Microsoft community
+
+Start with our Microsoft community members who may have an answer to your question. These communities provide support, feedback, and general discussions on Microsoft products and services. Before creating a support request, check out the following resources for answers and information.
+
+* For how-to information, quickstarts, or code samples for IT professionals and developers, see the [technical documentation at learn.microsoft.com](../index.yml).
+* Post a question to [Microsoft Q&A](/answers/products/) to get answers to your identity and access questions directly from Microsoft engineers, Azure Most Valuable Professionals (MVPs) and members of our expert community.
+* The [Microsoft Technical Community](https://techcommunity.microsoft.com/) is the place for our IT pro partners and customers to collaborate, share, and learn. Join the community to post questions and submit your ideas.
+* The [Microsoft Technical Community Info Center](https://techcommunity.microsoft.com/t5/Community-Info-Center/ct-p/Community-Info-Center) is used for announcements, blog posts, ask-me-anything (AMA) interactions with experts, and more.
+
+### Microsoft Q&A best practices
+
+Microsoft Q&A is Azure's recommended source for community support. We recommend using one of the following tags when posting a question. Check out our [tips for writing quality questions](/answers/support/quality-question).
+
+| Component/area| Tags |
+|||
+| Microsoft Authentication Library (MSAL) | [[msal]](/answers/topics/azure-ad-msal.html) |
+| Open Web Interface for .NET (OWIN) middleware | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
+| [Azure AD B2B / External Identities](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](/answers/topics/azure-ad-b2b.html) |
+| [Azure AD B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](/answers/topics/azure-ad-b2c.html) |
+| [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[azure-ad-graph]](/answers/topics/azure-ad-graph.html) |
+| All other authentication and authorization areas | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
+
+## Open a support request in Azure Active Directory
+
+If you're unable to find answers by using self-help resources, you can open an online support request. You should open a support request for only a single problem, so that we can connect you to the support engineers who are subject matter experts for your problem. Azure AD engineering teams prioritize their work based on incidents that are generated from support, so you're often contributing to service improvements.
+
+Support is available online and by phone for Microsoft Azure paid and trial subscriptions on global technical, pre-sales, billing, and subscription issues. Phone support and online billing support are available in additional languages.
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits your scenario, whether you're an IT admin managing your organization's tenant, a developer just starting your cloud journey, or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+- If you're not an Azure customer, you can open a support request with [Microsoft Support for business](https://support.serviceshub.microsoft.com/supportforbusiness).
+
+> [!NOTE]
+> If you're using Azure AD B2C, open a support ticket by first switching to an Azure AD tenant that has an Azure subscription associated with it. Typically, this is your employee tenant or the default tenant created for you when you signed up for an Azure subscription. To learn more, see [how an Azure subscription is related to Azure AD](active-directory-how-subscriptions-associated-directory.md).
+
+1. Sign in to [the Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
+
+1. Scroll down to **Troubleshooting + Support** and select **New support request**.
+
+1. Follow the prompts to provide us with information about the problem you're having.
+
+We'll walk you through some steps to gather information about your problem and help you solve it. Each step is described in the following sections.
+
+### 1. Problem description
+
+1. Under **Problem description**, enter a brief description in the **Summary** field.
+
+1. Select an **Issue type**.
+
+ Options are **Billing** and **Subscription management**. Once an option is selected, **Problem type** and **Problem subtype** fields appear, pre-populated with options associated with the initial selection.
+
+1. Select **Next** at the bottom of the page.
+
+### 2. Recommended solution
+
+Based on the information you provided, we'll show you recommended solutions you can use to try to resolve the problem. Solutions are written by Azure engineers and will solve most common problems.
+
+If you're still unable to resolve the issue, select **Next** to continue creating the support request.
+
+### 3. Additional details
+
+Next, we collect more details about the problem. Providing thorough and detailed information in this step helps us route your support request to the right engineer.
+
+1. Complete the **Problem details** section so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can upload a file, such as a log file or output from diagnostics. For more information on file uploads, see [File upload guidelines](../../azure-portal/supportability/how-to-manage-azure-support-request.md#file-upload-guidelines).
+
+1. In the **Advanced diagnostic information** section, select **Yes** or **No**.
+
+ - Selecting **Yes** allows Azure support to gather [advanced diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources.
+ - If you prefer not to share this information, select **No**. For more information about the types of files we might collect, see [Advanced diagnostic information logs](../../azure-portal/supportability/how-to-create-azure-support-request.md#advanced-diagnostic-information-logs) section.
+ - In some scenarios, an administrator in your tenant may need to approve Microsoft Support access to your Azure Active Directory identity data.
+
+1. In the **Support method** section, select your preferred contact method and support language.
+ - Some details are pre-selected for you.
+ - The support plan and severity are populated based on your plan.
+ - The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans).
+
+1. Next, complete the **Contact info** section so we know how to contact you.
+
+Select **Next** when you've completed all of the necessary information.
+
+### 4. Review + create
+
+Before you create your request, review all of the details that you'll send to support. You can select **Previous** to return to any tab if you need to make changes. When you're satisfied the support request is complete, select **Create**.
+
+A support engineer will contact you using the method you indicated. For information about initial response times, see [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/).
+
+## Get Microsoft 365 admin center support
+
+Support for Azure AD in the [Microsoft 365 admin center](https://admin.microsoft.com) is offered for administrators through the admin center. Review the [support for Microsoft 365 for business article](/microsoft-365/admin).
+
+## Stay informed
+Things can change quickly. The following resources provide updates and information on the latest releases.
+
+- [Azure Updates](https://azure.microsoft.com/updates/?category=identity): Learn about important product updates, roadmap, and announcements.
+
+- [What's new in Azure AD](whats-new.md): Get to know what's new in Azure AD including the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.
+
+- [Azure Active Directory Identity Blog](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity): Get news and information about Azure AD.
+
+## Next steps
+
+* [Post a question to Microsoft Q&A](/answers/products/)
+
+* [Join the Microsoft Technical Community](https://techcommunity.microsoft.com/)
+
+* [Learn about the diagnostic data Azure identity support can access](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/)
active-directory Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/support-help-options.md
- Title: Support and help options for Azure Active Directory
-description: Learn where to get help and find answers to your questions as you build and configure identity and access management (IAM) solutions that integrate with Azure Active Directory (Azure AD).
- Previously updated : 08/23/2021
-# Support and help options for Azure Active Directory
-
-If you need an answer to a question or help in solving a problem not covered in our documentation, it might be time to reach out to experts for help. Here are several suggestions for getting answers to your questions as you use Azure Active Directory (Azure AD).
-
-## Create an Azure support request
-
-<div class='icon is-large'>
- <img alt='Azure support' src='/media/logos/logo_azure.svg'>
-</div>
-
-Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're an IT admin managing your organization's tenant, a developer just starting your cloud journey, or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
--- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).--- If you're not an Azure customer, you can open a support request with [Microsoft Support for business](https://support.serviceshub.microsoft.com/supportforbusiness).-
-## Post a question to Microsoft Q&A
-
-<div class='icon is-large'>
- <img alt='Microsoft Q&A' src='../develop/media/common/question-mark-icon.png'>
-</div>
-
-Get answers to your identity and access management questions directly from Microsoft engineers, Azure Most Valuable Professionals (MVPs), and members of our expert community.
-
-[Microsoft Q&A](/answers/products/) is Azure's recommended source of community support.
-
-If you can't find an answer to your problem by searching Microsoft Q&A, submit a new question. Use one of following tags when you ask your [high-quality question](/answers/articles/24951/how-to-write-a-quality-question.html):
-
-| Component/area| Tags |
-|||
-| Active Directory Authentication Library (ADAL) | [[adal]](/answers/topics/azure-ad-adal-deprecation.html) |
-| Microsoft Authentication Library (MSAL) | [[msal]](/answers/topics/azure-ad-msal.html) |
-| Open Web Interface for .NET (OWIN) middleware | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
-| [Azure AD B2B / External Identities](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](/answers/topics/azure-ad-b2b.html) |
-| [Azure AD B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](/answers/topics/azure-ad-b2c.html) |
-| [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[azure-ad-graph]](/answers/topics/azure-ad-graph.html) |
-| All other authentication and authorization areas | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
-
-## Stay informed of updates and new releases
-
-<div class='icon is-large'>
- <img alt='Stay informed' src='/media/common/i_blog.svg'>
-</div>
--- [Azure Updates](https://azure.microsoft.com/updates/?category=identity): Learn about important product updates, roadmap, and announcements.--- [What's new in Azure AD](whats-new.md): Get to know what's new in Azure AD including the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.--- [Azure Active Directory Identity Blog](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity): Get news and information about Azure AD.--- [Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity/): Share your experiences, engage and learn from experts.
active-directory How To Connect Group Writeback Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-disable.md
Title: 'Disable group writeback in Azure AD Connect'
-description: This article describes how to disable Group Writeback in Azure AD Connect.
+description: This article describes how to disable group writeback in Azure AD Connect by using the wizard and PowerShell.
-# Disabling group writeback
-The following document will walk you thorough disabling group writeback. To disable group writeback for your organization, use the following steps:
+# Disable group writeback
+This article walks you through disabling group writeback in Azure Active Directory (Azure AD) Connect.
-1. Launch the Azure Active Directory Connect wizard and navigate to the Additional Tasks page. Select the Customize synchronization options task and click next.
-2. On the Optional Features page, uncheck group writeback. You'll receive a warning letting you know that groups will be deleted. Click Yes.
- >[!Important]
- >Disabling Group Writeback will cause any groups that were previously created by this feature to be deleted from your local Active Directory on the next synchronization cycle.
-
-3. Uncheck the box
-4. Click Next.
-5. Click Configure.
+## Disable group writeback by using the wizard
+1. Open the Azure AD Connect wizard and go to the **Additional Tasks** page. Select the **Customize synchronization options** task, and then select **Next**.
+2. On the **Optional Features** page, clear the checkbox for group writeback. In the warning that groups will be deleted, select **Yes**.
+
+ > [!IMPORTANT]
+ > Disabling group writeback sets the flags for full import and full synchronization in Azure AD Connect to `true`. It will cause any groups that were previously created by this feature to be deleted from your local Active Directory instance in the next synchronization cycle.
->[!Note]
->Disabling Group Writeback will set the Full Import and Full Synchronization flags to 'true' on the Azure Active Directory Connector, causing the rule changes to propagate through on the next synchronization cycle, deleting the groups that were previously written back to your Active Directory.
+3. Select **Next**.
+4. Select **Configure**.
-
-## Rolling back group writeback
+## Disable or roll back group writeback via PowerShell
-To disable or roll back group writeback via PowerShell, do the following:
+1. Open a PowerShell prompt as an administrator.
+2. Disable the sync scheduler after verifying that no synchronization operations are running:
-1. Open a PowerShell prompt as administrator.
-2. Disable the sync scheduler after verifying that no synchronization operations are running:
-``` PowerShell
- Set-ADSyncScheduler -SyncCycleEnabled $false
- ```
+ ``` PowerShell
+ Set-ADSyncScheduler -SyncCycleEnabled $false
+ ```
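To confirm that nothing is running before you disable the scheduler, a quick check like the following can help (a sketch; both cmdlets are part of the ADSync module that ships with Azure AD Connect):

```powershell
# Confirm that no sync cycle or connector run is in progress before disabling the scheduler.
Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1'

(Get-ADSyncScheduler).SyncCycleInProgress   # should return False
Get-ADSyncConnectorRunStatus                # returns no output when no run is in progress
```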
3. Import the ADSync module:
- ``` PowerShell
- Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1'
- ```
+
+ ``` PowerShell
+ Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1'
+ ```
4. Disable the group writeback feature for the tenant:
- ``` PowerShell
- Set-ADSyncAADCompanyFeature -GroupWritebackV2 $false
- ```
-5. Re-enable the Sync Scheduler
- ``` PowerShell
- Set-ADSyncScheduler -SyncCycleEnabled $true
- ```
+
+ ``` PowerShell
+ Set-ADSyncAADCompanyFeature -GroupWritebackV2 $false
+ ```
+5. Re-enable the sync scheduler:
+
+ ``` PowerShell
+ Set-ADSyncScheduler -SyncCycleEnabled $true
+ ```
-## Next Steps:
+## Next steps
- [Azure AD Connect group writeback](how-to-connect-group-writeback-v2.md)
- [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md)
active-directory How To Connect Group Writeback Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-enable.md
Title: 'Enable Azure AD Connect group writeback'
-description: This article describes how to enable Group Writeback in Azure AD Connect.
+description: This article describes how to enable group writeback in Azure AD Connect by using PowerShell and a wizard.
# Enable Azure AD Connect group writeback
-Group writeback is the feature that allows you to write cloud groups back to your on-premises Active Directory using Azure AD Connect Sync.
+Group writeback is a feature that allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync.
-The following document will walk you through enabling group writeback.
+This article walks you through enabling group writeback.
-## Deployment Steps
+## Deployment steps
-Group writeback requires enabling both the original and new versions of the feature. If the original version was previously enabled in your environment, you will only need to follow the first set of steps, as the second set of steps has already been completed.
+Group writeback requires enabling both the original and new versions of the feature. If the original version was previously enabled in your environment, you need to use only the first set of the following steps, because the second set of steps has already been completed.
->[!Note]
->It is recommended that you follow the [swing migration](how-to-upgrade-previous-version.md#swing-migration) method for rolling out the new group writeback feature in your environment. This method will provide a clear contingency plan in the event that a major rollback is necessary.
-
-
-### Step 1 - Enable group writeback using PowerShell
-
-1. On your Azure AD Connect server, open a PowerShell prompt as administrator.
-2. Disable the sync scheduler after verifying that no synchronization operations are running.
-
- ``` PowerShell
- Set-ADSyncScheduler -SyncCycleEnabled $false
- ```
-3. Import the ADSync module.
- ``` PowerShell
- Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1'
- ```
-4. Enable the group writeback feature for the tenant.
- ``` PowerShell
- Set-ADSyncAADCompanyFeature -GroupWritebackV2 $true
- ```
-5. Re-enable the Sync Scheduler.
- ``` PowerShell
- Set-ADSyncScheduler -SyncCycleEnabled $true
- ```
-
-### Step 2 – Enable group writeback using Azure AD Connect wizard
-If the original version of group writeback was not previously enabled, continue with the following steps.
-
+> [!NOTE]
+> We recommend that you follow the [swing migration](how-to-upgrade-previous-version.md#swing-migration) method for rolling out the new group writeback feature in your environment. This method will provide a clear contingency plan if a major rollback is necessary.
+
+### Enable group writeback by using PowerShell
+
+1. On your Azure AD Connect server, open a PowerShell prompt as an administrator.
+2. Disable the sync scheduler after you verify that no synchronization operations are running:
+
+ ``` PowerShell
+ Set-ADSyncScheduler -SyncCycleEnabled $false
+ ```
+3. Import the ADSync module:
+
+ ``` PowerShell
+ Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1'
+ ```
+4. Enable the group writeback feature for the tenant:
+
+ ``` PowerShell
+ Set-ADSyncAADCompanyFeature -GroupWritebackV2 $true
+ ```
+5. Re-enable the sync scheduler:
+
+ ``` PowerShell
+ Set-ADSyncScheduler -SyncCycleEnabled $true
+ ```
+
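To confirm that the feature change took effect before you continue, you can list the tenant feature flags (a sketch; Get-ADSyncAADCompanyFeature is part of the same ADSync module):

```powershell
# Verify that GroupWritebackV2 now reports as enabled for the tenant.
Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1'
Get-ADSyncAADCompanyFeature
```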
+### Enable group writeback by using the Azure AD Connect wizard
+If the original version of group writeback was not previously enabled, continue with the following steps:
+
+1. On your Azure AD Connect server, open the Azure AD Connect wizard.
+2. Select **Configure**, and then select **Next**.
+3. Select **Customize synchronization options**, and then select **Next**.
+4. On the **Connect to Azure AD** page, enter your credentials. Select **Next**.
+5. On the **Optional features** page, verify that the options you previously configured are still selected.
+6. Select **Group Writeback**, and then select **Next**.
+7. On the **Writeback** page, select an Active Directory organizational unit (OU) to store objects that are synchronized from Microsoft 365 to your on-premises organization. Select **Next**.
+8. On the **Ready to configure** page, select **Configure**.
+9. On the **Configuration complete** page, select **Exit**.
+
+After you finish this procedure, group writeback is configured automatically. If you experience permission issues while exporting the object to Active Directory, open Windows PowerShell as an administrator on the Azure AD Connect server. Then run the following commands. This step is optional.
-
-1. On your Azure AD Connect server, open the Azure AD Connect wizard, select **Configure** and then click **Next**.
-2. Select **Customize synchronization options** and then click **Next**.
-3. On the **Connect to Azure AD page**, enter your credentials. Click **Next**.
-4. On the **Optional features** page, verify that the options you previously configured are still selected.
-5. Select **Group Writeback** and then click **Next**.
-6. On the **Writeback page**, select an Active Directory organizational unit (OU) to store objects that are synchronized from Microsoft 365 to your on-premises organization, and then click **Next**.
-7. On the **Ready to configure page**, click **Configure**.
-8. When the wizard is complete, click **Exit** on the Configuration complete page. Group Writeback will be automatically configured.
-
- >[!Note]
- >The following is performed automatically after the last step above. However, if you experience permission issues while exporting the object to AD then do the following:
- >
- >Open the Windows PowerShell as an Administrator on the Azure Active Directory Connect server, and run the following commands. This step is optional
- >
- >``` PowerShell
- >$AzureADConnectSWritebackAccountDN = <MSOL_ account DN>
- >Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AdSyncConfig\AdSyncConfig.psm1"
- >
- ># To grant the <MSOL_account> permission to all domains in the forest:
- >Set-ADSyncUnifiedGroupWritebackPermissions -ADConnectorAccountDN $AzureADConnectSWritebackAccountDN
- >
- ># To grant the <MSOL_account> permission to specific OU (eg. the OU chosen to writeback Office 365 Groups to):
- >$GroupWritebackOU = <DN of OU where groups are to be written back to>
- >Set-ADSyncUnifiedGroupWritebackPermissions ΓÇôADConnectorAccountDN $AzureADConnectSWritebackAccountDN -ADObjectDN $GroupWritebackOU
- >```
-
+``` PowerShell
+$AzureADConnectSWritebackAccountDN = <MSOL_ account DN>
+Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AdSyncConfig\AdSyncConfig.psm1"
+# To grant the <MSOL_account> permission to all domains in the forest:
+Set-ADSyncUnifiedGroupWritebackPermissions -ADConnectorAccountDN $AzureADConnectSWritebackAccountDN
+
+# To grant the <MSOL_account> permission to a specific OU (for example, the OU chosen to write back Office 365 groups to):
+$GroupWritebackOU = <DN of OU where groups are to be written back to>
+Set-ADSyncUnifiedGroupWritebackPermissions -ADConnectorAccountDN $AzureADConnectSWritebackAccountDN -ADObjectDN $GroupWritebackOU
+```
## Optional configuration
-To make it easier to find groups being written back from Azure AD to Active Directory, there's an option to write back the group distinguished name with the cloud display name.
+To make it easier to find groups being written back from Azure AD to Active Directory, there's an option to write back the group distinguished name by using the cloud display name:
- Default format:
-CN=Group_3a5c3221-c465-48c0-95b8-e9305786a271, OU=WritebackContainer, DC=domain, DC=comΓÇ»
+`CN=Group_3a5c3221-c465-48c0-95b8-e9305786a271, OU=WritebackContainer, DC=domain, DC=com`
-- New Format:
-CN=Administrators_e9305786a271, OU=WritebackContainer, DC=domain, DC=comΓÇ»
+- New format:
+`CN=Administrators_e9305786a271, OU=WritebackContainer, DC=domain, DC=com`
-When configuring group writeback, there will be a checkbox at the bottom of the Group Writeback configuration window. Select the box to enable this feature.
+When you're configuring group writeback, a checkbox appears at the bottom of the configuration window. Select it to enable this feature.
->[!NOTE]
->Groups being written back from Azure AD to AD will have a source of authority of the cloud. This means any changes made on-premises to groups that are written back from Azure AD will be overwritten on the next sync cycle.
+> [!NOTE]
+> Groups being written back from Azure AD to Active Directory will have a source of authority in the cloud. Any changes made on-premises to groups that are written back from Azure AD will be overwritten in the next sync cycle.
-## Next steps:
+## Next steps
- [Azure AD Connect group writeback](how-to-connect-group-writeback-v2.md)
- [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md)
- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Title: 'Azure AD Connect: Group Writeback'
-description: This article describes Group Writeback in Azure AD Connect.
+ Title: 'Azure AD Connect: Group writeback'
+description: This article describes group writeback in Azure AD Connect.
-- # Plan for Azure AD Connect group writeback
-Group writeback allows you to write cloud groups back to your on-premises Active Directory using Azure AD Connect Sync. This feature enables you to manage groups in the cloud, while controlling access to on-premises applications and resources.
+Group writeback allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can use this feature to manage groups in the cloud, while controlling access to on-premises applications and resources.
-There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities:
+There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities:
-- Microsoft 365 groups can be written back as Distribution groups, Security groups, or Mail-Enabled Security groups. -- Azure AD Security groups can be written back as Security groups. -- All groups are written back with a group scope of universal. -- Groups with assigned and dynamic memberships can be written back. -- Directory settings can be configured to control whether newly created Microsoft 365 groups are written back by default. -- Group nesting in Azure AD will be written back if both groups exist in AD. -- Written back groups nested as members of on-premises AD synced groups will be synced up to Azure AD as nested. -- Devices that are members of writeback enabled groups in Azure AD, will be written back as members to AD. Azure AD registered and Azure AD Joined devices require device writeback to be enabled for group membership to be written back.-- The common name in an Active Directory groupΓÇÖs distinguished name can be configured to include the groupΓÇÖs display name when written back. -- The Azure AD Admin portal, Graph Explorer, and PowerShell can be used to configure which Azure AD groups are written back.
+- You can write back Microsoft 365 groups as distribution groups, security groups, or mail-enabled security groups.
+- You can write back Azure AD security groups as security groups.
+- All groups are written back with a group scope of **Universal**.
+- You can write back groups that have assigned and dynamic memberships.
+- You can configure directory settings to control whether newly created Microsoft 365 groups are written back by default.
+- Group nesting in Azure AD will be written back if both groups exist in Active Directory.
+- Written-back groups nested as members of on-premises Active Directory synced groups will be synced up to Azure AD as nested.
+- Devices that are members of writeback-enabled groups in Azure AD will be written back as members of Active Directory. Azure AD-registered and Azure AD-joined devices require device writeback to be enabled for group membership to be written back.
+- You can configure the common name in an Active Directory group's distinguished name to include the group's display name when it's written back.
+- You can use the Azure AD admin portal, Graph Explorer, and PowerShell to configure which Azure AD groups are written back.
-The new version is only available in the [Azure AD Connect version 2.0.89.0 or later](https://www.microsoft.com/download/details.aspx?id=47594). or later and must be enabled in addition to the original version.
+The new version is available only in [Azure AD Connect version 2.0.89.0 or later](https://www.microsoft.com/download/details.aspx?id=47594). It must be enabled in addition to the original version.
-The following document will walk you through what you need to know before you enable group writeback for your tenant.
+This article walks you through activities that you should complete before you enable group writeback for your tenant. These activities include discovering your current configuration, verifying the prerequisites, and choosing the deployment approach.
+## Discover if group writeback is enabled in your environment
-
-
-## Plan your implementation
-There are a few activities that you'll want to complete before enabling the latest public preview of group writeback. These activities include discovering your current configuration, verifying the prerequisites, and choosing the deployment approach.
+To discover if Azure AD Connect group writeback is already enabled in your environment, use the `Get-ADSyncAADCompanyFeature` PowerShell cmdlet. The cmdlet is part of the [ADSync PowerShell](reference-connect-adsync.md) module that's installed with Azure AD Connect.
-## Discovery
-The following sections describe various methods of discovery and how you can discover if group writeback in enabled.
+[![Screenshot of Get-ADSyncAADCompanyFeature cmdlet.](media/how-to-connect-group-writeback/powershell-1.png)](media/how-to-connect-group-writeback/powershell-1.png#lightbox)
-### Discover if group writeback is enabled in your environment
+`UnifiedGroupWriteback` refers to the original version. `GroupWritebackV2` refers to the new version.
-To discover if Azure AD Connect group writeback is already enabled in your environment, use the `Get-ADSyncAADCompanyFeature` PowerShell cmdlet.
+A value of `False` indicates that the feature is not enabled.
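
For example, a quick check might look like the following sketch. It assumes you run it on the Azure AD Connect server, where the ADSync module is installed in the default path:

``` PowerShell
# Load the ADSync module that ships with Azure AD Connect (default installation path)
Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1'

# Returns the tenant feature flags; inspect UnifiedGroupWriteback and GroupWritebackV2
Get-ADSyncAADCompanyFeature
```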
-The cmdlet is part of the [ADSync PowerShell](reference-connect-adsync.md) module that is installed with Azure AD Connect.
+## Discover the current writeback settings for existing Microsoft 365 groups
- [![Screenshot of Get-ADSyncAADCompanyFeature cmdlet.](media/how-to-connect-group-writeback/powershell-1.png)](media/how-to-connect-group-writeback/powershell-1.png#lightbox)
+To view the existing writeback settings on Microsoft 365 groups in the portal, go to each group and select its properties.
-The `UnifiedGroupWriteback` refers to the original version, while `GroupWritebackV2` refers to the new version.
+[![Screenshot of Microsoft 365 group properties.](media/how-to-connect-group-writeback/group-2.png)](media/how-to-connect-group-writeback/group-2.png#lightbox)
-A value of **False** indicates that the feature is not enabled.
+You can also view the writeback state via Microsoft Graph. For more information, see [Get group](/graph/api/group-get?tabs=http&view=graph-rest-beta).
-### Discover the current writeback settings for existing Microsoft 365 groups
+> Example: `GET https://graph.microsoft.com/beta/groups?$filter=groupTypes/any(c:c eq 'Unified')&$select=id,displayName,writebackConfiguration`
-You can view the existing writeback settings on Microsoft 365 groups in the portal. Navigate to the group and select its properties. You can see the Group write-back state on the group.
+> If `isEnabled` is `null` or `true`, the group will be written back.
- [![Screenshot of Microsoft 365 group properties.](media/how-to-connect-group-writeback/group-2.png)](media/how-to-connect-group-writeback/group-2.png#lightbox)
+> If `isEnabled` is `false`, the group won't be written back.
-You can also view the writeback state via MS Graph: [Get group](/graph/api/group-get?tabs=http&view=graph-rest-beta)
+Finally, you can view the writeback state via PowerShell by using the [Microsoft Identity Tools PowerShell module](https://www.powershellgallery.com/packages/MSIdentityTools/2.0.16).
- Example: `GET https://graph.microsoft.com/beta/groups?$filter=groupTypes/any(c:c eq 'Unified')&$select=id,displayName,writebackConfiguration`
+> Example: `Get-mggroup -filter "groupTypes/any(c:c eq 'Unified')" | Get-MsIdGroupWritebackConfiguration`
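
Expanding that one-liner, a minimal sketch might look like this. It assumes the Microsoft Graph PowerShell SDK and the MSIdentityTools module are installed, and that the signed-in account can read groups:

``` PowerShell
# Install the helper module if needed, then sign in with permission to read groups
Install-Module MSIdentityTools -Scope CurrentUser
Connect-MgGraph -Scopes 'Group.Read.All'

# List Microsoft 365 groups along with their current writeback configuration
Get-MgGroup -Filter "groupTypes/any(c:c eq 'Unified')" |
    Get-MsIdGroupWritebackConfiguration
```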
- - If isEnabled is null or true, the group will be written back.
- - If isEnabled is false, the group won't be written back.
+## Discover the default writeback setting for newly created Microsoft 365 groups
-Finally, you can also view the writeback state via PowerShell using the [Microsoft Identity Tools PowerShell Module](https://www.powershellgallery.com/packages/MSIdentityTools/2.0.16)
+For groups that haven't been created yet, you can view whether or not they'll be written back automatically.
- Example: `Get-mggroup -filter "groupTypes/any(c:c eq 'Unified')" | Get-MsIdGroupWritebackConfiguration`
+To see the default behavior in your environment for newly created groups, use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta) resource type in Microsoft Graph.
-### Discover the default writeback setting for newly created Microsoft 365 groups
+> Example: `GET https://graph.microsoft.com/beta/Settings`
-For groups that haven't been created yet, you can view whether or not they're going to be automatically written back.
+> If a `directorySetting` value of `Group.Unified` doesn't exist, the default directory setting is applied and newly created Microsoft 365 groups *will automatically* be written back.
-To see the default behavior in your environment for newly created groups use MS Graph: [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta)
+> If a `directorySetting` value of `Group.Unified` exists with a `NewUnifiedGroupWritebackDefault` value of `false`, Microsoft 365 groups *won't automatically* be enabled for writeback when they're created. If the value is not specified or is set to `true`, newly created Microsoft 365 groups *will automatically* be written back.
- Example: `GET https://graph.microsoft.com/beta/Settings`
+You can also use the PowerShell cmdlet [AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md).
- If a `directorySetting` named **Group.Unified** doesn't exist, the default directory setting is applied and newly created Microsoft 365 groups **will automatically** be written back.
+> Example: `(Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"} | FL *).values`
- If a `directorySetting` named **Group.Unified** exists with a `NewUnifiedGroupWritebackDefault` value of **false**, Microsoft 365 groups **won't automatically** be enabled for write-back when they're created. If the value is not specified or it is set to true, newly created Microsoft 365 groups **will automatically** be written back.
+> If nothing is returned, you're using the default directory settings. Newly created Microsoft 365 groups *will automatically* be written back.
+> If `directorySetting` is returned with a `NewUnifiedGroupWritebackDefault` value of `false`, Microsoft 365 groups *won't automatically* be enabled for writeback when they're created. If the value is not specified or is set to `true`, newly created Microsoft 365 groups *will automatically* be written back.
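
As a sketch, the same check with a friendlier result might look like this. It assumes the AzureAD PowerShell module is installed and that you sign in with `Connect-AzureAD`:

``` PowerShell
Connect-AzureAD

# Look for an existing Group.Unified settings object in the tenant
$setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq 'Group.Unified' }

if (-not $setting) {
    'Default directory settings are in effect: new Microsoft 365 groups will be written back.'
}
else {
    # Inspect the NewUnifiedGroupWritebackDefault value on the existing setting
    $setting.Values | Where-Object { $_.Name -eq 'NewUnifiedGroupWritebackDefault' }
}
```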
-You can also use the PowerShell cmdlet [AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md)
+## Discover if Active Directory has been prepared for Exchange
+To verify if Active Directory has been prepared for Exchange, see [Prepare Active Directory and domains for Exchange Server](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019#how-do-you-know-this-worked).
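
One quick check, sketched here under the assumption that the ActiveDirectory RSAT module is installed, is to read the Exchange schema version (the `rangeUpper` attribute of `ms-Exch-Schema-Version-Pt`). The domain components in the path are placeholders for your forest root:

``` PowerShell
# rangeUpper on ms-Exch-Schema-Version-Pt reflects the Exchange schema version in the forest
Get-ADObject 'CN=ms-Exch-Schema-Version-Pt,CN=Schema,CN=Configuration,DC=contoso,DC=com' `
    -Properties rangeUpper | Select-Object rangeUpper
```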
- Example: `(Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"} | FL *).values`
+## Meet prerequisites for public preview
+The following are prerequisites for group writeback:
- If nothing is returned, you are using the default directory settings, and newly created Microsoft 365 groups **will automatically** be written back.
+- An Azure AD Premium 1 license
+- Azure AD Connect version 2.0.89.0 or later
- If a `directorySetting` is returned with a `NewUnifiedGroupWritebackDefault` value of **false**, Microsoft 365 groups **won't automatically** be enabled for write-back when they're created. If the value is not specified or it is set to **true**, newly created Microsoft 365 groups **will automatically** be written back.
+An optional prerequisite is Exchange Server 2016 CU15 or later. You need it only for configuring cloud groups with an Exchange hybrid. For more information, see [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites). If you haven't [prepared Active Directory for Exchange](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019), mail-related attributes of groups won't be written back.
-### Discover if AD has been prepared for Exchange
-To verify if Active Directory has been prepared for Exchange, see [Prepare Active Directory and domains for Exchange Server, Active Directory Exchange Server, Exchange Server Active Directory, Exchange 2019 Active Directory](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019#how-do-you-know-this-worked)
+## Choose the right approach
+The right deployment approach for your organization depends on the current state of group writeback in your environment and the desired writeback behavior.
-## Public preview prerequisites
-The following are prerequisites for group writeback.
+When you're enabling group writeback, you'll experience the following default behavior:
- - An Azure AD Premium 1 license
- - Azure AD Connect version 2.0.89.0 or later
- - **Optional**: Exchange Server 2016 CU15 or later
- - Only needed for configuring cloud groups with Exchange Hybrid.
- - See [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites) for more information.
- - If you haven't [prepared AD for Exchange](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019), mail related attributes of groups won't be written back.
+- All existing Microsoft 365 groups will automatically be written back to Active Directory, including all Microsoft 365 groups created in the future. Azure AD security groups are not automatically written back. They must each be enabled for writeback.
+- Groups that have been written back won't be deleted in Active Directory if they're disabled for writeback or soft deleted. They'll remain in Active Directory until they're hard deleted in Azure AD.
-## Choosing the right approach
-Choosing the right deployment approach for your organization will depend on the current state of group writeback in your environment and the desired writeback behavior.
+ Changes made to these groups in Azure AD won't be written back until the groups are re-enabled for writeback or restored from a soft-delete state. This requirement helps protect the Active Directory groups from accidental deletion, if they're unintentionally disabled for writeback or soft deleted in Azure AD.
+- Microsoft 365 groups with more than 50,000 members and Azure AD security groups with more than 250,000 members can't be written back to on-premises.
-When enabling group writeback, the following default behavior will be experienced:
+To keep the default behavior, continue to the [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md) article.
-To keep the default behavior, continue to the [enable group writeback](how-to-connect-group-writeback-enable.md) article.
+You can modify the default behavior as follows:
-The default behavior can be modified as follows:
+- Only groups that are configured for writeback will be written back, including newly created Microsoft 365 groups.
+- Groups that are written to on-premises will be deleted in Active Directory when they're disabled for group writeback, soft deleted, or hard deleted in Azure AD.
+- Microsoft 365 groups with up to 250,000 members can be written back to on-premises.
+If you plan to make changes to the default behavior, we recommend that you do so before you enable group writeback. However, you can still modify the default behavior if group writeback is already enabled. For more information, see [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md).
+
+## Understand limitations of public preview
-If you plan to make changes to the default behavior, we recommend that you do so prior to enabling group writeback. However, you can still modify the default behavior, if group writeback is already enabled. To modify the default behavior, see [Modifying group writeback](how-to-connect-modify-group-writeback.md).
+Although this release has undergone extensive testing, you might still encounter issues. One of the goals of this public preview release is to find and fix any issues before the feature moves to general availability.
-
- ## Public preview limitationsΓÇ»
+Microsoft provides support for this public preview release, but it might not be able to immediately fix issues that you encounter. For this reason, we recommend that you use your best judgment before deploying this release in your production environment.
-While this release has undergone extensive testing, you may still encounter issues. One of the goals of this public preview release is to find and fix any such issues before moving to General Availability.ΓÇ» While support is provided for this public preview release, Microsoft may not always be able to fix all issues you may encounter immediately. For this reason, it's recommended that you use your best judgment before deploying this release in your production environment.ΓÇ» Limitations and known issues specific to Group writeback:
+These limitations and known issues are specific to group writeback:
-- Cloud [distribution list groups](/exchange/recipients-in-exchange-online/manage-distribution-groups/manage-distribution-groups) created in Exchange Online cannot be written back to AD, only Microsoft 365 and Azure AD security groups are supported. -- To be backwards compatible with the current version of group writeback, when you enable group writeback, all existing Microsoft 365 groups are written back and created as distribution groups, by default. This behavior can be modified by following the steps detailed in [Modifying group writeback](how-to-connect-modify-group-writeback.md). -- When you disable writeback for a group, the group won't automatically be removed from your on-premises Active Directory, until hard deleted in Azure AD. This behavior can be modified by following the steps detailed in [Modifying group writeback](how-to-connect-modify-group-writeback.md) -- Group Writeback does not support writeback of nested group members that have a scope of ‘Domain local’ in AD, since Azure AD security groups are written back with scope ‘Universal’. If you have a nested group like this, you'll see an export error in Azure AD Connect with the message “A universal group cannot have a local group as a member.” The resolution is to remove the member with scope ‘Domain local’ from the Azure AD group or update the nested group member scope in AD to ‘Global’ or ‘Universal’ group. -- Group Writeback only supports writing back groups to a single Organization Unit (OU). Once the feature is enabled, you cannot change the OU you selected. A workaround is to disable group writeback entirely in Azure AD Connect and then select a different OU when you re-enable the feature.  -- Nested cloud groups that are members of writeback enabled groups must also be enabled for writeback to remain nested in AD. -- Group Writeback setting to manage new security group writeback at scale is not yet available. You will need to configure writeback for each group. 
+- Cloud [distribution list groups](/exchange/recipients-in-exchange-online/manage-distribution-groups/manage-distribution-groups) created in Exchange Online can't be written back to Active Directory. Only Microsoft 365 and Azure AD security groups are supported.
+- When you enable group writeback, all existing Microsoft 365 groups are written back and created as distribution groups by default. This behavior is for backward compatibility with the current version of group writeback. You can modify this behavior by following the steps in [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md).
+- When you disable writeback for a group, the group won't automatically be removed from your on-premises Active Directory instance until you hard delete it in Azure AD. You can modify this behavior by following the steps in [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md).
+- Group writeback does not support writeback of nested group members that have a scope of **Domain local** in Active Directory, because Azure AD security groups are written back with a scope of **Universal**.
-
-
+ If you have a nested group like this, you'll see an export error in Azure AD Connect with the message "A universal group cannot have a local group as a member." The resolution is to remove the member with the **Domain local** scope from the Azure AD group, or update the nested group member scope in Active Directory to **Global** or **Universal**.
+- Group writeback supports writing back groups to only a single organizational unit (OU). After the feature is enabled, you can't change the OU that you selected. A workaround is to disable group writeback entirely in Azure AD Connect and then select a different OU when you re-enable the feature.
+- Nested cloud groups that are members of writeback-enabled groups must also be enabled for writeback to remain nested in Active Directory.
+- A group writeback setting to manage new security group writeback at scale is not yet available. You need to configure writeback for each group.
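
For example, here's a hedged sketch of enabling writeback for a single Azure AD security group by using the MSIdentityTools module mentioned earlier; the group ID is a placeholder, and the exact parameters may vary by module version:

``` PowerShell
Connect-MgGraph -Scopes 'Group.ReadWrite.All'

# Enable writeback for one group; repeat for each group that should be written back
Get-MgGroup -GroupId '<group object ID>' |
    Update-MsIdGroupWritebackConfiguration -WriteBackEnabled $true
```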
-## Next steps:
+## Next steps
- [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md)
- [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
-- # Modify Azure AD Connect group writeback default behavior
-Group writeback is the feature that allows you to write cloud groups back to your on-premises Active Directory using Azure AD Connect Sync. You can change the default behavior in the following ways:
+Group writeback is a feature that allows you to write cloud groups back to your on-premises Active Directory instance by using Azure Active Directory (Azure AD) Connect sync. You can change the default behavior in the following ways:
- - Only groups that are configured for write-back will be written back, including newly created Microsoft 365 groups.
- - Groups that are written back will be deleted in AD when they're either disabled for group writeback, soft deleted, or hard deleted in Azure AD.
- - Microsoft 365 groups with up to 250,000 members can be written back to on-premises.
+- Only groups that are configured for writeback will be written back, including newly created Microsoft 365 groups.
+- Groups that are written back will be deleted in Active Directory when they're disabled for group writeback, soft deleted, or hard deleted in Azure AD.
+- Microsoft 365 groups with up to 250,000 members can be written back to on-premises.
-The following document will walk you through deploying the options for modifying the default behaviors of Azure AD Connect group writeback.
+This article walks you through the options for modifying the default behaviors of Azure AD Connect group writeback.
## Considerations for existing deployments
-If the original version of group writeback is already enabled and in use in your environment, then all your Microsoft 365 groups have already been written back to AD. Instead of disabling all Microsoft 365 groups, you'll want to review any use of the previously written back groups, and disable only those that are no longer needed in on-premises AD.
+If the original version of group writeback is already enabled and in use in your environment, all your Microsoft 365 groups have already been written back to Active Directory. Instead of disabling all Microsoft 365 groups, review any use of the previously written-back groups. Disable only those that are no longer needed in on-premises Active Directory.
### Disable automatic writeback of all Microsoft 365 groups
- 1. To configure directory settings to disable automatic writeback of newly created Microsoft 365 groups, update the `NewUnifiedGroupWritebackDefault` setting to false.
- 2. To do this via PowerShell, use the: [New-AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md) cmdlet.
- Example:
- ```PowerShell
- $TemplateId = (Get-AzureADDirectorySettingTemplate | where {$_.DisplayName -eq "Group.Unified" }).Id
- $Template = Get-AzureADDirectorySettingTemplate | where -Property Id -Value $TemplateId -EQ
- $Setting = $Template.CreateDirectorySetting()
- $Setting["NewUnifiedGroupWritebackDefault"] = "False"
- New-AzureADDirectorySetting -DirectorySetting $Setting
- ```
- 3. Via MS Graph: [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta)
-
-### Disable writeback for each existing Microsoft 365 group.
--- Portal: [Entra admin portal](../enterprise-users/groups-write-back-portal.md) -- PowerShell: [Microsoft Identity Tools PowerShell Module](https://www.powershellgallery.com/packages/MSIdentityTools/2.0.16)
- Example: `Get-mggroup -filter "groupTypes/any(c:c eq 'Unified')" | Update-MsIdGroupWritebackConfiguration -WriteBackEnabled $false`
-- MS Graph: [Update group](/graph/api/group-update?tabs=http&view=graph-rest-beta)
+To configure directory settings to disable automatic writeback of newly created Microsoft 365 groups, use one of these methods:
-
+- Azure portal: Update the `NewUnifiedGroupWritebackDefault` setting to `false`.
+- PowerShell: Use the [New-AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md) cmdlet (if the `Group.Unified` setting object already exists, see the sketch after this list). For example:
+
+ ```PowerShell
+ $TemplateId = (Get-AzureADDirectorySettingTemplate | where {$_.DisplayName -eq "Group.Unified" }).Id
+ $Template = Get-AzureADDirectorySettingTemplate | where -Property Id -Value $TemplateId -EQ
+ $Setting = $Template.CreateDirectorySetting()
+ $Setting["NewUnifiedGroupWritebackDefault"] = "False"
+ New-AzureADDirectorySetting -DirectorySetting $Setting
+ ```
+
+- Microsoft Graph: Use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta) resource type.
+
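If a `Group.Unified` settings object already exists in the tenant, `New-AzureADDirectorySetting` fails. A hedged sketch of updating the existing object instead, assuming the AzureAD module:

``` PowerShell
$Setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }
$Setting["NewUnifiedGroupWritebackDefault"] = "False"

# Apply the change to the existing directory settings object
Set-AzureADDirectorySetting -Id $Setting.Id -DirectorySetting $Setting
```
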
+### Disable writeback for each existing Microsoft 365 group
-## Delete groups when disabled for writeback or soft deleted
-
->[!Note]
->After deletion in AD, written back groups are not automatically restored from the AD recycle bin, if they're re-enabled for writeback or restored from soft delete state. New groups will be created. Deleted groups restored from the AD recycle bin, prior to being re-enabled for writeback or restored from soft delete state in Azure AD, will be joined to their respective Azure AD group.
-
- 1. On your Azure AD Connect server, open a PowerShell prompt as administrator.
- 2. Disable [Azure AD Connect sync scheduler](./how-to-connect-sync-feature-scheduler.md)
- ``` PowerShell
- Set-ADSyncScheduler -SyncCycleEnabled $false
- ```
-3. Create a custom synchronization rule in Azure AD Connect to delete written back groups when they're disabled for writeback or soft deleted
- ```PowerShell
- import-module ADSync
- $precedenceValue = Read-Host -Prompt "Enter a unique sync rule precedence value [0-99]"
-
- New-ADSyncRule `
- -Name 'In from AAD - Group SOAinAAD Delete WriteBackOutOfScope and SoftDelete' `
- -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947' `
- -Description 'Delete AD groups that fall out of scope of Group Writeback or get Soft Deleted in Azure AD' `
- -Direction 'Inbound' `
- -Precedence $precedenceValue `
- -PrecedenceAfter '00000000-0000-0000-0000-000000000000' `
- -PrecedenceBefore '00000000-0000-0000-0000-000000000000' `
- -SourceObjectType 'group' `
- -TargetObjectType 'group' `
- -Connector 'b891884f-051e-4a83-95af-2544101c9083' `
- -LinkType 'Join' `
- -SoftDeleteExpiryInterval 0 `
- -ImmutableTag '' `
- -OutVariable syncRule
-
- Add-ADSyncAttributeFlowMapping `
- -SynchronizationRule $syncRule[0] `
- -Destination 'reasonFiltered' `
- -FlowType 'Expression' `
- -ValueMergeType 'Update' `
- -Expression 'IIF((IsPresent([reasonFiltered]) = True) && (InStr([reasonFiltered], "WriteBackOutOfScope") > 0 || InStr([reasonFiltered], "SoftDelete") > 0), "DeleteThisGroupInAD", [reasonFiltered])' `
- -OutVariable syncRule
-
- New-Object `
- -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeCondition' `
- -ArgumentList 'cloudMastered','true','EQUAL' `
- -OutVariable condition0
-
- Add-ADSyncScopeConditionGroup `
- -SynchronizationRule $syncRule[0] `
- -ScopeConditions @($condition0[0]) `
- -OutVariable syncRule
+- Portal: Use the [Microsoft Entra admin portal](../enterprise-users/groups-write-back-portal.md).
+- PowerShell: Use the [Microsoft Identity Tools PowerShell module](https://www.powershellgallery.com/packages/MSIdentityTools/2.0.16). For example:
+
+ `Get-mggroup -filter "groupTypes/any(c:c eq 'Unified')" | Update-MsIdGroupWritebackConfiguration -WriteBackEnabled $false`
+- Microsoft Graph: Use the [Update group](/graph/api/group-update?tabs=http&view=graph-rest-beta) operation.
+
+## Delete groups when they're disabled for writeback or soft deleted
+
+> [!NOTE]
+> After you delete written-back groups in Active Directory, they're not automatically restored from the Active Directory Recycle Bin feature if they're re-enabled for writeback or restored from a soft-delete state. New groups will be created. Deleted groups that are restored from Active Directory Recycle Bin before they're re-enabled for writeback, or that are restored from a soft-delete state in Azure AD, will be joined to their respective Azure AD groups.
+
+1. On your Azure AD Connect server, open a PowerShell prompt as an administrator.
+2. Disable the [Azure AD Connect sync scheduler](./how-to-connect-sync-feature-scheduler.md):
+
+ ``` PowerShell
+ Set-ADSyncScheduler -SyncCycleEnabled $false
+ ```
+3. Create a custom synchronization rule in Azure AD Connect to delete written-back groups when they're disabled for writeback or soft deleted:
- New-Object `
- -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.JoinCondition' `
- -ArgumentList 'cloudAnchor','cloudAnchor',$false `
- -OutVariable condition0
-
- Add-ADSyncJoinConditionGroup `
- -SynchronizationRule $syncRule[0] `
- -JoinConditions @($condition0[0]) `
- -OutVariable syncRule
-
- Add-ADSyncRule `
- -SynchronizationRule $syncRule[0]
-
- Get-ADSyncRule `
- -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947'
- ```
-
-4. [Enable group writeback](how-to-connect-group-writeback-enable.md)
-5. Enable Azure AD Connect sync scheduler
- ``` PowerShell
- Set-ADSyncScheduler -SyncCycleEnabled $true
- ```
-
->[!Note]
->Creating the synchronization rule will set the Full Synchronization flag to 'true' on the Azure Active Directory Connector, causing the rule changes to propagate through on the next synchronization cycle.
-
-## Writeback Microsoft 365 groups with up to 250,000 members
-
-Since the default sync rule, that limits the group size, is created when group writeback is enabled, the following steps must be completed after group writeback is enabled.
-
-1. On your Azure AD Connect server, open a PowerShell prompt as administrator.
-2. Disable [Azure AD Connect sync scheduler](./how-to-connect-sync-feature-scheduler.md)
- ``` PowerShell
- Set-ADSyncScheduler -SyncCycleEnabled $false
- ```
-3. Open the [synchronization rule editor](./how-to-connect-create-custom-sync-rule.md)
-4. Set the Direction to Outbound
-5. Locate and disable the ΓÇÿOut to AD ΓÇô Group Writeback Member LimitΓÇÖ synchronization rule
-6. Enable Azure AD Connect sync scheduler
-``` PowerShell
- Set-ADSyncScheduler -SyncCycleEnabled $true
-```
-
->[!Note]
->Disabling the synchronization rule will set the Full Synchronization flag to 'true' on the Active Directory Connector, causing the rule changes to propagate through on the next synchronization cycle.
+ ```PowerShell
+ import-module ADSync
+ $precedenceValue = Read-Host -Prompt "Enter a unique sync rule precedence value [0-99]"
+
+ New-ADSyncRule `
+ -Name 'In from AAD - Group SOAinAAD Delete WriteBackOutOfScope and SoftDelete' `
+ -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947' `
+ -Description 'Delete AD groups that fall out of scope of Group Writeback or get Soft Deleted in Azure AD' `
+ -Direction 'Inbound' `
+ -Precedence $precedenceValue `
+ -PrecedenceAfter '00000000-0000-0000-0000-000000000000' `
+ -PrecedenceBefore '00000000-0000-0000-0000-000000000000' `
+ -SourceObjectType 'group' `
+ -TargetObjectType 'group' `
+ -Connector 'b891884f-051e-4a83-95af-2544101c9083' `
+ -LinkType 'Join' `
+ -SoftDeleteExpiryInterval 0 `
+ -ImmutableTag '' `
+ -OutVariable syncRule
+
+ Add-ADSyncAttributeFlowMapping `
+ -SynchronizationRule $syncRule[0] `
+ -Destination 'reasonFiltered' `
+ -FlowType 'Expression' `
+ -ValueMergeType 'Update' `
+ -Expression 'IIF((IsPresent([reasonFiltered]) = True) && (InStr([reasonFiltered], "WriteBackOutOfScope") > 0 || InStr([reasonFiltered], "SoftDelete") > 0), "DeleteThisGroupInAD", [reasonFiltered])' `
+ -OutVariable syncRule
+
+ New-Object `
+ -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeCondition' `
+ -ArgumentList 'cloudMastered','true','EQUAL' `
+ -OutVariable condition0
+
+ Add-ADSyncScopeConditionGroup `
+ -SynchronizationRule $syncRule[0] `
+ -ScopeConditions @($condition0[0]) `
+ -OutVariable syncRule
+
+ New-Object `
+ -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.JoinCondition' `
+ -ArgumentList 'cloudAnchor','cloudAnchor',$false `
+ -OutVariable condition0
+
+ Add-ADSyncJoinConditionGroup `
+ -SynchronizationRule $syncRule[0] `
+ -JoinConditions @($condition0[0]) `
+ -OutVariable syncRule
+
+ Add-ADSyncRule `
+ -SynchronizationRule $syncRule[0]
+ Get-ADSyncRule `
+ -Identifier 'cb871f2d-0f01-4c32-a333-ff809145b947'
+ ```
+
+4. [Enable group writeback](how-to-connect-group-writeback-enable.md).
+5. Enable the Azure AD Connect sync scheduler:
+ ``` PowerShell
+ Set-ADSyncScheduler -SyncCycleEnabled $true
+ ```
-## Restoring from AD Recycle Bin
+> [!NOTE]
+> Creating the synchronization rule will set the flag for full synchronization to `true` on the Azure AD connector. This change will cause the rule changes to propagate through on the next synchronization cycle.
-If you're updating the default behavior to delete groups when disabled for writeback or soft deleted, we recommend that you enable the [Active Directory Recycle Bin](./how-to-connect-sync-recycle-bin.md) feature for your on-premises instances of Active Directory. This feature will allow you to manually restore previously deleted AD groups, so that they can be rejoined to their respective Azure AD groups, if they were accidentally disabled for writeback or soft deleted.
+## Write back Microsoft 365 groups with up to 250,000 members
-Prior to re-enabling for writeback, or restoring from soft delete in Azure AD, the group will first need to be restored in AD.
+Because the default sync rule that limits the group size is created when group writeback is enabled, you must complete the following steps after you enable group writeback:
+1. On your Azure AD Connect server, open a PowerShell prompt as an administrator.
+2. Disable the [Azure AD Connect sync scheduler](./how-to-connect-sync-feature-scheduler.md):
+ ``` PowerShell
+ Set-ADSyncScheduler -SyncCycleEnabled $false
+ ```
+3. Open the [synchronization rule editor](./how-to-connect-create-custom-sync-rule.md).
+4. Set the direction to **Outbound**.
+5. Locate and disable the **Out to AD – Group Writeback Member Limit** synchronization rule.
+6. Enable the Azure AD Connect sync scheduler:
+
+ ``` PowerShell
+ Set-ADSyncScheduler -SyncCycleEnabled $true
+ ```
+
+> [!NOTE]
+> Disabling the synchronization rule will set the flag for full synchronization to `true` on the Active Directory connector. This change will cause the rule changes to propagate through on the next synchronization cycle.
+
+## Restore from Active Directory Recycle Bin
+
+If you're updating the default behavior to delete groups when they're disabled for writeback or soft deleted, we recommend that you enable the [Active Directory Recycle Bin](./how-to-connect-sync-recycle-bin.md) feature for your on-premises instances of Active Directory. You can use this feature to manually restore previously deleted Active Directory groups so that they can be rejoined to their respective Azure AD groups, if they were accidentally disabled for writeback or soft deleted.
+
+Before you re-enable for writeback or restore from soft delete in Azure AD, you first need to restore the group in Active Directory.
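
A hedged sketch of restoring a deleted written-back group by using the ActiveDirectory RSAT module follows. The group name is a placeholder, and it assumes the Recycle Bin feature was already enabled when the group was deleted:

``` PowerShell
# Find the deleted group object by its last known name and restore it
Get-ADObject -Filter 'isDeleted -eq $true -and name -like "Group_3a5c3221*"' `
    -IncludeDeletedObjects |
    Restore-ADObject
```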
-## Next steps:
+## Next steps
- [Azure AD Connect group writeback](how-to-connect-group-writeback-v2.md)
-- [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)
+- [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)
- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
The following scenarios are not supported for Staged Rollout:
- When you first add a security group for Staged Rollout, you're limited to 200 users to avoid a UX time-out. After you've added the group, you can add more users directly to it, as required.
-- While users are in Staged Rollout with Password Hash Synchronization (PHS), by default no password expiration is applied. Password expiration can be applied by enabling "EnforceCloudPasswordPolicyForPasswordSyncedUsers". When "EnforceCloudPasswordPolicyForPasswordSyncedUsers" is enabled, password expiration policy is set to 90 days from the time password was set on-prem with no option to customize it. To learn how to set 'EnforceCloudPasswordPolicyForPasswordSyncedUsers' see [Password expiration policy](./how-to-connect-password-hash-synchronization.md#enforcecloudpasswordpolicyforpasswordsyncedusers).
+- While users are in Staged Rollout with Password Hash Synchronization (PHS), no password expiration is applied by default. Password expiration can be applied by enabling "EnforceCloudPasswordPolicyForPasswordSyncedUsers". When that feature is enabled, the password expiration policy is set to 90 days from the time the password was set on-premises, with no option to customize it. Programmatically updating the PasswordPolicies attribute is not supported while users are in Staged Rollout. To learn how to set "EnforceCloudPasswordPolicyForPasswordSyncedUsers", see [Password expiration policy](./how-to-connect-password-hash-synchronization.md#enforcecloudpasswordpolicyforpasswordsyncedusers) and the sketch after this list.
- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for Windows 10 versions older than 1903. This scenario will fall back to the WS-Trust endpoint of the federation server, even if the user signing in is in scope of Staged Rollout.
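
As referenced in the PHS limitation above, a hedged sketch of enabling that password expiration behavior with the MSOnline module:

``` PowerShell
# Sign in with the MSOnline module, then apply the cloud password expiration policy
Connect-MsolService
Set-MsolDirSyncFeature -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers -Enable $true
```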
active-directory How To Upgrade Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-upgrade-previous-version.md
# Azure AD Connect: Upgrade from a previous version to the latest
-This topic describes the different methods that you can use to upgrade your Azure Active Directory (Azure AD) Connect installation to the latest release. You also use the steps in the [Swing migration](#swing-migration) section when you make a substantial configuration change.
+This topic describes the different methods that you can use to upgrade your Azure Active Directory (Azure AD) Connect installation to the latest release. Microsoft recommends using the steps in the [Swing migration](#swing-migration) section when you make a substantial configuration change or upgrade from older 1.x versions.
>[!NOTE]
-> It is important that you keep your servers current with the latest releases of Azure AD Connect. We are constantly making upgrades to AADConnect, and these upgrades include fixes to security issues and bugs, as well as serviceability, performance and scalability improvements.
+> It's important that you keep your servers current with the latest releases of Azure AD Connect. We are constantly making upgrades to AADConnect, and these upgrades include fixes to security issues and bugs, as well as serviceability, performance, and scalability improvements.
> To see what the latest version is, and to learn what changes have been made between versions, please refer to the [release version history](./reference-connect-version-history.md)
->[!NOTE]
-> It is currently supported to upgrade from any version of Azure AD Connect to the current version. In-place upgrades of DirSync or ADSync are not supported and a swing migration is required. If you want to upgrade from DirSync, see [Upgrade from Azure AD sync tool (DirSync)](how-to-dirsync-upgrade-get-started.md) or the [Swing migration](#swing-migration) section. </br>In practice, customers on extremely old versions may encounter problems not directly related to Azure AD Connect. Servers that have been in production for several years, typically have had several patches applied to them and not all of these can be accounted for. Generally, customers who have not upgraded in 12-18 months should consider a swing upgrade instead as this is the most conservative and least risky option.
+Any versions older than Azure AD Connect 2.x are currently deprecated. For more information, see [Introduction to Azure AD Connect V2.0](whatis-azure-ad-connect-v2.md). Upgrading from any version of Azure AD Connect to the current version is supported. In-place upgrades of DirSync or ADSync are not supported, and a swing migration is required. If you want to upgrade from DirSync, see [Upgrade from Azure AD sync tool (DirSync)](how-to-dirsync-upgrade-get-started.md) or the [Swing migration](#swing-migration) section.
-If you want to upgrade from DirSync, see [Upgrade from Azure AD sync tool (DirSync)](how-to-dirsync-upgrade-get-started.md) instead.
+In practice, customers on old versions may encounter problems not directly related to Azure AD Connect. Servers that have been in production for several years typically have had several patches applied to them, and not all of these can be accounted for. Customers who have not upgraded in 12-18 months should consider a swing upgrade instead, because it's the most conservative and least risky option.
There are a few different strategies that you can use to upgrade Azure AD Connect.
-| Method | Description |
-| | |
-| [Automatic upgrade](how-to-connect-install-automatic-upgrade.md) |This is the easiest method for customers with an express installation. |
-| [In-place upgrade](#in-place-upgrade) |If you have a single server, you can upgrade the installation in-place on the same server. |
-| [Swing migration](#swing-migration) |With two servers, you can prepare one of the servers with the new release or configuration, and change the active server when you're ready. |
+| Method | Description | Pros | Cons |
+| --- | --- | --- | --- |
+| [Automatic upgrade](how-to-connect-install-automatic-upgrade.md) |This is the easiest method for customers with an express installation |No manual intervention |Auto-upgrade version might not include the latest features |
+| [In-place upgrade](#in-place-upgrade) |If you have a single server, you can upgrade the installation in-place on the same server |Does not require another server |If an issue occurs during the in-place upgrade, you can't roll back, and sync will be interrupted |
+| [Swing migration](#swing-migration) |With two servers, you can prepare one of the servers with the new release or configuration and change the active server when you're ready |Safest approach and smoothest transition to a newer version. Supports Windows OS upgrades. Sync isn't interrupted and poses no risk to production |Requires another server|
For permissions information, see the [permissions required for an upgrade](reference-connect-accounts-permissions.md#upgrade).

> [!NOTE]
-> After you've enabled your new Azure AD Connect server to start synchronizing changes to Azure AD, you must not roll back to using DirSync or Azure AD Sync. Downgrading from Azure AD Connect to legacy clients, including DirSync and Azure AD Sync, isn't supported and can lead to issues such as data loss in Azure AD.
+> After you've enabled your new Azure AD Connect server to start synchronizing changes to Azure AD, you must not roll back to using DirSync or Azure AD Sync. Downgrading from Azure AD Connect to legacy clients, including DirSync and Azure AD Sync, is not supported and can lead to issues such as data loss in Azure AD.
## In-place upgrade
-An in-place upgrade works for moving from Azure AD Sync or Azure AD Connect. It doesn't work for moving from DirSync or for a solution with Forefront Identity Manager (FIM) + Azure AD Connector.
+An in-place upgrade works for moving from Azure AD Sync or Azure AD Connect. It does not work for moving from DirSync or for a solution with Forefront Identity Manager (FIM) + Azure AD Connector.
+
+This method is preferred when you have a single server and less than about 100,000 objects. If there are any changes to the out-of-box sync rules, a full import and full synchronization will occur after the upgrade. This method ensures that the new configuration is applied to all existing objects in the system. This run might take a few hours, depending on the number of objects that are in scope of the sync engine. The normal delta synchronization scheduler (which synchronizes every 30 minutes by default) is suspended, but password synchronization continues. You might consider doing the in-place upgrade during the weekend. If there are no changes to the out-of-box configuration with the new Azure AD Connect release, then a normal delta import/sync starts instead.
-This method is preferred when you have a single server and less than about 100,000 objects. If there are any changes to the out-of-box sync rules, a full import and full synchronization occur after the upgrade. This method ensures that the new configuration is applied to all existing objects in the system. This run might take a few hours, depending on the number of objects that are in scope of the sync engine. The normal delta synchronization scheduler (which synchronizes every 30 minutes by default) is suspended, but password synchronization continues. You might consider doing the in-place upgrade during a weekend. If there are no changes to the out-of-box configuration with the new Azure AD Connect release, then a normal delta import/sync starts instead.
![In-place upgrade](./media/how-to-upgrade-previous-version/inplaceupgrade.png)
-If you've made changes to the out-of-box synchronization rules, then these rules are set back to the default configuration on upgrade. To make sure that your configuration is kept between upgrades, make sure that you make changes as they're described in [Best practices for changing the default configuration](how-to-connect-sync-best-practices-changing-default-configuration.md).
+If you've made changes to the out-of-box synchronization rules, these rules are set back to the default configuration on upgrade. To make sure that your configuration is kept between upgrades, make your changes as described in [Best practices for changing the default configuration](how-to-connect-sync-best-practices-changing-default-configuration.md). If you already changed the default sync rules, see [Fix modified default rules in Azure AD Connect](/active-directory/hybrid/how-to-connect-sync-best-practices-changing-default-configuration) before you start the upgrade process.
During in-place upgrade, there may be changes introduced that require specific synchronization activities (including Full Import step and Full Synchronization step) to be executed after upgrade completes. To defer such activities, refer to section [How to defer full synchronization after upgrade](#how-to-defer-full-synchronization-after-upgrade).
-If you are using Azure AD Connect with non-standard connector (for example, Generic LDAP Connector and Generic SQL Connector), you must refresh the corresponding connector configuration in the [Synchronization Service Manager](./how-to-connect-sync-service-manager-ui-connectors.md) after in-place upgrade. For details on how to refresh the connector configuration, refer to article section [Connector Version Release History - Troubleshooting](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-version-history#troubleshooting). If you do not refresh the configuration, import and export run steps will not work correctly for the connector. You will receive the following error in the application event log with message *"Assembly version in AAD Connector configuration ("X.X.XXX.X") is earlier than the actual version ("X.X.XXX.X") of "C:\Program Files\Microsoft Azure AD Sync\Extensions\Microsoft.IAM.Connector.GenericLdap.dll".*
+If you are using Azure AD Connect with a non-standard connector (for example, the Generic LDAP Connector or the Generic SQL Connector), you must refresh the corresponding connector configuration in the [Synchronization Service Manager](./how-to-connect-sync-service-manager-ui-connectors.md) after the in-place upgrade. For details on how to refresh the connector configuration, refer to the article section [Connector Version Release History - Troubleshooting](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-version-history#troubleshooting). If you do not refresh the configuration, import and export run steps will not work correctly for the connector. You will receive the following error in the application event log:
+
+```
+Assembly version in AAD Connector configuration ("X.X.XXX.X") is earlier than the actual version ("X.X.XXX.X") of "C:\Program Files\Microsoft Azure AD Sync\Extensions\Microsoft.IAM.Connector.GenericLdap.dll".
+```
## Swing migration
-If you have a complex deployment or many objects, or if you need to upgrade the Windows Server operating system, it might be impractical to do an in-place upgrade on the live system. For some customers, this process might take multiple days--and during this time, no delta changes are processed. You can also use this method when you plan to make substantial changes to your configuration and you want to try them out before they're pushed to the cloud.
+For some customers, an in-place upgrade can pose a considerable risk to production if an issue occurs during the upgrade and the server can't be rolled back. A single production server might also be impractical, because the initial sync cycle might take multiple days, and during this time no delta changes are processed.
+
+The recommended method for these scenarios is to use a swing migration. You can also use this method when you need to upgrade the Windows Server operating system, or you plan to make substantial changes to your environment configuration, which need to be tested before they're pushed to production.
+
+You need at least two servers: one active server and one staging server. The active server (shown with solid blue lines in the following diagram) handles the active production load. The staging server (shown with dashed purple lines) is prepared with the new release or configuration. When it's fully ready, this server is made active. The previous active server, which now has the outdated version or configuration installed, becomes the staging server and is upgraded.
-The recommended method for these scenarios is to use a swing migration. You need (at least) two servers--one active server and one staging server. The active server (shown with solid blue lines in the following picture) is responsible for the active production load. The staging server (shown with dashed purple lines) is prepared with the new release or configuration. When it's fully ready, this server is made active. The previous active server, which now has the old version or configuration installed, is made into the staging server and is upgraded.
+The two servers can use different versions. For example, the active server that you plan to decommission can use Azure AD Sync, and the new staging server can use Azure AD Connect. If you use swing migration to develop a new configuration, it is a good idea to have the same versions on the two servers.
-The two servers can use different versions. For example, the active server that you plan to decommission can use Azure AD Sync, and the new staging server can use Azure AD Connect. If you use swing migration to develop a new configuration, it's a good idea to have the same versions on the two servers.
-![Staging server](./media/how-to-upgrade-previous-version/stagingserver1.png)
+![Diagram of the staging server.](./media/how-to-upgrade-previous-version/stagingserver1.png)
> [!NOTE]
-> Some customers prefer to have three or four servers for this scenario. When the staging server is upgraded, you don't have a backup server for [disaster recovery](how-to-connect-sync-staging-server.md#disaster-recovery). With three or four servers, you can prepare one set of primary/standby servers with the new version, which ensures that there is always a staging server that's ready to take over.
+> Some customers prefer to have three or four servers for this scenario. When the staging server is upgraded, you don't have a backup server for [disaster recovery](how-to-connect-sync-staging-server.md#disaster-recovery). With three or four servers, you can prepare one set of primary/standby servers with the updated version, which ensures that there's always a staging server that's ready to take over.
These steps also work to move from Azure AD Sync or a solution with FIM + Azure AD Connector. These steps don't work for DirSync, but the same swing migration method (also called parallel deployment) with steps for DirSync is in [Upgrade Azure Active Directory sync (DirSync)](how-to-dirsync-upgrade-get-started.md). ### Use a swing migration to upgrade
-1. If you use Azure AD Connect on both servers and plan to only make a configuration change, make sure that your active server and staging server are both using the same version. That makes it easier to compare differences later. If you're upgrading from Azure AD Sync, then these servers have different versions. If you're upgrading from an older version of Azure AD Connect, it's a good idea to start with the two servers that are using the same version, but it's not required.
-2. If you've made a custom configuration and your staging server doesn't have it, follow the steps under [Move a custom configuration from the active server to the staging server](#move-a-custom-configuration-from-the-active-server-to-the-staging-server).
-3. If you're upgrading from an earlier release of Azure AD Connect, upgrade the staging server to the latest version. If you're moving from Azure AD Sync, then install Azure AD Connect on your staging server.
-4. Let the sync engine run full import and full synchronization on your staging server.
-5. Verify that the new configuration didn't cause any unexpected changes by using the steps under "Verify" in [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server). If something isn't as expected, correct it, run the import and sync, and verify the data until it looks good, by following the steps.
-6. Switch the staging server to be the active server. This is the final step "Switch active server" in [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server).
-7. If you're upgrading Azure AD Connect, upgrade the server that's now in staging mode to the latest release. Follow the same steps as before to get the data and configuration upgraded. If you upgraded from Azure AD Sync, you can now turn off and decommission your old server.
+1. If you only have one Azure AD Connect server, or if you are upgrading from Azure AD Sync or from an old version, it is a good idea to install the new version on a new Windows Server. If you already have two Azure AD Connect servers, upgrade the staging server first, and then promote it to active. It is recommended to always keep the active/staging pair running the same version, but it is not required.
+2. If you have made a custom configuration and your staging server does not have it, follow the steps under [Move a custom configuration from the active server to the staging server](#move-a-custom-configuration-from-the-active-server-to-the-staging-server).
+3. Let the sync engine run full import and full synchronization on your staging server.
+4. Verify that the new configuration did not cause any unexpected changes by using the steps under "Verify" in [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server). If something is not as expected, correct it, run a sync cycle, and verify the data until it looks good.
+5. Before upgrading the other server, switch it to staging mode and promote the staging server to be the active server. This is the last step "Switch active server" in the process to [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server).
+6. Upgrade the server that is now in staging mode to the latest release. Follow the same steps as before to get the data and configuration upgraded. If you upgrade from Azure AD Sync, you can now turn off and decommission your old server.
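After you switch roles between the two servers, you can confirm from an elevated PowerShell prompt which server is active and which is in staging mode. The following is a minimal sketch, assuming the ADSync module that ships with Azure AD Connect is available on each server.

```powershell
# Run on each server to see whether it is the active server or the staging server.
Import-Module ADSync
Get-ADSyncScheduler | Select-Object StagingModeEnabled, SyncCycleEnabled, SyncCycleInProgress
```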
-### Move a custom configuration from the active server to the staging server
-If you've made configuration changes to the active server, you need to make sure that the same changes are applied to the staging server. To help with this move, you can use the [Azure AD Connect configuration documenter](https://github.com/Microsoft/AADConnectConfigDocumenter).
+> [!NOTE]
+> It's important to fully decommission old Azure AD Connect servers, because they can cause synchronization issues that are difficult to troubleshoot when an old sync server is left on the network or is powered up again later by mistake. Such "rogue" servers tend to overwrite Azure AD data with their old information: they may no longer be able to access on-premises Active Directory (for example, when the computer account has expired or the connector account password has changed), but they can still connect to Azure AD and cause attribute values to continually revert in every sync cycle (for example, every 30 minutes). To fully decommission an Azure AD Connect server, make sure you completely uninstall the product and its components, or permanently delete the server if it is a virtual machine.
-You can move the custom sync rules that you've created by using PowerShell. You must apply other changes the same way on both systems, and you can't migrate the changes. The [configuration documenter](https://github.com/Microsoft/AADConnectConfigDocumenter) can help you comparing the two systems to make sure they are identical. The tool can also help in automating the steps found in this section.
+### Move a custom configuration from the active server to the staging server
+If you have made configuration changes to the active server, you need to make sure that the same changes are applied to the new staging server. To help with this move, you can use the feature for [exporting and importing synchronization settings](/azure/active-directory/hybrid/how-to-connect-import-export-config). With this feature you can deploy a new staging server in a few steps, with the exact same settings as another Azure AD Connect server in your network.
-You need to configure the following things the same way on both servers:
+You can move individual custom sync rules that you have created by using PowerShell. Other changes can't be migrated and must be applied the same way on both systems, so you might have to manually configure the following on both servers:
* Connection to the same forests * Any domain and OU filtering * The same optional features, such as password sync and password writeback
-**Move custom synchronization rules**
-To move custom synchronization rules, do the following:
+**Copy custom synchronization rules**
+To copy custom synchronization rules to another server, do the following:
1. Open **Synchronization Rules Editor** on your active server.
-2. Select a custom rule. Click **Export**. This brings up a Notepad window. Save the temporary file with a PS1 extension. This makes it a PowerShell script. Copy the PS1 file to the staging server.
- ![Sync rule export](./media/how-to-upgrade-previous-version/exportrule.png)
-3. The Connector GUID is different on the staging server, and you must change it. To get the GUID, start **Synchronization Rules Editor**, select one of the out-of-box rules that represent the same connected system, and click **Export**. Replace the GUID in your PS1 file with the GUID from the staging server.
+2. Select a custom rule. Click **Export**. This brings up a Notepad window. Save the temporary file with a PS1 extension. This makes it a PowerShell script. Copy the PS1 file to the staging server.
+
+   ![Screenshot showing the Synchronization Rules Editor export window.](./media/how-to-upgrade-previous-version/exportrule.png)
+
+3. The Connector GUID (globally unique identifier) is different on the staging server, and you must change it. To get the GUID, start **Synchronization Rules Editor**, select one of the out-of-box rules that represent the same connected system, and click **Export**. Replace the GUID in your PS1 file with the GUID from the staging server.
4. In a PowerShell prompt, run the PS1 file. This creates the custom synchronization rule on the staging server. 5. Repeat this for all your custom rules. ## How to defer full synchronization after upgrade During in-place upgrade, there may be changes introduced that require specific synchronization activities (including Full Import step and Full Synchronization step) to be executed. For example, connector schema changes require **full import** step and out-of-box synchronization rule changes require **full synchronization** step to be executed on affected connectors. During upgrade, Azure AD Connect determines what synchronization activities are required and records them as *overrides*. In the following synchronization cycle, the synchronization scheduler picks up these overrides and executes them. Once an override is successfully executed, it is removed.
-There may be situations where you do not want these overrides to take place immediately after upgrade. For example, you have numerous synchronized objects and you would like these synchronization steps to occur after business hours. To remove these overrides:
+There may be situations where you do not want these overrides to take place immediately after upgrade. For example, you have numerous synchronized objects, and you would like these synchronization steps to occur after business hours. To remove these overrides:
1. During upgrade, **uncheck** the option **Start the synchronization process when configuration completes**. This disables the synchronization scheduler and prevents synchronization cycle from taking place automatically before the overrides are removed.
There may be situations where you do not want these overrides to take place imme
2. After upgrade completes, run the following cmdlet to find out what overrides have been added: `Get-ADSyncSchedulerConnectorOverride | fl` >[!NOTE]
- > The overrides are connector-specific. In the following example, Full Import step and Full Synchronization step have been added to both the on-premises AD Connector and Azure AD Connector.
+ > The overrides are connector specific. In the following example, Full Import step and Full Synchronization step have been added to both the on-premises AD Connector and Azure AD Connector.
![DisableFullSyncAfterUpgrade](./media/how-to-upgrade-previous-version/disablefullsync02.png)
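When you're ready to let the deferred steps run (for example, after business hours), you can re-enable the scheduler and optionally start a cycle immediately. The following is a minimal sketch, assuming the ADSync module is available on the server; the scheduler should pick up and clear the recorded overrides in that cycle.

```powershell
# Re-enable the built-in scheduler that was left disabled during the upgrade.
Set-ADSyncScheduler -SyncCycleEnabled $true

# Optionally, start a synchronization cycle right away instead of waiting for the
# next scheduled run; the recorded overrides should be applied in this cycle.
Start-ADSyncSyncCycle -PolicyType Delta

# After the cycle completes, confirm that the overrides were executed and removed.
Get-ADSyncSchedulerConnectorOverride | Format-List
```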
To add the overrides for both full import and full synchronization on an arbitra
## Upgrading the server Operating System
-If you need to upgrade the operating system of your Azure AD Connect server, do not use an in place upgrade of the OS. Instead, prepare a new server with the desired operating system and perform [a swing migration](#swing-migration).
+If you need to upgrade the operating system of your Azure AD Connect server, don't do an in-place upgrade of the OS. Instead, prepare a new server with the desired operating system and perform a [swing migration](#swing-migration).
## Troubleshooting The following section contains troubleshooting and information that you can use if you encounter an issue upgrading Azure AD Connect. ### Azure Active Directory connector missing error during Azure AD Connect upgrade
-When you upgrade Azure AD Connect from a previous version, you might hit following error at the beginning of the upgrade
+When you upgrade Azure AD Connect from a previous version, you might hit the following error at the beginning of the upgrade:
![Error](./media/how-to-upgrade-previous-version/error1.png)
At line:1 char:1
The PowerShell Cmdlet reports the error **the specified MA could not be found**.
-The reason that this occurs is because the current Azure AD Connect configuration is not supported for upgrade.
+This error occurs because the current Azure AD Connect configuration is not supported for upgrade.
If you want to install a newer version of Azure AD Connect: close the Azure AD Connect wizard, uninstall the existing Azure AD Connect, and perform a clean install of the newer Azure AD Connect. -- ## Next steps Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Plan Connect Userprincipalname https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-userprincipalname.md
When the updates to a user object are synchronized to the Azure AD Tenant, Azure
>Azure AD recalculates the UserPrincipalName attribute value only in case an update to the on-premises UserPrincipalName attribute/Alternate login ID value is synchronized to the Azure AD Tenant. > >Whenever Azure AD recalculates the UserPrincipalName attribute, it also recalculates the MOERA.
+>
+>In case of a verified domain change, Azure AD also recalculates the UserPrincipalName attribute. For more information, see [Troubleshoot: Audit data on verified domain change](https://docs.microsoft.com/azure/active-directory/reports-monitoring/troubleshoot-audit-data-verified-domain).
## UPN scenarios The following are example scenarios of how the UPN is calculated based on the given scenario.
active-directory Admin Units Members Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-add.md
# Add users, groups, or devices to an administrative unit
-> [!IMPORTANT]
-> Administrative units support for devices is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- In Azure Active Directory (Azure AD), you can add users, groups, or devices to an administrative unit to restrict the scope of role permissions. Adding a group to an administrative unit brings the group itself into the management scope of the administrative unit, but **not** the members of the group. For additional details on what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md). This article describes how to add users, groups, or devices to administrative units manually. For information about how to add users or devices to administrative units dynamically using rules, see [Manage users or devices for an administrative unit with dynamic membership rules](admin-units-members-dynamic.md).
This article describes how to add users, groups, or devices to administrative un
- Azure AD Premium P1 or P2 license for each administrative unit administrator - Azure AD Free licenses for administrative unit members - Privileged Role Administrator or Global Administrator-- AzureAD module when using PowerShell-- AzureADPreview module when using PowerShell for devices
+- Microsoft Graph PowerShell
- Admin consent when using Graph explorer for Microsoft Graph API For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
You can add users, groups, or devices to administrative units using the Azure po
## PowerShell
-Use the [Add-AzureADMSAdministrativeUnitMember](/powershell/module/azuread/add-azureadmsadministrativeunitmember) command to add users or groups to an administrative unit.
-
-Use the [Add-AzureADMSAdministrativeUnitMember (Preview)](/powershell/module/azuread/add-azureadmsadministrativeunitmember?view=azureadps-2.0-preview&preserve-view=true) command to add devices to an administrative unit.
-
-Use the [New-AzureADMSAdministrativeUnitMember (Preview)](/powershell/module/azuread/new-azureadmsadministrativeunitmember) to create a new group in an administrative unit. Currently, only group creation is supported with this command.
+Use the [Invoke-MgGraphRequest](/powershell/microsoftgraph/authentication-commands#using-invoke-mggraphrequest) command to add users, groups, or devices to an administrative unit or create a new group in an administrative unit.
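Before running these requests, sign in with the Microsoft Graph PowerShell SDK. The scope below is shown for illustration and is an assumption about what your scenario needs; grant whichever delegated permissions your tenant actually requires for managing administrative unit membership.

```powershell
# Sign in interactively. AdministrativeUnit.ReadWrite.All is one delegated scope that
# allows managing administrative unit membership; adjust the scopes to your tenant's needs.
Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"
```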
### Add users to an administrative unit ```powershell
-$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
-$userObj = Get-AzureADUser -Filter "UserPrincipalName eq 'bill@example.com'"
-Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -RefObjectId $userObj.ObjectId
+Invoke-MgGraphRequest -Method POST -Uri 'https://graph.microsoft.com/v1.0/directory/administrativeUnits/{ADMIN_UNIT_ID}/members/$ref' -Body '{
+ "@odata.id": "https://graph.microsoft.com/v1.0/users/{USER_ID}"
+ }'
``` ### Add groups to an administrative unit ```powershell
-$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
-$groupObj = Get-AzureADGroup -Filter "displayname eq 'TestGroup'"
-Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -RefObjectId $groupObj.ObjectId
+Invoke-MgGraphRequest -Method POST -Uri 'https://graph.microsoft.com/v1.0/directory/administrativeUnits/{ADMIN_UNIT_ID}/members/$ref' -Body '{
+   "@odata.id": "https://graph.microsoft.com/v1.0/groups/{GROUP_ID}"
+ }'
``` ### Add devices to an administrative unit ```powershell
-$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
-$deviceObj = Get-AzureADDevice -Filter "displayname eq 'TestDevice'"
-Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -RefObjectId $deviceObj.ObjectId
+Invoke-MgGraphRequest -Method POST -Uri 'https://graph.microsoft.com/v1.0/directory/administrativeUnits/{ADMIN_UNIT_ID}/members/$ref' -Body '{
+   "@odata.id": "https://graph.microsoft.com/v1.0/devices/{DEVICE_ID}"
+ }'
``` ### Create a new group in an administrative unit ```powershell
-$exampleGroup = New-AzureADMSAdministrativeUnitMember -Id "<admin unit object id>" -OdataType "Microsoft.Graph.Group" -DisplayName "<Example group name>" -Description "<Example group description>" -MailEnabled $True -MailNickname "<examplegroup>" -SecurityEnabled $False -GroupTypes @("Unified")
+$exampleGroup = Invoke-MgGraphRequest -Method POST -Uri https://graph.microsoft.com/v1.0/directory/administrativeUnits/{ADMIN_UNIT_ID}/members/ -Body '{
+ "@odata.type": "#Microsoft.Graph.Group",
+ "description": "{Example group description}",
+ "displayName": "{Example group name}",
+ "groupTypes": [
+ "Unified"
+ ],
+ "mailEnabled": true,
+ "mailNickname": "{exampleGroup}",
+ "securityEnabled": false
+ }'
``` ## Microsoft Graph API
-Use the [Add a member](/graph/api/administrativeunit-post-members) API to add users or groups to an administrative unit.
-
-Use the [Add a member (Beta)](/graph/api/administrativeunit-post-members?view=graph-rest-beta&preserve-view=true) API to add devices to an administrative unit or create a new group in an administrative unit.
+Use the [Add a member](/graph/api/administrativeunit-post-members) API to add users, groups, or devices to an administrative unit or create a new group in an administrative unit.
### Add users to an administrative unit
Example
Request ```http
-POST https://graph.microsoft.com/beta/administrativeUnits/{admin-unit-id}/members/$ref
+POST https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/$ref
``` Body ```http {
- "@odata.id":"https://graph.microsoft.com/beta/devices/{device-id}"
+ "@odata.id":"https://graph.microsoft.com/v1.0/devices/{device-id}"
} ```
Body
Request ```http
-POST https://graph.microsoft.com/beta/administrativeUnits/{admin-unit-id}/members/
+POST https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/
``` Body
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/prerequisites.md
If you want to manage Azure Active Directory (Azure AD) roles using PowerShell or Graph Explorer, you must have the required prerequisites. This article describes the PowerShell and Graph Explorer prerequisites for different Azure AD role features.
+## Microsoft Graph PowerShell
+
+To use PowerShell commands to do the following:
+
+- Add users, groups, or devices to an administrative unit
+- Create a new group in an administrative unit
+
+You must have the Microsoft Graph PowerShell SDK installed:
+
+- [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation)
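If the SDK isn't installed yet, one common way to get it is from the PowerShell Gallery, as sketched below; see the installation article linked above for the full options and prerequisites.

```powershell
# Install the Microsoft Graph PowerShell SDK for the current user from the PowerShell Gallery.
Install-Module Microsoft.Graph -Scope CurrentUser
```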
+ ## AzureAD module To use PowerShell commands to do the following:
active-directory Ivm Smarthub Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ivm-smarthub-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with IVM Smarthub'
+description: Learn how to configure single sign-on between Azure Active Directory and IVM Smarthub.
++++++++ Last updated : 09/15/2022++++
+# Tutorial: Azure AD SSO integration with IVM Smarthub
+
+In this tutorial, you'll learn how to integrate IVM Smarthub with Azure Active Directory (Azure AD). When you integrate IVM Smarthub with Azure AD, you can:
+
+* Control in Azure AD who has access to IVM Smarthub.
+* Enable your users to be automatically signed-in to IVM Smarthub with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* IVM Smarthub single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* IVM Smarthub supports **SP** initiated SSO.
+
+## Add IVM Smarthub from the gallery
+
+To configure the integration of IVM Smarthub into Azure AD, you need to add IVM Smarthub from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **IVM Smarthub** in the search box.
+1. Select **IVM Smarthub** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for IVM Smarthub
+
+Configure and test Azure AD SSO with IVM Smarthub using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at IVM Smarthub.
+
+To configure and test Azure AD SSO with IVM Smarthub, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure IVM Smarthub SSO](#configure-ivm-smarthub-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create IVM Smarthub test user](#create-ivm-smarthub-test-user)** - to have a counterpart of B.Simon in IVM Smarthub that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **IVM Smarthub** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<Environment>.ivminc.com/saml`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<Environment>.ivminc.com/signin-saml-<CustomerName>`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<Environment>.ivmsmarthub.com`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [IVM Smarthub support team](mailto:icssupport@ivminc.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up IVM Smarthub** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to IVM Smarthub.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **IVM Smarthub**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure IVM Smarthub SSO
+
+To configure single sign-on on the **IVM Smarthub** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [IVM Smarthub support team](mailto:icssupport@ivminc.com). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create IVM Smarthub test user
+
+In this section, you create a user called Britta Simon at IVM Smarthub. Work with [IVM Smarthub support team](mailto:icssupport@ivminc.com) to add the users in the IVM Smarthub platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to IVM Smarthub Sign-on URL where you can initiate the login flow.
+
+* Go to IVM Smarthub Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the IVM Smarthub tile in the My Apps, this will redirect to IVM Smarthub Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure IVM Smarthub you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
Learn more about the connectivity of our gateway, our new infrastructure require
## Prerequisites
-Before you can migrate to self-hosted gateway v2, you need to ensure your infrastructure [meets the requirements](self-hosted-gateway-overview.md#gateway-v2-requirements).
+Before you can migrate to self-hosted gateway v2, you need to ensure your infrastructure [meets the requirements](self-hosted-gateway-overview.md#fqdn-dependencies).
## Migrating to self-hosted gateway v2
Currently, Azure API Management provides the following Configuration APIs for se
| Configuration Service | URL | Supported | Requirements | | | | | |
-| v2 | `{name}.configuration.azure-api.net` | Yes | [Link](self-hosted-gateway-overview.md#gateway-v2-requirements) |
-| v1 | `{name}.management.azure-api.net/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.ApiManagement/service/{name}?api-version=2021-01-01-preview` | No | [Link](self-hosted-gateway-overview.md#gateway-v1-requirements) |
+| v2 | `{name}.configuration.azure-api.net` | Yes | [Link](self-hosted-gateway-overview.md#fqdn-dependencies) |
+| v1 | `{name}.management.azure-api.net/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.ApiManagement/service/{name}?api-version=2021-01-01-preview` | No | [Link](self-hosted-gateway-overview.md#fqdn-dependencies) |
Customer must use the new Configuration API v2 by changing their deployment scripts to use the new URL and meet infrastructure requirements.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Self-hosted gateways require outbound TCP/IP connectivity to Azure on port 443.
To operate properly, each self-hosted gateway needs outbound connectivity on port 443 to the following endpoints associated with its cloud-based API Management instance: -- [Gateway v2 requirements](#gateway-v2-requirements)-- [Gateway v1 requirements](#gateway-v1-requirements)
+| Description | Required for v1 | Required for v2 | Notes |
+|:|:|:|:|
+| Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | |
+| Public IP address of the API Management instance | ✔️ | ✔️ | IP addresses of primary location is sufficient. |
+| Public IP addresses of Azure Storage [service tag](../virtual-network/service-tags-overview.md) | ✔️ | Optional<sup>1</sup> | IP addresses must correspond to primary location of API Management instance. |
+| Hostname of Azure Blob Storage account | ✔️ | Optional<sup>1</sup> | Account associated with instance (`<blob-storage-account-name>.blob.core.windows.net`) |
+| Hostname of Azure Table Storage account | ✔️ | Optional<sup>1</sup> | Account associated with instance (`<table-storage-account-name>.table.core.windows.net`) |
+| Endpoints for [Azure Application Insights integration](api-management-howto-app-insights.md) | Optional<sup>2</sup> | Optional<sup>2</sup> | Minimal required endpoints are:<ul><li>`rt.services.visualstudio.com:443`</li><li>`dc.services.visualstudio.com:443`</li><li>`{region}.livediagnostics.monitor.azure.com:443`</li></ul>Learn more in [Azure Monitor docs](../azure-monitor/app/ip-addresses.md#outgoing-ports) |
+| Endpoints for [Event Hubs integration](api-management-howto-log-event-hubs.md) | Optional<sup>2</sup> | Optional<sup>2</sup> | Learn more in [Azure Event Hubs docs](../event-hubs/network-security.md) |
+| Endpoints for [external cache integration](api-management-howto-cache-external.md) | Optional<sup>2</sup> | Optional<sup>2</sup> | This requirement depends on the external cache that is being used |
+
+<sup>1</sup> Only required in v2 when API inspector or quotas are used in policies.<br/>
+<sup>2</sup> Only required when feature is used and requires public IP address, port and hostname information.<br/>
> [!IMPORTANT] > * DNS hostnames must be resolvable to IP addresses and the corresponding IP addresses must be reachable. > * The associated storage account names are listed in the service's **Network connectivity status** page in the Azure portal. > * Public IP addresses underlying the associated storage accounts are dynamic and can change without notice.
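As a quick sanity check from a machine on the same network as the gateway, you can probe the required endpoints on port 443. The following is a minimal sketch that assumes a Windows host with the `Test-NetConnection` cmdlet; replace the placeholder with your API Management instance name.

```powershell
# Probe outbound connectivity on port 443 to the v2 configuration endpoint.
# Replace <apim-service-name> with the name of your API Management instance.
Test-NetConnection -ComputerName "<apim-service-name>.configuration.azure-api.net" -Port 443
```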
-If integrated with your API Management instance, also enable outbound connectivity to the associated public IP addresses, ports, and hostnames for:
-
-* [Event Hubs](api-management-howto-log-event-hubs.md)
-* [Application Insights](api-management-howto-app-insights.md)
-* [External cache](api-management-howto-cache-external.md)
-
-#### Gateway v2 requirements
-
-The self-hosted gateway v2 requires the following:
-
-* The public IP address of the API Management instance in its primary location
-* The hostname of the instance's configuration endpoint: `<apim-service-name>.configuration.azure-api.net`
-
-Additionally, customers that use API inspector or quotas in their policies have to ensure that the following dependencies are accessible:
-
-* The hostname of the instance's associated blob storage account: `<blob-storage-account-name>.blob.core.windows.net`
-* The hostname of the instance's associated table storage account: `<table-storage-account-name>.table.core.windows.net`
-* Public IP addresses from the Storage [service tag](../virtual-network/service-tags-overview.md) corresponding to the primary location of the API Management instance
-
-#### Gateway v1 requirements
-
-The self-hosted gateway v1 requires the following:
-
-* The public IP address of the API Management instance in its primary location
-* The hostname of the instance's management endpoint: `<apim-service-name>.management.azure-api.net`
-* The hostname of the instance's associated blob storage account: `<blob-storage-account-name>.blob.core.windows.net`
-* The hostname of the instance's associated table storage account: `<table-storage-account-name>.table.core.windows.net`
-* Public IP addresses from the Storage [service tag](../virtual-network/service-tags-overview.md) corresponding to the primary location of the API Management instance
- ### Connectivity failures When connectivity to Azure is lost, the self-hosted gateway is unable to receive configuration updates, report its status, or upload telemetry.
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
To complete this quickstart, you need an Azure account with an active subscripti
:::image type="content" source="./media/quickstart-wordpress/04-wordpress-basics-project-details.png?text=Azure portal WordPress Project Details" alt-text="Screenshot of WordPress project details.":::
-1. Under **Hosting details**, type a globally unique name for your web app and choose **Linux** for **Operating System**. Select **Basic** for **Hosting plan**. Select **Compare plans** to view features and price comparisons. See the table below for app and database SKUs for given hosting plans. You can view [hosting plans details in the announcement](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/announcing-the-general-availability-of-wordpress-on-azure-app/ba-p/3593481). For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
+1. Under **Hosting details**, type a globally unique name for your web app and choose **Linux** for **Operating System**. Select **Basic** for **Hosting plan**. Select **Compare plans** to view features and price comparisons.
:::image type="content" source="./media/quickstart-wordpress/05-wordpress-basics-instance-details.png?text=WordPress basics instance details" alt-text="Screenshot of WordPress instance details.":::
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
description: "This article provides a conceptual overview of GitOps in Azure for
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 5/26/2022 Last updated : 9/22/2022
For more information on private link scopes in Azure Arc, refer to [this documen
## Data residency The Azure GitOps service (Azure Kubernetes Configuration Management) stores/processes customer data. By default, customer data is replicated to the paired region. For the regions Singapore, East Asia, and Brazil South, all customer data is stored and processed in the region.
+## Apply Flux configurations at scale
+
+Because Azure Resource Manager manages your configurations, you can automate creating the same configuration across all Azure Kubernetes Service and Azure Arc-enabled Kubernetes resources using Azure Policy, within the scope of a subscription or a resource group. This at-scale enforcement ensures that specific configurations will be applied consistently across entire groups of clusters.
+
+[Learn how to use the built-in policies for Flux v2](./use-azure-policy-flux-2.md).
+ ## Next steps Advance to the next tutorial to learn how to enable GitOps on your AKS or Azure Arc-enabled Kubernetes clusters
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
You can delete the Azure Arc-enabled Kubernetes resource, any associated configu
az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest ```
-If the deletion process hangs, use the following command to force deletion (adding `-y` if you want to bypass the confirmation prompt):
+If the deletion process fails, use the following command to force deletion (adding `-y` if you want to bypass the confirmation prompt):
```azurecli az connectedk8s delete -g AzureArcTest1 -n AzureArcTest --force
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
General information about migration from Flux v1 to Flux v2 is available in the
## Next steps
-Advance to the next tutorial to learn how to implement CI/CD with GitOps.
+Advance to the next tutorial to learn how to apply configuration at scale with Azure Policy.
> [!div class="nextstepaction"]
-> [Implement CI/CD with GitOps](./tutorial-gitops-flux2-ci-cd.md)
+> [Use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
You can use Azure Arc-enabled VMware vSphere (preview) in these supported region
- Canada Central
+## Data Residency
+
+Azure Arc-enabled VMware vSphere doesn't store or process customer data outside the region in which the customer deploys the service instance.
+ ## Next steps - [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md)
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
module.exports = df.entity(function(context) {
# [Python](#tab/python) ```python
-import logging
-import json
- import azure.functions as func import azure.durable_functions as df
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Application Insights pricing is consumption-based; you pay for only what you use
## How do I instrument an application?
-[Auto-Instrumentation](codeless-overview.md) is the preferred instrumentation method. It requires no developer investment and eliminates future overhead related to [updating the SDK](sdk-support-guidance.md). It's the only way to instrument an application in which you don't have access to the source code.
+[Auto-Instrumentation](codeless-overview.md) is the preferred instrumentation method. It requires no developer investment and eliminates future overhead related to [updating the SDK](sdk-support-guidance.md). It's also the only way to instrument an application in which you don't have access to the source code.
You only need to install the Application Insights SDK in the following circumstances:
You only need to install the Application Insights SDK in the following circumsta
To use the SDK, you install a small instrumentation package in your app and then instrument the web app, any background components, and JavaScript within the web pages. The app and its components don't have to be hosted in Azure. The instrumentation monitors your app and directs the telemetry data to an Application Insights resource by using a unique token. The effect on your app's performance is small; tracking calls are non-blocking and batched to be sent in a separate thread.
-Refer to the decision tree below to see what is available to instrument your app.
- ### [.NET](#tab/net)
+Integrated Auto-instrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md#monitor-executions-in-azure-functions), and [Azure Virtual Machines](azure-vm-vmss-apps.md).
+
+[Azure Monitor Application Insights Agent](status-monitor-v2-overview.md) is available for workloads running in on-premises virtual machines.
-- [Auto-Instrumentation](codeless-overview.md)-- [Azure Application Insights libraries for .NET](https://docs.microsoft.com/dotnet/api/overview/azure/insights)-- [Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets](azure-vm-vmss-apps.md)-- [Deploy Azure Monitor Application Insights Agent for on-premises servers](status-monitor-v2-overview.md)
+A detailed view of all auto-instrumentation supported environments, languages, and resource providers is available [here](codeless-overview.md#supported-environments-languages-and-resource-providers).
+For other scenarios, the [Application Insights SDK](/dotnet/api/overview/azure/insights) is required.
+A preview [OpenTelemetry](opentelemetry-enable.md?tabs=net) offering is also available.
### [Java](#tab/java)
+Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md).
-Links:
-- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md)
+Integrated Auto-Instrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md#distributed-tracing-for-java-applications-public-preview).
### [Node.js](#tab/nodejs)
+Auto-instrumentation is available for [Azure App Service](azure-web-apps-nodejs.md).
-Links:
-- [Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications](opentelemetry-enable.md)-- [Monitor your Node.js services and apps with Application Insights](nodejs.md)
+The [Application Insights SDK](nodejs.md) is an alternative, and a preview [OpenTelemetry](opentelemetry-enable.md?tabs=nodejs) offering is also available.
### [JavaScript](#tab/javascript) -
-Links:
-- [Application Insights for webpages](javascript.md)
+JavaScript requires the [Application Insights SDK](javascript.md).
### [Python](#tab/python)
+Python applications can be monitored by using the [OpenCensus Python SDK via the Azure Monitor exporters](opencensus-python.md).
+
+An extension is available for monitoring [Azure Functions](opencensus-python.md#integrate-with-azure-functions).
-Links:
-- [Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications](opentelemetry-enable.md)-- [Set up Azure Monitor for your Python application](opencensus-python.md)
+A preview [OpenTelemetry](opentelemetry-enable.md?tabs=python) offering is also available.
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft g
vSAN datastores use data-at-rest encryption by default using keys stored in Azure Key Vault. The encryption solution is KMS-based and supports vCenter Server operations for key management. When a host is removed from a cluster, all data on SSDs is invalidated immediately.
+## Datastore capacity expansion options
+
+The vSAN datastore capacity can be expanded by connecting Azure storage resources such as [Azure NetApp Files volumes as datastores](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts). Virtual machines can be migrated between vSAN and Azure NetApp Files datastores by using storage vMotion. Azure NetApp Files datastores can be replicated to other regions by using storage-based [cross-region replication](/azure/azure-netapp-files/cross-region-replication-introduction) for testing, development, and failover purposes.
+Azure NetApp Files is available in the [Ultra, Premium, and Standard performance tiers](/azure/azure-netapp-files/azure-netapp-files-service-levels), which lets you adjust performance and cost to the requirements of your workloads.
+ ## Azure storage integration You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
-You can expand the datastore capacity by connecting Azure disk pools or [Azure NetApp Files datastores](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md). Azure NetApp Files is available in Ultra, [Premium and Standard performance tiers](/azure/azure-netapp-files/azure-netapp-files-service-levels) to allow adjusting the performance and cost to the requirements of the workloads.
## Alerts and monitoring
azure-vmware Configure External Identity Source Nsx T https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-external-identity-source-nsx-t.md
In this article, you'll learn how to configure an external identity source for N
- If you require Active Directory authentication with LDAPS: - You'll need access to the Active Directory Domain Controller(s) with Administrator permissions.
- - Your Active Directory Domain Controller(s) must have LDAPS enabled with a valid certificate. The certificate could be issued by an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [third-party CA](https://docs.microsoft.com/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority).
+ - Your Active Directory Domain Controller(s) must have LDAPS enabled with a valid certificate. The certificate could be issued by an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [third-party CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority).
>[!Note] > Self-sign certificates are not recommended for production environments. ΓÇ»
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
To request support, send an email request to **avsStretchedCluster@microsoft.com
- Number of nodes in first stretched cluster (minimum 6, maximum 16 - in multiples of two) - Estimated provisioning date (used for billing purposes)
-When the request support details are received, quota will be reserved for a stretched cluster environment in the region requested. The subscription gets enabled to deploy a stretched cluster SDDC through the Azure portal. A confirmation email will be sent to the designated point of contact within two business days upon which you should be able to [self-deploy a stretched cluster private cloud via the Azure portal](https://docs.microsoft.com/azure/azure-vmware/tutorial-create-private-cloud?tabs=azure-portal#create-a-private-cloud). Be sure to select **Hosts in two availability zones** to ensure that a stretched cluster gets deployed in the region of your choice.
+When the support request details are received, quota will be reserved for a stretched cluster environment in the region requested. The subscription gets enabled to deploy a stretched cluster SDDC through the Azure portal. A confirmation email will be sent to the designated point of contact within two business days, after which you should be able to [self-deploy a stretched cluster private cloud via the Azure portal](/azure/azure-vmware/tutorial-create-private-cloud?tabs=azure-portal#create-a-private-cloud). Be sure to select **Hosts in two availability zones** to ensure that a stretched cluster gets deployed in the region of your choice.
:::image type="content" source="media/stretch-clusters/stretched-clusters-hosts-two-availability-zones.png" alt-text="Screenshot shows where to select hosts in two availability zones.":::
-Once the private cloud is created, you can peer both availability zones (AZs) to your on-premises ExpressRoute circuit with Global Reach that helps connect your on-premises data center to the private cloud. Peering both the AZs will ensure that an AZ failure doesn't result in a loss of connectivity to your private cloud. Since an ExpressRoute Auth Key is valid for only one connection, repeat the [Create an ExpressRoute auth key in the on-premises ExpressRoute circuit](https://docs.microsoft.com/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud#create-an-expressroute-auth-key-in-the-on-premises-expressroute-circuit) process to generate another authorization.
+Once the private cloud is created, you can peer both availability zones (AZs) to your on-premises ExpressRoute circuit with Global Reach that helps connect your on-premises data center to the private cloud. Peering both the AZs will ensure that an AZ failure doesn't result in a loss of connectivity to your private cloud. Since an ExpressRoute Auth Key is valid for only one connection, repeat the [Create an ExpressRoute auth key in the on-premises ExpressRoute circuit](/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud#create-an-expressroute-auth-key-in-the-on-premises-expressroute-circuit) process to generate another authorization.
:::image type="content" source="media/stretch-clusters/express-route-availability-zones.png" alt-text="Screenshot shows how to generate Express Route authorizations for both availability zones."lightbox="media/stretch-clusters/express-route-availability-zones.png":::
-Next, repeat the process to [peer ExpressRoute Global Reach](https://docs.microsoft.com/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud#peer-private-cloud-to-on-premises) two availability zones to the on-premises ExpressRoute circuit.
+Next, repeat the process to [peer ExpressRoute Global Reach](/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud#peer-private-cloud-to-on-premises) two availability zones to the on-premises ExpressRoute circuit.
:::image type="content" source="media/stretch-clusters/express-route-global-reach-peer-availability-zones.png" alt-text="Screenshot shows page to peer both availability zones to on-premises Express Route Global Reach."lightbox="media/stretch-clusters/express-route-global-reach-peer-availability-zones.png":::
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-vm-sizes.md
For each VM series, the following table also lists whether the VM series and VM
| NCv2 | All sizes | | NCv3 | All sizes | | NCasT4_v3 | All sizes |
+| NC_A100_v4 | All sizes |
| ND | All sizes | | NDv4 | All sizes | | NDv2 | None - not yet available |
batch Tutorial Run Python Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-run-python-batch-azure-data-factory.md
In this section, you'll use Batch Explorer to create the Batch pool that your Az
1. Create a pool by selecting **Pools** on the left side bar, then the **Add** button above the search form. 1. Choose an ID and display name. We'll use `custom-activity-pool` for this example. 1. Set the scale type to **Fixed size**, and set the dedicated node count to 2.
- 1. Under **Data science**, select **Dsvm Windows** as the operating system.
+    1. Under **Image Type**, select **Marketplace**, and then select **microsoft-dsvm** as the **Publisher**.
1. Choose `Standard_f2s_v2` as the virtual machine size. 1. Enable the start task and add the command `cmd /c "pip install azure-storage-blob pandas"`. The user identity can remain as the default **Pool user**. 1. Select **OK**.
center-sap-solutions View Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/view-cost-analysis.md
+
+ Title: View post-deployment cost analysis in Azure Center for SAP solutions (preview)
+description: Learn how to view the cost of running an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions (ACSS).
++ Last updated : 09/23/2022++
+#Customer intent: As an SAP Basis Admin, I want to understand the cost incurred for running SAP systems on Azure.
++
+# View post-deployment cost analysis for SAP system (preview)
++
+In this how-to guide, you'll learn how to view the running cost of your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions (ACSS)*.
+
+After you deploy or register an SAP system as a VIS resource, you can [view the cost of running that SAP system on the VIS resource's page](#view-cost-analysis). This feature shows the post-deployment running costs in the context of your SAP system. When you have Azure resources for multiple SAP systems in a single resource group, you no longer need to work out the cost of each system manually. Instead, you can easily view the system-level cost from the VIS resource.
+
+## How does cost analysis work?
+
+When you deploy infrastructure for a new SAP system with ACSS or register an existing system with ACSS, the **costanalysis-parent** tag is added to all virtual machines (VMs), disks, and load balancers related to that SAP system. The cost is determined by the total cost of all the Azure resources in the system with the **costanalysis-parent** tag.
+Whenever there are changes to the SAP system, such as the addition or removal of Application Server Instance VMs, tags are updated on the relevant Azure resources.
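To see which Azure resources currently carry this tag, you can list them with Azure PowerShell. The following is a minimal sketch that assumes the Az.Resources module is installed and you're signed in; filter further by resource group or tag value as needed.

```powershell
# List resources that carry the costanalysis-parent tag applied by Azure Center for SAP solutions.
Get-AzResource -TagName "costanalysis-parent" |
    Select-Object Name, ResourceType, ResourceGroupName
```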
+
+> [!NOTE]
+> If you register an existing SAP system as a VIS, the cost analysis only shows data after the time of registration. Even if some infrastructure resources might have been deployed before the registration, the cost analysis tags aren't applied to historical data.
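+
+To check which Azure resources currently carry this tag, you can list them with the Azure CLI. The following command is only a sketch: it assumes the Azure CLI is installed and you're signed in, and the output columns are arbitrary.
+
+```azurecli-interactive
+az resource list --tag costanalysis-parent --query "[].{name:name, type:type, resourceGroup:resourceGroup}" --output table
+```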
+
+The following Azure resources aren't included in the SAP system-level cost analysis. This list includes some resources that might be shared across multiple SAP systems.
+
+- Virtual networks
+- Storage accounts
+- Azure NetApp files (ANF)
+- Azure key vaults
+- Azure Monitor for SAP solutions resources
+- Azure Backup resources
+
+Cost and usage data is typically available within 8-24 hours, so your VIS resource can take 8-24 hours to start showing cost analysis data.
+
+## View cost analysis
+
+To view the post-deployment costs of running an SAP system registered as a VIS resource:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Azure Center for SAP solutions** in the Azure portal's search bar.
+1. Select **Virtual Instance for SAP solutions** in the sidebar menu.
+1. Select a VIS resource that is either successfully deployed or registered.
+1. Select **Cost Analysis** in the sidebar menu.
+1. To change the cost analysis from table view to a chart view, select the **Column (grouped)** option.
+
+## Next steps
+
+- [Monitor SAP system from the Azure portal](monitor-portal.md)
+- [Get quality checks and insights for a VIS resource](get-quality-checks-insights.md)
+- [Start and Stop SAP systems](start-stop-sap-systems.md)
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
The following is an example of a JSON input for the SPACEANALYTICS_CONFIG parame
"type": "count", "config": { "trigger": "event",
- "threshold": 13.00,
"focus": "footprint" } }
The following is an example of a JSON input for the SPACEANALYTICS_CONFIG parame
| `zones` | list| List of zones. | | `name` | string| Friendly name for this zone.| | `polygon` | list| Each value pair represents the x,y for vertices of a polygon. The polygon represents the areas in which people are tracked or counted. Polygon points are based on normalized coordinates (0-1), where the top left corner is (0.0, 0.0) and the bottom right corner is (1.0, 1.0).
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. |
+| `threshold` | float| Events are egressed when the size of the person inside the zone is greater than this threshold. This is an optional field; the value is a ratio (0-1) of the image width. For example, a value of 0.0253 corresponds to about 13 pixels on a video with an image width of 512 (0.0253 x 512 = ~13).|
| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcount**, this should be `count`.| | `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not. | `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every X event is egressed, ex. `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`. |
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
"type": "linecrossing", "config": { "trigger": "event",
- "threshold": 13.00,
"focus": "footprint" } }
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
| `line` | list| The definition of the line. This is a directional line allowing you to understand "entry" vs. "exit".| | `start` | value pair| x, y coordinates for line's starting point. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. | | `end` | value pair| x, y coordinates for line's ending point. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. |
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. The default value is 13. This is the recommended value to achieve maximum accuracy. |
+| `threshold` | float| Events are egressed when the size of the person inside the zone is greater than this threshold. This is an optional field; the value is a ratio (0-1) of the image width. For example, a value of 0.0253 corresponds to about 13 pixels on a video with an image width of 512 (0.0253 x 512 = ~13).|
| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingline**, this should be `linecrossing`.| |`trigger`|string|The type of trigger for sending an event.<br>Supported Values: "event": fire when someone crosses the line.| | `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box). The default value is footprint.|
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
"type": "zonecrossing", "config":{ "trigger": "event",
- "threshold": 38.00,
"focus": "footprint" } }]
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
"type": "zonedwelltime", "config":{ "trigger": "event",
- "threshold": 13.00,
"focus": "footprint" } }]
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
| `name` | string| Friendly name for this zone.| | `polygon` | list| Each value pair represents the x,y for vertices of polygon. The polygon represents the areas in which people are tracked or counted. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. | `target_side` | int| Specifies a side of the zone defined by `polygon` to measure how long people face that side while in the zone. 'dwellTimeForTargetSide' will output that estimated time. Each side is a numbered edge between the two vertices of the polygon that represents your zone. For example, the edge between the first two vertices of the polygon represents the first side, 'side'=1. The value of `target_side` is between `[0,N-1]` where `N` is the number of sides of the `polygon`. This is an optional field. |
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. The default value is 38 when the type is `zonecrossing` and 13 when time is `DwellTime`. These are the recommended values to achieve maximum accuracy. |
+| `threshold` | float| Events are egressed when the size of the person inside the zone is greater than this threshold. This is an optional field; the value is a ratio (0-1) of the image width. For example, a value of 0.074 corresponds to about 38 pixels on a video with an image width of 512 (0.074 x 512 = ~38).|
| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingpolygon** this should be `zonecrossing` or `zonedwelltime`.| | `trigger`|string|The type of trigger for sending an event<br>Supported Values: "event": fire when someone enters or exits the zone.| | `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box). The default value is footprint.|
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
"output_frequency":1, "minimum_distance_threshold":6.0, "maximum_distance_threshold":35.0,
- "aggregation_method": "average"
- "threshold": 13.00,
+ "aggregation_method": "average",
"focus": "footprint" } }]
This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter tha
| `zones` | list| List of zones. | | `name` | string| Friendly name for this zone.| | `polygon` | list| Each value pair represents the x,y for vertices of polygon. The polygon represents the areas in which people are counted and the distance between people is measured. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size.
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. |
+| `threshold` | float| Events are egressed when the size of the person inside the zone is greater than this threshold. This is an optional field; the value is a ratio (0-1) of the image width. For example, a value of 0.0253 corresponds to about 13 pixels on a video with an image width of 512 (0.0253 x 512 = ~13).|
| `type` | string| For **cognitiveservices.vision.spatialanalysis-persondistance**, this should be `persondistance`.| | `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not. | `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every X event is egressed, ex. `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`.|
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
"type": "linecrossing", "config": { "trigger": "event",
- "threshold": 13.00,
"focus": "footprint" } }
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
"output_frequency": 1, "minimum_distance_threshold": 6.0, "maximum_distance_threshold": 35.0,
- "threshold": 13.00,
"focus": "footprint" } },
The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` para
"config": { "trigger": "event", "output_frequency": 1,
- "threshold": 13.00,
"focus": "footprint" } }, { "type": "zonecrossing", "config": {
- "threshold": 38.00,
"focus": "footprint" } }, { "type": "zonedwelltime", "config": {
- "threshold": 13.00,
"focus": "footprint" } }
cognitive-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-audio-data.md
+
+ Title: Locate audio files for batch transcription - Speech service
+
+description: Batch transcription is used to transcribe a large amount of audio in storage. You should provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe.
+++++++ Last updated : 09/11/2022
+ms.devlang: csharp
+++
+# Locate audio files for batch transcription
+
+Batch transcription is used to transcribe a large amount of audio in storage. Batch transcription can read audio files from a public URI (such as "https://crbn.us/hello.wav") or a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI.
+
+You should provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
+
+## Supported audio formats
+
+The batch transcription API supports the following formats:
+
+| Format | Codec | Bits per sample | Sample rate |
+|--|-|||
+| WAV | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
+| MP3 | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
+| OGG | OPUS | 16-bit | 8 kHz or 16 kHz, mono or stereo |
+
+For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file. To create an ordered final transcript, use the timestamps that are generated per utterance.
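+
+If your source audio is in a different format or sample rate, convert it before you upload it. As one example (a sketch only: ffmpeg isn't part of the Speech service and is assumed to be installed separately), the following command produces a 16 kHz, 16-bit, mono WAV file:
+
+```azurecli-interactive
+ffmpeg -i input.mp4 -ar 16000 -ac 1 -c:a pcm_s16le hello.wav
+```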
+
+## Azure Blob Storage example
+
+Batch transcription can read audio files from a public URI (such as "https://crbn.us/hello.wav") or a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI. You can provide individual audio files, or an entire Azure Blob Storage container. You can also read or write transcription results in a container. This example shows how to transcribe audio files in [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md).
+
+The [SAS URI](../../storage/common/storage-sas-overview.md) must have `r` (read) and `l` (list) permissions. The storage container can hold at most 5 GB of audio data and a maximum of 10,000 blobs. The maximum size for a single blob is 2.5 GB.
+
+Follow these steps to create a storage account, upload wav files from your local directory to a new container, and generate a SAS URL that you can use for batch transcriptions.
+
+1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account will be created.
+
+ ```azurecli-interactive
+ set RESOURCE_GROUP=<your existing resource group name>
+ ```
+
+1. Set the `AZURE_STORAGE_ACCOUNT` environment variable to the name of a storage account that you want to create.
+
+ ```azurecli-interactive
+ set AZURE_STORAGE_ACCOUNT=<choose new storage account name>
+ ```
+
+1. Create a new storage account with the [`az storage account create`](/cli/azure/storage/account#az-storage-account-create) command. Replace `eastus` with the region of your resource group.
+
+ ```azurecli-interactive
+ az storage account create -n %AZURE_STORAGE_ACCOUNT% -g %RESOURCE_GROUP% -l eastus
+ ```
+
+ > [!TIP]
+ > When you are finished with batch transcriptions and want to delete your storage account, use the [`az storage account delete`](/cli/azure/storage/account#az-storage-account-delete) command.
+
+1. Get your new storage account keys with the [`az storage account keys list`](/cli/azure/storage/account#az-storage-account-keys-list) command.
+
+ ```azurecli-interactive
+ az storage account keys list -g %RESOURCE_GROUP% -n %AZURE_STORAGE_ACCOUNT%
+ ```
+
+1. Set the `AZURE_STORAGE_KEY` environment variable to one of the key values retrieved in the previous step.
+
+ ```azurecli-interactive
+ set AZURE_STORAGE_KEY=<your storage account key>
+ ```
+
+ > [!IMPORTANT]
+ > The remaining steps use the `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_KEY` environment variables. If you didn't set the environment variables, you can pass the values as parameters to the commands. See the [az storage container create](/cli/azure/storage/) documentation for more information.
+
+1. Create a container with the [`az storage container create`](/cli/azure/storage/container#az-storage-container-create) command. Replace `<mycontainer>` with a name for your container.
+
+ ```azurecli-interactive
+ az storage container create -n <mycontainer>
+ ```
+
+1. The following [`az storage blob upload-batch`](/cli/azure/storage/blob#az-storage-blob-upload-batch) command uploads all .wav files from the current local directory. Replace `<mycontainer>` with a name for your container. Optionally you can modify the command to upload files from a different directory.
+
+ ```azurecli-interactive
+ az storage blob upload-batch -d <mycontainer> -s . --pattern *.wav
+ ```
+
+1. Generate a SAS URL with read (r) and list (l) permissions for the container with the [`az storage container generate-sas`](/cli/azure/storage/container#az-storage-container-generate-sas) command. Replace `<mycontainer>` with the name of your container.
+
+ ```azurecli-interactive
+ az storage container generate-sas -n <mycontainer> --expiry 2022-09-09 --permissions rl --https-only
+ ```
+
+The previous command returns a SAS token. Append the SAS token to your container blob URL to create a SAS URL. For example: `https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN`.
+
+You will use the SAS URL when you [create a batch transcription](batch-transcription-create.md) request. For example:
+
+```json
+{
+ "contentContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN"
+}
+```
+
+## Next steps
+
+- [Batch transcription overview](batch-transcription.md)
+- [Create a batch transcription](batch-transcription-create.md)
+- [Get batch transcription results](batch-transcription-get.md)
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
+
+ Title: Create a batch transcription - Speech service
+
+description: With batch transcriptions, you submit the audio, and then retrieve transcription results asynchronously.
+++++++ Last updated : 09/11/2022
+zone_pivot_groups: speech-cli-rest
+++
+# Create a batch transcription
+
+With batch transcriptions, you submit the [audio data](batch-transcription-audio-data.md), and then retrieve transcription results asynchronously. The service transcribes the audio data and stores the results in a storage container. You can then [retrieve the results](batch-transcription-get.md) from the storage container.
+
+## Create a transcription job
++
+To create a transcription and connect it to an existing project, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions:
+
+- Set the required `content` parameter. You can specify either a semicolon-delimited list of individual files or the SAS URL for an entire container. This property isn't returned in the response. For more information about Azure Blob Storage and SAS URLs, see [Azure storage for audio files](batch-transcription-audio-data.md#azure-blob-storage-example).
+- Set the required `language` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a transcription job:
+
+```azurecli-interactive
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": true,
+ "displayFormWordLevelTimestampsEnabled": false,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked"
+ },
+ "lastActionDateTime": "2022-09-10T18:39:07Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-09-10T18:39:07Z",
+ "locale": "en-US",
+ "displayName": "My Transcription"
+}
+```
+
+The top-level `self` property in the response body is the transcription's URI. Use this URI to get details such as the URI of the transcriptions and transcription report files. You also use this URI to update or delete a transcription.
+
+For Speech CLI help with transcriptions, run the following command:
+
+```azurecli-interactive
+spx help batch transcription
+```
+++
+To create a transcription, use the [CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md#transcriptions). Construct the request body according to the following instructions:
+
+- You must set either the `contentContainerUrl` or `contentUrls` property. This property will not be returned in the response. For more information about Azure blob storage and SAS URLs, see [Azure storage for audio files](batch-transcription-audio-data.md#azure-blob-storage-example).
+- Set the required `locale` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.
+- Set the required `displayName` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later.
+
+Make an HTTP POST request using the URI as shown in the following [CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "contentUrls": [
+ "https://crbn.us/hello.wav",
+ "https://crbn.us/whatstheweatherlike.wav"
+ ],
+ "locale": "en-US",
+ "displayName": "My Transcription",
+ "model": null,
+ "properties": {
+ "wordLevelTimestampsEnabled": true
+ }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
+```
++
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": true,
+ "displayFormWordLevelTimestampsEnabled": false,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked"
+ },
+ "lastActionDateTime": "2022-09-10T18:39:07Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-09-10T18:39:07Z",
+ "locale": "en-US",
+ "displayName": "My Transcription"
+}
+```
+
+The top-level `self` property in the response body is the transcription's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription) details such as the URI of the transcriptions and transcription report files. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateTranscription) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) a transcription.
+
+You can query the status of your transcriptions with the [GetTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription) operation.
+
+Call [DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) regularly after you retrieve the results, so that completed transcription jobs don't accumulate in the service. Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results.
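+
+For example, the following request deletes a transcription. This is only a sketch that reuses the placeholder key and region from the examples above; replace `YourTranscriptionId` with the GUID at the end of the transcription's `self` URI:
+
+```azurecli-interactive
+curl -v -X DELETE "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```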
++
+## Request configuration options
++
+For Speech CLI help with transcription configuration options, run the following command:
+
+```azurecli-interactive
+spx help batch transcription create advanced
+```
+++
+Here are some property options that you can use to configure a transcription when you call the [CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) operation.
+
+| Property | Description |
+|-|-|
+|`channels`|An array of channel numbers to process. Channels `0` and `1` are transcribed by default. |
+|`contentContainerUrl`| You can submit individual audio files, or a whole storage container. You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Azure storage for audio files](batch-transcription-audio-data.md#azure-blob-storage-example).|
+|`contentUrls`| You can submit individual audio files, or a whole storage container. You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Azure storage for audio files](batch-transcription-audio-data.md#azure-blob-storage-example).|
+|`destinationContainerUrl`|The result can be stored in an Azure container. Specify the [ad hoc SAS](../../storage/common/storage-sas-overview.md) with write permissions. SAS with stored access policies isn't supported. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted.|
+|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The feature isn't available with stereo recordings.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.|
+|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.|
+|`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models).|
+|`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. |
+|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.|
+|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. For example, specify `PT12H` for 12 hours. As an alternative, you can call [DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) regularly after you retrieve the transcription results.|
+|`wordLevelTimestampsEnabled`|Specifies if word level timestamps should be included in the output. The default value is `false`.|
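+
+For example, the following request (a sketch that reuses the placeholder key, region, and container SAS URL from the earlier examples) enables diarization and word-level timestamps, and deletes the results after 12 hours:
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+  "contentContainerUrl": "https://YourStorageAccountName.blob.core.windows.net/YourContainerName?YourSASToken",
+  "locale": "en-US",
+  "displayName": "My Transcription",
+  "properties": {
+    "diarizationEnabled": true,
+    "wordLevelTimestampsEnabled": true,
+    "timeToLive": "PT12H"
+  }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
+```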
++++
+## Using custom models
+
+Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
+
+Optionally, you can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model.
+++
+```azurecli-interactive
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+```
+++
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "contentContainerUrl": "https://YourStorageAccountName.blob.core.windows.net/YourContainerName?YourSASToken",
+ "locale": "en-US",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "displayName": "My Transcription",
+ "properties": {
+ "wordLevelTimestampsEnabled": true
+ }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
+```
++
+To use a Custom Speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide. A [deployed custom endpoint](how-to-custom-speech-deploy-model.md) isn't needed for the batch transcription service.
+
+Batch transcription requests for expired models will fail with a 4xx error. You'll want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
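+
+To check which base models are currently available, you can list them by making a GET request to the `models/base` endpoint. This is a sketch that reuses the placeholder key and region from the examples above:
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/base" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```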
+
+## Next steps
+
+- [Batch transcription overview](batch-transcription.md)
+- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
+- [Get batch transcription results](batch-transcription-get.md)
cognitive-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md
+
+ Title: Get batch transcription results - Speech service
+
+description: With batch transcription, the Speech service transcribes the audio data and stores the results in a storage container. You can then retrieve the results from the storage container.
+++++++ Last updated : 09/11/2022
+zone_pivot_groups: speech-cli-rest
+++
+# Get batch transcription results
+
+To get transcription results, first check the [status](#get-transcription-status) of the transcription job. If the job is completed, you can [retrieve](#get-batch-transcription-results) the transcriptions and transcription report.
+
+## Get transcription status
++
+To get the status of the transcription job, use the `spx batch transcription status` command. Construct the request parameters according to the following instructions:
+
+- Set the `transcription` parameter to the ID of the transcription that you want to get.
+
+Here's an example Speech CLI command to get the transcription status:
+
+```azurecli-interactive
+spx batch transcription status --transcription YourTranscriptionId
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": true,
+ "displayFormWordLevelTimestampsEnabled": false,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked",
+ "duration": "PT3S"
+ },
+ "lastActionDateTime": "2022-09-10T18:39:09Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-09-10T18:39:07Z",
+ "locale": "en-US",
+ "displayName": "My Transcription"
+}
+```
+
+The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
+
+For Speech CLI help with transcriptions, run the following command:
+
+```azurecli-interactive
+spx help batch transcription
+```
+++
+To get the status of the transcription job, call the [GetTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
+
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": true,
+ "displayFormWordLevelTimestampsEnabled": false,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked",
+ "duration": "PT3S"
+ },
+ "lastActionDateTime": "2022-09-10T18:39:09Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-09-10T18:39:07Z",
+ "locale": "en-US",
+ "displayName": "My Transcription"
+}
+```
+
+The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
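+
+If you want to wait for the job to finish, you can wrap the same request in a loop. The following is only a sketch for a bash-like shell; it assumes that `Succeeded` and `Failed` are the terminal statuses, and it polls every 30 seconds:
+
+```azurecli-interactive
+while true; do
+  # Extract only the "status" field from the transcription details.
+  STATUS=$(curl -s -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/YourTranscriptionId" \
+    -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" | grep -o '"status": *"[^"]*"')
+  echo "$STATUS"
+  case "$STATUS" in *Succeeded*|*Failed*) break ;; esac
+  sleep 30
+done
+```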
+++
+## Get transcription results
+++
+The `spx batch transcription list` command returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
+
+- Set the required `files` flag.
+- Set the required `transcription` parameter to the ID of the transcription whose result files you want to list.
+
+Here's an example Speech CLI command that gets a list of result files for a transcription:
+
+```azurecli-interactive
+spx batch transcription list --files --transcription YourTranscriptionId
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "values": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
+ "name": "contenturl_0.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 3407
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_0_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=AobsqO9DH9CIOuGC5ifFH3QpkQay6PjHiWn5G87FcIg%3D"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
+ "name": "contenturl_1.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 8233
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_1_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=wO3VxbhLK4PhT3rwLpJXBYHYQi5EQqyl%2Fp1lgjNvfh0%3D"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
+ "name": "report.json",
+ "kind": "TranscriptionReport",
+ "properties": {
+ "size": 279
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_report.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=gk1k%2Ft5qa1TpmM45tPommx%2F2%2Bc%2FUUfsYTX5FoSa1u%2FY%3D"
+ }
+ }
+ ]
+}
+```
+
+The locations of the transcription files and the transcription report file, along with other details, are returned in the response body. The `contentUrl` property contains the URL to the [transcription](#transcription-result-file) (`"kind": "Transcription"`) or [transcription report](#transcription-report-file) (`"kind": "TranscriptionReport"`) file.
+
+By default, the results are stored in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted.
++++
+The [GetTranscriptionsFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
+
+Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/YourTranscriptionId/files" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "values": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
+ "name": "contenturl_0.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 3407
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_0_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=AobsqO9DH9CIOuGC5ifFH3QpkQay6PjHiWn5G87FcIg%3D"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
+ "name": "contenturl_1.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 8233
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_1_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=wO3VxbhLK4PhT3rwLpJXBYHYQi5EQqyl%2Fp1lgjNvfh0%3D"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
+ "name": "report.json",
+ "kind": "TranscriptionReport",
+ "properties": {
+ "size": 279
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_report.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=gk1k%2Ft5qa1TpmM45tPommx%2F2%2Bc%2FUUfsYTX5FoSa1u%2FY%3D"
+ }
+ }
+ ]
+}
+```
+
+The locations of the transcription files and the transcription report file, along with other details, are returned in the response body. The `contentUrl` property contains the URL to the [transcription](#transcription-result-file) (`"kind": "Transcription"`) or [transcription report](#transcription-report-file) (`"kind": "TranscriptionReport"`) file.
+
+If you didn't specify a container in the `destinationContainerUrl` property of the transcription request, the results are stored in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted.
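+
+To download an individual result file, make a GET request against its `contentUrl`. A Speech resource key isn't needed, because the URL already includes a SAS token. For example (a sketch with a placeholder URL):
+
+```azurecli-interactive
+curl -o transcription_result.json "<contentUrl from the files response>"
+```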
+++
+### Transcription report file
+
+One transcription report file is provided for each submitted batch transcription job.
+
+The contents of the transcription report file are formatted as JSON, as shown in this example.
+
+```json
+{
+ "successfulTranscriptionsCount": 2,
+ "failedTranscriptionsCount": 0,
+ "details": [
+ {
+ "source": "https://crbn.us/hello.wav",
+ "status": "Succeeded"
+ },
+ {
+ "source": "https://crbn.us/whatstheweatherlike.wav",
+ "status": "Succeeded"
+ }
+ ]
+}
+```
+
+### Transcription result file
+
+One transcription result file is provided for each successfully transcribed audio file.
+
+The contents of each transcription result file are formatted as JSON, as shown in this example.
+
+```json
+{
+ "source": "...",
+ "timestamp": "2022-09-16T09:30:21Z",
+ "durationInTicks": 41200000,
+ "duration": "PT4.12S",
+ "combinedRecognizedPhrases": [
+ {
+ "channel": 0,
+ "lexical": "hello world",
+ "itn": "hello world",
+ "maskedITN": "hello world",
+ "display": "Hello world."
+ }
+ ],
+ "recognizedPhrases": [
+ {
+ "recognitionStatus": "Success",
+ "speaker": 1,
+ "channel": 0,
+ "offset": "PT0.07S",
+ "duration": "PT1.59S",
+ "offsetInTicks": 700000.0,
+ "durationInTicks": 15900000.0,
+
+ "nBest": [
+ {
+ "confidence": 0.898652852,
+ "lexical": "hello world",
+ "itn": "hello world",
+ "maskedITN": "hello world",
+ "display": "Hello world.",
+
+ "words": [
+ {
+ "word": "hello",
+ "offset": "PT0.09S",
+ "duration": "PT0.48S",
+ "offsetInTicks": 900000.0,
+ "durationInTicks": 4800000.0,
+ "confidence": 0.987572
+ },
+ {
+ "word": "world",
+ "offset": "PT0.59S",
+ "duration": "PT0.16S",
+ "offsetInTicks": 5900000.0,
+ "durationInTicks": 1600000.0,
+ "confidence": 0.906032
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+Depending in part on the request parameters set when you created the transcription job, the transcription file can contain the following result properties.
+
+|Property|Description|
+|--|--|
+|`channel`|The channel number of the results. For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file.|
+|`combinedRecognizedPhrases`|The concatenated results of all phrases for the channel.|
+|`confidence`|The confidence value for the recognition.|
+|`display`|The display form of the recognized text. Added punctuation and capitalization are included.|
+|`displayPhraseElements`|A list of results with display text for each word of the phrase. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.|
+|`duration`|The audio duration, ISO 8601 encoded duration.|
+|`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).|
+|`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "doctor smith" to "dr smith", phone numbers, and other transformations are applied.|
+|`lexical`|The actual words recognized.|
+|`locale`|The locale identified from the input audio. The `languageIdentification` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.|
+|`maskedITN`|The ITN form with profanity masking applied.|
+|`nBest`|A list of possible transcriptions for the current phrase with confidences.|
+|`offset`|The offset in audio of this phrase, ISO 8601 encoded duration.|
+|`offsetInTicks`|The offset in audio of this phrase in ticks (1 tick is 100 nanoseconds).|
+|`recognitionStatus`|The recognition state. For example: "Success" or "Failure".|
+|`recognizedPhrases`|The list of results for each phrase.|
+|`source`|The URL that was provided as the input audio source. The source corresponds to the `contentUrls` or `contentContainerUrl` request property. The `source` property is the only way to confirm the audio input for a transcription.|
+|`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property is not present.|
+|`timestamp`|The creation time of the transcription, ISO 8601 encoded timestamp, combined date and time.|
+|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.|
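+
+After you download a transcription result file, you can pull out just the combined display text with a JSON tool of your choice. For example, with `jq` (an assumption: `jq` isn't part of the Speech CLI or Azure CLI), the following prints the display text for channel 0 of a file saved as `transcription_result.json`:
+
+```azurecli-interactive
+jq -r '.combinedRecognizedPhrases[] | select(.channel == 0) | .display' transcription_result.json
+```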
++
+## Next steps
+
+- [Batch transcription overview](batch-transcription.md)
+- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
+- [Create a batch transcription](batch-transcription-create.md)
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md
Title: How to use batch transcription - Speech service
+ Title: Batch transcription overview - Speech service
-description: Batch transcription is ideal if you want to transcribe a large quantity of audio in storage, such as Azure blobs. By using the dedicated REST API, you can point to audio files with a shared access signature (SAS) URI, and asynchronously receive transcriptions.
+description: Batch transcription is ideal if you want to transcribe a large quantity of audio in storage, such as Azure blobs. Then you can asynchronously retrieve transcriptions.
- Previously updated : 01/23/2022+ Last updated : 09/11/2022 ms.devlang: csharp
-# How to use batch transcription
+# What is batch transcription?
-Batch transcription is a set of REST API operations that enables you to transcribe a large amount of audio in storage. You can point to audio files by using a typical URI or a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI, and asynchronously receive transcription results. With the v3.0 API, you can transcribe one or more audio files, or process a whole storage container.
-
-You can use batch transcription REST APIs to call the following methods:
-
-| Batch transcription operation | Method | REST API call |
-||--|-|
-| Creates a new transcription. | POST | speechtotext/v3.0/transcriptions |
-| Retrieves a list of transcriptions for the authenticated subscription. | GET | speechtotext/v3.0/transcriptions |
-| Gets a list of supported locales for offline transcriptions. | GET | speechtotext/v3.0/transcriptions/locales |
-| Updates the mutable details of the transcription identified by its ID. | PATCH | speechtotext/v3.0/transcriptions/{id} |
-| Deletes the specified transcription task. | DELETE | speechtotext/v3.0/transcriptions/{id} |
-| Gets the transcription identified by the specified ID. | GET | speechtotext/v3.0/transcriptions/{id} |
-| Gets the result files of the transcription identified by the specified ID. | GET | speechtotext/v3.0/transcriptions/{id}/files |
-
-For more information, see the [Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation.
-
-Batch transcription jobs are scheduled on a best-effort basis. You can't estimate when a job will change into the running state, but it should happen within minutes under normal system load. When the job is in the running state, the transcription occurs faster than the audio runtime playback speed.
-
-## Prerequisites
-
-As with all features of the Speech service, you create a Speech resource from the [Azure portal](https://portal.azure.com).
+Batch transcription is used to transcribe a large amount of audio data in storage. Both the [Speech-to-text REST API](rest-speech-to-text.md#transcriptions) and [Speech CLI](spx-basics.md) support batch transcription.
>[!NOTE] > To use batch transcription, you need a standard Speech resource (S0) in your subscription. Free resources (F0) aren't supported. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
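+
+If you don't have a standard Speech resource yet, you can create one with the Azure CLI. The following command is only a sketch; the name, resource group, and region are placeholders:
+
+```azurecli-interactive
+az cognitiveservices account create --name YourSpeechResourceName --resource-group YourResourceGroupName --kind SpeechServices --sku S0 --location eastus --yes
+```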
-If you plan to customize models, follow the steps in [Acoustic customization](./how-to-custom-speech-train-model.md) and [Language customization](./how-to-custom-speech-train-model.md). To use the created models in batch transcription, you need their model location. You can retrieve the model location when you inspect the details of the model (the `self` property). A deployed custom endpoint isn't needed for the batch transcription service.
-
->[!NOTE]
-> As a part of the REST API, batch transcription has a set of [quotas and limits](speech-services-quotas-and-limits.md#batch-transcription). It's a good idea to review these. To take full advantage of the ability to efficiently transcribe a large number of audio files, send multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The service transcribes the files concurrently, which reduces the turnaround time. For more information, see the [Configuration](#configuration) section of this article.
-
-## Batch transcription API
-
-The batch transcription API supports the following formats:
-
-| Format | Codec | Bits per sample | Sample rate |
-|--|-|||
-| WAV | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-| MP3 | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-| OGG | OPUS | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-
-For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each channel. To create an ordered final transcript, use the timestamps that are generated per utterance.
-
-### Configuration
-
-Configuration parameters are provided as JSON. You can transcribe one or more individual files, process a whole storage container, and use a custom trained model in a batch transcription.
-
-If you have more than one file to transcribe, it's a good idea to send multiple files in one request. The following example uses three files:
-
-```json
-{
- "contentUrls": [
- "<URL to an audio file 1 to transcribe>",
- "<URL to an audio file 2 to transcribe>",
- "<URL to an audio file 3 to transcribe>"
- ],
- "properties": {
- "wordLevelTimestampsEnabled": true
- },
- "locale": "en-US",
- "displayName": "Transcription of file using default model for en-US"
-}
-```
-
-To process a whole storage container, you can make the following configurations. Container [SAS](../../storage/common/storage-sas-overview.md) should contain `r` (read) and `l` (list) permissions:
-
-```json
-{
- "contentContainerUrl": "<SAS URL to the Azure blob container to transcribe>",
- "properties": {
- "wordLevelTimestampsEnabled": true
- },
- "locale": "en-US",
- "displayName": "Transcription of container using default model for en-US"
-}
-```
-
-Here's an example of using a custom trained model in a batch transcription. This example uses three files:
-
-```json
-{
- "contentUrls": [
- "<URL to an audio file 1 to transcribe>",
- "<URL to an audio file 2 to transcribe>",
- "<URL to an audio file 3 to transcribe>"
- ],
- "properties": {
- "wordLevelTimestampsEnabled": true
- },
- "locale": "en-US",
- "model": {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/models/{id}"
- },
- "displayName": "Transcription of file using default model for en-US"
-}
-```
-
-### Configuration properties
-
-Use these optional properties to configure transcription:
-
- :::column span="1":::
- **Parameter**
- :::column-end:::
- :::column span="2":::
- **Description**
- :::column span="1":::
- `profanityFilterMode`
- :::column-end:::
- :::column span="2":::
- Optional, defaults to `Masked`. Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags.
- :::column span="1":::
- `punctuationMode`
- :::column-end:::
- :::column span="2":::
- Optional, defaults to `DictatedAndAutomatic`. Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation.
- :::column span="1":::
- `wordLevelTimestampsEnabled`
- :::column-end:::
- :::column span="2":::
- Optional, `false` by default. Specifies if word level timestamps should be added to the output.
- :::column span="1":::
- `diarizationEnabled`
- :::column-end:::
- :::column span="2":::
- Optional, `false` by default. Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. Requires `wordLevelTimestampsEnabled` to be set to `true`.
- :::column span="1":::
- `channels`
- :::column-end:::
- :::column span="2":::
- Optional, `0` and `1` transcribed by default. An array of channel numbers to process. Here, a subset of the available channels in the audio file can be specified to be processed (for example `0` only).
- :::column span="1":::
- `timeToLive`
- :::column-end:::
- :::column span="2":::
- Optional, no deletion by default. A duration to automatically delete transcriptions after completing the transcription. The `timeToLive` is useful in mass processing transcriptions to ensure they will be eventually deleted (for example, `PT12H` for 12 hours).
- :::column span="1":::
- `destinationContainerUrl`
- :::column-end:::
- :::column span="2":::
- Optional URL with [ad hoc SAS](../../storage/common/storage-sas-overview.md) to a writeable container in Azure. The result is stored in this container. SAS with stored access policies isn't supported. If you don't specify a container, Microsoft stores the results in a storage container managed by Microsoft. When the transcription is deleted by calling [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription), the result data is also deleted.
-
-### Storage
-
-Batch transcription can read audio from a public-visible internet URI,
-and can read audio or write transcriptions by using a SAS URI with [Blob Storage](../../storage/blobs/storage-blobs-overview.md).
-
-## Batch transcription result
-
-For each audio input, one transcription result file is created. The [Get transcriptions files](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation returns a list of result files for this transcription. The only way to confirm the audio input for a transcription, is to check the `source` field in the transcription result file.
-
-Each transcription result file has this format:
-
-```json
-{
- "source": "...", // sas url of a given contentUrl or the path relative to the root of a given container
- "timestamp": "2020-06-16T09:30:21Z", // creation time of the transcription, ISO 8601 encoded timestamp, combined date and time
- "durationInTicks": 41200000, // total audio duration in ticks (1 tick is 100 nanoseconds)
- "duration": "PT4.12S", // total audio duration, ISO 8601 encoded duration
- "combinedRecognizedPhrases": [ // concatenated results for simple access in single string for each channel
- {
- "channel": 0, // channel number of the concatenated results
- "lexical": "hello world",
- "itn": "hello world",
- "maskedITN": "hello world",
- "display": "Hello world."
- }
- ],
- "recognizedPhrases": [ // results for each phrase and each channel individually
- {
- "recognitionStatus": "Success", // recognition state, e.g. "Success", "Failure"
- "speaker": 1, // if `diarizationEnabled` is `true`, this is the identified speaker (1 or 2), otherwise this property is not present
- "channel": 0, // channel number of the result
- "offset": "PT0.07S", // offset in audio of this phrase, ISO 8601 encoded duration
- "duration": "PT1.59S", // audio duration of this phrase, ISO 8601 encoded duration
- "offsetInTicks": 700000.0, // offset in audio of this phrase in ticks (1 tick is 100 nanoseconds)
- "durationInTicks": 15900000.0, // audio duration of this phrase in ticks (1 tick is 100 nanoseconds)
+You should provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
- // possible transcriptions of the current phrase with confidences
- "nBest": [
- {
- "confidence": 0.898652852, // confidence value for the recognition of the whole phrase
- "lexical": "hello world",
- "itn": "hello world",
- "maskedITN": "hello world",
- "display": "Hello world.",
+## How does it work?
- // if wordLevelTimestampsEnabled is `true`, there will be a result for each word of the phrase, otherwise this property is not present
- "words": [
- {
- "word": "hello",
- "offset": "PT0.09S",
- "duration": "PT0.48S",
- "offsetInTicks": 900000.0,
- "durationInTicks": 4800000.0,
- "confidence": 0.987572
- },
- {
- "word": "world",
- "offset": "PT0.59S",
- "duration": "PT0.16S",
- "offsetInTicks": 5900000.0,
- "durationInTicks": 1600000.0,
- "confidence": 0.906032
- }
- ]
- }
- ]
- }
- ]
-}
-```
+With batch transcription, you submit the audio data and then retrieve the transcription results asynchronously. The service transcribes the audio and stores the results in a storage container, from which you can then retrieve them.
-The result contains the following fields:
+To get started with batch transcription, refer to the following how-to guides:
- :::column span="1":::
- **Field**
- :::column-end:::
- :::column span="2":::
- **Content**
- :::column span="1":::
- `lexical`
- :::column-end:::
- :::column span="2":::
- The actual words recognized.
- :::column span="1":::
- `itn`
- :::column-end:::
- :::column span="2":::
- The inverse-text-normalized (ITN) form of the recognized text. Abbreviations (for example, "doctor smith" to "dr smith"), phone numbers, and other transformations are applied.
- :::column span="1":::
- `maskedITN`
- :::column-end:::
- :::column span="2":::
- The ITN form with profanity masking applied.
- :::column span="1":::
- `display`
- :::column-end:::
- :::column span="2":::
- The display form of the recognized text. Added punctuation and capitalization are included.
+1. [Locate audio files for batch transcription](batch-transcription-audio-data.md) - You can upload your own data or use existing audio files via public URI or [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI.
+1. [Create a batch transcription](batch-transcription-create.md) - Submit the transcription job with parameters such as the audio files, the transcription language, and the transcription model.
+1. [Get batch transcription results](batch-transcription-get.md) - Check transcription status and retrieve transcription results asynchronously.
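For orientation, here's a minimal sketch of a transcription creation request body. The property names (`contentUrls`, `properties`, `locale`, `displayName`) match the v3.0 transcription request format used elsewhere in this article; the URL and display name are placeholders to replace with your own values.

```json
{
  "contentUrls": [
    "<URL to an audio file to transcribe>"
  ],
  "properties": {
    "wordLevelTimestampsEnabled": true,
    "punctuationMode": "DictatedAndAutomatic"
  },
  "locale": "en-US",
  "displayName": "Transcription of file using default model for en-US"
}
```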
-## Speaker separation (diarization)
-
-*Diarization* is the process of separating speakers in a piece of audio. The batch pipeline supports diarization and is capable of recognizing two speakers on mono channel recordings. The feature isn't available on stereo recordings.
-
-The output of transcription with diarization enabled contains a `Speaker` entry for each transcribed phrase. If diarization isn't used, the `Speaker` property isn't present in the JSON output. For diarization, the speakers are identified as `1` or `2`.
-
-To request diarization, set the `diarizationEnabled` property to `true`. Here's an example:
-
-```json
-{
- "contentUrls": [
- "<URL to an audio file to transcribe>",
- ],
- "properties": {
- "diarizationEnabled": true,
- "wordLevelTimestampsEnabled": true,
- "punctuationMode": "DictatedAndAutomatic",
- "profanityFilterMode": "Masked"
- },
- "locale": "en-US",
- "displayName": "Transcription of file using default model for en-US"
-}
-```
-
-Word-level timestamps must be enabled, as the parameters in this request indicate.
-
-## Best practices
-
-The batch transcription service can handle a large number of submitted transcriptions. You can query the status of your transcriptions with [Get transcriptions](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions). Call [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)
-regularly from the service, after you retrieve the results. Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results.
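For example, to have results deleted automatically 12 hours after the transcription completes, you could set `timeToLive` in the request `properties`. This is a fragment, not a complete request body:

```json
{
  "properties": {
    "timeToLive": "PT12H"
  }
}
```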
-
-> [!TIP]
-> You can use the [Ingestion Client](ingestion-client.md) tool and resulting solution to process a high volume of audio.
-
-## Sample code
-
-Complete samples are available in the [GitHub sample repository](https://aka.ms/csspeech/samples), inside the `samples/batch` subdirectory.
-
-Update the sample code with your subscription information, service region, URI pointing to the audio file to transcribe, and model location if you're using a custom model.
-
-[!code-csharp[Configuration variables for batch transcription](~/samples-cognitive-services-speech-sdk/samples/batch/csharp/batchclient/program.cs#transcriptiondefinition)]
-
-The sample code sets up the client and submits the transcription request. It then polls for the status information and prints details about the transcription progress.
-
-```csharp
-// get the status of our transcriptions periodically and log results
-int completed = 0, running = 0, notStarted = 0;
-while (completed < 1)
-{
- completed = 0; running = 0; notStarted = 0;
-
- // get all transcriptions for the user
- paginatedTranscriptions = null;
- do
- {
- // <transcriptionstatus>
- if (paginatedTranscriptions == null)
- {
- paginatedTranscriptions = await client.GetTranscriptionsAsync().ConfigureAwait(false);
- }
- else
- {
- paginatedTranscriptions = await client.GetTranscriptionsAsync(paginatedTranscriptions.NextLink).ConfigureAwait(false);
- }
-
- // delete all pre-existing completed transcriptions. If transcriptions are still running or not started, they will not be deleted
- foreach (var transcription in paginatedTranscriptions.Values)
- {
- switch (transcription.Status)
- {
- case "Failed":
- case "Succeeded":
- // we check to see if it was one of the transcriptions we created from this client.
- if (!createdTranscriptions.Contains(transcription.Self))
- {
- // not created form here, continue
- continue;
- }
-
- completed++;
-
- // if the transcription was successful, check the results
- if (transcription.Status == "Succeeded")
- {
- var paginatedfiles = await client.GetTranscriptionFilesAsync(transcription.Links.Files).ConfigureAwait(false);
-
- var resultFile = paginatedfiles.Values.FirstOrDefault(f => f.Kind == ArtifactKind.Transcription);
- var result = await client.GetTranscriptionResultAsync(new Uri(resultFile.Links.ContentUrl)).ConfigureAwait(false);
- Console.WriteLine("Transcription succeeded. Results: ");
- Console.WriteLine(JsonConvert.SerializeObject(result, SpeechJsonContractResolver.WriterSettings));
- }
- else
- {
- Console.WriteLine("Transcription failed. Status: {0}", transcription.Properties.Error.Message);
- }
-
- break;
-
- case "Running":
- running++;
- break;
-
- case "NotStarted":
- notStarted++;
- break;
- }
- }
-
- // for each transcription in the list we check the status
- Console.WriteLine(string.Format("Transcriptions status: {0} completed, {1} running, {2} not started yet", completed, running, notStarted));
- }
- while (paginatedTranscriptions.NextLink != null);
-
- // </transcriptionstatus>
- // check again after 1 minute
- await Task.Delay(TimeSpan.FromMinutes(1)).ConfigureAwait(false);
-}
-```
-
-For full details about the preceding calls, see the [Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation. For the full sample shown here, go to [GitHub](https://aka.ms/csspeech/samples) in the `samples/batch` subdirectory.
-
-This sample uses an asynchronous setup to post audio and receive transcription status. The `PostTranscriptions` method sends the audio file details, and the `GetTranscriptions` method receives the states. `PostTranscriptions` returns a handle, and `GetTranscriptions` uses it to create a handle to get the transcription status.
-
-This sample code doesn't specify a custom model. The service uses the base model for transcribing the file or files. To specify the model, you can pass on the same method the model reference for the custom model.
-
-> [!NOTE]
-> For baseline transcriptions, you don't need to declare the ID for the base model.
+Batch transcription jobs are scheduled on a best-effort basis. You can't estimate when a job will change into the running state, but it should happen within minutes under normal system load. When the job is in the running state, the transcription occurs faster than the audio runtime playback speed.
## Next steps
-> [!div class="nextstepaction"]
-> [Speech to text v3.0 API reference](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)
+- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
+- [Review quotas and limits](speech-services-quotas-and-limits.md#batch-transcription)
+- [Get batch transcription results](batch-transcription-get.md)
cognitive-services Call Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-overview.md
Previously updated : 08/10/2022 Last updated : 09/18/2022 # Call Center Overview
Some example scenarios for the implementation of Azure Cognitive Services in cal
- Post-call analytics: Post-call analysis to create insights into customer conversations that improve understanding, support continuous improvement of call handling, optimize quality assurance and compliance control, and drive other insight-driven optimizations. > [!TIP]
+> Try the [Post-call transcription and analytics quickstart](/azure/cognitive-services/speech-service/call-center-quickstart).
+>
> To deploy a call center transcription solution to Azure with a no-code approach, try the [Ingestion Client](/azure/cognitive-services/speech-service/ingestion-client). ## Cognitive Services features for call centers
You can find an overview of all Language service features and customization opti
## Next steps
+* [Post-call transcription and analytics quickstart](/azure/cognitive-services/speech-service/call-center-quickstart)
* [Try out the Language Studio](https://language.cognitive.azure.com)
-* [Explore the Language service features](/azure/cognitive-services/language-service/overview#available-features)
* [Try out the Speech Studio](https://speech.microsoft.com)
-* [Explore the Speech service features](/azure/cognitive-services/speech-service/overview)
cognitive-services Call Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-quickstart.md
+
+ Title: "Post-call transcription and analytics quickstart - Speech service"
+
+description: In this quickstart, you perform sentiment analysis and conversation summarization of call center transcriptions.
++++++ Last updated : 09/20/2022+
+ms.devlang: csharp
++
+# Quickstart: Post-call transcription and analytics
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try the Ingestion Client](ingestion-client.md)
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
spx help csr project
::: zone pivot="rest-api"
-To create a project, use the [CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To create a project, use the [CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the required `locale` property. This should be the locale of the contained datasets. The locale can't be changed later. - Set the required `displayName` property. This is the project name that will be displayed in the Speech Studio.
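For illustration only, a request body that sets both properties might look like the following sketch; the locale and display name are placeholders:

```json
{
  "locale": "en-US",
  "displayName": "My Custom Speech project"
}
```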
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
spx help csr endpoint
::: zone pivot="rest-api"
-To create an endpoint and deploy a model, use the [CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To create an endpoint and deploy a model, use the [CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects. - Set the required `model` property to the URI of the model that you want deployed to the endpoint.
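As a non-authoritative sketch, the request body fragment for the two properties described above could look like the following; the project and model URIs are placeholders:

```json
{
  "project": {
    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/<your-project-id>"
  },
  "model": {
    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/<your-model-id>"
  }
}
```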
spx help csr endpoint
::: zone pivot="rest-api"
-To redeploy the custom endpoint with a new model, use the [UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To redeploy the custom endpoint with a new model, use the [UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `model` property to the URI of the model that you want deployed to the endpoint.
The location of each log file with more details are returned in the response bod
::: zone pivot="rest-api"
-To get logs for an endpoint, start by using the [GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+To get logs for an endpoint, start by using the [GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
Make an HTTP GET request using the URI as shown in the following example. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
spx help csr evaluation
::: zone pivot="rest-api"
-To create a test, use the [CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To create a test, use the [CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects. - Set the `testingKind` property to `Evaluation` within `customProperties`. If you don't specify `Evaluation`, the test is treated as a quality inspection test. Whether the `testingKind` property is set to `Evaluation` or `Inspection`, or not set, you can access the accuracy scores via the API, but not in the Speech Studio.
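A minimal sketch of the request body fragment for these two properties, with a placeholder project URI, might look like this:

```json
{
  "project": {
    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/<your-project-id>"
  },
  "customProperties": {
    "testingKind": "Evaluation"
  }
}
```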
spx help csr evaluation
::: zone pivot="rest-api"
-To get test results, start by using the [GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+To get test results, start by using the [GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
spx help csr evaluation
::: zone pivot="rest-api"
-To create a test, use the [CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To create a test, use the [CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects. - Set the required `model1` property to the URI of a model that you want to test.
spx help csr evaluation
::: zone pivot="rest-api"
-To get test results, start by using the [GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+To get test results, start by using the [GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
When a custom model or base model expires, it is no longer available for transcr
|Transcription route |Expired model result |Recommendation | |||| |Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md?tabs=stt-tts). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
-|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [CreateTranscription](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |
+|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |
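For example, the fragment of a CreateTranscription request body that pins the transcription to a specific model might look like this sketch; the model URI is a placeholder:

```json
{
  "model": {
    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/<your-model-id>"
  }
}
```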
## Get base model expiration dates
spx help csr model
::: zone pivot="rest-api"
-To get the training and transcription expiration dates for a base model, use the [GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). You can make a [GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels) request to get available base models for all locales.
+To get the training and transcription expiration dates for a base model, use the [GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). You can make a [GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels) request to get available base models for all locales.
Make an HTTP GET request using the model URI as shown in the following example. Replace `BaseModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
spx help csr model
::: zone pivot="rest-api"
-To get the transcription expiration date for your custom model, use the [GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+To get the transcription expiration date for your custom model, use the [GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
Make an HTTP GET request using the model URI as shown in the following example. Replace `YourModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Training with plain text or structured text usually finishes within a few minute
> > Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
-If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) REST API.
+If you plan to train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) REST API.
## Consider datasets by scenario
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
spx help csr model
::: zone pivot="rest-api"
-To create a model with datasets for training, use the [CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To create a model with datasets for training, use the [CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects. - Set the required `datasets` property to the URI of the datasets that you want used for training.
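As a sketch, the request body fragment for the `project` and `datasets` properties might look like the following; the URIs are placeholders:

```json
{
  "project": {
    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/<your-project-id>"
  },
  "datasets": [
    {
      "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/<your-dataset-id>"
    }
  ]
}
```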
After the model is successfully copied, you'll be notified and can view it in th
::: zone pivot="speech-cli"
-Copying a model directly to a project in another region is not supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+Copying a model directly to a project in another region is not supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech-to-text REST API](rest-speech-to-text.md).
::: zone-end ::: zone pivot="rest-api"
-To copy a model to another Speech resource, use the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To copy a model to another Speech resource, use the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.
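A sketch of the request body with a placeholder key:

```json
{
  "targetSubscriptionKey": "<destination-speech-resource-key>"
}
```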
You should receive a response body in the following format:
```json {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
"baseModel": {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
}, "links": {
- "manifest": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
- "copyTo": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/copyto"
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/copyto"
}, "properties": { "deprecationDates": {
You should receive a response body in the following format:
```json { "project": {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
}, } ```
spx help csr model
::: zone pivot="rest-api"
-To connect a new model to a project of the Speech resource where the model was copied, use the [UpdateModel](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To connect a new model to a project of the Speech resource where the model was copied, use the [UpdateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
-- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
+- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
-Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
```azurecli-interactive curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "project": {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
}, }' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models" ```
You should receive a response body in the following format:
```json { "project": {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
}, } ```
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
spx help csr dataset
[!INCLUDE [Map CLI and API kind to Speech Studio options](includes/how-to/custom-speech/cli-api-kind.md)]
-To create a dataset and connect it to an existing project, use the [CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To create a dataset and connect it to an existing project, use the [CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects. - Set the required `kind` property. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.
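A non-authoritative fragment of the request body for these two properties might look like this; the project URI is a placeholder and `Acoustic` is just one of the possible kinds:

```json
{
  "project": {
    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/<your-project-id>"
  },
  "kind": "Acoustic"
}
```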
cognitive-services How To Get Speech Session Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-get-speech-session-id.md
https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cogn
``` `9f4ffa5113a846eba289aa98b28e766f` will be your Session ID.
-## Getting Transcription ID for Batch transcription. (REST API v3.0).
+## Getting Transcription ID for Batch transcription
-[Batch transcription](batch-transcription.md) uses [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+[Batch transcription](batch-transcription.md) uses [Speech-to-text REST API](rest-speech-to-text.md).
-The required Transcription ID is the GUID value contained in the main `self` element of the Response body returned by requests, like [Create Transcription](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription).
+The required Transcription ID is the GUID value contained in the main `self` element of the Response body returned by requests such as [CreateTranscription](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription).
The example below is the Response body of a `Create Transcription` request. GUID value `537216f8-0620-4a10-ae2d-00bdb423b36f` found in the first `self` element is the Transcription ID.
The example below is the Response body of a `Create Transcription` request. GUID
} ``` > [!NOTE]
-> Use the same technique to determine different IDs required for debugging issues related to [Custom Speech](custom-speech-overview.md), like uploading a dataset using [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
+> Use the same technique to determine different IDs required for debugging issues related to [Custom Speech](custom-speech-overview.md), like uploading a dataset using [CreateDataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
> [!NOTE]
-> You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [Get Transcriptions](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions) request.
+> You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [GetTranscriptions](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions) request.
cognitive-services Migrate V2 To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v2-to-v3.md
Title: Migrate from v2 to v3 REST API - Speech service
-description: This document helps developers migrate code from v2 to v3 of in the Speech services speech-to-text REST API.
+description: This document helps developers migrate code from v2 to v3 of the Speech to text REST API.
Previously updated : 08/09/2022 Last updated : 09/01/2022
Compared to v2, the v3 version of the Speech services REST API for speech-to-text is more reliable, easier to use, and more consistent with APIs for similar services. Most teams can migrate from v2 to v3 in a day or two. > [!IMPORTANT]
-> The Speech-to-text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+> The Speech-to-text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the Speech-to-text REST API v3.1. Complete the steps in this article and then see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide for additional requirements.
## Forward compatibility
General changes:
### Host name changes
-Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths.
+Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech-to-text REST API v3.0](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths.
>[!IMPORTANT] >Change the hostname from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com` where region is the region of your speech subscription. Also remove `api/`from any path in your client code.
If the entity has additional functionality available through other paths, they a
```json {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
"createdDateTime": "2019-01-07T11:34:12Z", "lastActionDateTime": "2019-01-07T11:36:07Z", "status": "Succeeded", "locale": "en-US", "displayName": "Transcription using locale en-US", "links": {
- "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
} } ```
The `values` property contains a subset of the available collection entities. Th
This change requires calling the `GET` for the collection in a loop until all elements have been returned. >[!IMPORTANT]
->When the response of a GET to `speechtotext/v3.0/{collection}` contains a value in `$.@nextLink`, continue issuing `GETs` on `$.@nextLink` until `$.@nextLink` is not set to retrieve all elements of that collection.
+>When the response of a GET to `speechtotext/v3.1/{collection}` contains a value in `$.@nextLink`, continue issuing `GETs` on `$.@nextLink` until `$.@nextLink` is not set to retrieve all elements of that collection.
### Creating transcriptions
to access the content of each file. To control the validity duration of the SAS
```json {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
"links": {
- "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
} } ```
to access the content of each file. To control the validity duration of the SAS
{ "values": [ {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/f23e54f5-ed74-4c31-9730-2f1a3ef83ce8",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/f23e54f5-ed74-4c31-9730-2f1a3ef83ce8",
"name": "Name", "kind": "Transcription", "properties": {
to access the content of each file. To control the validity duration of the SAS
} }, {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/28bc946b-c251-4a86-84f6-ea0f0a2373ef",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/28bc946b-c251-4a86-84f6-ea0f0a2373ef",
"name": "Name", "kind": "TranscriptionReport", "properties": {
to access the content of each file. To control the validity duration of the SAS
} } ],
- "@nextLink": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files?skip=2&top=2"
+ "@nextLink": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files?skip=2&top=2"
} ``` The `kind` property indicates the format of content of the file. For transcriptions, the files of kind `TranscriptionReport` are the summary of the job and files of the kind `Transcription` are the result of the job itself. >[!IMPORTANT]
->To get the results of operations, use a `GET` on `/speechtotext/v3.0/{collection}/{id}/files`, they are no longer contained in the responses of `GET` on `/speechtotext/v3.0/{collection}/{id}` or `/speechtotext/v3.0/{collection}`.
+>To get the results of operations, use a `GET` on `/speechtotext/v3.1-preview.1/{collection}/{id}/files`; they are no longer contained in the responses of `GET` on `/speechtotext/v3.1-preview.1/{collection}/{id}` or `/speechtotext/v3.1-preview.1/{collection}`.
### Customizing models
With this change, the need for a `kind` in the `POST` operation has been removed
To improve the results of a trained model, the acoustic data is automatically used internally during language training. In general, models created through the v3 API deliver more accurate results than models created with the v2 API. >[!IMPORTANT]
->To customize both the acoustic and language model part, pass all of the required language and acoustic datasets in `datasets[]` of the POST to `/speechtotext/v3.0/models`. This will create a single model with both parts customized.
+>To customize both the acoustic and language model part, pass all of the required language and acoustic datasets in `datasets[]` of the POST to `/speechtotext/v3.1-preview.1/models`. This will create a single model with both parts customized.
### Retrieving base and custom models To simplify getting the available models, v3 has separated the collections of "base models" from the customer owned "customized models". The two routes are now
-`GET /speechtotext/v3.0/models/base` and `GET /speechtotext/v3.0/models/`.
+`GET /speechtotext/v3.1-preview.1/models/base` and `GET /speechtotext/v3.1-preview.1/models/`.
In v2, all models were returned together in a single response. >[!IMPORTANT]
->To get a list of provided base models for customization, use `GET` on `/speechtotext/v3.0/models/base`. You can find your own customized models with a `GET` on `/speechtotext/v3.0/models`.
+>To get a list of provided base models for customization, use `GET` on `/speechtotext/v3.1-preview.1/models/base`. You can find your own customized models with a `GET` on `/speechtotext/v3.1-preview.1/models`.
### Name of an entity
In v2, referenced entities were always inlined, for example the used models of a
```json {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
"model": {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/models/021a72d0-54c4-43d3-8254-27336ead9037"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/021a72d0-54c4-43d3-8254-27336ead9037"
} } ```
Version v2 of the service supported logging endpoint results. To retrieve the re
```json {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6",
"links": {
- "logs": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs"
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs"
} } ```
Version v2 of the service supported logging endpoint results. To retrieve the re
{ "values": [ {
- "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/6d72ad7e-f286-4a6f-b81b-a0532ca6bcaa/files/logs/2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/6d72ad7e-f286-4a6f-b81b-a0532ca6bcaa/files/logs/2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
"name": "2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav", "kind": "Audio", "properties": {
Version v2 of the service supported logging endpoint results. To retrieve the re
} } ],
- "@nextLink": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs?top=2&SkipToken=2!188!MDAwMDk1ITZhMjhiMDllLTg0MDYtNDViMi1hMGRkLWFlNzRlOGRhZWJkNi8yMDIwLTA0LTAxLzEyNDY0M182MzI5NGRkMi1mZGYzLTRhZmEtOTA0NC1mODU5ZTcxOWJiYzYud2F2ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--"
+ "@nextLink": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs?top=2&SkipToken=2!188!MDAwMDk1ITZhMjhiMDllLTg0MDYtNDViMi1hMGRkLWFlNzRlOGRhZWJkNi8yMDIwLTA0LTAxLzEyNDY0M182MzI5NGRkMi1mZGYzLTRhZmEtOTA0NC1mODU5ZTcxOWJiYzYud2F2ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--"
} ```
Accuracy tests have been renamed to evaluations because the new name describes b
## Next steps
-* [Speech-to-text REST API v3.0](rest-speech-to-text.md)
-* [Speech-to-text REST API v3.0 reference](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
+* [Speech-to-text REST API](rest-speech-to-text.md)
+* [Speech-to-text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
cognitive-services Migrate V3 0 To V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v3-0-to-v3-1.md
+
+ Title: Migrate from v3.0 to v3.1 REST API - Speech service
+
+description: This document helps developers migrate code from v3.0 to v3.1 of the Speech to text REST API.
++++++ Last updated : 09/01/2022+
+ms.devlang: csharp
+++
+# Migrate code from v3.0 to v3.1 of the REST API
+
+The Speech-to-text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). Changes from version 3.0 to 3.1 are described in the sections below.
+
+> [!IMPORTANT]
+> Speech-to-text REST API v3.1 is currently in public preview. Once it's generally available, version 3.0 of the [Speech to Text REST API](rest-speech-to-text.md) will be deprecated.
+
+## Base path
+
+You must update the base path in your code from `/speechtotext/v3.0` to `/speechtotext/v3.1-preview.1`. For example, to get base models in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base` instead of `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base`.
+
+Please note these additional changes:
+- The `/models/{id}/copyto` operation (includes '/') in version 3.0 is replaced by the `/models/{id}:copyto` operation (includes ':') in version 3.1.
+- The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
+- The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
+
+For more details, see [Operation IDs](#operation-ids) later in this guide.
+
+## Batch transcription
+
+> [!NOTE]
+> Don't use Speech-to-text REST API v3.0 to retrieve a transcription created via Speech-to-text REST API v3.1. You'll see an error message such as the following: "The API version cannot be used to access this transcription. Please use API version v3.1 or higher."
+
+In the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Create) operation the following three properties are added:
+- The `displayFormWordLevelTimestampsEnabled` property can be used to enable the reporting of word-level timestamps on the display form of the transcription results. The results are returned in the `displayPhraseElements` property of the transcription file.
+- The `diarization` property can be used to specify hints for the minimum and maximum number of speaker labels to generate when performing optional diarization (speaker separation). With this feature, the service is now able to generate speaker labels for more than two speakers. The `diarizationEnabled` property is deprecated and will be removed in the next major version of the API.
+- The `languageIdentification` property can be used to specify settings for language identification on the input prior to transcription. Up to 10 candidate locales are supported for language identification. The returned transcription will include a new `locale` property for the recognized language or the locale that you provided. A sketch of a request body that uses these new properties follows this list.
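The following sketch illustrates how such a v3.1 request body might be shaped. The nested field names under `diarization` and `languageIdentification` (such as `speakers`, `minCount`, `maxCount`, and `candidateLocales`) are assumptions for illustration; confirm the exact schema in the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Create) reference.

```json
{
  "contentUrls": [
    "<URL to an audio file to transcribe>"
  ],
  "properties": {
    "displayFormWordLevelTimestampsEnabled": true,
    "diarization": {
      "speakers": {
        "minCount": 2,
        "maxCount": 5
      }
    },
    "languageIdentification": {
      "candidateLocales": [ "en-US", "es-ES" ]
    }
  },
  "locale": "en-US",
  "displayName": "Transcription with diarization and language identification"
}
```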
+
+The `filter` property is added to the [Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_List), [Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_ListFiles), and [Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListTranscriptions) operations. The `filter` expression can be used to select a subset of the available resources. You can filter by `displayName`, `description`, `createdDateTime`, `lastActionDateTime`, `status`, and `locale`. For example: `filter=createdDateTime gt 2022-02-01T11:00:00Z`
+
+## Custom Speech
+
+### Datasets
+
+The following operations are added for uploading and managing multiple data blocks for a dataset:
+ - [Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_UploadBlock) - Upload a block of data for the dataset. The maximum size of the block is 8MiB.
+ - [Datasets_GetDatasetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_GetDatasetBlocks) - Get the list of uploaded blocks for this dataset.
+ - [Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_CommitBlocks) - Commit block list to complete the upload of the dataset.
+
+To support model adaptation with [structured text in markdown](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) data, the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Create) operation now supports the **LanguageMarkdown** data kind. For more information, see [upload datasets](how-to-custom-speech-upload-data.md#upload-datasets).
+
+### Models
+
+The [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListBaseModels) and [Models_ListBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListBaseModel) operations return information on the type of adaptation supported by each base model.
+
+```json
+"features": {
+ "supportsAdaptationsWith": [
+ "Acoustic",
+ "Language",
+ "LanguageMarkdown",
+ "Pronunciation"
+ ]
+}
+```
+
+The [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_Create) operation has a new `customModelWeightPercent` property where you can specify the weight used when the Custom Language Model (trained from plain or structured text data) is combined with the Base Language Model. Valid values are integers between 1 and 100. The default value is currently 30.
+
+The `filter` property is added to the following operations:
+
+- [Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_List)
+- [Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_ListFiles)
+- [Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_List)
+- [Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_List)
+- [Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_ListFiles)
+- [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListBaseModels)
+- [Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListCustomModels)
+- [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_List)
+- [Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListDatasets)
+- [Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListEndpoints)
+- [Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListEvaluations)
+- [Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListModels)
+
+The `filter` expression can be used to select a subset of the available resources. You can filter by `displayName`, `description`, `createdDateTime`, `lastActionDateTime`, `status`, `locale`, and `kind`. For example: `filter=locale eq 'en-US'`
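For instance, a request like the following Python sketch might list only the `en-US` base models. The region and key are placeholders, and the `values` array used in the loop is the usual shape of the list responses; confirm it in the reference documentation.

```python
import requests

region = "eastus"
speech_key = "YOUR_SPEECH_RESOURCE_KEY"  # placeholder
base_url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1"

# List base models, keeping only those whose locale is en-US.
# The requests library URL-encodes the filter expression automatically.
response = requests.get(
    f"{base_url}/models/base",
    headers={"Ocp-Apim-Subscription-Key": speech_key},
    params={"filter": "locale eq 'en-US'"},
)
response.raise_for_status()
for model in response.json().get("values", []):
    print(model.get("displayName"), model.get("locale"))
```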
+
+Added the [Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListFiles) operation to get the files of the model identified by the given ID.
+
+Added the [Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetFile) operation to get one specific file (identified with fileId) from a model (identified with id). This lets you retrieve a **ModelReport** file that provides information on the data processed during training.
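A minimal sketch of retrieving the training report might look like the following. It assumes each listed file exposes a `kind` of `ModelReport` and a downloadable `links.contentUrl`; both are assumptions to verify against the Models_ListFiles and Models_GetFile reference pages.

```python
import requests

region = "eastus"
speech_key = "YOUR_SPEECH_RESOURCE_KEY"            # placeholder
model_id = "00000000-0000-0000-0000-000000000000"  # placeholder custom model ID
base_url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1"
headers = {"Ocp-Apim-Subscription-Key": speech_key}

# Models_ListFiles: list the files that belong to the model.
files = requests.get(f"{base_url}/models/{model_id}/files", headers=headers)
files.raise_for_status()

# Assumed response shape: a "values" array where each entry has "kind" and "links.contentUrl".
for item in files.json().get("values", []):
    if item.get("kind") == "ModelReport":
        content_url = item.get("links", {}).get("contentUrl")
        if content_url:
            report = requests.get(content_url)  # download the training report
            report.raise_for_status()
            print(report.text)
```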
+
+## Operation IDs
+
+You must update the base path in your code from `/speechtotext/v3.0` to `/speechtotext/v3.1`. For example, to get base models in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base` instead of `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base`.
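In client code, this migration can be as small as changing one constant. A minimal sketch using the URLs above:

```python
# Old (v3.0) and new (v3.1 preview) base paths for the eastus region.
OLD_BASE_URL = "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0"
NEW_BASE_URL = "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1"

# Example: the base models endpoint under each version.
old_models_base = f"{OLD_BASE_URL}/models/base"
new_models_base = f"{NEW_BASE_URL}/models/base"
```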
+
+The name of each `operationId` in version 3.1 is prefixed with the object name. For example, the `operationId` for "Create Model" changed from [CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel) in version 3.0 to [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_Create) in version 3.1.
+
+|Path|Method|Version 3.1 Operation ID|Version 3.0 Operation ID|
+|||||
+|`/datasets`|GET|[Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_List)|[GetDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasets)|
+|`/datasets`|POST|[Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Create)|[CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset)|
+|`/datasets/{id}`|DELETE|[Datasets_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Delete)|[DeleteDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteDataset)|
+|`/datasets/{id}`|GET|[Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Get)|[GetDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset)|
+|`/datasets/{id}`|PATCH|[Datasets_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Update)|[UpdateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset)|
+|`/datasets/{id}/blocks:commit`|POST|[Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_CommitBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|GET|[Datasets_GetDatasetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_GetDatasetBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|PUT|[Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_UploadBlock)|Not applicable|
+|`/datasets/{id}/files`|GET|[Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_ListFiles)|[GetDatasetFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFiles)|
+|`/datasets/{id}/files/{fileId}`|GET|[Datasets_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_GetFile)|[GetDatasetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFile)|
+|`/datasets/locales`|GET|[Datasets_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_ListSupportedLocales)|[GetSupportedLocalesForDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForDatasets)|
+|`/datasets/upload`|POST|[Datasets_Upload](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Upload)|[UploadDatasetFromForm](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UploadDatasetFromForm)|
+|`/endpoints`|GET|[Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_List)|[GetEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoints)|
+|`/endpoints`|POST|[Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_Create)|[CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint)|
+|`/endpoints/{id}`|DELETE|[Endpoints_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_Delete)|[DeleteEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpoint)|
+|`/endpoints/{id}`|GET|[Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_Get)|[GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint)|
+|`/endpoints/{id}`|PATCH|[Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_Update)|[UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint)|
+|`/endpoints/{id}/files/logs`|DELETE|[Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_DeleteLogs)|[DeleteEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLogs)|
+|`/endpoints/{id}/files/logs`|GET|[Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_ListLogs)|[GetEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLogs)|
+|`/endpoints/{id}/files/logs/{logId}`|DELETE|[Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_DeleteLog)|[DeleteEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLog)|
+|`/endpoints/{id}/files/logs/{logId}`|GET|[Endpoints_GetLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_GetLog)|[GetEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLog)|
+|`/endpoints/base/{locale}/files/logs`|DELETE|[Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_DeleteBaseModelLogs)|[DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLogs)|
+|`/endpoints/base/{locale}/files/logs`|GET|[Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_ListBaseModelLogs)|[GetBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs)|
+|`/endpoints/base/{locale}/files/logs/{logId}`|DELETE|[Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_DeleteBaseModelLog)|[DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLog)|
+|`/endpoints/base/{locale}/files/logs/{logId}`|GET|[Endpoints_GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_GetBaseModelLog)|[GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLog)|
+|`/endpoints/locales`|GET|[Endpoints_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_ListSupportedLocales)|[GetSupportedLocalesForEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEndpoints)|
+|`/evaluations`|GET|[Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_List)|[GetEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluations)|
+|`/evaluations`|POST|[Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_Create)|[CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation)|
+|`/evaluations/{id}`|DELETE|[Evaluations_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_Delete)|[DeleteEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation)|
+|`/evaluations/{id}`|GET|[Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_Get)|[GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation)|
+|`/evaluations/{id}`|PATCH|[Evaluations_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_Update)|[UpdateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEvaluation)|
+|`/evaluations/{id}/files`|GET|[Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_ListFiles)|[GetEvaluationFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFiles)|
+|`/evaluations/{id}/files/{fileId}`|GET|[Evaluations_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_GetFile)|[GetEvaluationFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFile)|
+|`/evaluations/locales`|GET|[Evaluations_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_ListSupportedLocales)|[GetSupportedLocalesForEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEvaluations)|
+|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
+|`/models`|GET|[Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListCustomModels)|[GetModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModels)|
+|`/models`|POST|[Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_Create)|[CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel)|
+|`/models/{id}:copyto`<sup>1</sup>|POST|[Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_CopyTo)|[CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)|
+|`/models/{id}`|DELETE|[Models_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_Delete)|[DeleteModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteModel)|
+|`/models/{id}`|GET|[Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetCustomModel)|[GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel)|
+|`/models/{id}`|PATCH|[Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_Update)|[UpdateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel)|
+|`/models/{id}/files`|GET|[Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListFiles)|Not applicable|
+|`/models/{id}/files/{fileId}`|GET|[Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetFile)|Not applicable|
+|`/models/{id}/manifest`|GET|[Models_GetCustomModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetCustomModelManifest)|[GetModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelManifest)|
+|`/models/base`|GET|[Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListBaseModels)|[GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels)|
+|`/models/base/{id}`|GET|[Models_ListBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
+|`/models/base/{id}/manifest`|GET|[Models_GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetBaseModelManifest)|[GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelManifest)|
+|`/models/locales`|GET|[Models_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListSupportedLocales)|[GetSupportedLocalesForModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForModels)|
+|`/projects`|GET|[Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_List)|[GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects)|
+|`/projects`|POST|[Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_Create)|[CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject)|
+|`/projects/{id}`|DELETE|[Projects_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_Delete)|[DeleteProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteProject)|
+|`/projects/{id}`|GET|[Projects_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_Get)|[GetProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProject)|
+|`/projects/{id}`|PATCH|[Projects_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_Update)|[UpdateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateProject)|
+|`/projects/{id}/datasets`|GET|[Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListDatasets)|[GetDatasetsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetsForProject)|
+|`/projects/{id}/endpoints`|GET|[Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListEndpoints)|[GetEndpointsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointsForProject)|
+|`/projects/{id}/evaluations`|GET|[Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListEvaluations)|[GetEvaluationsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationsForProject)|
+|`/projects/{id}/models`|GET|[Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListModels)|[GetModelsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelsForProject)|
+|`/projects/{id}/transcriptions`|GET|[Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListTranscriptions)|[GetTranscriptionsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsForProject)|
+|`/projects/locales`|GET|[Projects_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListSupportedLocales)|[GetSupportedProjectLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedProjectLocales)|
+|`/transcriptions`|GET|[Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_List)|[GetTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions)|
+|`/transcriptions`|POST|[Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Create)|[CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription)|
+|`/transcriptions/{id}`|DELETE|[Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Delete)|[DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)|
+|`/transcriptions/{id}`|GET|[Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Get)|[GetTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription)|
+|`/transcriptions/{id}`|PATCH|[Transcriptions_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Update)|[UpdateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateTranscription)|
+|`/transcriptions/{id}/files`|GET|[Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_ListFiles)|[GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles)|
+|`/transcriptions/{id}/files/{fileId}`|GET|[Transcriptions_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_GetFile)|[GetTranscriptionFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFile)|
+|`/transcriptions/locales`|GET|[Transcriptions_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_ListSupportedLocales)|[GetSupportedLocalesForTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForTranscriptions)|
+|`/webhooks`|GET|[WebHooks_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_List)|[GetHooks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHooks)|
+|`/webhooks`|POST|[WebHooks_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Create)|[CreateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateHook)|
+|`/webhooks/{id}:ping`<sup>2</sup>|POST|[WebHooks_Ping](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Ping)|[PingHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/PingHook)|
+|`/webhooks/{id}:test`<sup>3</sup>|POST|[WebHooks_Test](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Test)|[TestHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/TestHook)|
+|`/webhooks/{id}`|DELETE|[WebHooks_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Delete)|[DeleteHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteHook)|
+|`/webhooks/{id}`|GET|[WebHooks_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Get)|[GetHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHook)|
+|`/webhooks/{id}`|PATCH|[WebHooks_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Update)|[UpdateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateHook)|
+
+<sup>1</sup> The `/models/{id}/copyto` operation (includes '/') in version 3.0 is replaced by the `/models/{id}:copyto` operation (includes ':') in version 3.1.
+
+<sup>2</sup> The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
+
+<sup>3</sup> The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
+
+## Next steps
+
+* [Speech-to-text REST API](rest-speech-to-text.md)
+* [Speech-to-text REST API v3.1 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1)
+* [Speech-to-text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
++
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
You should create Speech Service resources in both a main and a secondary region
Custom Speech Service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.

1. Create your custom model in one main region (Primary).
-2. Run the [Model Copy API](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to replicate the custom model to all prepared regions (Secondary).
+2. Run the [CopyModelToSubscription](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation to replicate the custom model to all prepared regions (Secondary). A minimal sketch of this call follows the steps.
3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a Custom Speech model](./how-to-custom-speech-deploy-model.md).
   - If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
4. Configure your client to fail over on persistent errors as with the default endpoints usage.
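Here's a minimal Python sketch of the copy call in step 2, assuming the target resource's key is passed as `targetSubscriptionKey` in the request body; confirm the exact body in the CopyModelToSubscription reference.

```python
import requests

primary_region = "eastus"                          # region where the model was trained (placeholder)
primary_key = "PRIMARY_SPEECH_RESOURCE_KEY"        # placeholder
secondary_key = "SECONDARY_SPEECH_RESOURCE_KEY"    # key of the Speech resource in the backup region (placeholder)
model_id = "00000000-0000-0000-0000-000000000000"  # placeholder custom model ID

base_url = f"https://{primary_region}.api.cognitive.microsoft.com/speechtotext/v3.0"

# CopyModelToSubscription: copy the custom model to the subscription/region of the target key.
response = requests.post(
    f"{base_url}/models/{model_id}/copyto",
    headers={"Ocp-Apim-Subscription-Key": primary_key},
    json={"targetSubscriptionKey": secondary_key},  # assumed field name; verify in the reference
)
response.raise_for_status()
print(response.headers.get("Location"))  # If present, links to the copied model in the target region
```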
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
# Speech-to-text REST API for short audio
-Use cases for the speech-to-text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). For [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md), you should always use [Speech to Text API v3.0](rest-speech-to-text.md).
+Use cases for the speech-to-text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). For [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md), you should always use [Speech to Text REST API](rest-speech-to-text.md).
Before you use the speech-to-text REST API for short audio, consider the following limitations:
cognitive-services Rest Speech To Text V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-v3-1.md
- Title: Speech-to-text REST API v3.1 Public Preview - Speech service-
-description: Get reference documentation for Speech-to-text REST API v3.1 (Public Preview).
------ Previously updated : 07/11/2022----
-# Speech-to-text REST API v3.1 (preview)
-
-The Speech-to-text REST API v3.1 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). It is currently in Public Preview.
-
-> [!TIP]
-> See the [Speech to Text API v3.1 preview1](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/) reference documentation for details. This is an updated version of the [Speech to Text API v3.0](./rest-speech-to-text.md)
-
-Use the REST API v3.1 to:
-- Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
-- Transcribe data from a container (bulk transcription) and provide multiple URLs for audio files.
-- Upload data from Azure storage accounts by using a shared access signature (SAS) URI.
-- Get logs for each endpoint if logs have been requested for that endpoint.
-- Request the manifest of the models that you create, to set up on-premises containers.
-
-## Changes to the v3.0 API
-
-### Batch transcription changes:
-- In **Create Transcription** the following three new fields were added to properties:
- - **displayFormWordLevelTimestampsEnabled** can be used to enable the reporting of word-level timestamps on the display form of the transcription results.
- - **diarization** can be used to specify hints for the minimum and maximum number of speaker labels to generate when performing optional diarization (speaker separation). With this feature, the service is now able to generate speaker labels for more than two speakers.
- - **languageIdentification** can be used to specify settings for optional language identification on the input prior to transcription. Up to 10 candidate locales are supported for language identification. For the preview API, transcription can only be performed with base models for the respective locales. The ability to use custom models for transcription will be added for the GA version.
-- **Get Transcriptions**, **Get Transcription Files**, **Get Transcriptions For Project** now include a new optional parameter to simplify finding the right resource:
- - **filter** can be used to provide a filtering expression for selecting a subset of the available resources. You can filter by displayName, description, createdDateTime, lastActionDateTime, status and locale. Example: filter=createdDateTime gt 2022-02-01T11:00:00Z
-
-### Custom Speech changes
-- **Create Dataset** now supports a new data type of **LanguageMarkdown** to support upload of the new structured text data.
- It also now supports uploading data in multiple blocks for which the following new operations were added:
- - **Upload Data Block** - Upload a block of data for the dataset. The maximum size of the block is 8MiB.
- - **Get Uploaded Blocks** - Get the list of uploaded blocks for this dataset.
- - **Commit Block List** - Commit block list to complete the upload of the dataset.
-- **Get Base Models** and **Get Base Model** now provide information on the type of adaptation supported by a base model:
- ```json
- "features": {
- …
- "supportsAdaptationsWith": [
- "Acoustic",
- "Language",
- "LanguageMarkdown",
- "Pronunciation"
- ]
- }
-```
-
-|Adaptation Type |DescriptionText |
-|||
-|Acoustic |Supports adapting the model with the audio provided to adapt to the audio condition or specific speaker characteristics. |
-|Language |Supports adapting with Plain Text. |
-|LanguageMarkdown |Supports adapting with Structured Text. |
-|Pronunciation |Supports adapting with a Pronunciation File. |
-- **Create Model** has a new optional parameter under **properties** called **customModelWeightPercent** that lets you specify the weight used when the Custom Language Model (trained from plain or structured text data) is combined with the Base Language Model. Valid values are integers between 1 and 100. The default value is currently 30.
-- **Get Base Models**, **Get Datasets**, **Get Datasets For Project**, **Get Data Set Files**, **Get Endpoints**, **Get Endpoints For Project**, **Get Evaluations**, **Get Evaluations For Project**, **Get Evaluation Files**, **Get Models**, **Get Models For Project**, **Get Projects** now include a new optional parameter to simplify finding the right resource:
- - **filter** can be used to provide a filtering expression for selecting a subset of the available resources. You can filter by displayName, description, createdDateTime, lastActionDateTime, status, locale and kind. Example: filter=locale eq 'en-US'
-
-- Added a new **Get Model Files** operation to get the files of the model identified by the given ID as well as a new **Get Model File** operation to get one specific file (identified with fileId) from a model (identified with id). This lets you retrieve a **ModelReport** file that provides information on the data processed during training.
-
-## Next steps
-
-- [Customize acoustic models](./how-to-custom-speech-train-model.md)
-- [Customize language models](./how-to-custom-speech-train-model.md)
-- [Get familiar with batch transcription](batch-transcription.md)
-
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
Title: Speech-to-text REST API v3.0 - Speech service
+ Title: Speech-to-text REST API - Speech service
-description: Get reference documentation for Speech-to-text REST API v3.0.
+description: Get reference documentation for Speech-to-text REST API.
Previously updated : 04/01/2022 Last updated : 09/10/2022 ms.devlang: csharp
-# Speech-to-text REST API v3.0
+# Speech-to-text REST API
-Speech-to-text REST API v3.0 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
+Speech-to-text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
-> See the [Speech to Text API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/) reference documentation for details.
+> [!IMPORTANT]
+> Speech-to-text REST API v3.1 is currently in public preview. Once it's generally available, version 3.0 of the [Speech to Text REST API](rest-speech-to-text.md) will be deprecated. For more information, see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide.
+
+> [!div class="nextstepaction"]
+> [See the Speech to Text API v3.1 preview reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/)
+
+> [!div class="nextstepaction"]
+> [See the Speech to Text API v3.0 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/)
+
+Use Speech-to-text REST API to:
+
+- [Custom Speech](custom-speech-overview.md): With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
+- [Batch transcription](batch-transcription.md): Transcribe audio files as a batch from multiple URLs or an Azure container.
+
+Speech-to-text REST API includes such features as:
-Use REST API v3.0 to:
-- Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
-- Transcribe data from a container (bulk transcription) and provide multiple URLs for audio files.
-- Upload data from Azure storage accounts by using a shared access signature (SAS) URI.
- Get logs for each endpoint if logs have been requested for that endpoint.
- Request the manifest of the models that you create, to set up on-premises containers.
+- Upload data from Azure storage accounts by using a shared access signature (SAS) URI.
+- Bring your own storage. Use your own storage accounts for logs, transcription files, and other data.
+- Some operations support webhook notifications. You can register your webhooks where notifications are sent.
+
+## Datasets
+
+Datasets are applicable for [Custom Speech](custom-speech-overview.md). You can use datasets to train and test the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
+
+See [Upload training and testing datasets](how-to-custom-speech-upload-data.md?pivots=rest-api) for examples of how to upload datasets. This table includes all the operations that you can perform on datasets.
+
+|Path|Method|Version 3.1 (Preview)|Version 3.0|
+|||||
+|`/datasets`|GET|[Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_List)|[GetDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasets)|
+|`/datasets`|POST|[Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Create)|[CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset)|
+|`/datasets/{id}`|DELETE|[Datasets_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Delete)|[DeleteDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteDataset)|
+|`/datasets/{id}`|GET|[Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Get)|[GetDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset)|
+|`/datasets/{id}`|PATCH|[Datasets_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Update)|[UpdateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset)|
+|`/datasets/{id}/blocks:commit`|POST|[Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_CommitBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|GET|[Datasets_GetDatasetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_GetDatasetBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|PUT|[Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_UploadBlock)|Not applicable|
+|`/datasets/{id}/files`|GET|[Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_ListFiles)|[GetDatasetFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFiles)|
+|`/datasets/{id}/files/{fileId}`|GET|[Datasets_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_GetFile)|[GetDatasetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFile)|
+|`/datasets/locales`|GET|[Datasets_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_ListSupportedLocales)|[GetSupportedLocalesForDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForDatasets)|
+|`/datasets/upload`|POST|[Datasets_Upload](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Datasets_Upload)|[UploadDatasetFromForm](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UploadDatasetFromForm)|
+
+## Endpoints
+
+Endpoints are applicable for [Custom Speech](custom-speech-overview.md). You must deploy a custom endpoint to use a Custom Speech model.
+
+See [Deploy a model](how-to-custom-speech-deploy-model.md?pivots=rest-api) for examples of how to manage deployment endpoints. This table includes all the operations that you can perform on endpoints.
+
+|Path|Method|Version 3.1 (Preview)|Version 3.0|
+|||||
+|`/endpoints`|GET|[Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_List)|[GetEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoints)|
+|`/endpoints`|POST|[Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_Create)|[CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint)|
+|`/endpoints/{id}`|DELETE|[Endpoints_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_Delete)|[DeleteEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpoint)|
+|`/endpoints/{id}`|GET|[Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_Get)|[GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint)|
+|`/endpoints/{id}`|PATCH|[Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_Update)|[UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint)|
+|`/endpoints/{id}/files/logs`|DELETE|[Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_DeleteLogs)|[DeleteEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLogs)|
+|`/endpoints/{id}/files/logs`|GET|[Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_ListLogs)|[GetEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLogs)|
+|`/endpoints/{id}/files/logs/{logId}`|DELETE|[Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_DeleteLog)|[DeleteEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLog)|
+|`/endpoints/{id}/files/logs/{logId}`|GET|[Endpoints_GetLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_GetLog)|[GetEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLog)|
+|`/endpoints/base/{locale}/files/logs`|DELETE|[Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_DeleteBaseModelLogs)|[DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLogs)|
+|`/endpoints/base/{locale}/files/logs`|GET|[Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_ListBaseModelLogs)|[GetBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs)|
+|`/endpoints/base/{locale}/files/logs/{logId}`|DELETE|[Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_DeleteBaseModelLog)|[DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLog)|
+|`/endpoints/base/{locale}/files/logs/{logId}`|GET|[Endpoints_GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_GetBaseModelLog)|[GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLog)|
+|`/endpoints/locales`|GET|[Endpoints_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Endpoints_ListSupportedLocales)|[GetSupportedLocalesForEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEndpoints)|
+
+## Evaluations
+
+Evaluations are applicable for [Custom Speech](custom-speech-overview.md). You can use evaluations to compare the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
+
+See [Test recognition quality](how-to-custom-speech-inspect-data.md?pivots=rest-api) and [Test accuracy](how-to-custom-speech-evaluate-data.md?pivots=rest-api) for examples of how to test and evaluate Custom Speech models. This table includes all the operations that you can perform on evaluations.
+
+|Path|Method|Version 3.1 (Preview)|Version 3.0|
+|||||
+|`/evaluations`|GET|[Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_List)|[GetEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluations)|
+|`/evaluations`|POST|[Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_Create)|[CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation)|
+|`/evaluations/{id}`|DELETE|[Evaluations_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_Delete)|[DeleteEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation)|
+|`/evaluations/{id}`|GET|[Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_Get)|[GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation)|
+|`/evaluations/{id}`|PATCH|[Evaluations_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_Update)|[UpdateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEvaluation)|
+|`/evaluations/{id}/files`|GET|[Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_ListFiles)|[GetEvaluationFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFiles)|
+|`/evaluations/{id}/files/{fileId}`|GET|[Evaluations_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_GetFile)|[GetEvaluationFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFile)|
+|`/evaluations/locales`|GET|[Evaluations_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Evaluations_ListSupportedLocales)|[GetSupportedLocalesForEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEvaluations)|
+
+## Health status
+
+Health status provides insights about the overall health of the service and sub-components.
+
+|Path|Method|Version 3.1 (Preview)|Version 3.0|
+|||||
+|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
+
+## Models
+
+Models are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). You can use models to transcribe audio files. For example, you can use a model trained with a specific dataset to transcribe audio files.
+
+See [Train a model](how-to-custom-speech-train-model.md?pivots=rest-api) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md?pivots=rest-api) for examples of how to train and manage Custom Speech models. This table includes all the operations that you can perform on models.
+
+|Path|Method|Version 3.1 (Preview)|Version 3.0|
+|||||
+|`/models`|GET|[Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListCustomModels)|[GetModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModels)|
+|`/models`|POST|[Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_Create)|[CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel)|
+|`/models/{id}:copyto`<sup>1</sup>|POST|[Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_CopyTo)|[CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)|
+|`/models/{id}`|DELETE|[Models_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_Delete)|[DeleteModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteModel)|
+|`/models/{id}`|GET|[Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetCustomModel)|[GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel)|
+|`/models/{id}`|PATCH|[Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_Update)|[UpdateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel)|
+|`/models/{id}/files`|GET|[Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListFiles)|Not applicable|
+|`/models/{id}/files/{fileId}`|GET|[Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetFile)|Not applicable|
+|`/models/{id}/manifest`|GET|[Models_GetCustomModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetCustomModelManifest)|[GetModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelManifest)|
+|`/models/base`|GET|[Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListBaseModels)|[GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels)|
+|`/models/base/{id}`|GET|[Models_ListBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
+|`/models/base/{id}/manifest`|GET|[Models_GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_GetBaseModelManifest)|[GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelManifest)|
+|`/models/locales`|GET|[Models_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Models_ListSupportedLocales)|[GetSupportedLocalesForModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForModels)|
+
+## Projects
+
+Projects are applicable for [Custom Speech](custom-speech-overview.md). Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt-tts). For example, you might create a project for English in the United States.
+
+See [Create a project](how-to-custom-speech-create-project.md?pivots=rest-api) for examples of how to create projects. This table includes all the operations that you can perform on projects.
+
+|Path|Method|Version 3.1 (Preview)|Version 3.0|
+|||||
+|`/projects`|GET|[Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_List)|[GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects)|
+|`/projects`|POST|[Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_Create)|[CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject)|
+|`/projects/{id}`|DELETE|[Projects_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_Delete)|[DeleteProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteProject)|
+|`/projects/{id}`|GET|[Projects_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_Get)|[GetProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProject)|
+|`/projects/{id}`|PATCH|[Projects_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_Update)|[UpdateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateProject)|
+|`/projects/{id}/datasets`|GET|[Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListDatasets)|[GetDatasetsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetsForProject)|
+|`/projects/{id}/endpoints`|GET|[Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListEndpoints)|[GetEndpointsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointsForProject)|
+|`/projects/{id}/evaluations`|GET|[Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListEvaluations)|[GetEvaluationsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationsForProject)|
+|`/projects/{id}/models`|GET|[Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListModels)|[GetModelsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelsForProject)|
+|`/projects/{id}/transcriptions`|GET|[Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListTranscriptions)|[GetTranscriptionsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsForProject)|
+|`/projects/locales`|GET|[Projects_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Projects_ListSupportedLocales)|[GetSupportedProjectLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedProjectLocales)|
++
+## Transcriptions
+
+Transcriptions are applicable for [Batch Transcription](batch-transcription.md). Batch transcription is used to transcribe a large amount of audio in storage. You can send multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe.
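As a hedged sketch, a batch transcription request for multiple audio URLs might look like the following in Python. The `contentUrls` field and the SAS URIs shown are illustrative; confirm the body shape in the Transcriptions_Create reference.

```python
import requests

region = "eastus"
speech_key = "YOUR_SPEECH_RESOURCE_KEY"  # placeholder
base_url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1"

# Sketch of a Transcriptions_Create body for a batch of audio file URLs.
body = {
    "displayName": "Batch transcription example",
    "locale": "en-US",
    "contentUrls": [
        "https://contoso.blob.core.windows.net/audio/call1.wav?sv=...",  # SAS URIs (placeholders)
        "https://contoso.blob.core.windows.net/audio/call2.wav?sv=...",
    ],
}

response = requests.post(
    f"{base_url}/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": speech_key},
    json=body,
)
response.raise_for_status()
print(response.json())  # Poll the returned transcription's self link until its status is "Succeeded"
```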
+
+See [Create a transcription](batch-transcription-create.md?pivots=rest-api) for examples of how to create a transcription from multiple audio files. This table includes all the operations that you can perform on transcriptions.
+
+|Path|Method|Version 3.1 (Preview)|Version 3.0|
+|||||
+|`/transcriptions`|GET|[Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_List)|[GetTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions)|
+|`/transcriptions`|POST|[Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Create)|[CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription)|
+|`/transcriptions/{id}`|DELETE|[Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Delete)|[DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)|
+|`/transcriptions/{id}`|GET|[Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Get)|[GetTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription)|
+|`/transcriptions/{id}`|PATCH|[Transcriptions_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_Update)|[UpdateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateTranscription)|
+|`/transcriptions/{id}/files`|GET|[Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_ListFiles)|[GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles)|
+|`/transcriptions/{id}/files/{fileId}`|GET|[Transcriptions_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_GetFile)|[GetTranscriptionFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFile)|
+|`/transcriptions/locales`|GET|[Transcriptions_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/Transcriptions_ListSupportedLocales)|[GetSupportedLocalesForTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForTranscriptions)|
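
As a rough illustration of the `Transcriptions_Create` operation listed above, the following Python sketch submits two audio files for batch transcription. The region, key, and SAS URLs are placeholders, and the body shows only a minimal set of properties; see [Create a transcription](batch-transcription-create.md?pivots=rest-api) for the authoritative request format.

```python
import requests

# Placeholder values: substitute your Speech resource region and key.
region = "eastus"
speech_key = "<your-speech-resource-key>"

# Minimal request body for Transcriptions_Create; optional properties such as
# diarization or word-level timestamps are omitted here.
body = {
    "displayName": "My batch transcription",
    "locale": "en-US",
    "contentUrls": [
        "https://<storage-account>.blob.core.windows.net/<container>/audio1.wav?<SAS>",
        "https://<storage-account>.blob.core.windows.net/<container>/audio2.wav?<SAS>",
    ],
}

response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": speech_key},
    json=body,
)
response.raise_for_status()
print(response.json()["self"])  # URL of the new transcription; poll it for status
```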
++
+## Web hooks
+
+Web hooks are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). In particular, web hooks apply to [datasets](#datasets), [endpoints](#endpoints), [evaluations](#evaluations), [models](#models), and [transcriptions](#transcriptions). Web hooks can be used to receive notifications about creation, processing, completion, and deletion events.
-## Features
+This table includes all the web hook operations that are available with the speech-to-text REST API.
-REST API v3.0 includes such features as:
-- **Webhook notifications**: All running processes of the service support webhook notifications. REST API v3.0 provides the calls to enable you to register your webhooks where notifications are sent.-- **Updating models behind endpoints** -- **Model adaptation with multiple datasets**: Adapt a model by using multiple dataset combinations of acoustic, language, and pronunciation data.-- **Bring your own storage**: Use your own storage accounts for logs, transcription files, and other data.
+|Path|Method|Version 3.1 (Preview)|Version 3.0|
+|||||
+|`/webhooks`|GET|[WebHooks_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_List)|[GetHooks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHooks)|
+|`/webhooks`|POST|[WebHooks_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Create)|[CreateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateHook)|
+|`/webhooks/{id}:ping`<sup>1</sup>|POST|[WebHooks_Ping](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Ping)|[PingHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/PingHook)|
+|`/webhooks/{id}:test`<sup>2</sup>|POST|[WebHooks_Test](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Test)|[TestHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/TestHook)|
+|`/webhooks/{id}`|DELETE|[WebHooks_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Delete)|[DeleteHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteHook)|
+|`/webhooks/{id}`|GET|[WebHooks_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Get)|[GetHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHook)|
+|`/webhooks/{id}`|PATCH|[WebHooks_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/operations/WebHooks_Update)|[UpdateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateHook)|
-For examples of using REST API v3.0 with batch transcription, see [How to use batch transcription](batch-transcription.md).
+<sup>1</sup> The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
-For information about migrating to the latest version of the speech-to-text REST API, see [Migrate code from v2.0 to v3.0 of the REST API](./migrate-v2-to-v3.md).
+<sup>2</sup> The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
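
To make the path change concrete, the following Python sketch calls the version 3.1 ping operation with the `:` separator. The region, key, web hook ID, and the `v3.1-preview.1` path segment are placeholders and assumptions; check the `WebHooks_Ping` reference linked in the table for the exact preview path.

```python
import requests

# Placeholder values: substitute your Speech resource region, key, and web hook ID.
region = "eastus"
speech_key = "<your-speech-resource-key>"
webhook_id = "<your-webhook-id>"

# Version 3.1 uses ':ping' in the path; version 3.0 used '/ping'.
# The 'v3.1-preview.1' segment below is an assumption.
url = (
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/"
    f"v3.1-preview.1/webhooks/{webhook_id}:ping"
)
response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": speech_key})
print(response.status_code)  # any 2xx status means the ping request was accepted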
## Next steps -- [Customize acoustic models](./how-to-custom-speech-train-model.md)-- [Customize language models](./how-to-custom-speech-train-model.md)
+- [Create a Custom Speech project](how-to-custom-speech-create-project.md)
- [Get familiar with batch transcription](batch-transcription.md)
cognitive-services Speech Container Howto On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto-on-premises.md
The following prerequisites are required before using Speech containers on-premises:
| Container Registry access | In order for Kubernetes to pull the docker images into the cluster, it will need access to the container registry. | | Kubernetes CLI | The [Kubernetes CLI][kubernetes-cli] is required for managing the shared credentials from the container registry. Kubernetes is also needed before Helm, which is the Kubernetes package manager. | | Helm CLI | Install the [Helm CLI][helm-install], which is used to install a helm chart (container package definition). |
-|Speech resource |In order to use these containers, you must have:<br><br>A _Speech_ Azure resource to get the associated billing key and billing endpoint URI. Both values are available on the Azure portal's **Speech** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: resource key<br><br>**{ENDPOINT_URI}**: endpoint URI example is: `https://westus.api.cognitive.microsoft.com/sts/v1.0`|
+|Speech resource |In order to use these containers, you must have:<br><br>A _Speech_ Azure resource to get the associated billing key and billing endpoint URI. Both values are available on the Azure portal's **Speech** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: resource key<br><br>**{ENDPOINT_URI}**: endpoint URI example is: `https://eastus.api.cognitive.microsoft.com/sts/v1.0`|
## The recommended host computer configuration
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
Diarization is enabled by default. To get diarization in your response, use `dia
Starting in v2.6.0 of the speech-to-text container, you should use the Language service 3.0 API endpoint instead of the preview one. For example:
-* `https://westus2.api.cognitive.microsoft.com/text/analytics/v3.0/sentiment`
+* `https://eastus.api.cognitive.microsoft.com/text/analytics/v3.0/sentiment`
* `https://localhost:5000/text/analytics/v3.0/sentiment` > [!NOTE]
Starting in v2.6.0 of the speech-to-text container, you should use Language serv
Starting in v2.2.0 of the speech-to-text container, you can call the [sentiment analysis v3 API](../text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md) on the output. To call sentiment analysis, you'll need a Language service API resource endpoint. For example:
-* `https://westus2.api.cognitive.microsoft.com/text/analytics/v3.0-preview.1/sentiment`
+* `https://eastus.api.cognitive.microsoft.com/text/analytics/v3.0-preview.1/sentiment`
* `https://localhost:5000/text/analytics/v3.0-preview.1/sentiment` If you're accessing a Language service endpoint in the cloud, you'll need a key. If you're running Language service features locally, you might not need to provide this.
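
As an illustration only, the following Python sketch posts a document to the Language service 3.0 sentiment endpoint shown above. The endpoint and key values are placeholders; when you call a local container endpoint, the key header may not be needed.

```python
import requests

# Placeholder values: use your Language resource endpoint and key, or a local
# container endpoint such as https://localhost:5000 (possibly without a key).
endpoint = "https://eastus.api.cognitive.microsoft.com"
language_key = "<your-language-resource-key>"

body = {
    "documents": [
        {"id": "1", "language": "en", "text": "The call quality was great."}
    ]
}

response = requests.post(
    f"{endpoint}/text/analytics/v3.0/sentiment",
    headers={"Ocp-Apim-Subscription-Key": language_key},
    json=body,
)
print(response.json())  # per-document sentiment labels and confidence scores
```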
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech SDK (software development kit) exposes many of the [Speech service capabilities](overview.md), so you can develop speech-enabled applications. The Speech SDK is available [in many programming languages](quickstarts/setup-platform.md) and across platforms. The Speech SDK is ideal for both real-time and non-real-time scenarios, by using local devices, files, Azure Blob Storage, and input and output streams.
-In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [Speech-to-text REST API v3.0](rest-speech-to-text.md) for [batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md).
+In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [Speech-to-text REST API](rest-speech-to-text.md) for [batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md).
## Supported languages
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md
Speech service has REST APIs for [Speech-to-text](rest-speech-to-text.md) and [T
Speech-to-text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario. The Speech-to-text REST APIs are:-- [Speech-to-text REST API v3.0](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](./migrate-v2-to-v3.md)
+- [Speech-to-text REST API](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
- [Speech-to-text REST API for short audio](rest-speech-to-text-short.md), which is used for online transcription Usage of the Speech-to-text REST API for short audio and the Text-to-speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article.
-Speech-to-text REST API v3.0 uses a different set of endpoints, so it requires a different approach for the private-endpoint-enabled scenario.
+Speech-to-text REST API uses a different set of endpoints, so it requires a different approach for the private-endpoint-enabled scenario.
The next subsections describe both cases.
-#### Speech-to-text REST API v3.0
+#### Speech-to-text REST API
-Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech-to-text REST API v3.0](rest-speech-to-text.md). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`.
+Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech-to-text REST API](rest-speech-to-text.md). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`.
This is a sample request URL:
Compare it with the output from [this section](#resolve-dns-from-other-networks)
### Speech resource with a custom domain name and without private endpoints: Usage with the REST APIs
-#### Speech-to-text REST API v3.0
+#### Speech-to-text REST API
-Speech-to-text REST API v3.0 usage is fully equivalent to the case of [private-endpoint-enabled Speech resources](#speech-to-text-rest-api-v30).
+Speech-to-text REST API usage is fully equivalent to the case of [private-endpoint-enabled Speech resources](#speech-to-text-rest-api).
#### Speech-to-text REST API for short audio and Text-to-speech REST API
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp
| Quota | Free (F0)<sup>1</sup> | Standard (S0) | |--|--|--|
-| [Speech-to-text REST API V2.0 and v3.0](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
+| [Speech-to-text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
| Max audio input file size | N/A | 1 GB | | Max input blob size (for example, can contain more than one file in a zip archive). Note the file size limit from the preceding row. | N/A | 2.5 GB | | Max blob container size | N/A | 5 GB |
You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp
| Max acoustic dataset file size for data import | 2 GB | 2 GB | | Max language dataset file size for data import | 200 MB | 1.5 GB | | Max pronunciation dataset file size for data import | 1 KB | 1 MB |
-| Max text size when you're using the `text` parameter in the [Create Model](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel/) API request | 200 KB | 500 KB |
+| Max text size when you're using the `text` parameter in the [CreateModel](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel/) API request | 200 KB | 500 KB |
<sup>1</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/> <sup>2</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit).<br/>
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
In depth samples are available in the [Azure-Samples/cognitive-services-speech-s
## Batch transcription
-Batch transcription is a set of [Speech-to-text REST API v3.0](rest-speech-to-text.md) operations that enable you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. For more information on how to use the batch transcription API, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
+Batch transcription is a set of [Speech-to-text REST API](rest-speech-to-text.md) operations that enable you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. For more information on how to use the batch transcription API, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
## Custom Speech The Azure speech-to-text service analyzes audio in real-time or batch to transcribe the spoken word into text. Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This base model is pre-trained with dialects and phonetics representing a variety of common domains. The base model works well in most scenarios.
-The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, building a custom speech model makes sense by training with additional data associated with that specific domain. You can create and train custom acoustic, language, and pronunciation models. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, building a custom speech model makes sense by training with additional data associated with that specific domain. You can create and train custom acoustic, language, and pronunciation models. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech-to-text REST API](rest-speech-to-text.md).
Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt-tts).
cognitive-services Swagger Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/swagger-documentation.md
Last updated 02/16/2021
-# Generate a REST API client library for the Speech-to-text REST API v3.0
+# Generate a REST API client library for the Speech-to-text REST API
Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech) can be completed programmatically using these APIs. > [!NOTE] > Speech service has several REST APIs for [Speech-to-text](rest-speech-to-text.md) and [Text-to-speech](rest-text-to-speech.md). >
-> However only [Speech-to-text REST API v3.0](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for the information on all other Speech Services REST APIs.
+> However only [Speech-to-text REST API](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for the information on all other Speech Services REST APIs.
## Generating code from the Swagger specification
-The [Swagger specification](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) has options that allow you to quickly test for various paths. However, sometimes it's desirable to generate code for all paths, creating a single library of calls that you can base future solutions on. Let's take a look at the process to generate a Python library.
+The [Swagger specification](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) has options that allow you to quickly test for various paths. However, sometimes it's desirable to generate code for all paths, creating a single library of calls that you can base future solutions on. Let's take a look at the process to generate a Python library.
You'll need to set Swagger to the region of your Speech resource. You can confirm the region in the **Overview** part of your Speech resource settings in Azure portal. The complete list of supported regions is available [here](regions.md#speech-service). 1. In a browser, go to the Swagger specification for your [region](regions.md#speech-service):
- `https://<your-region>.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0`
+ `https://<your-region>.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1`
1. On that page, click **API definition**, and click **Swagger**. Copy the URL of the page that appears. 1. In a new browser, go to [https://editor.swagger.io](https://editor.swagger.io) 1. Click **File**, click **Import URL**, paste the URL, and click **OK**.
You can use the Python library that you generated with the [Speech service sampl
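
As a rough sketch of what the generated client can look like: the package name `swagger_client`, the host path, and the key name below are assumptions based on Swagger Codegen defaults, and your generated names depend on the codegen settings you chose.

```python
# Assumes the default Swagger Codegen output; adjust the package name, host,
# and key name to match your generated library and region.
import swagger_client

configuration = swagger_client.Configuration()
configuration.host = "https://<your-region>.api.cognitive.microsoft.com/speechtotext/v3.1"
configuration.api_key["Ocp-Apim-Subscription-Key"] = "<your-speech-resource-key>"

api_client = swagger_client.ApiClient(configuration)
# The generated operation classes mirror the operation groups in the specification,
# for example the Transcriptions_* and Models_* operations.
```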
## Next steps * [Speech service samples on GitHub](https://aka.ms/csspeech/samples).
-* [Speech-to-text REST API v3.0](rest-speech-to-text.md)
+* [Speech-to-text REST API](rest-speech-to-text.md)
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
In this how-to guide, you'll learn to use the [Translator service REST APIs](ref
* You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. > [!TIP]
- > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../active-directory/authentication/overview-authentication.md).
+ > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Translator access only, create a Translator resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../active-directory/authentication/overview-authentication.md).
* You'll need the key and endpoint from the resource to connect your application to the Translator service. Later, you'll paste your key and endpoint into the code samples. You can find these values on the Azure portal **Keys and Endpoint** page:
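
As an illustration only (not this article's own code sample), a minimal translate request that uses those values might look like the following Python sketch; the key, region, and endpoint shown are placeholders.

```python
import requests

# Placeholder values from your resource's Keys and Endpoint page.
endpoint = "https://api.cognitive.microsofttranslator.com"
translator_key = "<your-translator-key>"
region = "<your-resource-region>"

response = requests.post(
    f"{endpoint}/translate",
    params={"api-version": "3.0", "from": "en", "to": ["fr", "de"]},
    headers={
        "Ocp-Apim-Subscription-Key": translator_key,
        "Ocp-Apim-Subscription-Region": region,
    },
    json=[{"text": "Hello, world!"}],
)
print(response.json())  # one result per input text, with a translation per 'to' language
```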
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
Release notes for `v2.5.0`:
* `id-id-gadisneural` * `ka-ge-ekaneural` * `ka-ge-giorgineural`
+ * `th-th-acharaneural`
+ * `th-th-niwatneural`
+ * `th-th-premwadeeneural`
| Image Tags | Notes | ||:|
Release notes for `v2.5.0`:
| `sv-se-hillevineural`| Container image with the `sv-SE` locale and `sv-SE-hillevineural` voice.| | `sv-se-mattiasneural`| Container image with the `sv-SE` locale and `sv-SE-mattiasneural` voice.| | `sv-se-sofieneural`| Container image with the `sv-SE` locale and `sv-SE-sofieneural` voice.|
+| `th-th-acharaneural`| Container image with the `th-TH` locale and `th-TH-acharaneural` voice.|
+| `th-th-niwatneural`| Container image with the `th-TH` locale and `th-TH-niwatneural` voice.|
+| `th-th-premwadeeneural`| Container image with the `th-TH` locale and `th-TH-premwadeeneural` voice.|
| `tr-tr-ahmetneural`| Container image with the `tr-TR` locale and `tr-TR-ahmetneural` voice.| | `tr-tr-emelneural`| Container image with the `tr-TR` locale and `tr-TR-emelneural` voice.| | `zh-cn-xiaochenneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaochenneural` voice.|
cognitive-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-rewards.md
By adding up reward scores, your final reward may be outside the expected score
## Reward wait time
-Personalizer will correlate the information of a Rank call with the rewards sent in Reward calls to train the model. These may come at different times. Personalizer waits for a limited time, starting when the Rank call happened, even if the Rank call was made as an inactive event, and activated later.
+Personalizer correlates the information from a Rank call with the rewards sent in Reward calls, which may arrive at different times, to train the model. Personalizer waits for the reward score for a defined, limited time, starting when the corresponding Rank call occurred. This is done even if the Rank call was made by using [deferred activation](concept-active-inactive-events.md).
-If the **Reward Wait Time** expires, and there has been no reward information, a default reward is applied to that event for training. The maximum wait duration is 2 days. If your scenario requires longer reward wait times (e.g. for marketing email campaigns) we are offering a private preview of longer wait times. Open a support ticket in the Azure portal to get in contact with team and see if you qualify and it can be offered to you.
+If the **Reward Wait Time** expires and there has been no reward information, a default reward is applied to that event for training. You can select a reward wait time of 10 minutes, 4 hours, 12 hours, or 24 hours. If your scenario requires longer reward wait times (for example, for marketing email campaigns), we offer a private preview of longer wait times. Open a support ticket in the Azure portal to contact the team and see whether you qualify.
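
For example, a minimal sketch of sending a reward with the REST API before the wait time expires might look like the following; the endpoint, key, and event ID are placeholders.

```python
import requests

# Placeholder values: substitute your Personalizer resource endpoint and key.
endpoint = "https://<your-personalizer-resource>.cognitiveservices.azure.com"
personalizer_key = "<your-personalizer-key>"
event_id = "<event-id-returned-by-the-rank-call>"

# Send the reward before the configured Reward Wait Time expires; otherwise the
# default reward is applied to the event for training.
response = requests.post(
    f"{endpoint}/personalizer/v1.0/events/{event_id}/reward",
    headers={"Ocp-Apim-Subscription-Key": personalizer_key},
    json={"value": 1.0},
)
print(response.status_code)  # 204 indicates the reward was accepted
```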
## Best practices for reward wait time
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
JSON objects can include nested JSON objects and simple property/values. An arra
## Inference Explainability Personalizer can help you to understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.
-Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
+Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to further analyze how the data is being used by the underlying model.
Setting the service configuration flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: "IsInferenceExplainabilityEnabled": true. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
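
As a rough sketch only, the pattern described above (get the current configuration, add the flag, send the whole configuration back) might look like the following in Python. The `v1.1-preview.1` path segment and the use of PUT are assumptions; the linked Service Configuration – Update reference is authoritative.

```python
import requests

# Placeholder values: substitute your Personalizer resource endpoint and key.
endpoint = "https://<your-personalizer-resource>.cognitiveservices.azure.com"
personalizer_key = "<your-personalizer-key>"
headers = {"Ocp-Apim-Subscription-Key": personalizer_key}

# Path and verb are assumptions; confirm them in the Service Configuration - Update reference.
config_url = f"{endpoint}/personalizer/v1.1-preview.1/configurations/service"

# Get the current service configuration, add the flag, and send the full body back.
config = requests.get(config_url, headers=headers).json()
config["IsInferenceExplainabilityEnabled"] = True  # property name as given in this article
response = requests.put(config_url, headers=headers, json=config)
print(response.status_code)
```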
Enabling inference explainability will add a collection to the JSON response fro
}, { "id": "SportsArticle",
- "probability": 0.15
+ "probability": 0.10
}, { "id": "NewsArticle",
- "probability": 0.05
+ "probability": 0.10
} ], "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
For the best actions returned by Personalizer, the feature scores can provide ge
* Scores close to zero have a small effect on the decision to choose this action. ### Important considerations for Inference Explainability
-* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements. Future versions of Inference Explainability will mitigate this issue.
+* **Increased latency.** Currently, enabling _Inference Explainability_ may significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements.
* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
-* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm. Future releases will enable the use of this capability with additional exploration algorithms.
+* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm.
## Next steps
cognitive-services Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-features.md
For a comprehensive list of Azure service security recommendations see the [Cogn
|Feature | Description | |:|:| | [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Cognitive Services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call need to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). |
-| [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use manged roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](/azure/cognitive-services/authentication). |
+| [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](/azure/cognitive-services/authentication). |
| [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). | | [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.| | [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. |
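
For instance, a minimal sketch of reading credentials from environment variables in Python (the variable names here are arbitrary examples):

```python
import os

# Read a key and endpoint that were stored as environment variables instead of
# being hardcoded in source; the variable names used here are arbitrary.
key = os.environ["COGNITIVE_SERVICE_KEY"]
endpoint = os.environ["COGNITIVE_SERVICE_ENDPOINT"]
```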
For a comprehensive list of Azure service security recommendations see the [Cogn
## Next steps
-* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started.
+* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started.
communication-services Meeting Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md
The following list of capabilities is allowed when Teams user participates in Te
| | Interact with a Q&A | ❌ | | | Interact with a OneNote | ❌ | | | Manage SpeakerCoach | ❌ |
-| | [Include participant in Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌
+| | [Include participant in Teams meeting attendance report](https://support.microsoft.com/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌
| Accessibility | Receive closed captions | ❌ | | | Communication access real-time translation (CART) | ❌ | | | Language interpretation | ❌ |
Teams meeting organizers can configure the Teams meeting options to adjust the e
| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams user can bypass the lobby, if Teams meeting organizer set value to include "people in my organization" for single tenant meetings and "people in trusted organizations" for cross-tenant meetings. Otherwise, Teams users have to wait in the lobby until an authenticated user admits them.| ✔️ | | [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable | | Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
-| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ |
+| [Choose co-organizers](https://support.microsoft.com/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ |
| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ |
-|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
-|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local audio |✔️|
-|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️|
+|[Manage what attendees see](https://support.microsoft.com/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
+|[Allow mic for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local audio |✔️|
+|[Allow camera for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️|
|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️| |Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️| |[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services don't support reactions. |❌| |[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable|
-|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
+|[Provide CART Captions](https://support.microsoft.com/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
## Next steps
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
For example, if your storage account requires *access key* authorization, you ha
| Property | Required | Value | Description | |-|-|-|-| | **Connection name** | Yes | <*connection-name*> | The name to use for your connection. |
-| **Authentication type** | Yes | - **Access Key** <br><br>- **Azure AD Integrated** <br><br>- **Logic Apps Managed Identity (Preview)** | The authentication type to use for your connection. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-for-triggers-and-actions-that-support-authentication). |
+| **Authentication type** | Yes | - **Access Key** <br><br>- **Azure AD Integrated** <br><br>- **Logic Apps Managed Identity** | The authentication type to use for your connection. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
| **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br><br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. | | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br><br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **Show keys**. Copy and save one of the key values. | |||||
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
The Service Bus connector has different versions, based on [logic app workflow t
* The logic app workflow where you connect to your Service Bus namespace and messaging entity. To start your workflow with a Service Bus trigger, you have to start with a blank workflow. To use a Service Bus action in your workflow, start your workflow with any trigger.
-* If your logic app resource uses a managed identity to authenticate access to your Service Bus namespace and messaging entity, make sure that you've assigned role permissions at the corresponding levels. For example, to access a queue, the managed identity requires a role that has the necessary permissions for that queue.
+* If your logic app resource uses a managed identity for authenticating access to your Service Bus namespace and messaging entity, make sure that you've assigned role permissions at the corresponding levels. For example, to access a queue, the managed identity requires a role that has the necessary permissions for that queue.
- Each managed identity that accesses a *different* messaging entity should have a separate connection to that entity. If you use different Service Bus actions to send and receive messages, and those actions require different permissions, make sure to use different connections.
+ If you're using the Service Bus *managed* connector, each managed identity that accesses a *different* messaging entity should have a separate API connection to that entity. If you use different Service Bus actions to send and receive messages, and those actions require different permissions, make sure to use different API connections.
For more information about managed identities, review [Authenticate access to Azure resources with managed identities in Azure Logic Apps](../logic-apps/create-managed-service-identity.md).
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
ms.suite: integration Previously updated : 08/19/2022 Last updated : 09/23/2022 tags: connectors
In the connection information box, complete the following steps:
| Authentication | Description | |-|-|
+ | **Connection String** | - Supported only in Standard workflows with the SQL Server built-in connector. <br><br>- Requires the connection string to your SQL server and database. |
+ | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector and ISE-versioned connector. In Standard workflows, this authentication type is available for the SQL Server built-in connector, but the option is named **Managed identity** instead. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). |
+ | **Active Directory OAuth** | - Supported only in Standard workflows with the SQL Server built-in connector. For more information, see the following documentation: <br><br>- [Enable Azure Active Directory Open Authentication (Azure AD OAuth)](../logic-apps/logic-apps-securing-a-logic-app.md#enable-oauth) <br>- [Azure Active Directory Open Authentication](../logic-apps/logic-apps-securing-a-logic-app.md#azure-active-directory-oauth-authentication) |
| **Service principal (Azure AD application)** | - Supported with the SQL Server managed connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). |
- | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). |
| [**Azure AD Integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires a valid managed identity in Azure Active Directory (Azure AD) that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](/azure/azure-sql/database/authentication-aad-overview) |
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector, SQL Server built-in connector, and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
The following examples show how the connection information box might appear if you select **Azure AD Integrated** authentication.
container-registry Tutorial Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-customer-managed-keys.md
Title: Customer-managed keys - overview
-description: Learn about the customer-managed keys, an overview on its key features and considerations before you encrypt your Premium registry with a customer-managed key stored in Azure Key Vault.
+ Title: Overview of customer-managed keys
+description: Learn how to encrypt your Premium container registry by using a customer-managed key stored in Azure Key Vault.
Last updated 08/5/2022
-# Tutorial: An overview of a customer-managed key encryption for your Azure Container Registry
-
-Azure Container Registry, automatically encrypts the images and other artifacts you store. By default, Azure automatically encrypts the registry content at rest with [service-managed keys](../security/fundamentals/encryption-models.md). You can supplement default encryption with an additional encryption layer using a customer-managed key.
+# Overview of customer-managed keys
+Azure Container Registry automatically encrypts images and other artifacts that you store. By default, Azure automatically encrypts the registry content at rest by using [service-managed keys](../security/fundamentals/encryption-models.md). By using a customer-managed key, you can supplement default encryption with an additional encryption layer.
-In this tutorial, part one in a four-part series:
+This article is part one in a four-part tutorial series. The tutorial covers:
> [!div class="checklist"]
-> * customer-managed key - Overview
-> * Enable a customer-managed key - CLI, Portal, and Resource Manager Template
+> * Overview of customer-managed keys
+> * Enable a customer-managed key
> * Rotate and revoke a customer-managed key > * Troubleshoot a customer-managed key
-## About customer-managed key
+## About customer-managed keys
-A customer-managed key gives you the ownership to bring your own key in the [Azure Key Vault](../key-vault/general/overview.md). The customer-managed key also allows you to manage key rotations, controls the access and permissions to use the key, and audit the usage of the key.
+A customer-managed key lets you bring your own key in [Azure Key Vault](../key-vault/general/overview.md). When you enable a customer-managed key, you can manage its rotations, control the access and permissions to use it, and audit its use.
-The key features include:
+Key features include:
->* **Regulatory compliance standards**: By default, Azure automatically encrypts the registry content at rest with [service-managed keys,](../security/fundamentals/encryption-models.md) but customer-managed keys encryption meets the guidelines of standard regulatory compliance.
+* **Regulatory compliance**: Azure automatically encrypts registry content at rest with [service-managed keys](../security/fundamentals/encryption-models.md), but customer-managed key encryption helps you meet guidelines for regulatory compliance.
->* **Integration with Azure key vault**: Customer-managed keys support server-side encryption through integration with [Azure Key Vault.](../key-vault/general/overview.md). With customer-managed keys, you can create your own encryption keys and store them in an Azure Key Vault, or you can use Azure Key Vault APIs to generate keys.
+* **Integration with Azure Key Vault**: Customer-managed keys support server-side encryption through integration with [Azure Key Vault](../key-vault/general/overview.md). With customer-managed keys, you can create your own encryption keys and store them in a key vault. Or you can use Azure Key Vault APIs to generate keys.
->* **Key life cycle management**: Integrating customer-managed keys with [Azure Key Vault](../key-vault/general/overview.md), will give you full control and responsibility for the key lifecycle, including rotation and management.
+* **Key lifecycle management**: Integrating customer-managed keys with [Azure Key Vault](../key-vault/general/overview.md) gives you full control and responsibility for the key lifecycle, including rotation and management.
## Before you enable a customer-managed key
-Configure Azure Container Registry (ACR) with a customer-managed key consider knowing:
+Before you configure Azure Container Registry with a customer-managed key, consider the following information:
->* This feature is available in the **Premium** container registry service tier. For more information, see [ACR service tiers.](container-registry-skus.md)
->* You can currently enable a customer-managed key only while creating a registry.
->* You can't disable the encryption after enabling a customer-managed key on a registry.
->* You have to configure a *user-assigned* managed identity to access the key vault. Later, if required you can enable the registry's *system-assigned* managed identity for key vault access.
->* Azure Container Registry supports only RSA or RSA-HSM keys. Elliptic curve keys aren't currently supported.
->* In a registry encrypted with a customer-managed key, you can retain logs for [ACR Tasks](container-registry-tasks-overview.md) only for 24 hours. To retain logs for a longer period, see guidance to [export and store task run logs.](container-registry-tasks-logs.md#alternative-log-storage)
->* [Content trust](container-registry-content-trust.md) is currently not supported in a registry encrypted with a customer-managed key.
+* This feature is available in the Premium service tier for a container registry. For more information, see [Azure Container Registry service tiers](container-registry-skus.md).
+* You can currently enable a customer-managed key only while creating a registry.
+* You can't disable the encryption after you enable a customer-managed key on a registry.
+* You have to configure a *user-assigned* managed identity to access the key vault. Later, if required, you can enable the registry's *system-assigned* managed identity for key vault access.
+* Azure Container Registry supports only RSA or RSA-HSM keys. Elliptic-curve keys aren't currently supported.
+* In a registry that's encrypted with a customer-managed key, you can retain logs for [Azure Container Registry tasks](container-registry-tasks-overview.md) for only 24 hours. To retain logs for a longer period, see [View and manage task run logs](container-registry-tasks-logs.md#alternative-log-storage).
+* [Content trust](container-registry-content-trust.md) is currently not supported in a registry that's encrypted with a customer-managed key.
## Update the customer-managed key version
-Azure Container Registry supports both automatic and manual rotation of registry encryption keys when a new key version is available in Azure Key Vault.
+Azure Container Registry supports both automatic and manual rotation of registry encryption keys when a new key version is available in Azure Key Vault.
>[!IMPORTANT]
->It is an important security consideration for a registry with customer-managed key encryption to frequently update (rotate) the key versions. Follow your organization's compliance policies to regularly update [key versions,](../key-vault/general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning) while storing a customer-managed key in Azure Key Vault.
+>It's an important security consideration for a registry with customer-managed key encryption to frequently update (rotate) the key versions. Follow your organization's compliance policies to regularly update [key versions](../key-vault/general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning) while storing a customer-managed key in Azure Key Vault.
-* **Automatically update the key version** - With a registry encrypted with a non-versioned key, Azure Container Registry regularly checks the Azure key vault for a new key version and updates the customer-managed key within 1 hour. So, we suggest omitting the key version when you enable registry encryption with a customer-managed key. So, that ACR automatically uses and updates to the latest key version.
+* **Automatically update the key version**: When a registry is encrypted with a non-versioned key, Azure Container Registry regularly checks the key vault for a new key version and updates the customer-managed key within one hour. We suggest that you omit the key version when you enable registry encryption with a customer-managed key. Azure Container Registry will then automatically use and update the latest key version.
-* **Manually update the key version** - With a registry encrypted with a specific key version, Azure Container Registry uses that version for encryption until you manually rotate the customer-managed key. So, we suggest specifying the key version when you enable registry encryption with a customer-managed key. So, that ACR will use a specific version of a key for registry encryption.
+* **Manually update the key version**: When a registry is encrypted with a specific key version, Azure Container Registry uses that version for encryption until you manually rotate the customer-managed key. We suggest that you specify the key version when you enable registry encryption with a customer-managed key. Azure Container Registry will then use a specific version of a key for registry encryption.
-For details, see [Choose key ID with version](tutorial-enable-customer-managed-keys.md#option-1-manual-key-rotationkey-id-with-version) , or [Choose key ID without key version](tutorial-enable-customer-managed-keys.md#option-2-automatic-key-rotationkey-id-omitting-version), and [Update key version](tutorial-rotate-revoke-customer-managed-keys.md#create-or-update-key-versioncli) later in this tutorial.
+For details, see [Key rotation](tutorial-enable-customer-managed-keys.md#key-rotation) and [Update key version](tutorial-rotate-revoke-customer-managed-keys.md#create-or-update-the-key-version-by-using-the-azure-cli).
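To check which key a registry is currently configured to use, a minimal sketch (the registry name is a placeholder) with the `az acr encryption show` command:

```azurecli
# Show the registry's encryption settings; the output includes the key vault key identifier in use
az acr encryption show --name <container-registry-name>
```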
## Next steps
-In this tutorial, you have an overview on a customer-managed keys, their key features, and a brief of the considerations to enable a customer-managed key to your registry and types of updating key versions.
-
-Advance to the next [tutorial](tutorial-enable-customer-managed-keys.md) to enable your container registry with a customer-managed keys using Azure CLI, Azure portal, and Azure Resource Manager template.
+* To enable your container registry with a customer-managed key by using the Azure CLI, the Azure portal, or an Azure Resource Manager template, advance to the next article: [Enable a customer-managed key](tutorial-enable-customer-managed-keys.md).
* Learn more about [encryption at rest in Azure](../security/fundamentals/encryption-atrest.md).
* Learn more about access policies and how to [secure access to a key vault](../key-vault/general/security-features.md).
container-registry Tutorial Enable Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-customer-managed-keys.md
Title: Enable a customer-managed key on Azure Container Registry
-description: In this tutorial, learn to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault using Azure CLI.
+ Title: Enable a customer-managed key
+description: In this tutorial, learn how to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault.
Last updated 08/5/2022
-# Tutorial: Encrypt Azure Container Registry with a customer-managed key
+# Enable a customer-managed key
-This article is part two in a four-part tutorial series. In [part one](tutorial-customer-managed-keys.md), you have an overview about a customer-managed key, key features, and the considerations before you enable a customer-managed key on your registry. This article walks you through the steps using the Azure CLI, Azure portal, or a Resource Manager template.
-
-In this article
-
->* Enable a customer-managed key - Azure CLI
->* Enable a customer-managed key - Azure portal
->* Enable a customer-managed key - Azure Resource Manager template
+This article is part two in a four-part tutorial series. [Part one](tutorial-customer-managed-keys.md) provides an overview of customer-managed keys, their features, and considerations before you enable one on your registry. This article walks you through the steps of enabling a customer-managed key by using the Azure CLI, the Azure portal, or an Azure Resource Manager template.
## Prerequisites
->* See [Install Azure CLI][azure-cli] or run in [Azure Cloud Shell.](../cloud-shell/quickstart.md).
->* Sign into [Azure Portal](https://ms.portal.azure.com/)
+* [Install the Azure CLI][azure-cli] or prepare to use [Azure Cloud Shell](../cloud-shell/quickstart.md).
+* Sign in to the [Azure portal](https://ms.portal.azure.com/).
-## Enable a customer-managed key - Azure CLI
+## Enable a customer-managed key by using the Azure CLI
### Create a resource group
-Create a resource group for creating the key vault, container registry, and other required resources.
-
-1. Run the [az group create][az-group-create](/cli/azure/group#az-group-create) command to create a resource group.
+Run the [az group create][az-group-create] command to create a resource group that will hold your key vault, container registry, and other required resources:
```azurecli az group create --name <resource-group-name> --location <location>
az group create --name <resource-group-name> --location <location>
### Create a user-assigned managed identity
-By configuring the *user-assigned managed identity* to the registry, you can access the Azure Key Vault.
+Configure a user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the registry so that the registry can access the key vault:
-1. Run the [az identity create][az-identity-create](/cli/azure/identity#az-identity-create) command to create a user-assigned [managed identity for Azure resources.](../active-directory/managed-identities-azure-resources/overview.md).
+1. Run the [az identity create][az-identity-create] command to create the managed identity:
-```azurecli
-az identity create \
- --resource-group <resource-group-name> \
- --name <managed-identity-name>
-```
+ ```azurecli
+ az identity create \
+ --resource-group <resource-group-name> \
+ --name <managed-identity-name>
+ ```
-2. In the command output, take a note of the following values: `id` and `principalId` to configure registry access with the Azure Key Vault.
+2. In the command output, take note of the `id` and `principalId` values to configure registry access with the key vault:
-```JSON
-{
- "clientId": "xxxx2bac-xxxx-xxxx-xxxx-192cxxxx6273",
- "clientSecretUrl": "https://control-eastus.identity.azure.net/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentityname/credentials?tid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&oid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&aid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myresourcegroup",
- "location": "eastus",
- "name": "myidentityname",
- "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "resourceGroup": "myresourcegroup",
- "tags": {},
- "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
+ ```JSON
+ {
+ "clientId": "xxxx2bac-xxxx-xxxx-xxxx-192cxxxx6273",
+ "clientSecretUrl": "https://control-eastus.identity.azure.net/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentityname/credentials?tid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&oid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&aid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myresourcegroup",
+ "location": "eastus",
+ "name": "myidentityname",
+ "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "resourceGroup": "myresourcegroup",
+ "tags": {},
+ "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ }
+ ```
-3. For convenience, store the values of `id` and `principalId` in environment variables:
+3. For convenience, store the `id` and `principalId` values in environment variables:
-```azurecli
-identityID=$(az identity show --resource-group <resource-group-name> --name <managed-identity-name> --query 'id' --output tsv)
+ ```azurecli
+ identityID=$(az identity show --resource-group <resource-group-name> --name <managed-identity-name> --query 'id' --output tsv)
-identityPrincipalID=$(az identity show --resource-group <resource-group-name> --name <managed-identity-name> --query 'principalId' --output tsv)
-```
+ identityPrincipalID=$(az identity show --resource-group <resource-group-name> --name <managed-identity-name> --query 'principalId' --output tsv)
+ ```
### Create a key vault
-1. Run the [az keyvault create][az-keyvault-create](/cli/azure/keyvault#az-keyvault-create) to create a key vault and store a customer-managed key for registry encryption.
+1. Run the [az keyvault create][az-keyvault-create] command to create a key vault where you can store a customer-managed key for registry encryption.
-2. By default, new key vault automatically enables the **soft delete** setting. To prevent data loss by accidental key or key vault deletions, we recommend enabling the **purge protection** setting.
+2. By default, the new key vault automatically enables the *soft delete* setting. To prevent data loss from accidental deletion of keys or key vaults, we recommend enabling the *purge protection* setting:
-```azurecli
-az keyvault create --name <key-vault-name> \
- --resource-group <resource-group-name> \
- --enable-purge-protection
-```
+ ```azurecli
+ az keyvault create --name <key-vault-name> \
+ --resource-group <resource-group-name> \
+ --enable-purge-protection
+ ```
-3. For convenience, take a note of the key vault resource ID and store the value in environment variables:
+3. For convenience, take note of the key vault's resource ID and store the value in an environment variable:
-```azurecli
-keyvaultID=$(az keyvault show --resource-group <resource-group-name> --name <key-vault-name> --query 'id' --output tsv)
-```
-
-#### Enable key vault access by trusted services
-
-If the key vault is in protection with a firewall or virtual network (private endpoint), you must enable the network settings to allow access by [trusted Azure services.](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services)
+ ```azurecli
+ keyvaultID=$(az keyvault show --resource-group <resource-group-name> --name <key-vault-name> --query 'id' --output tsv)
+ ```
-For more information, see [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-cli).
+#### Enable trusted services to access the key vault
-#### Enable key vault access by managed identity
+If the key vault is protected by a firewall or virtual network (private endpoint), you must configure the network settings to allow access by [trusted Azure services](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). For more information, see [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-cli).
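If you manage the key vault firewall from the CLI, here's a sketch (vault and resource group names are placeholders) that allows trusted Azure services through while denying other public traffic:

```azurecli
# Allow trusted Azure services to bypass the key vault firewall; deny all other public network traffic
az keyvault update \
  --name <key-vault-name> \
  --resource-group <resource-group-name> \
  --bypass AzureServices \
  --default-action Deny
```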
-There are two ways to enable key vault access.
+#### Enable managed identities to access the key vault
-#### Option 1: Enable key vault access policy
+There are two ways to enable managed identities to access your key vault.
-Configure the access policy for the key vault and set key permissions to access with a *user-assigned* managed identity:
+The first option is to configure the access policy for the key vault and set key permissions for access with a user-assigned managed identity:
-1. Run the [az keyvault set policy][az-keyvault-set-policy](/cli/azure/keyvault#az-keyvault-set-policy) command, and pass the previously created and stored environment variable value of the `principal ID`.
+1. Run the [az keyvault set-policy][az-keyvault-set-policy] command. Pass the `principalId` value that you stored earlier in the `$identityPrincipalID` environment variable.
-2. Set key permissions to **get**, **unwrapKey**, and **wrapKey**.
-
-```azurecli
-az keyvault set-policy \
- --resource-group <resource-group-name> \
- --name <key-vault-name> \
- --object-id $identityPrincipalID \
- --key-permissions get unwrapKey wrapKey
-
-```
+2. Set key permissions to `get`, `unwrapKey`, and `wrapKey`:
-#### Option 2: Assign RBAC role
+ ```azurecli
+ az keyvault set-policy \
+ --resource-group <resource-group-name> \
+ --name <key-vault-name> \
+ --object-id $identityPrincipalID \
+ --key-permissions get unwrapKey wrapKey
-Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) to assign permissions to the *user-assigned* managed identity and access the key vault.
+ ```
-1. Run the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command and assign the `Key Vault Crypto Service Encryption role` to a *user-assigned* managed identity.
+The second option is to use [Azure role-based access control (RBAC)](../key-vault/general/rbac-guide.md) to assign permissions to the user-assigned managed identity and access the key vault. Run the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command and assign the `Key Vault Crypto Service Encryption User` role to a user-assigned managed identity:
```azurecli az role assignment create --assignee $identityPrincipalID \
az role assignment create --assignee $identityPrincipalID \
--scope $keyvaultID ```
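Written out in one place, the role assignment looks like the following sketch. It uses the `$identityPrincipalID` and `$keyvaultID` values stored earlier and the role named in this option:

```azurecli
# Grant the user-assigned identity the Key Vault Crypto Service Encryption User role on the key vault
az role assignment create \
  --assignee $identityPrincipalID \
  --role "Key Vault Crypto Service Encryption User" \
  --scope $keyvaultID
```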
-#### Create key and get key ID
+### Create a key and get the key ID
-1. Run the [az keyvault key create][az-keyvault-key-create](/cli/azure/keyvault/key#az-keyvault-key-create) command to create a key in the key vault.
+1. Run the [az keyvault key create][az-keyvault-key-create] command to create a key in the key vault:
-```azurecli
-az keyvault key create \
- --name <key-name> \
- --vault-name <key-vault-name>
-```
+ ```azurecli
+ az keyvault key create \
+ --name <key-name> \
+ --vault-name <key-vault-name>
+ ```
-2. In the command output, take note of the key's ID `kid`.
-
-```output
-[...]
- "key": {
- "crv": null,
- "d": null,
- "dp": null,
- "dq": null,
- "e": "AQAB",
- "k": null,
- "keyOps": [
- "encrypt",
- "decrypt",
- "sign",
- "verify",
- "wrapKey",
- "unwrapKey"
- ],
- "kid": "https://mykeyvault.vault.azure.net/keys/mykey/<version>",
- "kty": "RSA",
-[...]
-```
+2. In the command output, take note of the key ID (`kid`):
+
+ ```output
+ [...]
+ "key": {
+ "crv": null,
+ "d": null,
+ "dp": null,
+ "dq": null,
+ "e": "AQAB",
+ "k": null,
+ "keyOps": [
+ "encrypt",
+ "decrypt",
+ "sign",
+ "verify",
+ "wrapKey",
+ "unwrapKey"
+ ],
+ "kid": "https://mykeyvault.vault.azure.net/keys/mykey/<version>",
+ "kty": "RSA",
+ [...]
+ ```
-3. For convenience, store the format you choose for the key ID in the $keyID environment variable.
-4. You can use a key ID with a version or a key without a version.
+3. For convenience, store the format that you choose for the key ID in the `$keyID` environment variable. You can use a key ID with or without a version.
-#### Option 1: Manual key rotation - key ID with version
+#### Key rotation
-Encrypting a registry with a customer-managed key with a key version will only allow manual key rotation in Azure Container Registry.
+You can choose manual or automatic key rotation.
-1. This example stores the key's `kid` property:
+Encrypting a registry with a customer-managed key that has a key version will allow only manual key rotation in Azure Container Registry. This example stores the key's `kid` property:
```azurecli keyID=$(az keyvault key show \
keyID=$(az keyvault key show \
--query 'key.kid' --output tsv) ```
-#### Option 2: Automatic key rotation - key ID omitting version
-
-Encrypting a registry with a customer-managed key by omitting a key version will enable automatic key rotation to detect a new key version in Azure Key Vault.
-
-1. This example removes the version from the key's `kid` property:
+Encrypting a registry with a customer-managed key by omitting a key version will enable automatic key rotation to detect a new key version in Azure Key Vault. This example removes the version from the key's `kid` property:
```azurecli keyID=$(az keyvault key show \
keyID=$(echo $keyID | sed -e "s/\/[^/]*$//")
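Put together, here's a sketch of both steps (key and vault names are placeholders): retrieve the versioned key ID, then strip the trailing version segment so the registry detects new key versions automatically:

```azurecli
# Get the versioned key ID (kid) from the key vault
keyID=$(az keyvault key show \
  --name <key-name> \
  --vault-name <key-vault-name> \
  --query 'key.kid' --output tsv)

# Drop the version segment, leaving an unversioned key ID such as
# https://mykeyvault.vault.azure.net/keys/mykey
keyID=$(echo $keyID | sed -e "s/\/[^/]*$//")
```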
### Create a registry with a customer-managed key
-1. Run the [az acr create][az-acr-create](/cli/azure/acr#az-acr-create) command to create a registry in the *Premium* service tier and enable the customer-managed key.
+1. Run the [az acr create][az-acr-create] command to create a registry in the *Premium* service tier and enable the customer-managed key.
-2. Pass the managed identity ID `id`and the key ID `kid` values stored in the environment variables in previous steps.
+2. Pass the managed identity ID (`id`) and key ID (`kid`) values stored in the environment variables in previous steps:
-```azurecli
-az acr create \
- --resource-group <resource-group-name> \
- --name <container-registry-name> \
- --identity $identityID \
- --key-encryption-key $keyID \
- --sku Premium
-```
+ ```azurecli
+ az acr create \
+ --resource-group <resource-group-name> \
+ --name <container-registry-name> \
+ --identity $identityID \
+ --key-encryption-key $keyID \
+ --sku Premium
+ ```
### Show encryption status
-1. Run the [az acr encryption show][az-acr-encryption-show](/cli/azure/acr/encryption#az-acr-encryption-show) command, to show the status of the registry encryption with a customer-managed key is enabled.
+Run the [az acr encryption show][az-acr-encryption-show] command to show the status of the registry encryption with a customer-managed key:
```azurecli az acr encryption show --name <container-registry-name> ```
-2. Depending on the key used to, encrypt the registry and the output is similar to:
+Depending on the key that's used to encrypt the registry, the output is similar to:
```console {
az acr encryption show --name <container-registry-name>
} ```
-## Enable a customer-managed key - Azure Portal
+## Enable a customer-managed key by using the Azure portal
### Create a user-assigned managed identity
-Create a *user-assigned* [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for Azure resources in the Azure portal.
+To create a user-assigned [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) in the Azure portal:
-1. Follow the steps to [create a user-assigned identity.](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).
+1. Follow the steps to [create a user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).
-2. Save the `identity's name` to use it in later steps.
+2. Save the identity's name to use it in later steps.
### Create a key vault
-1. Follow the steps in the [Quickstart: Create a key vault using the Azure portal.](../key-vault/general/quick-create-portal.md).
+1. Follow the steps in [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
-2. When creating a key vault for a customer-managed key, in the **Basics** tab, enable the **Purge protection** setting. This setting helps prevent data loss by accidental key or key vault deletions.
+2. When you're creating a key vault for a customer-managed key, on the **Basics** tab, enable the **Purge protection** setting. This setting helps prevent data loss from accidental deletion of keys or key vaults.
+ :::image type="content" source="media/container-registry-customer-managed-keys/create-key-vault.png" alt-text="Screenshot of the options for creating a key vault in the Azure portal.":::
-#### Enable key vault access by trusted services
+#### Enable trusted services to access the key vault
-If the key vault is in protection with a firewall or virtual network (private endpoint), enable the network setting to allow access by [trusted Azure services.](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services)
+If the key vault is protected by a firewall or virtual network (private endpoint), configure the network settings to allow access by [trusted Azure services](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). For more information, see [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-portal).
-For more information, see [Configure Azure Key Vault networking settings.](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-portal)
+#### Enable managed identities to access the key vault
-#### Enable key vault access by managed identity
+There are two ways to enable managed identities to access your key vault.
-There are two ways to enable key vault access by managed identity.
+The first option is to configure the access policy for the key vault and set key permissions for access with a user-assigned managed identity:
-#### Option 1: Enable key vault access policy
-
-Configure the access policy for the key vault and set key permissions to access with a *user-assigned* managed identity:
-
-1. Navigate to your key vault.
+1. Go to your key vault.
2. Select **Settings** > **Access policies > +Add Access Policy**.
-3. Select **Key permissions**, and select **Get**, **Unwrap Key**, and **Wrap Key**.
+3. Select **Key permissions**, and then select **Get**, **Unwrap Key**, and **Wrap Key**.
4. In **Select principal**, select the resource name of your user-assigned managed identity.
-5. Select **Add**, then select **Save**.
-
+5. Select **Add**, and then select **Save**.
-#### Option 2: Assign RBAC role
-Alternatively, assign the `Key Vault Crypto Service Encryption User` role to the *user-assigned* managed identity at the key vault scope.
+The other option is to assign the `Key Vault Crypto Service Encryption User` RBAC role to the user-assigned managed identity at the key vault scope. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+### Create a key
-### Create key
+Create a key in the key vault and use it to encrypt the registry. Follow these steps if you want to select a specific key version as a customer-managed key. You might also need to create a key before creating the registry if key vault access is restricted to a private endpoint or selected networks.
-Create a key in the key vault and use it to encrypt the registry. Follow these steps if you want to select a specific key version as a customer-managed key. You may also need to create a key before creating the registry if key vault access is restricted to a private endpoint or selected networks.
-
-1. Navigate to your key vault.
+1. Go to your key vault.
1. Select **Settings** > **Keys**. 1. Select **+Generate/Import** and enter a unique name for the key.
-1. Accept the remaining default values and select **Create**.
+1. Accept the remaining default values, and then select **Create**.
1. After creation, select the key and then select the current version. Copy the **Key identifier** for the key version.
-### Create Azure Container Registry
+### Create a container registry
1. Select **Create a resource** > **Containers** > **Container Registry**.
-1. In the **Basics** tab, select or create a resource group, and enter a registry name. In **SKU**, select **Premium**.
-1. In the **Encryption** tab, in **Customer-managed key**, select **Enabled**.
-1. In **Identity**, select the managed identity you created.
-1. In **Encryption**, choose either of the following:
- * Select **Select from Key Vault**, and select an existing key vault and key, or **Create new**. The key you select is non-versioned and enables automatic key rotation.
- * Select **Enter key URI**, and provide the identifier of an existing key. You can provide either a versioned key URI (for a key that must be rotated manually) or a non-versioned key URI (which enables automatic key rotation). See the previous section for steps to create a key.
-1. In the **Encryption** tab, select **Review + create**.
+1. On the **Basics** tab, select or create a resource group, and then enter a registry name. In **SKU**, select **Premium**.
+1. On the **Encryption** tab, for **Customer-managed key**, select **Enabled**.
+1. For **Identity**, select the managed identity that you created.
+1. For **Encryption**, choose one of the following options:
+ * Choose **Select from Key Vault**, and then either select an existing key vault and key or select **Create new**. The key that you select is unversioned and enables automatic key rotation.
+ * Select **Enter key URI**, and provide the identifier of an existing key. You can provide either a versioned key URI (for a key that must be rotated manually) or an unversioned key URI (which enables automatic key rotation). See the previous section for steps to create a key.
+1. Select **Review + create**.
1. Select **Create** to deploy the registry instance.
-### Show encryption status
-
-To see the encryption status of your registry in the portal, navigate to your registry. Under **Settings**, select **Encryption**.
-
-## Enable a customer-managed key - Azure Resource Manager template
-
-You can use a Resource Manager template to create a registry and enable encryption with a customer-managed key.
-
-The following Resource Manager template creates a new container registry and a *user-assigned* managed identity.
-
-1. Copy the following content of a Resource Manager template to a new file and save it using a filename `CMKtemplate.json`.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "vault_name": {
- "defaultValue": "",
- "type": "String"
- },
- "registry_name": {
- "defaultValue": "",
- "type": "String"
- },
- "identity_name": {
- "defaultValue": "",
- "type": "String"
- },
- "kek_id": {
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.ContainerRegistry/registries",
- "apiVersion": "2019-12-01-preview",
- "name": "[parameters('registry_name')]",
- "location": "[resourceGroup().location]",
- "sku": {
- "name": "Premium",
- "tier": "Premium"
- },
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]": {}
- }
- },
- "dependsOn": [
- "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]"
- ],
- "properties": {
- "adminUserEnabled": false,
- "encryption": {
- "status": "enabled",
- "keyVaultProperties": {
- "identity": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name')), '2018-11-30').clientId]",
- "KeyIdentifier": "[parameters('kek_id')]"
- }
- },
- "networkRuleSet": {
- "defaultAction": "Allow",
- "virtualNetworkRules": [],
- "ipRules": []
- },
- "policies": {
- "quarantinePolicy": {
- "status": "disabled"
- },
- "trustPolicy": {
- "type": "Notary",
- "status": "disabled"
- },
- "retentionPolicy": {
- "days": 7,
- "status": "disabled"
- }
- }
- }
- },
- {
- "type": "Microsoft.KeyVault/vaults/accessPolicies",
- "apiVersion": "2018-02-14",
- "name": "[concat(parameters('vault_name'), '/add')]",
- "dependsOn": [
- "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]"
- ],
- "properties": {
- "accessPolicies": [
- {
- "tenantId": "[subscription().tenantId]",
- "objectId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name')), '2018-11-30').principalId]",
- "permissions": {
- "keys": [
- "get",
- "unwrapKey",
- "wrapKey"
- ]
- }
- }
- ]
- }
- },
- {
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
- "apiVersion": "2018-11-30",
- "name": "[parameters('identity_name')]",
- "location": "[resourceGroup().location]"
- }
- ]
-}
-```
+
+### Show the encryption status
+
+To see the encryption status of your registry in the portal, go to your registry. Under **Settings**, select **Encryption**.
+
+## Enable a customer-managed key by using a Resource Manager template
+
+You can use a Resource Manager template to create a container registry and enable encryption with a customer-managed key:
+
+1. Copy the following content of a Resource Manager template to a new file and save it as *CMKtemplate.json*:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vault_name": {
+ "defaultValue": "",
+ "type": "String"
+ },
+ "registry_name": {
+ "defaultValue": "",
+ "type": "String"
+ },
+ "identity_name": {
+ "defaultValue": "",
+ "type": "String"
+ },
+ "kek_id": {
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.ContainerRegistry/registries",
+ "apiVersion": "2019-12-01-preview",
+ "name": "[parameters('registry_name')]",
+ "location": "[resourceGroup().location]",
+ "sku": {
+ "name": "Premium",
+ "tier": "Premium"
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]": {}
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]"
+ ],
+ "properties": {
+ "adminUserEnabled": false,
+ "encryption": {
+ "status": "enabled",
+ "keyVaultProperties": {
+ "identity": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name')), '2018-11-30').clientId]",
+ "KeyIdentifier": "[parameters('kek_id')]"
+ }
+ },
+ "networkRuleSet": {
+ "defaultAction": "Allow",
+ "virtualNetworkRules": [],
+ "ipRules": []
+ },
+ "policies": {
+ "quarantinePolicy": {
+ "status": "disabled"
+ },
+ "trustPolicy": {
+ "type": "Notary",
+ "status": "disabled"
+ },
+ "retentionPolicy": {
+ "days": 7,
+ "status": "disabled"
+ }
+ }
+ }
+ },
+ {
+ "type": "Microsoft.KeyVault/vaults/accessPolicies",
+ "apiVersion": "2018-02-14",
+ "name": "[concat(parameters('vault_name'), '/add')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name'))]"
+ ],
+ "properties": {
+ "accessPolicies": [
+ {
+ "tenantId": "[subscription().tenantId]",
+ "objectId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('identity_name')), '2018-11-30').principalId]",
+ "permissions": {
+ "keys": [
+ "get",
+ "unwrapKey",
+ "wrapKey"
+ ]
+ }
+ }
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
+ "apiVersion": "2018-11-30",
+ "name": "[parameters('identity_name')]",
+ "location": "[resourceGroup().location]"
+ }
+ ]
+ }
+ ```
2. Follow the steps in the previous sections to create the following resources:
-* Key vault, identified by name
-* Key vault key, identified by key ID
+ * Key vault, identified by name
+ * Key vault key, identified by key ID
-3. Run the [az deployment group create][az-deployment-group-create] command to create the registry using the preceding template file. When indicated, provide a new registry name and a *user-assigned* managed identity name, as well as the key vault name and key ID you created.
+3. Run the [az deployment group create][az-deployment-group-create] command to create the registry by using the preceding template file. When indicated, provide a new registry name and a user-assigned managed identity name, along with the key vault name and key ID that you created.
-```azurecli
-az deployment group create \
- --resource-group <resource-group-name> \
- --template-file CMKtemplate.json \
- --parameters \
- registry_name=<registry-name> \
- identity_name=<managed-identity> \
- vault_name=<key-vault-name> \
- key_id=<key-vault-key-id>
-```
+ ```azurecli
+ az deployment group create \
+ --resource-group <resource-group-name> \
+ --template-file CMKtemplate.json \
+ --parameters \
+ registry_name=<registry-name> \
+ identity_name=<managed-identity> \
+ vault_name=<key-vault-name> \
+ key_id=<key-vault-key-id>
+ ```
-4. Run the [az acr encryption show][az-acr-encryption-show] command, to show the status of registry encryption
+4. Run the [az acr encryption show][az-acr-encryption-show] command to show the status of registry encryption:
-```azurecli
-az acr encryption show --name <registry-name>
-```
+ ```azurecli
+ az acr encryption show --name <registry-name>
+ ```
## Next steps
-In this tutorial, you've learned to enable a customer-managed key on your Azure Container Registry using Azure CLI, portal, and Resource Manager template. This article also explains how to create resources for the encryption and verify the encryption status of your registry.
-
-Advance to the next [tutorial](tutorial-rotate-revoke-customer-managed-keys.md), to have a walk-through of performing the customer-managed key rotation, update key versions, and revoke a customer-managed key.
+Advance to the [next article](tutorial-rotate-revoke-customer-managed-keys.md) to walk through rotating customer-managed keys, updating key versions, and revoking a customer-managed key.
<!-- LINKS - external --> <!-- LINKS - internal -->
+[azure-cli]: /cli/azure/install-azure-cli
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-show]: /cli/azure/feature#az_feature_show [az-group-create]: /cli/azure/group#az_group_create
container-registry Tutorial Rotate Revoke Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-rotate-revoke-customer-managed-keys.md
Title: Rotate and revoke a customer-managed key
-description: Learn how to rotate, update, revoke a customer-managed key.
+description: Learn how to rotate, update, and revoke a customer-managed key on Azure Container Registry.
Last updated 08/5/2022
+# Rotate and revoke a customer-managed key
-# Rotate and Revoke a customer-managed key
-
-This article is part three in a four-part tutorial series. In [part one](tutorial-customer-managed-keys.md), you have an overview of the customer-managed key, their key features, and the considerations before you enable a customer-managed key on your registry. In [part two](tutorial-enable-customer-managed-keys.md), you've learned to enable a customer-managed key using the Azure CLI, Azure portal, or a Resource Manager template. In this article walks you to rotate a customer-managed key, update key version and revoke the key.
+This article is part three in a four-part tutorial series. [Part one](tutorial-customer-managed-keys.md) provides an overview of customer-managed keys, their features, and considerations before you enable one on your registry. In [part two](tutorial-enable-customer-managed-keys.md), you learn how to enable a customer-managed key by using the Azure CLI, the Azure portal, or an Azure Resource Manager template. This article walks you through rotating, updating, and revoking a customer-managed key.
## Rotate a customer-managed key
->* To rotate a key, you can either update the key version in Azure Key Vault or create a new key.
->* While rotating the key, you can specify the same identity you have used to create the registry.
->* Optionally, you can also configure a new user-assigned identity to access the key, or enable and specify the registry's system-assigned identity.
+To rotate a key, you can either update the key version in Azure Key Vault or create a new key. While rotating the key, you can specify the same identity that you used to create the registry.
+
+Optionally, you can:
+
+- Configure a new user-assigned identity to access the key.
+- Enable and specify the registry's system-assigned identity.
> [!NOTE]
-> * To enable the registry's system-assigned identity in the portal, select **Settings** > **Identity** and set the system-assigned identity's status to **On**.
-> * Ensure that the required [key vault access](tutorial-enable-customer-managed-keys.md#enable-key-vault-access-by-managed-identity) is set for the identity you configure for key access.
+> To enable the registry's system-assigned identity in the portal, select **Settings** > **Identity** and set the system-assigned identity's status to **On**.
+>
+> Ensure that the required [key vault access](tutorial-enable-customer-managed-keys.md#enable-managed-identities-to-access-the-key-vault) is set for the identity that you configure for key access.
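If you prefer the CLI over the portal for this step, here's a sketch (registry name is a placeholder) that enables the registry's system-assigned identity:

```azurecli
# Enable the system-assigned managed identity on the registry
az acr identity assign --name <registry-name> --identities [system]
```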
-### Create or update key version - CLI
+### Create or update the key version by using the Azure CLI
-1. To create a new key version, run the [az keyvault key create][az-keyvault-key-create](/cli/azure/keyvault/key#az-keyvault-key-create) command:
+To create a new key version, run the [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create) command:
```azurecli # Create new version of existing key az keyvault key create \
- ΓÇô-name <key-name> \
+ --name <key-name> \
--vault-name <key-vault-name> ```
-2. If you configure the registry to detect key version updates, the customer-managed key automatically updates within 1 hour.
+If you configure the registry to detect key version updates, the customer-managed key is automatically updated within one hour.
-3. If you configure the registry for manual updating for a new key version, run the [az-acr-encryption-rotate-key](/cli/azure/acr/#az-acr-encryption-rotate-key) command, passing the new key ID and the identity you want to configure.
+If you configure the registry for manual updating for a new key version, run the [az-acr-encryption-rotate-key](/cli/azure/acr/#az-acr-encryption-rotate-key) command. Pass the new key ID and the identity that you want to configure.
> [!TIP]
-> When you run `az-acr-encryption-rotate-key`, you can pass either a versioned key ID or a non-versioned key ID. If you use a non-versioned key ID, the registry is then configured to automatically detect later key version updates.
+> When you run `az-acr-encryption-rotate-key`, you can pass either a versioned key ID or an unversioned key ID. If you use an unversioned key ID, the registry is then configured to automatically detect later key version updates.
-Update a customer-managed key version manually:
+To update a customer-managed key version manually, you have two options:
- 1. Rotate key and use user-assigned identity
+- Rotate the key and use a user-assigned identity.
-If you're using the key from a different key vault, verify the `principal-id-user-assigned-identity` has the `get`, `wrap`, and `unwrap` permissions on that key vault.
+ If you're using the key from a different key vault, verify that `principal-id-user-assigned-identity` has the `get`, `wrap`, and `unwrap` permissions on that key vault.
-```azurecli
-az acr encryption rotate-key \
- --name <registry-name> \
- --key-encryption-key <new-key-id> \
- --identity <principal-id-user-assigned-identity>
-```
+ ```azurecli
+ az acr encryption rotate-key \
+ --name <registry-name> \
+ --key-encryption-key <new-key-id> \
+ --identity <principal-id-user-assigned-identity>
+ ```
- 2. Rotate key and use system-assigned identity
+- Rotate the key and use a system-assigned identity.
-Before you use the system-assigned identity, verify for the `get`, `wrap`, and `unwrap` permissions assigned to it.
+ Before you use the system-assigned identity, verify that the `get`, `wrap`, and `unwrap` permissions are assigned to it.
-```azurecli
-az acr encryption rotate-key \
- --name <registry-name> \
- --key-encryption-key <new-key-id> \
- --identity [system]
-```
+ ```azurecli
+ az acr encryption rotate-key \
+ --name <registry-name> \
+ --key-encryption-key <new-key-id> \
+ --identity [system]
+ ```
-### Create or update key version - Portal
+### Create or update the key version by using the Azure portal
-Use the registry's **Encryption** settings to update the key vault, key, or identity settings used for a customer-managed key.
+Use the registry's **Encryption** settings to update the key vault, key, or identity settings for a customer-managed key.
For example, to configure a new key:
-1. In the portal, navigate to your registry.
-1. Under **Settings**, select **Encryption** > **Change key**.
+1. In the portal, go to your registry.
+1. Under **Settings**, select **Encryption** > **Change key**.
- :::image type="content" source="media/container-registry-customer-managed-keys/rotate-key.png" alt-text="Rotate key in the Azure portal":::
-1. In **Encryption**, choose one of the following:
- * Select **Select from Key Vault**, and select an existing key vault and key, or **Create new**. The key you select is non-versioned and enables automatic key rotation.
- * Select **Enter key URI**, and provide a key identifier directly. You can provide either a versioned key URI (for a key that must be rotated manually) or a non-versioned key URI (which enables automatic key rotation).
-1. Complete the key selection and select **Save**.
+ :::image type="content" source="media/container-registry-customer-managed-keys/rotate-key.png" alt-text="Screenshot of encryption key options in the Azure portal.":::
+1. In **Encryption**, choose one of the following options:
+ * Choose **Select from Key Vault**, and then either select an existing key vault and key or select **Create new**. The key that you select is unversioned and enables automatic key rotation.
+ * Select **Enter key URI**, and provide a key identifier directly. You can provide either a versioned key URI (for a key that must be rotated manually) or an unversioned key URI (which enables automatic key rotation).
+1. Complete the key selection, and then select **Save**.
## Revoke a customer-managed key
->* You can revoke a customer-managed encryption key by changing the access policy, or changing the permissions on the key vault, or by deleting the key.
+You can revoke a customer-managed encryption key by changing the access policy, by changing the permissions on the key vault, or by deleting the key.
-1. Run the [az-keyvault-delete-policy](/cli/azure/keyvault#az-keyvault-delete-policy) command to change the access policy of the managed identity used by your registry:
+To change the access policy of the managed identity that your registry uses, run the [az-keyvault-delete-policy](/cli/azure/keyvault#az-keyvault-delete-policy) command:
```azurecli az keyvault delete-policy \
az keyvault delete-policy \
--key_id <key-vault-key-id> ```
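Spelled out in full, a sketch of the policy removal (vault and resource group names are placeholders; the object ID is the identity's principal ID stored earlier):

```azurecli
# Remove the registry identity's access policy from the key vault, which revokes key access
az keyvault delete-policy \
  --resource-group <resource-group-name> \
  --name <key-vault-name> \
  --object-id $identityPrincipalID
```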
-2. Run the [az-keyvault-key-delete](/cli/azure/keyvault/key#az-keyvault-key-delete) command to delete the individual versions of a key. This operation requires the keys/delete permission.
+To delete the individual versions of a key, run the [az-keyvault-key-delete](/cli/azure/keyvault/key#az-keyvault-key-delete) command. This operation requires the *keys/delete* permission.
```azurecli az keyvault key delete \
az keyvault key delete \
--object-id $identityPrincipalID \ ```
->* Revoking a customer-managed key will block access to all registry data.
->* If you enable access to the key or restore a deleted key, the registry will pick the key, and you can gain back control on access to the encrypted registry data.
+> [!NOTE]
+> Revoking a customer-managed key will block access to all registry data. If you enable access to the key or restore a deleted key, the registry will pick the key, and you can regain control of access to the encrypted registry data.
## Next steps
-In this tutorial, you've learned to perform key rotations, update key versions using CLI and Portal, and revoking a customer-managed key on your Azure Container Registry.
-
-Advance to the next tutorial to [troubleshoot](tutorial-troubleshoot-customer-managed-keys.md) most common issues like removing a managed identity, 403 errors, and restoring accidental key deletes.
+Advance to the [next article](tutorial-troubleshoot-customer-managed-keys.md) to troubleshoot common problems like errors when you're removing a managed identity, 403 errors, and accidental key deletions.
container-registry Tutorial Troubleshoot Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-customer-managed-keys.md
Title: Troubleshoot a customer-managed key
-description: Tutorial to troubleshoot the most common issues from a registry enabled with a customer-managed key.
+description: Learn how to troubleshoot the most common problems for a registry that's enabled with a customer-managed key.
Last updated 08/5/2022
# Troubleshoot a customer-managed key
-This article is part four in a four-part tutorial series. In [part one](tutorial-customer-managed-keys.md), you have an overview of the customer-managed keys, their key features, and the considerations before you enable a customer-managed key on your registry. In [part two](tutorial-enable-customer-managed-keys.md), you've learned to enable customer-managed keys using the Azure CLI, Azure portal, or a Resource Manager template. In [part three](tutorial-rotate-revoke-customer-managed-keys.md), you'll learn to rotate, update, revoke a customer-managed key. In this article, learn to troubleshoot any issues with customer-managed keys.
+This article is part four in a four-part tutorial series. [Part one](tutorial-customer-managed-keys.md) provides an overview of customer-managed keys, their features, and considerations before you enable one on your registry. In [part two](tutorial-enable-customer-managed-keys.md), you learn how to enable a customer-managed key by using the Azure CLI, the Azure portal, or an Azure Resource Manager template. In [part three](tutorial-rotate-revoke-customer-managed-keys.md), you learn how to rotate, update, and revoke a customer-managed key. This article helps you troubleshoot and resolve common problems with customer-managed keys.
-## Troubleshoot a customer-managed key
+## Error when you're removing a managed identity
-This article helps you to troubleshoot and resolve most common issues such as authentication issues, accidental deletions of keys, etc.
-## Removing managed identity
-
-If you try to remove a user-assigned or a system-assigned managed identity that you've used to configure encryption for your registry, you may see an error:
+If you try to remove a user-assigned or system-assigned managed identity that you used to configure encryption for your registry, you might see an error:
``` Azure resource '/subscriptions/xxxx/resourcegroups/myGroup/providers/Microsoft.ContainerRegistry/registries/myRegistry' does not have access to identity 'xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx' Try forcibly adding the identity to the registry <registry name>. For more information on bring your own key, please visit 'https://aka.ms/acr/cmk'. ```
-You'll also not be able to change (rotate) the encryption key. The resolution steps depend on the type of identity you've used for encryption.
+You also won't be able to change (rotate) the encryption key. The resolution steps depend on the type of identity that you used for encryption.
-### Removing a **user-assigned identity**:
+### Removing a user-assigned identity
-If this issue occurs while removing a user-assigned identity, follow the steps:
+If you get the error when you try to remove a user-assigned identity, follow these steps:
-1. Reassign the user-assigned identity using the [az acr identity assign](/cli/azure/acr/identity/#az-acr-identity-assign) command.
-2. Pass the user-assigned identity's resource ID, or use the identity's name when it is in the same resource group as the registry.
+1. Reassign the user-assigned identity by using the [az acr identity assign](/cli/azure/acr/identity/#az-acr-identity-assign) command.
+2. Pass the user-assigned identity's resource ID, or use the identity's name when it's in the same resource group as the registry.
-For example:
+ For example:
-```azurecli
-az acr identity assign -n myRegistry \
- --identities "/subscriptions/mysubscription/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity"
-```
+ ```azurecli
+ az acr identity assign -n myRegistry \
+ --identities "/subscriptions/mysubscription/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity"
+ ```
3. Change the key and assign a different identity. 4. Now, you can remove the original user-assigned identity.
-### Removing a **System-assigned identity**
+### Removing a system-assigned identity
-If issue occurs while you try to remove a system-assigned identity, please [create an Azure support ticket](https://azure.microsoft.com/support/create-ticket/) for assistance to restore the identity.
+If you get the error when you try to remove a system-assigned identity, [create an Azure support ticket](https://azure.microsoft.com/support/create-ticket/) for assistance in restoring the identity.
-## Enabling the key vault firewall
+## Error after you enable a key vault firewall
-If you enable a key vault firewall or virtual network after creating an encrypted registry, you might see HTTP 403 or other errors with image import or automated key rotation. To correct this problem, reconfigure the managed identity and key you used initially for encryption. See steps in [Rotate a customer-managed key.](tutorial-rotate-revoke-customer-managed-keys.md#rotate-a-customer-managed-key)
+If you enable a key vault firewall or virtual network after creating an encrypted registry, you might see HTTP 403 or other errors with image import or automated key rotation. To correct this problem, reconfigure the managed identity and key that you initially used for encryption. See the steps in [Rotate a customer-managed key](tutorial-rotate-revoke-customer-managed-keys.md#rotate-a-customer-managed-key).
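For example, here's a sketch (placeholder values) that reapplies the key and identity the registry was originally encrypted with, so the registry re-establishes access through the firewall:

```azurecli
# Reapply the existing customer-managed key and identity
az acr encryption rotate-key \
  --name <registry-name> \
  --key-encryption-key <current-key-id> \
  --identity <principal-id-user-assigned-identity>
```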
-If the problem persists, please contact Azure Support.
+If the problem persists, contact Azure Support.
-## Accidental deletion of key vault or key
+## Accidental deletion of a key vault or key
-Deletion of the key vault, or the key, used to encrypt a registry with a customer-managed key will make the registry's content inaccessible. If [soft delete](../key-vault/general/soft-delete-overview.md) is enabled in the key vault (the default option), you can recover a deleted vault, or key vault object and resume registry operations.
+Deletion of the key vault, or the key, that's used to encrypt a registry with a customer-managed key will make the registry's content inaccessible. If [soft delete](../key-vault/general/soft-delete-overview.md) is enabled in the key vault (the default option), you can recover a deleted vault or key vault object and resume registry operations.
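If soft delete is enabled, here's a sketch (key and vault names are placeholders) for recovering a deleted key or a deleted key vault:

```azurecli
# Recover a soft-deleted key in an existing key vault
az keyvault key recover --name <key-name> --vault-name <key-vault-name>

# Recover a soft-deleted key vault
az keyvault recover --name <key-vault-name>
```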
## Next steps
-For key vault deletion and recovery scenarios, see [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md).
+For key vault deletion and recovery scenarios, see [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md).
cosmos-db Kafka Connector Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-source.md
curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file>
## Insert document into Azure Cosmos DB
-1. Sign into the [Azure portal](https://portal.azure.com/learn.learn.microsoft.com) and navigate to your Azure Cosmos DB account.
+1. Sign into the [Azure portal](https://portal.azure.com/learn.docs.microsoft.com) and navigate to your Azure Cosmos DB account.
1. Open the **Data Explorer** tab and select **Databases**.
1. Open the "kafkaconnect" database and "kafka" container you created earlier.
1. To create a new JSON document, in the SQL API pane, expand the "kafka" container, select **Items**, and then select **New Item** in the toolbar.
cosmos-db Performance Tips Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-java-sdk-v4-sql.md
In Azure Cosmos DB Java SDK v4, Direct mode is the best choice to improve databa
:::image type="content" source="./media/performance-tips-async-java/rntbdtransportclient.png" alt-text="Illustration of the Direct mode architecture" border="false":::
-The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. The diagram above shows how Direct mode routes client requests to replicas in the Cosmos DB backend. The Direct mode architecture allocates up to 10 **Channels** on the client side per DB replica. A Channel is a TCP connection preceded by a request buffer, which is 30 requests deep. The Channels belonging to a replica are dynamically allocated as needed by the replica's **Service Endpoint**. When the user issues a request in Direct mode, the **TransportClient** routes the request to the proper service endpoint based on the partition key. The **Request Queue** buffers requests before the Service Endpoint.
+The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. The diagram above shows how Direct mode routes client requests to replicas in the Cosmos DB backend. The Direct mode architecture allocates up to 130 **Channels** on the client side per DB replica. A Channel is a TCP connection preceded by a request buffer, which is 30 requests deep. The Channels belonging to a replica are dynamically allocated as needed by the replica's **Service Endpoint**. When the user issues a request in Direct mode, the **TransportClient** routes the request to the proper service endpoint based on the partition key. The **Request Queue** buffers requests before the Service Endpoint.
* ***Configuration options for Direct mode***
cost-management-billing Transfer Subscriptions Subscribers Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md
Previously updated : 03/22/2022 Last updated : 09/23/2022
This article provides high-level steps used to transfer Azure subscriptions to a
Before you start a transfer request, you should download or export any cost and billing information that you want to keep. Billing and utilization information doesn't transfer with the subscription. For more information about exporting cost management data, see [Create and manage exported data](../costs/tutorial-export-acm-data.md). For more information about downloading your invoice and usage data, see [Download or view your Azure billing invoice and daily usage data](download-azure-invoice-daily-usage-date.md).
-If you have any existing reservations, they stop applying 90 days after you transfer a subscription. Be sure to [cancel any reservations and refund them](../reservations/exchange-and-refund-azure-reservations.md) before you transfer a subscription to avoid charges after the 90 day grace period.
- ## Transfer EA subscriptions to a CSP partner CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure subscriptions for their customers that have a Direct Enterprise Agreement (EA). Subscription transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
description: Learn how to troubleshoot issues with the Azure Synapse Analytics,
Previously updated : 09/02/2022 Last updated : 09/20/2022
This article provides suggestions to troubleshoot common problems with the Azure
- **Cause**: Currently, ingesting data using the COPY command into an Azure Storage account that uses the new DNS partitioning feature results in an error. The DNS partitioning feature enables customers to create up to 5000 storage accounts per subscription.
- **Resolution**: Provision a storage account in a subscription that does not use the new [Azure Storage DNS partition feature](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466) (currently in Public Preview).
+## Error code: SqlDeniedPublicAccess
+
+- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', Reason: Connection was denied since Deny Public Network Access is set to Yes. To connect to this server, 1. If you persist public network access disabled, please use Managed Vritual Network IR and create private endpoint. https://docs.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint; 2. Otherwise you can enable public network access, set "Public network access" option to "Selected networks" on Auzre SQL Networking setting.`
+
+- **Cause**: Azure SQL Database is set to deny public network access. To connect, you must either use a managed virtual network with a private endpoint, or enable public network access.
+
+- **Recommendation**:
+
+ 1. If you want to keep public network access disabled, use a managed virtual network integration runtime and create a private endpoint. For more information, see [Azure Data Factory managed virtual network](managed-virtual-network-private-endpoint.md).
+
+ 2. Otherwise, enable public network access by setting the **Public network access** option to **Selected networks** on the Azure SQL Database **Networking** settings page. A PowerShell sketch of this change follows this list.
+
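As a minimal sketch, assuming the Az.Sql module and placeholder resource group, server, and IP values (not from the article), the following shows how public network access could be re-enabled and then restricted with a firewall rule from Azure PowerShell:

```azurepowershell
# Sketch only: replace "myResourceGroup", "myserver", and the IP range with your own values.
Connect-AzAccount

# Allow public network access again (equivalent to moving away from "Disabled" in the portal).
Set-AzSqlServer -ResourceGroupName "myResourceGroup" -ServerName "myserver" -PublicNetworkAccess "Enabled"

# Then restrict access to specific networks, for example the IP range that hosts your integration runtime.
New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -FirewallRuleName "AllowIntegrationRuntime" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"
```
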
## Next steps For more troubleshooting help, try these resources:
data-factory Parameters Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameters-data-flow.md
Previously updated : 09/09/2021 Last updated : 08/18/2022 # Parameterizing mapping data flows
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Title: Troubleshoot pipeline orchestration and triggers in Azure Data Factory
description: Use different methods to troubleshoot pipeline trigger issues in Azure Data Factory. Previously updated : 02/21/2022 Last updated : 08/18/2022
data-factory Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md
Previously updated : 11/01/2021 Last updated : 08/18/2022 # Plan to manage costs for Azure Data Factory
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
Previously updated : 08/09/2022 Last updated : 08/18/2022 # Understanding Data Factory pricing through examples
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-azure-cli.md
Previously updated : 10/14/2021 Last updated : 08/18/2022
data-factory Quickstart Create Data Factory Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-bicep.md
Previously updated : 06/17/2022 Last updated : 08/19/2022 # Quickstart: Create an Azure Data Factory using Bicep
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-dot-net.md
ms.devlang: csharp Previously updated : 12/10/2021 Last updated : 08/18/2022
data-factory Quickstart Create Data Factory Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-powershell.md
ms.devlang: powershell Previously updated : 01/26/2022 Last updated : 08/18/2022
data-factory Quickstart Create Data Factory Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-python.md
ms.devlang: python Previously updated : 05/27/2021 Last updated : 08/18/2022
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
ms.devlang: rest-api Previously updated : 05/31/2021 Last updated : 08/18/2022
data-factory Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quota-increase.md
description: How to create a support request in the Azure portal for Azure Data
Previously updated : 01/27/2022 Last updated : 08/18/2022
data-factory Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/samples-powershell.md
Previously updated : 03/16/2021 Last updated : 08/18/2022 # Azure PowerShell samples for Azure Data Factory
data-factory Sap Change Data Capture Debug Shir Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-debug-shir-logs.md
Previously updated : 06/01/2022 Last updated : 08/18/2022
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
Previously updated : 06/01/2022 Last updated : 08/18/2022
data-factory Sap Change Data Capture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md
Previously updated : 06/01/2022 Last updated : 08/18/2022
data-factory Sap Change Data Capture Prepare Linked Service Source Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md
Previously updated : 06/01/2022 Last updated : 08/18/2022
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
Previously updated : 06/01/2022 Last updated : 08/18/2022
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
Previously updated : 06/01/2022 Last updated : 08/18/2022
data-factory Scenario Dataflow Process Data Aml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-dataflow-process-data-aml-models.md
description: Learn how to use Azure Data Factory data flows to process data from
co-- - Previously updated : 1/31/2021 Last updated : 08/18/2022 ms.co-
data-factory Scenario Ssis Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-overview.md
Previously updated : 07/07/2022 Last updated : 08/18/2022 # Migrate on-premises SSIS workloads to SSIS in ADF or Synapse Pipelines
data-factory Scenario Ssis Migration Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-rules.md
Previously updated : 07/07/2022 Last updated : 08/18/2022 # SSIS migration assessment rules
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/security-and-access-control-troubleshoot-guide.md
Previously updated : 08/15/2022 Last updated : 08/18/2022
data-factory Self Hosted Integration Runtime Auto Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-auto-update.md
Previously updated : 06/16/2021 Last updated : 08/18/2022 # Self-hosted integration runtime auto-update and expire notification
data-factory Self Hosted Integration Runtime Automation Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-automation-scripts.md
Previously updated : 01/31/2022 Last updated : 08/18/2022 # Automating self-hosted integration runtime installation using local PowerShell scripts
data-factory Self Hosted Integration Runtime Diagnostic Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-diagnostic-tool.md
Previously updated : 07/28/2021 Last updated : 08/18/2022 # Diagnostic tool for self-hosted integration runtime
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
Previously updated : 02/16/2022 Last updated : 08/18/2022 # Configure a self-hosted IR as a proxy for an Azure-SSIS IR
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
Previously updated : 02/16/2022 Last updated : 08/18/2022
data-factory Solution Template Bulk Copy From Files To Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-bulk-copy-from-files-to-database.md
Previously updated : 12/09/2020 Last updated : 08/18/2022 # Bulk copy from files to database
The template defines the following two parameters:
## How to use this solution template
-1. Go to the **Bulk Copy from Files to Database** template. Create a **New** connection to the source Gen2 store. Be aware that "GetMetadataDataset" and "SourceDataset" are references to the same connection of your source file store.
+1. Open the Azure Data Factory Studio and select the **Author** tab with the pencil icon.
+1. Hover over the **Pipelines** section and select the ellipsis that appears to its right. Then select **Pipeline from template**.
+ :::image type="content" source="media/how-to-send-notifications-to-teams/pipeline-from-template.png" alt-text="Screenshot of the data factory user interface showing the Pipeline from template button.":::
+1. Select the **Bulk Copy from Files to Database** template, then select **Continue**.
+ :::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/bulk-copy-files-to-database-template.png" alt-text="Screenshot of the Bulk copy files to database template in the template browser.":::
+1. Create a **New** connection to the Gen2 store as your source, and another to the database as your sink. Then select **Use this template**.
- :::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/source-connection.png" alt-text="Create a new connection to the source data store":::
-
-2. Create a **New** connection to the sink data store that you're copying data to.
-
- :::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/destination-connection.png" alt-text="Create a new connection to the sink data store":::
-
-3. Select **Use this template**.
-
- :::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/use-template.png" alt-text="Use this template":::
-
-4. You would see a pipeline created as shown in the following example:
+ :::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/select-source-and-sink.png" alt-text="Screenshot of the template editor with source and sink data sources highlighted.":::
+
+1. A new pipeline is created as shown in the following example:
:::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/new-pipeline.png" alt-text="Review the pipeline":::
- > [!NOTE]
- > If you chose **Azure Synapse Analytics** as the data destination in **step 2** mentioned above, you must enter a connection to Azure Blob storage for staging, as required by Azure Synapse Analytics Polybase. As the following screenshot shows, the template will automatically generate a *Storage Path* for your Blob storage. Check if the container has been created after the pipeline run.
-
- :::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/staging-account.png" alt-text="Polybase setting":::
-
-5. Select **Debug**, enter the **Parameters**, and then select **Finish**.
+1. Select **Debug**, enter the **Parameters**, and then select **Finish**.
:::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/debug-run.png" alt-text="Click **Debug**":::
-6. When the pipeline run completes successfully, you would see results similar to the following example:
+1. When the pipeline run completes successfully, you will see results similar to the following example:
:::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/run-succeeded.png" alt-text="Review the result":::
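As a hedged sketch (not part of the template itself), you could also check the run programmatically with Azure PowerShell, assuming the Az.DataFactory module and hypothetical factory and pipeline names:

```azurepowershell
# Sketch only: the factory and pipeline names are assumptions; pass -Parameter if the template requires parameters.
$runId = Invoke-AzDataFactoryV2Pipeline -ResourceGroupName "myResourceGroup" `
    -DataFactoryName "myDataFactory" -PipelineName "BulkCopyfromFilesToDatabase"

# Check the overall pipeline run status.
Get-AzDataFactoryV2PipelineRun -ResourceGroupName "myResourceGroup" `
    -DataFactoryName "myDataFactory" -PipelineRunId $runId

# Inspect the individual activity runs (for example, the copy activities) for this run.
Get-AzDataFactoryV2ActivityRun -ResourceGroupName "myResourceGroup" -DataFactoryName "myDataFactory" `
    -PipelineRunId $runId -RunStartedAfter (Get-Date).AddHours(-1) -RunStartedBefore (Get-Date)
```
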
data-factory Solution Template Bulk Copy With Control Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-bulk-copy-with-control-table.md
Previously updated : 12/09/2020 Last updated : 09/22/2022 # Bulk copy from a database with a control table
The last three parameters, which define the path in your destination store are o
2. Go to the **Bulk Copy from Database** template. Create a **New** connection to the external control table that you created in step 1.
- :::image type="content" source="mediB_with_ControlTable2.png" alt-text="Create a new connection to the control table":::
+ :::image type="content" source="media/solution-template-bulk-copy-with-control-table/bulk-copy-from-db-with-control-table-2.png" alt-text="Screenshot showing the creation of a new connection to the control table.":::
3. Create a **New** connection to the source database that you're copying data from.
- :::image type="content" source="mediB_with_ControlTable3.png" alt-text="Create a new connection to the source database":::
+ :::image type="content" source="media/solution-template-bulk-copy-with-control-table/bulk-copy-from-db-with-control-table-3.png" alt-text="Screenshot showing the creation of a new connection to the source database.":::
4. Create a **New** connection to the destination data store that you're copying the data to.
- :::image type="content" source="mediB_with_ControlTable4.png" alt-text="Create a new connection to the destination store":::
+ :::image type="content" source="media/solution-template-bulk-copy-with-control-table/bulk-copy-from-db-with-control-table-4.png" alt-text="Screenshot showing the creation of a new connection to the destination store.":::
5. Select **Use this template**. 6. You see the pipeline, as shown in the following example:
- :::image type="content" source="mediB_with_ControlTable6.png" alt-text="Review the pipeline":::
+ :::image type="content" source="media/solution-template-bulk-copy-with-control-table/bulk-copy-from-db-with-control-table-6.png" alt-text="Screenshot showing the pipeline.":::
7. Select **Debug**, enter the **Parameters**, and then select **Finish**.
- :::image type="content" source="mediB_with_ControlTable7.png" alt-text="Click **Debug**":::
+ :::image type="content" source="media/solution-template-bulk-copy-with-control-table/bulk-copy-from-db-with-control-table-7.png" alt-text="Screenshot showing the Debug button.":::
8. You see results that are similar to the following example:
- :::image type="content" source="mediB_with_ControlTable8.png" alt-text="Review the result":::
+ :::image type="content" source="media/solution-template-bulk-copy-with-control-table/bulk-copy-from-db-with-control-table-8.png" alt-text="Screenshot showing the result of the pipeline run.":::
9. (Optional) If you chose "Azure Synapse Analytics" as the data destination, you must enter a connection to Azure Blob storage for staging, as required by Azure Synapse Analytics Polybase. The template will automatically generate a container path for your Blob storage. Check if the container has been created after the pipeline run.
- :::image type="content" source="mediB_with_ControlTable9.png" alt-text="Polybase setting":::
+ :::image type="content" source="media/solution-template-bulk-copy-with-control-table/bulk-copy-from-db-with-control-table-9.png" alt-text="Screenshot showing the Polybase setting.":::
## Next steps
data-factory Solution Template Copy Files Multiple Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-copy-files-multiple-containers.md
Previously updated : 01/31/2022 Last updated : 09/22/2022 # Copy multiple folders with Azure Data Factory
data-factory Solution Template Copy New Files Last Modified Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-copy-new-files-last-modified-date.md
+
+ Title: Copy new and changed files by LastModifiedDate
+description: Learn how to use a solution template to copy new and changed files by LastModifiedDate with Azure Data Factory.
+++++++ Last updated : 09/22/2022++
+# Copy new and changed files by LastModifiedDate with Azure Data Factory
++
+This article describes a solution template that you can use to copy new and changed files only by LastModifiedDate from a file-based store to a destination store.
+
+## About this solution template
+
+This template first selects the new and changed files based on their **LastModifiedDate** attribute, and then copies those selected files from the data source store to the data destination store.
+
+The template contains one activity:
+- **Copy** to copy new and changed files only by LastModifiedDate from a file store to a destination store.
+
+The template defines six parameters:
+- *FolderPath_Source* is the folder path where you can read the files from the source store. You need to replace the default value with your own folder path.
+- *Directory_Source* is the subfolder path where you can read the files from the source store. You need to replace the default value with your own subfolder path.
+- *FolderPath_Destination* is the folder path where you want to copy files to the destination store. You need to replace the default value with your own folder path.
+- *Directory_Destination* is the subfolder path where you want to copy files to the destination store. You need to replace the default value with your own subfolder path.
+- *LastModified_From* is used to select the files whose LastModifiedDate attribute is after or equal to this datetime value. To select only the new files that haven't been copied before, this datetime value can be the time when the pipeline was last triggered. You can replace the default value '2019-02-01T00:00:00Z' with your expected LastModifiedDate in the UTC time zone.
+- *LastModified_To* is used to select the files whose LastModifiedDate attribute is before this datetime value. To select only the new files that haven't been copied before, this datetime value can be the present time. You can replace the default value '2019-02-01T00:00:00Z' with your expected LastModifiedDate in the UTC time zone.
+
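As a minimal sketch, assuming the Az.DataFactory module and hypothetical factory and pipeline names, these six parameters could also be passed when starting a run from Azure PowerShell:

```azurepowershell
# Sketch only: the factory and pipeline names are assumptions; the parameter values mirror the defaults used later in this article.
$params = @{
    FolderPath_Source      = "sourcefolder"
    Directory_Source       = "subfolder"
    FolderPath_Destination = "destinationfolder"
    Directory_Destination  = "subfolder"
    LastModified_From      = "2019-02-01T00:00:00Z"   # copy files modified at or after this UTC time
    LastModified_To        = "2019-03-01T00:00:00Z"   # ...and before this UTC time
}

Invoke-AzDataFactoryV2Pipeline -ResourceGroupName "myResourceGroup" -DataFactoryName "myDataFactory" `
    -PipelineName "CopyNewFilesOnlyByLastModifiedDate" -Parameter $params
```
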
+## How to use this solution template
+
+1. Go to the **Copy new files only by LastModifiedDate** template. Create a **New** connection to your source storage store. The source storage store is where you want to copy files from.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-1.png" alt-text="Create a new connection to the source":::
+
+2. Create a **New** connection to your destination store. The destination store is where you want to copy files to.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-3.png" alt-text="Create a new connection to the destination":::
+
+3. Select **Use this template**.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-4.png" alt-text="Use this template":::
+
+4. You will see the pipeline available in the panel, as shown in the following example:
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-5.png" alt-text="Show the pipeline":::
+
+5. Select **Debug**, enter the values for the **Parameters**, and select **Finish**. In the picture below, we set the parameters as follows.
+ - **FolderPath_Source** = sourcefolder
+ - **Directory_Source** = subfolder
+ - **FolderPath_Destination** = destinationfolder
+ - **Directory_Destination** = subfolder
+ - **LastModified_From** = 2019-02-01T00:00:00Z
+ - **LastModified_To** = 2019-03-01T00:00:00Z
+
+ This example indicates that the files last modified within the timespan (**2019-02-01T00:00:00Z** to **2019-03-01T00:00:00Z**) will be copied from the source path **sourcefolder/subfolder** to the destination path **destinationfolder/subfolder**. You can replace these values with your own parameters.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-6.png" alt-text="Run the pipeline":::
+
+6. Review the result. You'll see that only the files last modified within the configured timespan have been copied to the destination store.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-7.png" alt-text="Review the result":::
+
+7. Now you can add a tumbling window trigger to automate this pipeline, so that the pipeline periodically copies only the new and changed files by LastModifiedDate. Select **Add trigger**, and select **New/Edit**.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-8.png" alt-text="Screenshot that highlights the New/Edit menu option that appears when you select Add trigger.":::
+
+8. In the **Add Triggers** window, select **+ New**.
+
+9. Select **Tumbling Window** for the trigger type, and set **Every 15 minute(s)** as the recurrence (you can change it to any interval). Select **Yes** for the **Activated** box, and then select **OK**.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-10.png" alt-text="Create trigger":::
+
+10. Set the values for the **Trigger Run Parameters** as follows, and select **Finish**. (A PowerShell sketch of an equivalent trigger definition follows these steps.)
+ - **FolderPath_Source** = **sourcefolder**. You can replace with your folder in source data store.
+ - **Directory_Source** = **subfolder**. You can replace with your subfolder in source data store.
+ - **FolderPath_Destination** = **destinationfolder**. You can replace with your folder in destination data store.
+ - **Directory_Destination** = **subfolder**. You can replace with your subfolder in destination data store.
+ - **LastModified_From** = **\@trigger().outputs.windowStartTime**. It is a system variable from the trigger determining the time when the pipeline was triggered last time.
+ - **LastModified_To** = **\@trigger().outputs.windowEndTime**. It is a system variable from the trigger determining the time when the pipeline is triggered this time.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-11.png" alt-text="Input parameters":::
+
+11. Select **Publish All**.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-12.png" alt-text="Publish All":::
+
+12. Create new files in the source folder of your data source store. Then wait for the pipeline to be triggered automatically; only the new files will be copied to the destination store.
+
+13. Select the **Monitor** tab in the left navigation panel, and wait for about 15 minutes if the trigger recurrence has been set to every 15 minutes.
+
+14. Review the result. You'll see that your pipeline is triggered automatically every 15 minutes, and only the new or changed files from the source store are copied to the destination store in each pipeline run.
+
+ :::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-15.png" alt-text="Screenshot that shows the results that return when the pipeline is triggered.":::
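As a hedged sketch, assuming the Az.DataFactory module and hypothetical trigger, pipeline, and factory names, an equivalent 15-minute tumbling window trigger could be defined as JSON and deployed with Azure PowerShell. The JSON shape follows the tumbling window trigger pattern and should be treated as an assumption:

```azurepowershell
# Sketch only: the trigger, pipeline, and factory names are assumptions.
$triggerJson = @'
{
    "name": "CopyNewFilesTrigger",
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Minute",
            "interval": 15,
            "startTime": "2022-09-01T00:00:00Z",
            "maxConcurrency": 1
        },
        "pipeline": {
            "pipelineReference": {
                "referenceName": "CopyNewFilesOnlyByLastModifiedDate",
                "type": "PipelineReference"
            },
            "parameters": {
                "FolderPath_Source": "sourcefolder",
                "Directory_Source": "subfolder",
                "FolderPath_Destination": "destinationfolder",
                "Directory_Destination": "subfolder",
                "LastModified_From": "@trigger().outputs.windowStartTime",
                "LastModified_To": "@trigger().outputs.windowEndTime"
            }
        }
    }
}
'@
Set-Content -Path .\CopyNewFilesTrigger.json -Value $triggerJson

# Create (or update) the trigger from the definition file.
Set-AzDataFactoryV2Trigger -ResourceGroupName "myResourceGroup" -DataFactoryName "myDataFactory" `
    -Name "CopyNewFilesTrigger" -DefinitionFile ".\CopyNewFilesTrigger.json"

# Triggers are created in a stopped state; start it so the windows begin to fire.
Start-AzDataFactoryV2Trigger -ResourceGroupName "myResourceGroup" -DataFactoryName "myDataFactory" `
    -Name "CopyNewFilesTrigger"
```
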
+
+## Next steps
+
+- [Introduction to Azure Data Factory](introduction.md)
data-factory Solution Template Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-databricks-notebook.md
Previously updated : 01/31/2022 Last updated : 09/22/2022 # Transformation with Azure Databricks
data-factory Solution Template Delta Copy With Control Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-delta-copy-with-control-table.md
Previously updated : 12/09/2020 Last updated : 09/22/2022 # Delta copy from a database with a control table
The template defines the following parameters:
 4. Go to the **Delta copy from Database** template. Create a **New** connection to the source database that you want to copy data from.
- :::image type="content" source="mediB_with_ControlTable4.png" alt-text="Create a new connection to the source table":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-4.png" alt-text="Screenshot showing the creation of a new connection to the source table.":::
5. Create a **New** connection to the destination data store that you want to copy the data to.
- :::image type="content" source="mediB_with_ControlTable5.png" alt-text="Create a new connection to the destination table":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-5.png" alt-text="Screenshot showing the creation of a new connection to the destination table.":::
6. Create a **New** connection to the external control table and stored procedure that you created in steps 2 and 3.
- :::image type="content" source="mediB_with_ControlTable6.png" alt-text="Create a new connection to the control table data store":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-6.png" alt-text="Screenshot showing the creation of a new connection to the control table data store.":::
7. Select **Use this template**. 8. You see the available pipeline, as shown in the following example:
- :::image type="content" source="mediB_with_ControlTable8.png" alt-text="Review the pipeline":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-8.png" alt-text="Screenshot showing the pipeline.":::
9. Select **Stored Procedure**. For **Stored procedure name**, choose **[dbo].[update_watermark]**. Select **Import parameter**, and then select **Add dynamic content**.
- :::image type="content" source="mediB_with_ControlTable9.png" alt-text="Set the stored procedure activity":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-9.png" alt-text="Screenshot showing where to set the stored procedure activity.":::
10. Write the content **\@{activity('LookupCurrentWaterMark').output.firstRow.NewWatermarkValue}**, and then select **Finish**.
- :::image type="content" source="mediB_with_ControlTable10.png" alt-text="Write the content for the parameters of the stored procedure":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-10.png" alt-text="Screenshot showing where to write the content for the parameters of the stored procedure.":::
11. Select **Debug**, enter the **Parameters**, and then select **Finish**.
- :::image type="content" source="mediB_with_ControlTable11.png" alt-text="Select **Debug**":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-11.png" alt-text="Screenshot showing the Debug button.":::
12. Results similar to the following example are displayed:
- :::image type="content" source="mediB_with_ControlTable12.png" alt-text="Review the result":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-12.png" alt-text="Screenshot showing the result of the pipeline run.":::
13. You can create new rows in your source table. Here is sample SQL language to create new rows:
The template defines the following parameters:
15. (Optional:) If you select Azure Synapse Analytics as the data destination, you must also provide a connection to Azure Blob storage for staging, which is required by Azure Synapse Analytics Polybase. The template will generate a container path for you. After the pipeline run, check whether the container has been created in Blob storage.
- :::image type="content" source="mediB_with_ControlTable15.png" alt-text="Configure Polybase":::
+ :::image type="content" source="media/solution-template-delta-copy-with-control-table/delta-copy-from-db-with-control-table-15.png" alt-text="Screenshot showing where to configure Polybase.":::
## Next steps
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
Previously updated : 04/22/2022 Last updated : 09/22/2022 # Extract data from PDF
data-factory Solution Template Move Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-move-files.md
Previously updated : 01/26/2022 Last updated : 09/22/2022 # Move files with Azure Data Factory
The template defines four parameters:
## How to use this solution template 1. Go to the **Move files** template. Select an existing connection or create a **New** connection to your source file store where you want to move files from. Be aware that **DataSource_Folder** and **DataSource_File** are references to the same connection of your source file store.
- :::image type="content" source="media/solution-template-move-files/move-files-1-small.png" alt-text="Create a new connection to the source" lightbox="media/solution-template-move-files/move-files-1.png":::
+ :::image type="content" source="media/solution-template-move-files/move-files-1.png" alt-text="Screenshot showing creation of a new connection to the source." lightbox="media/solution-template-move-files/move-files-1.png" :::
2. Select existing connection or create a **New** connection to your destination file store where you want to move files to.
- :::image type="content" source="media/solution-template-move-files/move-files-2-small.png" alt-text="Create a new connection to the destination" lightbox="media/solution-template-move-files/move-files-2.png":::
+ :::image type="content" source="media/solution-template-move-files/move-files-2.png" alt-text="Screenshot showing creation a new connection to the destination." lightbox="media/solution-template-move-files/move-files-2.png" :::
3. Select the **Use this template** tab. 4. You'll see the pipeline, as in the following example:
- :::image type="content" source="media/solution-template-move-files/move-files4.png" alt-text="Show the pipeline":::
+ :::image type="content" source="media/solution-template-move-files/move-files-4.png" alt-text="Screenshot showing the pipeline.":::
5. Select **Debug**, enter the **Parameters**, and then select **Finish**. The parameters are the folder path where you want to move files from and the folder path where you want to move files to.
- :::image type="content" source="media/solution-template-move-files/move-files5.png" alt-text="Run the pipeline":::
+ :::image type="content" source="media/solution-template-move-files/move-files5.png" alt-text="Screenshot showing where to run the pipeline.":::
6. Review the result.
- :::image type="content" source="media/solution-template-move-files/move-files6.png" alt-text="Review the result":::
+ :::image type="content" source="media/solution-template-move-files/move-files6.png" alt-text="Screenshot showing the result of the pipeline run.":::
## Next steps
data-factory Solution Template Pii Detection And Masking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-pii-detection-and-masking.md
Previously updated : 04/22/2022 Last updated : 09/22/2022 # PII detection and masking
data-factory Solution Template Synapse Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-synapse-notebook.md
Previously updated : 11/23/2021 Last updated : 09/22/2022 # Call Synapse pipeline with a notebook activity
data-factory Solution Templates Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-templates-introduction.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Templates
data-factory Ssis Azure Connect With Windows Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-azure-connect-with-windows-auth.md
Title: Access data stores and file shares with Windows authentication description: Learn how to configure SSIS catalog in Azure SQL Database and Azure-SSIS Integration Runtime in Azure Data Factory to run packages that access data stores and file shares with Windows authentication. Previously updated : 02/15/2022 Last updated : 09/22/2022
data-factory Ssis Integration Runtime Diagnose Connectivity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-diagnose-connectivity-faq.md
Previously updated : 02/15/2022 Last updated : 09/22/2022 # Use the diagnose connectivity feature in the SSIS integration runtime
data-factory Ssis Integration Runtime Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-management-troubleshoot.md
Previously updated : 02/15/2022 Last updated : 09/22/2022 # Troubleshoot SSIS Integration Runtime management
data-factory Ssis Integration Runtime Ssis Activity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-ssis-activity-faq.md
Previously updated : 02/21/2022 Last updated : 09/22/2022 # Troubleshoot package execution in the SSIS integration runtime
data-factory Store Credentials In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/store-credentials-in-key-vault.md
Previously updated : 01/21/2022 Last updated : 09/22/2022
The following properties are supported for Azure Key Vault linked service:
Select **Connections** -> **Linked Services** -> **New**. In New linked service, search for and select "Azure Key Vault". Select the provisioned Azure Key Vault where your credentials are stored. You can select **Test Connection** to make sure your AKV connection is valid. **JSON example:**
Select **Azure Key Vault** for secret fields while creating the connection to yo
>[!TIP] >For connectors that use a connection string in the linked service, such as SQL Server or Blob storage, you can choose either to store only the secret field (for example, the password) in AKV, or to store the entire connection string in AKV. You can find both options on the UI. **JSON example: (see the "password" section)**
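As a minimal sketch, assuming the Az.DataFactory module and hypothetical key vault, factory, and secret names, an Azure Key Vault linked service could be deployed from Azure PowerShell and then referenced from another linked service's secret field:

```azurepowershell
# Sketch only: all resource names are assumptions.
$akvJson = @'
{
    "name": "AzureKeyVaultLinkedService",
    "properties": {
        "type": "AzureKeyVault",
        "typeProperties": {
            "baseUrl": "https://mykeyvault.vault.azure.net"
        }
    }
}
'@
Set-Content -Path .\AzureKeyVaultLinkedService.json -Value $akvJson

Set-AzDataFactoryV2LinkedService -ResourceGroupName "myResourceGroup" -DataFactoryName "myDataFactory" `
    -Name "AzureKeyVaultLinkedService" -DefinitionFile ".\AzureKeyVaultLinkedService.json"

# In a downstream linked service definition, a secret field can then point at a Key Vault secret,
# for example (shown as a JSON fragment in a comment for illustration):
#   "password": {
#       "type": "AzureKeyVaultSecret",
#       "store": { "referenceName": "AzureKeyVaultLinkedService", "type": "LinkedServiceReference" },
#       "secretName": "mySqlPassword"
#   }
```
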
data-factory Supported File Formats And Compression Codecs Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/supported-file-formats-and-compression-codecs-legacy.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Supported file formats and compression codecs in Azure Data Factory and Synapse Analytics (legacy)
data-factory Supported File Formats And Compression Codecs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/supported-file-formats-and-compression-codecs.md
Previously updated : 09/09/2021 Last updated : 09/22/2022
data-factory Transform Data Databricks Jar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-jar.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data by running a Jar activity in Azure Databricks
data-factory Transform Data Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-notebook.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data by running a Databricks notebook
data-factory Transform Data Databricks Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-python.md
description: Learn how to process or transform data by running a Databricks Pyth
Previously updated : 09/09/2021 Last updated : 09/22/2022
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-machine-learning-service.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Execute Azure Machine Learning pipelines in Azure Data Factory and Synapse Analytics
data-factory Transform Data Using Custom Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-custom-activity.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Use custom activities in an Azure Data Factory or Azure Synapse Analytics pipeline
data-factory Transform Data Using Data Lake Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-data-lake-analytics.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Process data by running U-SQL scripts on Azure Data Lake Analytics with Azure Data Factory and Synapse Analytics
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-databricks-notebook.md
Previously updated : 09/08/2021 Last updated : 09/22/2022 # Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory
data-factory Transform Data Using Hadoop Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-hive.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data using Hadoop Hive activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Hadoop Map Reduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-map-reduce.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data using Hadoop MapReduce activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Hadoop Pig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-pig.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data using Hadoop Pig activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Hadoop Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-streaming.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data using Hadoop Streaming activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-machine-learning.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Create a predictive pipeline using Machine Learning Studio (classic) with Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
Previously updated : 04/20/2022 Last updated : 09/22/2022 # Transform data by using the Script activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data Using Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-spark.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data using Spark activity in Azure Data Factory and Synapse Analytics
data-factory Transform Data Using Stored Procedure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-stored-procedure.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data by using the SQL Server Stored Procedure activity in Azure Data Factory or Synapse Analytics
data-factory Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Transform data in Azure Data Factory and Azure Synapse Analytics
data-factory Tumbling Window Trigger Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md
Previously updated : 09/09/2021 Last updated : 09/22/2022 # Create a tumbling window trigger dependency
For a demonstration on how to create dependent pipelines using tumbling window t
To create a dependency on a trigger, select **Trigger > Advanced > New**, and then choose the trigger to depend on with the appropriate offset and size. Select **Finish** and publish the changes for the dependencies to take effect. ## Tumbling window dependency properties
databox-online Azure Stack Edge Gpu Virtual Machine Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
Previously updated : 08/09/2022 Last updated : 09/21/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs.
ddos-protection Manage Ddos Protection Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell.md
You can enable DDoS protection when creating a virtual network. In this example,
$ddosProtectionPlanID = Get-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtectionPlan

#Creates the virtual network
-New-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup -Location "East US" -AddressPrefix 10.0.0.0/16 -DdosProtectionPlan $ddosProtectionPlanID -EnableDdosProtection
+New-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup -Location "East US" -AddressPrefix 10.0.0.0/16 -DdosProtectionPlan $ddosProtectionPlanID.Id -EnableDdosProtection
```

### Enable DDoS for an existing virtual network
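A hedged sketch of attaching an existing DDoS protection plan to an existing virtual network follows. The resource names are placeholders, and the `PSResourceId` type name is an assumption based on the Az.Network module:

```azurepowershell
# Sketch only: replace the plan, virtual network, and resource group names with your own values.
$ddosProtectionPlan = Get-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtectionPlan

$vnet = Get-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $ddosProtectionPlan.Id
$vnet.EnableDdosProtection = $true

# Persist the change on the virtual network.
$vnet | Set-AzVirtualNetwork
```
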
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
Defender for Cloud's MFA recommendations refer to [Azure RBAC](../role-based-acc
Defender for Cloud's MFA recommendations currently don't support PIM accounts. You can add these accounts to a CA Policy in the Users/Group section. ### Can I exempt or dismiss some of the accounts?
-The capability to exempt some accounts that donΓÇÖt use MFA isn't currently supported. There are plans to add this capability, and the information can be viewed in our [Important upcoming changes](/azure/defender-for-cloud/upcoming-changes#multiple-changes-to-identity-recommendations) page.
+
+The capability to exempt some accounts that don't use MFA is available on the new recommendations in preview:
+
+- Accounts with owner permissions on Azure resources should be MFA enabled
+- Accounts with write permissions on Azure resources should be MFA enabled
+- Accounts with read permissions on Azure resources should be MFA enabled
+
+To exempt account(s), follow these steps:
+
+1. Select an MFA recommendation associated with an unhealthy account.
+1. In the Accounts tab, select an account to exempt.
+1. Select the three dots button, then select **Exempt account**.
+1. Select a scope and exemption reason.
+
+If you would like to see which accounts are exempt, navigate to **Exempted accounts** for each recommendation.
+
+> [!TIP]
+> When you exempt an account, it won't be shown as unhealthy and won't cause a subscription to appear unhealthy.
### Are there any limitations to Defender for Cloud's identity and access protections? There are some limitations to Defender for Cloud's identity and access protections:
There are some limitations to Defender for Cloud's identity and access protectio
## Next steps To learn more about recommendations that apply to other Azure resource types, see the following article: -- [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md)
+- [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Last updated 09/20/2022 - # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
## September 2022
+Updates in September include:
+ - [Suppress alerts based on Container and Kubernetes entities](#suppress-alerts-based-on-container-and-kubernetes-entities) - [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent) - [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation)
+- [Extra recommendations added to identity](#extra-recommendations-added-to-identity)
### Suppress alerts based on Container and Kubernetes entities
The following APIs are deprecated:
These three APIs exposed old formats of assessments and are replaced by the [Assessments APIs](/rest/api/defenderforcloud/assessments) and [SubAssessments APIs](/rest/api/defenderforcloud/sub-assessments). All data that is exposed by these legacy APIs are also available in the new APIs.
+### Extra recommendations added to identity
+
+Defender for Cloud's recommendations for improving the management of users and accounts have been extended.
+
+#### New recommendations
+
+The new release contains the following capabilities:
+
+- **Extended evaluation scope** - Coverage has been improved for identity accounts without MFA and external accounts on Azure resources (instead of subscriptions only), which allows your security administrators to view role assignments per account.
+
+- **Improved freshness interval** - The identity recommendations now have a freshness interval of 12 hours.
+
+- **Account exemption capability** - Defender for Cloud has many features you can use to customize your experience and ensure that your secure score reflects your organization's security priorities. For example, you can [exempt resources and recommendations from your secure score](exempt-resource.md).
+
+ This update allows you to exempt specific accounts from evaluation with the six recommendations listed in the following table.
+
+ Typically, you'd exempt emergency "break glass" accounts from MFA recommendations, because such accounts are often deliberately excluded from an organization's MFA requirements. Alternatively, you might have external accounts that you'd like to permit access to, that don't have MFA enabled.
+
+ > [!TIP]
+ > When you exempt an account, it won't be shown as unhealthy and also won't cause a subscription to appear unhealthy.
+
+ |Recommendation| Assessment key|
+ |-|-|
+ |MFA should be enabled on accounts with owner permissions on your subscription|94290b00-4d0c-d7b4-7cea-064a9554e681|
+ |MFA should be enabled on accounts with read permissions on your subscription|151e82c5-5341-a74b-1eb0-bc38d2c84bb5|
+ |MFA should be enabled on accounts with write permissions on your subscription|57e98606-6b1e-6193-0e3d-fe621387c16b|
+ |External accounts with owner permissions should be removed from your subscription|c3b6ae71-f1f0-31b4-e6c1-d5951285d03d|
+ |External accounts with read permissions should be removed from your subscription|a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b|
+ |External accounts with write permissions should be removed from your subscription|04e7147b-0deb-9796-2e5c-0336343ceb3d|
+
+Although these recommendations are in preview, they'll appear next to the recommendations that are currently in GA.
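As a hedged sketch, assuming the Az.ResourceGraph module, the assessment keys in the table above could be used to list unhealthy resources with an Azure Resource Graph query. The query shape below is an assumption based on the `securityresources` table:

```azurepowershell
# Sketch only: the key shown is the "MFA should be enabled on accounts with owner permissions" assessment.
$query = @'
securityresources
| where type == 'microsoft.security/assessments'
| where name == '94290b00-4d0c-d7b4-7cea-064a9554e681'
| where properties.status.code == 'Unhealthy'
| project subscriptionId, resourceId = id, displayName = properties.displayName
'@

Search-AzGraph -Query $query
```
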
+ ## August 2022 Updates in August include:
Learn more about [viewing vulnerabilities for running images](defender-for-conta
### Azure Monitor Agent integration now in preview
-Defender for Cloud now includes preview support for the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA). AMA is intended to replace the legacy Log Analytics agent (also referred to as the Microsoft Monitoring Agent (MMA)), which is on a path to deprecation. AMA [provides a number of benefits](../azure-monitor/agents/azure-monitor-agent-migration.md#benefits) over legacy agents.
+Defender for Cloud now includes preview support for the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA). AMA is intended to replace the legacy Log Analytics agent (also referred to as the Microsoft Monitoring Agent (MMA)), which is on a path to deprecation. AMA [provides many benefits](../azure-monitor/agents/azure-monitor-agent-migration.md#benefits) over legacy agents.
-In Defender for Cloud, when you [enable auto provisioning for AMA](auto-deploy-azure-monitoring-agent.md), the agent is deployed on **existing and new** VMs and Azure Arc-enabled machines that are detected in your subscriptions. If Defender for Cloud plans are enabled, AMA collects configuration information and event logs from Azure VMs and Azure Arc machines. Note that the AMA integration is in preview, so we recommend using it in test environments, rather than in production environments.
+In Defender for Cloud, when you [enable auto provisioning for AMA](auto-deploy-azure-monitoring-agent.md), the agent is deployed on **existing and new** VMs and Azure Arc-enabled machines that are detected in your subscriptions. If Defender for Cloud plans are enabled, AMA collects configuration information and event logs from Azure VMs and Azure Arc machines. The AMA integration is in preview, so we recommend using it in test environments, rather than in production environments.
### Deprecated VM alerts regarding suspicious activity related to a Kubernetes cluster
defender-for-iot How To View Information Per Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-information-per-zone.md
Last updated 06/12/2022 -
digital-twins Concepts 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-3d-scenes-studio.md
To work with 3D Scenes Studio, you'll need the following required resources:
You can grant required roles at either the storage account level or the container level. For more information about Azure storage permissions, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal#assign-an-azure-role). * You should also configure [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. For complete CORS setting information, see [Use 3D Scenes Studio (preview)](how-to-use-3d-scenes-studio.md#prerequisites).
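As a minimal sketch, assuming the Az.Storage module, a blob CORS rule allowing the 3D Scenes Studio origin could be added like this. The methods, headers, and max age shown are assumptions; check the how-to article for the exact values required:

```azurepowershell
# Sketch only: replace the resource group and storage account names with your own values.
$context = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").Context

$corsRule = @(@{
    AllowedOrigins  = @("https://explorer.digitaltwins.azure.net")
    AllowedMethods  = @("GET", "POST", "PUT", "OPTIONS", "DELETE")
    AllowedHeaders  = @("*")
    ExposedHeaders  = @("*")
    MaxAgeInSeconds = 3600
})

# Apply the rule to the Blob service of the storage account backing your 3D Scenes Studio container.
Set-AzStorageCORSRule -ServiceType Blob -CorsRules $corsRule -Context $context
```
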
-Then, you can access 3D Scenes Studio at this link: [3D Scenes Studio](https://dev.explorer.azuredigitaltwins-test.net/3dscenes).
+Then, you can access 3D Scenes Studio at this link: [3D Scenes Studio](https://explorer.digitaltwins.azure.net/3dscenes).
Once there, you'll link your 3D environment to your storage resources, and configure your first scene. For detailed instructions on how to perform these actions, see [Initialize your 3D Scenes Studio environment](how-to-use-3d-scenes-studio.md#initialize-your-3d-scenes-studio-environment) and [Create, edit, and view scenes](how-to-use-3d-scenes-studio.md#create-edit-and-view-scenes).
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
Once someone has the required permissions, there are two ways to give them acces
:::image type="content" source="media/how-to-use-3d-scenes-studio/copy-url.png" alt-text="Screenshot of the Share environment button in 3D Scenes Studio." lightbox="media/how-to-use-3d-scenes-studio/copy-url.png"::: Share it with the recipient, who can paste this URL directly into their browser to connect to your environment.
-* Share the **URL of your Azure Digital Twins instance** and the **URL of your Azure storage container** that you used when [initializing your 3D Scenes Studio environment](#initialize-your-3d-scenes-studio-environment). The recipient can access [3D Scenes Studio](https://dev.explorer.azuredigitaltwins-test.net/3dscenes) and initialize it with these same URL values to connect to your same environment.
+* Share the **URL of your Azure Digital Twins instance** and the **URL of your Azure storage container** that you used when [initializing your 3D Scenes Studio environment](#initialize-your-3d-scenes-studio-environment). The recipient can access [3D Scenes Studio](https://explorer.digitaltwins.azure.net/3dscenes) and initialize it with these same URL values to connect to your same environment.
After this, the recipient can view and interact with your scenes in the studio.
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-blob-storage.md
Title: Azure Blob Storage as Event Grid source description: Describes the properties that are provided for blob storage events with Azure Event Grid Previously updated : 05/26/2022 Last updated : 09/22/2022 # Azure Blob Storage as an Event Grid source
This article provides the properties and schema for blob storage events. For an
## Available event types
-### List of events for Blob REST APIs
+## Blob Storage events
These events are triggered when a client creates, replaces, or deletes a blob by calling Blob REST APIs.
These events are triggered when a client creates, replaces, or deletes a blob by
|Event name |Description|
|-|--|
- |**Microsoft.Storage.BlobCreated** |Triggered when a blob is created or replaced. <br>Specifically, this event is triggered when clients use the `PutBlob`, `PutBlockList`, or `CopyBlob` operations that are available in the Blob REST API **and** when the Block Blob is completely committed. <br>If clients use the `CopyBlob` operation on accounts that have the **hierarchical namespace** feature enabled on them, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** and not when the Block Blob is completely committed. |
- |**Microsoft.Storage.BlobDeleted** |Triggered when a blob is deleted. <br>Specifically, this event is triggered when clients call the `DeleteBlob` operation that is available in the Blob REST API. |
- |**Microsoft.Storage.BlobTierChanged** |Triggered when the blob access tier is changed. Specifically, when clients call the `Set Blob Tier` operation that is available in the Blob REST API, this event is triggered after the tier change completes. |
-|**Microsoft.Storage.AsyncOperationInitiated** |Triggered when an operation involving moving or copying of data from the archive to hot or cool tiers is initiated. Specifically, this event is triggered either when clients call the `Set Blob Tier` API to move a blob from archive tier to hot or cool tier, or when clients call the `Copy Blob` API to copy data from a blob in the archive tier to a blob in the hot or cool tier.|
+ | [Microsoft.Storage.BlobCreated](#microsoftstorageblobcreated-event) |Triggered when a blob is created or replaced. <br>Specifically, this event is triggered when clients use the `PutBlob`, `PutBlockList`, or `CopyBlob` operations that are available in the Blob REST API **and** when the Block Blob is completely committed. <br>If clients use the `CopyBlob` operation on accounts that have the **hierarchical namespace** feature enabled on them, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** and not when the Block Blob is completely committed. |
+ |[Microsoft.Storage.BlobDeleted](#microsoftstorageblobdeleted-event) |Triggered when a blob is deleted. <br>Specifically, this event is triggered when clients call the `DeleteBlob` operation that is available in the Blob REST API. |
+ | [Microsoft.Storage.BlobTierChanged](#microsoftstorageblobtierchanged-event) |Triggered when the blob access tier is changed. Specifically, when clients call the `Set Blob Tier` operation that is available in the Blob REST API, this event is triggered after the tier change completes. |
+| [Microsoft.Storage.AsyncOperationInitiated](#microsoftstorageasyncoperationinitiated-event) |Triggered when an operation involving moving or copying of data from the archive to hot or cool tiers is initiated. Specifically, this event is triggered either when clients call the `Set Blob Tier` API to move a blob from archive tier to hot or cool tier, or when clients call the `Copy Blob` API to copy data from a blob in the archive tier to a blob in the hot or cool tier.|
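
To make the dispatch concrete, here is a minimal, hypothetical Python sketch (not part of the article) that routes a single delivered event, in the Event Grid schema shown in the examples below, by its `eventType`; the print statements stand in for application logic.

```python
import json

def handle_blob_event(raw_event: str) -> None:
    """Route one Event Grid event (Event Grid schema) by its eventType."""
    event = json.loads(raw_event)
    event_type = event["eventType"]
    data = event.get("data", {})

    if event_type == "Microsoft.Storage.BlobCreated":
        print(f"Blob created via {data.get('api')}: {data.get('url')}")
    elif event_type == "Microsoft.Storage.BlobDeleted":
        print(f"Blob deleted: {data.get('url')}")
    elif event_type == "Microsoft.Storage.BlobTierChanged":
        print(f"Access tier changed: {data.get('url')}")
    elif event_type == "Microsoft.Storage.AsyncOperationInitiated":
        print(f"Tier change or archive copy started: {data.get('url')}")
    else:
        print(f"Unhandled event type: {event_type}")
```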
-### List of the events for Azure Data Lake Storage Gen 2 REST APIs
-
-These events are triggered if you enable a hierarchical namespace on the storage account, and clients use Azure Data Lake Storage Gen2 REST APIs. For more information bout Azure Data Lake Storage Gen2, see [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).
-
-|Event name|Description|
-|-|--|
-|**Microsoft.Storage.BlobCreated** | Triggered when a blob is created or replaced. <br>Specifically, this event is triggered when clients use the `CreateFile` and `FlushWithClose` operations that are available in the Azure Data Lake Storage Gen2 REST API. |
-|**Microsoft.Storage.BlobDeleted** |Triggered when a blob is deleted. <br>Specifically, This event is also triggered when clients call the `DeleteFile` operation that is available in the Azure Data Lake Storage Gen2 REST API. |
-|**Microsoft.Storage.BlobRenamed**|Triggered when a blob is renamed. <br>Specifically, this event is triggered when clients use the `RenameFile` operation that is available in the Azure Data Lake Storage Gen2 REST API.|
-|**Microsoft.Storage.DirectoryCreated**|Triggered when a directory is created. <br>Specifically, this event is triggered when clients use the `CreateDirectory` operation that is available in the Azure Data Lake Storage Gen2 REST API.|
-|**Microsoft.Storage.DirectoryRenamed**|Triggered when a directory is renamed. <br>Specifically, this event is triggered when clients use the `RenameDirectory` operation that is available in the Azure Data Lake Storage Gen2 REST API.|
-|**Microsoft.Storage.DirectoryDeleted**|Triggered when a directory is deleted. <br>Specifically, this event is triggered when clients use the `DeleteDirectory` operation that is available in the Azure Data Lake Storage Gen2 REST API.|
-
-> [!NOTE]
-> For **Azure Data Lake Storage Gen2**, if you want to ensure that the **Microsoft.Storage.BlobCreated** event is triggered only when a Block Blob is completely committed, filter the event for the `FlushWithClose` REST API call. This API call triggers the **Microsoft.Storage.BlobCreated** event only after data is fully committed to a Block Blob. To learn how to create a filter, see [Filter events for Event Grid](./how-to-filter-events.md).
-
-### List of the events for SFTP APIs
-
-These events are triggered if you enable a hierarchical namespace on the storage account, and clients use SFTP APIs. For more information about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) in Azure Blob Storage](../storage/blobs/secure-file-transfer-protocol-support.md).
-
-|Event name|Description|
-|-|--|
-|**Microsoft.Storage.BlobCreated** |Triggered when a blob is created or overwritten. <br>Specifically, this event is triggered when clients use the `put` operation, which corresponds to the `SftpCreate` and `SftpCommit` APIs. An empty blob is created when the file is opened and the uploaded contents are committed when the file is closed.|
-|**Microsoft.Storage.BlobDeleted** |Triggered when a blob is deleted. <br>Specifically, this event is also triggered when clients call the `rm` operation, which corresponds to the `SftpRemove` API.|
-|**Microsoft.Storage.BlobRenamed**|Triggered when a blob is renamed. <br>Specifically, this event is triggered when clients use the `rename` operation on files, which corresponds to the `SftpRename` API.|
-|**Microsoft.Storage.DirectoryCreated**|Triggered when a directory is created. <br>Specifically, this event is triggered when clients use the `mkdir` operation, which corresponds to the `SftpMakeDir` API.|
-|**Microsoft.Storage.DirectoryRenamed**|Triggered when a directory is renamed. <br>Specifically, this event is triggered when clients use the `rename` operation on a directory, which corresponds to the `SftpRename` API.|
-|**Microsoft.Storage.DirectoryDeleted**|Triggered when a directory is deleted. <br>Specifically, this event is triggered when clients use the `rmdir` operation, which corresponds to the `SftpRemoveDir` API.|
-
-### List of policy-related events
-
-These events are triggered when the actions defined by a policy are performed.
-
- |Event name |Description|
- |-|--|
- |**Microsoft.Storage.BlobInventoryPolicyCompleted** |Triggered when the inventory run completes for a rule that is defined an inventory policy . This event also occurs if the inventory run fails with a user error before it starts to run. For example, an invalid policy, or an error that occurs when a destination container is not present will trigger the event. |
- |**Microsoft.Storage.LifecyclePolicyCompleted** |Triggered when the actions defined by a lifecycle management policy are performed. |
-
-## Example events
-When an event is triggered, the Event Grid service sends data about that event to subscribing endpoint. This section contains an example of what that data would look like for each blob storage event.
+### Example events
# [Event Grid event schema](#tab/event-grid-event-schema)
When an event is triggered, the Event Grid service sends data about that event t
}] ```
-### Microsoft.Storage.BlobCreated event (Data Lake Storage Gen2)
-
-If the blob storage account has a hierarchical namespace, the data looks similar to the previous example with an exception of these changes:
-
-* The `dataVersion` key is set to a value of `2`.
-
-* The `data.api` key is set to the string `CreateFile` or `FlushWithClose`.
-
-* The `contentOffset` key is included in the data set.
-
-> [!NOTE]
-> If applications use the `PutBlockList` operation to upload a new blob to the account, the data won't contain these changes.
+### Microsoft.Storage.BlobDeleted event
```json [{ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/new-file.txt",
- "eventType": "Microsoft.Storage.BlobCreated",
- "eventTime": "2017-06-26T18:41:00.9584103Z",
- "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/file-to-delete.txt",
+ "eventType": "Microsoft.Storage.BlobDeleted",
+ "eventTime": "2017-11-07T20:09:22.5674003Z",
+ "id": "4c2359fe-001e-00ba-0e04-58586806d298",
"data": {
- "api": "CreateFile",
- "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
- "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "eTag": "\"0x8D4BCC2E4835CD0\"",
+ "api": "DeleteBlob",
+ "requestId": "4c2359fe-001e-00ba-0e04-585868000000",
"contentType": "text/plain",
- "contentLength": 0,
- "contentOffset": 0,
"blobType": "BlockBlob",
- "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/new-file.txt",
- "sequencer": "00000000000004420000000000028963",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/file-to-delete.txt",
+ "sequencer": "0000000000000281000000000002F5CA",
"storageDiagnostics": {
- "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
} },
- "dataVersion": "2",
+ "dataVersion": "",
"metadataVersion": "1" }] ```
-### Microsoft.Storage.BlobCreated event (SFTP)
-
-If the blob storage account uses SFTP to create or overwrite a blob, then the data looks similar to the previous example with an exception of these changes:
-
-* The `dataVersion` key is set to a value of `3`.
+### Microsoft.Storage.BlobTierChanged event
-* The `data.api` key is set to the string `SftpCreate` or `SftpCommit`.
+```json
+{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/Auto.jpg",
+ "eventType": "Microsoft.Storage.BlobTierChanged",
+ "eventTime": "2021-05-04T15:00:00.8350154Z",
+ "id": "0fdefc06-b01e-0034-39f6-4016610696f6",
+ "data": {
+ "api": "SetBlobTier",
+ "clientRequestId": "68be434c-1a0d-432f-9cd7-1db90bff83d7",
+ "requestId": "0fdefc06-b01e-0034-39f6-401661000000",
+ "contentType": "image/jpeg",
+ "contentLength": 105891,
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/Auto.jpg",
+ "sequencer": "000000000000000000000000000089A4000000000018d6ea",
+ "storageDiagnostics": {
+ "batchId": "3418f7a9-7006-0014-00f6-406dc6000000"
+ }
+ },
+ "dataVersion": "",
+ "metadataVersion": "1"
+}
+```
-* The `clientRequestId` key is not included.
+### Microsoft.Storage.AsyncOperationInitiated event
-* The `contentType` key is set to `application/octet-stream`.
+```json
+{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/00000.avro",
+ "eventType": "Microsoft.Storage.AsyncOperationInitiated",
+ "eventTime": "2021-05-04T14:44:59.3204652Z",
+ "id": "8ea4e3f2-101e-003d-5ff4-4053b2061016",
+ "data": {
+ "api": "SetBlobTier",
+ "clientRequestId": "777fb4cd-f890-4c5b-b024-fb47300bae62",
+ "requestId": "8ea4e3f2-101e-003d-5ff4-4053b2000000",
+ "contentType": "application/octet-stream",
+ "contentLength": 3660,
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/00000.avro",
+ "sequencer": "000000000000000000000000000089A4000000000018c6d7",
+ "storageDiagnostics": {
+ "batchId": "34128c8a-7006-0014-00f4-406dc6000000"
+ }
+ },
+ "dataVersion": "",
+ "metadataVersion": "1"
+}
+```
-* The `contentOffset` key is included in the data set.
-* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+# [Cloud event schema](#tab/cloud-event-schema)
-> [!NOTE]
-> SFTP uploads will generate 2 events. One `SftpCreate` for an initial empty blob created when opening the file and one `SftpCommit` when the file contents are written.
+### Microsoft.Storage.BlobCreated event
```json [{
- "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/new-file.txt",
- "eventType": "Microsoft.Storage.BlobCreated",
- "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/test-container/blobs/new-file.txt",
+ "type": "Microsoft.Storage.BlobCreated",
+ "time": "2017-06-26T18:41:00.9584103Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "SftpCommit",
+ "api": "PutBlockList",
+ "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000", "eTag": "\"0x8D4BCC2E4835CD0\"",
- "contentType": "application/octet-stream",
- "contentLength": 0,
- "contentOffset": 0,
+ "contentType": "text/plain",
+ "contentLength": 524288,
"blobType": "BlockBlob", "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt", "sequencer": "00000000000004420000000000028963",
- "identity":"localuser",
"storageDiagnostics": {
- "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
} },
- "dataVersion": "3",
- "metadataVersion": "1"
+ "specversion": "1.0"
}] ```
If the blob storage account uses SFTP to create or overwrite a blob, then the da
```json [{
- "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
"subject": "/blobServices/default/containers/testcontainer/blobs/file-to-delete.txt",
- "eventType": "Microsoft.Storage.BlobDeleted",
- "eventTime": "2017-11-07T20:09:22.5674003Z",
+ "type": "Microsoft.Storage.BlobDeleted",
+ "time": "2017-11-07T20:09:22.5674003Z",
"id": "4c2359fe-001e-00ba-0e04-58586806d298", "data": { "api": "DeleteBlob",
If the blob storage account uses SFTP to create or overwrite a blob, then the da
"batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "dataVersion": "",
- "metadataVersion": "1"
+ "specversion": "1.0"
}] ```
-### Microsoft.Storage.BlobDeleted event (Data Lake Storage Gen2)
+### Microsoft.Storage.BlobTierChanged event
-If the blob storage account has a hierarchical namespace, the data looks similar to the previous example with an exception of these changes:
+```json
+{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/Auto.jpg",
+ "type": "Microsoft.Storage.BlobTierChanged",
+ "time": "2021-05-04T15:00:00.8350154Z",
+ "id": "0fdefc06-b01e-0034-39f6-4016610696f6",
+ "data": {
+ "api": "SetBlobTier",
+ "clientRequestId": "68be434c-1a0d-432f-9cd7-1db90bff83d7",
+ "requestId": "0fdefc06-b01e-0034-39f6-401661000000",
+ "contentType": "image/jpeg",
+ "contentLength": 105891,
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/Auto.jpg",
+ "sequencer": "000000000000000000000000000089A4000000000018d6ea",
+ "storageDiagnostics": {
+ "batchId": "3418f7a9-7006-0014-00f6-406dc6000000"
+ }
+ },
+ "specversion": "1.0"
+}
+```
+
+### Microsoft.Storage.AsyncOperationInitiated event
+
+```json
+{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/00000.avro",
+ "type": "Microsoft.Storage.AsyncOperationInitiated",
+ "time": "2021-05-04T14:44:59.3204652Z",
+ "id": "8ea4e3f2-101e-003d-5ff4-4053b2061016",
+ "data": {
+ "api": "SetBlobTier",
+ "clientRequestId": "777fb4cd-f890-4c5b-b024-fb47300bae62",
+ "requestId": "8ea4e3f2-101e-003d-5ff4-4053b2000000",
+ "contentType": "application/octet-stream",
+ "contentLength": 3660,
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/00000.avro",
+ "sequencer": "000000000000000000000000000089A4000000000018c6d7",
+ "storageDiagnostics": {
+ "batchId": "34128c8a-7006-0014-00f4-406dc6000000"
+ }
+ },
+ "specversion": "1.0"
+}
+```
+++
+## Data Lake Storage Gen 2 events
+
+These events are triggered if you enable a hierarchical namespace on the storage account and clients use Azure Data Lake Storage Gen2 REST APIs. For more information about Azure Data Lake Storage Gen2, see [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).
+
+|Event name|Description|
+|-|--|
+| [Microsoft.Storage.BlobCreated](#microsoftstorageblobcreated-event-data-lake-storage-gen2) | Triggered when a blob is created or replaced. <br>Specifically, this event is triggered when clients use the `CreateFile` and `FlushWithClose` operations that are available in the Azure Data Lake Storage Gen2 REST API. |
+| [Microsoft.Storage.BlobDeleted](#microsoftstorageblobdeleted-event-data-lake-storage-gen2) |Triggered when a blob is deleted. <br>Specifically, this event is also triggered when clients call the `DeleteFile` operation that is available in the Azure Data Lake Storage Gen2 REST API. |
+| [Microsoft.Storage.BlobRenamed](#microsoftstorageblobrenamed-event-data-lake-storage-gen2) |Triggered when a blob is renamed. <br>Specifically, this event is triggered when clients use the `RenameFile` operation that is available in the Azure Data Lake Storage Gen2 REST API.|
+| [Microsoft.Storage.DirectoryCreated](#microsoftstoragedirectorycreated-event-data-lake-storage-gen2) |Triggered when a directory is created. <br>Specifically, this event is triggered when clients use the `CreateDirectory` operation that is available in the Azure Data Lake Storage Gen2 REST API.|
+| [Microsoft.Storage.DirectoryRenamed](#microsoftstoragedirectoryrenamed-event-data-lake-storage-gen2) |Triggered when a directory is renamed. <br>Specifically, this event is triggered when clients use the `RenameDirectory` operation that is available in the Azure Data Lake Storage Gen2 REST API.|
+| [Microsoft.Storage.DirectoryDeleted](#microsoftstoragedirectorydeleted-event-data-lake-storage-gen2) |Triggered when a directory is deleted. <br>Specifically, this event is triggered when clients use the `DeleteDirectory` operation that is available in the Azure Data Lake Storage Gen2 REST API.|
+
+> [!NOTE]
+> For **Azure Data Lake Storage Gen2**, if you want to ensure that the **Microsoft.Storage.BlobCreated** event is triggered only when a Block Blob is completely committed, filter the event for the `FlushWithClose` REST API call. This API call triggers the **Microsoft.Storage.BlobCreated** event only after data is fully committed to a Block Blob. To learn how to create a filter, see [Filter events for Event Grid](./how-to-filter-events.md).
+
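
As a purely illustrative sketch (assuming the Event Grid schema payloads shown in this article), the same check can also be made in the subscriber by inspecting the `data.api` key:

```python
import json

def is_committed_gen2_blob(raw_event: str) -> bool:
    """True only for BlobCreated events whose data was fully committed
    through the Data Lake Storage Gen2 FlushWithClose REST call."""
    event = json.loads(raw_event)
    return (
        event.get("eventType") == "Microsoft.Storage.BlobCreated"
        and event.get("data", {}).get("api") == "FlushWithClose"
    )
```

Filtering at the subscription level, as described in the linked article, avoids delivering the intermediate events in the first place.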
+### Example events
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+### Microsoft.Storage.BlobCreated event (Data Lake Storage Gen2)
+
+If the blob storage account has a hierarchical namespace, the data looks similar to the Blob Storage example, with the exception of these changes:
* The `dataVersion` key is set to a value of `2`.
-* The `data.api` key is set to the string `DeleteFile`.
+* The `data.api` key is set to the string `CreateFile` or `FlushWithClose`.
-* The `url` key contains the path `dfs.core.windows.net`.
+* The `contentOffset` key is included in the data set.
> [!NOTE]
-> If applications use the `DeleteBlob` operation to delete a blob from the account, the data won't contain these changes.
+> If applications use the `PutBlockList` operation to upload a new blob to the account, the data won't contain these changes.
```json [{ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/file-to-delete.txt",
- "eventType": "Microsoft.Storage.BlobDeleted",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/new-file.txt",
+ "eventType": "Microsoft.Storage.BlobCreated",
"eventTime": "2017-06-26T18:41:00.9584103Z", "id": "831e1650-001e-001b-66ab-eeb76e069631",
- "data": {
- "api": "DeleteFile",
+ "data": {
+ "api": "CreateFile",
"clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760", "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "eTag": "\"0x8D4BCC2E4835CD0\"",
"contentType": "text/plain",
+ "contentLength": 0,
+ "contentOffset": 0,
"blobType": "BlockBlob",
- "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/file-to-delete.txt",
+ "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/new-file.txt",
"sequencer": "00000000000004420000000000028963", "storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
-### Microsoft.Storage.BlobDeleted event (SFTP)
+### Microsoft.Storage.BlobDeleted event (Data Lake Storage Gen2)
-If the blob storage account uses SFTP to delete a blob, then the data looks similar to the previous example with an exception of these changes:
+If the blob storage account has a hierarchical namespace, the data looks similar to the previous example, with the exception of these changes:
* The `dataVersion` key is set to a value of `2`.
-* The `data.api` key is set to the string `SftpRemove`.
-
-* The `clientRequestId` key is not included.
+* The `data.api` key is set to the string `DeleteFile`.
-* The `contentType` key is set to `application/octet-stream`.
+* The `url` key contains the path `dfs.core.windows.net`.
-* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+> [!NOTE]
+> If applications use the `DeleteBlob` operation to delete a blob from the account, the data won't contain these changes.
```json [{ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/new-file.txt",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/file-to-delete.txt",
"eventType": "Microsoft.Storage.BlobDeleted",
- "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "eventTime": "2017-06-26T18:41:00.9584103Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631",
- "data": {
- "api": "SftpRemove",
+ "data": {
+ "api": "DeleteFile",
+ "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000", "contentType": "text/plain", "blobType": "BlockBlob",
- "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
+ "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/file-to-delete.txt",
"sequencer": "00000000000004420000000000028963",
- "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" }
If the blob storage account uses SFTP to delete a blob, then the data looks simi
}] ```
-### Microsoft.Storage.BlobTierChanged event
-
-```json
-{
- "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/Auto.jpg",
- "eventType": "Microsoft.Storage.BlobTierChanged",
- "id": "0fdefc06-b01e-0034-39f6-4016610696f6",
- "data": {
- "api": "SetBlobTier",
- "clientRequestId": "68be434c-1a0d-432f-9cd7-1db90bff83d7",
- "requestId": "0fdefc06-b01e-0034-39f6-401661000000",
- "contentType": "image/jpeg",
- "contentLength": 105891,
- "blobType": "BlockBlob",
- "url": "https://my-storage-account.blob.core.windows.net/testcontainer/Auto.jpg",
- "sequencer": "000000000000000000000000000089A4000000000018d6ea",
- "storageDiagnostics": {
- "batchId": "3418f7a9-7006-0014-00f6-406dc6000000"
- }
- },
- "dataVersion": "",
- "metadataVersion": "1",
- "eventTime": "2021-05-04T15:00:00.8350154Z"
-}
-```
-
-### Microsoft.Storage.AsyncOperationInitiated event
-
-```json
-{
- "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/00000.avro",
- "eventType": "Microsoft.Storage.AsyncOperationInitiated",
- "id": "8ea4e3f2-101e-003d-5ff4-4053b2061016",
- "data": {
- "api": "SetBlobTier",
- "clientRequestId": "777fb4cd-f890-4c5b-b024-fb47300bae62",
- "requestId": "8ea4e3f2-101e-003d-5ff4-4053b2000000",
- "contentType": "application/octet-stream",
- "contentLength": 3660,
- "blobType": "BlockBlob",
- "url": "https://my-storage-account.blob.core.windows.net/testcontainer/00000.avro",
- "sequencer": "000000000000000000000000000089A4000000000018c6d7",
- "storageDiagnostics": {
- "batchId": "34128c8a-7006-0014-00f4-406dc6000000"
- }
- },
- "dataVersion": "",
- "metadataVersion": "1",
- "eventTime": "2021-05-04T14:44:59.3204652Z"
-}
-```
--
-### Microsoft.Storage.BlobRenamed event
+### Microsoft.Storage.BlobRenamed event (Data Lake Storage Gen2)
```json [{
If the blob storage account uses SFTP to delete a blob, then the data looks simi
}] ```
-### Microsoft.Storage.BlobRenamed event (SFTP)
-
-If the blob storage account uses SFTP to rename a blob, then the data looks similar to the previous example with an exception of these changes:
-
-* The `data.api` key is set to the string `SftpRename`.
-
-* The `clientRequestId` key is not included.
-
-* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+### Microsoft.Storage.DirectoryCreated event (Data Lake Storage Gen2)
```json [{ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/my-renamed-file.txt",
- "eventType": "Microsoft.Storage.BlobRenamed",
- "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/my-new-directory",
+ "eventType": "Microsoft.Storage.DirectoryCreated",
+ "eventTime": "2017-06-26T18:41:00.9584103Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "SftpRename",
+ "api": "CreateDirectory",
+ "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "destinationUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-renamed-file.txt",
- "sourceUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-original-file.txt",
+ "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-new-directory",
"sequencer": "00000000000004420000000000028963",
- "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" }
If the blob storage account uses SFTP to rename a blob, then the data looks simi
}] ```
-### Microsoft.Storage.DirectoryCreated event
+### Microsoft.Storage.DirectoryRenamed event (Data Lake Storage Gen2)
```json [{ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/my-new-directory",
- "eventType": "Microsoft.Storage.DirectoryCreated",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/my-renamed-directory",
+ "eventType": "Microsoft.Storage.DirectoryRenamed",
"eventTime": "2017-06-26T18:41:00.9584103Z", "id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "CreateDirectory",
+ "api": "RenameDirectory",
"clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760", "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-new-directory",
+ "destinationUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-renamed-directory",
+ "sourceUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-original-directory",
"sequencer": "00000000000004420000000000028963", "storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
If the blob storage account uses SFTP to rename a blob, then the data looks simi
}] ```
-### Microsoft.Storage.DirectoryCreated event (SFTP)
-
-If the blob storage account uses SFTP to create a directory, then the data looks similar to the previous example with an exception of these changes:
-
-* The `dataVersion` key is set to a value of `2`.
-
-* The `data.api` key is set to the string `SftpMakeDir`.
-
-* The `clientRequestId` key is not included.
-
-* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+### Microsoft.Storage.DirectoryDeleted event (Data Lake Storage Gen2)
```json [{ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/my-new-directory",
- "eventType": "Microsoft.Storage.DirectoryCreated",
- "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/directory-to-delete",
+ "eventType": "Microsoft.Storage.DirectoryDeleted",
+ "eventTime": "2017-06-26T18:41:00.9584103Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "SftpMakeDir",
+ "api": "DeleteDirectory",
+ "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "url": "https://my-storage-account.blob.core.windows.net/testcontainer/my-new-directory",
+ "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/directory-to-delete",
+ "recursive": "true",
"sequencer": "00000000000004420000000000028963",
- "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "dataVersion": "2",
+ "dataVersion": "1",
"metadataVersion": "1" }] ```
-### Microsoft.Storage.DirectoryRenamed event
+
+# [Cloud event schema](#tab/cloud-event-schema)
+
+### Microsoft.Storage.BlobCreated event (Data Lake Storage Gen2)
+
+If the blob storage account has a hierarchical namespace, the data looks similar to the previous example, with the exception of these changes:
+
+* The `data.api` key is set to the string `CreateFile` or `FlushWithClose`.
+* The `contentOffset` key is included in the data set.
+
+> [!NOTE]
+> If applications use the `PutBlockList` operation to upload a new blob to the account, the data won't contain these changes.
```json [{
- "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/my-renamed-directory",
- "eventType": "Microsoft.Storage.DirectoryRenamed",
- "eventTime": "2017-06-26T18:41:00.9584103Z",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/new-file.txt",
+ "type": "Microsoft.Storage.BlobCreated",
+ "time": "2017-06-26T18:41:00.9584103Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "RenameDirectory",
+ "api": "CreateFile",
"clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760", "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "destinationUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-renamed-directory",
- "sourceUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-original-directory",
+ "eTag": "\"0x8D4BCC2E4835CD0\"",
+ "contentType": "text/plain",
+ "contentLength": 0,
+ "contentOffset": 0,
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/new-file.txt",
"sequencer": "00000000000004420000000000028963", "storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "dataVersion": "1",
- "metadataVersion": "1"
+ "specversion": "1.0"
}] ```
-### Microsoft.Storage.DirectoryRenamed event (SFTP)
-If the blob storage account uses SFTP to rename a directory, then the data looks similar to the previous example with an exception of these changes:
+### Microsoft.Storage.BlobDeleted event (Data Lake Storage Gen2)
-* The `data.api` key is set to the string `SftpRename`.
+If the blob storage account has a hierarchical namespace, the data looks similar to the previous example, with the exception of these changes:
-* The `clientRequestId` key is not included.
-* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+* The `data.api` key is set to the string `DeleteFile`.
+* The `url` key contains the path `dfs.core.windows.net`.
+
+> [!NOTE]
+> If applications use the `DeleteBlob` operation to delete a blob from the account, the data won't contain these changes.
```json [{
- "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/my-renamed-directory",
- "eventType": "Microsoft.Storage.DirectoryRenamed",
- "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/file-to-delete.txt",
+ "type": "Microsoft.Storage.BlobDeleted",
+ "time": "2017-06-26T18:41:00.9584103Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631",
- "data": {
- "api": "SftpRename",
+ "data": {
+ "api": "DeleteFile",
+ "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "destinationUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-renamed-directory",
- "sourceUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-original-directory",
+ "contentType": "text/plain",
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/file-to-delete.txt",
"sequencer": "00000000000004420000000000028963",
- "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "dataVersion": "1",
- "metadataVersion": "1"
+ "specversion": "1.0"
}] ```
-### Microsoft.Storage.DirectoryDeleted event
+### Microsoft.Storage.BlobRenamed event (Data Lake Storage Gen2)
```json [{
- "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/directory-to-delete",
- "eventType": "Microsoft.Storage.DirectoryDeleted",
- "eventTime": "2017-06-26T18:41:00.9584103Z",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/my-renamed-file.txt",
+ "type": "Microsoft.Storage.BlobRenamed",
+ "time": "2017-06-26T18:41:00.9584103Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "DeleteDirectory",
+ "api": "RenameFile",
"clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760", "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/directory-to-delete",
- "recursive": "true",
+ "destinationUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-renamed-file.txt",
+ "sourceUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-original-file.txt",
"sequencer": "00000000000004420000000000028963", "storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "dataVersion": "1",
- "metadataVersion": "1"
+ "specversion": "1.0"
}] ```
-### Microsoft.Storage.DirectoryDeleted event (SFTP)
-
-If the blob storage account uses SFTP to delete a directory, then the data looks similar to the previous example with an exception of these changes:
-
-* The `data.api` key is set to the string `SftpRemoveDir`.
-
-* The `clientRequestId` key is not included.
-
-* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+### Microsoft.Storage.DirectoryCreated event (Data Lake Storage Gen2)
```json [{
- "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/directory-to-delete",
- "eventType": "Microsoft.Storage.DirectoryDeleted",
- "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/my-new-directory",
+ "type": "Microsoft.Storage.DirectoryCreated",
+ "time": "2017-06-26T18:41:00.9584103Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "SftpRemoveDir",
+ "api": "CreateDirectory",
+ "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "url": "https://my-storage-account.blob.core.windows.net/testcontainer/directory-to-delete",
- "recursive": "false",
+ "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-new-directory",
"sequencer": "00000000000004420000000000028963",
- "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "dataVersion": "1",
- "metadataVersion": "1"
+ "specversion": "1.0"
}] ```
-### Microsoft.Storage.BlobInventoryPolicyCompleted event
+### Microsoft.Storage.DirectoryRenamed event (Data Lake Storage Gen2)
```json
-{
- "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.EventGrid/topics/BlobInventoryTopic",
- "subject": "BlobDataManagement/BlobInventory",
- "eventType": "Microsoft.Storage.BlobInventoryPolicyCompleted",
- "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+[{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/my-renamed-directory",
+ "type": "Microsoft.Storage.DirectoryRenamed",
+ "time": "2017-06-26T18:41:00.9584103Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
"data": {
- "scheduleDateTime": "2021-05-28T03:50:27Z",
- "accountName": "testaccount",
- "ruleName": "Rule_1",
- "policyRunStatus": "Succeeded",
- "policyRunStatusMessage": "Inventory run succeeded, refer manifest file for inventory details.",
- "policyRunId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "manifestBlobUrl": "https://testaccount.blob.core.windows.net/inventory-destination-container/2021/05/26/13-25-36/Rule_1/Rule_1.csv"
+ "api": "RenameDirectory",
+ "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "destinationUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-renamed-directory",
+ "sourceUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-original-directory",
+ "sequencer": "00000000000004420000000000028963",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
},
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-05-28T15:03:18Z"
-}
+ "specversion": "1.0"
+}]
```
-### Microsoft.Storage.LifecyclePolicyCompleted event
-
-```json
-{
- "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/contosoresourcegroup/providers/Microsoft.Storage/storageAccounts/contosostorageaccount",
- "subject": "BlobDataManagement/LifeCycleManagement/SummaryReport",
- "eventType": "Microsoft.Storage.LifecyclePolicyCompleted",
- "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "data": {
- "scheduleTime": "2022/05/24 22:57:29.3260160",
- "deleteSummary": {
- "totalObjectsCount": 16,
- "successCount": 14,
- "errorList": ""
- },
- "tierToCoolSummary": {
- "totalObjectsCount": 0,
- "successCount": 0,
- "errorList": ""
- },
- "tierToArchiveSummary": {
- "totalObjectsCount": 0,
- "successCount": 0,
- "errorList": ""
- }
- },
- "dataVersion": "1",
- "metadataVersion": "1",
- "eventTime": "2022-05-26T00:00:40.1880331"
-}
-```
-
-# [Cloud event schema](#tab/cloud-event-schema)
-
-### Microsoft.Storage.BlobCreated event
+### Microsoft.Storage.DirectoryDeleted event (Data Lake Storage Gen2)
```json [{ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/test-container/blobs/new-file.txt",
- "type": "Microsoft.Storage.BlobCreated",
+ "subject": "/blobServices/default/containers/my-file-system/blobs/directory-to-delete",
+ "type": "Microsoft.Storage.DirectoryDeleted",
"time": "2017-06-26T18:41:00.9584103Z", "id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "PutBlockList",
+ "api": "DeleteDirectory",
"clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760", "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "eTag": "\"0x8D4BCC2E4835CD0\"",
- "contentType": "text/plain",
- "contentLength": 524288,
- "blobType": "BlockBlob",
- "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
- "sequencer": "00000000000004420000000000028963",
+ "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/directory-to-delete",
+ "recursive": "true",
+ "sequencer": "00000000000004420000000000028963",
"storageDiagnostics": {
- "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
} }, "specversion": "1.0" }] ```
-### Microsoft.Storage.BlobCreated event (Data Lake Storage Gen2)
-If the blob storage account has a hierarchical namespace, the data looks similar to the previous example with an exception of these changes:
++
+## SFTP events
+
+These events are triggered if you enable a hierarchical namespace on the storage account, and clients use SFTP APIs. For more information about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) in Azure Blob Storage](../storage/blobs/secure-file-transfer-protocol-support.md).
+
+|Event name|Description|
+|-|--|
+| [Microsoft.Storage.BlobCreated](#microsoftstorageblobcreated-event-sftp) |Triggered when a blob is created or overwritten. <br>Specifically, this event is triggered when clients use the `put` operation, which corresponds to the `SftpCreate` and `SftpCommit` APIs. An empty blob is created when the file is opened and the uploaded contents are committed when the file is closed.|
+| [Microsoft.Storage.BlobDeleted](#microsoftstorageblobdeleted-event-sftp) |Triggered when a blob is deleted. <br>Specifically, this event is also triggered when clients call the `rm` operation, which corresponds to the `SftpRemove` API.|
+| [Microsoft.Storage.BlobRenamed](#microsoftstorageblobrenamed-event-sftp) |Triggered when a blob is renamed. <br>Specifically, this event is triggered when clients use the `rename` operation on files, which corresponds to the `SftpRename` API.|
+| [Microsoft.Storage.DirectoryCreated](#microsoftstoragedirectorycreated-event-sftp) |Triggered when a directory is created. <br>Specifically, this event is triggered when clients use the `mkdir` operation, which corresponds to the `SftpMakeDir` API.|
+| [Microsoft.Storage.DirectoryRenamed](#microsoftstoragedirectoryrenamed-event-sftp) |Triggered when a directory is renamed. <br>Specifically, this event is triggered when clients use the `rename` operation on a directory, which corresponds to the `SftpRename` API.|
+| [Microsoft.Storage.DirectoryDeleted](#microsoftstoragedirectorydeleted-event-sftp) |Triggered when a directory is deleted. <br>Specifically, this event is triggered when clients use the `rmdir` operation, which corresponds to the `SftpRemoveDir` API.|
+
+### Example events
+When an event is triggered, the Event Grid service sends data about that event to the subscribing endpoint. This section contains an example of what that data would look like for each blob storage event.
+
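
Note that the example payloads are JSON arrays; a hypothetical subscriber (a sketch, not prescribed by this article) would therefore iterate the delivery and read either `eventType` (Event Grid schema) or `type` (CloudEvents schema):

```python
import json

def process_delivery(request_body: bytes) -> None:
    """Iterate events delivered as a JSON array, as in the examples."""
    for event in json.loads(request_body):
        kind = event.get("eventType") or event.get("type")
        print(f"{kind}: {event.get('subject', '')}")
```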
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+### Microsoft.Storage.BlobCreated event (SFTP)
+
+If the blob storage account uses SFTP to create or overwrite a blob, then the data looks similar to the previous example, with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `3`.
+
+* The `data.api` key is set to the string `SftpCreate` or `SftpCommit`.
+
+* The `clientRequestId` key is not included.
+
+* The `contentType` key is set to `application/octet-stream`.
-* The `data.api` key is set to the string `CreateFile` or `FlushWithClose`.
* The `contentOffset` key is included in the data set.
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+ > [!NOTE]
-> If applications use the `PutBlockList` operation to upload a new blob to the account, the data won't contain these changes.
+> SFTP uploads generate two events: one `SftpCreate` for the initial empty blob created when the file is opened, and one `SftpCommit` when the file contents are written.
```json [{
- "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/new-file.txt",
- "type": "Microsoft.Storage.BlobCreated",
- "time": "2017-06-26T18:41:00.9584103Z",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/new-file.txt",
+ "eventType": "Microsoft.Storage.BlobCreated",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "CreateFile",
- "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
+ "api": "SftpCommit",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000", "eTag": "\"0x8D4BCC2E4835CD0\"",
- "contentType": "text/plain",
+ "contentType": "application/octet-stream",
"contentLength": 0, "contentOffset": 0, "blobType": "BlockBlob",
- "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/new-file.txt",
- "sequencer": "00000000000004420000000000028963",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "specversion": "1.0"
+ "dataVersion": "3",
+ "metadataVersion": "1"
}] ```
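
Because each SFTP upload produces both events, a subscriber that only cares about finished uploads can key off `data.api`. The following is a hypothetical sketch based on the example payload above, not code from this article:

```python
import json

def handle_sftp_blob_created(raw_event: str) -> None:
    """Ignore the SftpCreate placeholder and act on the committed upload."""
    event = json.loads(raw_event)
    data = event.get("data", {})
    if event.get("eventType") != "Microsoft.Storage.BlobCreated":
        return
    if data.get("api") == "SftpCreate":
        return  # empty blob created when the file was opened
    if data.get("api") == "SftpCommit":
        # 'identity' is the local user that authenticated over SFTP
        print(f"{data.get('identity')} uploaded {data.get('url')}")
```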
-### Microsoft.Storage.BlobDeleted event
+
+### Microsoft.Storage.BlobDeleted event (SFTP)
+
+If the blob storage account uses SFTP to delete a blob, then the data looks similar to the previous example, with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `2`.
+
+* The `data.api` key is set to the string `SftpRemove`.
+
+* The `clientRequestId` key is not included.
+
+* The `contentType` key is set to `application/octet-stream`.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
```json [{
- "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/testcontainer/blobs/file-to-delete.txt",
- "type": "Microsoft.Storage.BlobDeleted",
- "time": "2017-11-07T20:09:22.5674003Z",
- "id": "4c2359fe-001e-00ba-0e04-58586806d298",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/new-file.txt",
+ "eventType": "Microsoft.Storage.BlobDeleted",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
"data": {
- "api": "DeleteBlob",
- "requestId": "4c2359fe-001e-00ba-0e04-585868000000",
+ "api": "SftpRemove",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
"contentType": "text/plain", "blobType": "BlockBlob",
- "url": "https://my-storage-account.blob.core.windows.net/testcontainer/file-to-delete.txt",
- "sequencer": "0000000000000281000000000002F5CA",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
"storageDiagnostics": {
- "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
} },
- "specversion": "1.0"
+ "dataVersion": "2",
+ "metadataVersion": "1"
}] ```
-### Microsoft.Storage.BlobDeleted event (Data Lake Storage Gen2)
-If the blob storage account has a hierarchical namespace, the data looks similar to the previous example with an exception of these changes:
+### Microsoft.Storage.BlobRenamed event (SFTP)
+If the blob storage account uses SFTP to rename a blob, then the data looks similar to the previous example, with the exception of these changes:
-* The `data.api` key is set to the string `DeleteFile`.
-* The `url` key contains the path `dfs.core.windows.net`.
+* The `data.api` key is set to the string `SftpRename`.
-> [!NOTE]
-> If applications use the `DeleteBlob` operation to delete a blob from the account, the data won't contain these changes.
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
```json [{
- "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/file-to-delete.txt",
- "type": "Microsoft.Storage.BlobDeleted",
- "time": "2017-06-26T18:41:00.9584103Z",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-renamed-file.txt",
+ "eventType": "Microsoft.Storage.BlobRenamed",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631",
- "data": {
- "api": "DeleteFile",
- "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
+ "data": {
+ "api": "SftpRename",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "contentType": "text/plain",
- "blobType": "BlockBlob",
- "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/file-to-delete.txt",
+ "destinationUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-renamed-file.txt",
+ "sourceUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-original-file.txt",
"sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "specversion": "1.0"
+ "dataVersion": "1",
+ "metadataVersion": "1"
}] ```
-### Microsoft.Storage.BlobRenamed event
+### Microsoft.Storage.DirectoryCreated event (SFTP)
+
+If the blob storage account uses SFTP to create a directory, then the data looks similar to the previous example, with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `2`.
+
+* The `data.api` key is set to the string `SftpMakeDir`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
```json [{
- "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/my-renamed-file.txt",
- "type": "Microsoft.Storage.BlobRenamed",
- "time": "2017-06-26T18:41:00.9584103Z",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-new-directory",
+ "eventType": "Microsoft.Storage.DirectoryCreated",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "RenameFile",
- "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
+ "api": "SftpMakeDir",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "destinationUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-renamed-file.txt",
- "sourceUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-original-file.txt",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/my-new-directory",
"sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "specversion": "1.0"
+ "dataVersion": "2",
+ "metadataVersion": "1"
}] ```
-### Microsoft.Storage.DirectoryCreated event
+
+### Microsoft.Storage.DirectoryRenamed event (SFTP)
+
+If the blob storage account uses SFTP to rename a directory, then the data looks similar to the previous example, with the exception of these changes:
+
+* The `data.api` key is set to the string `SftpRename`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
```json [{
- "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/my-new-directory",
- "type": "Microsoft.Storage.DirectoryCreated",
- "time": "2017-06-26T18:41:00.9584103Z",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-renamed-directory",
+ "eventType": "Microsoft.Storage.DirectoryRenamed",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "CreateDirectory",
- "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
+ "api": "SftpRename",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-new-directory",
+ "destinationUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-renamed-directory",
+ "sourceUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-original-directory",
"sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "specversion": "1.0"
+ "dataVersion": "1",
+ "metadataVersion": "1"
}] ```
-### Microsoft.Storage.DirectoryRenamed event
+
+### Microsoft.Storage.DirectoryDeleted event (SFTP)
+
+If the blob storage account uses SFTP to delete a directory, then the data looks similar to the previous example, with the exception of these changes:
+
+* The `data.api` key is set to the string `SftpRemoveDir`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
```json [{
- "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/my-renamed-directory",
- "type": "Microsoft.Storage.DirectoryRenamed",
- "time": "2017-06-26T18:41:00.9584103Z",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/directory-to-delete",
+ "eventType": "Microsoft.Storage.DirectoryDeleted",
+ "eventTime": "2022-04-25T19:13:00.1522383Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "RenameDirectory",
- "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
+ "api": "SftpRemoveDir",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "destinationUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-renamed-directory",
- "sourceUrl": "https://my-storage-account.dfs.core.windows.net/my-file-system/my-original-directory",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/directory-to-delete",
+ "recursive": "false",
"sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" } },
- "specversion": "1.0"
+ "dataVersion": "1",
+ "metadataVersion": "1"
}] ```
-### Microsoft.Storage.DirectoryDeleted event
-```json
-[{
+# [Cloud event schema](#tab/cloud-event-schema)
+
+### Microsoft.Storage.BlobCreated event (SFTP)
+
+If the blob storage account uses SFTP to create or overwrite a blob, then the data looks similar to the previous example, with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `3`.
+
+* The `data.api` key is set to the string `SftpCreate` or `SftpCommit`.
+
+* The `clientRequestId` key is not included.
+
+* The `contentType` key is set to `application/octet-stream`.
+
+* The `contentOffset` key is included in the data set.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+> [!NOTE]
+> SFTP uploads generate two events: one `SftpCreate` for the initial empty blob created when the file is opened, and one `SftpCommit` when the file contents are written.
+
+```json
+[{
"source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
- "subject": "/blobServices/default/containers/my-file-system/blobs/directory-to-delete",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/new-file.txt",
+ "type": "Microsoft.Storage.BlobCreated",
+ "time": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpCommit",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "eTag": "\"0x8D4BCC2E4835CD0\"",
+ "contentType": "application/octet-stream",
+ "contentLength": 0,
+ "contentOffset": 0,
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "specversion": "1.0"
+}]
+```
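
Because each SFTP upload raises both an `SftpCreate` and an `SftpCommit` event, a subscriber that only cares about completed uploads typically filters on `data.api`. The following is a minimal sketch of that filtering in a Python Azure Functions Event Grid handler (the function name and binding setup are assumptions, not part of the event schema):

```python
import logging

import azure.functions as func


def main(event: func.EventGridEvent):
    """Act only on completed SFTP uploads; skip the empty SftpCreate placeholder blob."""
    data = event.get_json()
    if data.get("api") != "SftpCommit":
        return  # SftpCreate fires when the file is opened, before any content is written
    logging.info(
        "SFTP upload finished: %s (%s bytes, local user %s)",
        data.get("url"),
        data.get("contentLength"),
        data.get("identity"),
    )
```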
++
+### Microsoft.Storage.BlobDeleted event (SFTP)
+
+If the blob storage account uses SFTP to delete a blob, then the data looks similar to the previous example with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `2`.
+
+* The `data.api` key is set to the string `SftpRemove`.
+
+* The `clientRequestId` key is not included.
+
+* The `contentType` key is set to `application/octet-stream`.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/new-file.txt",
+ "type": "Microsoft.Storage.BlobDeleted",
+ "time": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpRemove",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "contentType": "text/plain",
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/new-file.txt",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "specversion": "1.0"
+}]
+```
++
+### Microsoft.Storage.BlobRenamed event (SFTP)
+
+If the blob storage account uses SFTP to rename a blob, then the data looks similar to the previous example with the exception of these changes:
+
+* The `data.api` key is set to the string `SftpRename`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-renamed-file.txt",
+ "type": "Microsoft.Storage.BlobRenamed",
+ "time": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpRename",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "destinationUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-renamed-file.txt",
+ "sourceUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-original-file.txt",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "specversion": "1.0"
+}]
+```
+
+### Microsoft.Storage.DirectoryCreated event (SFTP)
+
+If the blob storage account uses SFTP to create a directory, then the data looks similar to the previous example with the exception of these changes:
+
+* The `dataVersion` key is set to a value of `2`.
+
+* The `data.api` key is set to the string `SftpMakeDir`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-new-directory",
+ "type": "Microsoft.Storage.DirectoryCreated",
+ "time": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpMakeDir",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/my-new-directory",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "specversion": "1.0"
+}]
+```
++
+### Microsoft.Storage.DirectoryRenamed event (SFTP)
+
+If the blob storage account uses SFTP to rename a directory, then the data looks similar to the previous example with the exception of these changes:
+
+* The `data.api` key is set to the string `SftpRename`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/my-renamed-directory",
+ "type": "Microsoft.Storage.DirectoryRenamed",
+ "time": "2022-04-25T19:13:00.1522383Z",
+ "id": "831e1650-001e-001b-66ab-eeb76e069631",
+ "data": {
+ "api": "SftpRename",
+ "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
+ "destinationUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-renamed-directory",
+ "sourceUrl": "https://my-storage-account.blob.core.windows.net/testcontainer/my-original-directory",
+ "sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
+ "storageDiagnostics": {
+ "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
+ }
+ },
+ "specversion": "1.0"
+}]
+```
++
+### Microsoft.Storage.DirectoryDeleted event (SFTP)
+
+If the blob storage account uses SFTP to delete a directory, then the data looks similar to the previous example with the exception of these changes:
+
+* The `data.api` key is set to the string `SftpRemoveDir`.
+
+* The `clientRequestId` key is not included.
+
+* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
+
+```json
+[{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/directory-to-delete",
"type": "Microsoft.Storage.DirectoryDeleted",
- "time": "2017-06-26T18:41:00.9584103Z",
+ "time": "2022-04-25T19:13:00.1522383Z",
"id": "831e1650-001e-001b-66ab-eeb76e069631", "data": {
- "api": "DeleteDirectory",
- "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
+ "api": "SftpRemoveDir",
"requestId": "831e1650-001e-001b-66ab-eeb76e000000",
- "url": "https://my-storage-account.dfs.core.windows.net/my-file-system/directory-to-delete",
- "recursive": "true",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/directory-to-delete",
+ "recursive": "false",
"sequencer": "00000000000004420000000000028963",
+ "identity":"localuser",
"storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" }
If the blob storage account has a hierarchical namespace, the data looks similar
}] ``` +++
+## Policy-related events
+
+These events are triggered when the actions defined by a policy are performed.
+
+ |Event name |Description|
+ |-|--|
+ | [Microsoft.Storage.BlobInventoryPolicyCompleted](#microsoftstorageblobinventorypolicycompleted-event) |Triggered when the inventory run completes for a rule that is defined in an inventory policy. This event also occurs if the inventory run fails with a user error before it starts to run, for example, because of an invalid policy or because a destination container isn't present. |
+ | [Microsoft.Storage.LifecyclePolicyCompleted](#microsoftstoragelifecyclepolicycompleted-event) |Triggered when the actions defined by a lifecycle management policy are performed. |
+
+### Example events
+When an event is triggered, the Event Grid service sends data about that event to the subscribing endpoint. This section contains an example of what that data would look like for each blob storage event.
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+### Microsoft.Storage.BlobInventoryPolicyCompleted event
+
+```json
+{
+ "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.EventGrid/topics/BlobInventoryTopic",
+ "subject": "BlobDataManagement/BlobInventory",
+ "eventType": "Microsoft.Storage.BlobInventoryPolicyCompleted",
+ "eventTime": "2021-05-28T15:03:18Z",
+ "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "data": {
+ "scheduleDateTime": "2021-05-28T03:50:27Z",
+ "accountName": "testaccount",
+ "ruleName": "Rule_1",
+ "policyRunStatus": "Succeeded",
+ "policyRunStatusMessage": "Inventory run succeeded, refer manifest file for inventory details.",
+ "policyRunId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "manifestBlobUrl": "https://testaccount.blob.core.windows.net/inventory-destination-container/2021/05/26/13-25-36/Rule_1/Rule_1.csv"
+ },
+ "dataVersion": "1.0",
+ "metadataVersion": "1"
+}
+```
+
+### Microsoft.Storage.LifecyclePolicyCompleted event
+
+```json
+{
+ "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/contosoresourcegroup/providers/Microsoft.Storage/storageAccounts/contosostorageaccount",
+ "subject": "BlobDataManagement/LifeCycleManagement/SummaryReport",
+ "eventType": "Microsoft.Storage.LifecyclePolicyCompleted",
+ "eventTime": "2022-05-26T00:00:40.1880331",
+ "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "data": {
+ "scheduleTime": "2022/05/24 22:57:29.3260160",
+ "deleteSummary": {
+ "totalObjectsCount": 16,
+ "successCount": 14,
+ "errorList": ""
+ },
+ "tierToCoolSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ },
+ "tierToArchiveSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ }
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1"
+}
+```
+
+# [Cloud event schema](#tab/cloud-event-schema)
+
+### Microsoft.Storage.BlobInventoryPolicyCompleted event
+
+```json
+{
+ "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.EventGrid/topics/BlobInventoryTopic",
+ "subject": "BlobDataManagement/BlobInventory",
+ "type": "Microsoft.Storage.BlobInventoryPolicyCompleted",
+ "time": "2021-05-28T15:03:18Z",
+ "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "data": {
+ "scheduleDateTime": "2021-05-28T03:50:27Z",
+ "accountName": "testaccount",
+ "ruleName": "Rule_1",
+ "policyRunStatus": "Succeeded",
+ "policyRunStatusMessage": "Inventory run succeeded, refer manifest file for inventory details.",
+ "policyRunId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "manifestBlobUrl": "https://testaccount.blob.core.windows.net/inventory-destination-container/2021/05/26/13-25-36/Rule_1/Rule_1.csv"
+ },
+ "specversion": "1.0"
+}
+```
+
+### Microsoft.Storage.LifecyclePolicyCompleted event
+
+```json
+{
+ "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/contosoresourcegroup/providers/Microsoft.Storage/storageAccounts/contosostorageaccount",
+ "subject": "BlobDataManagement/LifeCycleManagement/SummaryReport",
+ "type": "Microsoft.Storage.LifecyclePolicyCompleted",
+ "time": "2022-05-26T00:00:40.1880331",
+ "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "data": {
+ "scheduleTime": "2022/05/24 22:57:29.3260160",
+ "deleteSummary": {
+ "totalObjectsCount": 16,
+ "successCount": 14,
+ "errorList": ""
+ },
+ "tierToCoolSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ },
+ "tierToArchiveSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ }
+ },
+ "specversion": "1.0"
+}
+```
++ ## Event properties
event-grid Manage Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/manage-event-delivery.md
To set a dead letter location, you need a storage account for holding events tha
> [!NOTE] > - Create a storage account and a blob container in the storage before running commands in this article. > - The Event Grid service creates blobs in this container. The names of blobs will have the name of the Event Grid subscription with all the letters in upper case. For example, if the name of the subscription is My-Blob-Subscription, names of the dead letter blobs will have MY-BLOB-SUBSCRIPTION (myblobcontainer/MY-BLOB-SUBSCRIPTION/2019/8/8/5/111111111-1111-1111-1111-111111111111.json). This behavior is to protect against differences in case handling between Azure services.
+> - In the above example .../2019/8/8/5/... represents the non-zero padded date and hour (UTC): .../YYYY/MM/DD/HH/...
> - The dead letter blobs created will contain one or more events in an array, which is an important behavior to consider when processing dead letters.
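
A processor that drains the dead-letter container can rely on that naming convention and on the fact that each blob holds a JSON array of events. Here's a minimal sketch using the `azure-storage-blob` Python SDK (the connection string, container name, and subscription name are placeholders):

```python
import json

from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    conn_str="<storage-connection-string>",      # placeholder
    container_name="myblobcontainer",            # placeholder
)

# Blob names follow <SUBSCRIPTION-NAME>/YYYY/M/D/H/<id>.json (non-zero-padded date and hour, UTC).
for blob in container.list_blobs(name_starts_with="MY-BLOB-SUBSCRIPTION/"):
    events = json.loads(container.download_blob(blob.name).readall())
    for event in events:                         # each dead letter blob contains an array of events
        print(event.get("id"), event.get("deadLetterReason"))
```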
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
+
+ Title: Details of the Azure Policy attestation structure
+description: Describes the components of the Azure Policy attestation JSON object.
Last updated : 09/23/2022++++
+# Azure Policy attestation structure
+
+`Microsoft.PolicyInsights/attestations`, called an Attestation resource, is a new proxy resource type
+ that sets the compliance states for targeted resources in a manual policy. You can only have one
+ attestation on one resource for an individual policy. In preview, Attestations are available
+only through the Azure Resource Manager (ARM) API.
+
+Below is an example of creating a new attestation resource:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/attestations/{name}?api-version=2019-10-01
+```
+
+## Request body
+
+Below is a sample attestation resource JSON object:
+
+```json
+"properties": {
+ "policyAssignmentId": "/subscriptions/{subscriptionID}/providers/microsoft.authorization/policyassignments/{assignmentID}",
+ "policyDefinitionReferenceId": "{definitionReferenceID}",
+ "complianceState": "Compliant",
+ "expiresOn": "2023-07-14T00:00:00Z",
+ "owner": "{AADObjectID}",
+ "comments": "This subscription has passed a security audit. See attached details for evidence",
+ "evidence": [
+ {
+ "description": "The results of the security audit.",
+ "sourceUri": "https://gist.github.com/contoso/9573e238762c60166c090ae16b814011"
+ },
+ {
+ "description": "Description of the attached evidence document.",
+ "sourceUri": "https://storagesamples.blob.core.windows.net/sample-container/contingency_evidence_adendum.docx"
+ },
+ ],
+}
+```
+
+|Property |Description |
+|||
+|policyAssignmentId |Required assignment ID for which the state is being set. |
+|policyDefinitionReferenceId |Optional definition reference ID, if within a policy initiative. |
+|complianceState |Desired state of the resources. Allowed values are `Compliant`, `NonCompliant`, and `Unknown`. |
+|owner |Optional Azure AD object ID of responsible party. |
+|comments |Optional description of why state is being set. |
+|evidence |Optional link array for attestation evidence. |
+
+Because attestations are a separate resource from policy assignments, they have their own lifecycle. You can PUT, GET and DELETE attestations by using the ARM API. See the [Policy REST API Reference](/rest/api/policy) for more details.
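
For example, reading an attestation back is a GET against the same resource path used for the PUT above. A minimal sketch with the Python `requests` library (token acquisition is assumed to have happened already, for example via `az account get-access-token`):

```python
import requests

# Same resource path as the PUT above; replace the placeholders before running.
url = (
    "https://management.azure.com/subscriptions/{subscriptionId}"
    "/resourceGroups/{resourceGroupName}"
    "/providers/Microsoft.PolicyInsights/attestations/{name}"
    "?api-version=2019-10-01"
)

headers = {"Authorization": "Bearer <access-token>"}  # ARM access token for the signed-in identity

response = requests.get(url, headers=headers)
response.raise_for_status()
print(response.json()["properties"]["complianceState"])
```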
+
+## Next steps
+
+- Review [Understanding policy effects](effects.md).
+- Study the [initiative definition structure](./initiative-definition-structure.md)
+- Review examples at [Azure Policy samples](../samples/index.md).
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported. Previously updated : 09/21/2022 Last updated : 09/23/2022
you'll need to create an attestation for that compliance state.
> During Public Preview, support for manual policy is available through various Microsoft Defender > for Cloud regulatory compliance initiatives. If you are a Microsoft Defender for Cloud [Premium tier](https://azure.microsoft.com/pricing/details/defender-for-cloud/) customer, refer to their experience overview.
+Currently, the following regulatory policy initiatives include policy definitions containing the manual effect:
+
+- FedRAMP High
+- FedRAMP Moderate
+- HIPAA
+- HITRUST
+- ISO 27001
+- Microsoft CIS 1.3.0
+- Microsoft CIS 1.4.0
+- NIST SP 800-171 Rev. 2
+- NIST SP 800-53 Rev. 4
+- NIST SP 800-53 Rev. 5
+- PCI DSS 3.2.1
+- PCI DSS 4.0
+- SOC TSP
+- SWIFT CSP CSCF v2022
+ The following example targets Azure subscriptions and sets the initial compliance state to `Unknown`. ```json
When a policy definition with `manual` effect is assigned, you have the option t
### Attestations `Microsoft.PolicyInsights/attestations`, called an Attestation resource, is a new proxy resource type
- that sets the compliance states for targeted resources in a manual policy. You can only have one
- attestation on one resource for an individual policy. In preview, Attestations are available
-only through the Azure Resource Manager (ARM) API.
-
-Below is an example of creating a new attestation resource:
-
-```http
-PUT http://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/attestations/{name}?api-version=2019-10-01
-```
-
-#### Request body
-
-Below is a sample attestation resource JSON object:
-
-```json
-"properties": {
- "policyAssignmentId": "/subscriptions/{subscriptionID}/providers/microsoft.authorization/policyassignments/{assignmentID}",
- "policyDefinitionReferenceId": "{definitionReferenceID}",
- "complianceState": "Compliant",
- "expiresOn": "2023-07-14T00:00:00Z",
- "owner": "{AADObjectID}",
- "comments": "This subscription has passed a security audit. See attached details for evidence",
- "evidence": [
- {
- "description": "The results of the security audit.",
- "sourceUri": "https://gist.github.com/contoso/9573e238762c60166c090ae16b814011"
- },
- {
- "description": "Description of the attached evidence document.",
- "sourceUri": "https://storagesamples.blob.core.windows.net/sample-container/contingency_evidence_adendum.docx"
- },
- ],
-}
-```
-
-|Property |Description |
-|||
-|policyAssignmentId |Required assignment ID for which the state is being set. |
-|policyDefinitionReferenceId |Optional definition reference ID, if within a policy initiative. |
-|complianceState |Desired state of the resources. Allowed values are `Compliant`, `NonCompliant`, and `Unknown`. |
-|owner |Optional Azure AD object ID of responsible party. |
-|comments |Optional description of why state is being set. |
-|evidence |Optional link array for attestation evidence. |
-
-Because attestations are a separate resource from policy assignments, they have their own lifecycle. You can PUT, GET and DELETE attestations by using the ARM API. See the [Policy REST API Reference](/rest/api/policy) for more details.
+ that sets the compliance states for targeted resources in a manual policy. Learn more about
+the attestation resource by reading [Azure Policy attestation structure](attestation-structure.md).
## Modify
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
+
+ Title: Benefits of migrating to Azure HDInsight 4.0.
+description: Learn the benefits of migrating to Azure HDInsight 4.0.
++ Last updated : 09/23/2022+
+# Significant version changes in HDInsight 4.0 and advantages
+
+HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an overview of what's new in Azure HDInsight 4.0.
+
+| # | OSS component | HDInsight 4.0 Version | HDInsight 3.6 Version |
+| | | | |
+| 1 | Apache Hadoop | 3.1.1 | 2.7.3 |
+| 2 | Apache HBase | 2.1.6 | 1.1.2 |
+| 3 | Apache Hive | 3.1.0 | 1.2.1, 2.1 (LLAP) |
+| 4 | Apache Kafka | 2.1.1, 2.4(GA) | 1.1 |
+| 5 | Apache Phoenix | 5 | 4.7.0 |
+| 6 | Apache Spark | 2.4.4, 3.0.0(Preview) | 2.2 |
+| 7 | Apache TEZ | 0.9.1 | 0.7.0 |
+| 8 | Apache ZooKeeper | 3.4.6 | 3.4.6 |
+| 9 | Apache Kafka | 2.1.1, 2.4.1(Preview) | 1.1 |
+| 10 | Apache Ranger | 1.1.0 | 0.7.0 |
+
+## Workloads and Features
+
+**Hive**
+- Advanced features
+ - LLAP workload management
+ - LLAP Support JDBC, Druid and Kafka connectors
+  - Better SQL features - Constraints and default values
+ - Surrogate Keys
+ - Information schema.
+- Performance advantage
+  - Result caching - Caching query results allows a previously computed query result to be reused
+ - Dynamic materialized views - Pre-computation of summaries
+ - ACID V2 performance improvements in both storage format and execution engine
+- Security
+ - GDPR compliance enabled on Apache Hive transactions
+  - Hive UDF execution authorization in Ranger
+
+ **HBase**
+- Advanced features
+  - Procedure V2, or procv2, an updated framework for executing multistep HBase administrative operations.
+ - Fully off-heap read/write path.
+ - In-memory compactions
+ - HBase cluster supports Premium ADLS Gen2
+- Performance advantage
+ - Accelerated Writes uses Azure premium SSD managed disks to improve performance of the Apache HBase Write Ahead Log (WAL).
+- Security
+ - Hardening of both secondary indexes, which include Local and Global
+
+**Kafka**
+- Advanced features
+ - Kafka partition distribution on Azure fault domains
+ - Zstd compression support
+ - Kafka Consumer Incremental Rebalance
+ - Support MirrorMaker 2.0
+- Performance advantage
+ - Improved windowed aggregation performance in Kafka Streams
+ - Improved broker resiliency by reducing memory footprint of message conversion
+ - Replication protocol improvements for fast leader failover
+- Security
+ - Access control for topic creation for specific topics/topic prefix
+  - Hostname verification to prevent SSL configuration man-in-the-middle attacks
+ - Improved encryption support with faster Transport Layer Security (TLS) and CRC32C implementation
+
+**Spark**
+- Advanced features
+ - Structured streaming support for ORC
+ - Capability to integrate with new Metastore Catalog feature
+ - Structured Streaming support for Hive Streaming library
+ - Transparent write to Hive warehouse
+ - Spark Cruise - an automatic computation reuse system for Spark.
+- Performance advantage
+  - Result caching - Caching query results allows a previously computed query result to be reused
+ - Dynamic materialized views - Pre-computation of summaries
+- Security
+ - GDPR compliance enabled for Spark transactions
+
+## Hive Partition Discovery and Repair
+
+Hive automatically discovers and synchronizes the metadata of the partition in Hive Metastore.
+The `discover.partitions` table property enables and disables synchronization of the file system with partitions. In external partitioned tables, this property is enabled (true) by default.
+When Hive Metastore Service (HMS) is started in remote service mode, a background thread (`PartitionManagementTask`) is scheduled periodically every 300 seconds (configurable via the `metastore.partition.management.task.frequency` setting) that looks for tables with the `discover.partitions` table property set to true and performs `msck` repair in sync mode.
+
+If the table is a transactional table, then Exclusive Lock is obtained for that table before performing `msck repair`. With this table property, `MSCK REPAIR TABLE table_name SYNC PARTITIONS` is no longer required to be run manually.
+Assuming you have an external table created using a version of Hive that doesn't support partition discovery, enable partition discovery for the table.
+
+```
+ALTER TABLE exttbl SET TBLPROPERTIES ('discover.partitions' = 'true');
+```
+
+Set synchronization of partitions to occur every 10 minutes, expressed in seconds: in Ambari > Hive > Configs, set `metastore.partition.management.task.frequency` to 600.
+++
+> [!WARNING]
+> With the `management.task` running every 10 minutes, there will be pressure on the SQL server DTU.
+>
+You can verify the output from the Azure portal.
++
+Hive drops the metadata and corresponding data in any partition created after the retention period. You express the retention time using a numeral and one of the following characters.
+
+```
+ms (milliseconds)
+s (seconds)
+m (minutes)
+d (days)
+```
+
+To configure a partition retention period of one week:
+
+```
+ALTER TABLE employees SET TBLPROPERTIES ('partition.retention.period'='7d');
+```
+
+The partition metadata and the actual data for employees in Hive are automatically dropped after a week.
+
+## Hive 3
+
+### Performance optimizations available under Hive 3
+
+- OLAP Vectorization
+- Dynamic `Semijoin` reduction
+- Parquet support for vectorization with LLAP
+- Automatic query cache
+
+**New SQL features**
+
+- Materialized Views
+- Surrogate Keys
+- Constraints
+- Metastore `CachedStore`
+
+**OLAP Vectorization**
+
+Vectorization allows Hive to process a batch of rows together instead of processing one row at a time. Each batch is usually an array of primitive types. Operations are performed on the entire column vector, which improves the instruction pipelines and cache usage.
+Vectorized execution is also available for PTFs, rollup, and grouping sets.
+
+**Dynamic `Semijoin` reduction**
+
+Dynamic `semijoin` reduction dramatically improves performance for selective joins.
+It builds a Bloom filter from one side of the join and filters rows from the other side.
+It skips the scan and further evaluation of rows that wouldn't qualify for the join.
+
+**Parquet support for vectorization with LLAP**
+
+Vectorized query execution is a feature that greatly reduces the CPU usage for typical query operations such as
+
+* scans
+* filters
+* aggregate
+* joins
+
+Vectorization is also implemented for the ORC format. Spark has used whole-stage code generation and this kind of vectorization (for Parquet) since Spark 2.0.
+A timestamp column was added for Parquet vectorization and the Parquet format under LLAP.
+
+> [!WARNING]
+> Parquet writes are slow when converting timestamps to zoned times. For more information, see [**HIVE-24693**](https://issues.apache.org/jira/browse/HIVE-24693).
++
+### Automatic query cache
+1. With `hive.query.results.cache.enabled=true`, every query that runs in Hive 3 stores its result in a cache.
+1. If the input table changes, Hive evicts invalid data from the cache. For example, if you perform aggregation and the base table changes, queries you run most frequently stay in cache, but stale queries are evicted.
+1. The query result cache works with managed tables only because Hive can't track changes to an external table.
+1. If you join external and managed tables, Hive falls back to executing the full query. The query result cache works with ACID tables. If you update an ACID table, Hive reruns the query automatically.
+1. You can enable and disable the query result cache from the command line. You might want to do so to debug a query (see the sketch at the end of this section).
+1. Disable the query result cache by setting the following parameter to false: `hive.query.results.cache.enabled=false`
+1. Hive stores the query result cache in `/tmp/hive/__resultcache__/`. By default, Hive allocates 2 GB for the query result cache. You can change this setting by configuring the following parameter in bytes: `hive.query.results.cache.max.size`
+1. Changes to query processing: During query compilation, Hive checks the results cache to see if it already has the query results. If there's a cache hit, the query plan is set to a `FetchTask` that reads from the cached location.
+
+Parquet `DataWriteableWriter` relies on `NanoTimeUtils` to convert a timestamp object into a binary value; the conversion calls `toString()` on the timestamp object and then parses the resulting string.
+
+During query execution:
+
+1. If the results cache can be used for this query
+ 1. The query will be the `FetchTask` reading from the cached results directory.
+ 1. No cluster tasks will be required.
+1. If the results cache can't be used, run the cluster tasks as normal
+ 1. Check if the query results that have been computed are eligible to add to the results cache.
+ 1. If results can be cached, the temporary results generated for the query will be saved to the results cache. Steps may need to be done here to ensure the query results directory isn't deleted by query clean-up.
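
The command-line toggle mentioned in the list above can be issued from any HiveServer2 client session. Here's a minimal sketch using PyHive (an assumption; Beeline or any other HiveServer2 client works the same way):

```python
from pyhive import hive  # assumes a reachable HiveServer2 endpoint; `pip install pyhive`

conn = hive.connect(host="<hiveserver2-host>", port=10000, username="hive")
cursor = conn.cursor()

# Turn the query result cache off for this session while debugging a query.
cursor.execute("SET hive.query.results.cache.enabled=false")
cursor.execute("SELECT COUNT(*) FROM employees")
print(cursor.fetchall())
```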
+
+## SQL features
+
+**Materialized Views**
+
+The initial implementation introduced in Apache Hive 3.0.0 focuses on introducing materialized views and automatic query rewriting based on those materializations in the project. Materialized views can be stored natively in Hive or in other custom storage handlers (ORC), and they can seamlessly exploit exciting new Hive features such as LLAP acceleration.
+
+For more information, see [Hive - Materialized Views - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/hive-materialized-views/ba-p/2502785).
+
+### Surrogate Keys
+
+Use the built-in `SURROGATE_KEY` user-defined function (UDF) to automatically generate numerical IDs for rows as you enter data into a table. The generated surrogate keys can replace wide, multiple composite keys.
+
+Hive supports surrogate keys on ACID tables only. The table you want to join using surrogate keys can't have column types that need casting. These data types must be primitives, such as `INT` or `STRING`.
+
+Joins using the generated keys are faster than joins using strings. Using generated keys doesn't force data into a single node by a row number. You can generate keys as abstractions of natural keys. Surrogate keys have an advantage over UUIDs, which are slower and probabilistic.
+
+The `SURROGATE_KEY` UDF generates a unique ID for every row that you insert into a table.
+It generates keys based on the execution environment in a distributed system, which includes many factors, such as
+
+1. Internal data structures
+2. State of a table
+3. Last transaction ID.
+
+Surrogate key generation doesn't require any coordination between compute tasks. The UDF takes either no arguments or two arguments:
+
+1. Write ID bits
+1. Task ID bits
+
+### Constraints
+
+Hive supports SQL constraints to enforce data integrity and improve performance. The optimizer uses the constraint information to make smart decisions. Constraints can make data predictable and easy to locate.
+
+|Constraints|Description|
+|||
+|Check|Limits the range of values you can place in a column.|
+|PRIMARY KEY|Identifies each row in a table using a unique identifier.|
+|FOREIGN KEY|Identifies a row in another table using a unique identifier.|
+|UNIQUE KEY|Checks that values stored in a column are different.|
+|NOT NULL|Ensures that a column can't be set to NULL.|
+|ENABLE|Ensures that all incoming data conforms to the constraint.|
+|DISABLE|Doesn't ensure that all incoming data conforms to the constraint.|
+|VALIDATE|Checks that all existing data in the table conforms to the constraint.|
+|NOVALIDATE|Doesn't check that all existing data in the table conforms to the constraint.|
+|ENFORCED|Maps to ENABLE NOVALIDATE.|
+|NOT ENFORCED|Maps to DISABLE NOVALIDATE.|
+|RELY|Specifies abiding by a constraint; used by the optimizer to apply further optimizations.|
+|NORELY|Specifies not abiding by a constraint.|
+
+For more information, see https://cwiki.apache.org/confluence/display/Hive/Supported+Features%3A++Apache+Hive+3.1
+
+### Metastore `CachedStore`
+
+Hive metastore operations take significant time and can slow down Hive compilation. In some extreme cases, they take longer than the actual query run time. In particular, the latency of a cloud database is high, and 90% of total query runtime can be spent waiting for metastore SQL database operations. Based on this observation, metastore operation performance is greatly improved by adding an in-memory structure that caches database query results.
+
+`hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.cache.CachedStore`
++
+## Troubleshooting guide
+
+[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](/azure/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
+
+## References
+
+**Hive 3.1.0**
+
+https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/hive-overview/content/hive_whats_new_in_this_release_hive.html
+
+**HBase 2.1.6**
+
+https://apache.googlesource.com/hbase/+/ba26a3e1fd5bda8a84f99111d9471f62bb29ed1d/RELEASENOTES.md
+
+**Hadoop 3.1.1**
+
+https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/3.1.1/RELEASENOTES.3.1.1.html
+
+## Further reading
+
+* [HDInsight 4.0 Announcement](/azure/hdinsight/hdinsight-version-release.md)
+* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0.md)
hdinsight Apache Hbase Build Java Maven Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-build-java-maven-linux.md
description: Learn how to use Apache Maven to build a Java-based Apache HBase ap
Previously updated : 12/24/2019 Last updated : 09/23/2022 # Build Java applications for Apache HBase
Use the `-showErr` parameter to view the standard error (STDERR) that is produce
## Next steps
-[Learn how to use SQLLine with Apache HBase](apache-hbase-query-with-phoenix.md)
+[Learn how to use SQLLine with Apache HBase](apache-hbase-query-with-phoenix.md)
hdinsight Hdinsight Troubleshoot Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-hive.md
description: Get answers to common questions about working with Apache Hive and
keywords: Azure HDInsight, Hive, FAQ, troubleshooting guide, common questions Previously updated : 08/15/2019 Last updated : 09/23/2022 # Troubleshoot Apache Hive by using Azure HDInsight
There are two ways to collect the Tez DAG data:
## Next steps
hdinsight Use Pig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/use-pig.md
description: Learn how to use Pig with Apache Hadoop on HDInsight.
Previously updated : 01/28/2020 Last updated : 09/23/2022 # Use Apache Pig with Apache Hadoop on HDInsight
healthcare-apis Register Application Cli Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application-cli-rest.md
Choose a name for the secret and specify the expiration duration. The default is
###Add client secret with expiration. The default is one year. clientsecretname=mycert2 clientsecretduration=2
-clientsecret=$(az ad app credential reset --id $clientid --append --credential-description $clientsecretname --years $clientsecretduration --query password --output tsv)
+clientsecret=$(az ad app credential reset --id $clientid --append --display-name $clientsecretname --years $clientsecretduration --query password --output tsv)
echo $clientsecret ```
industrial-iot Overview What Is Industrial Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/overview-what-is-industrial-iot.md
Azure IIoT solutions are built from specific components:
The [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) acts as a central message hub for secure, bi-directional communications between any IoT application and the devices it manages. It's an open and flexible cloud platform as a service (PaaS) that supports open-source SDKs and multiple protocols.
-Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can process your combined data with Microsoft Azure services and tools, for example [Azure Stream Analytics](https://docs.microsoft.com/azure/stream-analytics), or visualize in your Business Intelligence platform of choice such as [Power BI](https://powerbi.microsoft.com).
+Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can process your combined data with Microsoft Azure services and tools, for example [Azure Stream Analytics](/azure/stream-analytics), or visualize in your Business Intelligence platform of choice such as [Power BI](https://powerbi.microsoft.com).
### IoT Edge devices
iot-central Tutorial Industrial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-industrial-end-to-end.md
+
+ Title: Tutorial - Explore an Azure IoT Central industrial scenario | Microsoft Docs
+description: This tutorial shows you how to deploy an end-to-end industrial IoT solution. You install an IoT Edge gateway, an IoT Central application, and an Azure Data Explorer workspace.
++ Last updated : 09/15/2022++++
+#Customer intent: As a solution builder, I want to deploy a complete industrial IoT solution that uses IoT Central so that I understand how IoT Central enables industrial IoT scenarios.
++
+# Explore an industrial IoT scenario with IoT Central
+
+The solution shows how to use Azure IoT Central to ingest industrial IoT data from edge resources and then export the data to Azure Data Explorer for further analysis. The sample deploys and configures resources such as:
+
+- An Azure virtual machine to host the Azure IoT Edge runtime.
+- An IoT Central application to ingest OPC-UA data, transform it, and then export it to Azure Data Explorer.
+- An Azure Data Explorer environment to store, manipulate, and explore the OPC-UA data.
+
+The following diagram shows the data flow in the scenario and highlights the key capabilities of IoT Central relevant to industrial solutions:
++
+The sample uses a custom tool to deploy and configure all of the resources. The tool shows you what resources it deploys and provides links to further information.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Deploy an end-to-end industrial IoT solution
+> * Use the **IoT Central Solution Builder** tool to deploy a solution
+> * Create a customized deployment
+
+## Prerequisites
+
+- Azure subscription.
+- Local machine to run the **IoT Central Solution Builder** tool. Pre-built binaries are available for Windows and macOS.
+- If you need to build the **IoT Central Solution Builder** tool instead of using one of the pre-built binaries, you need a local Git installation.
+- Text editor. If you want to edit the configuration file to customize your solution.
+
+In this tutorial, you use the Azure CLI to create an app registration in Azure Active Directory:
++
+## Setup
+
+Complete the following tasks to prepare the tool to deploy your solution:
+
+- Create an Azure Active Directory app registration
+- Install the **IoT Central Solution Builder** tool
+- Configure the **IoT Central Solution Builder** tool
+
+To create an Active Directory app registration in your Azure subscription:
+
+- If you're running the Azure CLI on your local machine, sign in to your Azure tenant:
+
+ ```azurecli
+ az login
+ ```
+
+ > [!TIP]
+ > If you're using the Azure Cloud Shell, you're signed in automatically. If you want to use a different subscription, use the [az account](/cli/azure/account?view=azure-cli-latest#az-account-set&preserve-view=true) command.
+
+- Make a note of the `id` value from the previous command. This value is your *subscription ID*. You use this value later in the tutorial.
+
+- Make a note of the `tenantId` value from the previous command. This value is your *tenant ID*. You use this value later in the tutorial.
+
+- To create an Active Directory app registration, run the following command:
+
+ ```azurecli
+ az ad app create \
+ --display-name "IoT Central Solution Builder" \
+ --enable-access-token-issuance false \
+ --enable-id-token-issuance false \
+ --is-fallback-public-client false \
+ --public-client-redirect-uris "msald38cef1a-9200-449d-9ce5-3198067beaa5://auth" \
+ --required-resource-accesses "[{\"resourceAccess\":[{\"id\":\"00d678f0-da44-4b12-a6d6-c98bcfd1c5fe\",\"type\":\"Scope\"}],\"resourceAppId\":\"2746ea77-4702-4b45-80ca-3c97e680e8b7\"},{\"resourceAccess\":[{\"id\":\"73792908-5709-46da-9a68-098589599db6\",\"type\":\"Scope\"}],\"resourceAppId\":\"9edfcdd9-0bc5-4bd4-b287-c3afc716aac7\"},{\"resourceAccess\":[{\"id\":\"41094075-9dad-400e-a0bd-54e686782033\",\"type\":\"Scope\"}],\"resourceAppId\":\"797f4846-ba00-4fd7-ba43-dac1f8f63013\"},{\"resourceAccess\":[{\"id\":\"e1fe6dd8-ba31-4d61-89e7-88639da4683d\",\"type\":\"Scope\"}],\"resourceAppId\":\"00000003-0000-0000-c000-000000000000\"}]" \
+ --sign-in-audience "AzureADandPersonalMicrosoftAccount"
+ ```
+
+ > [!NOTE]
+ > The display name must be unique in your subscription.
+
+- Make a note of the `appId` value from the output of the previous command. This value is your *application (client) ID*. You use this value later in the tutorial.
+
+To install the **IoT Central Solution Builder** tool:
+
+- If you're using Windows, download and run the latest setup file from the [releases](https://github.com/Azure-Samples/iotc-solution-builder/releases) page.
+- For other platforms, clone the [iotc-solution-builder](https://github.com/Azure-Samples/iotc-solution-builder) GitHub repository and follow the instructions in the readme file to [build the tool](https://github.com/Azure-Samples/iotc-solution-builder#build-the-tool).
+
+To configure the **IoT Central Solution Builder** tool:
+
+- Run the **IoT Central Solution Builder** tool.
+- Select **Action > Edit Azure config**:
+
+ :::image type="content" source="media/tutorial-industrial-end-to-end/iot-central-solution-builder-azure-config.png" alt-text="Screenshot that shows the edit Azure config menu option in the IoT solution builder tool.":::
+
+- Enter the application ID, subscription ID, and tenant ID that you made a note of previously. Select **OK**.
+
+- Select **Action > Sign in**. Sign in with the same credentials you used to create the Active Directory app registration.
+
+The **IoT Central Solution Builder** tool is now ready to use to deploy your industrial IoT solution.
+
+## Deploy the solution
+
+Use the **IoT Central Solution Builder** tool to deploy the Azure resources for the solution. The tool deploys and configures the resources to create a running solution.
+
+Download the [adxconfig-opcpub.json](https://raw.githubusercontent.com/Azure-Samples/iotc-solution-builder/main/iotedgeDeploy/configs/adxconfig-opcpub.json) configuration file. This configuration file deploys the required resources.
+
+To load the configuration file for the solution to deploy:
+
+- In the tool, select **Open Configuration**.
+- Select the `adxconfig-opcpub.json` file you downloaded.
+- The tool displays the deployment steps:
+
+ :::image type="content" source="media/tutorial-industrial-end-to-end/iot-central-solution-builder-steps.png" alt-text="Screenshot that shows the deployment steps defined in the configuration file loaded into the tool.":::
+
+ > [!TIP]
+ > Select any step to view relevant documentation.
+
+Each step uses either an ARM template or a REST API call to deploy or configure resources. Open the `adxconfig-opcpub.json` file to see the details of each step.
+
+To deploy the solution:
+
+- Select **Start Provisioning**.
+- Optionally, change the suffix and Azure location to use. The suffix is appended to the name of all the resources the tool creates to help you identify them in the Azure portal.
+- Select **Configure**.
+- The tool shows its progress as it deploys the solution.
+
+ > [!TIP]
+ > The tool takes about 15 minutes to deploy and configure all the resources.
+
+- Navigate to the Azure portal and sign in with the same credentials you used to sign in to the tool.
+- Find the resource group the tool created. The name of the resource group is **iotc-rg-{suffix from tool}**. In the following screenshot, the suffix used by the tool is **iotcsb29472**:
+
+ :::image type="content" source="media/tutorial-industrial-end-to-end/azure-portal-resources.png" alt-text="Screenshot that shows the deployed resources in the Azure portal.":::
+
+To customize the deployed solution, you can edit the `adxconfig-opcpub.json` configuration file and then run the tool.
+
+## Walk through the solution
+
+The configuration file run by the tool defines the Azure resources to deploy and any required configuration. The tool runs the steps in the configuration file in sequence. Some steps are dependent on previous steps.
+
+The following sections describe the resources you deployed and what they do. The order here follows the device data as it flows from the IoT Edge device to IoT Central, and then on to Azure Data Explorer:
++
+### IoT Edge
+
+The tool deploys the IoT Edge 1.2 runtime to an Azure virtual machine. The installation script that the tool runs edits the IoT Edge *config.toml* file to add the following values from IoT Central:
+
+- **Id scope** for the IoT Central app.
+- **Device Id** for the gateway device registered in the IoT Central app.
+- **Symmetric key** for the gateway device registered in the IoT Central app.
+
+The IoT Edge deployment manifest defines four custom modules:
+
+- [azuremetricscollector](../../iot-edge/how-to-collect-and-transport-metrics.md?view=iotedge-2020-11&tabs=iotcentral&preserve-view=true) - sends metrics from the IoT Edge device to the IoT Central application.
+- [opcplc](https://github.com/Azure-Samples/iot-edge-opc-plc) - generates simulated OPC-UA data.
+- [opcpublisher](https://github.com/Azure/Industrial-IoT/blob/main/docs/modules/publisher.md) - forwards OPC-UA data from an OPC-UA server to the **miabgateway**.
+- [miabgateway](https://github.com/iot-for-all/iotc-miab-gateway) - gateway to send OPC-UA data to your IoT Central app and handle commands sent from your IoT Central app.
+
+You can see the deployment manifest in the tool configuration file. The manifest is part of the device template that the tool adds to your IoT Central application.
+
+To learn more about how to use the REST API to deploy and configure the IoT Edge runtime, see [Run Azure IoT Edge on Ubuntu Virtual Machines](../../iot-edge/how-to-install-iot-edge-ubuntuvm.md).
+
+### Simulated OPC-UA telemetry
+
+The [opcplc](https://github.com/Azure-Samples/iot-edge-opc-plc) module on the IoT Edge device generates simulated OPC-UA data for the solution. This module implements an OPC-UA server with multiple nodes that generate random data and anomalies. The module also lets you configure user defined nodes.
+
+The [opcpublisher](https://github.com/Azure/Industrial-IoT/blob/main/docs/modules/publisher.md) module on the IoT Edge device forwards OPC-UA data from an OPC-UA server to the **miabgateway** module.
+
+### IoT Central application
+
+The IoT Central application in the solution:
+
+- Provides a cloud-hosted endpoint to receive OPC-UA data from the IoT Edge device.
+- Lets you manage and control the connected devices and gateways.
+- Transforms the OPC-UA data it receives and exports it to Azure Data Explorer.
+
+The configuration file uses a control plane [REST API to create and manage IoT Central applications](howto-manage-iot-central-with-rest-api.md).
+
+### Device templates and devices
+
+The solution uses a single device template called **Manufacturing In A Box Gateway** in your IoT Central application. The device template models the IoT Edge gateway and includes the **Manufacturing In A Box Gateway** and **Azure Metrics Collector** modules.
+
+The **Manufacturing In A Box Gateway** module includes the following interfaces:
+
+- **Manufacturing In A Box Gateway Device Interface**. This interface defines read-only properties and events such as **Processor architecture**, **Operating system**, **Software version**, and **Module Started** that the device reports to IoT Central. The interface also defines a **Restart Gateway Module** command and a writable **Debug Telemetry** property.
+- **Manufacturing In A Box Gateway Module Interface**. This interface lets you manage the downstream OPC-UA servers connected to the gateway. The interface includes commands such as the **Provision OPC Device** command that the tool calls during the configuration process.
+
+There are two devices registered in your IoT Central application:
+
+- **opc-anomaly-device**. This device isn't assigned to a device template. The device represents the OPC-UA server implemented in the **opcplc** IoT Edge module. This OPC-UA server generates simulated OPC-UA data. Because the device isn't associated with a device template, IoT Central marks the telemetry as **Unmodeled**.
+- **industrial-connect-gw**. This device is assigned to the **Manufacturing In A Box Gateway** device template. Use this device to monitor the health of the gateway and manage the downstream OPC-UA servers. The configuration file run by the tool calls the **Provision OPC Device** command to provision the downstream OPC-UA server.
+
+The configuration file uses the following data plane REST APIs to add the device templates and devices to the IoT Central application, register the devices, and retrieve the device provisioning authentication keys:
+
+- [How to use the IoT Central REST API to manage device templates](howto-manage-device-templates-with-rest-api.md).
+- [How to use the IoT Central REST API to control devices](howto-control-devices-with-rest-api.md).
+
+You can also use the IoT Central UI or CLI to manage the devices and gateways in your solution. For example, to check the **opc-anomaly-device** is sending data, navigate to the **Raw data** view for the device in the IoT Central application. If the device is sending telemetry, you see telemetry messages in the **Raw data** view. If there are no telemetry messages, restart the Azure virtual machine in the Azure portal.
+
+> [!TIP]
+> You can find the Azure virtual machine with IoT Edge runtime in the resource group created by the configuration tool.
+
+### Data export configuration
+
+The solution uses the IoT Central data export capability to export OPC-UA data. Data export continuously sends filtered telemetry received from the OPC-UA server to an Azure Data Explorer environment. The filter ensures that only data from the OPC-UA server is exported. The data export uses a [transformation](howto-transform-data-internally.md) to map the raw telemetry into a tabular structure suitable for Azure Data Explorer to ingest. The following snippet shows the transformation query:
+
+```jq
+{
+ applicationId: .applicationId,
+ deviceId: .device.id,
+ deviceName: .device.name,
+ templateName: .device.templateName,
+ enqueuedTime: .enqueuedTime,
+ telemetry: .telemetry | map({ key: .name, value: .value }) | from_entries,
+ }
+```
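
The same reshaping expressed in Python, for readers who prefer it over jq (a sketch; `message` stands for one exported telemetry message with the fields referenced in the query above):

```python
def flatten(message: dict) -> dict:
    """Mirror the jq transformation: lift device metadata and pivot telemetry into key/value pairs."""
    return {
        "applicationId": message["applicationId"],
        "deviceId": message["device"]["id"],
        "deviceName": message["device"]["name"],
        "templateName": message["device"]["templateName"],
        "enqueuedTime": message["enqueuedTime"],
        "telemetry": {item["name"]: item["value"] for item in message["telemetry"]},
    }
```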
+
+The configuration file uses the data plane REST API to create the data export configuration in IoT Central. To learn more, see [How to use the IoT Central REST API to manage data exports](howto-manage-data-export-with-rest-api.md).
+
+### Azure Data Explorer
+
+The solution uses Azure Data Explorer to store and analyze the OPC-UA telemetry. The solution uses two tables and a function to process the data as it arrives:
+
+- The **rawOpcData** table receives the data from the IoT Central data export. The solution configures this table for streaming ingestion.
+- The **opcDeviceData** table stores the transformed data.
+- The **extractOpcTagData** function processes the data as it arrives in the **rawOpcData** table and adds transformed records to the **opcDeviceData** table.
+
+You can query the transformed data in the **opcDeviceData** table. For example:
+
+```kusto
+opcDeviceData
+| where enqueuedTime > ago(1d)
+| where tag=="DipData"
+| summarize avgValue = avg(value) by deviceId, bin(sourceTimestamp, 15m)
+| render timechart
+```
+
+The configuration file uses a control plane REST API to deploy the Azure Data Explorer cluster and data plane REST APIs to create and configure the database.
+
+## Customize the solution
+
+The **IoT Central Solution Builder** tool uses a JSON configuration file to define the sequence of steps to run. To customize the solution, edit the configuration file. You can't modify an existing solution with the tool; you can only deploy a new solution.
+
+The example configuration file adds all the resources to the same resource group in your solution. To remove a deployed solution, delete the resource group.
+
+Each step in the configuration file defines one of the following actions:
+
+- Use an Azure Resource Manager template to deploy an Azure resource. For example, the sample configuration file uses a Resource Manager template to deploy the Azure virtual machine that hosts the IoT Edge runtime.
+- Make a REST API call to deploy or configure a resource. For example, the sample configuration file uses REST APIs to create and configure the IoT Central application.
+
+## Tidy up
+
+To avoid unnecessary charges, delete the resource group created by the tool when you've finished exploring the solution.
+
+## Next steps
+
+In this tutorial, you learned how to deploy an end-to-end industrial IoT scenario that uses IoT Central. To learn more about industrial IoT solutions with IoT Central, see:
+
+> [!div class="nextstepaction"]
+> [Industrial IoT patterns with Azure IoT Central](./concepts-iiot-architecture.md)
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
More about Azure Key Vault management guidelines, see:
| Key Vault Crypto User | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 | | Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 | | Key Vault Secrets Officer| Perform any action on the secrets of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | b86a8fe4-44ce-4948-aee5-eccb2c155cd7 |
-| Key Vault Secrets User | Read secret contents. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 |
+| Key Vault Secrets User | Read secret contents, including the secret portion of a certificate with a private key. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 |
+
+> [!NOTE]
+> There is no 'Key Vault Certificate User' role because applications require the secret portion of a certificate with the private key. The 'Key Vault Secrets User' role should be used for applications to retrieve a certificate.
+ For more information about Azure built-in roles definitions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
For more Information about how to create custom roles, see:
- 2000 Azure role assignments per subscription - Role assignments latency: at current expected performance, it will take up to 10 minutes (600 seconds) after role assignments is changed for role to be applied
+## Frequently asked questions
+
+### Can I use Key Vault role-based access control (RBAC) permission model object-scope assignments to provide isolation for application teams within Key Vault?
+No. The RBAC permission model lets you assign access to individual objects in Key Vault to a user or application, but any administrative operations like network access control, monitoring, and object management require vault-level permissions, which then expose secure information to operators across application teams.
+ ## Learn more - [Azure RBAC Overview](../../role-based-access-control/overview.md)
lab-services How To Request Capacity Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-request-capacity-increase.md
To create a support request, you must be assigned to one of the following roles
### Determine the regions for your labs Azure Lab Services resources can exist in many regions. You can choose to deploy resources in multiple regions close to your students. For more information about Azure regions, how they relate to global geographies, and which services are available in each region, see [Azure global infrastructure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
-### Locate and copy lab plan or lab account resource ID
-To add extra capacity to an existing lab, you must specify the lab's resource ID when you make the request.
+### Determine the total number of cores in your request
-Use the following steps to locate and copy the resource ID so that you can paste it into your support request.
-1. In the [Azure portal](https://portal.azure.com), navigate to the lab plan or lab account you want to add cores to.
+Your capacity can be divided among virtual machines (VMs) of different sizes. To determine the total number of cores for your request, calculate the total number of cores for each size, including both the cores you already have and the cores you want to add, and then map that total to the SKU size groups listed below.
+
+**Size groups**
+
+Azure Lab Services groups SKU sizes as follows:
+- Small / Medium / Large Cores
+- Medium (Nested Virtualization) / Large (Nested Virtualization) Cores
+- Small (GPU Compute) Cores
+- Small GPU (Visualization) Cores
+- Medium GPU (Visualization) Cores
+
+To determine the total number of cores for your request, you must:
+1. Select the VM sizes you want
+2. Calculate the total cores needed for each VM size
+3. Map to SKU group and sum all cores under each group
+4. Enter the resulting total number of cores for each group in your request
+
+As an example, suppose you have existing VMs and want to request more as shown in the following table:
+
+| Size | Existing VMs | Additional VMs required | Total VMs |
+|--|--|--|--|
+|Small|15|25|40|
+|Large|1|2|3|
+|Large (Nested Virtualization)|0|1|1|
+
+1. **Select the VM sizes you want.** In the virtual machine size list, select each of the VM sizes you want to use:
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/multiple-sku.png" alt-text="Screenshot showing the core increase request with multiple virtual machine sizes selected.":::
+
+2. **Next, calculate the total cores needed for each VM size.**
+Using the figures in the table above and the number of cores for each size in the dropdown, you can calculate the total number of cores as shown:
+ - *Small:* 40 small VMs x 2 cores = 80 cores
+ - *Large:* 3 large VMs x 8 cores = 24 cores
+ - *Large (Nested Virtualization):* 1 large nested virtualization VM x 8 cores = 8 cores
+
+3. **Map your cores to the SKU group and sum all the cores under each group.**
+Calculate the total number of cores for each size group.
+
+ The 40 small VMs and 3 large VMs are grouped together:
+ Requested total core limit for Small / Medium / Large = 80 + 24 = 104 cores
+
+ The large nested virtualization VM is grouped separately:
+ Requested total core limit for Medium (Nested Virtualization) / Large (Nested Virtualization) = 8 cores
+
+4. **Enter the resulting total number of cores for each group in your request.**
+
+ :::image type="content" source="./media/how-to-request-capacity-increase/total-cores-grouped.png" alt-text="Screenshot showing the total number of cores in each group.":::
+
+
+Remember that the total number of cores = existing cores + desired cores.
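
The same arithmetic can be sketched in a few lines of code; the cores-per-size values below come from the worked example above and may differ for other SKUs:

```python
# Total VMs (existing + additional) and cores per VM for each size used above.
total_vms = {"Small": 40, "Large": 3, "Large (Nested Virtualization)": 1}
cores_per_vm = {"Small": 2, "Large": 8, "Large (Nested Virtualization)": 8}

# Map each VM size to its SKU size group.
sku_group = {
    "Small": "Small / Medium / Large",
    "Large": "Small / Medium / Large",
    "Large (Nested Virtualization)": "Medium (Nested Virtualization) / Large (Nested Virtualization)",
}

requested_core_limit = {}
for size, count in total_vms.items():
    group = sku_group[size]
    requested_core_limit[group] = requested_core_limit.get(group, 0) + count * cores_per_vm[size]

print(requested_core_limit)
# {'Small / Medium / Large': 104,
#  'Medium (Nested Virtualization) / Large (Nested Virtualization)': 8}
```
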
+
+### Locate and copy lab plan resource ID
+Complete this step if you want to extend a lab plan in the updated version of Lab Services (August 2022).
+
+To add extra capacity to an existing subscription, you must specify a lab plan resource ID when you make the request. Although a lab plan is needed to make a capacity request, the actual capacity is assigned to your subscription, so you can use it where you need it. Capacity is not tied to individual lab plans. This means that you can delete all your lab plans and still have the same capacity assigned to your subscription.
+
+Use the following steps to locate and copy the lab plan resource ID so that you can paste it into your support request.
+1. In the [Azure portal](https://portal.azure.com), navigate to the lab plan to which you want to add cores.
1. Under Settings, select Properties, and then copy the **Resource ID**. :::image type="content" source="./media/how-to-request-capacity-increase/resource-id.png" alt-text="Screenshot showing the lab plan properties with resource ID highlighted.":::
-1. Paste the Resource ID into a document for safekeeping; you'll need it to complete the support request.
+1. Paste the resource ID into a document for safekeeping; you'll need it to complete the support request.
## Start a new support request You can follow these steps to request a limit increase:
You can follow these steps to request a limit increase:
## Make core limit increase request When you request core limit increase (sometimes called an increase in capacity), you must supply some information to help the Azure Lab Services team evaluate and action your request as quickly as possible. The more information you can supply and the earlier you supply it, the more quickly the Azure Lab Services team will be able to process your request.
-The information required for the lab accounts used in original version of Lab Services (May 2019) and the lab plans used in the updated version of Lab Services (August 2022) is different. Use the appropriate tab below to guide you as you complete the **Quota details**.
-#### [Lab Accounts](#tab/LabAccounts/)
+You need to specify different information depending on the version of Azure Lab Services you're using. The information required for the lab accounts used in the original version of Lab Services (May 2019) and the lab plans used in the updated version of Lab Services (August 2022) is detailed on the tabs below. Use the appropriate tab to guide you as you complete the **Quota details** for your lab account or lab plan.
+#### [**Lab Accounts (Classic) - May 2019 version**](#tab/LabAccounts/)
|Name |Value |
|||
|**Deployment Model**|Select **Lab Account (Classic)**|
|**Requested total core limit**|Enter the total number of cores for your subscription. Add the number of existing cores to the number of cores you're requesting.|
|**Region**|Select the regions that you would like to use. |
- |**Is this for an existing lab or to create a new lab?**|Select **Existing lab** or **New lab**. </br> If you're adding cores to an existing lab, enter the lab's resource ID.|
- |**What's the month-by-month usage plan for the requested cores?**|Enter the rate at which you want to add the extra cores.|
+ |**Is this for an existing lab or to create a new lab?**|Select **Existing lab** or **New lab**.|
+ |**What is the lab account name?**|Only applies if you're adding cores to an existing lab. Select the lab account name.|
+ |**What's the month-by-month usage plan for the requested cores?**|Enter the rate at which you want to add the extra cores, on a monthly basis.|
|**Additional details**|Answer the questions in the additional details box. The more information you can provide here, the easier it will be for the Azure Lab Services team to process your request. For example, you could include your preferred date for the new cores to be available. |
-#### [Lab Plans](#tab/Labplans/)
+#### [**Lab Plans - August 2022 version**](#tab/Labplans/)
:::image type="content" source="./media/how-to-request-capacity-increase/lab-plan.png" alt-text="Screenshot of the Quota details page for Lab Services v2.":::
The information required for the lab accounts used in original version of Lab Se
|**What is the minimum number of cores you can start with?**|Your new cores may be made available gradually. Enter the minimum number of cores you require.|
|**What's the ideal date to have this by? (MM/DD/YYYY)**|Enter the date on which you want the extra cores to be available.|
|**Is this for an existing lab or to create a new lab?**|Select **Existing lab** or **New lab**. </br> If you're adding cores to an existing lab, enter the lab's resource ID.|
- |**What is the month-by-month usage plan for the requested cores?**|Enter the rate at which you want to add the extra cores.|
+ |**What is the month-by-month usage plan for the requested cores?**|Enter the rate at which you want to add the extra cores, on a monthly basis.|
|**Additional details**|Answer the questions in the additional details box. The more information you can provide here, the easier it will be for the Azure Lab Services team to process your request. |
load-balancer Move Across Regions External Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-powershell.md
The following steps show how to prepare the external load balancer for the move
"name": "[parameters('publicIPAddresses_myPubIP_name')]", "location": "<target-region>", "sku": {
- "name": "Basic",
+ "name": "Standard",
"tier": "Regional" }, "properties": {
The following steps show how to prepare the external load balancer for the move
"resourceGuid": "7549a8f1-80c2-481a-a073-018f5b0b69be", "ipAddress": "52.177.6.204", "publicIPAddressVersion": "IPv4",
- "publicIPAllocationMethod": "Dynamic",
+ "publicIPAllocationMethod": "Static",
"idleTimeoutInMinutes": 4, "ipTags": [] }
The following steps show how to prepare the external load balancer for the move
``` 8. You can also change other parameters in the template if you choose, and are optional depending on your requirements:
- * **Sku** - You can change the sku of the public IP in the configuration from standard to basic or basic to standard by altering the **sku** > **name** property in the **\<resource-group-name>.json** file:
+ * **Sku** - You can change the sku of the public IP in the configuration from Standard to Basic or Basic to Standard by altering the **sku** > **name** property in the **\<resource-group-name>.json** file:
```json "resources": [
The following steps show how to prepare the external load balancer for the move
"name": "[parameters('publicIPAddresses_myPubIP_name')]", "location": "<target-region>", "sku": {
- "name": "Basic",
+ "name": "Standard",
"tier": "Regional" }, ``` For more information on the differences between basic and standard sku public ips, see [Create, change, or delete a public IP address](../virtual-network/ip-services/virtual-network-public-ip-address.md).
- * **Public IP allocation method** and **Idle timeout** - You can change both of these options in the template by altering the **publicIPAllocationMethod** property from **Dynamic** to **Static** or **Static** to **Dynamic**. The idle timeout can be changed by altering the **idleTimeoutInMinutes** property to your desired amount. The default is **4**:
+ * **Availability zone** - You can change the zone(s) of the public IP by changing the **zones** property. If the **zones** property isn't specified, the public IP will be created as no-zone. You can specify a single zone to create a zonal public IP, or all three zones for a zone-redundant public IP.
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/publicIPAddresses",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('publicIPAddresses_myPubIP_name')]",
+ "location": "<target-region>",
+ "sku": {
+ "name": "Standard",
+ "tier": "Regional"
+ },
+ "zones": [
+ "1",
+ "2",
+ "3"
+ ],
+ ```
+
+ * **Public IP allocation method** and **Idle timeout** - You can change both of these options in the template by altering the **publicIPAllocationMethod** property from **Static** to **Dynamic** or **Dynamic** to **Static**. The idle timeout can be changed by altering the **idleTimeoutInMinutes** property to your desired amount. The default is **4**:
```json "resources": [
The following steps show how to prepare the external load balancer for the move
"name": "[parameters('publicIPAddresses_myPubIP_name')]", "location": "<target-region>", "sku": {
- "name": "Basic",
+ "name": "Standard",
"tier": "Regional" }, "properties": {
The following steps show how to prepare the external load balancer for the move
"resourceGuid": "7549a8f1-80c2-481a-a073-018f5b0b69be", "ipAddress": "52.177.6.204", "publicIPAddressVersion": "IPv4",
- "publicIPAllocationMethod": "Dynamic",
+ "publicIPAllocationMethod": "Static",
"idleTimeoutInMinutes": 4, "ipTags": [] }
In this tutorial, you moved an Azure network security group from one region to a
- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-In logic app workflows, some triggers and actions support using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate access to resources protected by Azure Active Directory (Azure AD). This identity was previously known as a *Managed Service Identity (MSI)*. When you enable your logic app resource to use a managed identity for authentication, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information.
+In logic app workflows, some triggers and actions support using a managed identity for authenticating access to resources protected by Azure Active Directory (Azure AD). When you use a managed identity to authenticate your connection, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
-Azure Logic Apps supports the [*system-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md) and the [*user-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md), but the following differences exist between these identity types:
+Azure Logic Apps supports the [*system-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md) and the [*user-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md). The following list describes some differences between these identity types:
* A logic app resource can enable and use only one unique system-assigned identity. * A logic app resource can share the same user-assigned identity across a group of other logic app resources.
-* Based on your logic app resource type, you can enable either the system-assigned identity, user-assigned identity, or both at the same time:
+This article shows how to enable and set up a managed identity for your logic app and provides an example of how to use the identity for authentication. Unlike the system-assigned identity, which you don't have to manually create, you *do* have to manually create the user-assigned identity. This article shows how to create a user-assigned identity using the Azure portal and an Azure Resource Manager template (ARM template). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation:
- | Logic app resource type | Environment | Managed identity support |
- |-|-|--|
- | Consumption | - Multi-tenant Azure Logic Apps <p><p>- Integration service environment (ISE) | - You can enable *either* the system-assigned identity type *or* the user-assigned identity type on your logic app resource. <p>- If enabled with the user-assigned identity type, your logic app resource can have *only a single user-assigned identity* at any one time. <p>- You can use the identity at the logic app resource level and at the connection level. |
- | Standard | - Single-tenant Azure Logic Apps <p><p>- App Service Environment v3 (ASEv3) <p><p>- Azure Arc enabled Logic Apps | - You can enable *both* the system-assigned identity type, which is enabled by default, *and* the user-assigned identity type at the same time. <p>- Your logic app resource can have *multiple* user-assigned identities at the same time. <p>- You can use the identity at the logic app resource level and at the connection level. |
- ||||
+| Tool | Documentation |
+|||
+| Azure PowerShell | [Create user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-powershell.md) |
+| Azure CLI | [Create user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md) |
+| Azure REST API | [Create user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-rest.md) |
+
+## Consumption versus Standard logic apps
+
+Based on your logic app resource type, you can enable either the system-assigned identity, user-assigned identity, or both at the same time:
+
+| Logic app | Environment | Managed identity support |
+|--|-|--|
+| Consumption | - Multi-tenant Azure Logic Apps <br><br>- Integration service environment (ISE) | - Your logic app can enable *either* the system-assigned identity or the user-assigned identity. <br><br>- You can use the managed identity at the logic app resource level and connection level. <br><br>- If you enable the user-assigned identity, your logic app can have *only one* user-assigned identity at a time. |
+| Standard | - Single-tenant Azure Logic Apps <br><br>- App Service Environment v3 (ASEv3) <br><br>- Azure Arc enabled Logic Apps | - You can enable *both* the system-assigned identity, which is enabled by default, *and* the user-assigned identity at the same time. <br><br>- You can use the managed identity at the logic app resource level and connection level. <br><br>- If you enable the user-assigned identity, your logic app resource can have *multiple* user-assigned identities at a time. |
-To learn more about managed identity limits in Azure Logic Apps, review [Limits on managed identities for logic apps](logic-apps-limits-and-config.md#managed-identity). For more information about the Consumption and Standard logic app resource types and environments, review the following documentation:
+For more information about managed identity limits in Azure Logic Apps, review [Limits on managed identities for logic apps](logic-apps-limits-and-config.md#managed-identity). For more information about the Consumption and Standard logic app resource types and environments, review the following documentation:
* [What is Azure Logic Apps?](logic-apps-overview.md#resource-environment-differences) * [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md)
Only specific built-in and managed connector operations that support Azure AD Op
### [Consumption](#tab/consumption)
-The following table lists the operations where you can use either the system-assigned managed identity or user-assigned managed identity in the **Logic App (Consumption)** resource type:
+The following table lists the connectors that support using a managed identity in a Consumption logic app workflow:
-| Operation type | Supported operations |
+| Connector type | Supported connectors |
|-|-| | Built-in | - Azure API Management <br>- Azure App Services <br>- Azure Functions <br>- HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. However, they don't support the user-assigned managed identity for authenticating the same connections. |
-| Managed connector | - Azure AD <br>- Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
-|||
+| Managed | - Azure AD <br>- Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
### [Standard](#tab/standard)
-The following table lists the operations where you can use both the system-assigned managed identity and multiple user-assigned managed identities in the **Logic App (Standard)** resource type:
+The following table lists the connectors that support using a managed identity in a Standard logic app workflow:
-| Operation type | Supported operations |
+| Connector type | Supported connectors |
|-|-|
-| Built-in | - HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
+| Built-in | - Azure Event Hubs <br>- Azure Service Bus <br>- HTTP <br>- HTTP + Webhook <br>- SQL Server <br><br>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
| Managed connector | - Azure AD <br>- Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
-|||
-This article shows how to enable and set up the system-assigned identity or user-assigned identity, based on whether you're using the **Logic App (Consumption)** or **Logic App (Standard)** resource type. Unlike the system-assigned identity, which you don't have to manually create, you *do* have to manually create the user-assigned identity. This article includes the steps to create the user-assigned identity using the Azure portal and Azure Resource Manager template (ARM template). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation:
-
-| Tool | Documentation |
-|||
-| Azure PowerShell | [Create user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-powershell.md) |
-| Azure CLI | [Create user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md) |
-| Azure REST API | [Create user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-rest.md) |
-|||
- ## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Both the managed identity and the target Azure resource where you need access must use the same Azure subscription.
-* To give a managed identity access to an Azure resource, you need to add a role to the target resource for that identity. To add roles, you need [Azure AD administrator permissions](../active-directory/roles/permissions-reference.md) that can assign roles to identities in the corresponding Azure AD tenant.
-
-* The target Azure resource that you want to access. On this resource, you'll add a role for the managed identity, which helps the logic app resource or connection authenticate access to the target resource.
+* The target Azure resource that you want to access. On this resource, you'll add the necessary role for the managed identity to access that resource on your logic app's or connection's behalf. To add a role to a managed identity, you need [Azure AD administrator permissions](../active-directory/roles/permissions-reference.md) that can assign roles to identities in the corresponding Azure AD tenant.
-* The logic app resource where you want to use the [trigger or actions that support managed identities](logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
+* The logic app resource and workflow where you want to use the [trigger or actions that support managed identities](logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
<a name="system-assigned-azure-portal"></a> <a name="azure-portal-system-logic-app"></a>
This article shows how to enable and set up the system-assigned identity or user
### [Consumption](#tab/consumption)
-1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
+1. In the [Azure portal](https://portal.azure.com), go to your logic app resource.
1. On the logic app menu, under **Settings**, select **Identity**.
This article shows how to enable and set up the system-assigned identity or user
| Property | Value | Description | |-|-|-| | **Object (principal) ID** | <*identity-resource-ID*> | A Globally Unique Identifier (GUID) that represents the system-assigned identity for your logic app in an Azure AD tenant. |
- ||||
1. Now follow the [steps that give that identity access to the resource](#access-other-resources) later in this topic.
On a **Logic App (Standard)** resource, the system-assigned identity is automati
## Enable system-assigned identity in an ARM template
-To automate creating and deploying logic app resources, you can use an [ARM template](logic-apps-azure-resource-manager-templates-overview.md). To enable the system-assigned managed identity for your logic app resource in the template, add the `identity` object and the `type` child property to the logic app's resource definition in the template, for example:
+To automate creating and deploying logic app resources, you can use an [ARM template](logic-apps-azure-resource-manager-templates-overview.md). To enable the system-assigned identity for your logic app resource in the template, add the `identity` object and the `type` child property to the logic app's resource definition in the template, for example:
### [Consumption](#tab/consumption)
Before you can enable the user-assigned identity on your **Logic App (Consumptio
### [Standard](#tab/standard)
-1. In the Azure portal, open your logic app resource.
+1. In the Azure portal, go to your logic app resource.
1. On the logic app menu, under **Settings**, select **Identity**.
For example, to access an Azure Blob storage account with your managed identity,
| Azure PowerShell | [Add role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md) | | Azure CLI | [Add role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md) | | Azure REST API | [Add role assignment](../role-based-access-control/role-assignments-rest.md) |
-|||
However, to access an Azure key vault with your managed identity, you have to create an access policy for that identity on your key vault and assign the appropriate permissions for that identity on that key vault. The later steps in this section describe how to complete this task by using the [Azure portal](#azure-portal-access-policy). For Resource Manager templates, PowerShell, and Azure CLI, review the following documentation:
However, to access an Azure key vault with your managed identity, you have to cr
| Azure Resource Manager template (ARM template) | [Key Vault access policy resource definition](/azure/templates/microsoft.keyvault/vaults) | | Azure PowerShell | [Assign a Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-powershell) | | Azure CLI | [Assign a Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-cli) |
-|||
<a name="azure-portal-assign-role"></a>
To use a managed identity for authentication, some Azure resources, such as Azur
1. On the resource's menu, select **Access control (IAM)** > **Add** > **Add role assignment**. > [!NOTE]
+ >
> If the **Add role assignment** option is disabled, you don't have permissions to assign roles. > For more information, review [Azure AD built-in roles](../active-directory/roles/permissions-reference.md).
To use a managed identity for authentication, some Azure resources, such as Azur
|||--|--| | **System-assigned** | **Logic App** | <*Azure-subscription-name*> | <*your-logic-app-name*> | | **User-assigned** | Not applicable | <*Azure-subscription-name*> | <*your-user-assigned-identity-name*> |
- |||||
For more information about assigning roles, review the documentation, [Assign roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
These steps show how to use the managed identity with a trigger or action throug
1. If you haven't done so yet, add the [trigger or action that supports managed identities](logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). > [!NOTE]
+ >
> Not all triggers and actions support letting you add an authentication type. For more information, review > [Authentication types for triggers and actions that support authentication](logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
The built-in HTTP trigger or action can use the system-assigned identity that yo
| **Headers** | No | Any header values that you need or want to include in the outgoing request, such as the content type | | **Queries** | No | Any query parameters that you need or want to include in the request. For example, query parameters for a specific operation or for the API version of the operation that you want to run. | | **Authentication** | Yes | The authentication type to use for authenticating access to the target resource or entity |
-||||
As a specific example, suppose that you want to run the [Snapshot Blob operation](/rest/api/storageservices/snapshot-blob) on a blob in the Azure Storage account where you previously set up access for your identity. However, the [Azure Blob Storage connector](/connectors/azureblob/) doesn't currently offer this operation. Instead, you can run this operation by using the [HTTP action](logic-apps-workflow-actions-triggers.md#http-action) or another [Blob Service REST API operation](/rest/api/storageservices/operations-on-blobs). > [!IMPORTANT]
+>
> To access Azure storage accounts behind firewalls by using the Azure Blob connector and managed identities, > make sure that you also set up your storage account with the [exception that allows access by trusted Microsoft services](../connectors/connectors-create-api-azureblobstorage.md#access-blob-storage-in-same-region-with-system-managed-identities).
To run the [Snapshot Blob operation](/rest/api/storageservices/snapshot-blob), t
| **URI** | Yes | `https://<storage-account-name>/<folder-name>/{name}` | The resource ID for an Azure Blob Storage file in the Azure Global (public) environment, which uses this syntax | | **Headers** | For Azure Storage | `x-ms-blob-type` = `BlockBlob` <p>`x-ms-version` = `2019-02-02` <p>`x-ms-date` = `@{formatDateTime(utcNow(),'r')}` | The `x-ms-blob-type`, `x-ms-version`, and `x-ms-date` header values are required for Azure Storage operations. <p><p>**Important**: In outgoing HTTP trigger and action requests for Azure Storage, the header requires the `x-ms-version` property and the API version for the operation that you want to run. The `x-ms-date` must be the current date. Otherwise, your workflow fails with a `403 FORBIDDEN` error. To get the current date in the required format, you can use the expression in the example value. <p>For more information, review these topics: <p><p>- [Request headers - Snapshot Blob](/rest/api/storageservices/snapshot-blob#request) <br>- [Versioning for Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests) | | **Queries** | Only for the Snapshot Blob operation | `comp` = `snapshot` | The query parameter name and value for the operation. |
-|||||
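
Outside Logic Apps, the same Snapshot Blob call can be sketched as a raw REST request, which may help clarify how the URI, headers, and query parameter listed above fit together (angle-bracket values are placeholders, and the developer credential here stands in for the workflow's managed identity):

```python
from datetime import datetime, timezone

import requests
from azure.identity import DefaultAzureCredential

# Placeholder URI: substitute your storage account, container, and blob names.
url = "https://<storage-account-name>.blob.core.windows.net/<container>/<blob>?comp=snapshot"

# Acquire a bearer token for Azure Storage; in the Logic Apps workflow, this is
# what the Authentication property provides for you.
token = DefaultAzureCredential().get_token("https://storage.azure.com/.default").token

headers = {
    "Authorization": f"Bearer {token}",
    "x-ms-blob-type": "BlockBlob",
    "x-ms-version": "2019-02-02",
    # RFC 1123 date, equivalent to the @{formatDateTime(utcNow(),'r')} expression.
    "x-ms-date": datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT"),
}

response = requests.put(url, headers=headers)
print(response.status_code)  # 201 indicates the snapshot was created
```
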
### [Consumption](#tab/consumption)
The following example shows a sample HTTP action with all the previously describ
![Screenshot showing Consumption workflow with HTTP action and "Add new parameter" list open with "Authentication" property selected.](./media/create-managed-service-identity/add-authentication-property.png) > [!NOTE]
+ >
> Not all triggers and actions support letting you add an authentication type. For more information, review > [Authentication types for triggers and actions that support authentication](logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
The following example shows a sample HTTP action with all the previously describ
This example continues with the **System-assigned managed identity**. 1. On some triggers and actions, the **Audience** property also appears for you to set the target resource ID. Set the **Audience** property to the [resource ID for the target resource or service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication). Otherwise, by default, the **Audience** property uses the `https://management.azure.com/` resource ID, which is the resource ID for Azure Resource Manager.
-
+ For example, if you want to authenticate access to a [Key Vault resource in the global Azure cloud](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-key-vault), you must set the **Audience** property to *exactly* the following resource ID: `https://vault.azure.net`. This specific resource ID *doesn't* have any trailing slashes. In fact, including a trailing slash might produce either a `400 Bad Request` error or a `401 Unauthorized` error. > [!IMPORTANT]
+ >
> Make sure that the target resource ID *exactly matches* the value that Azure Active Directory (AD) expects, > including any required trailing slashes. For example, the resource ID for all Azure Blob Storage accounts requires > a trailing slash. However, the resource ID for a specific storage account doesn't require a trailing slash. Check the
The following example shows a sample HTTP action with all the previously describ
![Screenshot showing Standard workflow with HTTP action and "Add new parameter" list open with "Authentication" property selected.](./media/create-managed-service-identity/add-authentication-property-standard.png) > [!NOTE]
+ >
> Not all triggers and actions support letting you add an authentication type. For more information, review > [Authentication types for triggers and actions that support authentication](logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
The following example shows a sample HTTP action with all the previously describ
For example, if you want to authenticate access to a [Key Vault resource in the global Azure cloud](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-key-vault), you must set the **Audience** property to *exactly* the following resource ID: `https://vault.azure.net`. This specific resource ID *doesn't* have any trailing slashes. In fact, including a trailing slash might produce either a `400 Bad Request` error or a `401 Unauthorized` error. > [!IMPORTANT]
+ >
> Make sure that the target resource ID *exactly matches* the value that Azure Active Directory (AD) expects, > including any required trailing slashes. For example, the resource ID for all Azure Blob Storage accounts requires > a trailing slash. However, the resource ID for a specific storage account doesn't require a trailing slash. Check the
The following example shows a sample HTTP action with all the previously describ
This example sets the **Audience** property to `https://storage.azure.com/` so that the access tokens used for authentication are valid for all storage accounts. However, you can also specify the root service URL, `https://<your-storage-account>.blob.core.windows.net`, for a specific storage account.
- ![Set "Audience" property to target resource ID](./media/create-managed-service-identity/specify-audience-url-target-resource-standard.png)
+ ![Screenshot showing the "Audience" property set to the target resource ID.](./media/create-managed-service-identity/specify-audience-url-target-resource-standard.png)
For more information about authorizing access with Azure AD for Azure Storage, review the following documentation:
The Azure Resource Manager managed connector has an action named **Read a resour
![Screenshot showing Azure Resource Manager action with the connection name entered and "System-assigned managed identity" selected.](./media/create-managed-service-identity/single-system-identity-consumption.png) > [!NOTE]
+ >
> If the managed identity isn't enabled when you try to create or change the connection, or if the identity > was removed while a managed identity-enabled connection still exists, an error appears stating that > you must enable the identity and grant it access to the target resource.
The Azure Resource Manager managed connector has an action named **Read a resour
If you're using a multiple-authentication trigger or action, such as Azure Blob Storage, the connection information pane shows an **Authentication type** list that includes the **Logic Apps Managed Identity** option among other authentication types. After you select this option, on the next pane, you can select an identity from the **Managed identity** list. > [!NOTE]
+ >
> If the managed identity isn't enabled when you try to create or change the connection, or if the identity > was removed while a managed identity-enabled connection still exists, an error appears stating that > you must enable the identity and grant it access to the target resource.
This example shows what the configuration looks like when the logic app enables
} } ```
-
+ This example shows what the configuration looks like when the logic app enables a *user-assigned* managed identity: ```json
This example shows what the configuration looks like when the logic app enables
If you use an ARM template to automate deployment, and your workflow includes an *API connection*, which is created by a [managed connector](../connectors/managed.md) such as Office 365 Outlook, Azure Key Vault, and so on that uses a managed identity, you have an extra step to take. In an ARM template, the underlying connector resource definition differs based on whether you have a Consumption or Standard logic app and whether the [connector shows single-authentication or multi-authentication options](#managed-connectors-managed-identity).
-
+ ### [Consumption](#tab/consumption) The following examples apply to Consumption logic apps and show how the underlying connector resource definition differs between a single-authentication connector, such as Azure Automation, and a multi-authentication connector, such as Azure Blob Storage.
This example shows the underlying connection resource definition for an Azure Au
* The `apiVersion` property is set to `2016-06-01`. * The `kind` property is set to `V1` for a Consumption logic app. * The `parameterValueType` property is set to `Alternative`.
-
+ ```json { "type": "Microsoft.Web/connections",
Following this `Microsoft.Web/connections` resource definition, make sure that y
| <*connection-name*> | The name for your API connection, for example, `azureblob` | | <*object-ID*> | The object ID for your Azure AD identity, previously saved from your app registration | | <*tenant-ID*> | The tenant ID for your Azure AD identity, previously saved from your app registration |
-|||
```json {
To stop using the managed identity for authentication, first [remove the identit
When you disable the managed identity on your logic app resource, you remove the capability for that identity to request access for Azure resources where the identity had access. > [!NOTE]
+>
> If you disable the system-assigned identity, any and all connections used by workflows in that > logic app won't work at runtime, even if you immediately enable the identity again. > This behavior happens because disabling the identity deletes the object ID. Each time that you
The steps in this section cover using the [Azure portal](#azure-portal-disable)
| Azure PowerShell | 1. [Remove role assignment](../role-based-access-control/role-assignments-powershell.md). <br>2. [Delete user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-powershell.md). | | Azure CLI | 1. [Remove role assignment](../role-based-access-control/role-assignments-cli.md). <br>2. [Delete user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md). | | Azure REST API | 1. [Remove role assignment](../role-based-access-control/role-assignments-rest.md). <br>2. [Delete user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-rest.md). |
-|||
<a name="azure-portal-disable"></a>
The following steps remove access to the target resource from the managed identi
1. In the roles list, select the managed identities that you want to remove. On the toolbar, select **Remove**. > [!TIP]
+ >
> If the **Remove** option is disabled, you most likely don't have permissions. > For more information about the permissions that let you manage roles for resources, review > [Administrator role permissions in Azure Active Directory](../active-directory/roles/permissions-reference.md).
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
This example template that has multiple secured parameter definitions that use t
<a name="authentication-types-supported-triggers-actions"></a>
-## Authentication types for triggers and actions that support authentication
+## Authentication types for connectors that support authentication
-The following table identifies the authentication types that are available on the triggers and actions where you can select an authentication type:
+The following table identifies the authentication types that are available on the connector operations where you can select an authentication type:
-| Authentication type | Supported triggers and actions |
-||--|
+| Authentication type | Logic app & supported connectors |
+||-|
| [Basic](#basic-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
| [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
+| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | - **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook <br><br>- **Standard**: Azure Event Hubs, Azure Service Bus, HTTP, HTTP Webhook, SQL Server |
| [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Managed identity](#managed-identity-authentication) | **Consumption logic app**: <br><br>- **Built-in**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <p><p>- **Managed connector**: Azure AD, Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure VM, HTTP with Azure AD, SQL Server <p><p>___________________________________________________________________________________________<p><p>**Standard logic app**: <p><p>- **Built-in**: HTTP, HTTP Webhook <p><p>- **Managed connector**: Azure AD, Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure VM, HTTP with Azure AD, SQL Server |
-|||
+| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Event Hubs, Azure Service Bus, HTTP, HTTP Webhook, SQL Server <br><br>**Managed connectors**: Azure AD, Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure VM, HTTP with Azure AD, SQL Server |
<a name="secure-inbound-requests"></a>
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Classification is a common machine learning task. Classification is a type of su
The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example at [Create a classification model with automated ML](tutorial-first-experiment-automated-ml.md).
-See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
+See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
### Regression
Similar to classification, regression tasks are also a common supervised learnin
Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning](v1/how-to-auto-train-models-v1.md).
-See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
+See an example of regression and automated machine learning for predictions in this Python notebook: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization).
### Time-series forecasting
Advanced forecasting configuration includes:
* rolling window aggregate features
-See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
+See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
### Computer vision (preview)
See the [how-to](./v1/how-to-configure-auto-train-v1.md#ensemble) for changing d
With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](concept-onnx.md).
-See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms).
+See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms).
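
As a rough sketch of the pattern the linked notebook uses with the v1 Python SDK (dataset and compute details are simplified, and the parameter names are assumptions to verify against the notebook):

```python
from azureml.core import Dataset, Experiment, Workspace
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()

# Placeholder: a tabular dataset already registered in the workspace.
train_data = Dataset.get_by_name(ws, name="<your-training-dataset>")

automl_config = AutoMLConfig(
    task="classification",
    training_data=train_data,
    label_column_name="<label-column>",
    enable_onnx_compatible_models=True,  # keep candidate models ONNX-convertible
)

run = Experiment(ws, "automl-onnx-sample").submit(automl_config)
run.wait_for_completion()

# Retrieve the best run together with its ONNX model.
best_run, onnx_model = run.get_output(return_onnx_model=True)
```
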
The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](./how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html).
How-to articles provide additional detail into what functionality automated ML o
### Jupyter notebook samples
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml).
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml).
### Python SDK reference
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
ws_basic = Workspace(
ml_client.workspaces.begin_create(ws_basic) # use MLClient to connect to the subscription and resource group and create workspace ```
-This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/resources/workspace/workspace.ipynb) shows more ways to create an Azure ML workspace using SDK v2.
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb) shows more ways to create an Azure ML workspace using SDK v2.
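
For context, the `ml_client` object that these snippets rely on is typically constructed as follows (a minimal sketch with placeholder values):

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholders: substitute your own subscription, resource group, and workspace.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Verify the connection by reading the workspace back.
ws = ml_client.workspaces.get("<workspace-name>")
print(ws.location)
```
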
cluster_basic = AmlCompute(
ml_client.begin_create_or_update(cluster_basic) ```
-This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/resources/compute/compute.ipynb) shows more ways to create compute using SDK v2.
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/resources/compute/compute.ipynb) shows more ways to create compute using SDK v2.
blob_datastore1 = AzureBlobDatastore(
ml_client.create_or_update(blob_datastore1) ```
-This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/resources/datastores/datastore.ipynb) shows more ways to create datastores using SDK v2.
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/resources/datastores/datastore.ipynb) shows more ways to create datastores using SDK v2.
my_env = Environment(
ml_client.environments.create_or_update(my_env) # use the MLClient to connect to workspace and create/register the environment ```
-This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/assets/environment/environment.ipynb) shows more ways to create custom environments using SDK v2.
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/assets/environment/environment.ipynb) shows more ways to create custom environments using SDK v2.
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-component.md
To learn more about how to build a component, see:
- [Component CLI v2 YAML reference](./reference-yaml-component-command.md). - [What is Azure Machine Learning Pipeline?](concept-ml-pipelines.md). - Try out [CLI v2 component example](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components).-- Try out [Python SDK v2 component example](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines).
+- Try out [Python SDK v2 component example](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines).
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
We don't recommend that admins revoke the access of the managed identity to the
> > If your workspace has attached AKS clusters, _and they were created before May 14th, 2021_, __do not delete this Azure AD account__. In this scenario, you must first delete and recreate the AKS cluster before you can delete the Azure AD account.
-You can provision the workspace to use user-assigned managed identity, and grant the managed identity additional roles, for example to access your own Azure Container Registry for base Docker images. For more information, see [Use managed identities for access control](how-to-use-managed-identities.md).
+You can provision the workspace to use user-assigned managed identity, and grant the managed identity additional roles, for example to access your own Azure Container Registry for base Docker images. For more information, see [Use managed identities for access control](how-to-identity-based-service-authentication.md).
You can also configure managed identities for use with Azure Machine Learning compute cluster. This managed identity is independent of workspace managed identity. With a compute cluster, the managed identity is used to access resources such as secured datastores that the user running the training job may not have access to. For more information, see [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
For more information, see the following articles:
* [Manage access to Azure Machine Learning](how-to-assign-roles.md) * [Connect to storage services](how-to-access-data.md) * [Use Azure Key Vault for secrets when training](how-to-use-secrets-in-runs.md)
-* [Use Azure AD managed identity with Azure Machine Learning](how-to-use-managed-identities.md)
+* [Use Azure AD managed identity with Azure Machine Learning](how-to-identity-based-service-authentication.md)
## Network security and isolation
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
Azure Machine Learning pipelines are a powerful facility that begins delivering
+ [Define pipelines with the Azure ML SDK v2](./how-to-create-component-pipeline-python.md) + [Define pipelines with Designer](./how-to-create-component-pipelines-ui.md) + Try out [CLI v2 pipeline example](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components)
-+ Try out [Python SDK v2 pipeline example](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines)
++ Try out [Python SDK v2 pipeline example](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines)
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
The following table shows which operations are supported by each of the tools av
## Example notebooks
-If you're getting started with MLflow in Azure Machine Learning, we recommend that you explore the [notebook examples about how to use MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/readme.md):
-
-* [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments by using MLflow, log models, and combine multiple flavors into pipelines.
-* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from compute that's running outside Azure Machine Learning. It shows how to authenticate against Azure Machine Learning services by using a service principal.
-* [Hyper-parameter optimization using Hyperopt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library Hyperopt. It shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
-* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/logging-models/logging_model_with_mlflow.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
-* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
-* [Manage model registries with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/model-management/model_management.ipynb): Demonstrates how to manage models in registries by using MLflow.
-* [Deploying models with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/deploying_with_mlflow.ipynb): Demonstrates how to deploy no-code models in MLflow format to a deployment target in Azure Machine Learning.
-* [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
-* [Migrating models with a scoring script to MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/migrating-scoring-to-mlflow/scoring_to_mlmodel.ipynb): Demonstrates how to migrate models with scoring scripts to no-code deployment with MLflow.
-* [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb): Demonstrates how to work with the MLflow REST API when you're connected to Azure Machine Learning.
+
+If you're getting started with MLflow in Azure Machine Learning, we recommend that you explore the [notebook examples about how to use MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/readme.md):
+
+* [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments by using MLflow, log models, and combine multiple flavors into pipelines.
+* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/train-with-mlflow/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from compute that's running outside Azure Machine Learning. It shows how to authenticate against Azure Machine Learning services by using a service principal.
+* [Hyper-parameter optimization using Hyperopt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/train-with-mlflow/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library Hyperopt. It shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
+* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/logging-models/logging_model_with_mlflow.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
+* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
+* [Manage model registries with MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/model-management/model_management.ipynb): Demonstrates how to manage models in registries by using MLflow.
+* [Deploying models with MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/no-code-deployment/deploying_with_mlflow.ipynb): Demonstrates how to deploy no-code models in MLflow format to a deployment target in Azure Machine Learning.
+* [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
+* [Migrating models with a scoring script to MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/migrating-scoring-to-mlflow/scoring_to_mlmodel.ipynb): Demonstrates how to migrate models with scoring scripts to no-code deployment with MLflow.
+* [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb): Demonstrates how to work with the MLflow REST API when you're connected to Azure Machine Learning.
## Next steps
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Registered models are identified by name and version. Each time you register a m
> * When you use the **Filter by** `Tags` option on the **Models** page of Azure Machine Learning Studio, instead of using `TagName : TagValue`, use `TagName=TagValue` without spaces. > * You can't delete a registered model that's being used in an active deployment.
-For more information, [Work with models in Azure Machine Learning](how-to-manage-models.md).
+For more information, see [Work with models in Azure Machine Learning](./how-to-manage-models.md).
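As a hedged illustration (not part of the original article), registering a model with SDK v2 and attaching a tag that can later be used with the `TagName=TagValue` filter might look like this; the model name, path, and tag are placeholders:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)

# Each registration under the same name creates a new version; tags are free-form metadata.
model = Model(
    path="./model",                 # local folder or file containing the model artifacts
    name="sklearn-regression",      # assumed model name
    tags={"stage": "production"},   # filter in Studio with stage=production
)
ml_client.models.create_or_update(model)
```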
### Package and debug models
Machine Learning gives you the capability to track the end-to-end audit trail of
- [Machine Learning datasets](how-to-create-register-datasets.md) help you track, profile, and version data. - [Interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for specific input. - Machine Learning Job history stores a snapshot of the code, data, and computes used to train a model.-- The Machine Learning Model Registry captures all the metadata associated with your model. For example, metadata includes which experiment trained it, where it's being deployed, and if its deployments are healthy.
+- The [Machine Learning Model Registry](./how-to-manage-models.md?tabs=use-local#create-a-model-in-the-model-registry) captures all the metadata associated with your model. For example, metadata includes which experiment trained it, where it's being deployed, and if its deployments are healthy.
- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events in the machine learning lifecycle. Examples are model registration, deployment, data drift, and training (job) events. > [!TIP] > While some information on models and datasets is automatically captured, you can add more information by using _tags_. When you look for registered models and datasets in your workspace, you can use tags as a filter.
->
-> Associating a dataset with a registered model is an optional step. For information on how to reference a dataset when you register a model, see the [Model](/python/api/azureml-core/azureml.core.model%28class%29) class reference.
## Notify, automate, and alert on events in the machine learning lifecycle Machine Learning publishes key events to Azure Event Grid, which can be used to notify and automate on events in the machine learning lifecycle. For more information, see [Use Event Grid](how-to-use-event-grid.md).
-## Monitor for operational and machine learning issues
-
-Monitoring enables you to understand what data is being sent to your model, and the predictions that it returns.
-
-This information helps you understand how your model is being used. The collected input data might also be useful in training future versions of the model.
-
-For more information, see [Enable model data collection](v1/how-to-enable-data-collection.md).
-
-## Retrain your model on new data
-
-Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](v1/how-to-monitor-datasets.md), model performance can degrade because of:
--- Changes to a particular sensor.-- Natural data changes such as seasonal effects.-- Features shifting in their relation to other features.-
-There's no universal answer to "How do I know if I should retrain?" The Machine Learning event and monitoring tools previously discussed are good starting points for automation. After you've decided to retrain, you should:
--- Preprocess your data by using a repeatable, automated process.-- Train your new model.-- Compare the outputs of your new model to the outputs of your old model.-- Use predefined criteria to choose whether to replace your old model.-
-A theme of the preceding steps is that your retraining should be automated, not improvised. [Machine Learning pipelines](concept-ml-pipelines.md) are a good answer for creating workflows that relate to data preparation, training, validation, and deployment. Read [Retrain models with Machine Learning designer](how-to-retrain-designer.md) to see how pipelines and the Machine Learning designer fit into a retraining scenario.
- ## Automate the machine learning lifecycle You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into the Git repo for a project, Azure Pipelines starts a training job. The results of the job can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
The [Machine Learning extension](https://marketplace.visualstudio.com/items?item
For more information on using Azure Pipelines with Machine Learning, see: * [Continuous integration and deployment of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
-* [Machine Learning MLOps](https://aka.ms/mlops) repository
-* [Machine Learning MLOpsPython](https://github.com/Microsoft/MLOpspython) repository
+* [Machine Learning MLOps](https://github.com/Azure/mlops-v2) repository
-You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](v1/how-to-cicd-data-ingestion.md).
## Next steps
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
### Kubernetes Compute
-[Kubernetes Cluster](./how-to-attach-kubernetes-anywhere.md) running behind an outbound proxy server or firewall needs extra network configuration. Configure the [Azure Arc network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli#meet-network-requirements) needed by Azure Arc agents. The following outbound URLs are also required for Azure Machine Learning,
+A [Kubernetes cluster](./how-to-attach-kubernetes-anywhere.md) running behind an outbound proxy server or firewall needs extra egress network configuration.
+
+* For a Kubernetes cluster with an Azure Arc connection, configure the [Azure Arc network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli#meet-network-requirements) needed by the Azure Arc agents.
+* For an AKS cluster without an Azure Arc connection, configure the [AKS extension network requirements](../aks/limit-egress-traffic.md#cluster-extensions).
+
+Besides the above requirements, the following outbound URLs are also required for Azure Machine Learning:
| Outbound Endpoint| Port | Description|Training |Inference | |--|--|--|--|--| | __\*.kusto.windows.net__<br>__\*.table.core.windows.net__<br>__\*.queue.core.windows.net__ | https:443 | Required to upload system logs to Kusto. |**&check;**|**&check;**|
-| __\*.azurecr.io__ | https:443 | Azure container registry, required to pull docker images used for machine learning workloads.|**&check;**|**&check;**|
-| __\*.blob.core.windows.net__ | https:443 | Azure blob storage, required to fetch machine learning project scripts,data or models, and upload job logs/outputs.|**&check;**|**&check;**|
-| __\*.workspace.\<region\>.api.azureml.ms__<br>__\<region\>.experiments.azureml.net__<br>__\<region\>.api.azureml.ms__ | https:443 | Azure Machine Learning service API.|**&check;**|**&check;**|
+| __\<your ACR name\>.azurecr.io__<br>__\<your ACR name>\.\<region name>\.data.azurecr.io__ | https:443 | Azure container registry, required to pull docker images used for machine learning workloads.|**&check;**|**&check;**|
+| __\<your storage account name\>.blob.core.windows.net__ | https:443 | Azure blob storage, required to fetch machine learning project scripts, data, or models, and upload job logs/outputs.|**&check;**|**&check;**|
+| __\<your AzureML workspace ID>.workspace.\<region\>.api.azureml.ms__<br>__\<region\>.experiments.azureml.net__<br>__\<region\>.api.azureml.ms__ | https:443 | Azure Machine Learning service API.|**&check;**|**&check;**|
| __pypi.org__ | https:443 | Python package index, to install pip packages used for training job environment initialization.|**&check;**|N/A| | __archive.ubuntu.com__<br>__security.ubuntu.com__<br>__ppa.launchpad.net__ | http:80 | Required to download the necessary security patches. |**&check;**|N/A| > [!NOTE] > `<region>` is the lowercase full spelling of the Azure region, for example, eastus, southeastasia.-
+>
+> `<your AML workspace ID>` can be found in Azure portal - your Machine Learning resource page - Properties - Workspace ID.
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] +
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ Learn how to access Azure resources from your scoring script with an online endpoint and either a system-assigned managed identity or a user-assigned managed identity.
This guide assumes you don't have a managed identity, a storage account or an on
## Prerequisites
+# [System-assigned (CLI)](#tab/system-identity-cli)
+ * To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. * Install and configure the Azure CLI and ML (v2) extension. For more information, see [Install, set up, and use the 2.0 CLI](how-to-configure-cli.md).
This guide assumes you don't have a managed identity, a storage account or an on
```azurecli az account set --subscription <subscription ID>
- az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
+ az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
``` * To follow along with the sample, clone the samples repository
This guide assumes you don't have a managed identity, a storage account or an on
git clone https://github.com/Azure/azureml-examples --depth 1 cd azureml-examples/cli ```
+
+# [User-assigned (CLI)](#tab/user-identity-cli)
-## Limitations
+* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* The identity for an endpoint is immutable. During endpoint creation, you can associate it with a system-assigned identity (default) or a user-assigned identity. You can't change the identity after the endpoint has been created.
+* Install and configure the Azure CLI and ML (v2) extension. For more information, see [Install, set up, and use the 2.0 CLI](how-to-configure-cli.md).
-## Define configuration YAML file for deployment
+* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
-To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see [online endpoint YAML reference](reference-yaml-endpoint-online.md) document.
+* An Azure Machine Learning workspace. You'll have a workspace if you configured your ML extension per the above article.
-The YAML files in the following examples are used to create online endpoints.
+* A trained machine learning model ready for scoring and deployment. If you are following along with the sample, a model is provided.
-# [System-assigned managed identity](#tab/system-identity)
+* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
-The following YAML example is located at `endpoints/online/managed/managed-identities/1-sai-create-endpoint`. The file,
+ ```azurecli
+ az account set --subscription <subscription ID>
+ az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
+ ```
-* Defines the name by which you want to refer to the endpoint, `my-sai-endpoint`.
-* Specifies the type of authorization to use to access the endpoint, `auth-mode: key`.
+* To follow along with the sample, clone the samples repository
+ ```azurecli
+ git clone https://github.com/Azure/azureml-examples --depth 1
+ cd azureml-examples/cli
+ ```
-This YAML example, `2-sai-deployment.yml`,
+# [System-assigned (Python)](#tab/system-identity-python)
-* Specifies that the type of endpoint you want to create is an `online` endpoint.
-* Indicates that the endpoint has an associated deployment called `blue`.
-* Configures the details of the deployment such as, which model to deploy and which environment and scoring script to use.
+* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* Install and configure the Azure ML Python SDK (v2). For more information, see [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install).
-# [User-assigned managed identity](#tab/user-identity)
+* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
-The following YAML example is located at `endpoints/online/managed/managed-identities/1-uai-create-endpoint`. The file,
+* An Azure Machine Learning workspace. You'll have a workspace if you configured your ML extension per the above article.
-* Defines the name by which you want to refer to the endpoint, `my-uai-endpoint`.
-* Specifies the type of authorization to use to access the endpoint, `auth-mode: key`.
-* Indicates the identity type to use, `type: user_assigned`
+* A trained machine learning model ready for scoring and deployment. If you are following along with the sample, a model is provided.
+* Clone the samples repository.
-This YAML example, `2-sai-deployment.yml`,
+ ```azurecli
+ git clone https://github.com/Azure/azureml-examples --depth 1
+ cd azureml-examples/sdk/endpoints/online/managed/managed-identities
+ ```
+* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb) in the `sdk/endpoints/online/managed/managed-identities` directory.
-* Specifies that the type of endpoint you want to create is an `online` endpoint.
-* Indicates that the endpoint has an associated deployment called `blue`.
-* Configures the details of the deployment such as, which model to deploy and which environment and scoring script to use.
+* Additional Python packages are required for this example:
+ * Microsoft Azure Storage Management Client
+
+ * Microsoft Azure Authorization Management Client
+
+ Install them with the following code:
+
+ ```python
+ pip install --pre azure-mgmt-storage
+ pip install --pre azure-mgmt-authorization
+ ```
++
+
+# [User-assigned (Python)](#tab/user-identity-python)
+
+* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* Role creation permissions for your subscription or the Azure resources accessed by the User-assigned identity.
+
+* Install and configure the Azure ML Python SDK (v2). For more information, see [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install).
+
+* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
+
+* An Azure Machine Learning workspace. You'll have a workspace if you configured your ML extension per the above article.
+
+* A trained machine learning model ready for scoring and deployment. If you are following along with the sample, a model is provided.
+
+* Clone the samples repository.
+
+ ```azurecli
+ git clone https://github.com/Azure/azureml-examples --depth 1
+ cd azureml-examples/sdk/endpoints/online/managed/managed-identities
+ ```
+* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb) in the `sdk/endpoints/online/managed/managed-identities` directory.
+* Additional Python packages are required for this example:
+
+ * Microsoft Azure Msi Management Client
+
+ * Microsoft Azure Storage Client
+
+ * Microsoft Azure Authorization Management Client
+
+ Install them with the following code:
+
+ ```python
+ pip install --pre azure-mgmt-msi
+ pip install --pre azure-mgmt-storage
+ pip install --pre azure-mgmt-authorization
+ ```
+
+## Limitations
+
+* The identity for an endpoint is immutable. During endpoint creation, you can associate it with a system-assigned identity (default) or a user-assigned identity. You can't change the identity after the endpoint has been created.
+ ## Configure variables for deployment Configure the variable names for the workspace, workspace location, and the endpoint you want to create for use with your deployment.
-# [System-assigned managed identity](#tab/system-identity)
+# [System-assigned (CLI)](#tab/system-identity-cli)
The following code exports these values as environment variables in your endpoint:
The following code exports those values as environment variables:
After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the system-assigned managed identity that's generated upon endpoint creation.
-# [User-assigned managed identity](#tab/user-identity)
+# [User-assigned (CLI)](#tab/user-identity-cli)
Decide on the name of your endpoint, workspace, workspace location and export that value as an environment variable:
Decide on the name of your user identity name, and export that value as an envir
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="set_user_identity_name" :::
+# [System-assigned (Python)](#tab/system-identity-python)
+
+Assign values for the workspace and deployment-related variables:
+
+```python
+subscription_id = "<SUBSCRIPTION_ID>"
+resource_group = "<RESOURCE_GROUP>"
+workspace_name = "<AML_WORKSPACE_NAME>"
+
+endpoint_name = "<ENDPOINT_NAME>"
+```
+
+Next, specify what you want to name your blob storage account, blob container, and file. These variable names are defined here, and are referred to in the storage account and container creation code by the `StorageManagementClient` and `ContainerClient`.
+
+```python
+storage_account_name = "<STORAGE_ACCOUNT_NAME>"
+storage_container_name = "<CONTAINER_TO_ACCESS>"
+file_name = "<FILE_TO_ACCESS>"
+```
+
+After these variables are assigned, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the system-assigned managed identity that's generated upon endpoint creation.
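For example, a minimal way to create that local file (the `hello.txt` name matches the file uploaded to the container later in this article; its content is arbitrary):

```python
# Create the local text file that will be uploaded to the blob container
# and read by the scoring script through the managed identity.
with open("hello.txt", "w") as f:
    f.write("managed identity sample file")
```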
+
+Now, get a handle to the workspace and retrieve its location:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import AzureCliCredential
+from azure.ai.ml.entities import (
+ ManagedOnlineDeployment,
+ ManagedOnlineEndpoint,
+ Model,
+ CodeConfiguration,
+ Environment,
+)
+
+credential = AzureCliCredential()
+ml_client = MLClient(credential, subscription_id, resource_group, workspace_name)
+
+workspace_location = ml_client.workspaces.get(workspace_name).location
+```
+
+We will use this value to create a storage account.
++
+# [User-assigned (Python)](#tab/user-identity-python)
++
+Assign values for the workspace and deployment-related variables:
+
+```python
+subscription_id = "<SUBSCRIPTION_ID>"
+resource_group = "<RESOURCE_GROUP>"
+workspace_name = "<AML_WORKSPACE_NAME>"
+
+endpoint_name = "<ENDPOINT_NAME>"
+```
+
+Next, specify what you want to name your blob storage account, blob container, and file. These variable names are defined here, and are referred to in the storage account and container creation code by the `StorageManagementClient` and `ContainerClient`.
+
+```python
+storage_account_name = "<STORAGE_ACCOUNT_NAME>"
+storage_container_name = "<CONTAINER_TO_ACCESS>"
+file_name = "<FILE_TO_ACCESS>"
+```
+
+After these variables are assigned, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the user-assigned managed identity that you create and attach to the endpoint.
+
+Decide on the name of your user identity name:
+```python
+uai_name = "<USER_ASSIGNED_IDENTITY_NAME>"
+```
+
+Now, get a handle to the workspace and retrieve its location:
+```python
+from azure.ai.ml import MLClient
+from azure.identity import AzureCliCredential
+from azure.ai.ml.entities import (
+ ManagedOnlineDeployment,
+ ManagedOnlineEndpoint,
+ Model,
+ CodeConfiguration,
+ Environment,
+)
+
+credential = AzureCliCredential()
+ml_client = MLClient(credential, subscription_id, resource_group, workspace_name)
+
+workspace_location = ml_client.workspaces.get(workspace_name).location
+```
+
+We will use this value to create a storage account.
+++
+## Define the deployment configuration
++
+# [System-assigned (CLI)](#tab/system-identity-cli)
+
+To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md) document.
+
+The YAML files in the following examples are used to create online endpoints.
+
+The following YAML example is located at `endpoints/online/managed/managed-identities/1-sai-create-endpoint`. The file,
+
+* Defines the name by which you want to refer to the endpoint, `my-sai-endpoint`.
+* Specifies the type of authorization to use to access the endpoint, `auth-mode: key`.
++
+This YAML example, `2-sai-deployment.yml`,
+
+* Specifies that the type of endpoint you want to create is an `online` endpoint.
+* Indicates that the endpoint has an associated deployment called `blue`.
+* Configures the details of the deployment such as, which model to deploy and which environment and scoring script to use.
++
+# [User-assigned (CLI)](#tab/user-identity-cli)
+
+To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md) document.
+
+The YAML files in the following examples are used to create online endpoints.
+
+The following YAML example is located at `endpoints/online/managed/managed-identities/1-uai-create-endpoint`. The file,
+
+* Defines the name by which you want to refer to the endpoint, `my-uai-endpoint`.
+* Specifies the type of authorization to use to access the endpoint, `auth-mode: key`.
+* Indicates the identity type to use, `type: user_assigned`
++
+This YAML example, `2-uai-deployment.yml`,
+
+* Specifies that the type of endpoint you want to create is an `online` endpoint.
+* Indicates that the endpoint has an associated deployment called `blue`.
+* Configures the details of the deployment such as, which model to deploy and which environment and scoring script to use.
++
+# [System-assigned (Python)](#tab/system-identity-python)
+
+To deploy an online endpoint with the Python SDK (v2), objects may be used to define the configuration as below. Alternatively, YAML files may be loaded using the `.load` method.
+
+The following Python endpoint object:
+
+* Assigns the name by which you want to refer to the endpoint to the variable `endpoint_name`.
+* Specifies the type of authorization to use to access the endpoint, `auth_mode="key"`.
+
+```python
+endpoint = ManagedOnlineEndpoint(name=endpoint_name, auth_mode="key")
+```
+
+This deployment object:
+
+* Specifies that the type of deployment you want to create is a `ManagedOnlineDeployment` via the class.
+* Indicates that the endpoint has an associated deployment called `blue`.
+* Configures the details of the deployment such as the `name` and `instance_count`
+* Defines additional objects inline and associates them with the deployment for `Model`, `CodeConfiguration`, and `Environment`.
+* Includes environment variables needed for the system-assigned managed identity to access storage.
++
+```python
+deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=endpoint_name,
+ model=Model(path="../../model-1/model/"),
+ code_configuration=CodeConfiguration(
+ code="../../model-1/onlinescoring/", scoring_script="score_managedidentity.py"
+ ),
+ environment=Environment(
+ conda_file="../../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=1,
+ environment_variables={
+ "STORAGE_ACCOUNT_NAME": storage_account_name,
+ "STORAGE_CONTAINER_NAME": storage_container_name,
+ "FILE_NAME": file_name,
+ },
+)
+```
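If you'd rather keep the endpoint and deployment definitions in YAML, a minimal sketch (assuming the YAML files from the CLI tabs are available locally with a `.yml` extension) loads them with the SDK's load helpers instead of building the objects inline:

```python
from azure.ai.ml import load_online_deployment, load_online_endpoint

# Load the same definitions from YAML files instead of constructing the objects in code.
endpoint = load_online_endpoint(source="1-sai-create-endpoint.yml")
deployment = load_online_deployment(source="2-sai-deployment.yml")
```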
+
+# [User-assigned (Python)](#tab/user-identity-python)
+
+To deploy an online endpoint with the Python SDK (v2), objects may be used to define the configuration as below. Alternatively, YAML files may be loaded using the `.load` method.
+
+For a user-assigned identity, we'll define the endpoint configuration later, after the user-assigned managed identity has been created.
+
+This deployment object:
+
+* Specifies that the type of deployment you want to create is a `ManagedOnlineDeployment` via the class.
+* Indicates that the endpoint has an associated deployment called `blue`.
+* Configures the details of the deployment such as the `name` and `instance_count`
+* Defines additional objects inline and associates them with the deployment for `Model`, `CodeConfiguration`, and `Environment`.
+* Includes environment variables needed for the user-assigned managed identity to access storage.
+* Adds a placeholder environment variable for `UAI_CLIENT_ID`, which will be added after creating one and before actually deploying this configuration.
++
+```python
+deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=endpoint_name,
+ model=Model(path="../../model-1/model/"),
+ code_configuration=CodeConfiguration(
+ code="../../model-1/onlinescoring/", scoring_script="score_managedidentity.py"
+ ),
+ environment=Environment(
+ conda_file="../../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=1,
+ environment_variables={
+ "STORAGE_ACCOUNT_NAME": storage_account_name,
+ "STORAGE_CONTAINER_NAME": storage_container_name,
+ "FILE_NAME": file_name,
+ # We will update this after creating an identity
+ "UAI_CLIENT_ID": "uai_client_id_place_holder",
+ },
+)
+```
+ + ## Create the managed identity To access Azure resources, create a system-assigned or user-assigned managed identity for your online endpoint.
-# [System-assigned managed identity](#tab/system-identity)
+# [System-assigned (CLI)](#tab/system-identity-cli)
When you [create an online endpoint](#create-an-online-endpoint), a system-assigned managed identity is automatically generated for you, so no need to create a separate one.
-# [User-assigned managed identity](#tab/user-identity)
+# [User-assigned (CLI)](#tab/user-identity-cli)
To create a user-assigned managed identity, use the following: ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="create_user_identity" :::
+# [System-assigned (Python)](#tab/system-identity-python)
+
+When you [create an online endpoint](#create-an-online-endpoint), a system-assigned managed identity is automatically generated for you, so no need to create a separate one.
+
+# [User-assigned (Python)](#tab/user-identity-python)
+
+To create a user-assigned managed identity, first get a handle to the `ManagedServiceIdentityClient`:
+
+```python
+from azure.mgmt.msi import ManagedServiceIdentityClient
+from azure.mgmt.msi.models import Identity
+
+credential = AzureCliCredential()
+msi_client = ManagedServiceIdentityClient(
+ subscription_id=subscription_id,
+ credential=credential,
+)
+```
+
+Then, create the identity:
+
+```python
+msi_client.user_assigned_identities.create_or_update(
+ resource_group_name=resource_group,
+ resource_name=uai_name,
+ parameters=Identity(location=workspace_location),
+)
+```
+
+Now, retrieve the identity object, which contains details we will use below:
+
+```python
+uai_identity = msi_client.user_assigned_identities.get(
+ resource_group_name=resource_group,
+ resource_name=uai_name,
+)
+uai_identity.as_dict()
+```
+ ## Create storage account and container
To create a user-assigned managed identity, use the following:
For this example, create a blob storage account and blob container, and then upload the previously created text file to the blob container. This is the storage account and blob container that you'll give the online endpoint and managed identity access to.
-# [System-assigned managed identity](#tab/system-identity)
+# [System-assigned (CLI)](#tab/system-identity-cli)
First, create a storage account.
Then, upload your text file to the blob container.
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="upload_file_to_storage" :::
-# [User-assigned managed identity](#tab/user-identity)
+# [User-assigned (CLI)](#tab/user-identity-cli)
First, create a storage account.
Then, upload the file to the container.
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="upload_file_to_storage" :::
+# [System-assigned (Python)](#tab/system-identity-python)
+
+First, get a handle to the `StorageManagementClient`:
+
+```python
+from azure.mgmt.storage import StorageManagementClient
+from azure.storage.blob import ContainerClient
+from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters, BlobContainer
+
+credential = AzureCliCredential()
+storage_client = StorageManagementClient(
+ credential=credential, subscription_id=subscription_id
+)
+```
+
+Then, create a storage account:
+
+```python
+storage_account_parameters = StorageAccountCreateParameters(
+ sku=Sku(name="Standard_LRS"), kind="Storage", location=workspace_location
+)
+
+poller = storage_client.storage_accounts.begin_create(
+ resource_group_name=resource_group,
+ account_name=storage_account_name,
+ parameters=storage_account_parameters,
+)
+poller.wait()
+
+storage_account = poller.result()
+```
+
+Next, create the blob container in the storage account:
+
+```python
+blob_container = storage_client.blob_containers.create(
+ resource_group_name=resource_group,
+ account_name=storage_account_name,
+ container_name=storage_container_name,
+ blob_container=BlobContainer(),
+)
+```
+
+Retrieve the storage account key and create a handle to the container with `ContainerClient`:
+
+```python
+res = storage_client.storage_accounts.list_keys(
+ resource_group_name=resource_group,
+ account_name=storage_account_name,
+)
+key = res.keys[0].value
+
+container_client = ContainerClient(
+ account_url=storage_account.primary_endpoints.blob,
+ container_name=storage_container_name,
+ credential=key,
+)
+```
+
+Then, upload a blob to the container with the `ContainerClient`:
+
+```python
+file_path = "hello.txt"
+with open(file_path, "rb") as f:
+ container_client.upload_blob(name=file_name, data=f.read())
+```
+
+# [User-assigned (Python)](#tab/user-identity-python)
+
+First, get a handle to the `StorageManagementClient`:
+
+```python
+from azure.mgmt.storage import StorageManagementClient
+from azure.storage.blob import ContainerClient
+from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters, BlobContainer
+
+credential = AzureCliCredential()
+storage_client = StorageManagementClient(
+ credential=credential, subscription_id=subscription_id
+)
+```
+
+Then, create a storage account:
+
+```python
+storage_account_parameters = StorageAccountCreateParameters(
+ sku=Sku(name="Standard_LRS"), kind="Storage", location=workspace_location
+)
+
+poller = storage_client.storage_accounts.begin_create(
+ resource_group_name=resource_group,
+ account_name=storage_account_name,
+ parameters=storage_account_parameters,
+)
+poller.wait()
+
+storage_account = poller.result()
+```
+
+Next, create the blob container in the storage account:
+
+```python
+blob_container = storage_client.blob_containers.create(
+ resource_group_name=resource_group,
+ account_name=storage_account_name,
+ container_name=storage_container_name,
+ blob_container=BlobContainer(),
+)
+```
+
+Retrieve the storage account key and create a handle to the container with `ContainerClient`:
+
+```python
+res = storage_client.storage_accounts.list_keys(
+ resource_group_name=resource_group,
+ account_name=storage_account_name,
+)
+key = res.keys[0].value
+
+container_client = ContainerClient(
+ account_url=storage_account.primary_endpoints.blob,
+ container_name=storage_container_name,
+ credential=key,
+)
+```
+
+Then, upload a blob to the container with the `ContainerClient`:
+
+```python
+file_path = "hello.txt"
+with open(file_path, "rb") as f:
+ container_client.upload_blob(name=file_name, data=f.read())
+```
+ ## Create an online endpoint
The following code creates an online endpoint without specifying a deployment.
> [!WARNING] > The identity for an endpoint is immutable. During endpoint creation, you can associate it with a system-assigned identity (default) or a user-assigned identity. You can't change the identity after the endpoint has been created.
-# [System-assigned managed identity](#tab/system-identity)
+# [System-assigned (CLI)](#tab/system-identity-cli)
When you create an online endpoint, a system-assigned managed identity is created for the endpoint by default. ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="create_endpoint" :::
Check the status of the endpoint with the following.
If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md).
-# [User-assigned managed identity](#tab/user-identity)
+# [User-assigned (CLI)](#tab/user-identity-cli)
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="create_endpoint" :::
Check the status of the endpoint with the following.
If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md).
+# [System-assigned (Python)](#tab/system-identity-python)
+
+When you create an online endpoint, a system-assigned managed identity is created for the endpoint by default.
+
+```python
+endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint)
+```
+
+Check the status of the endpoint via the details of the deployed endpoint object with the following code:
+
+```python
+endpoint = ml_client.online_endpoints.get(endpoint_name)
+endpoint.identity.as_dict()
+```
+
+If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md).
++
+# [User-assigned (Python)](#tab/user-identity-python)
+
+The following Python endpoint object:
+
+* Assigns the name by which you want to refer to the endpoint to the variable `endpoint_name`.
+* Specifies the type of authorization to use to access the endpoint, `auth_mode="key"`.
+* Defines its identity as a `ManagedServiceIdentity` and specifies the managed identity created above as user-assigned.
+
+Define and deploy the endpoint:
+
+```python
+from azure.ai.ml._restclient.v2022_05_01.models import ManagedServiceIdentity
+
+endpoint = ManagedOnlineEndpoint(
+ name=endpoint_name,
+ auth_mode="key",
+ identity=ManagedServiceIdentity(
+ type="user_assigned",
+ user_assigned_identities=[{"resource_id": uai_identity.id}],
+ ),
+)
+
+ml_client.online_endpoints.begin_create_or_update(endpoint)
+```
+
+Check the status of the endpoint via the details of the deployed endpoint object with the following code:
+
+```python
+endpoint = ml_client.online_endpoints.get(endpoint_name)
+endpoint.identity.as_dict()
+```
+
+If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md).
+ ## Give access permission to the managed identity
If you encounter any issues, see [Troubleshooting online endpoints deployment an
You can allow the online endpoint permission to access your storage via its system-assigned managed identity or give permission to the user-assigned managed identity to access the storage account created in the previous section.
-# [System-assigned managed identity](#tab/system-identity)
+# [System-assigned (CLI)](#tab/system-identity-cli)
Retrieve the system-assigned managed identity that was created for your endpoint.
From here, you can give the system-assigned managed identity permission to acces
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="give_permission_to_user_storage_account" :::
-# [User-assigned managed identity](#tab/user-identity)
+# [User-assigned (CLI)](#tab/user-identity-cli)
Retrieve user-assigned managed identity client ID.
Give permission of default workspace storage to user-assigned managed identity.
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="give_permission_to_workspace_storage_account" :::
+# [System-assigned (Python)](#tab/system-identity-python)
+
+First, make an `AuthorizationManagementClient` to list Role Definitions:
+
+```python
+from azure.mgmt.authorization import AuthorizationManagementClient
+from azure.mgmt.authorization.v2018_01_01_preview.models import RoleDefinition
+import uuid
+
+role_definition_client = AuthorizationManagementClient(
+ credential=credential,
+ subscription_id=subscription_id,
+ api_version="2018-01-01-preview",
+)
+```
+
+Now, initialize one to make Role Assignments:
+
+```python
+from azure.mgmt.authorization.v2020_10_01_preview.models import (
+ RoleAssignment,
+ RoleAssignmentCreateParameters,
+)
+
+role_assignment_client = AuthorizationManagementClient(
+ credential=credential,
+ subscription_id=subscription_id,
+ api_version="2020-10-01-preview",
+)
+```
+
+Then, get the Principal ID of the System-assigned managed identity:
+
+```python
+endpoint = ml_client.online_endpoints.get(endpoint_name)
+system_principal_id = endpoint.identity.principal_id
+```
+
+Next, assign the `Storage Blob Data Reader` role to the endpoint. The role definition is retrieved by name and passed along with the principal ID of the endpoint. The role is applied at the scope of the storage account created above and allows the endpoint to read the file.
+
+```python
+role_name = "Storage Blob Data Reader"
+scope = storage_account.id
+
+role_defs = role_definition_client.role_definitions.list(scope=scope)
+role_def = next((r for r in role_defs if r.role_name == role_name))
+
+role_assignment_client.role_assignments.create(
+ scope=scope,
+ role_assignment_name=str(uuid.uuid4()),
+ parameters=RoleAssignmentCreateParameters(
+ role_definition_id=role_def.id, principal_id=system_principal_id
+ ),
+)
+```
++
+# [User-assigned (Python)](#tab/user-identity-python)
+
+First, make an `AuthorizationManagementClient` to list Role Definitions:
+
+```python
+from azure.mgmt.authorization import AuthorizationManagementClient
+from azure.mgmt.authorization.v2018_01_01_preview.models import RoleDefinition
+import uuid
+
+role_definition_client = AuthorizationManagementClient(
+ credential=credential,
+ subscription_id=subscription_id,
+ api_version="2018-01-01-preview",
+)
+```
+
+Now, initialize one to make Role Assignments:
+
+```python
+from azure.mgmt.authorization.v2020_10_01_preview.models import (
+ RoleAssignment,
+ RoleAssignmentCreateParameters,
+)
+
+role_assignment_client = AuthorizationManagementClient(
+ credential=credential,
+ subscription_id=subscription_id,
+ api_version="2020-10-01-preview",
+)
+```
+
+Then, get the Principal ID and Client ID of the User-assigned managed identity. To assign roles, we only need the Principal ID. However, we will use the Client ID to fill the `UAI_CLIENT_ID` placeholder environment variable before creating the deployment.
+
+```python
+uai_identity = msi_client.user_assigned_identities.get(
+ resource_group_name=resource_group, resource_name=uai_name
+)
+uai_principal_id = uai_identity.principal_id
+uai_client_id = uai_identity.client_id
+```
+
+Next, assign the `Storage Blob Data Reader` role to the user-assigned managed identity. The role definition is retrieved by name and passed along with the principal ID of the identity. The role is applied at the scope of the storage account created above to allow the endpoint to read the file.
+
+```python
+role_name = "Storage Blob Data Reader"
+scope = storage_account.id
+
+role_defs = role_definition_client.role_definitions.list(scope=scope)
+role_def = next((r for r in role_defs if r.role_name == role_name))
+
+role_assignment_client.role_assignments.create(
+ scope=scope,
+ role_assignment_name=str(uuid.uuid4()),
+ parameters=RoleAssignmentCreateParameters(
+ role_definition_id=role_def.id, principal_id=uai_principal_id
+ ),
+)
+```
+For the next two permissions, we'll need the workspace and container registry objects:
+
+```python
+workspace = ml_client.workspaces.get(workspace_name)
+container_registry = workspace.container_registry
+```
+
+Next, assign the `AcrPull` role to the User-assigned identity. This role allows images to be pulled from an Azure Container Registry. The scope is applied at the level of the container registry associated with the workspace.
+
+```python
+role_name = "AcrPull"
+scope = container_registry
+
+role_defs = role_definition_client.role_definitions.list(scope=scope)
+role_def = next((r for r in role_defs if r.role_name == role_name))
+
+role_assignment_client.role_assignments.create(
+ scope=scope,
+ role_assignment_name=str(uuid.uuid4()),
+ parameters=RoleAssignmentCreateParameters(
+ role_definition_id=role_def.id, principal_id=uai_principal_id
+ ),
+)
+```
+
+Finally, assign the `Storage Blob Data Reader` role to the user-assigned managed identity at the workspace storage account scope. This role assignment allows the endpoint to read blobs in the workspace storage account as well as in the newly created storage account.
+
+The role has the same name and capabilities as the first role assigned above; however, it's applied at a different scope, so it's a separate role assignment.
+
+```python
+role_name = "Storage Blob Data Reader"
+scope = workspace.storage_account
+
+role_defs = role_definition_client.role_definitions.list(scope=scope)
+role_def = next((r for r in role_defs if r.role_name == role_name))
+
+role_assignment_client.role_assignments.create(
+ scope=scope,
+ role_assignment_name=str(uuid.uuid4()),
+ parameters=RoleAssignmentCreateParameters(
+ role_definition_id=role_def.id, principal_id=uai_principal_id
+ ),
+)
+```
+ ## Scoring script to access Azure resource
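The scoring script itself isn't reproduced in this digest. A minimal sketch of what its `init` method might do, using the environment variables set on the deployment above (the actual `score_managedidentity.py` in the samples repository may differ), is:

```python
import os

from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobClient


def init():
    # UAI_CLIENT_ID is set only for a user-assigned identity; when it's absent,
    # ManagedIdentityCredential falls back to the system-assigned identity.
    credential = ManagedIdentityCredential(client_id=os.environ.get("UAI_CLIENT_ID"))

    blob = BlobClient(
        account_url=f"https://{os.environ['STORAGE_ACCOUNT_NAME']}.blob.core.windows.net",
        container_name=os.environ["STORAGE_CONTAINER_NAME"],
        blob_name=os.environ["FILE_NAME"],
        credential=credential,
    )
    # Confirm the file is readable through the managed identity.
    print(blob.download_blob().readall())


def run(raw_data):
    # Model inference would go here; it's omitted in this sketch.
    return raw_data
```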
Create a deployment that's associated with the online endpoint. [Learn more abou
>[!WARNING] > This deployment can take approximately 8-14 minutes depending on whether the underlying environment/image is being built for the first time. Subsequent deployments using the same environment will go quicker.
-# [System-assigned managed identity](#tab/system-identity)
+# [System-assigned (CLI)](#tab/system-identity-cli)
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="deploy" :::
Check the status of the deployment.
To refine the above query to only return specific data, see [Query Azure CLI command output](/cli/azure/query-azure-cli). > [!NOTE]
-> The init method in the scoring script reads the file from your storage account using the system assigned managed identity token.
+> The init method in the scoring script reads the file from your storage account using the system-assigned managed identity token.
To check the init method output, see the deployment log with the following code. ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="check_deployment_log" :::
-# [User-assigned managed identity](#tab/user-identity)
+# [User-assigned (CLI)](#tab/user-identity-cli)
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="create_endpoint" :::
To refine the above query to only return specific data, see [Query Azure CLI com
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="check_deployment_log" ::: > [!NOTE]
-> The init method in the scoring script reads the file from your storage account using the system assigned managed identity token.
+> The init method in the scoring script reads the file from your storage account using the user-assigned managed identity token.
To check the init method output, see the deployment log with the following code. ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="check_deployment_log" :::
+# [System-assigned (Python)](#tab/system-identity-python)
+
+First, create the deployment:
+
+```python
+deployment = ml_client.online_deployments.begin_create_or_update(deployment)
+```
+
+Once deployment completes, check its status and confirm its identity details:
++
+```python
+deployment = ml_client.online_deployments.get(
+ endpoint_name=endpoint_name, name=deployment.name
+)
+print(deployment)
+```
+
+> [!NOTE]
+> The init method in the scoring script reads the file from your storage account using the system-assigned managed identity token.
+
+To check the init method output, see the deployment log with the following code.
+
+```python
+ml_client.online_deployments.get_logs(deployment.name, deployment.endpoint_name, 1000)
+```
+
+Now that the deployment is confirmed, set the traffic to 100%:
+
+```python
+endpoint.traffic = {str(deployment.name): 100}
+ml_client.begin_create_or_update(endpoint)
+```
+
+# [User-assigned (Python)](#tab/user-identity-python)
+
+Before we deploy, update the `UAI_CLIENT_ID` environment variable placeholder.
+
+```python
+deployment.environment_variables['UAI_CLIENT_ID'] = uai_client_id
+```
+
+Now, create the deployment:
+
+```python
+deployment = ml_client.online_deployments.begin_create_or_update(deployment)
+```
+
+Once deployment completes, check its status and confirm its identity details:
+
+```python
+deployment = ml_client.online_deployments.get(
+ endpoint_name=endpoint_name, name=deployment.name
+)
+print(deployment)
+```
+
+> [!NOTE]
+> The init method in the scoring script reads the file from your storage account using the user-assigned managed identity token.
+
+To check the init method output, see the deployment log with the following code.
+
+```python
+ml_client.online_deployments.get_logs(deployment.name, deployment.endpoint_name, 1000)
+```
+
+Now that the deployment is confirmed, set the traffic to 100%:
+
+```python
+endpoint.traffic = {str(deployment.name): 100}
+ml_client.begin_create_or_update(endpoint)
+```
+ When your deployment completes, the model, the environment, and the endpoint are registered to your Azure Machine Learning workspace.
-## Confirm your endpoint deployed successfully
+## Test the endpoint
-Once your online endpoint is deployed, confirm its operation. Details of inferencing vary from model to model. For this guide, the JSON query parameters look like:
+Once your online endpoint is deployed, test and confirm its operation with a request. Details of inferencing vary from model to model. For this guide, the JSON query parameters look like:
:::code language="json" source="~/azureml-examples-main/cli/endpoints/online/model-1/sample-request.json" ::: To call your endpoint, run:
-# [System-assigned managed identity](#tab/system-identity)
+# [System-assigned (CLI)](#tab/system-identity-cli)
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="test_endpoint" :::
-# [User-assigned managed identity](#tab/user-identity)
+# [User-assigned (CLI)](#tab/user-identity-cli)
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="test_endpoint" :::
+# [System-assigned (Python)](#tab/system-identity-python)
+
+```python
+sample_data = "../../model-1/sample-request.json"
+ml_client.online_endpoints.invoke(endpoint_name=endpoint_name, request_file=sample_data)
+```
+
+# [User-assigned (Python)](#tab/user-identity-python)
++
+```python
+sample_data = "../../model-1/sample-request.json"
+ml_client.online_endpoints.invoke(endpoint_name=endpoint_name, request_file=sample_data)
+```
+ ## Delete the endpoint and storage account If you don't plan to continue using the deployed online endpoint and storage, delete them to reduce costs. When you delete the endpoint, all of its associated deployments are deleted as well.
-# [System-assigned managed identity](#tab/system-identity)
+# [System-assigned (CLI)](#tab/system-identity-cli)
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="delete_endpoint" ::: ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="delete_storage_account" :::
-# [User-assigned managed identity](#tab/user-identity)
+# [User-assigned (CLI)](#tab/user-identity-cli)
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="delete_endpoint" ::: ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="delete_storage_account" ::: ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="delete_user_identity" :::
+# [System-assigned (Python)](#tab/system-identity-python)
+
+Delete the endpoint:
+
+```python
+ml_client.online_endpoints.begin_delete(endpoint_name)
+```
+
+Delete the storage account:
+
+```python
+storage_client.storage_accounts.delete(
+ resource_group_name=resource_group, account_name=storage_account_name
+)
+```
+
+# [User-assigned (Python)](#tab/user-identity-python)
+
+Delete the endpoint:
+
+```python
+ml_client.online_endpoints.begin_delete(endpoint_name)
+```
+
+Delete the storage account:
+
+```python
+storage_client.storage_accounts.delete(
+ resource_group_name=resource_group, account_name=storage_account_name
+)
+```
+
+Delete the User-assigned managed identity:
+
+```python
+msi_client.user_assigned_identities.delete(
+ resource_group_name=resource_group, resource_name=uai_name
+)
+```
++ ## Next steps
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
For any AzureML example, you only need to update the compute target name to your
* Explore training job samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/jobs](https://github.com/Azure/azureml-examples/tree/main/cli/jobs) * Explore model deployment with online endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes) * Explore batch endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch)
-* Explore training job samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/main/sdk/jobs](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs)
-* Explore model deployment with online endpoint samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints/online/kubernetes)
+* Explore training job samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs)
+* Explore model deployment with online endpoint samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints/online/kubernetes)
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
The table shows resulting feature engineering that occurs when window aggregatio
![target rolling window](./media/how-to-auto-train-forecast/target-roll.svg)
-View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
+View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
### Short series handling
best_run, fitted_model = local_run.get_output()
Use the best model iteration to forecast values for data that wasn't used to train the model.
-The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin is at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features.
fitted_model.forecast_quantiles(
test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)) ```
-You can calculate model metrics like, root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the models performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.
+You can calculate model metrics like root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the model's performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.
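For instance, with the actuals and forecasts aligned on the same time index, a minimal sketch of computing RMSE and MAPE with NumPy and scikit-learn (the array values below are made up for illustration) could look like:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Assumed inputs: observed values and forecasts aligned on the same time index.
y_actual = np.array([310.0, 298.0, 305.0, 321.0])
y_forecast = np.array([302.5, 301.0, 299.8, 315.2])

rmse = np.sqrt(mean_squared_error(y_actual, y_forecast))
mape = np.mean(np.abs((y_actual - y_forecast) / y_actual)) * 100

print(f"RMSE: {rmse:.2f}, MAPE: {mape:.2f}%")
```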
After the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values.
The following diagram shows the workflow for the many models solution.
![Many models concept diagram](./media/how-to-auto-train-forecast/many-models.svg)
-The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example
+The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example.
```python from azureml.train.automl.runtime._many_models.many_models_parameters import ManyModelsTrainParameters
To further visualize this, the leaf levels of the hierarchy contain all the time
The hierarchical time series solution is built on top of the Many Models Solution and shares a similar configuration setup.
-The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb), for an end to end example.
+The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb) for an end-to-end example.
```python
hts_parameters = HTSTrainParameters(
## Example notebooks
-See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including:
+See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including:
-* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
-* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
-* [configurable lags](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
-* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
+* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
+* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
+* [configurable lags](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
+* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
## Next steps
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
The following is a sample JSONL file for image classification:
Once your data is in JSONL format, you can create training and validation `MLTable` as shown below. Automated ML doesn't impose any constraints on training or validation data size for computer vision tasks. Maximum dataset size is only limited by the storage layer behind the dataset (i.e. blob store). There's no minimum number of images or labels. However, we recommend starting with a minimum of 10-15 samples per label to ensure the output model is sufficiently trained. The higher the total number of labels/classes, the more samples you need per label.
Automated ML doesn't impose any constraints on training or validation data size
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-Training data is a required parameter and is passed in using the `training` key of the data section. You can optionally specify another MLtable as a validation data with the `validation` key. If no validation data is specified, 20% of your training data will be used for validation by default, unless you pass `validation_data_size` argument with a different value.
+Training data is a required parameter and is passed in using the `training_data` key. You can optionally specify another MLTable as validation data with the `validation_data` key. If no validation data is specified, 20% of your training data will be used for validation by default, unless you pass the `validation_data_size` argument with a different value.
-Target column name is a required parameter and used as target for supervised ML task. It's passed in using the `target_column_name` key in the data section. For example,
+The target column name is a required parameter and is used as the target for the supervised ML task. It's passed in using the `target_column_name` key. For example,
```yaml target_column_name: label
validation_data:
You can create data inputs from training and validation MLTable from your local directory or cloud storage with the following code:
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
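For reference, a minimal sketch of such data inputs with the SDK v2 `Input` class (the folder paths are placeholders for your own MLTable folders, not the sample's paths):

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Placeholder paths -- point these at folders that contain your MLTable files.
my_training_data_input = Input(
    type=AssetTypes.MLTABLE, path="./data/training-mltable-folder"
)
my_validation_data_input = Input(
    type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder"
)
```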
Training data is a required parameter and is passed in using the `training_data` parameter of the task-specific `automl` type function. You can optionally specify another MLTable as validation data with the `validation_data` parameter. If no validation data is specified, 20% of your training data will be used for validation by default, unless you pass the `validation_data_size` argument with a different value.
image_object_detection_job = automl.image_object_detection(
Provide a [compute target](concept-azure-machine-learning-architecture.md#compute-targets) for automated ML to conduct model training. Automated ML models for computer vision tasks require GPU SKUs and support NC and ND families. We recommend the NCsv3-series (with v100 GPUs) for faster training. A compute target with a multi-GPU VM SKU leverages multiple GPUs to also speed up training. Additionally, when you set up a compute target with multiple nodes you can conduct faster model training through parallelism when tuning hyperparameters for your model.
+> [!NOTE]
+> If you are using a [compute instance](concept-compute-instance.md) as your compute target, make sure that multiple AutoML jobs aren't run at the same time. Also, make sure that `max_concurrent_trials` is set to 1 in your [job limits](#job-limits).
+ The compute target is passed in using the `compute` parameter. For example: # [Azure CLI](#tab/cli)
Before doing a large sweep to search for the optimal models and hyperparameters,
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using model_name key in image_model section. For example,
+If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the `model_name` key in the `training_parameters` section. For example,
```yaml
-image_model:
- model_name: "yolov5"
+training_parameters:
+ model_name: yolov5
``` # [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using model_name parameter in set_image_model method of the task specific `automl` job. For example,
+If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the `model_name` parameter in the `set_training_parameters` method of the task-specific `automl` job. For example,
```python
-image_object_detection_job.set_image_model(model_name="yolov5")
+image_object_detection_job.set_training_parameters(model_name="yolov5")
```
-Once you've built a baseline model, you might want to optimize model performance in order to sweep over the model algorithm and hyperparameter space. You can use the following sample config to sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, optimizer, lr_scheduler, etc., to generate a model with the optimal primary metric. If hyperparameter values aren't specified, then default values are used for the specified algorithm.
+Once you've built a baseline model, you might want to sweep over the model algorithm and hyperparameter space to optimize model performance. You can use the following sample config to [sweep over the hyperparameters](./how-to-auto-train-image-models.md#sweeping-hyperparameters-for-your-model) for each algorithm, choosing from a range of values for `learning_rate`, `optimizer`, `lr_scheduler`, and so on, to generate a model with the optimal primary metric. If hyperparameter values aren't specified, then default values are used for the specified algorithm.
### Primary metric
The primary metric used for model optimization and hyperparameter tuning depends
* `mean_average_precision` for IMAGE_OBJECT_DETECTION * `mean_average_precision` for IMAGE_INSTANCE_SEGMENTATION
-### Experiment budget
+### Job limits
+
+You can control the resources spent on your AutoML Image training job by specifying `timeout_minutes`, `max_trials`, and `max_concurrent_trials` for the job in the limit settings, as described in the example below.
+
+Parameter | Detail
+--|-
+`max_trials` | Parameter for the maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1. The default value is 1.
+`max_concurrent_trials`| Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br> **NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency. The default value is 1.
+`timeout_minutes`| The amount of time in minutes before the experiment terminates. If not specified, the default experiment timeout is seven days (maximum 60 days).
-You can optionally specify the maximum time budget for your AutoML Vision training job using the `timeout` parameter in the `limits` - the amount of time in minutes before the experiment terminates. If none specified, default experiment timeout is seven days (maximum 60 days). For example,
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml limits:
- timeout: 60
+ timeout_minutes: 60
+ max_trials: 10
+ max_concurrent_trials: 2
``` # [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]
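As a rough equivalent of the YAML above, assuming an `image_object_detection_job` created with `automl.image_object_detection(...)` as in the earlier snippets, the limits can be set like this (a sketch, not the notebook's exact code):

```python
# Sketch: cap the sweep at 10 trials, 2 at a time, with a 60-minute timeout.
image_object_detection_job.set_limits(
    timeout_minutes=60,
    max_trials=10,
    max_concurrent_trials=2,
)
```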
limits:
When training computer vision models, model performance depends heavily on the hyperparameter values selected. Often, you might want to tune the hyperparameters to get optimal performance. With support for computer vision tasks in automated ML, you can sweep hyperparameters to find the optimal settings for your model. This feature applies the hyperparameter tuning capabilities in Azure Machine Learning. [Learn how to tune hyperparameters](how-to-tune-hyperparameters.md).
+# [Azure CLI](#tab/cli)
++
+```yaml
+search_space:
+ - model_name:
+ type: choice
+ values: [yolov5]
+ learning_rate:
+ type: uniform
+ min_value: 0.0001
+ max_value: 0.01
+ model_size:
+ type: choice
+ values: [small, medium]
+
+ - model_name:
+ type: choice
+ values: [fasterrcnn_resnet50_fpn]
+ learning_rate:
+ type: uniform
+ min_value: 0.0001
+ max_value: 0.001
+ optimizer:
+ type: choice
+ values: [sgd, adam, adamw]
+ min_size:
+ type: choice
+ values: [600, 800]
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=search-space-settings)]
+++ ### Define the parameter search space You can define the model algorithms and hyperparameters to sweep in the parameter space.
When sweeping hyperparameters, you need to specify the sampling method to use fo
|[Bayesian Sampling](how-to-tune-hyperparameters.md#bayesian-sampling)| `bayesian` | > [!NOTE]
-> Currently only random sampling supports conditional hyperparameter spaces.
+> Currently only random and grid sampling support conditional hyperparameter spaces.
### Early termination policies
You can automatically end poorly performing runs with an early termination polic
Learn more about [how to configure the early termination policy for your hyperparameter sweep](how-to-tune-hyperparameters.md#early-termination).
-### Resources for the sweep
-
-You can control the resources spent on your hyperparameter sweep by specifying the `max_trials` and the `max_concurrent_trials` for the sweep.
> [!NOTE] > For a complete sweep configuration sample, please refer to this [tutorial](tutorial-auto-train-image-models.md#hyperparameter-sweeping-for-image-tasks).
-Parameter | Detail
|-
-`max_trials` | Required parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1.
-`max_concurrent_trials`| Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br> **NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
You can configure all the sweep related parameters as shown in the example below.
You can configure all the sweep related parameters as shown in the example below
```yaml sweep:
- limits:
- max_trials: 10
- max_concurrent_trials: 2
sampling_algorithm: random early_termination: type: bandit
sweep:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
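For orientation, a minimal SDK v2 sketch of the sweep settings, assuming the same `image_object_detection_job` as above (the bandit policy values are illustrative, not prescriptive):

```python
from azure.ai.ml.sweep import BanditPolicy

# Sketch: random sampling with bandit early termination.
image_object_detection_job.set_sweep(
    sampling_algorithm="random",
    early_termination=BanditPolicy(
        evaluation_interval=2, slack_factor=0.2, delay_evaluation=6
    ),
)
```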
You can pass fixed settings or parameters that don't change during the parameter
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml
-image_model:
+training_parameters:
early_stopping: True evaluation_frequency: 1 ```
image_model:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=pass-arguments)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=pass-arguments)]
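A minimal sketch of the equivalent call, assuming the `image_object_detection_job` from the earlier steps (the parameter values mirror the YAML above):

```python
# Sketch: fixed settings that stay constant across all sweep trials.
image_object_detection_job.set_training_parameters(
    early_stopping=True, evaluation_frequency=1
)
```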
You can pass the run ID that you want to load the checkpoint from.
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml
-image_model:
+training_parameters:
checkpoint_run_id : "target_checkpoint_run_id" ```
mlflow_parent_run = mlflow_client.get_run(automl_job.name)
target_checkpoint_run_id = mlflow_parent_run.data.tags["automl_best_child_run_id"] ```
-To pass a checkpoint via the run ID, you need to use the `checkpoint_run_id` parameter in `set_image_model` function.
+To pass a checkpoint via the run ID, you need to use the `checkpoint_run_id` parameter in `set_training_parameters` function.
```python image_object_detection_job = automl.image_object_detection(
image_object_detection_job = automl.image_object_detection(
tags={"my_custom_tag": "My custom value"}, )
-image_object_detection_job.set_image_model(checkpoint_run_id=target_checkpoint_run_id)
+image_object_detection_job.set_training_parameters(checkpoint_run_id=target_checkpoint_run_id)
automl_image_job_incremental = ml_client.jobs.create_or_update( image_object_detection_job
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZUR
When you've configured your AutoML Job to the desired settings, you can submit the job.
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
## Outputs and evaluation metrics
CLI example not available, please use Python SDK.
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=best_run)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=best_run)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_local_dir)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_local_dir)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=download_model)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=download_model)]
### register the model
Register the model either using the azureml path or your locally downloaded path
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=register_model)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=register_model)]
After you register the model you want to use, you can deploy it using the managed online endpoint [deploy-managed-online-endpoint](how-to-deploy-managed-online-endpoint-sdk-v2.md)
auth_mode: key
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=endpoint)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=endpoint)]
az ml online-endpoint create --file .\create_endpoint.yml --workspace-name [YOUR
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_endpoint)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_endpoint)]
### Configure online deployment
readiness_probe:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=deploy)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=deploy)]
az ml online-deployment create --file .\create_deployment.yml --workspace-name [
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_deploy)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_deploy)]
### update traffic:
az ml online-endpoint update --name 'od-fridge-items-endpoint' --traffic 'od-fri
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=update_traffic)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=update_traffic)]
this is how your review page looks like. we can select instance type, instance c
### Update inference settings
-In the previous step, we downloaded a file `mlflow-model/artifacts/settings.json` from the best model. which can be used to update the inference settings before registering the model. Although its's recommended to use the same parameters as training for best performance.
+In the previous step, we downloaded the `mlflow-model/artifacts/settings.json` file from the best model, which can be used to update the inference settings before registering the model. However, it's recommended to use the same parameters as training for best performance.
Each of the tasks (and some models) has a set of parameters. By default, we use the same values for the parameters that were used during the training and validation. Depending on the behavior that we need when using the model for inference, we can change these parameters. Below you can find a list of parameters for each task type and model.
Please check this [Test the deployment](./tutorial-auto-train-image-models.md#te
## Example notebooks
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs). Please check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs). Please check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
Review detailed code examples and use cases in the [azureml-examples repository
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs).
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs).
## Next steps
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)]
See the following sample YAML files for each NLP task.
See the sample notebooks for detailed code examples for each NLP task.
-* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-text-classification-multiclass-task-sentiment.ipynb)
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-text-classification-multiclass-task-sentiment.ipynb)
* [Multi-label text classification](
-https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-text-classification-multilabel-task-paper-cat.ipynb)
-* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-nlp-text-named-entity-recognition-task/automl-nlp-text-ner-task.ipynb)
+https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-text-classification-multilabel-task-paper-cat.ipynb)
+* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-named-entity-recognition-task/automl-nlp-text-ner-task.ipynb)
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md
In order to invoke BERT, set `enable_dnn: True` in your automl_settings and use
Automated ML takes the following steps for BERT.
-1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb).
+1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb).
2. **Concatenate all text columns into a single text column**, hence the `StringConcatTransformer` in the final model.
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
With additional algorithms below.
* [NLP Text Classification Multi-label Algorithms](how-to-auto-train-nlp-models.md#language-settings) * [NLP Text Named Entity Recognition (NER) Algorithms](how-to-auto-train-nlp-models.md#language-settings)
-Follow [this link](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs) for example notebooks of each task type.
+Follow [this link](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs) for example notebooks of each task type.
### Primary metric
After you test a model and confirm you want to use it in production, you can reg
To leverage AutoML in your MLOps workflows, you can add AutoML Job steps to your [AzureML Pipelines](./how-to-create-component-pipeline-python.md). This allows you to automate your entire workflow by hooking up your data prep scripts to AutoML and then registering and validating the resulting best model.
-Below is a [sample pipeline](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/1h_automl_in_pipeline/automl-classification-bankmarketing-in-pipeline) with an AutoML classification component and a command component that shows the resulting AutoML output. Note how the inputs (training & validation data) and the outputs (best model) are referenced in different steps.
+Below is a [sample pipeline](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines/1h_automl_in_pipeline/automl-classification-bankmarketing-in-pipeline) with an AutoML classification component and a command component that shows the resulting AutoML output. Note how the inputs (training & validation data) and the outputs (best model) are referenced in different steps.
``` python # Define pipeline
pipeline_classification = automl_classification(
) # ...
-# Note that the above is only a snippet from the bankmarketing example you can find in our examples repo -> https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/1h_automl_in_pipeline/automl-classification-bankmarketing-in-pipeline
+# Note that the above is only a snippet from the bankmarketing example you can find in our examples repo -> https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines/1h_automl_in_pipeline/automl-classification-bankmarketing-in-pipeline
```
-For more examples on how to do include AutoML in your pipelines, please check out our [examples repo](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/1h_automl_in_pipeline/).
+For more examples on how to include AutoML in your pipelines, check out our [examples repo](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines/1h_automl_in_pipeline/).
## Next steps
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
If you have access to multiple Azure subscriptions, you can set your active subs
Optionally, setup common variables in your shell for usage in subsequent commands: > [!WARNING] > This uses Bash syntax for setting variables -- adjust as needed for your shell. You can also replace the values in commands below inline rather than using variables. If it doesn't already exist, you can create the Azure resource group: And create a machine learning workspace: Machine learning subcommands require the `--workspace/-w` and `--resource-group/-g` parameters. To avoid typing these repeatedly, configure defaults:
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-databricks-automl-environment.md
In AutoML config, when using Azure Databricks add the following parameters:
## ML notebooks that work with Azure Databricks Try it out:
-+ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.**
++ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.** + Import these samples directly from your workspace. See below: ![Select Import](./media/how-to-configure-environment/azure-db-screenshot.png)
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Previously updated : 09/20/2022 Last updated : 09/21/2022 # Create an Azure Machine Learning compute cluster
To create a persistent Azure Machine Learning Compute resource in Python, specif
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=cluster_basic)]
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=cluster_basic)]
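For orientation, a minimal sketch of such a cluster definition with the SDK v2 `AmlCompute` entity, assuming an authenticated `MLClient` named `ml_client` (the cluster name and VM size are illustrative):

```python
from azure.ai.ml.entities import AmlCompute

# Sketch: an autoscaling CPU cluster that scales to zero when idle.
cpu_cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=120,  # seconds of idle time before scale-down
)
ml_client.compute.begin_create_or_update(cpu_cluster)
```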
You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the [AmlCompute class](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute) for details.
Use any of these ways to specify a low-priority VM:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v1.md)]
-[!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=cluster_low_pri)]
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=cluster_low_pri)]
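As a sketch of the low-priority variant (again assuming `ml_client`, with illustrative names and sizes), the key difference is the `tier` setting:

```python
from azure.ai.ml.entities import AmlCompute

# Sketch: request low-priority VMs to reduce cost.
lowpri_cluster = AmlCompute(
    name="lowpri-cluster",
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=4,
    tier="low_priority",
)
ml_client.compute.begin_create_or_update(lowpri_cluster)
```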
# [Azure CLI](#tab/azure-cli)
In the studio, choose **Low Priority** when you create a VM.
## Set up managed identity -
-# [Python SDK](#tab/python)
--
-# [Azure CLI](#tab/azure-cli)
---
-### Create a new managed compute cluster with managed identity
-
-Use this command:
-
-```azurecli
-az ml compute create -f create-cluster.yml
-```
-
-Where the contents of *create-cluster.yml* are as follows:
-
-* User-assigned managed identity
-
- :::code language="yaml" source="~/azureml-examples-main/cli/resources/compute/cluster-user-identity.yml":::
-
-* System-assigned managed identity
-
- :::code language="yaml" source="~/azureml-examples-main/cli/resources/compute/cluster-system-identity.yml":::
-
-### Add a managed identity to an existing cluster
-
-To update an existing cluster:
-
-* User-assigned managed identity
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-mlcompute-update-to-user-identity.sh":::
-
-* System-assigned managed identity
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-mlcompute-update-to-system-identity.sh":::
--
-# [Studio](#tab/azure-studio)
-
-During cluster creation or when editing compute cluster details, in the **Advanced settings**, toggle **Assign a managed identity** and specify a system-assigned identity or user-assigned identity.
----
-### Managed identity usage
-
+For information on how to configure a managed identity with your compute cluster, see [Set up authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md#compute-cluster).
## Troubleshooting
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
If you don't have an Azure subscription, create a free account before you begin.
This article uses the Python SDK for Azure ML to create and control an Azure Machine Learning pipeline. The article assumes that you'll be running the code snippets interactively in either a Python REPL environment or a Jupyter notebook.
-This article is based on the [image_classification_keras_minist_convnet.ipynb](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb) notebook found in the `sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet` directory of the [AzureML Examples](https://github.com/azure/azureml-examples) repository.
+This article is based on the [image_classification_keras_minist_convnet.ipynb](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb) notebook found in the `sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet` directory of the [AzureML Examples](https://github.com/azure/azureml-examples) repository.
## Import required libraries Import all the Azure Machine Learning required libraries that you'll need for this article:
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=required-library)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=required-library)]
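A minimal sketch of the kind of imports this walkthrough relies on (the notebook's exact list may differ slightly):

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, Input, load_component
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.dsl import pipeline
```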
## Prepare input data for your pipeline job
Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image
To define the input data of a job that references the Web-based data, run:
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
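A minimal sketch of such an input, assuming the SDK v2 `Input` class (the path below is a placeholder for the web location of the Fashion-MNIST files, not the notebook's actual URL):

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Placeholder path -- replace with the web location that hosts the dataset files.
fashion_ds = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://<storage-account>.blob.core.windows.net/<container>/fashion-mnist/",
)
```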
By defining an `Input`, you create a reference to the data source location. The data remains in its existing location, so no extra storage cost is incurred.
The next section will show create components in two different ways: the first tw
The first component in this pipeline will convert the compressed data files of `fashion_ds` into two csv files, one for training and the other for scoring. You'll use python function to define this component.
-If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in `prep/` folder. This folder contains two files to construct the component: `prep_component.py`, which defines the component and `conda.yaml`, which defines the run-time environment of the component.
+If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in `prep/` folder. This folder contains two files to construct the component: `prep_component.py`, which defines the component and `conda.yaml`, which defines the run-time environment of the component.
#### Define component using python function By using the `command_component()` function as a decorator, you can easily define the component's interface, metadata, and code to execute from a Python function. Each decorated Python function will be transformed into a single static specification (YAML) that the pipeline service can process. The code above defines a component with the display name `Prep Data` using the `@command_component` decorator:
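As a rough sketch of the decorator pattern, assuming the `mldesigner` package used by the linked example (the name, metadata, and ports below are illustrative, not the sample's exact definition):

```python
from mldesigner import command_component, Input, Output

@command_component(
    name="prep_data",
    display_name="Prep Data",
    description="Convert raw data into training and test CSV files.",
)
def prepare_data_component(
    input_data: Input(type="uri_folder"),
    training_data: Output(type="uri_folder"),
    test_data: Output(type="uri_folder"),
):
    # Conversion and split logic goes here.
    ...
```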
Following is what a component looks like in the studio UI.
You'll need to modify the runtime environment in which your component runs. The above code creates an object of `Environment` class, which represents the runtime environment in which the component runs. The `conda.yaml` file contains all packages used for the component like following: Now, you've prepared all source files for the `Prep Data` component.
In this section, you'll create a component for training the image classification
The difference is that since the training logic is more complicated, you can put the original training code in a separate Python file.
-The source files of this component are under `train/` folder in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet). This folder contains three files to construct the component:
+The source files of this component are under `train/` folder in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet). This folder contains three files to construct the component:
- `train.py`: contains the actual logic to train model. - `train_component.py`: defines the interface of the component and imports the function in `train.py`.
The source files of this component are under `train/` folder in the [AzureML Exa
#### Get a script containing execution logic
-The `train.py` file contains a normal python function, which performs the training model logic to train a Keras neural network for image classification. You can find the code [here](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train.py).
+The `train.py` file contains a normal python function, which performs the training model logic to train a Keras neural network for image classification. You can find the code [here](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train.py).
#### Define component using python function After defining the training function successfully, you can use `@command_component` in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AzureML pipelines. The code above defines a component with the display name `Train Image Classification Keras` using `@command_component`:
The code above define a component with display name `Train Image Classification
The train-model component has a slightly more complex configuration than the prep-data component. The `conda.yaml` is like following: Now, you've prepared all source files for the `Train Image Classification Keras` component.
Now, you've prepared all source files for the `Train Image Classification Keras`
In this section, unlike the previous components, you'll create a component to score the trained model via a YAML specification and script.
-If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in `score/` folder. This folder contains three files to construct the component:
+If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in `score/` folder. This folder contains three files to construct the component:
- `score.py`: contains the source code of the component. - `score.yaml`: defines the interface and other details of the component.
If you're following along with the example in the [AzureML Examples repo](https:
The `score.py` file contains a normal Python function, which performs the scoring logic. The code in `score.py` takes three command-line arguments: `input_data`, `input_model`, and `output_result`. The program scores the input model using the input data and then outputs the scoring result.
In this section, you'll learn to create a component specification in the valid Y
- Interface: inputs and outputs - Command, code, & environment: The command, code, and environment used to run the component * `name` is the unique identifier of the component. Its display name is `Score Image Classification Keras`. * This component has two inputs and one output.
In this section, you'll learn to create a component specification in the valid Y
#### Specify component run-time environment
-The score component uses the same image and conda.yaml file as the train component. The source file is in the [sample repository](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/conda.yaml).
+The score component uses the same image and conda.yaml file as the train component. The source file is in the [sample repository](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/conda.yaml).
Now, you've got all source files for score-model component.
For prep-data component and train-model component defined by python function, yo
In the following code, you import the `prepare_data_component()` and `keras_train_component()` functions from the `prep_component.py` file under the `prep` folder and the `train_component` file under the `train` folder, respectively.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-dsl-component)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-dsl-component)]
For the score component defined by YAML, you can use the `load_component()` function to load it.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-yaml)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-yaml)]
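As a minimal sketch (the file path is an assumption based on the folder layout described above), loading the YAML-defined component could look like this:

```python
from azure.ai.ml import load_component

# Load the score component from its YAML specification (path assumed).
score_component = load_component(source="./score/score.yaml")
```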
## Build your pipeline

Now that you've created and loaded all the components and input data, you can compose them into a pipeline:
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
The pipeline has a default compute `cpu_compute_target`, which means if you don't specify compute for a specific node, that node will run on the default compute.
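The overall shape of such a pipeline definition is sketched below; it assumes the three components were loaded earlier as `prepare_data_component`, `keras_train_component`, and `score_component`, and the compute name, parameter names, and output wiring are illustrative placeholders rather than the notebook's exact code:

```python
from azure.ai.ml import dsl, Input

cpu_compute_target = "cpu-cluster"  # placeholder compute cluster name

@dsl.pipeline(default_compute=cpu_compute_target)  # nodes without explicit compute run here
def image_classification_keras_pipeline(pipeline_input_data: Input):
    # Wiring is illustrative; the real input/output names come from the components.
    prep_node = prepare_data_component(input_data=pipeline_input_data)
    train_node = keras_train_component(input_data=prep_node.outputs.training_data)
    score_node = score_component(
        input_data=prep_node.outputs.test_data,
        input_model=train_node.outputs.output_model,
    )
```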
We'll use `DefaultAzureCredential` to get access to workspace. `DefaultAzureCred
If it doesn't work for you, see the [configure credential example](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) and the [azure-identity reference doc](/python/api/azure-identity/azure.identity?view=azure-python&preserve-view=true) for more available credentials.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=credential)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=credential)]
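The usual pattern is roughly the following sketch; the interactive-browser fallback is an optional convenience, not a requirement:

```python
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential

try:
    credential = DefaultAzureCredential()
    # Verify that the credential can actually obtain a token.
    credential.get_token("https://management.azure.com/.default")
except Exception:
    # Fall back to interactive browser sign-in if the default chain fails.
    credential = InteractiveBrowserCredential()
```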
#### Get a handle to a workspace with compute

Create an `MLClient` object to manage Azure Machine Learning services.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=workspace)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=workspace)]
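A minimal sketch of creating the client (the notebook's exact code may differ, and the placeholder values are assumptions):

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Reads subscription, resource group, and workspace name from a config.json
# in the current directory or a parent directory.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Alternatively, pass the values explicitly (placeholders shown):
# ml_client = MLClient(
#     credential=DefaultAzureCredential(),
#     subscription_id="<SUBSCRIPTION_ID>",
#     resource_group_name="<RESOURCE_GROUP>",
#     workspace_name="<AML_WORKSPACE_NAME>",
# )
```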
> [!IMPORTANT] > This code snippet expects the workspace configuration json file to be saved in the current directory or its parent. For more information on creating a workspace, see [Create workspace resources](quickstart-create-resources.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
Create a `MLClient` object to manage Azure Machine Learning services.
Now that you have a handle to your workspace, you can submit your pipeline job.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=submit-pipeline)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=submit-pipeline)]
The code above submits this image classification pipeline job to an experiment called `pipeline_samples`, creating the experiment automatically if it doesn't exist. The `pipeline_input_data` uses `fashion_ds`.
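As a sketch, assuming `pipeline_job` is the pipeline object built earlier, the submission boils down to:

```python
# Submit the pipeline job to the "pipeline_samples" experiment.
pipeline_job = ml_client.jobs.create_or_update(
    pipeline_job, experiment_name="pipeline_samples"
)
```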
The call to `submit` the `Experiment` completes quickly, and produces output sim
You can monitor the pipeline run by opening the link or you can block until it completes by running:
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=stream-pipeline)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=stream-pipeline)]
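Blocking until completion is roughly a one-liner, assuming `pipeline_job` from the submission step:

```python
# Stream logs and wait for the pipeline job to finish.
ml_client.jobs.stream(pipeline_job.name)
```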
> [!IMPORTANT] > The first pipeline run takes roughly *15 minutes*. All dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned. Running the pipeline again takes significantly less time because those resources are reused instead of created. However, total run time for the pipeline depends on the workload of your scripts and the processes that are running in each pipeline step.
You can check the logs and outputs of each component by right clicking the compo
In the previous section, you built a pipeline using three components to complete an image classification task end to end. You can also register components to your workspace so that they can be shared and reused within the workspace. The following is an example of registering the prep-data component.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=register-component)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=register-component)]
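As a rough sketch of the registration round trip, assuming `prepare_data_component` was loaded earlier in this tutorial:

```python
# Register the component in the workspace so it can be shared and reused.
registered = ml_client.components.create_or_update(prepare_data_component)

# Later, retrieve it by name and version.
fetched = ml_client.components.get(name=registered.name, version=registered.version)
```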
Using `ml_client.components.get()`, you can get a registered component by name and version. Using `ml_client.components.create_or_update()`, you can register a component previously loaded from a Python function or YAML.

## Next steps
-* For more examples of how to build pipelines by using the machine learning SDK, see the [example repository](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/pipelines).
+* For more examples of how to build pipelines by using the machine learning SDK, see the [example repository](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/pipelines).
* For how to use studio UI to submit and debug your pipeline, refer to [how to create pipelines using component in the UI](how-to-create-component-pipelines-ui.md).
* For how to use Azure Machine Learning CLI to create components and pipelines, refer to [how to create pipelines using component with CLI](how-to-create-component-pipelines-cli.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
The following example demonstrates how to create a compute instance:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=ci_basic)]
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=ci_basic)]
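In outline, the SDK v2 call is similar to the following sketch; the instance name and VM size are placeholders:

```python
from azure.ai.ml.entities import ComputeInstance

# Define and create a compute instance (name and size are illustrative).
ci = ComputeInstance(name="my-compute-instance", size="STANDARD_DS3_v2")
ml_client.compute.begin_create_or_update(ci)
```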
For more information on the classes, methods, and parameters used in this example, see the following reference documents:
In the examples below, the name of the compute instance is stored in the variabl
* Get status
- [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=ci_basic_state)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=ci_basic_state)]
* Stop
- [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=stop_compute)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=stop_compute)]
* Start
- [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=start_compute)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=start_compute)]
* Restart
- [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=restart_compute)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=restart_compute)]
* Delete
- [!notebook-python[](~/azureml-examples-main/sdk/resources/compute/compute.ipynb?name=delete_compute)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=delete_compute)]
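The operations in the list above map onto `MLClient.compute` methods; a condensed sketch with a placeholder instance name looks like this:

```python
ci_name = "my-compute-instance"  # placeholder

ci = ml_client.compute.get(ci_name)        # get the instance; its state reflects status
ml_client.compute.begin_stop(ci_name)      # stop
ml_client.compute.begin_start(ci_name)     # start
ml_client.compute.begin_restart(ci_name)   # restart
ml_client.compute.begin_delete(ci_name)    # delete
```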
# [Azure CLI](#tab/azure-cli)
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
The example template has two **required** parameters:
> [!TIP] > While the template associated with this document creates a new Azure Container Registry, you can also create a new workspace without creating a container registry. One will be created when you perform an operation that requires a container registry, such as training or deploying a model. >
-> You can also reference an existing container registry or storage account in the Azure Resource Manager template, instead of creating a new one. When doing so, you must either [use a managed identity](how-to-use-managed-identities.md) (preview), or [enable the admin account](../container-registry/container-registry-authentication.md#admin-account) for the container registry.
+> You can also reference an existing container registry or storage account in the Azure Resource Manager template, instead of creating a new one. When doing so, you must either [use a managed identity](how-to-identity-based-service-authentication.md) (preview), or [enable the admin account](../container-registry/container-registry-authentication.md#admin-account) for the container registry.
[!INCLUDE [machine-learning-delete-acr](../../includes/machine-learning-delete-acr.md)]
machine-learning How To Customize Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-customize-compute-instance.md
Script arguments can be referred to in the script as $1, $2, etc.
If your script does something specific to azureuser, such as installing a conda environment or a Jupyter kernel, you'll have to put it within a `sudo -u azureuser` block like this. The command `sudo -u azureuser` changes the current working directory to `/home/azureuser`. You also can't access the script arguments in this block.
-For other example scripts, see [azureml-examples](https://github.com/Azure/azureml-examples/tree/main/setup-ci).
+For other example scripts, see [azureml-examples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/setup/setup-ci).
You can also use the following environment variables in your script:
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
This guide assumes you have the following items installed locally on your PC.
For more information, see the guide on [how to prepare your system to deploy managed online endpoints](how-to-deploy-managed-online-endpoints.md#prepare-your-system).
-The examples in this article can be found in the [Debug online endpoints locally in Visual Studio Code](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb) notebook within the[azureml-examples](https://github.com/azure/azureml-examples) repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory `sdk/endpoints/online/managed`.
+The examples in this article can be found in the [Debug online endpoints locally in Visual Studio Code](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb) notebook within the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory `sdk/endpoints/online/managed`.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Using the `MLClient` created earlier, we will get a handle to the endpoint. The
- `request_file` - File with request data
- `deployment_name` - Name of the specific deployment to test in an endpoint
-We will send a sample request using a json file. The sample json is in the [example repository](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints/online/custom-container).
+We will send a sample request using a json file. The sample json is in the [example repository](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints/online/custom-container).
```python # test the blue deployment with some sample data
machine-learning How To Deploy Managed Online Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoint-sdk-v2.md
Using the `MLClient` created earlier, we'll get a handle to the endpoint. The en
* `request_file` - File with request data
* `deployment_name` - Name of the specific deployment to test in an endpoint
-We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/model-1/sample-request.json) file.
+We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/endpoints/online/model-1/sample-request.json) file.
```python # test the blue deployment with some sample data
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
Try these next steps to learn how to use the Azure Machine Learning SDK (v2) for Python: * [Managed online endpoint safe rollout](how-to-safely-rollout-managed-endpoints-sdk-v2.md)
-* Explore online endpoint samples - [https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints)
+* Explore online endpoint samples - [https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints)
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
The following diagram illustrates that you can generate the code for automated M
* You can run your code via a Jupyter notebook in an [Azure Machine Learning compute instance](), which contains the latest Azure ML SDK already installed. The compute instance comes with a ready-to-use Conda environment that is compatible with the automated ML code generation (preview) capability.
- * Alternatively, you can create a new local Conda environment on your local machine and then install the latest Azure ML SDK. [How to install AutoML client SDK in Conda environment with the `automl` package](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml#setup-using-a-local-conda-environment).
+ * Alternatively, you can create a new local Conda environment on your local machine and then install the latest Azure ML SDK. [How to install AutoML client SDK in Conda environment with the `automl` package](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml#setup-using-a-local-conda-environment).
## Code generation with the SDK
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-data-access.md
The same behavior applies when you:
### Model training on private data
-Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a [managed identity](how-to-use-managed-identities.md) of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a [managed identity](how-to-identity-based-service-authentication.md) of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
## Prerequisites
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
+
+ Title: Set up service authentication
+
+description: Learn how to set up and configure authentication between Azure ML and other Azure services.
++++++ Last updated : 09/23/2022++++
+# Set up authentication between Azure ML and other services
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
+> * [v1](./v1/how-to-use-managed-identities.md)
+> * [v2 (current version)](./how-to-identity-based-service-authentication.md)
+
+Azure Machine Learning is composed of multiple Azure services. There are multiple ways that authentication can happen between Azure Machine Learning and the services it relies on.
++
+* The Azure Machine Learning workspace uses a __managed identity__ to communicate with other services. By default, this is a system-assigned managed identity. You can also use a user-assigned managed identity instead.
+* Azure Machine Learning uses Azure Container Registry (ACR) to store Docker images used to train and deploy models. If you allow Azure ML to automatically create ACR, it will enable the __admin account__.
+* The Azure ML compute cluster uses a __managed identity__ to retrieve connection information for datastores from Azure Key Vault and to pull Docker images from ACR. You can also configure identity-based access to datastores, which will instead use the managed identity of the compute cluster.
+* Data access can happen along multiple paths depending on the data storage service and your configuration. For example, authentication to the datastore may use an account key, token, security principal, managed identity, or user identity.
+
+ For information on how data access is authenticated, see the [Data administration](how-to-administrate-data-authentication.md) article.
+
+* Managed online endpoints can use a managed identity to access Azure resources when performing inference. For more information, see [Access Azure resources from an online endpoint](how-to-access-resources-from-endpoints-managed-identities.md).
+
+## Prerequisites
++
+* To assign roles, the login for your Azure subscription must have the [Managed Identity Operator](../role-based-access-control/built-in-roles.md#managed-identity-operator) role, or other role that grants the required actions (such as __Owner__).
+
+* You must be familiar with creating and working with [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).
+
+## User-assigned managed identity
+
+You can add a user-assigned managed identity when creating an Azure Machine Learning workspace from the [Azure portal](https://portal.azure.com). Use the following steps while creating the workspace:
+
+1. From the __Basics__ page, select the Azure Storage Account, Azure Container Registry, and Azure Key Vault you want to use with the workspace.
+1. From the __Advanced__ page, select __User-assigned identity__ and then select the managed identity to use.
+
+You can also use [an ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/) to create a workspace with user-assigned managed identity.
+
+> [!TIP]
+> For a workspace with [customer-managed keys for encryption](concept-data-encryption.md), you can pass in a user-assigned managed identity to authenticate from storage to Key Vault. Use the `user-assigned-identity-for-cmk-encryption` (CLI) or `user_assigned_identity_for_cmk_encryption` (SDK) parameters to pass in the managed identity. This managed identity can be the same or different as the workspace primary user assigned managed identity.
+
+### Compute cluster
+
+> [!NOTE]
+> Azure Machine Learning compute clusters support only **one system-assigned identity** or **multiple user-assigned identities**, not both concurrently.
+
+The **default managed identity** is the system-assigned managed identity or the first user-assigned managed identity.
+
+During a run there are two applications of an identity:
+
+1. The system uses an identity to set up the user's storage mounts, container registry, and datastores.
+
+ * In this case, the system will use the default-managed identity.
+
+1. You apply an identity to access resources from within the code for a submitted job:
+
+ * In this case, provide the *client_id* corresponding to the managed identity you want to use to retrieve a credential.
+ * Alternatively, get the user-assigned identity's client ID through the *DEFAULT_IDENTITY_CLIENT_ID* environment variable.
+
+ For example, to retrieve a token for a datastore with the default-managed identity:
+
+ ```python
+ import os
+ from azure.identity import ManagedIdentityCredential
+
+ client_id = os.environ.get('DEFAULT_IDENTITY_CLIENT_ID')
+ credential = ManagedIdentityCredential(client_id=client_id)
+ token = credential.get_token('https://storage.azure.com/')
+ ```
+
+To configure a compute cluster with managed identity, use one of the following methods:
+
+# [Azure CLI](#tab/cli)
++
+```azurecli
+az ml compute create -f create-cluster.yml
+```
+
+Where the contents of *create-cluster.yml* are as follows:
++
+For comparison, the following example is from a YAML file that creates a cluster that uses a system-assigned managed identity:
++
+If you have an existing compute cluster, you can change between user-managed and system-managed identity. The following examples demonstrate how to change the configuration:
+
+__User-assigned managed identity__
++
+__System-assigned managed identity__
++
+# [Python SDK](#tab/python)
++
+```python
+from azure.ai.ml.entities import UserAssignedIdentity, IdentityConfiguration, AmlCompute
+from azure.ai.ml.constants import IdentityType
+
+# Create an identity configuration from the user-assigned managed identity
+managed_identity = UserAssignedIdentity(resource_id="/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>")
+identity_config = IdentityConfiguration(type = IdentityType.USER_ASSIGNED, user_assigned_identities=[managed_identity])
+
+# specify aml compute name.
+cpu_compute_target = "cpu-cluster"
+
+try:
+ ml_client.compute.get(cpu_compute_target)
+except Exception:
+ print("Creating a new cpu compute target...")
+ # Pass the identity configuration
+ compute = AmlCompute(
+ name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4, identity=identity_config
+ )
+ ml_client.compute.begin_create_or_update(compute)
+```
+
+# [Studio](#tab/azure-studio)
+
+During cluster creation or when editing compute cluster details, in the **Advanced settings**, toggle **Assign a managed identity** and specify a system-assigned identity or user-assigned identity.
+++
+## Scenario: Azure Container Registry without admin user
+
+When you disable the admin user for ACR, Azure ML uses a managed identity to build and pull Docker images. There are two workflows when configuring Azure ML to use an ACR with the admin user disabled:
+
+* Allow Azure ML to create the ACR instance and then disable the admin user afterwards.
+* Bring an existing ACR with the admin user already disabled.
+
+### Azure ML with auto-created ACR instance
+
+1. Create a new Azure Machine Learning workspace.
+1. Perform an action that requires Azure Container Registry. For example, the [Tutorial: Train your first model](tutorial-1st-experiment-sdk-train.md).
+1. Get the name of the ACR created for the workspace.
+
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
+
+ ```azurecli-interactive
+ az ml workspace show -w <my workspace> \
+ -g <my resource group>
+ --query containerRegistry
+ ```
+
+ This command returns a value similar to the following text. You only want the last portion of the text, which is the ACR instance name:
+
+ ```output
+ /subscriptions/<subscription id>/resourceGroups/<my resource group>/providers/Microsoft.ContainerRegistry/registries/<ACR instance name>
+ ```
+
+1. Update the ACR to disable the admin user:
+
+ ```azurecli-interactive
+ az acr update --name <ACR instance name> --admin-enabled false
+ ```
+
+### Bring your own ACR
+
+If the ACR admin user is disallowed by subscription policy, you should first create the ACR without the admin user, and then associate it with the workspace. Also, if you have an existing ACR with the admin user disabled, you can attach it to the workspace.
+
+[Create ACR from Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) without setting ```--admin-enabled``` argument, or from Azure portal without enabling admin user. Then, when creating Azure Machine Learning workspace, specify the Azure resource ID of the ACR. The following example demonstrates creating a new Azure ML workspace that uses an existing ACR:
+
+> [!TIP]
+> To get the value for the `--container-registry` parameter, use the [az acr show](/cli/azure/acr#az-acr-show) command to show information for your ACR. The `id` field contains the resource ID for your ACR.
++
+```azurecli-interactive
+az ml workspace create -w <workspace name> \
+-g <workspace resource group> \
+-l <region> \
+--container-registry /subscriptions/<subscription id>/resourceGroups/<acr resource group>/providers/Microsoft.ContainerRegistry/registries/<acr name>
+```
+
+### Create compute with managed identity to access Docker images for training
+
+To access the workspace ACR, create a machine learning compute cluster with system-assigned managed identity enabled. You can enable the identity from the Azure portal or studio when creating the compute, or from the Azure CLI using the following command. For more information, see [using managed identity with compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+
+# [Azure CLI](#tab/cli)
++
+```azurecli-interactive
+az ml compute create --name <cluster name> --type amlcompute --identity-type systemassigned
+```
+
+# [Python](#tab/python)
++
+```python
+from azure.ai.ml.entities import IdentityConfiguration, AmlCompute
+from azure.ai.ml.constants import IdentityType
+
+# Create an identity configuration for a system-assigned managed identity
+identity_config = IdentityConfiguration(type = IdentityType.SYSTEM_ASSIGNED)
+
+# specify aml compute name.
+cpu_compute_target = "cpu-cluster"
+
+try:
+ ml_client.compute.get(cpu_compute_target)
+except Exception:
+ print("Creating a new cpu compute target...")
+ # Pass the identity configuration
+ compute = AmlCompute(
+ name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4, identity=identity_config
+ )
+ ml_client.compute.begin_create_or_update(compute)
+```
++
+# [Studio](#tab/azure-studio)
+
+For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+++
+A managed identity is automatically granted ACRPull role on workspace ACR to enable pulling Docker images for training.
+
+> [!NOTE]
+> If you create compute first, before workspace ACR has been created, you have to assign the ACRPull role manually.
+
+### Use Docker images for inference
+
+Once you've configured ACR without admin user as described earlier, you can access Docker images for inference without admin keys from your Azure Kubernetes service (AKS). When you create or attach AKS to workspace, the cluster's service principal is automatically assigned ACRPull access to workspace ACR.
+
+> [!NOTE]
+> If you bring your own AKS cluster, the cluster must have service principal enabled instead of managed identity.
+
+## Scenario: Use a private Azure Container Registry
+
+By default, Azure Machine Learning uses Docker base images that come from a public repository managed by Microsoft. It then builds your training or inference environment on those images. For more information, see [What are ML environments?](concept-environments.md).
+
+To use a custom base image internal to your enterprise, you can use managed identities to access your private ACR. There are two use cases:
+
+ * Use base image for training as is.
+ * Build Azure Machine Learning managed image with custom image as a base.
+
+### Pull Docker base image to machine learning compute cluster for training as is
+
+Create machine learning compute cluster with system-assigned managed identity enabled as described earlier. Then, determine the principal ID of the managed identity.
++
+```azurecli-interactive
+az ml compute show --name <cluster name> -w <workspace> -g <resource group>
+```
+
+Optionally, you can update the compute cluster to assign a user-assigned managed identity:
++
+```azurecli-interactive
+az ml compute update --name <cluster name> --user-assigned-identities <my-identity-id>
+```
+
+To allow the compute cluster to pull the base images, grant the managed identity the ACRPull role on the private ACR:
++
+```azurecli-interactive
+az role assignment create --assignee <principal ID> \
+--role acrpull \
+--scope "/subscriptions/<subscription ID>/resourceGroups/<private ACR resource group>/providers/Microsoft.ContainerRegistry/registries/<private ACR name>"
+```
+
+Finally, create an environment and specify the base image location in the [environment YAML file](reference-yaml-environment.md).
+++
+```azurecli
+az ml environment create --file <yaml file>
+```
+
+You can now use the environment in a [training job](how-to-train-cli.md).
+
+### Build Azure Machine Learning managed environment into base image from private ACR for training or inference
++
+In this scenario, Azure Machine Learning service builds the training or inference environment on top of a base image you supply from a private ACR. Because the image build task happens on the workspace ACR using ACR Tasks, you must perform more steps to allow access.
+
+1. Create __user-assigned managed identity__ and grant the identity ACRPull access to the __private ACR__.
+1. Grant the workspace __managed identity__ a __Managed Identity Operator__ role on the __user-assigned managed identity__ from the previous step. This role allows the workspace to assign the user-assigned managed identity to ACR Task for building the managed environment.
+
+ 1. Obtain the principal ID of workspace system-assigned managed identity:
+
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
+
+ ```azurecli-interactive
+ az ml workspace show -w <workspace name> -g <resource group> --query identityPrincipalId
+ ```
+
+ 1. Grant the Managed Identity Operator role:
+
+ ```azurecli-interactive
+ az role assignment create --assignee <principal ID> --role managedidentityoperator --scope <user-assigned managed identity resource ID>
+ ```
+
+ The user-assigned managed identity resource ID is the Azure resource ID of the user-assigned identity, in the format `/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user-assigned managed identity name>`.
+
+1. Specify the external ACR and client ID of the __user-assigned managed identity__ in workspace connections by using the `az ml connection` command. This command accepts a YAML file that provides information on the connection. The following example demonstrates the format for specifying a managed identity. Replace the `client_id` and `resource_id` values with the ones for your managed identity:
+
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/resources/connections/container-registry-managed-identity.yml":::
+
+ The following command demonstrates how to use the YAML file to create a connection with your workspace. Replace `<yaml file>`, `<workspace name>`, and `<resource group>` with the values for your configuration:
+
+ ```azurecli-interactive
+ az ml connection create --file <yml file> -w <workspace name> -g <resource group>
+ ```
+
+1. Once the configuration is complete, you can use the base images from private ACR when building environments for training or inference. The following code snippet demonstrates how to specify the base image ACR and image name in an environment definition:
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+ ```yml
+ $schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
+ name: private-acr-example
+ image: <acr url>/pytorch/pytorch:latest
+ description: Environment created from private ACR.
+ ```
+
+## Next steps
+
+* Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md)
+* Learn about [identity-based data access](how-to-identity-based-data-access.md)
+* Learn about [managed identities on compute cluster](how-to-create-attach-compute-cluster.md).
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
inputs = {'model_name': 'maskrcnn_resnet50_fpn', # enter the maskrcnn model nam
-Download and keep the `ONNX_batch_model_generator_automl_for_images.py` file in the current directory to submit the script. Use the following command job to submit the script `ONNX_batch_model_generator_automl_for_images.py` available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory.
+Download and keep the `ONNX_batch_model_generator_automl_for_images.py` file in the current directory to submit the script. Use the following command job to submit the script `ONNX_batch_model_generator_automl_for_images.py` available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory.
# [Multi-class image classification ](#tab/multi-class) For multi-class image classification, the generated ONNX model for the best child-run supports batch scoring by default. Therefore, no model specific arguments are needed for this task type and you can skip to the [Load the labels and ONNX model files](#load-the-labels-and-onnx-model-files) section.
Every ONNX model has a predefined set of input and output formats.
# [Multi-class image classification](#tab/multi-class)
-This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
+This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
### Input format
The output is an array of logits for all the classes/labels.
# [Multi-label image classification](#tab/multi-label)
-This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
+This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
### Input format
The output is an array of logits for all the classes/labels.
# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
The following table describes boxes, labels and scores returned for each sample
# [Object detection with YOLO](#tab/object-detect-yolo)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
Each cell in the list indicates box detections of a sample with shape `(n_boxes,
# [Instance segmentation](#tab/instance-segmentation)
-For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
+For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
>[!IMPORTANT] > Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
batch, channel, height_onnx, width_onnx ```
-For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
```python import glob
machine-learning How To Integrate Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-integrate-azure-policy.md
To configure this policy, set the effect parameter to __DeployIfNotExists__. Set
### Workspace should use user-assigned managed identity
-Controls whether a workspace is created using a system-assigned managed identity (default) or a user-assigned managed identity. The managed identity for the workspace is used to access associated resources such as Azure Storage, Azure Container Registry, Azure Key Vault, and Azure Application Insights. For more information, see [Use managed identities with Azure Machine Learning](how-to-use-managed-identities.md).
+Controls whether a workspace is created using a system-assigned managed identity (default) or a user-assigned managed identity. The managed identity for the workspace is used to access associated resources such as Azure Storage, Azure Container Registry, Azure Key Vault, and Azure Application Insights. For more information, see [Use managed identities with Azure Machine Learning](how-to-identity-based-service-authentication.md).
To configure this policy, set the effect parameter to __audit__, __deny__, or __disabled__. If set to __audit__, you can create a workspace without specifying a user-assigned managed identity. A system-assigned identity is used and a warning event is created in the activity log.
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
accuracy = accuracy_score(y_test, y_pred)
``` > [!TIP]
-> If you are using Machine Learning pipelines, like for instance [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that flavor for logging models. Models are automatically logged when the `fit()` method is called on the pipeline object. The notebook [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb) demostrates how to log a model with preprocessing using pipelines.
+> If you are using Machine Learning pipelines, like for instance [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that flavor for logging models. Models are automatically logged when the `fit()` method is called on the pipeline object. The notebook [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb) demonstrates how to log a model with preprocessing using pipelines.
## Logging models with a custom signature, environment or samples
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
As your needs change or requirements for automation increase you can also manage
1. [Install the SDK v2](https://aka.ms/sdk-v2-install). 1. Provide your subscription details
- [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=subscription_id)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=subscription_id)]
1. Get a handle to the subscription. `ml_client` will be used in all the Python code in this article.
- [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=ml_client)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=ml_client)]
* (Optional) If you have multiple accounts, add the tenant ID of the Azure Active Directory you wish to use into the `DefaultAzureCredential`. Find your tenant ID from the [Azure portal](https://portal.azure.com) under **Azure Active Directory, External Identities**.
You can create a workspace [directly in Azure Machine Learning studio](./quickst
* **Default specification.** By default, dependent resources and the resource group will be created automatically. This code creates a workspace named `myworkspace` and a resource group named `myresourcegroup` in `eastus2`.
- [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=basic_workspace_name)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=basic_workspace_name)]
* **Use existing Azure resources**. You can also create a workspace that uses existing Azure resources with the Azure resource ID format. Find the specific Azure resource IDs in the Azure portal or with the SDK. This example assumes that the resource group, storage account, key vault, App Insights, and container registry already exist.
- [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=basic_ex_workspace_name)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=basic_ex_workspace_name)]
For more information, see [Workspace SDK reference](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace).
If you have problems in accessing your subscription, see [Set up authentication
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=basic_private_link_workspace_name)]
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=basic_private_link_workspace_name)]
This class requires an existing virtual network.
When running machine learning tasks using the SDK, you require a MLClient object
``` * **From parameters**: There's no need to have a config.json file available if you use this approach.
- [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=ws)]
+ [!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=ws)]
If you have problems in accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md), as well as the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
You can also search for workspace inside studio. See [Search for Azure Machine
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=my_ml_client)]
-[!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=ws_name)]
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=my_ml_client)]
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=ws_name)]
To get details of a specific workspace:
-[!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=ws_location)]
+[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/workspace/workspace.ipynb?name=ws_location)]
# [Portal](#tab/azure-portal)
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
In this article, you learn how to prepare image data for training computer visio
To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an `MLTable`. You can create an `MLTable` from labeled training data in JSONL format.
-If your labeled training data is in a different format (like, pascal VOC or COCO), you can use a conversion script to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model.
+If your labeled training data is in a different format (such as Pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model.
## Prerequisites
If your labeled training data is in a different format (like, pascal VOC or COCO
## Get labeled data

In order to train computer vision models using AutoML, you first need to get labeled training data. The images need to be uploaded to the cloud, and label annotations need to be in JSONL format. You can either use the Azure ML Data Labeling tool to label your data or start with pre-labeled image data.
-### Using Azure ML Data Labeling tool to label your training data
+## Using Azure ML Data Labeling tool to label your training data
If you don't have pre-labeled data, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images. This tool automatically generates the data required for training in the accepted format. It helps to create, manage, and monitor data labeling tasks for
It helps to create, manage, and monitor data labeling tasks for
+ Object detection (bounding box)
+ Instance segmentation (polygon)
-If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure ML Dataset](how-to-create-image-labeling-projects.md#export-the-labels). You can then access the exported dataset under the 'Datasets' tab in Azure ML Studio, and download the underlying JSONL file from the Dataset details page under Data sources. The downloaded JSONL file can then be used to create an `MLTable` that can be used by automated ML for training computer vision models.
+If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure ML Dataset](how-to-create-image-labeling-projects.md#export-the-labels) and then access the dataset under the 'Datasets' tab in Azure ML Studio. This exported dataset can then be passed as an input using the `azureml:<tabulardataset_name>:<version>` format. Here is an example of how to pass an existing dataset as input for training computer vision models.
-### Using pre-labeled training data
+# [Azure CLI](#tab/cli)
++
+```yaml
+training_data:
+ path: azureml:odFridgeObjectsTrainingDataset:1
+ type: mltable
+ mode: direct
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+```python
+from azure.ai.ml.constants import AssetTypes, InputOutputModes
+from azure.ai.ml import Input
+
+# Training MLTable with v1 TabularDataset
+my_training_data_input = Input(
+ type=AssetTypes.MLTABLE, path="azureml:odFridgeObjectsTrainingDataset:1",
+ mode=InputOutputModes.DIRECT
+)
+```
+++
+## Using pre-labeled training data
If you have previously labeled data that you would like to use to train your model, you will first need to upload the images to the default Azure Blob Storage of your Azure ML Workspace and register it as a data asset.

# [Azure CLI](#tab/cli)
az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE]
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
Next, you will need to get the label annotations in JSONL format. The schema of labeled data depends on the computer vision task at hand. Refer to [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md) to learn more about the required JSONL schema for each task type.
-If your training data is in a different format (like, pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/automl-standalone-jobs).
+If your training data is in a different format (such as Pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs).
+
-## Create MLTable
+### Create MLTable
Once you have your labeled data in JSONL format, you can use it to create an `MLTable` as shown below. MLTable packages your data into a consumable object for training.
-You can then pass in the `MLTable` as a data input for your AutoML training job.
+You can then pass in the `MLTable` as a [data input for your AutoML training job](./how-to-auto-train-image-models.md#consume-data).
## Next steps

* [Train computer vision models with automated machine learning](how-to-auto-train-image-models.md).
* [Train a small object detection model with automated machine learning](how-to-use-automl-small-object-detect.md).
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
The following example defines a pipeline containing three nodes and moves data b
* `train_node` that trains a CNN model with Keras using the training data, `mnist_train.csv` . * `score_node` that scores the model using test data, `mnist_test.csv`.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
## Next steps
machine-learning How To Safely Rollout Managed Endpoints Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints-sdk-v2.md
Using the `MLClient` created earlier, we'll get a handle to the endpoint. The en
* `request_file` - File with request data * `deployment_name` - Name of the specific deployment to test in an endpoint
-We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/model-1/sample-request.json) file.
+We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/endpoints/online/model-1/sample-request.json) file.
```python # test the blue deployment with some sample data
ml_client.begin_create_or_update(green_deployment)
### Test the new deployment
-Though green has 0% of traffic allocated, you can still invoke the endpoint and deployment with [json](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/model-2/sample-request.json) file.
+Though green has 0% of traffic allocated, you can still invoke the endpoint and deployment with [json](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/endpoints/online/model-2/sample-request.json) file.
```python ml_client.online_endpoints.invoke(
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
``` ## Next steps-- [Explore online endpoint samples](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints)
+- [Explore online endpoint samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints)
- [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) - [Monitor managed online endpoints](how-to-monitor-online-endpoints.md) - [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
List continues below.
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule_recurrence)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=create_schedule_recurrence)]
`RecurrenceTrigger` contains following properties:
List continues below.
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule_cron)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=create_schedule_cron)]
The `CronTrigger` section defines the schedule details and contains following properties:
When defining a schedule using an existing job, you can change the runtime setti
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=change_run_settings)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=change_run_settings)]
After you create the schedule yaml, you can use the following command to create
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=create_schedule)]
After you create the schedule yaml, you can use the following command to create
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=show_schedule)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=show_schedule)]
After you create the schedule yaml, you can use the following command to create
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=list_schedule)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=list_schedule)]
After you create the schedule yaml, you can use the following command to create
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=create_schedule)]
After you create the schedule yaml, you can use the following command to create
# [Python SDK](#tab/python)
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=disable_schedule)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=disable_schedule)]
After you create the schedule yaml, you can use the following command to create
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=enable_schedule)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=enable_schedule)]
You can also apply [Azure CLI JMESPath query](/cli/azure/query-azure-cli) to que
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=delete_schedule)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/schedules/job-schedule.ipynb?name=delete_schedule)]
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
The following diagram shows the overall architecture of this example:
To create the resources, use the following Azure CLI commands. Replace `<UNIQUE_SUFFIX>` with a unique suffix for the resources that are created. ### Create the virtual machine jump box
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Previously updated : 07/18/2022 Last updated : 09/23/2022
> * [v1](./v1/how-to-setup-authentication.md) > * [v2 (current version)](how-to-setup-authentication.md)
-Learn how to set up authentication to your Azure Machine Learning workspace from the Azure CLI or Azure Machine Learning SDK v2 (preview). Authentication to your Azure Machine Learning workspace is based on __Azure Active Directory__ (Azure AD) for most things. In general, there are four authentication workflows that you can use when connecting to the workspace:
+Learn how to set up client to Azure authentication to your Azure Machine Learning workspace. Specifically, authenticating from the Azure CLI or the Azure Machine Learning SDK v2 (preview). Authentication to your Azure Machine Learning workspace is based on __Azure Active Directory__ (Azure AD) for most things. In general, there are four authentication workflows that you can use when connecting to the workspace:
* __Interactive__: You use your account in Azure Active Directory to either directly authenticate, or to get a token that is used for authentication. Interactive authentication is used during _experimentation and iterative development_. Interactive authentication enables you to control access to resources (such as a web service) on a per-user basis.
For more on Azure AD, see [What is Azure Active Directory authentication](..//ac
Once you've created the Azure AD accounts, see [Manage access to Azure Machine Learning workspace](how-to-assign-roles.md) for information on granting them access to the workspace and other operations in Azure Machine Learning.
+## Use interactive authentication
+
+# [Python SDK v2](#tab/sdk)
++
+Interactive authentication uses the [Azure Identity package for Python](/python/api/overview/azure/identity-readme). Most examples use `DefaultAzureCredential` to access your credentials. When a token is needed, it requests one using multiple identities (`EnvironmentCredential`, `ManagedIdentityCredential`, `SharedTokenCacheCredential`, `VisualStudioCodeCredential`, `AzureCliCredential`, `AzurePowerShellCredential`) in turn, stopping when one provides a token. For more information, see the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential) class reference.
+
+The following is an example of using `DefaultAzureCredential` to authenticate. If authentication using `DefaultAzureCredential` fails, a fallback of authenticating through your web browser is used instead.
+
+```python
+from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
+
+try:
+ credential = DefaultAzureCredential()
+ # Check if given credential can get token successfully.
+ credential.get_token("https://management.azure.com/.default")
+except Exception as ex:
+ # Fall back to InteractiveBrowserCredential if DefaultAzureCredential does not work.
+ # This opens a browser page for interactive sign-in.
+ credential = InteractiveBrowserCredential()
+```
+
+After the credential object has been created, the [MLClient](/python/api/azure-ai-ml/azure.ai.ml.mlclient) class is used to connect to the workspace. For example, the following code uses the `from_config()` method to load connection information:
+
+```python
+try:
+ ml_client = MLClient.from_config(credential=credential)
+except Exception as ex:
+ # NOTE: Update following workspace information to contain
+ # your subscription ID, resource group name, and workspace name
+ client_config = {
+ "subscription_id": "<SUBSCRIPTION_ID>",
+ "resource_group": "<RESOURCE_GROUP>",
+ "workspace_name": "<AZUREML_WORKSPACE_NAME>",
+ }
+
+ # write and reload from config file
+ import json, os
+
+ config_path = "../.azureml/config.json"
+ os.makedirs(os.path.dirname(config_path), exist_ok=True)
+ with open(config_path, "w") as fo:
+ fo.write(json.dumps(client_config))
+ ml_client = MLClient.from_config(credential=credential, path=config_path)
+
+print(ml_client)
+```
+
+# [Azure CLI](#tab/cli)
+
+When using the Azure CLI, the `az login` command is used to authenticate the CLI session. For more information, see [Get started with Azure CLI](/cli/azure/get-started-with-azure-cli).
+++ ## Configure a service principal To use a service principal (SP), you must first create the SP. Then grant it access to your workspace. As mentioned earlier, Azure role-based access control (Azure RBAC) is used to control access, so you must also decide what access to grant the SP.
The easiest way to create an SP and grant access to your workspace is by using t
For more information, see [Set up managed identity for compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity). -
-## Use interactive authentication
-
-# [Python SDK v2](#tab/sdk)
--
-Interactive authentication uses the [Azure Identity package for Python](/python/api/overview/azure/identity-readme). Most examples use `DefaultAzureCredential` to access your credentials. When a token is needed, it requests one using multiple identities (`EnvironmentCredential`, `ManagedIdentityCredential`, `SharedTokenCacheCredential`, `VisualStudioCodeCredential`, `AzureCliCredential`, `AzurePowerShellCredential`) in turn, stopping when one provides a token. For more information, see the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential) class reference.
-
-The following is an example of using `DefaultAzureCredential` to authenticate. If authentication using `DefaultAzureCredential` fails, a fallback of authenticating through your web browser is used instead.
-
-```python
-from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
-
-try:
- credential = DefaultAzureCredential()
- # Check if given credential can get token successfully.
- credential.get_token("https://management.azure.com/.default")
-except Exception as ex:
- # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
- # This will open a browser page for
- credential = InteractiveBrowserCredential()
-```
-
-After the credential object has been created, the [MLClient](/python/api/azure-ai-ml/azure.ai.ml.mlclient) class is used to connect to the workspace. For example, the following code uses the `from_config()` method to load connection information:
-
-```python
-try:
- ml_client = MLClient.from_config(credential=credential)
-except Exception as ex:
- # NOTE: Update following workspace information to contain
- # your subscription ID, resource group name, and workspace name
- client_config = {
- "subscription_id": "<SUBSCRIPTION_ID>",
- "resource_group": "<RESOURCE_GROUP>",
- "workspace_name": "<AZUREML_WORKSPACE_NAME>",
- }
-
- # write and reload from config file
- import json, os
-
- config_path = "../.azureml/config.json"
- os.makedirs(os.path.dirname(config_path), exist_ok=True)
- with open(config_path, "w") as fo:
- fo.write(json.dumps(client_config))
- ml_client = MLClient.from_config(credential=credential, path=config_path)
-
-print(ml_client)
-```
-
-# [Azure CLI](#tab/cli)
-
-When using the Azure CLI, the `az login` command is used to authenticate the CLI session. For more information, see [Get started with Azure CLI](/cli/azure/get-started-with-azure-cli).
--- <a id="service-principal-authentication"></a> ## Use service principal authentication
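A minimal sketch of that flow with the Azure Identity library: create a `ClientSecretCredential` from the service principal's tenant ID, client ID, and secret (here read from illustrative environment variables), and pass it to `MLClient`.

```python
import os

from azure.ai.ml import MLClient
from azure.identity import ClientSecretCredential

# Service principal credentials; the environment variable names are illustrative.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_TENANT_ID"],
    client_id=os.environ["AZURE_CLIENT_ID"],
    client_secret=os.environ["AZURE_CLIENT_SECRET"],
)

ml_client = MLClient(
    credential=credential,
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AZUREML_WORKSPACE_NAME>",
)
```

Note that `DefaultAzureCredential` also picks up these environment variables through its `EnvironmentCredential` step, so an explicit `ClientSecretCredential` is only needed when you want to force service principal authentication.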
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
To create the key vault, see [Create a key vault](../key-vault/general/quick-cre
> For more information, see the following articles: > * [Provide access to key vault keys, certificates, and secrets](../key-vault/general/rbac-guide.md) > * [Assign a key vault access policy](../key-vault/general/assign-access-policy.md)
-> * [Use managed identities with Azure Machine Learning](how-to-use-managed-identities.md)
+> * [Use managed identities with Azure Machine Learning](how-to-identity-based-service-authentication.md)
1. From the [Azure portal](https://portal.azure.com), select the key vault instance. Then select __Keys__ from the left. 1. Select __+ Generate/import__ from the top of the page. Use the following values to create a key:
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
child_runs = mlflow.search_runs(
## Example notebooks
-The [MLflow with Azure ML notebooks](https://github.com/Azure/azureml-examples/tree/master/notebooks/using-mlflow) demonstrate and expand upon concepts presented in this article.
+The [MLflow with Azure ML notebooks](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/notebooks/using-mlflow) demonstrate and expand upon concepts presented in this article.
- * [Training and tracking a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models and combine multiple flavors into pipelines.
- * [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters and artifacts from Azure ML using MLflow.
+ * [Training and tracking a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models and combine multiple flavors into pipelines.
+ * [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters and artifacts from Azure ML using MLflow.
## Support matrix for querying runs and experiments
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-cli.md
Using `--depth 1` clones only the latest commit to the repository, which reduces
You can create an Azure Machine Learning compute cluster from the command line. For instance, the following commands will create one cluster named `cpu-cluster` and one named `gpu-cluster`. You are not charged for compute at this point as `cpu-cluster` and `gpu-cluster` will remain at zero nodes until a job is submitted. Learn more about how to [manage and optimize cost for AmlCompute](how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
As an example, you can train a convolutional neural network (CNN) on the CIFAR-1
The CIFAR-10 dataset in `torchvision` expects as input a directory that contains the `cifar-10-batches-py` directory. You can download the zipped source and extract it into a local directory: Then create an Azure Machine Learning data asset from the local directory, which will be uploaded to the default datastore: Optionally, remove the local file and directory: Registered data assets can be used as inputs to a job using the `path` field for a job input. The format is `azureml:<data_name>:<data_version>`, so for the CIFAR-10 dataset just created, it is `azureml:cifar-10-example:1`. You can optionally use the `azureml:<data_name>@latest` syntax instead if you want to reference the latest version of the data asset. Azure ML will resolve that reference to the explicit version.
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
Make sure your code follows these tips:
### Horovod example
-* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
+* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
### DeepSpeed
Make sure your code follows these tips:
### DeepSpeed example
-* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/deepspeed/cifar)
+* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/deepspeed/cifar)
### Environment variables from Open MPI
run = Experiment(ws, 'experiment_name').submit(run_config)
### PyTorch per-process-launch example -- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/pytorch/cifar-distributed)
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
### <a name="per-node-launch"></a> Using `torch.distributed.launch` (per-node-launch)
run = Experiment(ws, 'experiment_name').submit(run_config)
### PyTorch per-node-launch example -- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/pytorch/cifar-distributed)
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
### PyTorch Lightning
TF_CONFIG='{
### TensorFlow example -- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed)
+- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/tensorflow/mnist-distributed)
## <a name="infiniband"></a> Accelerating distributed GPU training with InfiniBand
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
ws = Workspace.from_config()
### Get the data
-The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
+The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
### Prepare training script
machine-learning How To Train Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md
cd azureml-examples/sdk
## Start on your local machine
-Start by running a script, which trains a model using `lightgbm`. The script file is available [here](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/src/main.py). The script needs three inputs
+Start by running a script, which trains a model using `lightgbm`. The script file is available [here](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/src/main.py). The script needs three inputs
* _input data_: You'll use data from a web location for your run - [web location](https://azuremlexamples.blob.core.windows.net/datasets/iris.csv). In this example, we're using a file in a remote location for brevity, but you can use a local file as well. * _learning-rate_: You'll use a learning rate of _0.9_
Let us tackle these steps below
### 1. Connect to the workspace
-To connect to the workspace, you need identifier parameters - a subscription, resource group and workspace name. You'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. To authenticate, you use the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true). Check this [example](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace.
+To connect to the workspace, you need identifier parameters - a subscription, resource group and workspace name. You'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. To authenticate, you use the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true). Check this [example](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace.
```python #import required libraries
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group,
You'll create a compute called `cpu-cluster` for your job, with this code:
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/configuration.ipynb?name=create-cpu-compute)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/configuration.ipynb?name=create-cpu-compute)]
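A minimal sketch of that compute creation with the Python SDK v2, assuming an authenticated `ml_client`; the VM size and scale settings are illustrative:

```python
from azure.ai.ml.entities import AmlCompute

# A CPU cluster that scales between 0 and 4 nodes and scales down when idle.
cpu_cluster = AmlCompute(
    name="cpu-cluster",
    type="amlcompute",
    size="STANDARD_DS3_V2",           # illustrative VM size
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=120,  # seconds
)

ml_client.compute.begin_create_or_update(cpu_cluster)
```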
### 3. Environment to run the script
To run your script on `cpu-cluster`, you need an environment, which has the requ
* A base docker image with a conda YAML to customize further * A docker build context
- Check this [example](https://github.com/Azure/azureml-examples/blob/main/sdk/assets/environment/environment.ipynb) on how to create custom environments.
+ Check this [example](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/assets/environment/environment.ipynb) on how to create custom environments.
You'll use a curated environment provided by Azure ML for `lightgbm` called `AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu`
You'll use a curated environment provided by Azure ML for `lightgm` called `Azur
To run this script, you'll use a `command`. The command will be run by submitting it as a `job` to Azure ML.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=create-command)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=create-command)]
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-command)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-command)]
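A minimal sketch of such a command job, assuming an authenticated `ml_client`; the source folder, script arguments, and display name sketch the pattern, while the data URL and curated environment come from the steps above:

```python
from azure.ai.ml import Input, command

# Configure what to run, with which inputs, in which environment, and on what compute.
job = command(
    code="./src",  # folder containing main.py
    command=(
        "python main.py --iris-csv ${{inputs.iris_csv}} "
        "--learning-rate ${{inputs.learning_rate}} --boosting ${{inputs.boosting}}"
    ),
    inputs={
        "iris_csv": Input(
            type="uri_file",
            path="https://azuremlexamples.blob.core.windows.net/datasets/iris.csv",
        ),
        "learning_rate": 0.9,
        "boosting": "gbdt",
    },
    environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest",
    compute="cpu-cluster",
    display_name="lightgbm-iris-example",  # illustrative
)

returned_job = ml_client.create_or_update(job)  # submit the job
```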
In the above, you configured:
To perform a sweep, there needs to be input(s) against which the sweep needs to
Let us improve our model by sweeping on `learning_rate` and `boosting` inputs to the script. In the previous step, you used a specific value for these parameters, but now you'll use a range or choice of values.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=search-space)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=search-space)]
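A minimal sketch of that parameterization: call the configured command job as a function and replace the fixed inputs with search-space expressions (the ranges and choices are illustrative):

```python
from azure.ai.ml.sweep import Choice, Uniform

# Replace fixed inputs with expressions for the sweep to sample from.
command_job_for_sweep = job(
    learning_rate=Uniform(min_value=0.01, max_value=0.9),
    boosting=Choice(values=["gbdt", "dart"]),
)
```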
Now that you've defined the parameters, run the sweep
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=configure-sweep)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=configure-sweep)]
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-sweep)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-sweep)]
As seen above, the `sweep` function allows the user to configure the following key aspects:
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-custom-image.md
print(compute_target.get_status().serialize())
## Configure your training job
-For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/main/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
+For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
Create a `ScriptRunConfig` resource to configure your job for running on the desired [compute target](v1/how-to-set-up-training-targets.md).
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
sweep_job.early_termination = MedianStoppingPolicy(
) ```
-The `command_job` is called as a function so we can apply the parameter expressions to the sweep inputs. The `sweep` function is then configured with `trial`, `sampling-algorithm`, `objective`, `limits`, and `compute`. The above code snippet is taken from the sample notebook [Run hyperparameter sweep on a Command or CommandComponent](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb). In this sample, the `learning_rate` and `boosting` parameters will be tuned. Early stopping of jobs will be determined by a `MedianStoppingPolicy`, which stops a job whose primary metric value is worse than the median of the averages across all training jobs.(see [MedianStoppingPolicy class reference](/python/api/azure-ai-ml/azure.ai.ml.sweep.medianstoppingpolicy)).
+The `command_job` is called as a function so we can apply the parameter expressions to the sweep inputs. The `sweep` function is then configured with `trial`, `sampling-algorithm`, `objective`, `limits`, and `compute`. The above code snippet is taken from the sample notebook [Run hyperparameter sweep on a Command or CommandComponent](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb). In this sample, the `learning_rate` and `boosting` parameters will be tuned. Early stopping of jobs will be determined by a `MedianStoppingPolicy`, which stops a job whose primary metric value is worse than the median of the averages across all training jobs (see the [MedianStoppingPolicy class reference](/python/api/azure-ai-ml/azure.ai.ml.sweep.medianstoppingpolicy)).
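Putting those pieces together, a minimal sketch of the sweep configuration looks roughly like the following; the metric name, limits, and early-termination settings are illustrative:

```python
from azure.ai.ml.sweep import MedianStoppingPolicy

# Configure the sweep over the parameterized command job.
sweep_job = command_job_for_sweep.sweep(
    compute="cpu-cluster",
    sampling_algorithm="random",
    primary_metric="test-multi_logloss",  # illustrative metric logged by the script
    goal="Minimize",
)

# Bound the total work and concurrency of the sweep (values are illustrative).
sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=7200)

# Stop trials whose primary metric is worse than the median of the running averages.
sweep_job.early_termination = MedianStoppingPolicy(
    delay_evaluation=5, evaluation_interval=2
)
```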
-To see how the parameter values are received, parsed, and passed to the training script to be tuned, refer to this [code sample](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/src/main.py)
+To see how the parameter values are received, parsed, and passed to the training script to be tuned, refer to this [code sample](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/src/main.py)
> [!Important] > Every hyperparameter sweep job restarts the training from scratch, including rebuilding the model and _all the data loaders_. You can minimize
az ml job download --name <sweep-job> --output-name model
## References -- [Hyperparameter tuning example](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/src/main.py)
+- [Hyperparameter tuning example](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/src/main.py)
- [CLI (v2) sweep job YAML schema here](reference-yaml-job-sweep.md#parameter-expressions) ## Next steps
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
ONNX is an open-source format for AI models. ONNX supports interoperability betw
- [.NET Core SDK 3.1 or greater](https://dotnet.microsoft.com/download) - Text Editor or IDE (such as [Visual Studio](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/Download))-- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
+- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
- [Netron](https://github.com/lutzroeder/netron) (optional) ## Create a C# console application
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
Title: Use AutoML to detect small objects in images
-description: Set up Azure Machine Learning automated ML to train small object detection models.
+
+description: Set up Azure Machine Learning automated ML to train small object detection models with the CLI v2 and Python SDK v2 (preview).
Last updated 10/13/2021-+ # Train a small object detection model with AutoML (preview)
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/how-to-use-automl-small-object-detect-v1.md)
+> * [v2 (current version)](how-to-use-automl-small-object-detect.md)
> [!IMPORTANT] > This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In this article, you'll learn how to train an object detection model to detect s
Typically, computer vision models for object detection work well for datasets with relatively large objects. However, due to memory and computational constraints, these models tend to under-perform when tasked to detect small objects in high-resolution images. Because high-resolution images are typically large, they are resized before input into the model, which limits their capability to detect smaller objects--relative to the initial image size.
-To help with this problem, automated ML supports tiling as part of the public preview computer vision capabilities. The tiling capability in automated ML is based on the concepts in [The Power of Tiling for Small Object Detection](https://openaccess.thecvf.com/content_CVPRW_2019/papers/UAVision/Unel_The_Power_of_Tiling_for_Small_Object_Detection_CVPRW_2019_paper.pdf).
+To help with this problem, automated ML supports tiling as part of the computer vision capabilities. The tiling capability in automated ML is based on the concepts in [The Power of Tiling for Small Object Detection](https://openaccess.thecvf.com/content_CVPRW_2019/papers/UAVision/Unel_The_Power_of_Tiling_for_Small_Object_Detection_CVPRW_2019_paper.pdf).
-When tiling, each image is divided into a grid of tiles. Adjacent tiles overlap with each other in width and height dimensions. The tiles are cropped from the original as shown in the following image.
+When tiling, each image is divided into a grid of tiles. Adjacent tiles overlap with each other in width and height dimensions. The tiles are cropped from the original as shown in the following image.
-![Tiles generation](./media/how-to-use-automl-small-object-detect/tiles-generation.png)
## Prerequisites * An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
-* This article assumes some familiarity with how to configure an [automated machine learning experiment for computer vision tasks](how-to-auto-train-image-models.md).
+* This article assumes some familiarity with how to configure an [automated machine learning experiment for computer vision tasks](how-to-auto-train-image-models.md).
## Supported models
-Small object detection using tiling is currently supported for the following models:
-
-* fasterrcnn_resnet18_fpn
-* fasterrcnn_resnet50_fpn
-* fasterrcnn_resnet34_fpn
-* fasterrcnn_resnet101_fpn
-* fasterrcnn_resnet152_fpn
-* retinanet_resnet50_fpn
+Small object detection using tiling is supported for all models that Automated ML for images supports for the object detection task.
## Enable tiling during training
-To enable tiling, you can set the `tile_grid_size` parameter to a value like (3, 2); where 3 is the number of tiles along the width dimension and 2 is the number of tiles along the height dimension. When this parameter is set to (3, 2), each image is split into a grid of 3 x 2 tiles. Each tile overlaps with the adjacent tiles, so that any objects that fall on the tile border are included completely in one of the tiles. This overlap can be controlled by the `tile_overlap_ratio` parameter, which defaults to 25%.
+To enable tiling, you can set the `tile_grid_size` parameter to a value like '3x2', where 3 is the number of tiles along the width dimension and 2 is the number of tiles along the height dimension. When this parameter is set to '3x2', each image is split into a grid of 3 x 2 tiles. Each tile overlaps with the adjacent tiles, so that any objects that fall on the tile border are included completely in one of the tiles. This overlap can be controlled by the `tile_overlap_ratio` parameter, which defaults to 25%.
+
+When tiling is enabled, the entire image and the tiles generated from it are passed through the model. These images and tiles are resized according to the `min_size` and `max_size` parameters before feeding to the model. The computation time increases proportionally because of processing this extra data.
-When tiling is enabled, the entire image and the tiles generated from it are passed through the model. These images and tiles are resized according to the `min_size` and `max_size` parameters before feeding to the model. The computation time increases proportionally because of processing this extra data.
+For example, when the `tile_grid_size` parameter is '3x2', the computation time would be approximately seven times higher than without tiling, because the six tiles plus the full image are each passed through the model.
-For example, when the `tile_grid_size` parameter is (3, 2), the computation time would be approximately seven times when compared to no tiling.
+You can specify the value for `tile_grid_size` in your training parameters as a string.
-You can specify the value for `tile_grid_size` in your hyperparameter space as a string.
+# [CLI v2](#tab/CLI-v2)
++
+```yaml
+training_parameters:
+ tile_grid_size: '3x2'
+```
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
```python
-parameter_space = {
- 'model_name': choice('fasterrcnn_resnet50_fpn'),
- 'tile_grid_size': choice('(3, 2)'),
- ...
-}
+image_object_detection_job.set_training_parameters(
+ tile_grid_size='3x2'
+)
```+ The value for the `tile_grid_size` parameter depends on the image dimensions and the size of objects within the image. For example, a larger number of tiles is helpful when there are smaller objects in the images. To choose the optimal value for this parameter for your dataset, you can use hyperparameter search. To do so, you can specify a choice of values for this parameter in your hyperparameter space.
+# [CLI v2](#tab/CLI-v2)
++
+```yaml
+search_space:
+ - model_name:
+ type: choice
+ values: ['fasterrcnn_resnet50_fpn']
+ tile_grid_size:
+ type: choice
+ values: ['2x1', '3x2', '5x3']
+```
+
+# [Python SDK v2 (preview)](#tab/SDK-v2)
+ ```python
-parameter_space = {
- 'model_name': choice('fasterrcnn_resnet50_fpn'),
- 'tile_grid_size': choice('(2, 1)', '(3, 2)', '(5, 3)'),
- ...
-}
+image_object_detection_job.extend_search_space(
+ SearchSpace(
+ model_name=Choice(['fasterrcnn_resnet50_fpn']),
+ tile_grid_size=Choice(['2x1', '3x2', '5x3'])
+ )
+)
```++ ## Tiling during inference When a model trained with tiling is deployed, tiling also occurs during inference. Automated ML uses the `tile_grid_size` value from training to generate the tiles during inference. The entire image and corresponding tiles are passed through the model, and the object proposals from them are merged to output final predictions, like in the following image.
-![Object proposals merge](./media/how-to-use-automl-small-object-detect/tiles-merge.png)
-> [!NOTE]
+> [!NOTE]
> It's possible that the same object is detected from multiple tiles, so duplicate detection is done to remove such duplicates. > > Duplicate detection is done by running NMS on the proposals from the tiles and the image. When multiple proposals overlap, the one with the highest score is picked and the others are discarded as duplicates. Two proposals are considered to be overlapping when the intersection over union (IoU) between them is greater than the `tile_predictions_nms_thresh` parameter.
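A simplified illustration of that duplicate-removal idea (not the service's implementation): pool the proposals from the tiles and the full image, keep the highest-scoring box, and drop any box whose IoU with an already kept box exceeds the threshold.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0


def merge_proposals(proposals, nms_thresh=0.25):
    """Greedy NMS over (box, score) proposals pooled from all tiles and the image."""
    kept = []
    for box, score in sorted(proposals, key=lambda p: p[1], reverse=True):
        if all(iou(box, kept_box) <= nms_thresh for kept_box, _ in kept):
            kept.append((box, score))
    return kept


# Two overlapping detections of the same object, found in different tiles:
# only the higher-scoring one survives.
print(merge_proposals([((10, 10, 50, 50), 0.9), ((12, 11, 52, 49), 0.7)]))
```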
-You also have the option to enable tiling only during inference without enabling it in training. To do so, set the `tile_grid_size` parameter only during inference, not for training.
+You also have the option to enable tiling only during inference without enabling it in training. To do so, set the `tile_grid_size` parameter only during inference, not for training.
-Doing so, may improve performance for some datasets, and won't incur the extra cost that comes with tiling at training time.
+Doing so may improve performance for some datasets, and won't incur the extra cost that comes with tiling at training time.
-## Tiling hyperparameters
+## Tiling hyperparameters
The following are the parameters you can use to control the tiling feature. | Parameter Name | Description | Default | | |-| -|
-| `tile_grid_size` | The grid size to use for tiling each image. Available for use during training, validation, and inference.<br><br>Tuple of two integers passed as a string, e.g `'(3, 2)'`<br><br> *Note: Setting this parameter increases the computation time proportionally, since all tiles and images are processed by the model.*| no default value |
+| `tile_grid_size` | The grid size to use for tiling each image. Available for use during training, validation, and inference.<br><br>Should be passed as a string in `'3x2'` format.<br><br> *Note: Setting this parameter increases the computation time proportionally, since all tiles and images are processed by the model.*| no default value |
| `tile_overlap_ratio` | Controls the overlap ratio between adjacent tiles in each dimension. When the objects that fall on the tile boundary are too large to fit completely in one of the tiles, increase the value of this parameter so that the objects fit in at least one of the tiles completely.<br> <br> Must be a float in [0, 1).| 0.25 | | `tile_predictions_nms_thresh` | The intersection over union threshold to use to do non-maximum suppression (nms) while merging predictions from tiles and image. Available during validation and inference. Change this parameter if there are multiple boxes detected per object in the final predictions. <br><br> Must be float in [0, 1]. | 0.25 | ## Example notebooks
-See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) for detailed code examples of setting up and training an object detection model.
+See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) for detailed code examples of setting up and training an object detection model.
>[!NOTE]
-> All images in this article are made available in accordance with the permitted use section of the [MIT licensing agreement](https://choosealicense.com/licenses/mit/).
+> All images in this article are made available in accordance with the permitted use section of the [MIT licensing agreement](https://choosealicense.com/licenses/mit/).
> Copyright © 2020 Roboflow, Inc. ## Next steps * Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
-* For definitions and examples of the performance charts and metrics provided for each job, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
+* For definitions and examples of the performance charts and metrics provided for each job, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md). * See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md).
-*[Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md)
+* [Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md)
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-environments.md
build = env.build_local(workspace=ws, useDocker=True, pushImageToWorkspaceAcr=Tr
### Utilize adminless Azure Container Registry (ACR) with VNet
-It is no longer required for users to have admin mode enabled on their workspace attached ACR in VNet scenarios. Ensure that the derived image build time on the compute is less than 1 hour to enable successful build. Once the image is pushed to the workspace ACR, this image can now only be accessed with a compute identity. For more information on set up, see [How to use managed identities with Azure Machine Learning](./how-to-use-managed-identities.md).
+It is no longer required for users to have admin mode enabled on their workspace attached ACR in VNet scenarios. Ensure that the derived image build time on the compute is less than 1 hour to enable successful build. Once the image is pushed to the workspace ACR, this image can now only be accessed with a compute identity. For more information on set up, see [How to use managed identities with Azure Machine Learning](./how-to-identity-based-service-authentication.md).
## Use environments for training
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
- Title: Use managed identities for access control-
-description: Learn how to use managed identities to control access to Azure resources from Azure Machine Learning workspace.
------- Previously updated : 05/06/2021---
-# Use Managed identities with Azure Machine Learning
--
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/how-to-use-managed-identities.md)
-> * [v2 (current version)](how-to-use-managed-identities.md)
-
-[Managed identities](../active-directory/managed-identities-azure-resources/overview.md) allow you to configure your workspace with the *minimum required permissions to access resources*.
-
-When configuring Azure Machine Learning workspace in trustworthy manner, it's important to ensure that different services associated with the workspace have the correct level of access. For example, during machine learning workflow the workspace needs access to Azure Container Registry (ACR) for Docker images, and storage accounts for training data.
-
-Furthermore, managed identities allow fine-grained control over permissions, for example you can grant or revoke access from specific compute resources to a specific ACR.
-
-In this article, you'll learn how to use managed identities to:
-
- * Configure and use ACR for your Azure Machine Learning workspace without having to enable admin user access to ACR.
- * Access a private ACR external to your workspace, to pull base images for training or inference.
- * Create workspace with user-assigned managed identity to access associated resources.
-
-## Prerequisites
--- An Azure Machine Learning workspace. For more information, see [Create workspace resources](quickstart-create-resources.md).-- The [Azure CLI extension for Machine Learning service](v1/reference-azure-machine-learning-cli.md)-- The [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).-- To assign roles, the login for your Azure subscription must have the [Managed Identity Operator](../role-based-access-control/built-in-roles.md#managed-identity-operator) role, or other role that grants the required actions (such as __Owner__).-- You must be familiar with creating and working with [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).-
-## Configure managed identities
-
-In some situations, it's necessary to disallow admin user access to Azure Container Registry. For example, the ACR may be shared and you need to disallow admin access by other users. Or, creating ACR with admin user enabled is disallowed by a subscription level policy.
-
-> [!IMPORTANT]
-> When using Azure Machine Learning for inference on Azure Container Instance (ACI), admin user access on ACR is __required__. Do not disable it if you plan on deploying models to ACI for inference.
-
-When you create ACR without enabling admin user access, managed identities are used to access the ACR to build and pull Docker images.
-
-You can bring your own ACR with admin user disabled when you create the workspace. Alternatively, let Azure Machine Learning create workspace ACR and disable admin user afterwards.
-
-### Bring your own ACR
-
-If ACR admin user is disallowed by subscription policy, you should first create ACR without admin user, and then associate it with the workspace. Also, if you have existing ACR with admin user disabled, you can attach it to the workspace.
-
-[Create ACR from Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) without setting ```--admin-enabled``` argument, or from Azure portal without enabling admin user. Then, when creating Azure Machine Learning workspace, specify the Azure resource ID of the ACR. The following example demonstrates creating a new Azure ML workspace that uses an existing ACR:
-
-> [!TIP]
-> To get the value for the `--container-registry` parameter, use the [az acr show](/cli/azure/acr#az-acr-show) command to show information for your ACR. The `id` field contains the resource ID for your ACR.
--
-```azurecli-interactive
-az ml workspace create -w <workspace name> \
-  -g <workspace resource group> \
-  -l <region> \
-  --container-registry /subscriptions/<subscription id>/resourceGroups/<acr resource group>/providers/Microsoft.ContainerRegistry/registries/<acr name>
-```
-
-### Let Azure Machine Learning service create workspace ACR
-
-If you don't bring your own ACR, Azure Machine Learning service will create one for you when you perform an operation that needs one. For example, submit a training job to Machine Learning Compute, build an environment, or deploy a web service endpoint. The ACR created by the workspace will have admin user enabled, and you need to disable the admin user manually.
--
-1. Create a new workspace
--
- ```azurecli-interactive
- az ml workspace show -n <my workspace> -g <my resource group>
- ```
-
-1. Perform an action that requires ACR. For example, the [tutorial on training a model](tutorial-train-deploy-notebook.md).
-
-1. Get the ACR name created by the cluster:
-
- ```azurecli-interactive
- az ml workspace show -w <my workspace> \
- -g <my resource group>
- --query containerRegistry
- ```
-
- This command returns a value similar to the following text. You only want the last portion of the text, which is the ACR instance name:
-
- ```output
    /subscriptions/<subscription id>/resourceGroups/<my resource group>/providers/Microsoft.ContainerRegistry/registries/<ACR instance name>
- ```
-
-1. Update the ACR to disable the admin user:
-
- ```azurecli-interactive
- az acr update --name <ACR instance name> --admin-enabled false
- ```
-
-### Create compute with managed identity to access Docker images for training
-
-To access the workspace ACR, create machine learning compute cluster with system-assigned managed identity enabled. You can enable the identity from Azure portal or Studio when creating compute, or from Azure CLI using the below. For more information, see [using managed identity with compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
-
-# [Python SDK](#tab/python)
-
-When creating a compute cluster with the [AmlComputeProvisioningConfiguration](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcomputeprovisioningconfiguration), use the `identity_type` parameter to set the managed identity type.
-
-# [Azure CLI](#tab/azure-cli)
--
-```azurecli-interaction
-az ml compute create --name cpucluster --type <cluster name> --identity-type systemassigned
-```
-
-# [Studio](#tab/azure-studio)
-
-For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
---
-A managed identity is automatically granted ACRPull role on workspace ACR to enable pulling Docker images for training.
-
-> [!NOTE]
-> If you create compute first, before workspace ACR has been created, you have to assign the ACRPull role manually.
-
-## Access base images from private ACR
-
-By default, Azure Machine Learning uses Docker base images that come from a public repository managed by Microsoft. It then builds your training or inference environment on those images. For more information, see [What are ML environments?](concept-environments.md).
-
-To use a custom base image internal to your enterprise, you can use managed identities to access your private ACR. There are two use cases:
-
- * Use base image for training as is.
- * Build Azure Machine Learning managed image with custom image as a base.
-
-### Pull Docker base image to machine learning compute cluster for training as is
-
-Create machine learning compute cluster with system-assigned managed identity enabled as described earlier. Then, determine the principal ID of the managed identity.
--
-```azurecli-interactive
-az ml compute show --name <cluster name> -w <workspace> -g <resource group>
-```
-
-Optionally, you can update the compute cluster to assign a user-assigned managed identity:
--
-```azurecli-interactive
-az ml compute update --name <cluster name> --user-assigned-identities <my-identity-id>
-```
--
-To allow the compute cluster to pull the base images, grant the managed service identity ACRPull role on the private ACR
--
-```azurecli-interactive
-az role assignment create --assignee <principal ID> \
-  --role acrpull \
-  --scope "/subscriptions/<subscription ID>/resourceGroups/<private ACR resource group>/providers/Microsoft.ContainerRegistry/registries/<private ACR name>"
-```
-
-Finally, when submitting a training job, specify the base image location in the [environment definition](how-to-use-environments.md#use-existing-environments).
--
-```python
-from azureml.core import Environment
-env = Environment(name="private-acr")
-env.docker.base_image = "<ACR name>.azurecr.io/<base image repository>/<base image version>"
-env.python.user_managed_dependencies = True
-```
-
-> [!IMPORTANT]
-> To ensure that the base image is pulled directly to the compute resource, set `user_managed_dependencies = True` and do not specify a Dockerfile. Otherwise Azure Machine Learning service will attempt to build a new Docker image and fail, because only the compute cluster has access to pull the base image from ACR.
-
-### Build Azure Machine Learning managed environment into base image from private ACR for training or inference
--
-In this scenario, Azure Machine Learning service builds the training or inference environment on top of a base image you supply from a private ACR. Because the image build task happens on the workspace ACR using ACR Tasks, you must perform more steps to allow access.
-
-1. Create __user-assigned managed identity__ and grant the identity ACRPull access to the __private ACR__.
-1. Grant the workspace __system-assigned managed identity__ a Managed Identity Operator role on the __user-assigned managed identity__ from the previous step. This role allows the workspace to assign the user-assigned managed identity to ACR Task for building the managed environment.
-
- 1. Obtain the principal ID of workspace system-assigned managed identity:
-
- ```azurecli-interactive
- az ml workspace show -w <workspace name> -g <resource group> --query identityPrincipalId
- ```
-
- 1. Grant the Managed Identity Operator role:
-
- ```azurecli-interactive
- az role assignment create --assignee <principal ID> --role managedidentityoperator --scope <user-assigned managed identity resource ID>
- ```
-
- The user-assigned managed identity resource ID is Azure resource ID of the user assigned identity, in the format `/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user-assigned managed identity name>`.
-
-1. Specify the external ACR and client ID of the __user-assigned managed identity__ in workspace connections by using [Workspace.set_connection method](/python/api/azureml-core/azureml.core.workspace.workspace#set-connection-name--category--target--authtype--value-):
-
- [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
-
- ```python
- workspace.set_connection(
- name="privateAcr",
- category="ACR",
- target = "<acr url>",
- authType = "ManagedIdentity",
- value={"ResourceId": "<user-assigned managed identity resource id>", "ClientId": "<user-assigned managed identity client ID>"})
- ```
-
-Once the configuration is complete, you can use the base images from private ACR when building environments for training or inference. The following code snippet demonstrates how to specify the base image ACR and image name in an environment definition:
--
-```python
-from azureml.core import Environment
-
-env = Environment(name="my-env")
-env.docker.base_image = "<acr url>/my-repo/my-image:latest"
-```
-
-Optionally, you can specify the managed identity resource URL and client ID in the environment definition itself by using [RegistryIdentity](/python/api/azureml-core/azureml.core.container_registry.registryidentity). If you use registry identity explicitly, it overrides any workspace connections specified earlier:
--
-```python
-from azureml.core.container_registry import RegistryIdentity
-
-identity = RegistryIdentity()
-identity.resource_id = "<user-assigned managed identity resource ID>"
-identity.client_id = "<user-assigned managed identity client ID>"
-env.docker.base_image_registry.registry_identity = identity
-env.docker.base_image = "my-acr.azurecr.io/my-repo/my-image:latest"
-```
-
-## Use Docker images for inference
-
-Once you've configured ACR without the admin user as described earlier, you can access Docker images for inference without admin keys from your Azure Kubernetes Service (AKS) cluster. When you create or attach AKS to the workspace, the cluster's service principal is automatically assigned ACRPull access to the workspace ACR.
-
-> [!NOTE]
-> If you bring your own AKS cluster, the cluster must have service principal enabled instead of managed identity.
-
-## Create workspace with user-assigned managed identity
-
-When creating a workspace, you can bring your own [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md) that will be used to access the associated resources: ACR, KeyVault, Storage, and App Insights.
-
-> [!IMPORTANT]
-> When creating workspace with user-assigned managed identity, you must create the associated resources yourself, and grant the managed identity roles on those resources. Use the [role assignment ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-dependencies-role-assignment) to make the assignments.
-
-Use Azure CLI or Python SDK to create the workspace. When using the CLI, specify the ID using the `--primary-user-assigned-identity` parameter. When using the SDK, use `primary_user_assigned_identity`. The following are examples of using the Azure CLI and Python to create a new workspace using these parameters:
-
-__Azure CLI__
--
-```azurecli-interactive
-az ml workspace create -w <workspace name> -g <resource group> --primary-user-assigned-identity <managed identity ARM ID>
-```
-
-__Python__
--
-```python
-from azureml.core import Workspace
-
-ws = Workspace.create(name="workspace name",
- subscription_id="subscription id",
- resource_group="resource group name",
- primary_user_assigned_identity="managed identity ARM ID")
-```
-
-You can also use [an ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/) to create a workspace with user-assigned managed identity.
-
-For a workspace with [customer-managed keys for encryption](concept-data-encryption.md), you can pass in a user-assigned managed identity to authenticate from storage to Key Vault. Use the argument __user-assigned-identity-for-cmk-encryption__ (CLI) or __user_assigned_identity_for_cmk_encryption__ (SDK) to pass in the managed identity. This managed identity can be the same as or different from the workspace's primary user-assigned managed identity.
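For illustration, a hedged SDK sketch that combines these parameters (all values are placeholders; the `cmk_keyvault` and `resource_cmk_uri` arguments are included on the assumption that the rest of the customer-managed key configuration is already in place):

```python
from azureml.core import Workspace

ws = Workspace.create(
    name="workspace name",
    subscription_id="subscription id",
    resource_group="resource group name",
    primary_user_assigned_identity="managed identity ARM ID",
    # Customer-managed key settings (assumed to be configured separately)
    cmk_keyvault="key vault ARM ID",
    resource_cmk_uri="customer-managed key URI",
    user_assigned_identity_for_cmk_encryption="managed identity ARM ID used for CMK",
)
```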
-
-## Next steps
-
-* Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md)
-* Learn about [identity-based data access](how-to-identity-based-data-access.md)
-* Learn about [managed identities on compute cluster](how-to-create-attach-compute-cluster.md).
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
mlflow.set_registry_uri(azureml_mlflow_uri)
> [!NOTE] > The value of `azureml_mlflow_uri` was obtained in the same way demonstrated in [Set MLflow Tracking to only track in your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace).
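For context, a minimal sketch of how `azureml_mlflow_uri` can be obtained before the call above, assuming the `azureml-core` `Workspace` client is available:

```python
import mlflow
from azureml.core import Workspace

# The workspace MLflow tracking URI also serves as the registry URI for Azure ML
ws = Workspace.from_config()
azureml_mlflow_uri = ws.get_mlflow_tracking_uri()
mlflow.set_registry_uri(azureml_mlflow_uri)
```

With the registry URI set, model registration calls target the Azure Machine Learning registry while experiment tracking continues to use whatever tracking URI is configured.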
-For a complete example about this scenario please check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb).
+For a complete example about this scenario please check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb).
## Deploying and consuming models registered in Azure Machine Learning
Models registered in Azure Machine Learning Service using MLflow can be consumed
You can leverage the `azureml-mlflow` plugin to deploy a model to your Azure Machine Learning workspace. Check the [How to deploy MLflow models](how-to-deploy-mlflow-models.md) page for complete details about how to deploy models to the different targets. > [!IMPORTANT]
-> Models need to be registered in Azure Machine Learning registry in order to deploy them. If your models happen to be registered in the MLflow instance inside Azure Databricks, you will have to register them again in Azure Machine Learning. If this is you case, please check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb)
+> Models need to be registered in the Azure Machine Learning registry in order to deploy them. If your models happen to be registered in the MLflow instance inside Azure Databricks, you will have to register them again in Azure Machine Learning. If this is your case, please check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb).
### Deploy models to ADB for batch scoring using UDFs
If you don't plan to use the logged metrics and artifacts in your workspace, the
## Example notebooks
-The [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb) demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also includes how to handle cases where you also want to track the experiments and models with the MLflow instance in Azure Databricks and leverage Azure ML for deployment.
+The [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb) demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also includes how to handle cases where you also want to track the experiments and models with the MLflow instance in Azure Databricks and leverage Azure ML for deployment.
## Next steps * [Deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md).
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
To register and view a model from a job, use the following steps:
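For reference, one way to register a job's logged model is with the MLflow client; a hedged sketch (the run ID, the `model` artifact path, and the registered model name below are illustrative):

```python
import mlflow

run_id = "<run ID of the completed job>"

# Register the model artifact that the job logged under the "model" path
registered_model = mlflow.register_model(
    model_uri=f"runs:/{run_id}/model",
    name="my-registered-model",
)
print(registered_model.name, registered_model.version)
```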
## Example files
-[Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/main/notebooks/using-mlflow)
+[Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/notebooks/using-mlflow)
## Limitations
machine-learning How To Use Sweep In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-sweep-in-pipeline.md
In Azure Machine Learning Python SDK v2, you can enable hyperparameter tuning fo
Below code snippet shows how to enable sweep for `train_model`.
-[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep/pipeline_with_hyperparameter_sweep.ipynb?name=enable-sweep)]
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep/pipeline_with_hyperparameter_sweep.ipynb?name=enable-sweep)]
We first load `train_component_func`, which is defined in the `train.yml` file. When creating `train_model`, we add `c_value`, `kernel`, and `coef0` to the search space (lines 15-17). Lines 30-35 define the primary metric, sampling algorithm, and so on.
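For orientation, a rough sketch of what such a sweep-enabled step can look like in SDK v2; the component data input, metric name, and compute target here are illustrative rather than the exact notebook values, and the code is assumed to run inside a `@pipeline`-decorated function:

```python
from azure.ai.ml import load_component
from azure.ai.ml.sweep import Choice, Uniform

# Load the command component defined in train.yml
train_component_func = load_component(source="./train.yml")

# Build the trial step; c_value, kernel, and coef0 are expressed as search-space inputs
train_model = train_component_func(
    data=pipeline_job_input,  # assumed pipeline input defined elsewhere
    c_value=Uniform(min_value=0.5, max_value=0.9),
    kernel=Choice(["rbf", "linear", "poly"]),
    coef0=Uniform(min_value=0.1, max_value=1.0),
)

# Turn the step into a sweep node with a primary metric and sampling algorithm
sweep_step = train_model.sweep(
    primary_metric="training_f1_score",  # illustrative metric name
    goal="maximize",
    sampling_algorithm="random",
)
sweep_step.compute = "cpu-cluster"  # illustrative compute target
sweep_step.set_limits(max_total_trials=20, max_concurrent_trials=10, timeout=7200)
```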
If a child job failed, select the name of that child job to enter detail page o
## Sample notebooks -- [Build pipeline with sweep node](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep/pipeline_with_hyperparameter_sweep.ipynb)-- [Run hyperparameter sweep on a command job](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb)
+- [Build pipeline with sweep node](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep/pipeline_with_hyperparameter_sweep.ipynb)
+- [Run hyperparameter sweep on a command job](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb)
## Next steps
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
-- Previously updated : 08/03/2021+++ Last updated : 09/22/2022 adobe-target: true
Data scientists and ML engineers will find tools to accelerate and automate thei
Enterprises working in the Microsoft Azure cloud will find familiar security and role-based access control (RBAC) for infrastructure. You can set up a project to deny access to protected data and select operations.
-### Collaboration for machine learning teams
+## Productivity for everyone on the team
-Machine learning projects often require a team with varied skillsets to build and maintain. Azure Machine Learning has tools that help enable collaboration, such as:
+Machine learning projects often require a team with a varied skill set to build and maintain them. Azure Machine Learning has tools that help enable you to:
-- Shared notebooks, compute resources, data, and environments-- Tracking and auditability that shows who made changes and when-- Asset versioning
+* Collaborate with your team via shared notebooks, compute resources, data, and environments
-### Tools for developers
+* Develop models for fairness and explainability, with tracking and auditability to fulfill lineage and audit compliance requirements
-Developers find familiar interfaces in Azure Machine Learning, such as:
+* Deploy ML models quickly and easily at scale, and manage and govern them efficiently with MLOps
-- [Python SDK](/python/api/overview/azure/ml/)-- [Azure Resource Manager REST APIs (preview)](/rest/api/azureml/)-- [CLI v2 ](/cli/azure/ml)
+* Run machine learning workloads anywhere with built-in governance, security, and compliance
-### Studio UI
+### Cross-compatible platform tools that meet your needs
-The [Azure Machine Learning studio](https://ml.azure.com) is a graphical user interface for a project workspace. In the studio, you can:
+Anyone on an ML team can use their preferred tools to get the job done. Whether you're running rapid experiments, hyperparameter-tuning, building pipelines, or managing inferences, you can use familiar interfaces including:
-- View runs, metrics, logs, outputs, and so on.-- Author and edit notebooks and files.-- Manage common assets, such as
- - Data credentials
- - Compute
- - Environments
-- Visualize run metrics, results, and reports.-- Visualize pipelines authored through developer interfaces.-- Author AutoML jobs.
+* [Azure Machine Learning studio](https://ml.azure.com)
+* [Python SDK](https://aka.ms/sdk-v2-install)
+* [CLI v2](how-to-configure-cli.md)
+* [Azure Resource Manager REST APIs (preview)](/rest/api/azureml/)
-Plus, the designer has a drag-and-drop interface where you can train and deploy models.
+As you refine the model and collaborate with others throughout the rest of the machine learning development cycle, you can share and find assets, resources, and metrics for your projects in the Azure Machine Learning studio UI.
+
+### Studio
+
+The [Azure Machine Learning studio](https://ml.azure.com) offers multiple authoring experiences depending on the type of project and the level of your past ML experience, without having to install anything.
+
+* Notebooks: write and run your own code in managed Jupyter Notebook servers that are directly integrated in the studio.
+
+* Visualize run metrics: analyze and optimize your experiments with visualization.
+
+ :::image type="content" source="media/overview-what-is-azure-machine-learning/metrics.png" alt-text="Screenshot of metrics for a training run.":::
+
+* Azure Machine Learning designer: use the designer to train and deploy machine learning models without writing any code. Drag and drop datasets and components to create ML pipelines. Try out the [designer tutorial](tutorial-designer-automobile-price-train-score.md).
+
+* Automated machine learning UI: Learn how to create [automated ML experiments](tutorial-first-experiment-automated-ml.md) with an easy-to-use interface.
+
+* Data labeling: Use Azure Machine Learning data labeling to efficiently coordinate [image labeling](how-to-create-image-labeling-projects.md) or [text labeling](how-to-create-text-labeling-projects.md) projects.
-If you're a ML Studio (classic) user, [learn about Studio (classic) deprecation and the difference between it and Azure Machine Learning studio](overview-what-is-machine-learning-studio.md#ml-studio-classic-vs-azure-machine-learning-studio).
## Enterprise-readiness and security
Azure Machine Learning integrates with the Azure cloud platform to add security
Security integrations include: -- Azure Virtual Networks (VNets) with network security groups -- Azure Key Vault where you can save security secrets, such as access information for storage accounts-- Azure Container Registry set up behind a VNet
+* Azure Virtual Networks (VNets) with network security groups
+* Azure Key Vault where you can save security secrets, such as access information for storage accounts
+* Azure Container Registry set up behind a VNet
See [Tutorial: Set up a secure workspace](tutorial-create-secure-workspace.md).
See [Tutorial: Set up a secure workspace](tutorial-create-secure-workspace.md).
Other integrations with Azure services support a machine learning project from end-to-end. They include: -- Azure Synapse Analytics to process and stream data with Spark-- Azure Arc, where you can run Azure services in a Kubernetes environment-- Storage and database options, such as Azure SQL Database, Azure Storage Blobs, and so on-- Azure App Service allowing you to deploy and manage ML-powered apps
+* Azure Synapse Analytics to process and stream data with Spark
+* Azure Arc, where you can run Azure services in a Kubernetes environment
+* Storage and database options, such as Azure SQL Database, Azure Storage Blobs, and so on
+* Azure App Service allowing you to deploy and manage ML-powered apps
> [!Important] > Azure Machine Learning doesn't store or process your data outside of the region where you deploy. > - ## Machine learning project workflow Typically models are developed as part of a project with an objective and goals. Projects often involve more than one person. When experimenting with data, algorithms, and models, development is iterative.
In Azure Machine Learning, you can run your training script in the cloud or buil
Data scientists can use models in Azure Machine Learning that they've created in common Python frameworks, such as: -- PyTorch-- TensorFlow-- scikit-learn-- XGBoost-- LightGBM
+* PyTorch
+* TensorFlow
+* scikit-learn
+* XGBoost
+* LightGBM
Other languages and frameworks are supported as well, including: -- R-- .NET
+* R
+* .NET
See [Open-source integration with Azure Machine Learning](concept-open-source.md).
Efficiency of training for deep learning and sometimes classical machine learnin
Supported via Azure ML Kubernetes and Azure ML compute clusters: -- PyTorch-- TensorFlow-- MPI
+* PyTorch
+* TensorFlow
+* MPI
The MPI distribution can be used for Horovod or custom multinode logic. Additionally, Apache Spark is supported via Azure Synapse Analytics Spark clusters (preview).
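For illustration, a hedged SDK v2 sketch of submitting a distributed PyTorch command job to a compute cluster (the source folder, script, environment, and cluster name are placeholders):

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# A command job spread across two nodes with four processes per node
job = command(
    code="./src",
    command="python train.py",
    environment="<curated or custom GPU environment>",
    compute="gpu-cluster",
    instance_count=2,
    distribution={"type": "pytorch", "process_count_per_instance": 4},
)
ml_client.jobs.create_or_update(job)
```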
Scaling a machine learning project may require scaling embarrassingly parallel m
## Deploy models
-To bring a model into production, it is deployed. Azure Machine Learning's managed endpoints abstract the required infrastructure for both batch or real-time (online) model scoring (inferencing).
+To bring a model into production, it's deployed. Azure Machine Learning's managed endpoints abstract the required infrastructure for both batch or real-time (online) model scoring (inferencing).
### Real-time and batch scoring (inferencing)
To bring a model into production, it is deployed. Azure Machine Learning's manag
*Real-time scoring*, or *online inferencing*, involves invoking an endpoint with one or more model deployments and receiving a response in near real time via HTTPS. Traffic can be split across multiple deployments, allowing you to test new model versions by diverting some traffic initially and increasing it once confidence in the new model is established. See:
+ * [Deploy a model with a real-time managed endpoint](how-to-deploy-managed-online-endpoints.md)
+ * [Use batch endpoints for scoring](how-to-use-batch-endpoint.md)
## MLOps: DevOps for machine learning
DevOps for machine learning models, often called MLOps, is a process for develop
### ML model lifecycle
-![Machine learning model lifecycle - MLOps](./media/overview-what-is-azure-machine-learning/model-lifecycle.png)
+![Machine learning model lifecycle - MLOps](./media/overview-what-is-azure-machine-learning/model-lifecycle.png)
Learn more about [MLOps in Azure Machine Learning](concept-model-management-and-deployment.md).
Azure Machine Learning is built with the model lifecycle in mind. You can audit
Some key features enabling MLOps include: -- `git` integration-- MLflow integration-- Machine learning pipeline scheduling-- Azure Event Grid integration for custom triggers-- Easy to use with CI/CD tools like GitHub Actions or Azure DevOps
+* `git` integration
+* MLflow integration
+* Machine learning pipeline scheduling
+* Azure Event Grid integration for custom triggers
+* Easy to use with CI/CD tools like GitHub Actions or Azure DevOps
Also, Azure Machine Learning includes features for monitoring and auditing:-- Job artifacts, such as code snapshots, logs, and other outputs-- Lineage between jobs and assets, such as containers, data, and compute resources
+* Job artifacts, such as code snapshots, logs, and other outputs
+* Lineage between jobs and assets, such as containers, data, and compute resources
## Next steps Start using Azure Machine Learning:-- [Set up an Azure Machine Learning workspace](quickstart-create-resources.md)-- [Tutorial: Build a first machine learning project](tutorial-1st-experiment-hello-world.md)-- [Preview: Run model training jobs with the v2 CLI](how-to-train-cli.md)
+* [Set up an Azure Machine Learning workspace](quickstart-create-resources.md)
+* [Tutorial: Build a first machine learning project](tutorial-1st-experiment-hello-world.md)
+* [Preview: Run model training jobs with the v2 CLI](how-to-train-cli.md)
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
This table summarizes hyperparameters specific to the `yolov5` algorithm.
| `multi_scale` | Enable multi-scale image by varying image size by +/- 50% <br> Must be 0 or 1. <br> <br> *Note: training run may get into CUDA OOM if no sufficient GPU memory*. | 0 | | `box_score_threshold` | During inference, only return proposals with a score greater than `box_score_threshold`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 | | `nms_iou_threshold` | IOU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
+| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect.md) logic*<br> Should be passed as a string in '3x2' format. Example: --tile_grid_size '3x2' | No Default |
+| `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 |
+| `tile_predictions_nms_threshold` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/ inference. <br> Must be float in the range of [0, 1] | 0.25 |
This table summarizes hyperparameters specific to the `maskrcnn_*` for instance segmentation during inference.
The following table describes the hyperparameters that are model agnostic.
## Image classification (multi-class and multi-label) specific hyperparameters The following table summarizes hyperparameters for image classification (multi-class and multi-label) tasks.
-
+ | Parameter name | Description | Default | | - |-|--| | `weighted_loss` | <li> 0 for no weighted loss. <li> 1 for weighted loss with sqrt.(class_weights) <li> 2 for weighted loss with class_weights. <li> Must be 0 or 1 or 2. | 0 | | `validation_resize_size` | <li> Image size to which to resize before cropping for validation dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> Training run may get into CUDA OOM if the size is too big*. | 256  | | `validation_crop_size` | <li> Image crop size that's input to your neural network for validation dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `validation_crop_size` and `training_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
-| `training_crop_size` | <li> Image crop size that's input to your neural network for train dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `validation_crop_size` and `training_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
+| `training_crop_size` | <li> Image crop size that's input to your neural network for train dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `validation_crop_size` and `training_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
## Object detection and instance segmentation task specific hyperparameters
The following hyperparameters are for object detection and instance segmentation
| `box_score_threshold` | During inference, only return proposals with a classification score greater than `box_score_threshold`. <br> Must be a float in the range [0, 1].| 0.3 | | `nms_iou_threshold` | IOU (intersection over union) threshold used in non-maximum suppression (NMS) for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 | | `box_detections_per_image` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 |
-| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default |
+| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect.md) logic*<br> Should be passed as a string in '3x2' format. Example: --tile_grid_size '3x2' | No Default |
| `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 | | `tile_predictions_nms_threshold` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/ inference. <br> Must be float in the range of [0, 1] | 0.25 |
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
You'll write code using the Python SDK in this tutorial and learn the following
* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
-* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) section of the notebook.
+* Download and unzip the [**odFridgeObjects.zip**](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) section of the notebook.
# [Azure CLI](#tab/cli)
This tutorial is also available in the [azureml-examples repository on GitHub](h
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items). If you wish to run it in your own local environment, setup using the following instructions
+This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items). If you wish to run it in your own local environment, set up your environment using the following instructions:
* Use the following commands to install Azure ML Python SDK v2: * Uninstall previous preview version:
az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE]
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
The next step is to create an `MLTable` from your data in JSONL format, as shown below. An MLTable packages your data into a consumable object for training. # [Azure CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
validation_data:
You can create data inputs from training and validation MLTable with the following code:
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
primary_metric: mean_average_precision
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=image-object-detection-configuration)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=image-object-detection-configuration)]
In your AutoML job, you can specify the model algorithms by using `model_name` p
In this example, we will train an object detection model with `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains thousands of labeled images with over 80 label categories.
+### Job Limits
+
+You can control the resources spent on your AutoML Image training job by specifying `timeout_minutes`, `max_trials`, and `max_concurrent_trials` for the job in the limit settings. Please refer to the [detailed description of the Job Limits parameters](./how-to-auto-train-image-models.md#job-limits).
+# [Azure CLI](#tab/cli)
++
+```yaml
+limits:
+ timeout_minutes: 60
+ max_trials: 10
+ max_concurrent_trials: 2
+```
+
+# [Python SDK](#tab/python)
+
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]
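The referenced notebook cell applies the same limits; a hedged sketch of the pattern, assuming an AutoML image job object (here `image_object_detection_job`) created earlier:

```python
# Cap total runtime, number of trials, and trial concurrency for the job
image_object_detection_job.set_limits(
    timeout_minutes=60,
    max_trials=10,
    max_concurrent_trials=2,
)
```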
+++ ### Hyperparameter sweeping for image tasks You can perform a hyperparameter sweep over a defined search space to find the optimal model. The following code defines the search space in preparation for the hyperparameter sweep for each defined algorithm, `yolov5` and `fasterrcnn_resnet50_fpn`. In the search space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for each algorithm.
-For the tuning settings, use random sampling to pick samples from this parameter space by using the `random` sampling_algorithm. Doing so, tells automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up using four nodes. The more parameters the search space has, the more trials you need to find optimal models.
+For the tuning settings, use random sampling to pick samples from this parameter space by using the `random` sampling_algorithm. The job limits configured above tell automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up using four nodes. The more parameters the search space has, the more trials you need to find optimal models.
The Bandit early termination policy is also used. This policy terminates poorly performing configurations, that is, configurations that are not within 20% slack of the best performing configuration, which significantly saves compute resources.
The Bandit early termination policy is also used. This policy terminates poor pe
```yaml sweep:
- limits:
- max_trials: 10
- max_concurrent_trials: 2
sampling_algorithm: random early_termination: type: bandit
sweep:
```yaml search_space:
- - model_name: "yolov5"
- learning_rate: "uniform(0.0001, 0.01)"
- model_size: "choice('small', 'medium')"
- - model_name: "fasterrcnn_resnet50_fpn"
- learning_rate: "uniform(0.0001, 0.001)"
- optimizer: "choice('sgd', 'adam', 'adamw')"
- min_size: "choice(600, 800)"
+ - model_name:
+ type: choice
+ values: [yolov5]
+ learning_rate:
+ type: uniform
+ min_value: 0.0001
+ max_value: 0.01
+ model_size:
+ type: choice
+ values: [small, medium]
+
+ - model_name:
+ type: choice
+ values: [fasterrcnn_resnet50_fpn]
+ learning_rate:
+ type: uniform
+ min_value: 0.0001
+ max_value: 0.001
+ optimizer:
+ type: choice
+ values: [sgd, adam, adamw]
+ min_size:
+ type: choice
+ values: [600, 800]
``` # [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=search-space-settings)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=search-space-settings)]
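For orientation only, a rough sketch of the SDK v2 pattern behind those cells; the `extend_search_space` and `set_sweep` helpers and the exact values below are assumptions rather than a copy of the notebook:

```python
from azure.ai.ml.automl import SearchSpace
from azure.ai.ml.sweep import BanditPolicy, Choice, Uniform

# One conditional sub-space per model algorithm
image_object_detection_job.extend_search_space(
    [
        SearchSpace(
            model_name=Choice(["yolov5"]),
            learning_rate=Uniform(0.0001, 0.01),
            model_size=Choice(["small", "medium"]),
        ),
        SearchSpace(
            model_name=Choice(["fasterrcnn_resnet50_fpn"]),
            learning_rate=Uniform(0.0001, 0.001),
            optimizer=Choice(["sgd", "adam", "adamw"]),
            min_size=Choice([600, 800]),
        ),
    ]
)

# Random sampling with Bandit early termination
image_object_detection_job.set_sweep(
    sampling_algorithm="random",
    early_termination=BanditPolicy(evaluation_interval=2, slack_factor=0.2, delay_evaluation=6),
)
```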
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZUR
When you've configured your AutoML Job to the desired settings, you can submit the job.
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
CLI example not available, please use Python SDK.
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=best_run)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=best_run)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_local_dir)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_local_dir)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=download_model)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=download_model)]
### Register the model
Register the model either using the azureml path or your locally downloaded path
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=register_model)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=register_model)]
After you register the model you want to use, you can deploy it by using a [managed online endpoint](how-to-deploy-managed-online-endpoint-sdk-v2.md).
auth_mode: key
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=endpoint)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=endpoint)]
### Create the endpoint
az ml online-endpoint create --file .\create_endpoint.yml --workspace-name [YOUR
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_endpoint)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_endpoint)]
### Configure online deployment
readiness_probe:
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=deploy)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=deploy)]
az ml online-deployment create --file .\create_deployment.yml --workspace-name [
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_deploy)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_deploy)]
### Update traffic:
az ml online-endpoint update --name 'od-fridge-items-endpoint' --traffic 'od-fri
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=update_traffic)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=update_traffic)]
## Test the deployment
CLI example not available, please use Python SDK.
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_inference_request)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_inference_request)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=dump_inference_request)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=dump_inference_request)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=invoke_inference)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=invoke_inference)]
## Visualize detections
CLI example not available, please use Python SDK.
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=visualize_detections)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=visualize_detections)]
## Clean up resources
In this automated machine learning tutorial, you did the following tasks:
# [Python SDK](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
- * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs). Please check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
+ * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs). Please check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
Also try automated machine learning for these other model types:
* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
-* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file
+* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file
## Sign in to the studio
Before you configure your experiment, upload your data file to your workspace in
1. Select **Upload files** from the **Upload** drop-down.
- 1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv).
+ 1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv).
1. Select **Next**
machine-learning Concept Automated Ml V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml-v1.md
Classification is a common machine learning task. Classification is a type of su
The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example at [Create a classification model with automated ML (v1)](../tutorial-first-experiment-automated-ml.md).
-See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
+See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
### Regression
Similar to classification, regression tasks are also a common supervised learnin
Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning (v1)](how-to-auto-train-models-v1.md).
-See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
+See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
### Time-series forecasting
Advanced forecasting configuration includes:
* rolling window aggregate features
-See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
+See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
### Computer vision (preview)
See the [how-to (v1)](how-to-configure-auto-train-v1.md#ensemble-configuration)
With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](../concept-onnx.md).
-See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms).
+See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms).
The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](../how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html).
How-to articles provide additional detail into what functionality automated ML o
### Jupyter notebook samples
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml).
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml).
### Python SDK reference
machine-learning How To Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models-v1.md
Automated ML supports model training for computer vision tasks like image classi
To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md).
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
> [!NOTE] > Only Python 3.6 and 3.7 are compatible with automated ML support for computer vision tasks.
automl_image_config = AutoMLImageConfig(training_data=training_dataset)
Provide a [compute target](../v1/concept-azure-machine-learning-architecture.md#compute-targets) for automated ML to conduct model training. Automated ML models for computer vision tasks require GPU SKUs and support NC and ND families. We recommend the NCsv3-series (with v100 GPUs) for faster training. A compute target with a multi-GPU VM SKU leverages multiple GPUs to also speed up training. Additionally, when you set up a compute target with multiple nodes you can conduct faster model training through parallelism when tuning hyperparameters for your model.
+> [!NOTE]
+> If you are using a [compute instance](../concept-compute-instance.md) as your compute target, please make sure that multiple AutoML jobs are not run at the same time. Also, please make sure that `max_concurrent_iterations` is set to 1 in your [experiment resources](#resources-for-the-sweep).
+ The compute target is a required parameter and is passed in using the `compute_target` parameter of the `AutoMLImageConfig`. For example: ```python
When sweeping hyperparameters, you need to specify the sampling method to use fo
* [Bayesian sampling](../how-to-tune-hyperparameters.md#bayesian-sampling) > [!NOTE]
-> Currently only random sampling supports conditional hyperparameter spaces.
+> Currently only random and grid sampling support conditional hyperparameter spaces.
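For example, a hedged SDK v1 sketch of a conditional, per-model search space sampled randomly (the hyperparameter names and ranges below are illustrative):

```python
from azureml.train.hyperdrive import RandomParameterSampling, choice, uniform

# Each nested dictionary is a conditional sub-space tied to one model
parameter_space = {
    "model": choice(
        {
            "model_name": choice("yolov5"),
            "learning_rate": uniform(0.0001, 0.01),
            "model_size": choice("small", "medium"),
        },
        {
            "model_name": choice("fasterrcnn_resnet50_fpn"),
            "learning_rate": uniform(0.0001, 0.001),
            "optimizer": choice("sgd", "adam", "adamw"),
        },
    )
}

# This sampling object is then supplied to the AutoML image configuration's
# hyperparameter sampling setting.
hyperparameter_sampling = RandomParameterSampling(parameter_space)
```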
### Early termination policies
For a detailed description on task specific hyperparameters, please refer to [Hy
If you want to use tiling, and want to control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio` and `tile_predictions_nms_thresh`. For more details on these parameters please check [Train a small object detection model using AutoML](../how-to-use-automl-small-object-detect.md). ## Example notebooks
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
## Next steps
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
If you don't have an Azure subscription, create a free account before you begi
This article is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](../how-to-configure-environment.md#local). To get the required packages,
-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
* Run `pip install azureml-opendatasets azureml-widgets` to get the required packages. ## Download and prepare data
machine-learning How To Auto Train Nlp Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models-v1.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](../
To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](../../../includes/machine-learning-automl-sdk-version.md)]
Doing so schedules distributed training of the NLP models and automatically sca
## Example notebooks See the sample notebooks for detailed code examples for each NLP task.
-* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
* [Multi-label text classification](
-https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
-* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
+https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
+* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
## Next steps + Learn more about [how and where to deploy a model](../how-to-deploy-managed-online-endpoints.md).
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
For this article you need,
To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is preconfigured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](../../../includes/machine-learning-automl-sdk-version.md)]
Use&nbsp;data&nbsp;streaming&nbsp;algorithms <br> [(studio UI experiments)](../h
Next determine where the model will be trained. An automated ML training experiment can run on the following compute options.
- * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time, the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
+ * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short training runs (that is, seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time; the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
 * **Choose a remote ML compute cluster**: If you're training with larger datasets, as in production training that creates models needing longer training runs, a remote compute cluster provides much better end-to-end time performance because `AutoML` parallelizes training across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs aren't yet up and running. [Azure Machine Learning Managed Compute](../concept-compute-target.md#amlcompute) is a managed service that enables training machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target. A minimal provisioning sketch is shown after this list.
- * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](../how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
+ * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](../how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
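For the remote cluster option, a minimal provisioning sketch might look like the following. The cluster name, VM size, and node counts are placeholders, and `ws` is assumed to be an existing `Workspace` object.

```python
from azureml.core.compute import AmlCompute, ComputeTarget

# Placeholders: choose a VM size and node counts that match your workload.
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS12_V2",
    min_nodes=0,
    max_nodes=4,
)
compute_target = ComputeTarget.create(ws, "cpu-cluster", compute_config)
compute_target.wait_for_completion(show_output=True)
```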
Consider these factors when choosing your compute target:
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md
For more information on `az ml model register`, see the [reference documentation
You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file on your local machine. <!-- pyhton nb call -->
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)]
To include multiple files in the model registration, set `model_path` to the path of a folder that contains the files.
The two things you need to accomplish in your entry script are:
For your initial deployment, use a dummy entry script that prints the data it receives. Save this file as `echo_score.py` inside of a directory called `source_dir`. This dummy script returns the data you send to it, so it doesn't use the model. But it is useful for testing that the scoring script is running.
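A minimal sketch of such an echo script, assuming the standard `init()`/`run()` entry-script contract, could look like this; it isn't necessarily the exact file the tutorial ships.

```python
import json

def init():
    # Nothing to load; this dummy script doesn't use a model.
    pass

def run(raw_data):
    # Echo back whatever was posted so you can confirm the service is reachable.
    return json.loads(raw_data)
```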
You can use any [Azure Machine Learning inference curated environments](../conce
A minimal inference configuration can be written as: Save this file with the name `dummyinferenceconfig.json`.
Save this file with the name `dummyinferenceconfig.json`.
The following example demonstrates how to create a minimal environment with no pip dependencies, using the dummy scoring script you defined above.
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)]
For more information on environments, see [Create and manage environments for training and deployment](../how-to-use-environments.md).
For more information, see the [deployment schema](reference-azure-machine-learni
The following Python demonstrates how to create a local deployment configuration:
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)]
az ml model deploy -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-code)]
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-print-logs)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
curl -v -X POST -H "content-type:application/json" \
# [Python SDK](#tab/python) <!-- python nb call -->
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)]
curl -v -X POST -H "content-type:application/json" \
Now it's time to actually load your model. First, modify your entry script: Save this file as `score.py` inside of `source_dir`.
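As a hedged sketch of what `score.py` might contain, the following assumes a scikit-learn model saved as `model.pkl` inside the registered model folder; swap in your own file name and framework loader.

```python
import json
import os

import joblib  # assumes a scikit-learn model serialized with joblib

def init():
    global model
    # AZUREML_MODEL_DIR points to the folder where the registered model files were downloaded.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl")
    model = joblib.load(model_path)

def run(raw_data):
    data = json.loads(raw_data)["data"]
    return model.predict(data).tolist()
```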
Notice the use of the `AZUREML_MODEL_DIR` environment variable to locate your re
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)] Save this file as `inferenceconfig.json`
az ml model deploy -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)]
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
curl -v -X POST -H "content-type:application/json" \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)]
Change your deploy configuration to correspond to the compute target you've chos
The options available for a deployment configuration differ depending on the compute target you choose. Save this file as `re-deploymentconfig.json`.
For more information, see [this reference](reference-azure-machine-learning-cli.
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)]
az ml service get-logs -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)]
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
For more information, see the documentation for [Model.deploy()](/python/api/azu
When you deploy remotely, you may have key authentication enabled. The example below shows how to get your service key with Python in order to make an inference request.
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)]
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)]
The following table describes the different service states:
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)]
```azurecli-interactive az ml service delete -n myservice
Read more about [deleting a webservice](/cli/azure/ml(v1)/computetarget/create#a
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)]
+[!Notebook-python[] (~/azureml-examples-v2samplesreorg/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)]
To delete a deployed web service, use `service.delete()`. To delete a registered model, use `model.delete()`.
machine-learning How To Inference Onnx Automl Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models-v1.md
arguments = ['--model_name', 'maskrcnn_resnet50_fpn', # enter the maskrcnn mode
-Download and keep the `ONNX_batch_model_generator_automl_for_images.py` file in the current directory and submit the script. Use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit the script `ONNX_batch_model_generator_automl_for_images.py` available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory.
+Download and keep the `ONNX_batch_model_generator_automl_for_images.py` file in the current directory and submit the script. Use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit the script `ONNX_batch_model_generator_automl_for_images.py` available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory.
```python script_run_config = ScriptRunConfig(source_directory='.', script='ONNX_batch_model_generator_automl_for_images.py',
Every ONNX model has a predefined set of input and output formats.
# [Multi-class image classification](#tab/multi-class)
-This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
+This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
### Input format
The output is an array of logits for all the classes/labels.
# [Multi-label image classification](#tab/multi-label)
-This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
+This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
### Input format
The output is an array of logits for all the classes/labels.
# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
The following table describes boxes, labels and scores returned for each sample
# [Object detection with YOLO](#tab/object-detect-yolo)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
Each cell in the list indicates box detections of a sample with shape `(n_boxes,
# [Instance segmentation](#tab/instance-segmentation)
-For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
+For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
>[!IMPORTANT] > Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
batch, channel, height_onnx, width_onnx ```
-For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
```python import glob
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-move-data-in-out-of-pipelines.md
step1_output_ds = step1_output_data.register_on_complete(name='processed_data',
Azure does not automatically delete intermediate data written with `OutputFileDatasetConfig`. To avoid storage charges for large amounts of unneeded data, take one of the following actions:
-* Programmatically delete intermediate data at the end of a pipeline job, when it is no longer needed
-* Use blob storage with a short-term storage policy for intermediate data (see [Optimize costs by automating Azure Blob Storage access tiers](/azure/storage/blobs/lifecycle-management-overview))
-* Regularly review and delete no-longer-needed data
+> [!CAUTION]
+> Only delete intermediate data 30 days or more after the data's last change date. Deleting the data earlier could cause the pipeline run to fail, because the pipeline assumes the intermediate data exists within a 30-day period for reuse.
+
+* Programmatically delete intermediate data at the end of a pipeline job, when it's no longer needed (see the sketch after this list).
+* Use blob storage with a short-term storage policy for intermediate data (see [Optimize costs by automating Azure Blob Storage access tiers](/azure/storage/blobs/lifecycle-management-overview)). This policy can only be set on a workspace's non-default datastore. Use `OutputFileDatasetConfig` to export intermediate data to another datastore that isn't the default.
+ ```Python
+ # Get adls gen 2 datastore already registered with the workspace
+ datastore = workspace.datastores['my_adlsgen2']
+ step1_output_data = OutputFileDatasetConfig(name="processed_data", destination=(datastore, "mypath/{run-id}/{output-name}")).as_upload()
+ ```
+* Regularly review and delete no-longer-needed data.
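As a hedged illustration of the first option, the sketch below deletes blobs under a known prefix from the datastore's container with the `azure-storage-blob` package. The connection string, container name, and prefix are placeholders for your own values, and the 30-day caution above still applies.

```python
from azure.storage.blob import ContainerClient

# Placeholders: point these at the container and prefix your pipeline wrote to.
container = ContainerClient.from_connection_string(
    conn_str="<storage-connection-string>",
    container_name="<datastore-container>",
)
for blob in container.list_blobs(name_starts_with="mypath/<run-id>/processed_data/"):
    container.delete_blob(blob.name)
```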
For more information, see [Plan and manage costs for Azure Machine Learning](../concept-plan-manage-cost.md).
machine-learning How To Prepare Datasets For Automl Images V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images-v1.md
If you already have a data labeling project and you want to use that data, you c
## Use conversion scripts
-If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml).
+If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml).
If your data doesn't follow any of the previously mentioned formats, you can use your own script to generate JSON Lines files based on schemas defined in [Schema for JSONL files for AutoML image experiments](../reference-automl-images-schema.md).
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-web-service.md
You can enable TLS either with Microsoft certificate or a custom certificate pur
> [!IMPORTANT] > When you use a certificate from Microsoft, you don't need to purchase your own certificate or domain name.
-* **When you use a custom certificate that you purchased**, you use the *ssl_cert_pem_file*, *ssl_key_pem_file*, and *ssl_cname* parameters. The following example demonstrates how to use .pem files to create a configuration that uses a TLS/SSL certificate that you purchased:
+* **When you use a custom certificate that you purchased**, you use the *ssl_cert_pem_file*, *ssl_key_pem_file*, and *ssl_cname* parameters. PEM files with passphrase protection aren't supported. The following example demonstrates how to use .pem files to create a configuration that uses a TLS/SSL certificate that you purchased:
```python from azureml.core.compute import AksCompute
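# Hedged sketch; the certificate file names and CNAME below are placeholders
# for the files and domain you purchased.
provisioning_config = AksCompute.provisioning_configuration(
    ssl_cert_pem_file="cert.pem",
    ssl_key_pem_file="key.pem",
    ssl_cname="www.contoso.com",
)
```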
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-distributed-gpu.md
Make sure your code follows these tips:
### Horovod example
-* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
+* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
### DeepSpeed
Make sure your code follows these tips:
### DeepSeed example
-* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/deepspeed/cifar)
+* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/deepspeed/cifar)
### Environment variables from Open MPI
run = Experiment(ws, 'experiment_name').submit(run_config)
### Pytorch per-process-launch example -- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/pytorch/cifar-distributed)
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
### <a name="per-node-launch"></a> Using torch.distributed.launch (per-node-launch)
run = Experiment(ws, 'experiment_name').submit(run_config)
### PyTorch per-node-launch example -- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/pytorch/cifar-distributed)
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
### PyTorch Lightning
TF_CONFIG='{
### TensorFlow example -- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed)
+- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/workflows/train/tensorflow/mnist-distributed)
## <a name="infiniband"></a> Accelerating distributed GPU training with InfiniBand
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
ws = Workspace.from_config()
### Get the data
-The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
+The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
### Prepare training script
machine-learning How To Use Automl Small Object Detect V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automl-small-object-detect-v1.md
+
+ Title: Use AutoML to detect small objects in images (v1)
+
+description: Set up Azure Machine Learning automated ML to train small object detection models.
+++++ Last updated : 10/13/2021+++
+# Train a small object detection model with AutoML (preview) (v1)
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](how-to-use-automl-small-object-detect-v1.md)
+> * [v2 (current version)](../how-to-use-automl-small-object-detect.md)
+++
+> [!IMPORTANT]
+> This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article, you'll learn how to train an object detection model to detect small objects in high-resolution images with [automated ML](concept-automated-ml-v1.md) in Azure Machine Learning.
+
+Typically, computer vision models for object detection work well for datasets with relatively large objects. However, due to memory and computational constraints, these models tend to under-perform when tasked to detect small objects in high-resolution images. Because high-resolution images are typically large, they are resized before input into the model, which limits their capability to detect smaller objects relative to the initial image size.
+
+To help with this problem, automated ML supports tiling as part of the public preview computer vision capabilities. The tiling capability in automated ML is based on the concepts in [The Power of Tiling for Small Object Detection](https://openaccess.thecvf.com/content_CVPRW_2019/papers/UAVision/Unel_The_Power_of_Tiling_for_Small_Object_Detection_CVPRW_2019_paper.pdf).
+
+When tiling, each image is divided into a grid of tiles. Adjacent tiles overlap with each other in width and height dimensions. The tiles are cropped from the original as shown in the following image.
++
+## Prerequisites
+
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](../quickstart-create-resources.md).
+
+* This article assumes some familiarity with how to configure an [automated machine learning experiment for computer vision tasks](how-to-auto-train-image-models-v1.md).
+
+## Supported models
+
+Small object detection using tiling is supported for all of the models that automated ML for images supports for the object detection task.
+
+## Enable tiling during training
+
+To enable tiling, you can set the `tile_grid_size` parameter to a value like (3, 2), where 3 is the number of tiles along the width dimension and 2 is the number of tiles along the height dimension. When this parameter is set to (3, 2), each image is split into a grid of 3 x 2 tiles. Each tile overlaps with the adjacent tiles, so that any objects that fall on the tile border are included completely in one of the tiles. This overlap can be controlled by the `tile_overlap_ratio` parameter, which defaults to 25%.
+
+When tiling is enabled, the entire image and the tiles generated from it are passed through the model. These images and tiles are resized according to the `min_size` and `max_size` parameters before feeding to the model. The computation time increases proportionally because of processing this extra data.
+
+For example, when the `tile_grid_size` parameter is (3, 2), the computation time would be approximately seven times higher than without tiling, because the model processes the six tiles in addition to the full image.
+
+You can specify the value for `tile_grid_size` in your hyperparameter space as a string.
+
+```python
+parameter_space = {
+ 'model_name': choice('fasterrcnn_resnet50_fpn'),
+ 'tile_grid_size': choice('(3, 2)'),
+ ...
+}
+```
+
+The value for the `tile_grid_size` parameter depends on the image dimensions and the size of the objects within the image. For example, a larger number of tiles is helpful when the images contain smaller objects.
+
+To choose the optimal value for this parameter for your dataset, you can use hyperparameter search. To do so, you can specify a choice of values for this parameter in your hyperparameter space.
+
+```python
+parameter_space = {
+ 'model_name': choice('fasterrcnn_resnet50_fpn'),
+ 'tile_grid_size': choice('(2, 1)', '(3, 2)', '(5, 3)'),
+ ...
+}
+```
+## Tiling during inference
+
+When a model trained with tiling is deployed, tiling also occurs during inference. Automated ML uses the `tile_grid_size` value from training to generate the tiles during inference. The entire image and corresponding tiles are passed through the model, and the object proposals from them are merged to output final predictions, like in the following image.
++
+> [!NOTE]
+> It's possible that the same object is detected from multiple tiles, so duplicate detection is done to remove such duplicates.
+>
+> Duplicate detection is done by running NMS on the proposals from the tiles and the image. When multiple proposals overlap, the one with the highest score is picked and the others are discarded as duplicates. Two proposals are considered to be overlapping when the intersection over union (IoU) between them is greater than the `tile_predictions_nms_thresh` parameter.
+
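The merging step can be pictured with a small, generic non-maximum suppression (NMS) sketch. This is only an illustration of the IoU-threshold idea, not the service's internal implementation.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def merge_tile_predictions(boxes, scores, nms_thresh=0.25):
    """Greedy NMS: keep the highest-scoring box, drop overlapping lower-scoring ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    for idx in order:
        if all(iou(boxes[idx], boxes[k]) <= nms_thresh for k in keep):
            keep.append(idx)
    return keep
```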
+You also have the option to enable tiling only during inference without enabling it in training. To do so, set the `tile_grid_size` parameter only during inference, not for training.
+
+Doing so may improve performance for some datasets, and won't incur the extra cost that comes with tiling at training time.
+
+## Tiling hyperparameters
+
+The following are the parameters you can use to control the tiling feature.
+
+| Parameter Name | Description | Default |
+| |-| -|
+| `tile_grid_size` | The grid size to use for tiling each image. Available for use during training, validation, and inference.<br><br>Tuple of two integers passed as a string, e.g `'(3, 2)'`<br><br> *Note: Setting this parameter increases the computation time proportionally, since all tiles and images are processed by the model.*| no default value |
+| `tile_overlap_ratio` | Controls the overlap ratio between adjacent tiles in each dimension. When the objects that fall on the tile boundary are too large to fit completely in one of the tiles, increase the value of this parameter so that the objects fit in at least one of the tiles completely.<br> <br> Must be a float in [0, 1).| 0.25 |
+| `tile_predictions_nms_thresh` | The intersection over union threshold to use to do non-maximum suppression (nms) while merging predictions from tiles and image. Available during validation and inference. Change this parameter if there are multiple boxes detected per object in the final predictions. <br><br> Must be float in [0, 1]. | 0.25 |
++
+## Example notebooks
+
+See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) for detailed code examples of setting up and training an object detection model.
+
+>[!NOTE]
+> All images in this article are made available in accordance with the permitted use section of the [MIT licensing agreement](https://choosealicense.com/licenses/mit/).
+> Copyright © 2020 Roboflow, Inc.
+
+## Next steps
+
+* Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
+* For definitions and examples of the performance charts and metrics provided for each job, see [Evaluate automated machine learning experiment results](../how-to-understand-automated-ml.md).
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models-v1.md).
+* See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters-v1.md).
+* [Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models-v1.md)
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-managed-identities.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
> * [v1](how-to-use-managed-identities.md)
-> * [v2 (current version)](../how-to-use-managed-identities.md)
+> * [v2 (current version)](../how-to-identity-based-service-authentication.md)
[Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) allow you to configure your workspace with the *minimum required permissions to access resources*.
machine-learning Reference Automl Images Hyperparameters V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-hyperparameters-v1.md
This table summarizes hyperparameters specific to the `yolov5` algorithm.
| `multi_scale` | Enable multi-scale image by varying image size by +/- 50% <br> Must be 0 or 1. <br> <br> *Note: training run may get into CUDA OOM if no sufficient GPU memory*. | 0 | | `box_score_thresh` | During inference, only return proposals with a score greater than `box_score_thresh`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 | | `nms_iou_thresh` | IOU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
+| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect-v1.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default |
+| `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 |
+| `tile_predictions_nms_thresh` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/ inference. <br> Must be float in the range of [0, 1] | 0.25 |
This table summarizes hyperparameters specific to the `maskrcnn_*` for instance segmentation during inference.
The following table describes the hyperparameters that are model agnostic.
## Image classification (multi-class and multi-label) specific hyperparameters The following table summarizes hyperparmeters for image classification (multi-class and multi-label) tasks.
-
+ | Parameter name | Description | Default | | - |-|--| | `weighted_loss` | 0 for no weighted loss.<br>1 for weighted loss with sqrt.(class_weights) <br> 2 for weighted loss with class_weights. <br> Must be 0 or 1 or 2. | 0 | | `valid_resize_size` | <li> Image size to which to resize before cropping for validation dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> Training run may get into CUDA OOM if the size is too big*. | 256  | | `valid_crop_size` | <li> Image crop size that's input to your neural network for validation dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
-| `train_crop_size` | <li> Image crop size that's input to your neural network for train dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
+| `train_crop_size` | <li> Image crop size that's input to your neural network for train dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
## Object detection and instance segmentation task specific hyperparameters
The following hyperparameters are for object detection and instance segmentation
| `box_score_thresh` | During inference, only return proposals with a classification score greater than `box_score_thresh`. <br> Must be a float in the range [0, 1].| 0.3 | | `nms_iou_thresh` | IOU (intersection over union) threshold used in non-maximum suppression (NMS) for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 | | `box_detections_per_img` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 |
-| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](../how-to-use-automl-small-object-detect.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default |
+| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect-v1.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default |
| `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 | | `tile_predictions_nms_thresh` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/ inference. <br> Must be float in the range of [0, 1] | 0.25 |
machine-learning Tutorial Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models-v1.md
You'll write code using the Python SDK in this tutorial and learn the following
* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
-* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook.
+* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook.
-This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](../how-to-configure-environment.md#local). To get the required packages,
+This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](../how-to-configure-environment.md#local). To get the required packages,
* Run `pip install azureml`
-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment)
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment)
## Compute target setup
In this automated machine learning tutorial, you did the following tasks:
* [Learn how to set up AutoML to train computer vision models with Python (preview)](../how-to-auto-train-image-models.md). * [Learn how to configure incremental training on computer vision models](../how-to-auto-train-image-models.md#incremental-training-optional). * See [what hyperparameters are available for computer vision tasks](../reference-automl-images-hyperparameters.md).
-* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
+* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
> [!NOTE] > Use of the fridge objects dataset is available through the license under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
The above code specifies a dataset that is based on the output of a pipeline ste
The code that you've executed so far has created and controlled Azure resources. Now it's time to write code that does the first step in the domain.
-If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
+If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/v1/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
If you're working from scratch, create a subdirectory called `keras-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`.
Once the data has been converted from the compressed format to CSV files, it can
With larger pipelines, it's a good practice to put each step's source code in a separate directory (`src/prepare/`, `src/train/`, and so on), but for this tutorial, just use or create the file `train.py` in the same `keras-mnist-fashion/` source directory. Most of this code should be familiar to ML developers:
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
Managed Grafana has the following known limitations:
* API key usage isn't included in the audit log.
+* Users can be assigned the following Grafana Organization level roles: Admin, Editor, or Viewer. The Grafana Server Admin role isn't available to customers.
+
+* Some Data plane APIs require Grafana Server Admin permissions and can't be called by users. This includes the [Admin API](https://grafana.com/docs/grafana/latest/developers/http_api/admin/), the [User API](https://grafana.com/docs/grafana/latest/developers/http_api/user/#user-api) and the [Admin Organizations API](https://grafana.com/docs/grafana/latest/developers/http_api/org/#admin-organizations-api).
+
+* Azure Managed Grafana currently doesn't support the Grafana Role Based Access Control (RBAC) feature and the [RBAC API](https://grafana.com/docs/grafana/latest/developers/http_api/access_control/) is therefore disabled.
+ ## Next steps > [!div class="nextstepaction"]
marketplace Azure Container Plan Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-plan-overview.md
description: Create and edit plans for an Azure Container offer in Microsoft App
-- Previously updated : 07/05/2021++ Last updated : 09/23/2022 # Create and edit plans for an Azure Container offer
marketplace Iot Edge Plan Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-plan-overview.md
description: Create and edit plans for an IoT Edge Module offer on Azure Marketp
-+ Previously updated : 07/08/2021 Last updated : 9/23/2022 # Create and edit plans for an IoT Edge Module offer
marketplace Submission Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/submission-api-overview.md
Previously updated : 09/22/2021 Last updated : 09/23/2022 # Commercial marketplace submission API overview
Use API to programmatically query, create submissions for, and publish offers. A
There are two sets of submission API available: -- **Partner Center submission API** – The common set of APIs that work across consumer and commercial products to publish through Partner Center. New capabilities are continuously added to this set of APIs. For more information on how to integrate with this API, see [Partner Center submission API onboarding](submission-api-onboard.md).
+- **Partner Center submission API** – The common set of APIs that work across consumer and commercial products to publish through Partner Center. New capabilities are continuously added to this set of APIs. For more information on how to integrate with this API, see [Partner Center submission API onboarding](submission-api-onboard.md).
+- **Product Ingestion API** – The new set of modern APIs to create and manage commercial offers through Partner Center. New capabilities are continuously added to this set of APIs. The APIs are in preview state and will soon be launched for all offer types and will eventually replace the Partner Center submission and Legacy Cloud Partner Portal APIs. For more information on how to integrate with the modern Product Ingestion API, see [Product Ingestion API for the commercial marketplace](product-ingestion-api.md).
- **Legacy Cloud Partner Portal API** – The APIs carried over from the deprecated Cloud Partner Portal; it is integrated with and continues to work in Partner Center. This set of APIs is in maintenance mode only; new capabilities introduced in Partner Center may not be supported, and it should only be used for existing products that were already integrated before transition to Partner Center. For more information on how to continue to use the Cloud Partner Portal APIs, see [Cloud Partner Portal API Reference](cloud-partner-portal-api-overview.md). Refer to the following table for supported submission APIs for each offer type.
-| Offer type | Legacy Cloud Partner Portal API Support | Partner Center submission API support |
-| | :: | :: |
-| Azure Application | | &#x2714; |
-| Azure Container | &#x2714; | |
-| Azure Virtual Machine | &#x2714; | |
-| Consulting Service | &#x2714; | |
-| Dynamics 365 | | &#x2714; |
-| IoT Edge Module | &#x2714; | |
-| Managed Service | &#x2714; | |
-| Power BI App | &#x2714; | |
-| Software as a Service | | &#x2714; |
+| Offer type | Legacy Cloud Partner Portal API Support | Partner Center submission API support | Product Ingestion API support |
+| | :: | :: |--|
+| Azure Application | | &#x2714; | |
+| Azure Container | &#x2714; | | |
+| Azure Virtual Machine | &#x2714; | | &#x2714; |
+| Consulting Service | &#x2714; | | |
+| Dynamics 365 | | &#x2714; | |
+| IoT Edge Module | &#x2714; | | |
+| Managed Service | &#x2714; | | |
+| Power BI App | &#x2714; | | |
+| Software as a Service | | &#x2714; | |
Microsoft 365 Office add-ins, Microsoft 365 SharePoint solutions, Microsoft 365 Teams apps, and Power BI Visuals don't have submission API support.
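
For teams starting with the Partner Center submission API described above, the following is a minimal sketch of acquiring an Azure AD token with the client-credentials flow and listing products. The resource URI and the `/v1.0/ingestion/products` path are assumptions drawn from the linked onboarding article; verify them there before use.

```python
# Hedged sketch: client-credentials token for the Partner Center submission API,
# then a products list call. Endpoint and resource values are assumptions taken
# from the onboarding documentation; tenant/app values are placeholders.
import json
import urllib.parse
import urllib.request

TENANT_ID = "<azure-ad-tenant-id>"
CLIENT_ID = "<app-registration-client-id>"
CLIENT_SECRET = "<app-registration-secret>"

# Request an access token from Azure AD (client credentials grant).
token_req = urllib.request.Request(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
    data=urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": "https://api.partner.microsoft.com",   # assumed resource URI
    }).encode())
with urllib.request.urlopen(token_req) as resp:
    access_token = json.load(resp)["access_token"]

# List products (assumed base path for the submission/ingestion surface).
list_req = urllib.request.Request(
    "https://api.partner.microsoft.com/v1.0/ingestion/products",
    headers={"Authorization": f"Bearer {access_token}"})
with urllib.request.urlopen(list_req) as resp:
    print(json.dumps(json.load(resp), indent=2))
```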
private-5g-core Azure Private 5G Core Release Notes 2208 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2208.md
+
+ Title: Azure Private 5G Core 2208 release notes
+description: Discover what's new in the Azure Private 5G Core 2208 release
++++ Last updated : 09/23/2022++
+# Azure Private 5G Core 2208 release notes
+
+The following release notes identify the new features, critical open issues, and resolved issues for the 2208 release of Azure Private 5G Core. The release notes are continuously updated, and critical issues requiring a workaround are added as they're discovered. Before deploying this new version, carefully review the information in these release notes.
+
+This article applies to the Azure Private 5G Core 2208 version (PMN-4-16). This release is compatible with the Azure Stack Edge Pro GPU running the 2207 release and is supported by the 2022-04-01-preview [Microsoft.MobileNetwork API version](/rest/api/mobilenetwork).
+
+## What's new
+
+- **NRF removal** - Azure Private 5G Core no longer includes the Network Repository Function (NRF). Because peer network functions (NFs) are statically configured and there's no longer a need to perform NF discovery, this change simplifies the solution, decreases the attack surface, and reduces latency.
+
+## Issues fixed in the 2208 release
+
+The following table provides a summary of issues fixed in this release.
+
+ |No. |Feature | Issue |
+ |--|--|--|
+ | 1 | 4G/5G signaling | In some scenarios, Azure Private 5G Core may fail to resume N2 or S1-MME connectivity if the system is restarted. This issue is fixed in this release. |
+ | 2 | Local distributed tracing | Azure Private 5G Core local distributed tracing web GUI may show an authentication error when accessed from multiple browser windows by a single user. This issue has been fixed in this release. |
+ | 3 | Local dashboards | Azure Private 5G Core local dashboards display a higher value than the true value for multiple graphs due to lines being vertically stacked rather than overlaid. This issue has been fixed in this release. |
+ | 4 | Local dashboards | In rare scenarios, Azure Private 5G Core local dashboards lose older data. This issue has been fixed in this release. |
 | 5 | Local distributed tracing | Azure Private 5G Core local distributed tracing dashboard shows an incorrect representation of the user equipment (UE) registration type. This has been improved to provide a plain-text message detailing the UE registration type. |
+ | 6 | 4G/5G signaling | If a UE reuses the same protocol data unit (PDU) session ID as an existing session, an error may occur. This issue has been fixed as part of support for network-initiated session release. |
+ | 7 | Local dashboards | In the event of a system restart, Azure Private 5G Core local dashboard passwords were reset to the default value. This issue has been fixed in this release. |
+
+## Known issues in the 2208 release
+
+The following table provides a summary of known issues in this release.
+
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
+ | 1 | 4G/5G signaling | In rare scenarios, Azure Private 5G Core may lose the copy of subscriber data stored at the edge, resulting in a loss of service until the edge is reinstalled. | Reprovision SIM policies and SIMs. |
+ | 2 | 4G/5G signaling | In rare scenarios, Azure Private 5G Core may fail to notify a UE of downlink data that arrives while the UE is idle. | Toggle airplane mode **on/off** on the UE. The downlink data will then transmit to the UE correctly. |
+ | 3 | Local dashboards | Azure Private 5G Core local dashboards don't automatically refresh to show the latest data. | Manually refresh the web browser to refresh the dashboard contents. |
+ | 4 | Local dashboards | Azure Private 5G Core local dashboard configuration may be lost during a configuration change. | Manually reset the local dashboard password and recreate any custom dashboards. |
+ | 5 | Policy configuration | Azure Private 5G Core may ignore non-default quality of service (QoS) and policy configuration when handling 4G subscribers. | Not applicable. |
+ | 6 | Packet forwarding | Azure Private 5G Core may not forward buffered packets if NAT is enabled. | Not applicable. |
+ | 7 | 4G/5G signaling | Azure Private 5G Core may, with low periodicity, reject a small number of attach requests. | The attach requests should be reattempted. |
+ | 8 | 4G/5G signaling | Azure Private 5G Core will incorrectly accept Stream Control Transmission Protocol (SCTP) connections on the wrong N2 IP address. | Connect to Packet Core's N2 interface on the correct IP and port. |
+ | 9 | 4G/5G signaling | Azure Private 5G Core may perform an unnecessary PDU Session Resource Setup Transaction following a UE initiated service request. | Not applicable. |
+ | 10 | 4G/5G signaling | In rare scenarios, Azure Private 5G Core may corrupt the internal state of a packet data session, resulting in subsequent changes to that packet data session failing. | Reinstall the Packet Core. |
+ | 11 | 4G/5G signaling | In scenarios when the establishment of a PDU session has failed, Azure Private 5G Core may not automatically release the session, and the UE may need to re-register. | The UE should re-register. |
+ | 12 | 4G/5G signaling | 4G UEs that require both circuit switched (CS) and packet switched (PS) network availability to successfully attach to Azure Private 5G Core may disconnect after a successful attach. | Update/enhance the UEs to support PS only networks if possible, as CS isn't supported by Azure Private 5G Core. If it isn't possible to reconfigure the UEs, customer support can apply a low-level configuration tweak to Azure Private 5G Core to let these UEs believe there's a CS network available. |
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table to define the packet core instance
|The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. | **N6 gateway** (for 5G) or **SGi gateway** (for 4G). | | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**| | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
- | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). You must collect these addresses to allow the UEs to resolve domain names. </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to access the public internet. | **DNS Addresses** |
+ | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to resolve domain names. | **DNS Addresses** |
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses.</br></br>If you want to use [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) in this data network, keep NAPT disabled. |**NAPT**| ## Next steps
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
For each site you're deploying, do the following:
## Configure Domain Name System (DNS) servers > [!IMPORTANT]
-> If you don't configure DNS servers for a data network, all UEs using that network will be unable to resolve domain names and access the public internet.
+> If you don't configure DNS servers for a data network, all UEs using that network will be unable to resolve domain names.
DNS allows the translation between human-readable domain names and their associated machine-readable IP addresses. Depending on your requirements, you have the following options for configuring a DNS server for your data network:
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Four Azure resources are defined in the template.
|**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. | | **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. | | **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
- | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if the UEs in this data network don't need to access the public internet. |
+ | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
| **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. | 1. Select **Review + create**.
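
If you'd rather script this step than complete it in the portal, the following illustrative sketch submits the template through the generic `Microsoft.Resources/deployments` REST API. The parameter names mirror the table above but are assumptions; align them with the parameter names in the template you downloaded, and treat the token and IDs as placeholders.

```python
# Illustrative sketch: deploy the site ARM template via the Azure Resource Manager
# deployments REST API. Parameter names below are assumptions that mirror the
# portal fields; check them against the template before running.
import json
import urllib.request

ARM_TOKEN = "<arm-access-token>"            # placeholder
SUBSCRIPTION_ID = "<subscription-id>"       # placeholder
RESOURCE_GROUP = "<resource-group>"         # placeholder
DEPLOYMENT_NAME = "create-site-example"

with open("create-site-template.json") as f:   # the template file you obtained
    template = json.load(f)

parameters = {
    "coreNetworkTechnology": {"value": "5GC"},
    "naptEnabled": {"value": "Disabled"},
    "dnsAddresses": {"value": ["198.51.100.10"]},
    "customLocation": {"value": "<custom-location-resource-id>"},
}

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       f"/resourcegroups/{RESOURCE_GROUP}/providers/Microsoft.Resources"
       f"/deployments/{DEPLOYMENT_NAME}?api-version=2021-04-01")

body = json.dumps({"properties": {
    "mode": "Incremental",
    "template": template,
    "parameters": parameters,
}}).encode()

req = urllib.request.Request(
    url, data=body, method="PUT",
    headers={"Authorization": f"Bearer {ARM_TOKEN}",
             "Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(resp.status)   # 200/201 means the deployment was accepted
```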
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
|**Data Network Name** | Enter the name of the data network. | |**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. | |**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.|
- | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if the UEs in this data network don't need to access the public internet. |
+ | **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
|**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.| 1. Select **Review + create**.
purview Register Scan Power Bi Tenant Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-troubleshoot.md
Previously updated : 05/06/2022 Last updated : 09/22/2022
This article explores common troubleshooting methods for scanning Power BI tenan
|**Scenarios** |**Microsoft Purview public access allowed/denied** |**Power BI public access allowed /denied** | **Runtime option** | **Authentication option** | **Deployment checklist** | ||||||| |Public access with Azure IR |Allowed |Allowed |Azure Runtime | Microsoft Purview Managed Identity | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
-|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
-|Private access |Allowed |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
-|Private access |Denied |Allowed* |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
-|Private access |Denied |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
-
-\* Power BI tenant must have a private endpoint which is deployed in a Virtual Network accessible from the self-hosted integration runtime VM. For more information, see [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links).
+|Public access with self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated authentication / Service principal | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
+|Private access |Allowed |Denied |Self-hosted runtime |Delegated authentication / Service principal | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
+|Private access |Denied |Allowed |Self-hosted runtime |Delegated authentication / Service principal | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
+|Private access |Denied |Denied |Self-hosted runtime |Delegated authentication / Service principal | [Review deployment checklist](register-scan-power-bi-tenant.md#deployment-checklist) |
### Cross-tenant |**Scenarios** |**Microsoft Purview public access allowed/denied** |**Power BI public access allowed /denied** | **Runtime option** | **Authentication option** | **Deployment checklist** | ||||||| |Public access with Azure IR |Allowed |Allowed |Azure runtime |Delegated Authentication | [Deployment checklist](register-scan-power-bi-tenant-cross-tenant.md#deployment-checklist) |
-|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Deployment checklist](register-scan-power-bi-tenant-cross-tenant.md#deployment-checklist) |
+|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated authentication / Service principal | [Deployment checklist](register-scan-power-bi-tenant-cross-tenant.md#deployment-checklist) |
## Troubleshooting tips
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 09/09/2022 Last updated : 09/23/2022
The following table provides a brief description of each built-in role. Click th
> | [Virtual Machine Administrator Login](#virtual-machine-administrator-login) | View Virtual Machines in the portal and login as administrator | 1c0163c0-47e6-4577-8991-ea5c82e286e4 | > | [Virtual Machine Contributor](#virtual-machine-contributor) | Create and manage virtual machines, manage disks, install and run software, reset password of the root user of the virtual machine using VM extensions, and manage local user accounts using VM extensions. This role does not grant you management access to the virtual network or storage account the virtual machines are connected to. This role does not allow you to assign roles in Azure RBAC. | 9980e02c-c2be-4d73-94e8-173b1dc7cf3c | > | [Virtual Machine User Login](#virtual-machine-user-login) | View Virtual Machines in the portal and login as a regular user. | fb879df8-f326-4884-b1cf-06f3ad86be52 |
+> | [Windows Admin Center Administrator Login](#windows-admin-center-administrator-login) | Lets you manage the OS of your resource via Windows Admin Center as an administrator. | a6333a3e-0164-44c3-b281-7a577aff287f |
> | **Networking** | | | > | [CDN Endpoint Contributor](#cdn-endpoint-contributor) | Can manage CDN endpoints, but can't grant access to other users. | 426e0c7f-0c7e-4658-b36f-ff54d6c29b45 | > | [CDN Endpoint Reader](#cdn-endpoint-reader) | Can view CDN endpoints, but can't make changes. | 871e35f6-b5c1-49cc-a043-bde969a0f2cd |
View Virtual Machines in the portal and login as a regular user. [Learn more](..
} ```
+### Windows Admin Center Administrator Login
+
+Lets you manage the OS of your resource via Windows Admin Center as an administrator. [Learn more](/windows-server/manage/windows-admin-center/azure/manage-vm)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/*/read | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/* | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/upgradeExtensions/action | Upgrades Extensions on Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/operations/read | Read all Operations for Azure Arc for Servers |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/read | Gets a load balancer definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/publicIPAddresses/read | Gets a public ip address definition. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/read | Gets a network security group definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/defaultSecurityRules/read | Gets a default security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkWatchers/securityGroupView/action | View the configured and effective network security group rules applied on a VM. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/read | Gets a security rule definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/write | Creates a security rule or updates an existing security rule |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/write | Create or update the endpoint to the target resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/read | Get or list of endpoints to the target resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listManagedProxyDetails/action | |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/read | Get the properties of a virtual machine |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/read | Retrieves the summary of the latest patch assessment operation |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/softwarePatches/read | Retrieves list of patches assessed during the last patch assessment operation |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchInstallationResults/read | Retrieves the summary of the latest patch installation operation |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchInstallationResults/softwarePatches/read | Retrieves list of patches attempted to be installed during the last patch installation operation |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/extensions/read | Get the properties of a virtual machine extension |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/instanceView/read | Gets the detailed runtime status of the virtual machine and its resources |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/runCommands/read | Get the properties of a virtual machine run command |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/vmSizes/read | Lists available sizes the virtual machine can be updated to |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/locations/publishers/artifacttypes/types/read | Get the properties of a VMExtension Type |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/locations/publishers/artifacttypes/types/versions/read | Get the properties of a VMExtension Version |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/diskAccesses/read | Get the properties of DiskAccess resource |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/galleries/images/read | Gets the properties of Gallery Image |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/images/read | Get the properties of the Image |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/Read | Gets clusters |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/ArcSettings/Read | Gets arc resource of HCI cluster |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/ArcSettings/Extensions/Read | Gets extension resource of HCI cluster |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/ArcSettings/Extensions/Write | Create or update extension resource of HCI cluster |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/ArcSettings/Extensions/Delete | Delete extension resources of HCI cluster |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Operations/Read | Gets operations |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/WACLoginAsAdmin/action | Lets you manage the OS of your resource via Windows Admin Center as an administrator. |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/WACloginAsAdmin/action | Lets you manage the OS of your resource via Windows Admin Center as an administrator |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/WACloginAsAdmin/Action | Manage OS of HCI resource via Windows Admin Center as an administrator |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Let's you manage the OS of your resource via Windows Admin Center as an administrator.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/a6333a3e-0164-44c3-b281-7a577aff287f",
+ "name": "a6333a3e-0164-44c3-b281-7a577aff287f",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.HybridCompute/machines/*/read",
+ "Microsoft.HybridCompute/machines/extensions/*",
+ "Microsoft.HybridCompute/machines/upgradeExtensions/action",
+ "Microsoft.HybridCompute/operations/read",
+ "Microsoft.Network/networkInterfaces/read",
+ "Microsoft.Network/loadBalancers/read",
+ "Microsoft.Network/publicIPAddresses/read",
+ "Microsoft.Network/virtualNetworks/read",
+ "Microsoft.Network/networkSecurityGroups/read",
+ "Microsoft.Network/networkSecurityGroups/defaultSecurityRules/read",
+ "Microsoft.Network/networkWatchers/securityGroupView/action",
+ "Microsoft.Network/networkSecurityGroups/securityRules/read",
+ "Microsoft.Network/networkSecurityGroups/securityRules/write",
+ "Microsoft.HybridConnectivity/endpoints/write",
+ "Microsoft.HybridConnectivity/endpoints/read",
+ "Microsoft.HybridConnectivity/endpoints/listManagedProxyDetails/action",
+ "Microsoft.Compute/virtualMachines/read",
+ "Microsoft.Compute/virtualMachines/patchAssessmentResults/latest/read",
+ "Microsoft.Compute/virtualMachines/patchAssessmentResults/latest/softwarePatches/read",
+ "Microsoft.Compute/virtualMachines/patchInstallationResults/read",
+ "Microsoft.Compute/virtualMachines/patchInstallationResults/softwarePatches/read",
+ "Microsoft.Compute/virtualMachines/extensions/read",
+ "Microsoft.Compute/virtualMachines/instanceView/read",
+ "Microsoft.Compute/virtualMachines/runCommands/read",
+ "Microsoft.Compute/virtualMachines/vmSizes/read",
+ "Microsoft.Compute/locations/publishers/artifacttypes/types/read",
+ "Microsoft.Compute/locations/publishers/artifacttypes/types/versions/read",
+ "Microsoft.Compute/diskAccesses/read",
+ "Microsoft.Compute/galleries/images/read",
+ "Microsoft.Compute/images/read",
+ "Microsoft.AzureStackHCI/Clusters/Read",
+ "Microsoft.AzureStackHCI/Clusters/ArcSettings/Read",
+ "Microsoft.AzureStackHCI/Clusters/ArcSettings/Extensions/Read",
+ "Microsoft.AzureStackHCI/Clusters/ArcSettings/Extensions/Write",
+ "Microsoft.AzureStackHCI/Clusters/ArcSettings/Extensions/Delete",
+ "Microsoft.AzureStackHCI/Operations/Read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.HybridCompute/machines/WACLoginAsAdmin/action",
+ "Microsoft.Compute/virtualMachines/WACloginAsAdmin/action",
+ "Microsoft.AzureStackHCI/Clusters/WACloginAsAdmin/Action"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Windows Admin Center Administrator Login",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
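
To put this role to use, assign it at the scope of the machine you want to manage. Here's a minimal sketch that creates the assignment at virtual machine scope through the Azure RBAC REST API, using the built-in role ID shown above; the token, subscription, and principal values are placeholders, and the portal, Azure CLI, or PowerShell work equally well.

```python
# Minimal sketch: assign "Windows Admin Center Administrator Login" at VM scope
# by calling the Azure role assignments REST API. Token and IDs are placeholders.
import json
import urllib.request
import uuid

ARM_TOKEN = "<arm-access-token>"
VM_SCOPE = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
            "/providers/Microsoft.Compute/virtualMachines/<vm-name>")
PRINCIPAL_ID = "<user-or-group-object-id>"

# Built-in role ID from the role definition above.
SUBSCRIPTION_SCOPE = VM_SCOPE.split("/resourceGroups")[0]
ROLE_DEFINITION_ID = (f"{SUBSCRIPTION_SCOPE}/providers/Microsoft.Authorization"
                      "/roleDefinitions/a6333a3e-0164-44c3-b281-7a577aff287f")

assignment_name = str(uuid.uuid4())   # role assignment names are GUIDs
url = (f"https://management.azure.com{VM_SCOPE}"
       f"/providers/Microsoft.Authorization/roleAssignments/{assignment_name}"
       "?api-version=2022-04-01")

body = json.dumps({"properties": {
    "roleDefinitionId": ROLE_DEFINITION_ID,
    "principalId": PRINCIPAL_ID,
}}).encode()

req = urllib.request.Request(
    url, data=body, method="PUT",
    headers={"Authorization": f"Bearer {ARM_TOKEN}",
             "Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["id"])   # resource ID of the new role assignment
```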
+ ## Networking
Full access to Azure SignalR Service REST APIs
> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/send/action | Send messages to user, who may consist of multiple client connections. | > | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/read | Check user existence. | > | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/write | Modify a user. |
+> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/livetrace/* | |
> | **NotDataActions** | | > | *none* | |
Full access to Azure SignalR Service REST APIs
"Microsoft.SignalRService/SignalR/serverConnection/write", "Microsoft.SignalRService/SignalR/user/send/action", "Microsoft.SignalRService/SignalR/user/read",
- "Microsoft.SignalRService/SignalR/user/write"
+ "Microsoft.SignalRService/SignalR/user/write",
+ "Microsoft.SignalRService/SignalR/livetrace/*"
], "notDataActions": [] }
Management Group Contributor Role [Learn more](../governance/management-groups/o
> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/subscriptions/write | Associates existing subscription with the management group. | > | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/write | Create or update a management group. | > | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/subscriptions/read | Lists subscription under the given management group. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Management Group Contributor Role [Learn more](../governance/management-groups/o
"Microsoft.Management/managementGroups/subscriptions/delete", "Microsoft.Management/managementGroups/subscriptions/write", "Microsoft.Management/managementGroups/write",
- "Microsoft.Management/managementGroups/subscriptions/read"
+ "Microsoft.Management/managementGroups/subscriptions/read",
+ "Microsoft.Authorization/*/read"
], "notActions": [], "dataActions": [],
Management Group Reader Role
> | | | > | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. | > | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/subscriptions/read | Lists subscription under the given management group. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Management Group Reader Role
{ "actions": [ "Microsoft.Management/managementGroups/read",
- "Microsoft.Management/managementGroups/subscriptions/read"
+ "Microsoft.Management/managementGroups/subscriptions/read",
+ "Microsoft.Authorization/*/read"
], "notActions": [], "dataActions": [],
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 09/09/2022 Last updated : 09/23/2022
Click the resource provider name in the following table to see the list of opera
| [Microsoft.ApiManagement](#microsoftapimanagement) | | [Microsoft.AppConfiguration](#microsoftappconfiguration) | | [Microsoft.AzureStack](#microsoftazurestack) |
+| [Microsoft.AzureStackHCI](#microsoftazurestackhci) |
| [Microsoft.DataBoxEdge](#microsoftdataboxedge) | | [Microsoft.DataCatalog](#microsoftdatacatalog) | | [Microsoft.EventGrid](#microsofteventgrid) |
Azure service: core
> | Microsoft.Marketplace/privateStores/adminRequestApprovals/write | Admin update the request with decision on the request | > | Microsoft.Marketplace/privateStores/collections/approveAllItems/action | Delete all specific approved items and set collection to allItemsApproved | > | Microsoft.Marketplace/privateStores/collections/disableApproveAllItems/action | Set approve all items property to false for the collection |
+> | Microsoft.Marketplace/privateStores/collections/setRules/action | Set Rules on a given collection |
+> | Microsoft.Marketplace/privateStores/collections/queryRules/action | Get Rules on a given collection |
> | Microsoft.Marketplace/privateStores/collections/upsertOfferWithMultiContext/action | Upsert an offer with different contexts | > | Microsoft.Marketplace/privateStores/offers/write | Creates offer in PrivateStore. | > | Microsoft.Marketplace/privateStores/offers/delete | Deletes offer from PrivateStore. |
Azure service: [Azure Spring Apps](../spring-apps/index.yml)
> | Microsoft.AppPlatform/Spring/apps/deployments/generateHeapDump/action | Generate heap dump for a specific application | > | Microsoft.AppPlatform/Spring/apps/deployments/generateThreadDump/action | Generate thread dump for a specific application | > | Microsoft.AppPlatform/Spring/apps/deployments/startJFR/action | Start JFR for a specific application |
+> | Microsoft.AppPlatform/Spring/apps/deployments/enableRemoteDebugging/action | Enable remote debugging for a specific application |
+> | Microsoft.AppPlatform/Spring/apps/deployments/disableRemoteDebugging/action | Disable remote debugging for a specific application |
> | Microsoft.AppPlatform/Spring/apps/deployments/connectorProps/read | Get the service connectors for a specific application | > | Microsoft.AppPlatform/Spring/apps/deployments/connectorProps/write | Create or update the service connector for a specific application | > | Microsoft.AppPlatform/Spring/apps/deployments/connectorProps/delete | Delete the service connector for a specific application |
Azure service: [Azure Spring Apps](../spring-apps/index.yml)
> | Microsoft.AppPlatform/Spring/storages/delete | Delete the storage for a specific Azure Spring Apps service instance | > | Microsoft.AppPlatform/Spring/storages/read | Get storage for a specific Azure Spring Apps service instance | > | **DataAction** | **Description** |
+> | Microsoft.AppPlatform/Spring/apps/deployments/remotedebugging/action | Remote debugging app instance for a specific application |
+> | Microsoft.AppPlatform/Spring/apps/deployments/connect/action | Connect to an instance for a specific application |
> | Microsoft.AppPlatform/Spring/configService/read | Read the configuration content(for example, application.yaml) for a specific Azure Spring Apps service instance | > | Microsoft.AppPlatform/Spring/configService/write | Write config server content for a specific Azure Spring Apps service instance | > | Microsoft.AppPlatform/Spring/configService/delete | Delete config server content for a specific Azure Spring Apps service instance |
Azure service: [Azure Kubernetes Service (AKS)](../aks/index.yml)
> | Microsoft.ContainerService/fleets/apps/deployments/read | Reads deployments | > | Microsoft.ContainerService/fleets/apps/deployments/write | Writes deployments | > | Microsoft.ContainerService/fleets/apps/deployments/delete | Deletes deployments |
-> | Microsoft.ContainerService/fleets/apps/replicasets/read | Reads replicasets |
-> | Microsoft.ContainerService/fleets/apps/replicasets/write | Writes replicasets |
-> | Microsoft.ContainerService/fleets/apps/replicasets/delete | Deletes replicasets |
> | Microsoft.ContainerService/fleets/apps/statefulsets/read | Reads statefulsets | > | Microsoft.ContainerService/fleets/apps/statefulsets/write | Writes statefulsets | > | Microsoft.ContainerService/fleets/apps/statefulsets/delete | Deletes statefulsets |
Azure service: [Azure Kubernetes Service (AKS)](../aks/index.yml)
> | Microsoft.ContainerService/fleets/extensions/podsecuritypolicies/read | Reads podsecuritypolicies | > | Microsoft.ContainerService/fleets/extensions/podsecuritypolicies/write | Writes podsecuritypolicies | > | Microsoft.ContainerService/fleets/extensions/podsecuritypolicies/delete | Deletes podsecuritypolicies |
-> | Microsoft.ContainerService/fleets/extensions/replicasets/read | Reads replicasets |
-> | Microsoft.ContainerService/fleets/extensions/replicasets/write | Writes replicasets |
-> | Microsoft.ContainerService/fleets/extensions/replicasets/delete | Deletes replicasets |
> | Microsoft.ContainerService/fleets/groups/impersonate/action | Impersonate groups | > | Microsoft.ContainerService/fleets/healthz/read | Reads healthz | > | Microsoft.ContainerService/fleets/healthz/autoregister-completion/read | Reads autoregister-completion |
Azure service: [Azure Kubernetes Service (AKS)](../aks/index.yml)
> | Microsoft.ContainerService/fleets/persistentvolumes/read | Reads persistentvolumes | > | Microsoft.ContainerService/fleets/persistentvolumes/write | Writes persistentvolumes | > | Microsoft.ContainerService/fleets/persistentvolumes/delete | Deletes persistentvolumes |
-> | Microsoft.ContainerService/fleets/pods/read | Reads pods |
-> | Microsoft.ContainerService/fleets/pods/write | Writes pods |
-> | Microsoft.ContainerService/fleets/pods/delete | Deletes pods |
-> | Microsoft.ContainerService/fleets/pods/exec/action | Exec into pods resource |
> | Microsoft.ContainerService/fleets/podtemplates/read | Reads podtemplates | > | Microsoft.ContainerService/fleets/podtemplates/write | Writes podtemplates | > | Microsoft.ContainerService/fleets/podtemplates/delete | Deletes podtemplates |
Azure service: [Data Factory](../data-factory/index.yml)
> | Microsoft.DataFactory/factories/querytriggerruns/action | Queries the Trigger Runs. | > | Microsoft.DataFactory/factories/querypipelineruns/action | Queries the Pipeline Runs. | > | Microsoft.DataFactory/factories/querydebugpipelineruns/action | Queries the Debug Pipeline Runs. |
+> | Microsoft.DataFactory/factories/adfcdcs/read | Reads ADF Change data capture. |
+> | Microsoft.DataFactory/factories/adfcdcs/delete | Deletes ADF Change data capture. |
+> | Microsoft.DataFactory/factories/adfcdcs/write | Create or update ADF Change data capture. |
> | Microsoft.DataFactory/factories/adflinkconnections/read | Reads ADF Link Connection. | > | Microsoft.DataFactory/factories/adflinkconnections/delete | Deletes ADF Link Connection. | > | Microsoft.DataFactory/factories/adflinkconnections/write | Create or update ADF Link Connection |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/managedInstances/distributedAvailabilityGroups/read | Return the list of distributed availability groups or gets the properties for the specified distributed availability group. | > | Microsoft.Sql/managedInstances/distributedAvailabilityGroups/write | Creates distributed availability groups with a specified parameters. | > | Microsoft.Sql/managedInstances/distributedAvailabilityGroups/delete | Deletes a distributed availability group. |
+> | Microsoft.Sql/managedInstances/distributedAvailabilityGroups/setRole/action | Set Role for Azure SQL Managed Instance Link to Primary or Secondary. |
> | Microsoft.Sql/managedInstances/dnsAliases/read | Return the list of Azure SQL Managed Instance Dns Aliases for the specified instance. | > | Microsoft.Sql/managedInstances/dnsAliases/write | Creates an Azure SQL Managed Instance Dns Alias with the specified parameters or updates the properties for the specified Azure SQL Managed Instance Dns Alias. | > | Microsoft.Sql/managedInstances/dnsAliases/delete | Deletes an existing Azure SQL Managed Instance Dns Alias. |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/servers/databases/ledgerDigestUploads/read | Read ledger digest upload settings | > | Microsoft.Sql/servers/databases/ledgerDigestUploads/write | Enable uploading ledger digests | > | Microsoft.Sql/servers/databases/ledgerDigestUploads/disable/action | Disable uploading ledger digests |
+> | Microsoft.Sql/servers/databases/linkWorkspaces/read | Return the list of synapselink workspaces for the specified database |
> | Microsoft.Sql/servers/databases/maintenanceWindowOptions/read | Gets a list of available maintenance windows for a selected database. | > | Microsoft.Sql/servers/databases/maintenanceWindows/read | Gets maintenance windows settings for a selected database. | > | Microsoft.Sql/servers/databases/maintenanceWindows/write | Sets maintenance windows settings for a selected database. |
Azure service: [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-m
> | Microsoft.SqlVirtualMachine/sqlVirtualMachineGroups/availabilityGroupListeners/write | Create a new or changes properties of existing SQL availability group listener | > | Microsoft.SqlVirtualMachine/sqlVirtualMachineGroups/availabilityGroupListeners/delete | Delete existing availability group listener | > | Microsoft.SqlVirtualMachine/sqlVirtualMachineGroups/sqlVirtualMachines/read | List SQL virtual machines by a particular SQL virtual machine group |
-> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/startAssessment/action | |
-> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/redeploy/action | Redeploy existing SQL virtual machine |
> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/read | Retrieve details of SQL virtual machine | > | Microsoft.SqlVirtualMachine/sqlVirtualMachines/write | Create a new or change properties of existing SQL virtual machine | > | Microsoft.SqlVirtualMachine/sqlVirtualMachines/delete | Delete existing SQL virtual machine |
+> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/troubleshoot/action | |
+> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/startAssessment/action | |
+> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/redeploy/action | Redeploy existing SQL virtual machine |
## Analytics
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/action | Answer Knowledgebase. | > | Microsoft.CognitiveServices/accounts/Language/query-text/action | Answer Text. | > | Microsoft.CognitiveServices/accounts/Language/query-dataverse/action | Query Dataverse. |
+> | Microsoft.CognitiveServices/accounts/Language/generate-questionanswers/action | Submit a Generate question answers Job request. |
> | Microsoft.CognitiveServices/accounts/Language/analyze-conversations/action | Analyzes the input conversation. | > | Microsoft.CognitiveServices/accounts/Language/analyze-text/action | Submit a collection of text documents for analysis. Specify a single unique task to be executed immediately. |
+> | Microsoft.CognitiveServices/accounts/Language/write | Creates a new project or updates an existing one. |
+> | Microsoft.CognitiveServices/accounts/Language/delete | Deletes a project. |
+> | Microsoft.CognitiveServices/accounts/Language/:export/action | Triggers a job to export a project's data. |
+> | Microsoft.CognitiveServices/accounts/Language/read | Gets the details of a project. Lists the existing projects.* |
+> | Microsoft.CognitiveServices/accounts/Language/:import/action | Triggers a job to import a project. If a project with the same name already exists, the data of that project is replaced. |
+> | Microsoft.CognitiveServices/accounts/Language/:train/action | Triggers a training job for a project. |
> | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobscancel/action | Cancel a long-running analysis job on conversation. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobs/action | Submit a long conversation for analysis. Specify one or more unique tasks to be executed as a long-running operation. | > | Microsoft.CognitiveServices/accounts/Language/analyze-conversation/jobs/read | Get the status of an analysis job. A job may consist of one or more tasks. Once all tasks have succeeded, the job will transition to the succeeded state and results will be available for each task. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/models/verification/read | Get trained model verification report. | > | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/train/jobs/read | Get training jobs. Get training job status and result details.* | > | Microsoft.CognitiveServices/accounts/Language/analyze-text/projects/train/jobs/cancel/action | Cancels a running training job. |
+> | Microsoft.CognitiveServices/accounts/Language/copy/jobs/read | Gets the status of an existing copy project job. |
+> | Microsoft.CognitiveServices/accounts/Language/deployments/delete | Deletes a project deployment. |
+> | Microsoft.CognitiveServices/accounts/Language/deployments/read | Gets the details of a deployment. Lists the deployments belonging to a project.* |
+> | Microsoft.CognitiveServices/accounts/Language/deployments/:swap/action | Swaps two existing deployments with each other. |
+> | Microsoft.CognitiveServices/accounts/Language/deployments/write | Creates a new deployment or replaces an existing one. |
+> | Microsoft.CognitiveServices/accounts/Language/deployments/jobs/read | Gets the status of an existing deployment job. |
+> | Microsoft.CognitiveServices/accounts/Language/deployments/swap/jobs/read | Gets the status of an existing swap deployment job. |
+> | Microsoft.CognitiveServices/accounts/Language/export/jobs/read | Gets the status of an export job. Once job completes, returns the project metadata, and assets. |
+> | Microsoft.CognitiveServices/accounts/Language/export/jobs/result/read | Gets the result of an export job. |
+> | Microsoft.CognitiveServices/accounts/Language/generate-questionanswers/jobs/read | Get QA generation Job Status. |
+> | Microsoft.CognitiveServices/accounts/Language/global/deletion-jobs/read | Gets the status for a project deletion job. |
+> | Microsoft.CognitiveServices/accounts/Language/global/deployments/resources/read | Gets the deployments to which an Azure resource is assigned. |
+> | Microsoft.CognitiveServices/accounts/Language/global/languages/read | Lists the supported languages for the given project type. |
+> | Microsoft.CognitiveServices/accounts/Language/global/prebuilt-entities/read | Lists the supported prebuilt entities that can be used while creating composed entities. |
+> | Microsoft.CognitiveServices/accounts/Language/global/training-config-versions/read | Lists the support training config version for a given project type. |
+> | Microsoft.CognitiveServices/accounts/Language/import/jobs/read | Gets the status for an import. |
+> | Microsoft.CognitiveServices/accounts/Language/models/delete | Deletes an existing trained model. |
+> | Microsoft.CognitiveServices/accounts/Language/models/read | Gets the details of a trained model. Lists the trained models belonging to a project.* |
+> | Microsoft.CognitiveServices/accounts/Language/models/:load-snapshot/action | Restores the snapshot of this trained model to be the current working directory of the project. |
+> | Microsoft.CognitiveServices/accounts/Language/models/evaluation/result/read | Gets the detailed results of the evaluation for a trained model. This includes the raw inference results for the data included in the evaluation process. |
+> | Microsoft.CognitiveServices/accounts/Language/models/evaluation/summary-result/read | Gets the evaluation summary of a trained model. The summary includes high level performance measurements of the model e.g., F1, Precision, Recall, etc. |
+> | Microsoft.CognitiveServices/accounts/Language/models/load-snapshot/jobs/read | Gets the status for loading a snapshot. |
> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/read | List Projects. Get Project Details.* | > | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/write | Create Project. | > | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/delete | Delete Project. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/sources/jobs/read | Get Update Sources Job Status. | > | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/synonyms/read | Get Synonyms. | > | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/projects/synonyms/write | Update Synonyms. |
+> | Microsoft.CognitiveServices/accounts/Language/resources/write | Assign new Azure resources to a project to allow deploying new deployments to them. |
+> | Microsoft.CognitiveServices/accounts/Language/resources/delete | Unassign resources from a project. |
+> | Microsoft.CognitiveServices/accounts/Language/resources/read | Gets the deployments resources assigned to the project. |
+> | Microsoft.CognitiveServices/accounts/Language/resources/assign/jobs/read | Gets the status of an existing assign deployment resources job. |
+> | Microsoft.CognitiveServices/accounts/Language/resources/jobs/read | Gets the status of an existing deployment resources job. |
+> | Microsoft.CognitiveServices/accounts/Language/resources/unassign/jobs/read | Gets the status of an existing unassign deployment resources job. |
+> | Microsoft.CognitiveServices/accounts/Language/train/jobs/:cancel/action | Triggers a cancellation for a running training job. |
+> | Microsoft.CognitiveServices/accounts/Language/train/jobs/read | Gets the status for a training job. Lists the non-expired training jobs created for a project.* |
> | Microsoft.CognitiveServices/accounts/LanguageAuthoring/projects/action | Creates a new project. | > | Microsoft.CognitiveServices/accounts/LanguageAuthoring/projects/delete | Deletes a project. | > | Microsoft.CognitiveServices/accounts/LanguageAuthoring/projects/read | Returns a project. Returns the list of projects.* |
Azure service: [Machine Learning](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/registries/read | Gets the Machine Learning Services Registry(ies) | > | Microsoft.MachineLearningServices/registries/write | Creates or updates the Machine Learning Services Registry(ies) | > | Microsoft.MachineLearningServices/registries/delete | Deletes the Machine Learning Services Registry(ies) |
+> | Microsoft.MachineLearningServices/registries/privateEndpointConnectionsApproval/action | Approve or reject a connection to a Private Endpoint resource of Microsoft.Network provider |
> | Microsoft.MachineLearningServices/registries/assets/read | Reads assets in Machine Learning Services Registry(ies) | > | Microsoft.MachineLearningServices/registries/assets/write | Creates or updates assets in Machine Learning Services Registry(ies) | > | Microsoft.MachineLearningServices/registries/assets/delete | Deletes assets in Machine Learning Services Registry(ies) | > | Microsoft.MachineLearningServices/registries/checkNameAvailability/read | Checks name for Machine Learning Services Registry(ies) |
+> | Microsoft.MachineLearningServices/registries/privateEndpointConnectionProxies/read | View the state of a connection proxy to a Private Endpoint resource of Microsoft.Network provider |
+> | Microsoft.MachineLearningServices/registries/privateEndpointConnectionProxies/write | Change the state of a connection proxy to a Private Endpoint resource of Microsoft.Network provider |
+> | Microsoft.MachineLearningServices/registries/privateEndpointConnectionProxies/delete | Delete a connection proxy to a Private Endpoint resource of Microsoft.Network provider |
+> | Microsoft.MachineLearningServices/registries/privateEndpointConnectionProxies/validate/action | Validate a connection proxy to a Private Endpoint resource of Microsoft.Network provider |
+> | Microsoft.MachineLearningServices/registries/privateEndpointConnections/read | View the state of a connection to a Private Endpoint resource of Microsoft.Network provider |
+> | Microsoft.MachineLearningServices/registries/privateEndpointConnections/write | Change the state of a connection to a Private Endpoint resource of Microsoft.Network provider |
+> | Microsoft.MachineLearningServices/registries/privateEndpointConnections/delete | Delete a connection to a Private Endpoint resource of Microsoft.Network provider |
+> | Microsoft.MachineLearningServices/registries/privateLinkResources/read | Gets the available private link resources for the specified instance of the Machine Learning Services Registry(ies) |
> | Microsoft.MachineLearningServices/virtualclusters/read | Gets the Machine Learning Services Virtual Cluster(s) | > | Microsoft.MachineLearningServices/virtualclusters/write | Creates or updates a Machine Learning Services Virtual Cluster(s) | > | Microsoft.MachineLearningServices/virtualclusters/delete | Deletes the Machine Learning Services Virtual Cluster(s) |
Azure service: [Machine Learning](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/workspaces/labeling/labels/write | Creates labels of labeling projects in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action | Reject labels of labeling projects in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/labeling/labels/delete | Deletes labels of labeling project in Machine Learning Services Workspace(s) |
+> | Microsoft.MachineLearningServices/workspaces/labeling/labels/update/action | Updates labels of labeling project in Machine Learning Services Workspace(s) |
+> | Microsoft.MachineLearningServices/workspaces/labeling/labels/approve_unapprove/action | Approve or unapprove labels of labeling project in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/labeling/projects/read | Gets labeling project in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/labeling/projects/write | Creates or updates labeling project in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/labeling/projects/delete | Deletes labeling project in Machine Learning Services Workspace(s) |
Azure service: core
### Microsoft.AzureStack
-Azure service: core
+Azure service: [Azure Stack](/azure-stack/)
> [!div class="mx-tableFixed"] > | Action | Description |
Azure service: core
> | Microsoft.AzureStack/registrations/products/getProduct/action | Retrieves Azure Stack Marketplace product | > | Microsoft.AzureStack/registrations/products/uploadProductLog/action | Record Azure Stack Marketplace product operation status and timestamp |
+### Microsoft.AzureStackHCI
+
+Azure service: [Azure Stack HCI](/azure-stack/hci/)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.AzureStackHCI/Register/Action | Registers the subscription for the Azure Stack HCI resource provider and enables the creation of Azure Stack HCI resources. |
+> | Microsoft.AzureStackHCI/Unregister/Action | Unregisters the subscription for the Azure Stack HCI resource provider. |
+> | Microsoft.AzureStackHCI/Clusters/Read | Gets clusters |
+> | Microsoft.AzureStackHCI/Clusters/Write | Creates or updates a cluster |
+> | Microsoft.AzureStackHCI/Clusters/Delete | Deletes cluster resource |
+> | Microsoft.AzureStackHCI/Clusters/CreateClusterIdentity/Action | Create cluster identity |
+> | Microsoft.AzureStackHCI/Clusters/UploadCertificate/Action | Upload cluster certificate |
+> | Microsoft.AzureStackHCI/Clusters/ArcSettings/Read | Gets arc resource of HCI cluster |
+> | Microsoft.AzureStackHCI/Clusters/ArcSettings/Write | Create or updates arc resource of HCI cluster |
+> | Microsoft.AzureStackHCI/Clusters/ArcSettings/Delete | Delete arc resource of HCI cluster |
+> | Microsoft.AzureStackHCI/Clusters/ArcSettings/GeneratePassword/Action | Generate password for Arc settings identity |
+> | Microsoft.AzureStackHCI/Clusters/ArcSettings/CreateArcIdentity/Action | Create Arc settings identity |
+> | Microsoft.AzureStackHCI/Clusters/ArcSettings/Extensions/Read | Gets extension resource of HCI cluster |
+> | Microsoft.AzureStackHCI/Clusters/ArcSettings/Extensions/Write | Create or update extension resource of HCI cluster |
+> | Microsoft.AzureStackHCI/Clusters/ArcSettings/Extensions/Delete | Delete extension resources of HCI cluster |
+> | Microsoft.AzureStackHCI/GalleryImages/Delete | Deletes gallery images resource |
+> | Microsoft.AzureStackHCI/GalleryImages/Write | Creates/Updates gallery images resource |
+> | Microsoft.AzureStackHCI/GalleryImages/Read | Gets/Lists gallery images resource |
+> | Microsoft.AzureStackHCI/NetworkInterfaces/Delete | Deletes network interfaces resource |
+> | Microsoft.AzureStackHCI/NetworkInterfaces/Write | Creates/Updates network interfaces resource |
+> | Microsoft.AzureStackHCI/NetworkInterfaces/Read | Gets/Lists network interfaces resource |
+> | Microsoft.AzureStackHCI/Operations/Read | Gets operations |
+> | Microsoft.AzureStackHCI/VirtualHardDisks/Delete | Deletes virtual hard disk resource |
+> | Microsoft.AzureStackHCI/VirtualHardDisks/Write | Creates/Updates virtual hard disk resource |
+> | Microsoft.AzureStackHCI/VirtualHardDisks/Read | Gets/Lists virtual hard disk resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Restart/Action | Restarts virtual machine resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Start/Action | Starts virtual machine resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Stop/Action | Stops virtual machine resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Delete | Deletes virtual machine resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Write | Creates/Updates virtual machine resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Read | Gets/Lists virtual machine resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Extensions/Read | Gets/Lists virtual machine extensions resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Extensions/Write | Creates/Updates virtual machine extensions resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/Extensions/Delete | Deletes virtual machine extensions resource |
+> | Microsoft.AzureStackHCI/VirtualMachines/HybridIdentityMetadata/Read | Gets/Lists virtual machine hybrid identity metadata proxy resource |
+> | Microsoft.AzureStackHCI/VirtualNetworks/Delete | Deletes virtual networks resource |
+> | Microsoft.AzureStackHCI/VirtualNetworks/Write | Creates/Updates virtual networks resource |
+> | Microsoft.AzureStackHCI/VirtualNetworks/Read | Gets/Lists virtual networks resource |
+> | **DataAction** | **Description** |
+> | Microsoft.AzureStackHCI/Clusters/WACloginAsAdmin/Action | Manage OS of HCI resource via Windows Admin Center as an administrator |
+ ### Microsoft.DataBoxEdge Azure service: [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md)
Azure service: [Microsoft Sentinel](../sentinel/index.yml)
> | Microsoft.SecurityInsights/ContentPackages/read | Read available Content Packages. | > | Microsoft.SecurityInsights/ContentPackages/write | Install or uninstall Content Packages. | > | Microsoft.SecurityInsights/ContentTemplates/read | Read installed Content Templates. |
+> | Microsoft.SecurityInsights/ContentTemplates/delete | Delete installed Content Templates. |
> | Microsoft.SecurityInsights/dataConnectors/read | Gets the data connectors | > | Microsoft.SecurityInsights/dataConnectors/write | Updates a data connector | > | Microsoft.SecurityInsights/dataConnectors/delete | Deletes a data connector |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AUIEventsOperational/read | Read data from the AUIEventsOperational table | > | Microsoft.OperationalInsights/workspaces/query/AutoscaleEvaluationsLog/read | Read data from the AutoscaleEvaluationsLog table | > | Microsoft.OperationalInsights/workspaces/query/AutoscaleScaleActionsLog/read | Read data from the AutoscaleScaleActionsLog table |
+> | Microsoft.OperationalInsights/workspaces/query/AVSSyslog/read | Read data from the AVSSyslog table |
> | Microsoft.OperationalInsights/workspaces/query/AWSCloudTrail/read | Read data from the AWSCloudTrail table | > | Microsoft.OperationalInsights/workspaces/query/AWSGuardDuty/read | Read data from the AWSGuardDuty table | > | Microsoft.OperationalInsights/workspaces/query/AWSVPCFlow/read | Read data from the AWSVPCFlow table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/ConfidentialWatchlist/read | Read data from the ConfidentialWatchlist table | > | Microsoft.OperationalInsights/workspaces/query/ConfigurationChange/read | Read data from the ConfigurationChange table | > | Microsoft.OperationalInsights/workspaces/query/ConfigurationData/read | Read data from the ConfigurationData table |
+> | Microsoft.OperationalInsights/workspaces/query/ContainerAppConsoleLogs/read | Read data from the ContainerAppConsoleLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/ContainerAppSystemLogs/read | Read data from the ContainerAppSystemLogs table |
> | Microsoft.OperationalInsights/workspaces/query/ContainerImageInventory/read | Read data from the ContainerImageInventory table | > | Microsoft.OperationalInsights/workspaces/query/ContainerInventory/read | Read data from the ContainerInventory table | > | Microsoft.OperationalInsights/workspaces/query/ContainerLog/read | Read data from the ContainerLog table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/OLPSupplyChainEvents/read | Read data from the OLPSupplyChainEvents table | > | Microsoft.OperationalInsights/workspaces/query/Operation/read | Read data from the Operation table | > | Microsoft.OperationalInsights/workspaces/query/Perf/read | Read data from the Perf table |
+> | Microsoft.OperationalInsights/workspaces/query/PFTitleAuditLogs/read | Read data from the PFTitleAuditLogs table |
> | Microsoft.OperationalInsights/workspaces/query/PowerBIActivity/read | Read data from the PowerBIActivity table | > | Microsoft.OperationalInsights/workspaces/query/PowerBIAuditTenant/read | Read data from the PowerBIAuditTenant table | > | Microsoft.OperationalInsights/workspaces/query/PowerBIDatasetsTenant/read | Read data from the PowerBIDatasetsTenant table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/SynapseIntegrationActivityRuns/read | Read data from the SynapseIntegrationActivityRuns table | > | Microsoft.OperationalInsights/workspaces/query/SynapseIntegrationPipelineRuns/read | Read data from the SynapseIntegrationPipelineRuns table | > | Microsoft.OperationalInsights/workspaces/query/SynapseIntegrationTriggerRuns/read | Read data from the SynapseIntegrationTriggerRuns table |
+> | Microsoft.OperationalInsights/workspaces/query/SynapseLinkEvent/read | Read data from the SynapseLinkEvent table |
> | Microsoft.OperationalInsights/workspaces/query/SynapseRBACEvents/read | Read data from the SynapseRBACEvents table | > | Microsoft.OperationalInsights/workspaces/query/SynapseRbacOperations/read | Read data from the SynapseRbacOperations table | > | Microsoft.OperationalInsights/workspaces/query/SynapseScopePoolScopeJobsEnded/read | Read data from the SynapseScopePoolScopeJobsEnded table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/UCClientReadinessStatus/read | Read data from the UCClientReadinessStatus table | > | Microsoft.OperationalInsights/workspaces/query/UCClientUpdateStatus/read | Read data from the UCClientUpdateStatus table | > | Microsoft.OperationalInsights/workspaces/query/UCDeviceAlert/read | Read data from the UCDeviceAlert table |
+> | Microsoft.OperationalInsights/workspaces/query/UCDOAggregatedStatus/read | Read data from the UCDOAggregatedStatus table |
+> | Microsoft.OperationalInsights/workspaces/query/UCDOStatus/read | Read data from the UCDOStatus table |
> | Microsoft.OperationalInsights/workspaces/query/UCServiceUpdateStatus/read | Read data from the UCServiceUpdateStatus table | > | Microsoft.OperationalInsights/workspaces/query/UCUpdateAlert/read | Read data from the UCUpdateAlert table | > | Microsoft.OperationalInsights/workspaces/query/Update/read | Read data from the Update table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/UserAccessAnalytics/read | Read data from the UserAccessAnalytics table | > | Microsoft.OperationalInsights/workspaces/query/UserPeerAnalytics/read | Read data from the UserPeerAnalytics table | > | Microsoft.OperationalInsights/workspaces/query/VIAudit/read | Read data from the VIAudit table |
+> | Microsoft.OperationalInsights/workspaces/query/VIIndexing/read | Read data from the VIIndexing table |
> | Microsoft.OperationalInsights/workspaces/query/VMBoundPort/read | Read data from the VMBoundPort table | > | Microsoft.OperationalInsights/workspaces/query/VMComputer/read | Read data from the VMComputer table | > | Microsoft.OperationalInsights/workspaces/query/VMConnection/read | Read data from the VMConnection table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/WUDOStatus/read | Read data from the WUDOStatus table | > | Microsoft.OperationalInsights/workspaces/query/WVDAgentHealthStatus/read | Read data from the WVDAgentHealthStatus table | > | Microsoft.OperationalInsights/workspaces/query/WVDCheckpoints/read | Read data from the WVDCheckpoints table |
+> | Microsoft.OperationalInsights/workspaces/query/WVDConnectionGraphicsDataPreview/read | Read data from the WVDConnectionGraphicsDataPreview table |
> | Microsoft.OperationalInsights/workspaces/query/WVDConnectionNetworkData/read | Read data from the WVDConnectionNetworkData table | > | Microsoft.OperationalInsights/workspaces/query/WVDConnections/read | Read data from the WVDConnections table | > | Microsoft.OperationalInsights/workspaces/query/WVDErrors/read | Read data from the WVDErrors table |
Azure service: Microsoft.DataProtection
> | Microsoft.DataProtection/backupVaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' | > | Microsoft.DataProtection/backupVaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' | > | Microsoft.DataProtection/backupVaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
+> | Microsoft.DataProtection/backupVaults/deletedBackupInstances/undelete/action | Perform undelete of soft-deleted Backup Instance. Backup Instance moves from SoftDeleted to ProtectionStopped state. |
+> | Microsoft.DataProtection/backupVaults/deletedBackupInstances/read | Get soft-deleted Backup Instance in a Backup Vault by name |
+> | Microsoft.DataProtection/backupVaults/deletedBackupInstances/read | List soft-deleted Backup Instances in a Backup Vault. |
> | Microsoft.DataProtection/backupVaults/operationResults/read | Gets Operation Result of a Patch Operation for a Backup Vault | > | Microsoft.DataProtection/backupVaults/operationStatus/read | Returns Backup Operation Status for Backup Vault. | > | Microsoft.DataProtection/locations/getBackupStatus/action | Check Backup Status for Recovery Services Vaults |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Action | Description | > | | | > | Microsoft.RecoveryServices/register/action | Registers subscription for given Resource Provider |
+> | Microsoft.RecoveryServices/unregister/action | Unregisters subscription for given Resource Provider |
> | Microsoft.RecoveryServices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. | > | Microsoft.RecoveryServices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. | > | Microsoft.RecoveryServices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/extendedInformation/delete | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
+> | Microsoft.RecoveryServices/Vaults/locations/capabilities/action | List capabilities at a given location. |
> | Microsoft.RecoveryServices/Vaults/monitoringAlerts/read | Gets the alerts for the Recovery services vault. | > | Microsoft.RecoveryServices/Vaults/monitoringAlerts/write | Resolves the alert. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/read | Gets the Recovery services vault notification configuration. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/write | Configures e-mail notifications to Recovery services vault. |
+> | Microsoft.RecoveryServices/Vaults/operationResults/read | The Get Operation Results operation can be used get the operation status and result for the asynchronously submitted operation |
+> | Microsoft.RecoveryServices/Vaults/operationStatus/read | Gets Operation Status for a given Operation |
> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. | > | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers | > | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers | > | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. | > | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/read | Returns all the private endpoint connections. |
> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
+> | Microsoft.RecoveryServices/Vaults/privateLinkResources/read | Returns all the private link resources. |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/read | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/write | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/logDefinitions/read | Azure Backup Logs |
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
Title: Get started with Azure Service Bus queues (.NET)
-description: This tutorial shows you how to send messages to and receive messages from Azure Service Bus queues using the .NET programming language.
+ Title: Quickstart - Use Azure Service Bus queues from .NET app
+description: This quickstart shows you how to send messages to and receive messages from Azure Service Bus queues using the .NET programming language.
dotnet Previously updated : 03/23/2022 Last updated : 09/21/2022 ms.devlang: csharp
-# Get started with Azure Service Bus queues (.NET)
+# Quickstart: Send and receive messages from an Azure Service Bus queue (.NET)
-> [!div class="op_single_selector" title1="Select the programming language:"]
-> * [C#](service-bus-dotnet-get-started-with-queues.md)
-> * [Java](service-bus-java-how-to-use-queues.md)
-> * [JavaScript](service-bus-nodejs-how-to-use-queues.md)
-> * [Python](service-bus-python-how-to-use-queues.md)
--
-In this quickstart, you'll do the following steps:
+In this quickstart, you will do the following steps:
1. Create a Service Bus namespace, using the Azure portal. 2. Create a Service Bus queue, using the Azure portal.
In this quickstart, you'll do the following steps:
4. Write a .NET Core console application to receive those messages from the queue. > [!NOTE]
-> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For an overview of the .NET client library, see [Azure Service Bus client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/README.md). For more samples, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
+> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For an overview of the .NET client library, see [Azure Service Bus client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/README.md). For more samples, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
## Prerequisites
-If you're new to the service, see [Service Bus overview](service-bus-messaging-overview.md) before you do this quickstart.
+
+If you're new to the service, see [Service Bus overview](service-bus-messaging-overview.md) before you do this quickstart.
- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/). - **Microsoft Visual Studio 2019**. The Azure Service Bus client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax won't be available. To make use of the full syntax, we recommend that you compile with the .NET Core SDK 3.0 or higher and language version set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects. - [!INCLUDE [service-bus-create-namespace-portal](./includes/service-bus-create-namespace-portal.md)] [!INCLUDE [service-bus-create-queue-portal](./includes/service-bus-create-queue-portal.md)] - ## Send messages to the queue
-This section shows you how to create a .NET Core console application to send messages to a Service Bus queue.
-> [!NOTE]
-> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
+This section shows you how to create a .NET Core console application to send messages to a Service Bus queue.
+> [!NOTE]
+> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
### Create a console application
-1. Start Visual Studio 2019.
-1. Select **Create a new project**.
-1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**.
+1. Start Visual Studio 2019.
+1. Select **Create a new project**.
+1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**.
1. Select **C#** for the programming language.
- 1. Select **Console** for the type of the application.
- 1. Select **Console Application** from the results list.
- 1. Then, select **Next**.
+ 1. Select **Console** for the type of the application.
+ 1. Select **Console Application** from the results list.
+ 1. Then, select **Next**.
:::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/new-send-project.png" alt-text="Image showing the Create a new project dialog box with C# and Console selected":::
-1. Enter **QueueSender** for the project name, **ServiceBusQueueQuickStart** for the solution name, and then select **Next**.
+1. Enter **QueueSender** for the project name, **ServiceBusQueueQuickStart** for the solution name, and then select **Next**.
:::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/project-solution-names.png" alt-text="Image showing the solution and project names in the Configure your new project dialog box ":::
-1. On the **Additional information** page, select **Create** to create the solution and the project.
+1. On the **Additional information** page, select **Create** to create the solution and the project.
### Add the Service Bus NuGet package
-1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package: ```cmd
This section shows you how to create a .NET Core console application to send mes
using System.Threading.Tasks; using Azure.Messaging.ServiceBus; ```
-2. Within the `Program` class, declare the following properties, just before the `Main` method.
+
+2. Within the `Program` class, declare the following properties, just before the `Main` method.
Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
This section shows you how to create a .NET Core console application to send mes
private const int numOfMessages = 3; ```+ 3. Replace code in the `Main` method with the following code. See code comments for details about the code. Here are the important steps from the code.
- 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the primary connection string to the namespace.
- 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue.
+ 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the primary connection string to the namespace.
+ 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue.
1. Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method.
- 1. Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage).
+ 1. Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage).
1. Sends the batch of messages to the Service Bus queue using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method. ```csharp
This section shows you how to create a .NET Core console application to send mes
Console.WriteLine("Press any key to end the application"); Console.ReadKey(); }
- ```
-1. Here's what your Program.cs file should look like:
-
+ ```
+
+4. Here's what your Program.cs file should look like:
+ ```csharp using System; using System.Threading.Tasks;
This section shows you how to create a .NET Core console application to send mes
} } }
- ```
-1. Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
-1. Build the project, and ensure that there are no errors.
-1. Run the program and wait for the confirmation message.
-
+ ```
+
+5. Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
+6. Build the project, and ensure that there are no errors.
+7. Run the program and wait for the confirmation message.
+ ```bash A batch of 3 messages has been published to the queue ```
-1. In the Azure portal, follow these steps:
- 1. Navigate to your Service Bus namespace.
- 1. On the **Overview** page, select the queue in the bottom-middle pane.
-
+
+8. In the Azure portal, follow these steps:
+ 1. Navigate to your Service Bus namespace.
+ 1. On the **Overview** page, select the queue in the bottom-middle pane.
+ :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/select-queue.png" alt-text="Image showing the Service Bus Namespace page in the Azure portal with the queue selected." lightbox="./media/service-bus-dotnet-get-started-with-queues/select-queue.png"::: 1. Notice the values in the **Essentials** section.
This section shows you how to create a .NET Core console application to send mes
Notice the following values: - The **Active** message count value for the queue is now **3**. Each time you run this sender app without retrieving the messages, this value increases by 3. - The **current size** of the queue increments each time the app adds messages to the queue.
- - In the **Messages** chart in the bottom **Metrics** section, you can see that there are three incoming messages for the queue.
-
+ - In the **Messages** chart in the bottom **Metrics** section, you can see that there are three incoming messages for the queue.
## Receive messages from the queue
-In this section, you'll create a .NET Core console application that receives messages from the queue.
-> [!NOTE]
-> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
+In this section, you'll create a .NET Core console application that receives messages from the queue.
+> [!NOTE]
+> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
### Create a project for the receiver
-1. In the Solution Explorer window, right-click the **ServiceBusQueueQuickStart** solution, point to **Add**, and select **New Project**.
-1. Select **Console application**, and select **Next**.
-1. Enter **QueueReceiver** for the **Project name**, and select **Create**.
-1. In the **Solution Explorer** window, right-click **QueueReceiver**, and select **Set as a Startup Project**.
+1. In the Solution Explorer window, right-click the **ServiceBusQueueQuickStart** solution, point to **Add**, and select **New Project**.
+1. Select **Console application**, and select **Next**.
+1. Enter **QueueReceiver** for the **Project name**, and select **Create**.
+1. In the **Solution Explorer** window, right-click **QueueReceiver**, and select **Set as a Startup Project**.
-### Add the Service Bus NuGet package
+### Add the Service Bus NuGet package to the Receiver project
-1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
1. In the **Package Manager Console** window, confirm that **QueueReceiver** is selected for the **Default project**. If not, use the drop-down list to select **QueueReceiver**. :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console":::
In this section, you'll create a .NET Core console application that receives mes
``` ### Add the code to receive messages from the queue+ In this section, you'll add code to retrieve messages from the queue. 1. In **Program.cs**, add the following `using` statements at the top of the namespace definition, before the class declaration.
In this section, you'll add code to retrieve messages from the queue.
using System.Threading.Tasks; using Azure.Messaging.ServiceBus; ```
-2. Within the `Program` class, declare the following properties, just before the `Main` method.
+
+2. Within the `Program` class, declare the following properties, just before the `Main` method.
Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
In this section, you'll add code to retrieve messages from the queue.
// the processor that reads and processes messages from the queue static ServiceBusProcessor processor; ```
-3. Add the following methods to the `Program` class to handle received messages and any errors.
+
+3. Add the following methods to the `Program` class to handle received messages and any errors.
```csharp // handle received messages
In this section, you'll add code to retrieve messages from the queue.
return Task.CompletedTask; } ```
-4. Replace code in the `Main` method with the following code. See code comments for details about the code. Here are the important steps from the code.
- 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the primary connection string to the namespace.
- 1. Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
- 1. Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- 1. Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- 1. When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+
+4. Replace code in the `Main` method with the following code. See code comments for details about the code. Here are the important steps from the code.
+ 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the primary connection string to the namespace.
+ 1. Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ 1. Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+    1. Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) method on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+    1. When the user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) method on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
For more information, see code comments.
In this section, you'll add code to retrieve messages from the queue.
} } ```
-4. Here's what your `Program.cs` should look like:
+
+5. Here's what your `Program.cs` should look like:
```csharp using System;
In this section, you'll add code to retrieve messages from the queue.
} } ```
-1. Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
-1. Build the project, and ensure that there are no errors.
-1. Run the receiver application. You should see the received messages. Press any key to stop the receiver and the application.
+
+6. Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
+7. Build the project, and ensure that there are no errors.
+8. Run the receiver application. You should see the received messages. Press any key to stop the receiver and the application.
```console Wait for a minute and then press any key to end the processing
In this section, you'll add code to retrieve messages from the queue.
Stopping the receiver... Stopped receiving messages ```
-1. Check the portal again. Wait for a few minutes and refresh the page if you don't see `0` for **Active** messages.
+
+9. Check the portal again. Wait for a few minutes and refresh the page if you don't see `0` for **Active** messages.
- The **Active** message count and **Current size** values are now **0**.
- - In the **Messages** chart in the bottom **Metrics** section, you can see that there are three incoming messages and three outgoing messages for the queue.
-
- :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/queue-messages-size-final.png" alt-text="Active messages and size after receive" lightbox="./media/service-bus-dotnet-get-started-with-queues/queue-messages-size-final.png":::
+ - In the **Messages** chart in the bottom **Metrics** section, you can see that there are three incoming messages and three outgoing messages for the queue.
+ :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/queue-messages-size-final.png" alt-text="Screenshot showing active messages and size after receive." lightbox="./media/service-bus-dotnet-get-started-with-queues/queue-messages-size-final.png":::
+
+## Clean up resources
+
+Navigate to your Service Bus namespace in the Azure portal, and select **Delete** to delete the namespace and the queue in it.
+
+## See also
-## Next steps
See the following documentation and samples: - [Azure Service Bus client library for .NET - Readme](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus) - [Samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples) - [.NET API reference](/dotnet/api/azure.messaging.servicebus)+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with Azure Service Bus topics and subscriptions (.NET)](service-bus-dotnet-how-to-use-topics-subscriptions.md)
service-connector How To Integrate Cosmos Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md
+
+ Title: Integrate the Azure Cosmos DB Cassandra API with Service Connector
+description: Integrate the Azure Cosmos DB Cassandra API into your application with Service Connector
++++ Last updated : 09/19/2022+++
+# Integrate the Azure Cosmos DB API for Cassandra with Service Connector
+
+This page shows the supported authentication types and client types for the Azure Cosmos DB Cassandra API using Service Connector. You might still be able to connect to the Azure Cosmos DB API for Cassandra in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+
+## Supported compute services
+
+- Azure App Service
+- Azure Container Apps
+- Azure Spring Apps
+
+## Supported authentication types and client types
+
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+++
+## Default environment variable names or application properties
+
+Use the connection details below to connect your compute services to the Cosmos DB Cassandra API. For each example below, replace the placeholder texts `<Azure-Cosmos-DB-account>`, `<keyspace>`, `<username>`, `<password>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`,`<client-secret>`, `<tenant-id>`, and `<Azure-region>` with your own information.
+
+### Azure App Service and Azure Container Apps
+
+#### Secret / Connection string
+
+| Default environment variable name | Description | Example value |
+|--|--||
+| AZURE_COSMOS_CONTACTPOINT | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| AZURE_COSMOS_PORT | Cassandra connection port | 10350 |
+| AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` |
+| AZURE_COSMOS_USERNAME | Cassandra username | `<username>` |
+| AZURE_COSMOS_PASSWORD | Cassandra password | `<password>` |
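As an illustration only, a .NET app could read these variables and open a Cassandra session as in the sketch below. The `CassandraCSharpDriver` package (`Cassandra` namespace) and the bare-bones `SSLOptions` are assumptions, not something this article prescribes.

```csharp
using System;
using Cassandra; // CassandraCSharpDriver NuGet package (assumed)

// Connection settings injected by Service Connector.
string contactPoint = Environment.GetEnvironmentVariable("AZURE_COSMOS_CONTACTPOINT");
int port = int.Parse(Environment.GetEnvironmentVariable("AZURE_COSMOS_PORT"));
string keyspace = Environment.GetEnvironmentVariable("AZURE_COSMOS_KEYSPACE");
string username = Environment.GetEnvironmentVariable("AZURE_COSMOS_USERNAME");
string password = Environment.GetEnvironmentVariable("AZURE_COSMOS_PASSWORD");

// The Cassandra API endpoint requires SSL.
Cluster cluster = Cluster.Builder()
    .AddContactPoint(contactPoint)
    .WithPort(port)
    .WithCredentials(username, password)
    .WithSSL(new SSLOptions())
    .Build();

using ISession session = cluster.Connect(keyspace);
Console.WriteLine($"Connected to keyspace {session.Keyspace}");
```

The same pattern applies to the managed identity and service principal tables that follow, except that the password is replaced by an account key retrieved at runtime.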
+
+#### System-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
+| AZURE_COSMOS_CONTACTPOINT | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| AZURE_COSMOS_PORT | Cassandra connection port | 10350 |
+| AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` |
+| AZURE_COSMOS_USERNAME | Cassandra username | `<username>` |
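With a system-assigned managed identity, no password is injected; the app is expected to acquire an Azure AD token for `AZURE_COSMOS_SCOPE` and call `AZURE_COSMOS_LISTKEYURL` to fetch the account key itself. The sketch below shows one possible way to do that; the `Azure.Identity` package and the `primaryMasterKey` field name in the listKeys response are assumptions, not guarantees from this article.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using Azure.Core;
using Azure.Identity;

// URL and scope injected by Service Connector.
string listKeyUrl = Environment.GetEnvironmentVariable("AZURE_COSMOS_LISTKEYURL");
string scope = Environment.GetEnvironmentVariable("AZURE_COSMOS_SCOPE");

// Acquire an Azure AD token with the app's system-assigned managed identity.
var credential = new DefaultAzureCredential();
AccessToken token = await credential.GetTokenAsync(new TokenRequestContext(new[] { scope }));

// Call the Azure Resource Manager listKeys endpoint (POST) with the bearer token.
using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);
HttpResponseMessage response = await http.PostAsync(listKeyUrl, null);
response.EnsureSuccessStatusCode();

// "primaryMasterKey" is the assumed field name in the listKeys response body;
// the retrieved key is then used as the Cassandra password.
using JsonDocument keys = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
string accountKey = keys.RootElement.GetProperty("primaryMasterKey").GetString();
Console.WriteLine("Retrieved the account key for the Cassandra connection.");
```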
+
+#### User-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
+| AZURE_COSMOS_CONTACTPOINT | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| AZURE_COSMOS_PORT | Cassandra connection port | 10350 |
+| AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` |
+| AZURE_COSMOS_USERNAME | Cassandra username | `<username>` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+
+#### Service principal
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
+| AZURE_COSMOS_CONTACTPOINT | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| AZURE_COSMOS_PORT | Cassandra connection port | 10350 |
+| AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` |
+| AZURE_COSMOS_USERNAME | Cassandra username | `<username>` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` |
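For a service principal, the flow is the same as the managed identity sketch above, except that the credential is built from the tenant ID, client ID, and client secret. A minimal sketch, again assuming the `Azure.Identity` package:

```csharp
using System;
using Azure.Core;
using Azure.Identity;

// Service principal values injected by Service Connector.
var credential = new ClientSecretCredential(
    tenantId:     Environment.GetEnvironmentVariable("AZURE_COSMOS_TENANTID"),
    clientId:     Environment.GetEnvironmentVariable("AZURE_COSMOS_CLIENTID"),
    clientSecret: Environment.GetEnvironmentVariable("AZURE_COSMOS_CLIENTSECRET"));

// Request a token for the scope above, then call AZURE_COSMOS_LISTKEYURL
// exactly as shown in the system-assigned managed identity sketch.
AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { Environment.GetEnvironmentVariable("AZURE_COSMOS_SCOPE") }));
Console.WriteLine($"Token acquired; expires on {token.ExpiresOn}.");
```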
+
+### Azure Spring Apps
+
+| Default environment variable name | Description | Example value |
+|-|--|--|
+| spring.data.cassandra.contact_points | Cassandra API contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` |
+| spring.data.cassandra.port | Cassandra connection port | 10350 |
+| spring.data.cassandra.keyspace_name | Cassandra keyspace | `<keyspace>` |
+| spring.data.cassandra.username | Cassandra username | `<username>` |
+| spring.data.cassandra.password | Cassandra password | `<password>` |
+| spring.data.cassandra.local_datacenter | Azure Region | `<Azure-region>` |
+| spring.data.cassandra.ssl | SSL status | true |
+
+## Next steps
+
+Follow the tutorials listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
Previously updated : 08/11/2022 Last updated : 09/19/2022
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties
-Use the connection details below to connect compute services to Cosmos DB. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<mongo-db-server>`, `<subscription-ID>`, `<resource-group-name>`, `<database-server>`, `<client-secret>`, and `<tenant-id>` with your Mongo DB Admin username, password, Mongo DB server, subscription ID, resource group name, database server, client secret and tenant ID.
+Use the connection details below to connect compute services to Cosmos DB. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<Azure-Cosmos-DB-API-for-MongoDB-account>`, `<subscription-ID>`, `<resource-group-name>`, `<client-secret>`, and `<tenant-id>` with your own information.
### Azure App Service and Azure Container Apps #### Secret / Connection string
-| Default environment variable name | Description | Example value |
-|--|--|-|
-| AZURE_COSMOS_CONNECTIONSTRING | Cosmos DB MongoDB API connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_COSMOS_CONNECTIONSTRING | MongoDB API connection string | `mongodb://<mongo-db-admin-user>:<password>@<Azure-Cosmos-DB-API-for-MongoDB-account>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<Azure-Cosmos-DB-API-for-MongoDB-account>@` |
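For illustration, a .NET app could hand this value directly to the MongoDB driver. The `MongoDB.Driver` package and the `<database>` placeholder below are assumptions, not part of the connection details above.

```csharp
using System;
using MongoDB.Driver; // MongoDB.Driver NuGet package (assumed)

// Service Connector injects the full MongoDB API connection string.
string connectionString = Environment.GetEnvironmentVariable("AZURE_COSMOS_CONNECTIONSTRING");

var client = new MongoClient(connectionString);

// "<database>" is a placeholder; use your own database name.
IMongoDatabase database = client.GetDatabase("<database>");
Console.WriteLine($"Connected to database: {database.DatabaseNamespace}");
```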
#### System-assigned managed identity | Default environment variable name | Description | Example value | |--|--|--|
-| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-API-for-MongoDB-account>/listConnectionStrings?api-version=2021-04-15` |
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
-| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
#### User-assigned managed identity | Default environment variable name | Description | Example value | |--|--|--|
-| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-API-for-MongoDB-account>/listConnectionStrings?api-version=2021-04-15` |
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
-| AZURE_COSMOS_CLIENTID | Your client secret ID | `<client-ID>` |
-| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `<subscription-ID>` |
-| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
#### Service principal | Default environment variable name | Description | Example value | |--|--|--|
-| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-API-for-MongoDB-account>/listConnectionStrings?api-version=2021-04-15` |
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
-| AZURE_COSMOS_CLIENTID | Your client secret ID | `<client-ID>` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
| AZURE_COSMOS_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` |
-| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `<subscription-ID>` |
-| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
### Azure Spring Apps
service-connector How To Integrate Cosmos Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-gremlin.md
+
+ Title: Integrate the Azure Cosmos DB Gremlin API with Service Connector
+description: Integrate the Azure Cosmos DB Gremlin API into your application with Service Connector
++++ Last updated : 09/19/2022+++
+# Integrate the Azure Cosmos DB API for Gremlin with Service Connector
+
+This page shows the supported authentication types and client types for the Azure Cosmos DB Gremlin API using Service Connector. You might still be able to connect to the Azure Cosmos DB API for Gremlin in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+
+## Supported compute services
+
+- Azure App Service
+- Azure Container Apps
+- Azure Spring Apps
+
+## Supported authentication types and client types
+
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|-|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| PHP | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|-|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| PHP | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|-|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| PHP | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+++
+## Default environment variable names or application properties
+
+Use the connection details below to connect your compute services to the Cosmos DB Gremlin API. For each example below, replace the placeholder texts `<Azure-Cosmos-DB-account>`, `<database>`, `<collection or graphs>`, `<username>`, `<password>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`,`<client-secret>`, and `<tenant-id>` with your own information.
+
+### Azure App Service and Azure Container Apps
+
+#### Secret / Connection string
+
+| Default environment variable name | Description | Example value |
+|--|--||
+| AZURE_COSMOS_HOSTNAME | Your Gremlin Uniform Resource Identifier (URI) | `<Azure-Cosmos-DB-account>.gremlin.cosmos.azure.com` |
+| AZURE_COSMOS_PORT | Connection port | 443 |
+| AZURE_COSMOS_USERNAME | Your username | `/dbs/<database>/colls/<collection or graphs>` |
+| AZURE_COSMOS_PASSWORD | Your password | `<password>` |
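As one possible usage, sketched with the open-source `Gremlin.Net` package (an assumption; the article doesn't mandate a specific driver):

```csharp
using System;
using Gremlin.Net.Driver;                 // Gremlin.Net NuGet package (assumed)
using Gremlin.Net.Structure.IO.GraphSON;

// Connection settings injected by Service Connector.
string hostname = Environment.GetEnvironmentVariable("AZURE_COSMOS_HOSTNAME");
int port = int.Parse(Environment.GetEnvironmentVariable("AZURE_COSMOS_PORT"));
string username = Environment.GetEnvironmentVariable("AZURE_COSMOS_USERNAME");
string password = Environment.GetEnvironmentVariable("AZURE_COSMOS_PASSWORD");

// The Cosmos DB Gremlin endpoint expects SSL and GraphSON 2.
var server = new GremlinServer(hostname, port, enableSsl: true, username: username, password: password);
using var client = new GremlinClient(server, new GraphSON2Reader(), new GraphSON2Writer(),
    GremlinClient.GraphSON2MimeType);

var resultSet = await client.SubmitAsync<dynamic>("g.V().limit(1)");
Console.WriteLine($"Query returned {resultSet.Count} result(s).");
```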
+
+#### System-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|-|
+| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
+| AZURE_COSMOS_HOSTNAME | Your Gremlin Uniform Resource Identifier (URI) | `<Azure-Cosmos-DB-account>.gremlin.cosmos.azure.com` |
+| AZURE_COSMOS_PORT | Connection port | 443 |
+| AZURE_COSMOS_USERNAME | Your username | `/dbs/<database>/colls/<collection or graphs>` |
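
With a system-assigned managed identity, an app typically exchanges a management-plane token for the account key and then connects as shown earlier. A minimal sketch, assuming the `azure-identity` and `requests` packages and that the `listKeys` response contains a `primaryMasterKey` field:

```python
import os
import requests
from azure.identity import DefaultAzureCredential

# Acquire a management-plane token with the system-assigned identity.
credential = DefaultAzureCredential()
token = credential.get_token(os.environ["AZURE_COSMOS_SCOPE"]).token

# Exchange the token for the account keys at the URL Service Connector provides.
response = requests.post(
    os.environ["AZURE_COSMOS_LISTKEYURL"],
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
account_key = response.json()["primaryMasterKey"]

# account_key can now serve as the Gremlin password, together with
# AZURE_COSMOS_HOSTNAME, AZURE_COSMOS_PORT, and AZURE_COSMOS_USERNAME.
```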
+
+#### User-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|-|
+| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
+| AZURE_COSMOS_HOSTNAME | Your Gremlin Uniform Resource Identifier (URI) | `<Azure-Cosmos-DB-account>.gremlin.cosmos.azure.com` |
+| AZURE_COSMOS_PORT | Connection port | 443 |
+| AZURE_COSMOS_USERNAME | Your username | `/dbs/<database>/colls/<collection or graphs>` |
+| AZURE_CLIENTID | Your client ID | `<client-ID>` |
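
The flow is the same as for a system-assigned identity, except the credential is pinned to the client ID above. A brief sketch, assuming `azure-identity`:

```python
import os
from azure.identity import ManagedIdentityCredential

# Pin the credential to the user-assigned identity's client ID.
credential = ManagedIdentityCredential(client_id=os.environ["AZURE_CLIENTID"])
token = credential.get_token(os.environ["AZURE_COSMOS_SCOPE"]).token
# Use the token against AZURE_COSMOS_LISTKEYURL exactly as in the previous sketch.
```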
+
+#### Service principal
+
+| Default environment variable name | Description | Example value |
+|--|--|-|
+| AZURE_COSMOS_LISTKEYURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<Azure-Cosmos-DB-account>/listKeys?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-account>.documents.azure.com:443/` |
+| AZURE_COSMOS_HOSTNAME | Your Gremlin Uniform Resource Identifier (URI) | `<Azure-Cosmos-DB-account>.gremlin.cosmos.azure.com` |
+| AZURE_COSMOS_PORT | Connection port | 443 |
+| AZURE_COSMOS_USERNAME | Your username | `/dbs/<database>/colls/<collection or graphs>` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` |
+
+## Next steps
+
+Follow the tutorials listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector How To Integrate Cosmos Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-sql.md
+
+ Title: Integrate the Azure Cosmos DB SQL API with Service Connector
+description: Integrate the Azure Cosmos DB SQL API into your application with Service Connector
++++ Last updated : 09/19/2022+++
+# Integrate the Azure Cosmos DB API for SQL with Service Connector
+
+This page shows the supported authentication types and client types for the Azure Cosmos DB SQL API using Service Connector. You might still be able to connect to the Azure Cosmos DB SQL API in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about the [Service Connector environment variable naming conventions](concept-service-connector-internals.md).
+
+## Supported compute services
+
+- Azure App Service
+- Azure Container Apps
+- Azure Spring Apps
+
+## Supported authentication types and client types
+
+Supported authentication types and client types for Azure App Service, Azure Container Apps, and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+++
+## Default environment variable names or application properties
+
+Use the connection details below to connect your compute services to the Cosmos DB SQL API. For each example below, replace the placeholder texts `<database-server>`, `<database-name>`,`<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<SQL-server>`, `<client-secret>`, `<tenant-id>`, and `<access-key>` with your own information.
+
+### Azure App Service and Azure Container Apps
+
+#### Secret / Connection string
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_COSMOS_CONNECTIONSTRING | Cosmos DB SQL API connection string | `AccountEndpoint=https://<database-server>.documents.azure.com:443/;AccountKey=<account-key>` |
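
As a quick illustration, assuming the `azure-cosmos` Python SDK, the injected connection string can be consumed directly; the database and container names below are placeholders:

```python
import os
from azure.cosmos import CosmosClient

# Build the client from the connection string Service Connector injects.
cosmos_client = CosmosClient.from_connection_string(
    os.environ["AZURE_COSMOS_CONNECTIONSTRING"]
)

# "<database>" and "<container>" are placeholders for your own resources.
container = cosmos_client.get_database_client("<database>").get_container_client("<container>")
for item in container.query_items(
    "SELECT TOP 1 * FROM c", enable_cross_partition_query=True
):
    print(item)
```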
+
+#### System-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+
+#### User-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+
+#### Service principal
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
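
One way to use these values is to authenticate as the service principal and fetch the connection string from the management endpoint. A sketch, assuming `azure-identity` and `requests`; the `connectionStrings` field shown is the shape returned by the Cosmos DB `listConnectionStrings` operation:

```python
import os
import requests
from azure.identity import ClientSecretCredential

# Authenticate as the service principal that Service Connector created.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_COSMOS_TENANTID"],
    client_id=os.environ["AZURE_COSMOS_CLIENTID"],
    client_secret=os.environ["AZURE_COSMOS_CLIENTSECRET"],
)
token = credential.get_token(os.environ["AZURE_COSMOS_SCOPE"]).token

# Retrieve the account's connection strings from the management endpoint.
response = requests.post(
    os.environ["AZURE_COSMOS_LISTCONNECTIONSTRINGURL"],
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
connection_string = response.json()["connectionStrings"][0]["connectionString"]
```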
+
+### Azure Spring Apps
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| azure.cosmos.key | The access key for your database | `<access-key>` |
+| azure.cosmos.database | Your database | `<database-name>` |
+| azure.cosmos.uri | Your database URI | `https://<database-server>.documents.azure.com:443/` |
+
+## Next steps
+
+Follow the tutorials listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector How To Integrate Cosmos Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-table.md
+
+ Title: Integrate the Azure Cosmos DB Table API with Service Connector
+description: Integrate the Azure Cosmos DB Table API into your application with Service Connector
++++ Last updated : 08/11/2022+++
+# Integrate the Azure Cosmos DB Table API with Service Connector
+
+This page shows the supported authentication types and client types for the Azure Cosmos DB Table API using Service Connector. You might still be able to connect to the Azure Cosmos DB Table API in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+
+## Supported compute services
+
+- Azure App Service
+- Azure Container Apps
+- Azure Spring Apps
+
+## Supported authentication types and client types
+
+Supported authentication types and client types for Azure App Service, Azure Container Apps, and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+++
+## Default environment variable names or application properties
+
+Use the connection details below to connect your compute services to the Cosmos DB Table API. For each example below, replace the placeholder texts `<account-name>`, `<table-name>`, `<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<client-secret>`, and `<tenant-ID>` with your own information.
+
+### Azure App Service and Azure Container Apps
+
+#### Secret / Connection string
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_COSMOS_CONNECTIONSTRING | Cosmos DB Table API connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;TableEndpoint=https://<account-name>.table.cosmos.azure.com:443/;` |
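
As an illustration only, assuming the `azure-data-tables` Python package, the connection string can be consumed like this:

```python
import os
from azure.data.tables import TableServiceClient

# Create a Table API client from the injected connection string.
service = TableServiceClient.from_connection_string(
    os.environ["AZURE_COSMOS_CONNECTIONSTRING"]
)

# List the tables in the account as a quick connectivity check.
for table in service.list_tables():
    print(table.name)
```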
+
+#### System-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<account-name>.documents.azure.com:443/` |
+
+#### User-assigned managed identity
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<account-name>.documents.azure.com:443/` |
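
A hedged sketch of the user-assigned managed identity flow, assuming `azure-identity`, `requests`, and `azure-data-tables`; the response-parsing logic is an assumption about the `listConnectionStrings` payload, so adjust it to match what your account returns:

```python
import os
import requests
from azure.identity import ManagedIdentityCredential
from azure.data.tables import TableServiceClient

# Token from the user-assigned identity referenced by AZURE_COSMOS_CLIENTID.
credential = ManagedIdentityCredential(client_id=os.environ["AZURE_COSMOS_CLIENTID"])
token = credential.get_token(os.environ["AZURE_COSMOS_SCOPE"]).token

# Fetch the connection strings, then build the data-plane client.
response = requests.post(
    os.environ["AZURE_COSMOS_LISTCONNECTIONSTRINGURL"],
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
entries = response.json()["connectionStrings"]

# Prefer the entry whose description mentions the Table API, if present.
entry = next((e for e in entries if "Table" in e.get("description", "")), entries[0])
service = TableServiceClient.from_connection_string(entry["connectionString"])
```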
+
+#### Service principal
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>/listConnectionStrings?api-version=2021-04-15` |
+| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_COSMOS_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` |
+| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<account-name>.documents.azure.com:443/` |
+
+## Next steps
+
+Follow the tutorials listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
static-web-apps Front Door Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/front-door-manual.md
Learn to add [Azure Front Door](../frontdoor/front-door-overview.md) as the CDN for your static web app. Azure Front Door is a scalable and secure entry point for fast delivery of your web applications.
+> [!NOTE]
+> Consider using [enterprise-grade edge](enterprise-edge.md) for faster page loads, enhanced security, and optimized reliability for global applications.
+ In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
When creating an Azure Front Door profile, you must select an origin from the same subscription as the selected Front Door.
-1. Navigate to the Azure home screen.
+1. Navigate to the Azure portal home.
1. Select **Create a resource**.
When creating an Azure Front Door profile, you must select an origin from the sa
1. Select **Create**.
-1. Select the **Azure Front Door Standard/Premium** option.
+1. Select the **Azure Front Door** option.
1. Select the **Quick create** option.
-1. Select the **Continue to create a front door** button.
+1. Select the **Continue to create a Front Door** button.
1. In the *Basics* tab, enter the following values:
When creating an Azure Front Door profile, you must select an origin from the sa
| Name | Enter **my-static-web-app-front-door**. |
| Tier | Select **Standard**. |
| Endpoint name | Enter a unique name for your Front Door host. |
- | Origin type | Select **Custom**. |
- | Origin host name | Enter the hostname of your static web app that you set aside from the beginning of this tutorial. Make sure your value does not include a trailing slash or protocol. (For example, `desert-rain-04056.azurestaticapps.net`) |
- | Origin type | Select **Custom**. |
- | Origin host name | Enter the host name for your website. For example, `contoso.com`. |
+ | Origin type | Select **Static Web App**. |
+ | Origin host name | Select the host name of your static web app from the dropdown. |
| Caching | Check the **Enable caching** checkbox. |
+ | Query string caching behavior | Select **Use Query String**. |
+ | Compression | Select **Enable compression**. |
| WAF policy | Select **Create new** or select an existing Web Application Firewall policy from the dropdown if you want to enable this feature. |

1. Select **Review + create**.
+ The validation process may take a moment to complete before you can continue.
+ 1. Select **Create**. The creation process may take a few minutes to complete.
Add the following settings to disable Front Door's caching policies from trying
1. Select **Request path**.
-1. Select **Begins With** in the *Operator* drop down.
+1. Select **Begins With** in the *Operator* drop-down.
1. Select the **Edit** link above the *Value* textbox.
Add the following settings to disable Front Door's caching policies from trying
1. Select the **Update** button.
-1. Select the **No transform** option from the *Case transform* dropdown.
- ### Add an action 1. Select the **Add an action** dropdown.
Now that the rule is created, you apply the rule to a Front Door endpoint.
1. Select the **Unassociated** link.
-1. Select the Endpoint name to which you want to apply the caching rule.
+1. Select the endpoint name to which you want to apply the caching rule.
1. Select the **Next** button.
Open the [staticwebapp.config.json](configuration.md) file for your site and mak
```json {
- "route": "/members",
- "allowedRoles": ["authenticated, members"],
- "headers": {
- "Cache-Control": "no-store"
- }
+ ...
+ "routes": [
+ {
+ "route": "/members",
+ "allowedRoles": ["authenticated", "members"],
+ "headers": {
+ "Cache-Control": "no-store"
+ }
+ }
+ ]
+ ...
} ```
storage Upgrade To Data Lake Storage Gen2 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
description: Shows you how to use Resource Manager templates to upgrade from Azu
Previously updated : 01/25/2022 Last updated : 09/23/2022
To learn more about these capabilities and evaluate the impact of this upgrade o
> [!IMPORTANT] > An upgrade is one-way. There's no way to revert your account once you've performed the upgrade. We recommend that you validate your upgrade in a nonproduction environment.
-## Review feature support
+## Prepare to upgrade
-You're account might be configured to use features that aren't yet supported in Data Lake Storage Gen2 enabled accounts. If your account is using a feature that isn't yet supported, the upgrade will not pass the validation step.
+1. Review feature support
-Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to identify unsupported features. If you're using any of those unsupported features in your account, make sure to disable them before you begin the upgrade.
+    Your account might be configured to use features that aren't yet supported in Data Lake Storage Gen2 enabled accounts. If your account uses a feature that isn't yet supported, the upgrade won't pass the validation step. Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to identify unsupported features. If you're using any of those unsupported features in your account, make sure to disable them before you begin the upgrade.
+
+2. Ensure that the segments of each blob path are named
+
+    The migration process creates a directory for each path segment of a blob. Data Lake Storage Gen2 directories must have a name, so for the migration to succeed, every path segment in a virtual directory must be named. The same requirement applies to segments that are named only with a space character. If any path segment is either unnamed (`//`) or named only with a space character, then before you proceed with the migration, copy those blobs to a new path that is compatible with these naming requirements. A minimal sketch that flags such blobs follows this list.
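
The following is a rough sketch (not from the upstream article) that scans a container for blobs whose virtual directory segments are unnamed or contain only spaces, assuming the `azure-storage-blob` package; the connection string and container name are placeholders:

```python
from azure.storage.blob import ContainerClient

# Placeholders: supply your own connection string and container name.
container = ContainerClient.from_connection_string(
    "<storage-connection-string>", "<container-name>"
)

for blob in container.list_blobs():
    # Every segment except the final one is a virtual directory name.
    directory_segments = blob.name.split("/")[:-1]
    if any(segment.strip() == "" for segment in directory_segments):
        print(f"Rename before upgrading: {blob.name!r}")
```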
## Perform the upgrade
storage Table Storage Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design.md
Title: Design scalable and performant tables in Azure table storage. | Microsoft Docs
-description: Learn to design scalable and performant tables in Azure table storage. Review table partitions, Entity Group Transactions, and capacity and cost considerations.
+ Title: Design scalable and performant tables in Azure Table storage. | Microsoft Docs
+description: Learn to design scalable and performant tables in Azure Table storage. Review table partitions, Entity Group Transactions, and capacity and cost considerations.
Table storage is relatively inexpensive, but you should include cost estimates f
- [Modeling relationships](table-storage-design-modeling.md) - [Design for querying](table-storage-design-for-query.md) - [Encrypting Table Data](table-storage-design-encrypt-data.md)-- [Design for data modification](table-storage-design-for-modification.md)
+- [Design for data modification](table-storage-design-for-modification.md)
synapse-analytics Get Started Analyze Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-pool.md
A dedicated SQL pool consumes billable resources as long as it's active. You can
SELECT PassengerCount, SUM(TripDistanceMiles) as SumTripDistance, AVG(TripDistanceMiles) as AvgTripDistance
+ INTO dbo.PassengerCountStats
FROM dbo.NYCTaxiTripSmall WHERE TripDistanceMiles > 0 AND PassengerCount > 0 GROUP BY PassengerCount
synapse-analytics Best Practices Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-dedicated-sql-pool.md
Previously updated : 11/02/2021 Last updated : 09/22/2022
Segment quality can be measured by the number of rows in a compressed Row Group.
Because high-quality columnstore segments are important, it's a good idea to use user IDs that are in the medium or large resource class for loading data. Using lower [data warehouse units](resource-consumption-models.md) means you want to assign a larger resource class to your loading user.
-Columnstore tables generally won't push data into a compressed columnstore segment until there are more than 1 million rows per table. Each dedicated SQL pool table is partitioned into 60 tables. As such, columnstore tables won't benefit a query unless the table has more than 60 million rows.
+Columnstore tables generally won't push data into a compressed columnstore segment until there are more than 1 million rows per table. Each dedicated SQL pool table is distributed into 60 different distributions. As such, columnstore tables won't benefit a query unless the table has more than 60 million rows.
> [!TIP] > For tables with less than 60 million rows, having a columnstore index may not be the optimal solution.
virtual-machines Constrained Vcpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/constrained-vcpu.md
> [!TIP] > Try the **[Virtual Machine selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-Some database workloads like SQL Server require high memory, storage, and I/O bandwidth, but not a high core count. Many database workloads are not CPU-intensive. Azure offers certain VM sizes where you can lower the VM vCPU count to reduce the cost of software licensing, while maintaining the same memory, storage, and I/O bandwidth.
+Some database workloads, like SQL Server, require high memory, storage, and I/O bandwidth, but not a high number of cores. Many database workloads are not CPU-intensive. Azure offers pre-defined VM sizes with a lower vCPU count, which can help reduce the cost of software licensing while maintaining the same memory, storage, and I/O bandwidth.
The available vCPU count can be reduced to one half or one quarter of the original VM specification. These new VM sizes have a suffix that specifies the number of available vCPUs to make them easier for you to identify. There are no additional cores available that can be used by the VM.
-For example, the current VM size Standard_E32s_v5 comes with 32 vCPUs, 256 GiB RAM, 32 disks, and 80,000 IOPs or 2 GB/s of I/O bandwidth. The new VM sizes Standard_E32-16s_v5 and Standard_E32-8s_v5 comes with 16 and 8 active vCPUs respectively, while maintaining the rest of the specs of the Standard_E32s_v5 for memory, storage, and I/O bandwidth.
+For example, the Standard_E32s_v5 VM size comes with 32 vCPUs, 256 GiB RAM, 32 disks, and 80,000 IOPs or 2 GB/s of I/O bandwidth. The pre-defined Standard_E32-16s_v5 and Standard_E32-8s_v5 VM sizes come with 16 and 8 active vCPUs respectively, while maintaining the memory, storage, and I/O bandwidth specifications of the Standard_E32s_v5.
-The licensing fees charged for SQL Server are based on the avaialble vCPU count. Third party products should count the available vCPU which represents the max to be used and licensed. This results in a 50% to 75% increase in the ratio of the VM specs to available (billable) vCPUs. These new VM sizes allow customer workloads to use the same memory, storage, and I/O bandwidth while optimizing their software licensing cost. At this time, the compute cost, which includes OS licensing, remains the same one as the original size. For more information, see [Azure VM sizes for more cost-effective database workloads](https://azure.microsoft.com/blog/announcing-new-azure-vm-sizes-for-more-cost-effective-database-workloads/).
+The licensing fees charged for SQL Server are based on the available vCPU count. Third-party products should count the available vCPUs, which represent the maximum number that can be used and licensed. This results in a 50% to 75% increase in the ratio of the VM specs to available (billable) vCPUs. At this time, the VM pricing, which includes OS licensing, remains the same as the original size. For more information, see [Azure VM sizes for more cost-effective database workloads](https://azure.microsoft.com/blog/announcing-new-azure-vm-sizes-for-more-cost-effective-database-workloads/).
| Name | vCPU | Specs |
virtual-machines Disks Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-cross-tenant-customer-managed-keys.md
description: Learn how to use customer-managed keys with your Azure disks in dif
Previously updated : 09/13/2022 Last updated : 09/23/2022
This article covers building a solution where you encrypt managed disks with cus
A disk encryption set with federated identity in a cross-tenant CMK workflow spans service provider/ISV tenant resources (disk encryption set, managed identities, and app registrations) and customer tenant resources (enterprise apps, user role assignments, and key vault). In this case, the source Azure resource is the service provider's disk encryption set.
-If you have any questions about cross-tenant customer-managed keys with managed disks, email <crosstenantcmkvteam@service.microsoft.com>.
+If you have questions about cross-tenant customer-managed keys with managed disks, email <crosstenantcmkvteam@service.microsoft.com>.
## Prerequisites - Install the latest [Azure PowerShell module](/powershell/azure/install-az-ps).
If you have any questions about cross-tenant customer-managed keys with managed
## Limitations
-Currently this feature is only available in the West Central US region. Managed Disks and the customer's Key Vault must be in the same Azure region, but they can be in different subscriptions. This feature doesn't support Ultra Disks or Azure Premium SSD v2 managed disks.
+Currently this feature is only available in the North Central US, West Central US, and West US regions. Managed Disks and the customer's Key Vault must be in the same Azure region, but they can be in different subscriptions. This feature doesn't support Ultra Disks or Azure Premium SSD v2 managed disks.
[!INCLUDE [active-directory-msi-cross-tenant-cmk-overview](../../includes/active-directory-msi-cross-tenant-cmk-overview.md)]
virtual-machines Field Programmable Gate Arrays Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/field-programmable-gate-arrays-attestation.md
Last updated 04/01/2021
The FPGA Attestation service performs a series of validations on a design checkpoint file (called a "netlist") generated by the Xilinx toolset and produces a file that contains the validated image (called a "bitstream") that can be loaded onto the Xilinx U250 FPGA card in an NP series VM.

## News
-The current attestation service is using Vitis 2020.2 from Xilinx, on Jan 17th 2022, weΓÇÖll be moving to Vitis 2021.1, the change should be transparent to most users. Once your designs are ΓÇ£attestedΓÇ¥ using Vitis 2021.1, you should be moving to XRT2021.1. Xilinx will publish new marketplace images based on XRT 2021.1.
-Please note that current designs already attested on Vitis 2020.2, will work on the current deployment marketplace images as well as new images based on XRT2021.1
+The current attestation service uses Vitis 2021.1 from Xilinx. On Sept 26th 2022, we'll be moving to Vitis 2022.1; the change should be transparent to most users. Once your designs are "attested" using Vitis 2022.1, you should move to XRT 2022.1. Xilinx published new marketplace images based on XRT 2022.1.
+Please note that current designs already attested on Vitis 2020.2 or 2021.1 will work on the current deployment marketplace images as well as on new images based on XRT 2022.1.
As part of the move to 2021.1, Xilinx introduced a new DRC regarding BUFCE_LEAF that might cause some designs previously working on Vitis 2020.2 to fail attestation. For more details, see [Xilinx AR 75980 UltraScale/UltraScale+ BRAM: CLOCK_DOMAIN = Common Mode skew checks](https://support.xilinx.com/s/article/75980?language=en_US).
You will need to have your tenant and subscription ID authorized to submit to th
## Building your design for attestation
-The preferred Xilinx toolset for building designs is Vitis 2020.2. Netlist files that were created with an earlier version of the toolset and are still compatible with 2020.2 can be used. Make sure you have loaded the correct shell to build against. The currently supported version is `xilinx_u250_gen3x16_xdma_2_1_202010_1`. The support files can be downloaded from the Xilinx Alveo lounge.
+The preferred Xilinx toolset for building designs is Vitis 2022.1. Netlist files that were created with an earlier version of the toolset and are still compatible with 2022.1 can be used. Make sure you have loaded the correct shell to build against. The currently supported version is `xilinx_u250_gen3x16_xdma_2_1_202010_1`. The support files can be downloaded from the Xilinx Alveo lounge.
You must include the following argument to Vitis (v++ cmd line) to build an `xclbin` file that contains a netlist instead of a bitstream.
virtual-machines Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-powershell.md
**Applies to:** :heavy_check_mark: Linux VMs
-The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line or in scripts. This quickstart shows you how to use the Azure PowerShell module to deploy a Linux virtual machine (VM) in Azure. This quickstart uses the Ubuntu 18.04 LTS marketplace image from Canonical. To see your VM in action, you'll also SSH to the VM and install the NGINX web server.
+The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line or in scripts. This quickstart shows you how to use the Azure PowerShell module to deploy a Linux virtual machine (VM) in Azure. This quickstart uses the latest Debian marketplace image. To see your VM in action, you'll also SSH to the VM and install the NGINX web server.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
New-AzResourceGroup -Name 'myResourceGroup' -Location 'EastUS'
We will be automatically generating an SSH key pair to use for connecting to the VM. The public key that is created using `-GenerateSshKey` will be stored in Azure as a resource, using the name you provide as `SshKeyName`. The SSH key resource can be reused for creating additional VMs. Both the public and private keys will also be downloaded for you. When you create your SSH key pair using the Cloud Shell, the keys are stored in a [storage account that is automatically created by Cloud Shell](../../cloud-shell/persisting-shell-storage.md). Don't delete the storage account, or the file share in it, until after you have retrieved your keys or you will lose access to the VM.
-You will be prompted for a user name that will be used when you connect to the VM. You will also be asked for a password, which you can leave blank. Password login for the VM is disabled when using an SSH key.
+You will be prompted for a user name that will be used when you connect to the VM. You will also be asked for a password, which you can leave blank. Password log in for the VM is disabled when using an SSH key.
In this example, you create a VM named *myVM*, in *East US*, using the *Standard_B2s* VM size.
virtual-machines External Ntpsource Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/external-ntpsource-configuration.md
Time synchronization in Active Directory should be managed by only allowing the
To check current time source in your **PDC**, from an elevated command prompt run *w32tm /query /source* and note the output for later comparison.
-1. From *Start* run *gpedit.msc*
+1. From *Start* run *gpedit.msc*.
2. Navigate to the *Global Configuration Settings* policy under *Computer Configuration* -> *Administrative Templates* -> *System* -> *Windows Time Service*. 3. Set it to *Enabled* and configure the *AnnounceFlags* parameter to **5**. 4. Navigate to *Computer Settings* -> *Administrative Templates* -> *System* -> *Windows Time Service* -> *Time Providers*.
To mark the VMIC provider as *Disabled* from *Start* type *regedit.exe* -> In th
From an elevated command prompt rerun *w32tm /query /source* and compare the output to the one you noted at the beginning of the configuration. Now it will be set to the NTP Server you chose.
+>[!TIP]
+>Follow the steps below if you want to speed up the process of changing the NTP source on your PDC. You can create a scheduled task that runs at **System Start-up** with the **Delay task for up to (random delay)** option set to **2 minutes**.
+
+## Scheduled task to set NTP source on your PDC
+
+1. From *Start* run *Task Scheduler*.
+2. Browse to *Task Scheduler Library* -> *Microsoft* -> *Windows* -> *Time Synchronization*, right-click in the right-hand pane, and select *Create New Task*.
+3. In the *General* tab, click the *Change User or Group...* button and set it to run as *LOCAL SERVICE*. Then check the box to *Run with highest privileges*.
+4. Under *Configure for:* select your operating system version.
+5. Switch to the *Triggers* tab, click the *New...* button, and set the schedule as per your requirements. Before clicking *OK*, make sure the box next to *Enabled* is checked.
+6. Go to the *Actions* tab. Click the *New...* button and enter the following details:
+- On *Action:* set *Start a program*.
+- On *Program/script:* set the path to *%windir%\system32\w32tm.exe*.
+- On *Add arguments:* type */resync*, and click *OK* to save changes.
+7. Under the *Conditions* tab, ensure that *Start the task only if the computer is idle for* and *Start the task only if the computer is on AC power* are *not selected*. Click *OK*.
+ ## GPO for Clients Configure the following Group Policy Object to enable your clients to synchronize time with any Domain Controller in your Domain: To check current time source in your client, from an elevated command prompt run *w32tm /query /source* and note the output for later comparison.
-1. From a Domain Controller go to *Start* run *gpmc.msc*
+1. From a Domain Controller go to *Start* run *gpmc.msc*.
2. Browse to the Forest and Domain where you want to create the GPO. 3. Create a new GPO, for example *Clients Time Sync*, in the container *Group Policy Objects*. 4. Right-click on the newly created GPO and Edit.
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/hb-hc-known-issues.md
This article attempts to list recent common issues and their solutions when using the [H-series](../../sizes-hpc.md) and [N-series](../../sizes-gpu.md) HPC and GPU VMs.
-## InfiniBand Errors on HBv3
-As of the week of August 12, we've identified a bug in the firmware of the ConnectX-6 InfiniBand NIC adapters in HBv3-series VMs that can cause MPI jobs to fail on a transient basis. This issue applies to all VM sizes within the HBv3-series. This issue doesn't apply to other H-series VMs (HB-series, HBv2-series, or HC-series). A firmware update will be issued in the coming days to remediate this issue.
- ## Memory Capacity on Standard_HB120rs_v2
As of the week of December 6, 2021 we're temporarily reducing the amount of memory (RAM) exposed to the Standard_HB120rs_v2 VM size, otherwise known as [HBv2](../../hbv2-series.md). We're reducing the memory footprint to 432 GB from its current value of 456 GB (a 5.2% reduction). This reduction is temporary and the full memory capacity should be restored in early 2022. We've made this change to address an issue that can result in long VM deployment times or VM deployments for which not all devices function correctly. The reduction in memory capacity doesn't affect VM performance.
To prevent low-level hardware access that can result in security vulnerabilities
On Ubuntu-18.04 based marketplace VM images with kernels version `5.4.0-1039-azure #42` and newer, some older Mellanox OFED are incompatible causing an increase in VM boot time up to 30 minutes in some cases. This has been reported for both Mellanox OFED versions 5.2-1.0.4.0 and 5.2-2.2.0.0. The issue is resolved with Mellanox OFED 5.3-1.0.0.1. If it is necessary to use the incompatible OFED, a solution is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace VM image, or older and not to update the kernel.
-## MPI QP creation errors
-If in the midst of running any MPI workloads, InfiniBand QP creation errors such as shown below, are thrown, we suggest rebooting the VM and retrying the workload. This issue will be fixed in the future.
-
-```bash
-ib_mlx5_dv.c:150 UCX ERROR mlx5dv_devx_obj_create(QP) failed, syndrome 0: Invalid argument
-```
-
-You may verify the values of the maximum number of queue-pairs when the issue is observed as follows.
-```bash
-[user@azurehpc-vm ~]$ ibv_devinfo -vv | grep qp
-max_qp: 4096
-```
- ## Accelerated Networking on HB, HC, HBv2, HBv3 and NDv2
[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](../../hb-series.md), [HC](../../hc-series.md), [HBv2](../../hbv2-series.md), [HBv3](../../hbv3-series.md) and [NDv2](../../ndv2-series.md). This capability now allows enhanced throughput (up to 30 Gbps) and latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact the behavior of certain MPI implementations when running jobs over InfiniBand. Specifically, the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to earlier mlx5_0). This may require tweaking of the MPI command lines, especially when using the UCX interface (commonly with OpenMPI and HPC-X). The simplest solution currently may be to use the latest HPC-X on the CentOS-HPC VM images or disable Accelerated Networking if not required.
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
This section applies only if you want to use a fencing device with an Azure fenc
This section applies only if you're using a fencing device that's based on an Azure fence agent. The fencing device uses either a managed identity or a service principal to authorize against Microsoft Azure. #### Using managed identity
-To create a managed identity (MSI), [create a system-assigned](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time. Fence device, based on managed identity is supported on SLES 15 SP1 and above.
+To create a managed identity (MSI), [create a system-assigned](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User-assigned managed identities should not be used with Pacemaker at this time. The Azure fence agent, based on managed identity, is supported for SLES 12 SP5 and for SLES 15 SP1 and above.
#### Using service principal
Make sure to assign the custom role to the service principal at all VM (cluster
> The installed version of the *fence-agents* package must be 4.4.0 or later to benefit from the faster failover times with the Azure fence agent, when a cluster node is fenced. If you're running an earlier version, we recommend that you update the package. >[!IMPORTANT]
- > If using managed identity, the installed version of the *fence-agents* package must be fence-agents 4.5.2+git.1592573838.1eee0863 or later. Earlier versions will not work correctly with a managed identity configuration.
- > Currently only SLES 15 SP1 and newer are supported for managed identity configuration.
-
+ > If using managed identity, the installed version of the *fence-agents* package must be:
+ > - SLES 12 SP5: fence-agents 4.9.0+git.1624456340.8d746be9-3.35.2 or later
+ > - SLES 15 SP1 and higher: fence-agents 4.5.2+git.1592573838.1eee0863 or later
+ >
+ > Earlier versions will not work correctly with a managed identity configuration.
+
1. **[A]** Install the Azure Python SDK and Azure Identity Python module. Install the Azure Python SDK on SLES 12 SP4 or SLES 12 SP5:
Make sure to assign the custom role to the service principal at all VM (cluster
> The 'pcmk_host_map' option is required in the command only if the hostnames and the Azure VM names are *not* identical. Specify the mapping in the format *hostname:vm-name*. > Refer to the bold section in the following command.
- If using **managed identity** for your fence agent, run the following command (SLES 15 SP1 and newer, only)
+ If using **managed identity** for your fence agent, run the following command
<pre><code> # replace the bold strings with your subscription ID and resource group of the VM
virtual-network-manager Concept Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-deployments.md
Previously updated : 07/06/2022- Last updated : 09/22/2022+ # Configuration deployments in Azure Virtual Network Manager (Preview)
In this article, you'll learn about how configurations are applied to your netwo
## Deployment
-*Deployment* is the method Azure Virtual Network Manager uses to apply configurations to your virtual networks in network groups. Configurations won't take effect until they're deployed. Changes to network groups, including events such as removal and addition of a virtual network into a network group, will take effect without the need for redeployment. For example, if you have a configuration deployed, and a virtual network is added to a network group, it takes effect immediately. When committing a deployment, you select the region(s) to which the configuration will be applied. When a deployment request is sent to Azure Virtual Network Manager, it will calculate the [goal state](#goalstate) of network resources and request the necessary changes to your infrastructure. The changes can take a few minutes depending on how large the configuration is.
- *Deployment* is the method Azure Virtual Network Manager uses to apply configurations to your virtual networks in network groups. Configurations won't take effect until they're deployed. When a deployment request is sent to Azure Virtual Network Manager, it will calculate the [goal state](#goalstate) of all resources under your network manager in that region. Goal state is a combination of deployed configurations and network group membership. Network manager will then apply the necessary changes to your infrastructure.
-When committing a deployment, you select the region(s) to which the configuration will be applied. The deployed configuration is also static. Once deployed, you can edit your configurations freely without impacting your deployed setup. Applying any of these new changes will take another deployment. The changes reprocess the entire region and can take a few minutes depending on how large the configuration is. There are two factors in how quick the configurations are applied:
+When committing a deployment, you select the region(s) to which the configuration will be applied. The time this takes depends on how large the configuration is. Once the VNets are members of a network group, deploying a configuration onto that network group takes a few minutes. This includes adding or removing group members directly, or configuring an Azure Policy resource. Safe deployment practices recommend gradually rolling out changes on a per-region basis.
+## Deployment latency
+
+Deployment latency is the time it takes for a deployment configuration to be applied and take effect. There are two factors in how quickly the configurations are applied:
+
+- The base time of applying a configuration is a few minutes.
-- The time of applying configuration is a few minutes.-- The time to get notification of what is in a network group can very.
+- The time to receive a notification of network group membership can vary.
-For static members, it's immediate. For dynamic members where the scope is less than 1000 subscriptions, it takes a few minutes. In environments with over 1000 subscriptions, the notification mechanism works in a 24-hour window. Once the policy is deployed, commits are faster in the future. However, Changes to network groups will take effect without the need for redeployment. This includes adding or removing group members directly, or configuring an Azure Policy resource. Safe deployment practices recommend gradually rolling out changes on a per-region basis.
+For manually added members, notification is immediate. For dynamic members where the scope is less than 1000 subscriptions, notification takes a few minutes. In environments with more than 1000 subscriptions, the notification mechanism works in a 24-hour window. Changes to network groups will take effect without the need for configuration redeployment.
-AVNM will apply the configuration to the VNets in the network group. So even if your network group consists of dynamic members from more than 1000 subscriptions, if AVNM also is notified who is in the network group, the configuration will be applied in a few minutes.
+AVNM will apply the configuration to the VNets in the network group. So even if your network group consists of dynamic members from more than 1000 subscriptions, if AVNM also is notified who is in the network group, the configuration will be applied in a few minutes.
## Deployment status When you commit a configuration deployment, the API does a POST operation. Once the deployment request has been made, Azure Virtual Network Manager will calculate the goal state of your networks in the deployed regions and request the underlying infrastructure to make the changes. You can see the deployment status on the *Deployment* page of the Virtual Network Manager.
virtual-network Virtual Network Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md
For more detail on the specific attributes of a public IP address during creatio
|Resource|Azure portal|Azure PowerShell|Azure CLI| |||||
-|[Virtual machine](./remove-public-ip-address-vm.md)|Select **Dissociate** to dissociate the IP address from the NIC configuration, then select **Delete**.|[Set-AzPublicIpAddress](/powershell/module/az.network/set-azpublicipaddress) to dissociate the IP address from the NIC configuration; [Remove-AzPublicIpAddress](/powershell/module/az.network/remove-azpublicipaddress) to delete|[az network public-ip update with the "--remove" parameter](/cli/azure/network/public-ip#az-network-public-ip-update) to remove the IP address from the NIC configuration. Use [az network public-ip delete](/cli/azure/network/public-ip#az-network-public-ip-delete) to delete the public IP. |
+|[Virtual machine](./remove-public-ip-address-vm.md)|Select **Dissociate** to dissociate the IP address from the NIC configuration, then select **Delete**.|[Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface) to dissociate the IP address from the NIC configuration; [Remove-AzPublicIpAddress](/powershell/module/az.network/remove-azpublicipaddress) to delete|[az network public-ip update with the "--remove" parameter](/cli/azure/network/public-ip#az-network-public-ip-update) to remove the IP address from the NIC configuration. Use [az network public-ip delete](/cli/azure/network/public-ip#az-network-public-ip-delete) to delete the public IP. |
|Load balancer frontend | Browse to an unused public IP address and select **Associate**. Pick the load balancer with the relevant front-end IP configuration to replace the IP. The old IP can be deleted using the same method as a virtual machine. | Use [Set-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/set-azloadbalancerfrontendipconfig) to associate a new front-end IP config with a public load balancer. Use[Remove-AzPublicIpAddress](/powershell/module/az.network/remove-azpublicipaddress) to delete a public IP. You can also use [Remove-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/remove-azloadbalancerfrontendipconfig) to remove a frontend IP config if there are more than one. | Use [az network lb frontend-ip update](/cli/azure/network/lb/frontend-ip#az-network-lb-frontend-ip-update) to associate a new frontend IP config with a public load balancer. Use [Remove-AzPublicIpAddress](/powershell/module/az.network/remove-azpublicipaddress) to delete a public IP. You can also use [az network lb frontend-ip delete](/cli/azure/network/lb/frontend-ip#az-network-lb-frontend-ip-delete) to remove a frontend IP config if there are more than one. | |Firewall|N/A| [Deallocate](../../firewall/firewall-faq.yml#how-can-i-stop-and-start-azure-firewall) to deallocate firewall and remove all IP configurations | Use [az network firewall ip-config delete](/cli/azure/network/firewall/ip-config#az-network-firewall-ip-config-delete) to remove IP. Use PowerShell to deallocate first. |
To manage public IP addresses, your account must be assigned to the [network con
Public IP addresses have a nominal charge. To view the pricing, read the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page. - Create a public IP address using [PowerShell](../../virtual-network/powershell-samples.md) or [Azure CLI](../../virtual-network/cli-samples.md) sample scripts, or using Azure [Resource Manager templates](../../virtual-network/template-samples.md)-- Create and assign [Azure Policy definitions](../../virtual-network/policy-reference.md) for public IP addresses
+- Create and assign [Azure Policy definitions](../../virtual-network/policy-reference.md) for public IP addresses
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
na Previously updated : 09/16/2022 Last updated : 09/22/2022 # Name resolution for resources in Azure virtual networks
-Depending on how you use Azure to host IaaS, PaaS, and hybrid solutions, you might need to allow the virtual machines (VMs), and other resources deployed in a virtual network to communicate with each other. Although you can enable communication by using IP addresses, it is much simpler to use names that can be easily remembered, and do not change.
+Depending on how you use Azure to host IaaS, PaaS, and hybrid solutions, you might need to allow the virtual machines (VMs), and other resources deployed in a virtual network to communicate with each other. Although you can enable communication by using IP addresses, it's much simpler to use names that can be easily remembered, and don't change.
When resources deployed in virtual networks need to resolve domain names to internal IP addresses, they can use one of four methods:
The type of name resolution you use depends on how your resources need to commun
| Resolution of on-premises computer and service names from VMs or role instances in Azure. |[Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md) or customer-managed DNS servers (on-premises domain controller, local read-only domain controller, or a DNS secondary synced using zone transfers, for example). See [Name resolution using your own DNS server](#name-resolution-that-uses-your-own-dns-server). |FQDN only | | Resolution of Azure hostnames from on-premises computers. |Forward queries to a customer-managed DNS proxy server in the corresponding virtual network, the proxy server forwards queries to Azure for resolution. See [Name resolution using your own DNS server](#name-resolution-that-uses-your-own-dns-server). |FQDN only | | Reverse DNS for internal IPs. |[Azure DNS private zones](../dns/private-dns-overview.md), [Azure-provided name resolution](#azure-provided-name-resolution), [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md), or [Name resolution using your own DNS server](#name-resolution-that-uses-your-own-dns-server). |Not applicable |
-| Name resolution between VMs or role instances located in different cloud services, not in a virtual network. |Not applicable. Connectivity between VMs and role instances in different cloud services is not supported outside a virtual network. |Not applicable|
+| Name resolution between VMs or role instances located in different cloud services, not in a virtual network. |Not applicable. Connectivity between VMs and role instances in different cloud services isn't supported outside a virtual network. |Not applicable|
## Azure-provided name resolution
-Azure provided name resolution provides only basic authoritative DNS capabilities. If you use this option the DNS zone names and records will be automatically managed by Azure and you will not be able to control the DNS zone names or the life cycle of DNS records. If you need a fully featured DNS solution for your virtual networks you must use [Azure DNS private zones](../dns/private-dns-overview.md) or [Customer-managed DNS servers](#name-resolution-that-uses-your-own-dns-server).
+Azure-provided name resolution offers only basic authoritative DNS capabilities. If you use this option, the DNS zone names and records are automatically managed by Azure, and you won't be able to control the DNS zone names or the life cycle of DNS records. If you need a fully featured DNS solution for your virtual networks, you can use [Azure DNS private zones](../dns/private-dns-overview.md) with [Customer-managed DNS servers](#name-resolution-that-uses-your-own-dns-server) or an [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md).
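For example, a minimal Azure CLI sketch of setting up an Azure DNS private zone as the fully featured alternative (the zone name, resource group, and virtual network names are hypothetical):

```azurecli
# Create a private DNS zone (zone and resource names here are only examples).
az network private-dns zone create \
  --resource-group MyResourceGroup \
  --name azure.contoso.com

# Link the zone to a virtual network and enable auto-registration,
# so VM records are created and removed automatically.
az network private-dns link vnet create \
  --resource-group MyResourceGroup \
  --zone-name azure.contoso.com \
  --name MyVNetLink \
  --virtual-network MyVNet \
  --registration-enabled true
```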
-Along with resolution of public DNS names, Azure provides internal name resolution for VMs and role instances that reside within the same virtual network or cloud service. VMs and instances in a cloud service share the same DNS suffix, so the host name alone is sufficient. But in virtual networks deployed using the classic deployment model, different cloud services have different DNS suffixes. In this situation, you need the FQDN to resolve names between different cloud services. In virtual networks deployed using the Azure Resource Manager deployment model, the DNS suffix is consistent across the all virtual machines within a virtual network, so the FQDN is not needed. DNS names can be assigned to both VMs and network interfaces. Although Azure-provided name resolution does not require any configuration, it is not the appropriate choice for all deployment scenarios, as detailed in the previous table.
+Along with resolution of public DNS names, Azure provides internal name resolution for VMs and role instances that reside within the same virtual network or cloud service. VMs and instances in a cloud service share the same DNS suffix, so the host name alone is sufficient. But in virtual networks deployed using the classic deployment model, different cloud services have different DNS suffixes. In this situation, you need the FQDN to resolve names between different cloud services. In virtual networks deployed using the Azure Resource Manager deployment model, the DNS suffix is consistent across all virtual machines within a virtual network, so the FQDN isn't needed. DNS names can be assigned to both VMs and network interfaces. Although Azure-provided name resolution doesn't require any configuration, it's not the appropriate choice for all deployment scenarios, as detailed in the previous table.
> [!NOTE] > When using cloud services web and worker roles, you can also access the internal IP addresses of role instances using the Azure Service Management REST API. For more information, see the [Service Management REST API Reference](/previous-versions/azure/ee460799(v=azure.100)). The address is based on the role name and instance number.
Azure-provided name resolution includes the following features:
* High availability. You don't need to create and manage clusters of your own DNS servers. * You can use the service in conjunction with your own DNS servers, to resolve both on-premises and Azure host names. * You can use name resolution between VMs and role instances within the same cloud service, without the need for an FQDN.
-* You can use name resolution between VMs in virtual networks that use the Azure Resource Manager deployment model, without need for an FQDN. Virtual networks in the classic deployment model require an FQDN when you are resolving names in different cloud services.
+* You can use name resolution between VMs in virtual networks that use the Azure Resource Manager deployment model, without need for an FQDN. Virtual networks in the classic deployment model require an FQDN when you're resolving names in different cloud services.
* You can use host names that best describe your deployments, rather than working with auto-generated names. ### Considerations
-Points to consider when you are using Azure-provided name resolution:
-* The Azure-created DNS suffix cannot be modified.
-* DNS lookup is scoped to a virtual network. DNS names created for one virtual networks can't be resolved from other virtual networks.
-* You cannot manually register your own records.
-* WINS and NetBIOS are not supported. You cannot see your VMs in Windows Explorer.
-* Host names must be DNS-compatible. Names must use only 0-9, a-z, and '-', and cannot start or end with a '-'.
+Points to consider when you're using Azure-provided name resolution:
+* The Azure-created DNS suffix can't be modified.
+* DNS lookup is scoped to a virtual network. DNS names created for one virtual network can't be resolved from other virtual networks.
+* You can't manually register your own records.
+* WINS and NetBIOS are not supported. You can't see your VMs in Windows Explorer.
+* Host names must be DNS-compatible. Names must use only 0-9, a-z, and '-', and can't start or end with a '-'.
* DNS query traffic is throttled for each VM. Throttling shouldn't impact most applications. If request throttling is observed, ensure that client-side caching is enabled. For more information, see [DNS client configuration](#dns-client-configuration). * Use a different name for each virtual machine in a virtual network to avoid DNS resolution issues. * Only VMs in the first 180 cloud services are registered for each virtual network in a classic deployment model. This limit does not apply to virtual networks in Azure Resource Manager.
-* The Azure DNS IP address is 168.63.129.16. This is a static IP address and will not change.
+* The Azure DNS IP address is 168.63.129.16. This is a static IP address and won't change.
### Reverse DNS Considerations
-Reverse DNS is supported in all ARM based virtual networks. You can issue reverse DNS queries (PTR queries) to map IP addresses of virtual machines to FQDNs of virtual machines.
-* All PTR queries for IP addresses of virtual machines will return FQDNs of form \[vmname\].internal.cloudapp.net
-* Forward lookup on FQDNs of form \[vmname\].internal.cloudapp.net will resolve to IP address assigned to the virtual machine.
-* If the virtual network is linked to an [Azure DNS private zones](../dns/private-dns-overview.md) as a registration virtual network, the reverse DNS queries will return two records. One record will be of the form \[vmname\].[privatednszonename] and the other will be of the form \[vmname\].internal.cloudapp.net
-* Reverse DNS lookup is scoped to a given virtual network even if it is peered to other virtual networks. Reverse DNS queries (PTR queries) for IP addresses of virtual machines located in peered virtual networks will return NXDOMAIN.
-* If you want to turn off reverse DNS function in a virtual network you can do so by creating a reverse lookup zone using [Azure DNS private zones](../dns/private-dns-overview.md) and link this zone to your virtual network. For example if the IP address space of your virtual network is 10.20.0.0/16 then you can create a empty private DNS zone 20.10.in-addr.arpa and link it to the virtual network. While linking the zone to your virtual network you should disable auto registration on the link. This zone will override the default reverse lookup zones for the virtual network and since this zone is empty you will get NXDOMAIN for your reverse DNS queries. See our [Quickstart guide](../dns/private-dns-getstarted-portal.md) for details on how to create a private DNS zone and link it to a virtual network.
+Reverse DNS for VMs is supported in all ARM-based virtual networks. Azure-managed reverse DNS (PTR) records of the form **\[vmname\].internal.cloudapp.net** are automatically added when you start a VM, and removed when the VM is stopped (deallocated). See the following example:
+
+```cmd
+C:\>nslookup -type=ptr 10.11.0.4
+Server: UnKnown
+Address: 168.63.129.16
+
+Non-authoritative answer:
+4.0.11.10.in-addr.arpa name = myeastspokevm1.internal.cloudapp.net
+```
+The **internal.cloudapp.net** reverse DNS zone is Azure-managed and can't be directly viewed or edited. Forward lookup on the FQDN of the form **\[vmname\].internal.cloudapp.net** will also resolve to the IP address assigned to the virtual machine.
+
+If an [Azure DNS private zone](../dns/private-dns-overview.md) is linked to the vnet with a [virtual network link](../dns/private-dns-virtual-network-links.md) and [auto-registration](../dns/private-dns-autoregistration.md) is enabled on that link, then reverse DNS queries will return two records. One record is of the form **\[vmname\].[privatednszonename]** and the other is of the form **\[vmname\].internal.cloudapp.net**. See the following example:
+
+```cmd
+C:\>nslookup -type=ptr 10.20.2.4
+Server: UnKnown
+Address: 168.63.129.16
+
+Non-authoritative answer:
+4.2.20.10.in-addr.arpa name = mywestvm1.internal.cloudapp.net
+4.2.20.10.in-addr.arpa name = mywestvm1.azure.contoso.com
+```
+
+When two PTR records are returned as shown above, forward lookup of either FQDN returns the IP address of the VM.
+
+Reverse DNS lookups are scoped to a given virtual network, even if it's peered to other virtual networks. Reverse DNS queries for IP addresses of virtual machines located in peered virtual networks will return **NXDOMAIN**.
> [!NOTE]
-> If you want reverse DNS lookup to span across virtual network you can create a reverse lookup zone (in-addr.arpa) [Azure DNS private zones](../dns/private-dns-overview.md) and links it to multiple virtual networks. You'll however have to manually manage the reverse DNS records for the virtual machines.
->
+> Reverse DNS (PTR) records are not stored in a forward private DNS zone. Reverse DNS records are stored in a reverse DNS (in-addr.arpa) zone. The default reverse DNS zone associated with a vnet isn't viewable or editable.
+
+You can disable the reverse DNS function in a virtual network by creating your own reverse lookup zone using [Azure DNS private zones](../dns/private-dns-overview.md), and then linking this zone to your virtual network. For example, if the IP address space of your virtual network is 10.20.0.0/16, then you can create an empty private DNS zone **20.10.in-addr.arpa** and link it to the virtual network. This zone overrides the default reverse lookup zones for the virtual network, and because this zone is empty, you'll get **NXDOMAIN** for your reverse DNS queries unless you manually create these entries.
+
+Auto-registration of PTR records isn't supported, so if you want to create entries, you must add them manually. You must also disable auto-registration in the vnet if it's enabled for other zones, due to [restrictions](../dns/private-dns-autoregistration.md#restrictions) that permit only one linked private zone when auto-registration is enabled. See the [private DNS quickstart guide](../dns/private-dns-getstarted-portal.md) for details on how to create a private DNS zone and link it to a virtual network.
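As an illustration only, here's one possible Azure CLI sequence for the 10.20.0.0/16 example above (the resource group, link, and virtual network names are hypothetical):

```azurecli
# Create an empty reverse lookup zone for the 10.20.0.0/16 address space.
az network private-dns zone create \
  --resource-group MyResourceGroup \
  --name 20.10.in-addr.arpa

# Link it to the virtual network with auto-registration disabled.
# Because the zone is empty, reverse queries for VM IPs now return NXDOMAIN.
az network private-dns link vnet create \
  --resource-group MyResourceGroup \
  --zone-name 20.10.in-addr.arpa \
  --name MyReverseZoneLink \
  --virtual-network MyVNet \
  --registration-enabled false
```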
+> [!NOTE]
+> Since Azure DNS private zones are global, you can create a reverse DNS lookup zone that spans multiple virtual networks. To do this, create an [Azure DNS private zone](../dns/private-dns-overview.md) for reverse lookups (an **in-addr.arpa** zone), and link it to the virtual networks. You'll have to manually manage the reverse DNS records for the VMs.
## DNS client configuration
This section covers client-side caching and client-side retries.
Not every DNS query needs to be sent across the network. Client-side caching helps reduce latency and improve resilience to network blips, by resolving recurring DNS queries from a local cache. DNS records contain a time-to-live (TTL) mechanism, which allows the cache to store the record for as long as possible without impacting record freshness. Thus, client-side caching is suitable for most situations.
-The default Windows DNS client has a DNS cache built-in. Some Linux distributions do not include caching by default. If you find that there isn't a local cache already, add a DNS cache to each Linux VM.
+The default Windows DNS client has a DNS cache built-in. Some Linux distributions don't include caching by default. If you find that there isn't a local cache already, add a DNS cache to each Linux VM.
-There are a number of different DNS caching packages available (such as dnsmasq). Here's how to install dnsmasq on the most common distributions:
+There are many different DNS caching packages available (such as dnsmasq). Here's how to install dnsmasq on the most common distributions:
* **Ubuntu (uses resolvconf)**: * Install the dnsmasq package with `sudo apt-get install dnsmasq`.
This section covers VMs, role instances, and web apps.
### VMs and role instances
-Your name resolution needs might go beyond the features provided by Azure. For example, you might need to use Microsoft Windows Server Active Directory domains, resolve DNS names between virtual networks. To cover these scenarios, Azure provides the ability for you to use your own DNS servers.
+Your name resolution needs might go beyond the features provided by Azure. For example, you might need to use Microsoft Windows Server Active Directory domains, or resolve DNS names between virtual networks. To cover these scenarios, Azure enables you to use your own DNS servers.
DNS servers within a virtual network can forward DNS queries to the recursive resolvers in Azure. This enables you to resolve host names within that virtual network. For example, a domain controller (DC) running in Azure can respond to DNS queries for its domains, and forward all other queries to Azure. Forwarding queries allows VMs to see both your on-premises resources (via the DC) and Azure-provided host names (via the forwarder). Access to the recursive resolvers in Azure is provided via the virtual IP 168.63.129.16.
DNS forwarding also enables DNS resolution between virtual networks, and allows
![Diagram of DNS between virtual networks](./media/virtual-networks-name-resolution-for-vms-and-role-instances/inter-vnet-dns.png)
-When you are using Azure-provided name resolution, Azure Dynamic Host Configuration Protocol (DHCP) provides an internal DNS suffix (**.internal.cloudapp.net**) to each VM. This suffix enables host name resolution because the host name records are in the **internal.cloudapp.net** zone. When you are using your own name resolution solution, this suffix is not supplied to VMs because it interferes with other DNS architectures (like domain-joined scenarios). Instead, Azure provides a non-functioning placeholder (*reddog.microsoft.com*).
+When you're using Azure-provided name resolution, Azure Dynamic Host Configuration Protocol (DHCP) provides an internal DNS suffix (**.internal.cloudapp.net**) to each VM. This suffix enables host name resolution because the host name records are in the **internal.cloudapp.net** zone. When you're using your own name resolution solution, this suffix isn't supplied to VMs because it interferes with other DNS architectures (like domain-joined scenarios). Instead, Azure provides a non-functioning placeholder (*reddog.microsoft.com*).
If necessary, you can determine the internal DNS suffix by using PowerShell or the API: * For virtual networks in Azure Resource Manager deployment models, the suffix is available via the [network interface REST API](/rest/api/virtualnetwork/networkinterfaces), the [Get-AzNetworkInterface](/powershell/module/az.network/get-aznetworkinterface) PowerShell cmdlet, and the [az network nic show](/cli/azure/network/nic#az-network-nic-show) Azure CLI command. * In classic deployment models, the suffix is available via the [Get Deployment API](/previous-versions/azure/reference/ee460804(v=azure.100)) call or the [Get-AzureVM -Debug](/powershell/module/servicemanagement/azure.service/get-azurevm) cmdlet.
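For example, a sketch of reading the suffix with the Azure CLI command referenced in the list above (the NIC and resource group names are placeholders, and the query path assumes the standard `dnsSettings` shape of the network interface resource):

```azurecli
# Show the internal DNS suffix assigned to a network interface.
az network nic show \
  --resource-group MyResourceGroup \
  --name MyVmNic \
  --query "dnsSettings.internalDomainNameSuffix" \
  --output tsv
```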
-If forwarding queries to Azure doesn't suit your needs, you should provide your own DNS solution. Your DNS solution needs to:
+If forwarding queries to Azure doesn't suit your needs, provide your own DNS solution or deploy an [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md).
+
+If you provide your own DNS solution, it needs to:
-* Provide appropriate host name resolution, via [DDNS](virtual-networks-name-resolution-ddns.md), for example. If you are using DDNS, you might need to disable DNS record scavenging. Azure DHCP leases are long, and scavenging might remove DNS records prematurely.
+* Provide appropriate host name resolution, via [DDNS](virtual-networks-name-resolution-ddns.md), for example. If you're using DDNS, you might need to disable DNS record scavenging. Azure DHCP leases are long, and scavenging might remove DNS records prematurely.
* Provide appropriate recursive resolution to allow resolution of external domain names. * Be accessible (TCP and UDP on port 53) from the clients it serves, and be able to access the internet. * Be secured against access from the internet, to mitigate threats posed by external agents. > [!NOTE]
-> * For best performance, when you are using Azure VMs as DNS servers, IPv6 should be disabled.
+> * For best performance, when you're using Azure VMs as DNS servers, IPv6 should be disabled.
> * NSGs act as firewalls for your DNS resolver endpoints. You should modify or override your NSG security rules to allow access for UDP Port 53 (and optionally TCP Port 53) to your DNS listener endpoints. Once custom DNS servers are set on a network, the traffic through port 53 bypasses the NSGs of the subnet. ### Web apps
Suppose you need to perform name resolution from your web app built by using App
![Screenshot of virtual network name resolution](./media/virtual-networks-name-resolution-for-vms-and-role-instances/webapps-dns.png)
-If you need to perform name resolution from your web app built by using App Service, linked to a virtual network, to VMs in a different virtual network, you have to use custom DNS servers on both virtual networks, as follows:
+If you need to perform name resolution from your vnet-linked web app (built by using App Service) to VMs in a different vnet, use custom DNS servers or [Azure DNS Private Resolvers](../dns/dns-private-resolver-overview.md) on both vnets.
+
+To use custom DNS servers:
* Set up a DNS server in your target virtual network, on a VM that can also forward queries to the recursive resolver in Azure (virtual IP 168.63.129.16). An example DNS forwarder is available in the [Azure Quickstart Templates gallery](https://azure.microsoft.com/resources/templates/dns-forwarder/) and [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/dns-forwarder). * Set up a DNS forwarder in the source virtual network on a VM. Configure this DNS forwarder to forward queries to the DNS server in your target virtual network.
If you need to perform name resolution from your web app built by using App Serv
* In the Azure portal, for the App Service plan hosting the web app, select **Sync Network** under **Networking**, **Virtual Network Integration**. ## Specify DNS servers
-When you are using your own DNS servers, Azure provides the ability to specify multiple DNS servers per virtual network. You can also specify multiple DNS servers per network interface (for Azure Resource Manager), or per cloud service (for the classic deployment model). DNS servers specified for a network interface or cloud service get precedence over DNS servers specified for the virtual network.
+
+When you're using your own DNS servers, Azure enables you to specify multiple DNS servers per virtual network. You can also specify multiple DNS servers per network interface (for Azure Resource Manager), or per cloud service (for the classic deployment model). DNS servers specified for a network interface or cloud service get precedence over DNS servers specified for the virtual network.
> [!NOTE] > Network connection properties, such as DNS server IPs, should not be edited directly within VMs. This is because they might get erased during service heal when the virtual network adaptor gets replaced. This applies to both Windows and Linux VMs.
-When you are using the Azure Resource Manager deployment model, you can specify DNS servers for a virtual network and a network interface. For details, see [Manage a virtual network](manage-virtual-network.md) and [Manage a network interface](virtual-network-network-interface.md).
+When you're using the Azure Resource Manager deployment model, you can specify DNS servers for a virtual network and a network interface. For details, see [Manage a virtual network](manage-virtual-network.md) and [Manage a network interface](virtual-network-network-interface.md).
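A minimal Azure CLI sketch of both levels, assuming hypothetical resource names and DNS server IPs:

```azurecli
# Set custom DNS servers at the virtual network level.
az network vnet update \
  --resource-group MyResourceGroup \
  --name MyVNet \
  --dns-servers 10.0.0.4 10.0.0.5

# Override DNS servers for a single network interface;
# these take precedence over the virtual network setting.
az network nic update \
  --resource-group MyResourceGroup \
  --name MyVmNic \
  --dns-servers 10.0.0.4
```

Remember that running VMs pick up the change only after a DHCP lease renewal, as noted later in this article.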
> [!NOTE] > If you opt for a custom DNS server for your virtual network, you must specify at least one DNS server IP address; otherwise, the virtual network will ignore the configuration and use Azure-provided DNS instead.
-When you are using the classic deployment model, you can specify DNS servers for the virtual network in the Azure portal or the [Network Configuration file](/previous-versions/azure/reference/jj157100(v=azure.100)). For cloud services, you can specify DNS servers via the [Service Configuration file](/previous-versions/azure/reference/ee758710(v=azure.100)) or by using PowerShell, with [New-AzureVM](/powershell/module/servicemanagement/azure.service/new-azurevm).
+When you're using the classic deployment model, you can specify DNS servers for the virtual network in the Azure portal or the [Network Configuration file](/previous-versions/azure/reference/jj157100(v=azure.100)). For cloud services, you can specify DNS servers via the [Service Configuration file](/previous-versions/azure/reference/ee758710(v=azure.100)) or by using PowerShell, with [New-AzureVM](/powershell/module/servicemanagement/azure.service/new-azurevm).
> [!NOTE] > If you change the DNS settings for a virtual network or virtual machine that is already deployed, for the new DNS settings to take effect, you must perform a DHCP lease renewal on all affected VMs in the virtual network. For VMs running the Windows OS, you can do this by typing `ipconfig /renew` directly in the VM. The steps vary depending on the OS. See the relevant documentation for your OS type.
virtual-wan Virtual Wan About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-about.md
Previously updated : 06/07/2022 Last updated : 09/22/2022 # Customer intent: As someone with a networking background, I want to understand what Virtual WAN is and if it is the right choice for my Azure network.
If you have pre-existing routes in the Routing section for the hub in the Azure
* **Basic Virtual WAN Customers with pre-existing routes in virtual hub**: If you have pre-existing routes in Routing section for the hub in the Azure portal, you'll need to first delete them, then **upgrade** your Basic Virtual WAN to Standard Virtual WAN. See [Upgrade a virtual WAN from Basic to Standard](upgrade-virtual-wan.md). It's best to perform the delete step for all hubs in a virtual WAN.
-## Gated public preview
-
-The following features are currently in gated public preview. If, after working with the listed articles, you have questions or require support, please reach out the the contact alias that corresponds to the feature.
-
-| Feature | Description | Contact alias |
-| - | | |
-| Routing intent and policies enabling Inter-hub security | This feature allows you to configure internet-bound, private, or inter-hub traffic flow through the Azure Firewall. For more information, see [Routing intent and policies](../virtual-wan/how-to-routing-policies.md).| previewinterhub@microsoft.com |
-| Hub-to-hub over ER preview link | This feature allows traffic between 2 hubs traverse through the Azure Virtual WAN router in each hub and uses a hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft edge routers/MSEE). For more information, see [Hub-to-hub over ER preview link](virtual-wan-faq.md#expressroute-bow-tie).| previewpreferh2h@microsoft.com |
-| BGP peering with a virtual hub | This feature provides the ability for the virtual hub to pair with and directly exchange routing information through Border Gateway Protocol (BGP) routing protocol. For more information, see [BGP peering with a virtual hub](create-bgp-peering-hub-portal.md) and [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).| previewbgpwithvhub@microsoft.com |
-| Virtual hub routing preference | This features allows you to influence routing decisions for the virtual hub router. For more information, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md). | preview-vwan-hrp@microsoft.com |
- ## <a name="faq"></a>FAQ For frequently asked questions, see the [Virtual WAN FAQ](virtual-wan-faq.md).
-## <a name="new"></a>What's new?
+## <a name="new"></a>Previews and What's new?
-Subscribe to the RSS feed and view the latest Virtual WAN feature updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=VIRTUAL%20WAN) page.
+* For information about recent releases, previews underway, preview limitations, known issues, and deprecated functionality, see [What's new?](whats-new.md)
+* Subscribe to the RSS feed and view the latest Virtual WAN feature updates on the [Azure Updates - Virtual WAN](https://azure.microsoft.com/updates/?category=networking&query=VIRTUAL%20WAN) page.
## Next steps
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
+
+ Title: What's new in Azure Virtual WAN?
+description: Learn what's new with Azure Virtual WAN such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.
+++ Last updated : 09/23/2022+++
+# What's new in Azure Virtual WAN?
+
+Azure Virtual WAN is updated regularly. Stay up to date with the latest announcements. This article provides you with information about:
+
+* Recent releases
+* Previews underway with known limitations (if applicable)
+* Known issues
+* Deprecated functionality (if applicable)
+
+You can also find the latest Azure Virtual WAN updates and subscribe to the RSS feed [here](https://azure.microsoft.com/updates/?category=networking&query=Virtual%20WAN).
+
+## Recent releases
+
+| Type |Area |Name |Description | Date added | Limitations |
+| ||||||
+|Feature |ExpressRoute | [ExpressRoute circuit page now shows vWAN connection](virtual-wan-expressroute-portal.md)|| August 2022||
+|Feature | Site-to-site VPN | [BGP dashboard](monitor-bgp-dashboard.md)| Using the BGP dashboard, you can monitor BGP peers, advertised routes, and learned routes. The BGP dashboard is available for site-to-site VPNs that are configured to use BGP. |August 2022| |
+|Feature|Branch connectivity/Site-to-site VPN|[Multi-APIPA BGP](virtual-wan-site-to-site-portal.md)| Ability to specify multiple custom BGP IPs for VPN gateway instances in vWAN. |June 2022|Currently only available via portal. (Not available yet in PowerShell)|
+|SKU/Feature/Validation | Routing | [BGP end point (General availability)](scenario-bgp-peering-hub.md) | The virtual hub router now exposes the ability to peer with it, thereby exchanging routing information directly through Border Gateway Protocol (BGP) routing protocol. | June 2022 | |
+|Feature |Branch connectivity/Site-to-site VPN|Custom traffic selectors|Ability to specify what traffic selector pairs site-to-site VPN gateway negotiates|May 2022|Azure negotiates traffic selectors for all pairs of remote and local prefixes. You can't specify individual pairs of Traffic selectors to negotiate.|
+|Feature|Branch connectivity/Site-to-site VPN|[Site-to-site connection mode choices](virtual-wan-site-to-site-portal.md)|Ability to configure if customer or vWAN gateway should initiate the site-to-site connection while creating a new S2S connection.| February 2022|
+|Feature|Remote User connectivity/Point-to-site VPN|[Global profile include/exclude](global-hub-profile.md#include-or-exclude-a-hub-from-a-global-profile)|Ability to mark a point-to-site gateway as "excluded", meaning users who connect to global profile won't be load-balanced to that gateway.|February 2022| |
+|Feature|Branch connectivity/Site-to-site VPN|[Packet capture](packet-capture-site-to-site-portal.md)|Ability for customer to perform packet captures on site-to-site VPN gateway. |November 2021| |
+|Feature |Network Virtual Appliances (NVA)/Integrated Third-party solutions in Virtual WAN hubs| [Versa SD-WAN](about-nva-hub.md#partners)|Preview of Versa SD-WAN.|November 2021| |
+|Feature|Remote User connectivity/Point-to-site VPN|[Forced tunneling for P2S VPN](how-to-forced-tunnel.md)|Ability to force all traffic to Azure Virtual WAN for egress.|October 2021|Only available for Azure VPN Client version 2:1900:39.0 or newer.|
+|Feature|Remote User connectivity/Point-to-site VPN|[macOS Azure VPN client](openvpn-azure-ad-client-mac.md)|General Availability of Azure VPN Client for macOS.|August 2021| |
+|Feature|Network Virtual Appliances <br><br>(NVA)/Integrated Third-party solutions in Virtual WAN hubs|[Cisco Viptela, Barracuda and VMware (Velocloud) SD-WAN](about-nva-hub.md#partners) |General Availability of SD-WAN solutions in Virtual WAN.|June/July 2021| |
+|Feature|Branch connectivity/Site-to-site VPN<br><br>Remote User connectivity/Point-to-site VPN|[Hot-potato vs cold-potato routing for VPN traffic](virtual-wan-site-to-site-portal.md)|Ability to specify Microsoft or ISP POP preference for Azure VPN egress traffic. For more information, see [Routing preference in Azure](../virtual-network/ip-services/routing-preference-overview.md).|June 2021|This parameter can only be specified at gateway creation time and can't be modified after the fact.|
+|Feature|Remote User connectivity/Point-to-site VPN|[Remote RADIUS server](virtual-wan-point-to-site-portal.md)|Ability for a P2S VPN gateway to forward authentication traffic to a RADIUS server in a VNet connected to a different hub, or a RADIUS server hosted on-premises.|April 2021| |
+|Feature|Remote User connectivity/Point-to-site VPN|[Dual-RADIUS server](virtual-wan-point-to-site-portal.md)|Ability to specify primary and backup RADIUS servers to service authentication traffic.|March 2021| |
+|Feature|Routing|[0.0.0.0/0 via NVA in the spoke](scenario-route-through-nvas-custom.md)|Ability to send internet traffic to an NVA in spoke for egress.|March 2021| 0.0.0.0/0 doesn't propagate across hubs.<br><br>Can't specify multiple public prefixes with different next hop IP addresses.|
+|Feature|Branch connectivity/Site-to-site VPN|[NAT](nat-rules-vpn-gateway.md)|Ability to NAT overlapping addresses between site-to-site VPN branches, and between site-to-site VPN branches and Azure.|March 2021|NAT isn't supported with policy-based VPN connections.|
+|Feature|Remote User connectivity/Point-to-site VPN|[Custom IPsec policies](point-to-site-ipsec.md)|Ability to specify connection/encryption parameters for IKEv2 point-to-site connections.|March 2021|Only supported for IKEv2-based connections.<br><br>View the [list of available parameters](point-to-site-ipsec.md). |
+|SKU|Remote User connectivity/Point-to-site VPN|[Support up to 100K users connected to a single hub](about-client-address-pools.md)|Increased maximum number of concurrent users connected to a single gateway to 100,000.|March 2021| |
+|Feature|Remote User connectivity/Point-to-site VPN|Multiple-authentication methods|Ability for a single gateway to use multiple authentication mechanisms.|March 2021|Only supported for OpenVPN-based gateways.|
+
+## Preview
+
+The following features are currently in gated public preview. If, after working with the listed articles, you have questions or require support, reach out to the contact alias that corresponds to the feature.
+
+|Type of preview|Feature |Description|Contact alias|Limitations|
+||||||
+|Managed preview|Routing intent and policies enabling inter-hub security|This feature allows you to configure internet-bound, private, or inter-hub traffic flow through the Azure Firewall. For more information, see [Routing intent and policies](how-to-routing-policies.md).|For access to the preview, contact previewinterhub@microsoft.com|Not compatible with NVA in a spoke, but compatible with BGP peering.<br><br>For additional limitations, see [How to configure Virtual WAN hub routing intent and routing policies](how-to-routing-policies.md#key-considerations).|
+|Managed preview|Checkpoint NGFW|Deployment of Checkpoint NGFW NVA into the Virtual WAN hub|DL-vwan-support-preview@checkpoint.com, previewinterhub@microsoft.com|Same limitations as routing intent.<br><br>Doesn't support internet inbound scenario.|
+|Managed preview|Fortinet NGFW/SD-WAN|Deployment of Fortinet dual-role SD-WAN/NGFW NVA into the Virtual WAN hub|azurevwan@fortinet.com, previewinterhub@microsoft.com|Same limitations as routing intent.<br><br>Doesn't support internet inbound scenario.|
+|Public preview/Self serve|Virtual hub routing preference|This feature allows you to influence routing decisions for the virtual hub router. For more information, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md).|For questions or feedback, contact preview-vwan-hrp@microsoft.com|If a route-prefix is reachable via ER or VPN connections, and via virtual hub SD-WAN NVA, then the latter route is ignored by the route-selection algorithm. Therefore, flows to prefixes reachable only via the virtual hub SD-WAN NVA will take the route through the NVA. This is a limitation during the preview phase of the hub routing preference feature.|
+|Public preview/Self serve|Hub-to-hub traffic flows instead of an ER circuit connected to different hubs (Hub-to-hub over ER)|This feature allows traffic between 2 hubs to traverse the Azure Virtual WAN router in each hub and uses a hub-to-hub path, instead of the ExpressRoute path (which traverses through the Microsoft edge routers/MSEE). For more information, see the [Hub-to-hub over ER](virtual-wan-faq.md#expressroute-bow-tie) preview link.|For questions or feedback, contact preview-vwan-hrp@microsoft.com|
+
+## Known issues
+
+|#|Issue|Description |Date first reported|Mitigation|
+||||||
+|1|Virtual hub router upgrade: Compatibility with NVA in a hub.|For deployments with an NVA provisioned in the hub, the virtual hub router can't be upgraded to VMSS.| July 2022|The Virtual WAN team is working on a fix that will allow Virtual hub routers to be upgraded to VMSS, even if an NVA is provisioned in the hub. After upgrading, users will have to re-peer the NVA with the hub router's new IP addresses (instead of having to delete the NVA).|
+|2|Virtual hub router upgrade: Compatibility with NVA in a spoke VNet.|For deployments with an NVA provisioned in a spoke VNet, the customer will have to delete and recreate the BGP peering with the spoke NVA.|March 2022|The Virtual WAN team is working on a fix to remove the need for users to delete and recreate the BGP peering with a spoke NVA after upgrading.|
+|3|Virtual hub router upgrade: Spoke VNets in different region than the Virtual hub.|If one or more spoke VNets are in a different region than the virtual hub, then these VNet connections will have to be deleted and recreated after the hub router is upgraded.|August 2022|The Virtual WAN team is working on a fix to remove the need for users to delete and recreate these VNet connections after upgrading the hub router.|
+|4|Virtual hub router upgrade: More than 100 Spoke VNets connected to the Virtual hub.|If there are more than 100 spoke VNets connected to the virtual hub, then the virtual hub router can't be upgraded.|September 2022|The Virtual WAN team is working on removing this limitation of 100 spoke VNets connected to the virtual hub during the router upgrade.|
+
+## Next steps
+
+For more information about Azure Virtual WAN, see [What is Azure Virtual WAN](virtual-wan-about.md) and [frequently asked questions- FAQ](virtual-wan-faq.md).