Updates from: 12/30/2020 04:03:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/aws-clientvpn-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/aws-clientvpn-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/11/2020
+ms.date: 12/29/2020
ms.author: jeedes ---
@@ -86,7 +86,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Reply URL. Contact [AWS ClientVPN Client support team](https://aws.amazon.com/contact-us/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign on URL and Reply URL. The Sign on URL and Reply URL can have the same value (http://127.0.0.1:35001). Refer to [AWS Client VPN Documentation](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#ad) for details. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. Contact [AWS ClientVPN support team](https://aws.amazon.com/contact-us/) for any configuration issues.
1. In the Azure Active Directory service, navigate to **App registrations** and then select **All Applications**.
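For reference, here is a minimal, purely illustrative sketch (not part of the tutorial) of the two URL values described in the note above. Per the linked AWS Client VPN documentation, the desktop client is assumed to listen on the local loopback address for the SAML response, so the Sign on URL and Reply URL can both be `http://127.0.0.1:35001`; everything else in the snippet is scaffolding.

```python
# Illustrative only: the Sign on URL and Reply URL values from the note above.
# The loopback value comes from the tutorial note / AWS docs; nothing here is
# AWS- or Azure-specific code.
from urllib.parse import urlparse

SIGN_ON_URL = "http://127.0.0.1:35001"
REPLY_URL = "http://127.0.0.1:35001"

def is_loopback(url: str) -> bool:
    """True when the URL targets the local loopback interface."""
    return urlparse(url).hostname == "127.0.0.1"

# Both values are expected to be identical and local, per the tutorial note.
assert SIGN_ON_URL == REPLY_URL and is_loopback(SIGN_ON_URL)
```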
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/easysso-for-confluence-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/easysso-for-confluence-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 05/28/2020
+ms.date: 12/24/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate EasySSO for Confluence with Azur
* Enable your users to be automatically signed-in to Confluence with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -36,13 +34,12 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* EasySSO for Confluence supports **SP and IDP** initiated SSO * EasySSO for Confluence supports **Just In Time** user provisioning
-* Once you configure EasySSO for Confluence you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding EasySSO for Confluence from the gallery To configure the integration of EasySSO for Confluence into Azure AD, you need to add EasySSO for Confluence from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
@@ -50,11 +47,11 @@ To configure the integration of EasySSO for Confluence into Azure AD, you need t
1. Select **EasySSO for Confluence** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for EasySSO for Confluence
+## Configure and test Azure AD SSO for EasySSO for Confluence
Configure and test Azure AD SSO with EasySSO for Confluence using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in EasySSO for Confluence.
-To configure and test Azure AD SSO with EasySSO for Confluence, complete the following building blocks:
+To configure and test Azure AD SSO with EasySSO for Confluence, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -67,7 +64,7 @@ To configure and test Azure AD SSO with EasySSO for Confluence, complete the fol
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **EasySSO for Confluence** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **EasySSO for Confluence** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
@@ -131,19 +128,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **EasySSO for Confluence**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure EasySSO for Confluence SSO
-1. Sign into your Atlassian Confluence instance with Administrator privileges and navigate to the **Manage Apps** section.
+1. To automate the configuration within EasySSO for Confluence, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up EasySSO for Confluence** directs you to the EasySSO for Confluence application. From there, provide the admin credentials to sign in to EasySSO for Confluence. The browser extension automatically configures the application for you and automates steps 3-9.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to set up EasySSO for Confluence manually, sign in to your Atlassian Confluence instance with Administrator privileges and navigate to the **Manage Apps** section.
![Manage Apps](./media/easysso-for-confluence-tutorial/confluence-admin-1.png)
@@ -192,9 +193,9 @@ However, if you do not wish to enable automatic user provisioning on the user fi
### IdP-initiated workflow
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration using the My Apps.
-When you click the EasySSO for Confluence tile in the Access Panel, you should be automatically signed in to the Confluence instance for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+When you click the EasySSO for Confluence tile in the My Apps, you should be automatically signed in to the Confluence instance for which you set up SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
### SP-initiated workflow
@@ -212,16 +213,6 @@ In this case you have to follow the [instructions on this page]( https://techtim
Should you have any issues digesting the log messages, please contact [EasySSO support team](mailto:support@techtime.co.nz).
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)--- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)--- [Try EasySSO for Confluence with Azure AD](https://aad.portal.azure.com/)--- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect EasySSO for Confluence with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure EasySSO for Confluence you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/ekarda-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ekarda-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 06/15/2020
+ms.date: 12/24/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate ekarda with Azure Active Directo
* Enable your users to be automatically signed in to ekarda by using their Azure AD accounts. * Manage your accounts in one central location: the Azure portal.
-To learn more about software as a service (SaaS) app integration with Azure AD, see [What is single sign-on (SSO)?](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -36,13 +34,12 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* ekarda supports SP-initiated and IDP-initiated SSO. * ekarda supports just-in-time user provisioning.
-* After you configure ekarda, you can enforce session control. This precaution protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access App Control. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Add ekarda from the gallery To configure the integration of ekarda into Azure AD, add ekarda from the gallery to your list of managed SaaS apps:
-1. Sign in to the [Azure portal](https://portal.azure.com) by using a work or school account or a personal Microsoft account.
+1. Sign in to the Azure portal by using a work or school account or a personal Microsoft account.
1. On the left pane, select the **Azure Active Directory** service. 1. Go to **Enterprise Applications**, and then select **All Applications**.
@@ -50,11 +47,11 @@ To configure the integration of ekarda into Azure AD, add ekarda from the galler
1. In the **Add from the gallery** section, type **ekarda** in the search box. 1. Select **ekarda** from results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for ekarda
+## Configure and test Azure AD SSO for ekarda
Configure and test Azure AD SSO with ekarda by using a test user called **B.Simon**. For SSO to work, you need to establish a linked relationship between an Azure AD user and the related user in ekarda.
-To configure and test Azure AD SSO with ekarda, complete the following steps:
+To configure and test Azure AD SSO with ekarda, perform the following steps:
1. [Configure Azure AD SSO](#configure-azure-ad-sso) to enable your users to use this feature.
@@ -68,7 +65,7 @@ To configure and test Azure AD SSO with ekarda, complete the following steps:
Follow these steps in the Azure portal to enable Azure AD SSO:
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Sign in to the Azure portal.
1. On the **ekarda** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, select the pencil icon to edit the **Basic SAML Configuration** settings.
@@ -92,7 +89,7 @@ Follow these steps in the Azure portal to enable Azure AD SSO:
1. If you want to configure the application in SP-initiated mode, select **Set additional URLs** and do this:
- * In the **Sign-on URL** text box, type a URL that follows this pattern:
+ In the **Sign-on URL** text box, type a URL that follows this pattern:
`https://my.ekarda.com/users/saml_sso/<COMPANY_ID>` > [!NOTE]
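As a quick illustration of the pattern above (not part of the tutorial), the SP-initiated Sign-on URL can be assembled from the company identifier ekarda provides; `contoso` below is a made-up placeholder for `<COMPANY_ID>`.

```python
# Hypothetical helper: fills the <COMPANY_ID> placeholder in the Sign-on URL pattern.
def ekarda_sign_on_url(company_id: str) -> str:
    return f"https://my.ekarda.com/users/saml_sso/{company_id}"

print(ekarda_sign_on_url("contoso"))  # -> https://my.ekarda.com/users/saml_sso/contoso
```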
@@ -127,19 +124,24 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **ekarda**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![Screenshot of the Manage section, with Users and groups highlighted.](common/users-groups-blade.png)
- 1. Select **Add user**, and then select **Users and groups** in the **Add Assignment** dialog box.
- ![Screenshot of the Users and groups section, with Add user highlighted.](common/add-assign-user.png)
- 1. In the **Users and groups** dialog box, select **B.Simon** from the list of users. Then, choose **Select** at the bottom of the screen.
-1. If you expect any role value in the SAML assertion, select the appropriate role for the user from the list in the **Select Role** dialog box. Then, choose **Select** at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog box, select **Assign**. ## Configure ekarda SSO
-1. In a different web-browser window, sign in to your ekarda company site as an administrator.
+1. To automate the configuration within ekarda, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up ekarda** directs you to the ekarda application. From there, provide the admin credentials to sign in to ekarda. The browser extension automatically configures the application for you and automates steps 3-6.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up ekarda manually, in a different web browser window, sign in to your ekarda company site as an administrator.
+ 1. Select **Admin** > **My Account**. ![Screenshot of ekarda site UI with My Account highlighted on the Admin menu.](./media/ekarda-tutorial/ekarda.png)
@@ -164,16 +166,20 @@ In this section, a user called B.Simon is created in ekarda. ekarda supports jus
## Test SSO
-In this section, you test your Azure AD single sign-on configuration by using the My Apps portal.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to ekarda Sign on URL where you can initiate the login flow.
+
+* Go to ekarda Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the ekarda site for which you set up the SSO.
-When you select the ekarda tile in the My Apps portal, you should be automatically signed in to the ekarda site for which you set up SSO. For more information about the My Apps portal, see [Introduction to the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in either mode. When you click the ekarda tile in the My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you should be automatically signed in to the ekarda site for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
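If you want a rough, unofficial sanity check of the SP-initiated handoff outside the browser, a sketch like the following can help. It assumes (the tutorial does not state this) that the ekarda Sign-on URL responds with an HTTP redirect toward Azure AD, and it reuses the made-up `contoso` company ID from the earlier example.

```python
# Unofficial check of the SP-initiated handoff: request the Sign-on URL without
# following redirects and inspect where it points. A Location header on
# login.microsoftonline.com suggests the SAML request is being sent to Azure AD.
import requests

resp = requests.get(
    "https://my.ekarda.com/users/saml_sso/contoso",  # hypothetical company ID
    allow_redirects=False,
    timeout=10,
)
print(resp.status_code, resp.headers.get("Location", ""))
```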
-## Additional resources
+## Next steps
-* [List of tutorials for integrating SaaS apps with Azure Active Directory](./tutorial-list.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
-* [Try ekarda with Azure AD](https://aad.portal.azure.com/)
-* Use [ekarda's enterprise eCard solution](https://ekarda.com/ecards-ecards-with-logo-for-business-corporate-enterprise) to provision any number of your staff to send eCards, branded with your company logo, to their clients and colleagues. Learn more about [provisioning ekarda as an SSO solution](https://support.ekarda.com/#SSO-Implementation).
-* [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
-* [How to protect ekarda with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
\ No newline at end of file
+After you configure ekarda, you can enforce session control. This precaution protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access App Control. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/invision-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/invision-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 04/09/2020
+ms.date: 12/24/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate InVision with Azure Active Direc
* Enable your users to be automatically signed-in to InVision with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -35,37 +33,36 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * InVision supports **SP and IDP** initiated SSO
-* Once you configure InVision you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding InVision from the gallery To configure the integration of InVision into Azure AD, you need to add InVision from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **InVision** in the search box. 1. Select **InVision** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for InVision
+## Configure and test Azure AD SSO for InVision
Configure and test Azure AD SSO with InVision using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in InVision.
-To configure and test Azure AD SSO with InVision, complete the following building blocks:
+To configure and test Azure AD SSO with InVision, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure InVision SSO](#configure-invision-sso)** - to configure the single sign-on settings on application side.
- * **[Create InVision test user](#create-invision-test-user)** - to have a counterpart of B.Simon in InVision that is linked to the Azure AD representation of user.
+ 1. **[Create InVision test user](#create-invision-test-user)** - to have a counterpart of B.Simon in InVision that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **InVision** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **InVision** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
@@ -115,19 +112,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **InVision**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure InVision SSO
-1. In a different web browser window, sign into InVision site as an administrator.
+1. To automate the configuration within InVision, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up InVision** directs you to the InVision application. From there, provide the admin credentials to sign in to InVision. The browser extension automatically configures the application for you and automates steps 3-6.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up InVision manually, in a different web browser window, sign in to your InVision company site as an administrator.
1. Click on **Team** and select **Settings**.
@@ -183,20 +184,20 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the InVision tile in the Access Panel, you should be automatically signed in to the InVision for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to InVision Sign on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to InVision Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the InVision instance for which you set up the SSO.
-- [Try InVision with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in either mode. When you click the InVision tile in the My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you should be automatically signed in to the InVision instance for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect InVision with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure InVision you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/litmus-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/litmus-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 04/06/2020
+ms.date: 12/24/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Litmus with Azure Active Directo
* Enable your users to be automatically signed-in to Litmus with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -35,13 +33,12 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Litmus supports **SP and IDP** initiated SSO
-* Once you configure Litmus you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding Litmus from the gallery To configure the integration of Litmus into Azure AD, you need to add Litmus from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
@@ -49,11 +46,11 @@ To configure the integration of Litmus into Azure AD, you need to add Litmus fro
1. Select **Litmus** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Litmus
+## Configure and test Azure AD SSO for Litmus
Configure and test Azure AD SSO with Litmus using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Litmus.
-To configure and test Azure AD SSO with Litmus, complete the following building blocks:
+To configure and test Azure AD SSO with Litmus, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -66,7 +63,7 @@ To configure and test Azure AD SSO with Litmus, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Litmus** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Litmus** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
@@ -109,19 +106,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **Litmus**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Litmus SSO
-1. In a different web browser window, sign into Litmus application as an administrator.
+1. To automate the configuration within Litmus, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Litmus** directs you to the Litmus application. From there, provide the admin credentials to sign in to Litmus. The browser extension automatically configures the application for you and automates steps 3-6.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Litmus manually, in a different web browser window, sign in to your Litmus company site as an administrator.
1. Click on the **Security** from the left navigation panel.
@@ -173,20 +174,20 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Litmus tile in the Access Panel, you should be automatically signed in to the Litmus for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to Litmus Sign on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to Litmus Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Litmus instance for which you set up the SSO.
-- [Try Litmus with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Litmus tile in the My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you should be automatically signed in to the Litmus instance for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Litmus with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure Litmus you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/meraki-dashboard-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/meraki-dashboard-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/07/2020
+ms.date: 12/28/2020
ms.author: jeedes ---
@@ -128,7 +128,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select a role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
![user role](./media/meraki-dashboard-tutorial/user-role.png)
@@ -139,7 +139,15 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Meraki Dashboard SSO
-1. In a different web browser window, sign into meraki dashboard as an administrator.
+1. To automate the configuration within Meraki Dashboard, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Meraki Dashboard** directs you to the Meraki Dashboard application. From there, provide the admin credentials to sign in to Meraki Dashboard. The browser extension automatically configures the application for you and automates steps 3-7.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Meraki Dashboard manually, in a different web browser window, sign in to your Meraki Dashboard company site as an administrator.
1. Navigate to **Organization** -> **Settings**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/opsgenie-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/opsgenie-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 03/19/2020
+ms.date: 12/28/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate OpsGenie with Azure Active Direc
* Enable your users to be automatically signed-in to OpsGenie with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -35,37 +33,36 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * OpsGenie supports **IDP** initiated SSO
-* Once you configure OpsGenie you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding OpsGenie from the gallery To configure the integration of OpsGenie into Azure AD, you need to add OpsGenie from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **OpsGenie** in the search box. 1. Select **OpsGenie** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for OpsGenie
+## Configure and test Azure AD SSO for OpsGenie
Configure and test Azure AD SSO with OpsGenie using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in OpsGenie.
-To configure and test Azure AD SSO with OpsGenie, complete the following building blocks:
+To configure and test Azure AD SSO with OpsGenie, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure OpsGenie SSO](#configure-opsgenie-sso)** - to configure the single sign-on settings on application side.
- * **[Create OpsGenie test user](#create-opsgenie-test-user)** - to have a counterpart of B.Simon in OpsGenie that is linked to the Azure AD representation of user.
+ 1. **[Create OpsGenie test user](#create-opsgenie-test-user)** - to have a counterpart of B.Simon in OpsGenie that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **OpsGenie** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **OpsGenie** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
@@ -110,19 +107,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **OpsGenie**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure OpsGenie SSO
-1. Open another browser instance, and then sign into OpsGenie as an administrator.
+1. To automate the configuration within OpsGenie, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up OpsGenie** directs you to the OpsGenie application. From there, provide the admin credentials to sign in to OpsGenie. The browser extension automatically configures the application for you and automates steps 3-7.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up OpsGenie manually, in a different web browser window, sign in to your OpsGenie company site as an administrator.
2. Click **Settings**, and then click the **Single Sign On** tab.
@@ -179,18 +180,12 @@ The objective of this section is to create a user called B.Simon in OpsGenie.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the OpsGenie tile in the Access Panel, you should be automatically signed in to the OpsGenie for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the OpsGenie instance for which you set up the SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the OpsGenie tile in the My Apps, you should be automatically signed in to the OpsGenie for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try OpsGenie with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+* Once you configure OpsGenie you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/slack-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/slack-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 08/24/2020
+ms.date: 12/28/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Slack with Azure Active Director
* Enable your users to be automatically signed-in to Slack with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -43,7 +41,6 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Slack supports **SP** initiated SSO * Slack supports **Just In Time** user provisioning * Slack supports [**Automated** user provisioning](./slack-provisioning-tutorial.md)
-* Once you configure Slack you can enforce Session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
@@ -52,7 +49,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Slack into Azure AD, you need to add Slack from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
@@ -63,20 +60,20 @@ To configure the integration of Slack into Azure AD, you need to add Slack from
Configure and test Azure AD SSO with Slack using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Slack.
-To configure and test Azure AD SSO with Slack, complete the following building blocks:
+To configure and test Azure AD SSO with Slack, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Slack SSO](#configure-slack-sso)** - to configure the single sign-on settings on application side.
- * **[Create Slack test user](#create-slack-test-user)** - to have a counterpart of B.Simon in Slack that is linked to the Azure AD representation of user.
+ 1. **[Create Slack test user](#create-slack-test-user)** - to have a counterpart of B.Simon in Slack that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ### Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Slack** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Slack** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
@@ -145,19 +142,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the applications list, select **Slack**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Slack SSO
-1. In a different web browser window, sign in to your Slack company site as an administrator.
+1. To automate the configuration within Slack, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Slack** directs you to the Slack application. From there, provide the admin credentials to sign in to Slack. The browser extension automatically configures the application for you and automates steps 3-6.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Slack manually, in a different web browser window, sign in to your Slack company site as an administrator.
2. Navigate to **Microsoft Azure AD** then go to **Team Settings**.
@@ -200,18 +201,14 @@ The objective of this section is to create a user called B.Simon in Slack. Slack
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Slack tile in the Access Panel, you should be automatically signed in to the Slack for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in Azure portal. This will redirect to Slack Sign-on URL where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to Slack Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the Slack tile in the My Apps, this will redirect to Slack Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try Slack with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure Slack you can enforce Session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/tableauonline-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tableauonline-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 01/31/2020
+ms.date: 12/28/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory single sign-on (SSO) integration with Tableau Online
@@ -20,8 +20,6 @@ In this tutorial, you'll learn how to integrate Tableau Online with Azure Active
* Enable your users to be automatically signed-in to Tableau Online with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -34,25 +32,24 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment. * Tableau Online supports **SP** initiated SSO
-* Once you configure Tableau Online you can enforce Session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
## Adding Tableau Online from the gallery To configure the integration of Tableau Online into Azure AD, you need to add Tableau Online from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Tableau Online** in the search box. 1. Select **Tableau Online** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Tableau Online
In this section, you configure and test Azure AD single sign-on with Tableau Online based on a test user called **Britta Simon**. For single sign-on to work, a link relationship between an Azure AD user and the related user in Tableau Online needs to be established.
-To configure and test Azure AD SSO with Tableau Online, complete the following building blocks:
+To configure and test Azure AD SSO with Tableau Online, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -63,25 +60,15 @@ To configure and test Azure AD SSO with Tableau Online, complete the following b
### Configure Azure AD SSO
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with Tableau Online, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Tableau Online** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Tableau Online** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Tableau Online Domain and URLs single sign-on information](common/sp-identifier.png)
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
a. In the **Sign on URL** text box, type the URL: `https://sso.online.tableau.com/public/sp/login?alias=<entityid>`
@@ -100,66 +87,45 @@ To configure Azure AD single sign-on with Tableau Online, perform the following
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon\@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Tableau Online.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Tableau Online.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Tableau Online**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Tableau Online**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-2. In the applications list, select **Tableau Online**.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The Tableau Online link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure Tableau Online SSO
- ![The Add Assignment pane](common/add-assign-user.png)
+1. To automate the configuration within Tableau Online, you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+ ![My apps extension](common/install-myappssecure-extension.png)
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+2. After adding the extension to the browser, clicking **Set up Tableau Online** will direct you to the Tableau Online application. From there, provide the admin credentials to sign in to Tableau Online. The browser extension will automatically configure the application for you and automate steps 3-7.
-7. In the **Add Assignment** dialog click the **Assign** button.
+ ![Setup configuration](common/setup-sso.png)
-## Configure Tableau Online SSO
+3. If you want to set up Tableau Online manually, open a different web browser window and sign in to your Tableau Online company site as an administrator.
-1. In a different browser window, sign-on to your Tableau Online application. Go to **Settings** and then **Authentication**.
+1. Go to **Settings** and then **Authentication**.
![Screenshot shows Authentication selected from the Settings menu.](./media/tableauonline-tutorial/tutorial_tableauonline_09.png)
@@ -217,16 +183,14 @@ In this section, you create a user called Britta Simon in Tableau Online.
### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Tableau Online tile in the Access Panel, you should be automatically signed in to the Tableau Online for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Tableau Online Sign-on URL, where you can initiate the login flow.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Go to Tableau Online Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Tableau Online tile in My Apps, you are redirected to the Tableau Online Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure Tableau Online, you can enforce Session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
\ No newline at end of file
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compliance/azure-services-in-fedramp-auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
@@ -125,7 +125,7 @@ This article provides a detailed list of in-scope cloud services across Azure Pu
| [Dynamics 365 Service Omni-Channel Engagement Hub](/dynamics365/omnichannel/introduction-omnichannel) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Dynamics 365 Customer Engagement (Common Data Service)](/powerapps/maker/common-data-service/data-platform-intro) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Event Grid](https://azure.microsoft.com/services/event-grid/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Microsoft Defender Advanced Threat Protection](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Flow](/flow/getting-started) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
@@ -289,7 +289,7 @@ This article provides a detailed list of in-scope cloud services across Azure Pu
| [Microsoft Azure Peering Service](../../peering-service/about.md) | :heavy_check_mark: | | | | :heavy_check_mark: | | [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security)| :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
-| [Microsoft Defender Advanced Threat Protection](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Microsoft Graph](/graph/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Microsoft PowerApps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | [Microsoft Stream](/stream/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/source-map-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/source-map-support.md
@@ -2,8 +2,8 @@
title: Source map support for JavaScript applications - Azure Monitor Application Insights description: Learn how to upload source maps to your own storage account Blob container using Application Insights. ms.topic: conceptual
-author: markwolff
-ms.author: marwolff
+author: DavidCBerry13
+ms.author: daberry
ms.date: 06/23/2020 ms.custom: devx-track-js ---
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/logs-dedicated-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/log-query/logs-dedicated-clusters.md
@@ -51,6 +51,20 @@ If your workspace is using legacy Per Node pricing tier, when it is linked to a
More details on billing for Log Analytics dedicated clusters are available [here](https://docs.microsoft.com/azure/azure-monitor/platform/manage-cost-storage#log-analytics-dedicated-clusters).
+## Asynchronous operations and status check
+
+Some of the configuration steps run asynchronously because they can't be completed quickly. The status in the response can be one of the following: 'InProgress', 'Updating', 'Deleting', 'Succeeded', or 'Failed' including the error code. When using REST, the response initially returns an HTTP status code 200 (OK) and a header with the Azure-AsyncOperation property when the request is accepted:
+
+```JSON
+"Azure-AsyncOperation": "https://management.azure.com/subscriptions/subscription-id/providers/Microsoft.OperationalInsights/locations/region-name/operationStatuses/operation-id?api-version=2020-08-01"
+```
+
+You can check the status of the asynchronous operation by sending a GET request to the Azure-AsyncOperation header value:
+
+```rst
+GET https://management.azure.com/subscriptions/subscription-id/providers/microsoft.operationalInsights/locations/region-name/operationstatuses/operation-id?api-version=2020-08-01
+Authorization: Bearer <token>
+```
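
If you want to script this status check, one option is the Azure CLI's generic `az rest` command; the sketch below assumes the operation URL copied from the Azure-AsyncOperation header (a placeholder here) and prints only the status field.

```azurecli
# Placeholder URL - copy the real value from the Azure-AsyncOperation response header.
OPERATION_URL="https://management.azure.com/subscriptions/subscription-id/providers/Microsoft.OperationalInsights/locations/region-name/operationStatuses/operation-id?api-version=2020-08-01"

# az rest signs the request with your current Azure CLI login, so no explicit
# Bearer token is needed. --query extracts just the status field.
az rest --method get --uri "$OPERATION_URL" --query status --output tsv
```

The command prints a value such as InProgress or Succeeded.
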
## Creating a cluster
@@ -85,7 +99,7 @@ Get-Job -Command "New-AzOperationalInsightsCluster*" | Format-List -Property *
*Call* ```rst
-PUT https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-03-01-preview
+PUT https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01
Authorization: Bearer <token> Content-type: application/json
@@ -108,7 +122,7 @@ Content-type: application/json
Should be 200 OK and a header.
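
As an alternative to the raw REST call, a cluster can also be created from the Azure CLI; the sketch below uses placeholder names and assumes the `az monitor log-analytics cluster create` command from a recent CLI version.

```azurecli
# Placeholder resource group, cluster name, and region.
# Capacity must be 1000 GB/day or more, in increments of 100.
az monitor log-analytics cluster create \
  --resource-group "resource-group-name" \
  --name "cluster-name" \
  --location "region-name" \
  --sku-capacity 1000
```
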
-### Check provisioning status
+## Check cluster provisioning status
The provisioning of the Log Analytics cluster takes a while to complete. You can check the provisioning state in several ways:
@@ -122,7 +136,7 @@ The provisioning of the Log Analytics cluster takes a while to complete. You can
- Send a GET request on the *Cluster* resource and look at the *provisioningState* value. The value is *ProvisioningAccount* while provisioning and *Succeeded* when completed. ```rst
- GET https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-03-01-preview
+ GET https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01
Authorization: Bearer <token> ```
@@ -154,11 +168,34 @@ The provisioning of the Log Analytics cluster takes a while to complete. You can
The *principalId* GUID is generated by the managed identity service for the *Cluster* resource.
+## Check workspace link status
+
+Perform a GET operation on the workspace and check whether the *clusterResourceId* property is present in the response under *features*. A linked workspace will have the *clusterResourceId* property.
+
+**CLI**
+
+```azurecli
+az monitor log-analytics cluster show --resource-group "resource-group-name" --name "cluster-name"
+```
+
+**PowerShell**
+
+```powershell
+Get-AzOperationalInsightsWorkspace -ResourceGroupName "resource-group-name" -Name "workspace-name"
+```
+
+**REST**
+
+```rest
+GET https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>?api-version=2020-08-01
+Authorization: Bearer <token>
+```
+ ## Change cluster properties After you create your *Cluster* resource and it is fully provisioned, you can edit additional properties at the cluster level using PowerShell or REST API. Other than the properties that are available during cluster creation, additional properties can only be set after the cluster has been provisioned: -- **keyVaultProperties**: Used to configure the Azure Key Vault used to provision an [Azure Monitor customer-managed key](../platform/customer-managed-keys.md#customer-managed-key-provisioning-procedure). It contains the following parameters: *KeyVaultUri*, *KeyName*, *KeyVersion*.
+- **keyVaultProperties**: Used to configure the Azure Key Vault used to provision an [Azure Monitor customer-managed key](../platform/customer-managed-keys.md#customer-managed-key-provisioning). It contains the following parameters: *KeyVaultUri*, *KeyName*, *KeyVersion*.
- **billingType** - The *billingType* property determines the billing attribution for the *cluster* resource and its data: - **Cluster** (default) - The Capacity Reservation costs for your Cluster are attributed to the *Cluster* resource. - **Workspaces** - The Capacity Reservation costs for your Cluster are attributed proportionately to the workspaces in the Cluster, with the *Cluster* resource being billed some of the usage if the total ingested data for the day is under the Capacity Reservation. See [Log Analytics Dedicated Clusters](../platform/manage-cost-storage.md#log-analytics-dedicated-clusters) to learn more about the Cluster pricing model.
@@ -183,7 +220,7 @@ For example:
*Call* ```rst
-PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-03-01-preview
+PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01
Authorization: Bearer <token> Content-type: application/json
@@ -212,7 +249,7 @@ Content-type: application/json
### Check cluster update status
-The propagation of the Key identifier takes a few minutes to complete. You can check the update state in two ways:
+The propagation of the Key identifier takes a while to complete. You can check the update state in two ways:
- Copy the Azure-AsyncOperation URL value from the response and follow the asynchronous operations status check.
@@ -294,7 +331,7 @@ Use the following REST call to link to a cluster:
*Send* ```rst
-PUT https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>/linkedservices/cluster?api-version=2020-03-01-preview
+PUT https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>/linkedservices/cluster?api-version=2020-08-01
Authorization: Bearer <token> Content-type: application/json
@@ -324,7 +361,7 @@ A send request looks like the following:
*Send* ```rest
-GET https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalInsights/workspaces/<workspace-name>?api-version=2020-03-01-preview
+GET https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalInsights/workspaces/<workspace-name>?api-version=2020-08-01
Authorization: Bearer <token> ```
@@ -402,7 +439,213 @@ Use the following REST call to delete a cluster:
200 OK
+## Get all clusters in a resource group
+
+**CLI**
+
+```azurecli
+az monitor log-analytics cluster list --resource-group "resource-group-name"
+```
+
+**PowerShell**
+
+```powershell
+Get-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name"
+```
+
+**REST**
+
+*Call*
+
+ ```rst
+ GET https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters?api-version=2020-08-01
+ Authorization: Bearer <token>
+ ```
+
+*Response*
+
+ ```json
+ {
+ "value": [
+ {
+ "identity": {
+ "type": "SystemAssigned",
+ "tenantId": "tenant-id",
+ "principalId": "principal-Id"
+ },
+ "sku": {
+ "name": "capacityReservation",
+ "capacity": 1000,
+ "lastSkuUpdate": "Sun, 22 Mar 2020 15:39:29 GMT"
+ },
+ "properties": {
+ "keyVaultProperties": {
+ "keyVaultUri": "https://key-vault-name.vault.azure.net",
+ "keyName": "key-name",
+ "keyVersion": "current-version"
+ },
+ "provisioningState": "Succeeded",
+ "billingType": "cluster",
+ "clusterId": "cluster-id"
+ },
+      "id": "/subscriptions/subscription-id/resourcegroups/resource-group-name/providers/microsoft.operationalinsights/clusters/cluster-name",
+ "name": "cluster-name",
+ "type": "Microsoft.OperationalInsights/clusters",
+ "location": "region-name"
+ }
+ ]
+ }
+ ```
+
+## Get all clusters in a subscription
+
+**CLI**
+
+```azurecli
+az monitor log-analytics cluster list
+```
+
+**PowerShell**
+
+```powershell
+Get-AzOperationalInsightsCluster
+```
+
+**REST**
+
+*Call*
+
+```rst
+GET https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.OperationalInsights/clusters?api-version=2020-08-01
+Authorization: Bearer <token>
+```
+
+*Response*
+
+The same as for 'clusters in a resource group', but in subscription scope.
+
+## Update capacity reservation in cluster
+
+When the data volume to your linked workspaces changes over time, you may want to update the capacity reservation level appropriately. The capacity is specified in units of GB and can have values of 1000 GB/day or more, in increments of 100 GB/day. Note that you don't have to provide the full REST request body, but it should include the sku.
+
+**CLI**
+
+```azurecli
+az monitor log-analytics cluster update --name "cluster-name" --resource-group "resource-group-name" --sku-capacity daily-ingestion-gigabyte
+```
+
+**PowerShell**
+
+```powershell
+Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity daily-ingestion-gigabyte
+```
+
+**REST**
+
+*Call*
+
+ ```rst
+ PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01
+ Authorization: Bearer <token>
+ Content-type: application/json
+
+ {
+ "sku": {
+ "name": "capacityReservation",
+ "Capacity": 2000
+ }
+ }
+ ```
+
+## Update billingType in cluster
+
+The *billingType* property determines the billing attribution for the cluster and its data:
+- *cluster* (default) -- The billing is attributed to the subscription hosting your Cluster resource
+- *workspaces* -- The billing is attributed to the subscriptions hosting your workspaces proportionally
+
+**REST**
+
+*Call*
+
+ ```rst
+ PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01
+ Authorization: Bearer <token>
+ Content-type: application/json
+
+ {
+ "properties": {
+ "billingType": "cluster",
+ }
+ }
+ ```
+
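If you prefer the Azure CLI to a raw REST client, the same PATCH can be sent with the generic `az rest` command; the subscription, resource group, and cluster names below are placeholders, and the example sets the *workspaces* billing attribution.

```azurecli
# Placeholder subscription, resource group, and cluster names.
az rest --method patch \
  --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01" \
  --headers "Content-Type=application/json" \
  --body '{"properties": {"billingType": "workspaces"}}'
```
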
+## Limits and constraints
+
+- The maximum number of clusters per region and subscription is 2.
+
+- The maximum number of workspaces that can be linked to a cluster is 1000.
+
+- You can link a workspace to your cluster and then unlink it. The number of workspace link operations on a particular workspace is limited to 2 in a period of 30 days.
+
+- Workspace link to cluster should be carried out ONLY after you have verified that the Log Analytics cluster provisioning was completed. Data sent to your workspace prior to the completion will be dropped and won't be recoverable.
+
+- Moving a cluster to another resource group or subscription isn't currently supported.
+
+- Workspace link to cluster will fail if the workspace is already linked to another cluster.
+
+- Lockbox isn't available in China currently.
+
+- [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) is configured automatically for clusters created from October 2020 in supported regions. You can verify if your cluster is configured for Double encryption by a GET request on the cluster and observing the `"isDoubleEncryptionEnabled"` property value - it's `true` for clusters with Double encryption enabled.
+  - If you create a cluster and get an error "<region-name> doesn't support Double Encryption for clusters.", you can still create the cluster without Double Encryption. Add the `"properties": {"isDoubleEncryptionEnabled": false}` property in the REST request body.
+  - The Double encryption setting cannot be changed after the cluster has been created.
+
+## Troubleshooting
+
+- If you get a conflict error when creating a cluster, it may be that you have deleted your cluster in the last 14 days and it's in a soft-delete state. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
+
+- If you update your cluster while an operation is in progress, the operation will fail.
+
+- Some operations are long and can take a while to complete -- these are cluster create, cluster key update and cluster delete. You can check the operation status in two ways:
+ - When using REST, copy the Azure-AsyncOperation URL value from the response and follow the [asynchronous operations status check](#asynchronous-operations-and-status-check).
+  - Send a GET request to the cluster or workspace and observe the response. For example, an unlinked workspace won't have the *clusterResourceId* under *features*.
+
+- Error messages
+
+ Cluster Create:
+ - 400 -- Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.
+ - 400 -- The body of the request is null or in bad format.
+ - 400 -- SKU name is invalid. Set SKU name to capacityReservation.
+ - 400 -- Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
+ - 400 -- Missing Capacity in SKU. Set Capacity value to 1000 or higher in steps of 100 (GB).
+  - 400 -- Capacity in SKU is not in range. Should be minimum 1000 and up to the max allowed capacity which is available under 'Usage and estimated cost' in your workspace.
+ - 400 -- Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
+ - 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 1000 or higher in steps of 100 (GB).
+ - 400 -- Identity is null or empty. Set Identity with systemAssigned type.
+ - 400 -- KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
+ - 400 -- Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
+
+ Cluster Update
+  - 400 -- Cluster is in deleting state. Async operation is in progress. Cluster must complete its operation before any update operation is performed.
+  - 400 -- KeyVaultProperties is not empty but has a bad format. See [key identifier update](../platform/customer-managed-keys.md#update-cluster-with-key-identifier-details).
+  - 400 -- Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesn't exist. Verify that you [set key and access policy](../platform/customer-managed-keys.md#grant-key-vault-permissions) in Key Vault.
+ - 400 -- Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)
+ - 400 -- Operation cannot be executed now. Wait for the Async operation to complete and try again.
+ - 400 -- Cluster is in deleting state. Wait for the Async operation to complete and try again.
+
+ Cluster Get:
+  - 404 -- Cluster not found, the cluster may have been deleted. If you try to create a cluster with that name and get a conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it, or use another name to create a new cluster.
+
+ Cluster Delete
+ - 409 -- Can't delete a cluster while in provisioning state. Wait for the Async operation to complete and try again.
+
+ Workspace link:
+  - 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.
+  - 409 -- Workspace link or unlink operation in progress.
+  - 400 -- Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get a conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
+ Workspace unlink:
+  - 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.
+  - 409 -- Workspace link or unlink operation in progress.
## Next steps
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/customer-managed-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/customer-managed-keys.md
@@ -11,7 +11,7 @@ ms.date: 11/18/2020
# Azure Monitor customer-managed key
-This article provides background information and steps to configure customer-Managed keys for your Log Analytics workspaces. Once configured, any data sent to your workspaces is encrypted with your Azure Key Vault key.
+Data in Azure Monitor is encrypted with Microsoft-managed keys. You can use your own encryption key to protect the data and saved queries in your workspaces. When you specify a customer-managed key, that key is used to protect and control access to your data and once configured, any data sent to your workspaces is encrypted with your Azure Key Vault key. Customer-managed keys offer greater flexibility to manage access controls.
We recommend you review [Limitations and constraints](#limitationsandconstraints) below before configuration.
@@ -19,23 +19,25 @@ We recommend you review [Limitations and constraints](#limitationsandconstraints
[Encryption at Rest](../../security/fundamentals/encryption-atrest.md) is a common privacy and security requirement in organizations. You can let Azure completely manage encryption at rest, while you have various options to closely manage encryption and encryption keys.
-Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). Azure Monitor also provides an option for encryption using your own key that is stored in your [Azure Key Vault](../../key-vault/general/overview.md) and give you the control to revoke the access to your data at any time. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-encryption) operates.
+Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). Azure Monitor also provides an option for encryption using your own key that is stored in your [Azure Key Vault](../../key-vault/general/overview.md), which gives you the control to revoke the access to your data at any time. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-encryption) operates.
-Customer-Managed key is delivered on dedicated Log Analytics clusters providing higher protection level and control. Data ingested to dedicated clusters is being encrypted twice ΓÇö once at the service level using Microsoft-managed keys or customer-Managed keys, and once at the infrastructure level using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox-preview) control.
+Customer-Managed key is delivered on [dedicated clusters](../log-query/logs-dedicated-clusters.md) providing a higher protection level and control. Data ingested to dedicated clusters is encrypted twice: once at the service level using Microsoft-managed keys or customer-Managed keys, and once at the infrastructure level using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. A dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox-preview) control.
Data ingested in the last 14 days is also kept in hot-cache (SSD-backed) for efficient query engine operation. This data remains encrypted with Microsoft keys regardless of customer-Managed key configuration, but your control over SSD data adheres to [key revocation](#key-revocation). We are working to have SSD data encrypted with Customer-Managed key in the first half of 2021.
-The [Log Analytics clusters pricing model](./manage-cost-storage.md#log-analytics-dedicated-clusters) uses Capacity Reservations starting at a 1000 GB/day level.
+Log Analytics Dedicated Clusters use a Capacity Reservation [pricing model](../log-query/logs-dedicated-clusters.md#cluster-pricing-model) starting at 1000 GB/day.
> [!IMPORTANT] > Due to temporary capacity constraints, we require you to pre-register before creating a cluster. Use your contacts at Microsoft, or open a support request, to register your subscription IDs. ## How Customer-Managed key works in Azure Monitor
-Azure Monitor leverages system-assigned managed identity to grant access to your Azure Key Vault. System-assigned managed identity can only be associated with a single Azure resource while the identity of the Log Analytics cluster is supported at the cluster level -- This dictates that the capability is delivered on a dedicated Log Analytics cluster. To support Customer-Managed key on multiple workspaces, a new Log Analytics *Cluster* resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The Log Analytics cluster storage uses the managed identity that\'s associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
+Azure Monitor uses a system-assigned managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level, and to allow a Customer-Managed key on multiple workspaces, a Log Analytics *Cluster* resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The Log Analytics cluster storage uses the managed identity that's associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
-After configuration, any data ingested to workspaces linked to your dedicated cluster gets encrypted with your key in Key Vault. You can unlink workspaces from the cluster at any time. New data then gets ingested to Log Analytics storage and encrypted with Microsoft key, while you can query your new and old data seamlessly.
+After the Customer-managed key configuration, newly ingested data to workspaces linked to your dedicated cluster gets encrypted with your key. You can unlink workspaces from the cluster at any time. New data then gets ingested to Log Analytics storage and encrypted with the Microsoft key, while you can query your new and old data seamlessly.
+> [!IMPORTANT]
+> Customer-Managed key capability is regional. Your Azure Key Vault, cluster and linked Log Analytics workspaces must be in the same region, but they can be in different subscriptions.
![Customer-Managed key overview](media/customer-managed-keys/cmk-overview.png)
@@ -60,19 +62,19 @@ The following rules apply:
- Your KEK never leaves your Key Vault and in the case of an HSM key, it never leaves the hardware. - Azure Storage uses the managed identity that's associated with the *Cluster* resource to authenticate and access to Azure Key Vault via Azure Active Directory.
-## Customer-Managed key provisioning procedure
+## Customer-Managed key provisioning
-1. Register your subscription to allow cluster creation
+1. Registering your subscription to allow cluster creation
1. Creating Azure Key Vault and storing key 1. Creating cluster 1. Granting permissions to your Key Vault 1. Linking Log Analytics workspaces
-Customer-Managed key configuration isn't supported in Azure portal and provisioning is performed via [PowerShell](/powershell/module/az.operationalinsights/), [CLI](/cli/azure/monitor/log-analytics) or [REST](/rest/api/loganalytics/) requests.
+Customer-Managed key configuration isn't currently supported in the Azure portal; provisioning can be performed via [PowerShell](/powershell/module/az.operationalinsights/), [CLI](/cli/azure/monitor/log-analytics) or [REST](/rest/api/loganalytics/) requests.
### Asynchronous operations and status check
-Some of the configuration steps run asynchronously because they can't be completed quickly. The `status` in response contains can be one of the followings: 'InProgress', 'Updating', 'Deleting', 'Succeeded or 'Failed' including the error code.
+Some of the configuration steps run asynchronously because they can't be completed quickly. The `status` in the response can be one of the following: 'InProgress', 'Updating', 'Deleting', 'Succeeded', or 'Failed' with an error code.
# [Azure portal](#tab/portal)
@@ -93,7 +95,7 @@ When using REST, the response initially returns an HTTP status code 200 (OK) and
"Azure-AsyncOperation": "https://management.azure.com/subscriptions/subscription-id/providers/Microsoft.OperationalInsights/locations/region-name/operationStatuses/operation-id?api-version=2020-08-01" ```
-You can check the status of the asynchronous operation by sending a GET request to the *Azure-AsyncOperation* header value:
+You can check the status of the asynchronous operation by sending a GET request to the endpoint in the *Azure-AsyncOperation* header:
```rst GET https://management.azure.com/subscriptions/subscription-id/providers/microsoft.operationalInsights/locations/region-name/operationstatuses/operation-id?api-version=2020-08-01 Authorization: Bearer <token>
@@ -103,8 +105,7 @@ Authorization: Bearer <token>
### Allowing subscription
-> [!IMPORTANT]
-> Customer-Managed key capability is regional. Your Azure Key Vault, cluster and linked Log Analytics workspaces must be in the same region, but they can be in different subscriptions.
+Use your contacts at Microsoft or open a support request in Log Analytics to provide your subscription IDs.
### Storing encryption key (KEK)
@@ -121,15 +122,12 @@ These settings can be updated in Key Vault via CLI and PowerShell:
Follow the procedure illustrated in [Dedicated Clusters article](../log-query/logs-dedicated-clusters.md#creating-a-cluster).
-> [!IMPORTANT]
-> Copy and save the response since you will need the details in next steps.
- ### Grant Key Vault permissions
-Create access policy in Key Vault to grants permissions to your cluster. These permissions are used by the underlay Azure Monitor Storage for data encryption. Open your Key Vault in Azure portal and click "Access Policies" then "+ Add Access Policy" to create a policy with these settings:
+Create an access policy in Key Vault that grants permissions to your cluster. These permissions are used by the underlying Azure Monitor storage. Open your Key Vault in the Azure portal and click *"Access Policies"* then *"+ Add Access Policy"* to create a policy with these settings (a CLI sketch follows the screenshot below):
-- Key permissions: select 'Get', 'Wrap Key' and 'Unwrap Key' permissions.-- Select principal: enter the cluster name or principal-id value that returned in the response in the previous step.
+- Key permissions: select *'Get'*, *'Wrap Key'* and *'Unwrap Key'*.
+- Select principal: enter the cluster name or principal-id.
![grant Key Vault permissions](media/customer-managed-keys/grant-key-vault-permissions-8bit.png)
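
If you prefer to script this step instead of using the portal, a minimal Azure CLI sketch follows; the vault name and principal ID are placeholders (the principal ID is the cluster's system-assigned identity).

```azurecli
# Placeholder vault name and cluster principal ID (system-assigned identity).
az keyvault set-policy \
  --name "key-vault-name" \
  --object-id "principal-id" \
  --key-permissions get wrapKey unwrapKey
```
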
@@ -137,15 +135,15 @@ The *Get* permission is required to verify that your Key Vault is configured as
### Update cluster with Key identifier details
-All operations on the cluster require the Microsoft.OperationalInsights/clusters/write action permission. This permission could be granted via the Owner or Contributor that contains the */write action or via the Log Analytics Contributor role that contains the Microsoft.OperationalInsights/* action.
+All operations on the cluster require the `Microsoft.OperationalInsights/clusters/write` action permission. This permission could be granted via the Owner or Contributor that contains the `*/write` action or via the Log Analytics Contributor role that contains the `Microsoft.OperationalInsights/*` action.
This step updates Azure Monitor Storage with the key and version to be used for data encryption. When updated, your new key is used to wrap and unwrap the Storage key (AEK).
-Select the current version of your key in Azure Key Vault to get the Key identifier details.
+Select the current version of your key in Azure Key Vault to get the key identifier details.
![Grant Key Vault permissions](media/customer-managed-keys/key-identifier-8bit.png)
-Update KeyVaultProperties in cluster with Key Identifier details.
+Update KeyVaultProperties in cluster with key identifier details.
The operation is asynchronous and can take a while to complete.
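
As an illustration, the key details can also be supplied from the Azure CLI, assuming the `az monitor log-analytics cluster update` command exposes the Key Vault parameters (`--key-vault-uri`, `--key-name`, `--key-version`); all values below are placeholders.

```azurecli
# Placeholder Key Vault URI, key name, and key version.
az monitor log-analytics cluster update \
  --resource-group "resource-group-name" \
  --name "cluster-name" \
  --key-vault-uri "https://key-vault-name.vault.azure.net" \
  --key-name "key-name" \
  --key-version "current-version"
```
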
@@ -187,11 +185,11 @@ Content-type: application/json
**Response**
-It takes the propagation of the Key identifier a few minutes to complete. You can check the update state in two ways:
+The propagation of the key takes a few minutes to complete. You can check the update state in two ways:
1. Copy the Azure-AsyncOperation URL value from the response and follow the [asynchronous operations status check](#asynchronous-operations-and-status-check).
-2. Send a GET request on the cluster and look at the *KeyVaultProperties* properties. Your recently updated Key identifier details should return in the response.
+2. Send a GET request on the cluster and look at the *KeyVaultProperties* properties. Your recently updated key should return in the response.
-A response to GET request should look like this when Key identifier update is complete:
+A response to GET request should look like this when the key update is complete:
200 OK and header ```json {
@@ -226,15 +224,10 @@ A response to GET request should look like this when Key identifier update is co
### Link workspace to cluster
-You need to have 'write' permissions to both your workspace and cluster to perform this operation, which include these actions:
--- In workspace: Microsoft.OperationalInsights/workspaces/write-- In cluster: Microsoft.OperationalInsights/clusters/write- > [!IMPORTANT] > This step should be performed only after the completion of the Log Analytics cluster provisioning. If you link workspaces and ingest data prior to the provisioning, ingested data will be dropped and won't be recoverable.
-This operation is asynchronous and can a while to complete.
+You need to have 'write' permissions to both your workspace and cluster to perform this operation, which include `Microsoft.OperationalInsights/workspaces/write` and `Microsoft.OperationalInsights/clusters/write`.
Follow the procedure illustrated in [Dedicated Clusters article](../log-query/logs-dedicated-clusters.md#link-a-workspace-to-the-cluster).
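
For reference, a sketch of the link operation with the Azure CLI is shown below; it assumes the `az monitor log-analytics workspace linked-service create` command and uses placeholder names, passing the cluster's full resource ID as the write-access resource.

```azurecli
# Placeholder names; the write-access resource ID is the full ARM ID of the cluster.
az monitor log-analytics workspace linked-service create \
  --resource-group "resource-group-name" \
  --workspace-name "workspace-name" \
  --name cluster \
  --write-access-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>"
```
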
@@ -248,7 +241,7 @@ Storage periodically polls your Key Vault to attempt to unwrap the encryption ke
## Key rotation
-Customer-Managed key rotation requires an explicit update to the cluster with the new key version in Azure Key Vault. Follow the instructions in "Update cluster with Key identifier details" step. If you don't update the new key identifier details in the cluster, the Log Analytics cluster storage will keep using your previous key for encryption. If you disable or delete your old key before updating the new key in the cluster, you will get into [key revocation](#key-revocation) state.
+Customer-Managed key rotation requires an explicit update to the cluster with the new key version in Azure Key Vault. Follow the steps in [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you don't update the new key version in the cluster, the Log Analytics cluster storage will keep using your previous key for encryption. If you disable or delete your old key before updating the new key in the cluster, you will get into [key revocation](#key-revocation) state.
All your data remains accessible after the key rotation operation, since data is always encrypted with the Account Encryption Key (AEK), while the AEK is now encrypted with your new Key Encryption Key (KEK) version in Key Vault.
@@ -368,266 +361,25 @@ Learn more about [Customer Lockbox for Microsoft Azure](../../security/fundament
## Customer-Managed key operations -- **Get all clusters in a resource group**
-
- # [Azure portal](#tab/portal)
-
- N/A
-
- # [Azure CLI](#tab/azure-cli)
-
- ```azurecli
- az monitor log-analytics cluster list --resource-group "resource-group-name"
- ```
+Customer-Managed key is provided on a dedicated cluster, and these operations are described in the dedicated cluster article:
- # [PowerShell](#tab/powershell)
+- [Check cluster provisioning status](../log-query/logs-dedicated-clusters.md#check-cluster-provisioning-status)
- ```powershell
- Get-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name"
- ```
+- [Check workspace link status](../log-query/logs-dedicated-clusters.md#check-workspace-link-status)
- # [REST](#tab/rest)
-
- ```rst
- GET https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters?api-version=2020-08-01
- Authorization: Bearer <token>
- ```
-
- **Response**
+- [Get all clusters in a resource group](../log-query/logs-dedicated-clusters.md#get-all-clusters-in-a-resource-group)
- ```json
- {
- "value": [
- {
- "identity": {
- "type": "SystemAssigned",
- "tenantId": "tenant-id",
- "principalId": "principal-Id"
- },
- "sku": {
- "name": "capacityReservation",
- "capacity": 1000,
- "lastSkuUpdate": "Sun, 22 Mar 2020 15:39:29 GMT"
- },
- "properties": {
- "keyVaultProperties": {
- "keyVaultUri": "https://key-vault-name.vault.azure.net",
- "keyName": "key-name",
- "keyVersion": "current-version"
- },
- "provisioningState": "Succeeded",
- "billingType": "cluster",
- "clusterId": "cluster-id"
- },
- "id": "/subscriptions/subscription-id/resourcegroups/resource-group-name/providers/microsoft.operationalinsights/workspaces/workspace-name",
- "name": "cluster-name",
- "type": "Microsoft.OperationalInsights/clusters",
- "location": "region-name"
- }
- ]
- }
- ```
-
- ---
--- **Get all clusters in a subscription**-
- # [Azure portal](#tab/portal)
-
- N/A
-
- # [Azure CLI](#tab/azure-cli)
-
- ```azurecli
- az monitor log-analytics cluster list
- ```
-
- # [PowerShell](#tab/powershell)
-
- ```powershell
- Get-AzOperationalInsightsCluster
- ```
-
- # [REST](#tab/rest)
-
- ```rst
- GET https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.OperationalInsights/clusters?api-version=2020-08-01
- Authorization: Bearer <token>
- ```
-
- **Response**
-
- The same response as for 'cluster in a resource group', but in subscription scope.
-
- ---
--- **Update *capacity reservation* in cluster**-
- When the data volume to your linked workspaces change over time and you want to update the capacity reservation level appropriately. Follow the [update cluster](#update-cluster-with-key-identifier-details) and provide your new capacity value. It can be in the range of 1000 to 3000 GB per day and in steps of 100. For level higher than 3000 GB per day, reach your Microsoft contact to enable it. Note that you donΓÇÖt have to provide the full REST request body but should include the sku:
-
- # [Azure portal](#tab/portal)
-
- N/A
-
- # [Azure CLI](#tab/azure-cli)
-
- ```azurecli
- az monitor log-analytics cluster update --name "cluster-name" --resource-group "resource-group-name" --sku-capacity daily-ingestion-gigabyte
- ```
-
- # [PowerShell](#tab/powershell)
-
- ```powershell
- Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity daily-ingestion-gigabyte
- ```
-
- # [REST](#tab/rest)
-
- ```rst
- PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01
- Authorization: Bearer <token>
- Content-type: application/json
-
- {
- "sku": {
- "name": "capacityReservation",
- "Capacity": daily-ingestion-gigabyte
- }
- }
- ```
-
- ---
--- **Update *billingType* in cluster**-
- The *billingType* property determines the billing attribution for the cluster and its data:
- - *cluster* (default) -- The billing is attributed to the subscription hosting your Cluster resource
- - *workspaces* -- The billing is attributed to the subscriptions hosting your workspaces proportionally
-
- Follow the [update cluster](#update-cluster-with-key-identifier-details) and provide your new billingType value. Note that you donΓÇÖt have to provide the full REST request body and should include the *billingType*:
-
- # [Azure portal](#tab/portal)
+- [Get all clusters in a subscription](../log-query/logs-dedicated-clusters.md#get-all-clusters-in-a-subscription)
- N/A
+- [Update *capacity reservation* in cluster](../log-query/logs-dedicated-clusters.md#update-capacity-reservation-in-cluster)
- # [Azure CLI](#tab/azure-cli)
+- [Update *billingType* in cluster](../log-query/logs-dedicated-clusters.md#update-billingtype-in-cluster)
- N/A
+- [Link a workspace to the cluster](../log-query/logs-dedicated-clusters.md#link-a-workspace-to-the-cluster)
- # [PowerShell](#tab/powershell)
+- [Unlink a workspace from a dedicated cluster](../log-query/logs-dedicated-clusters.md#unlink-a-workspace-from-a-dedicated-cluster)
- N/A
-
- # [REST](#tab/rest)
-
- ```rst
- PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01
- Authorization: Bearer <token>
- Content-type: application/json
-
- {
- "properties": {
- "billingType": "cluster",
- }
- }
- ```
-
- ---
--- **Unlink workspace**-
- You need 'write' permissions on the workspace and cluster to perform this operation. You can unlink a workspace from your cluster at any time. New ingested data after the unlink operation is stored in Log Analytics storage and encrypted with Microsoft key. You can query you data that was ingested to your workspace before and after the unlink seamlessly as long as the cluster is provisioned and configured with valid Key Vault key.
-
- This operation is is asynchronous and can a while to complete.
-
- # [Azure portal](#tab/portal)
-
- N/A
-
- # [Azure CLI](#tab/azure-cli)
-
- ```azurecli
- az monitor log-analytics workspace linked-service delete --resource-group "resource-group-name" --name "cluster-name" --workspace-name "workspace-name"
- ```
-
- # [PowerShell](#tab/powershell)
-
- ```powershell
- Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-name" -Name "workspace-name" -LinkedServiceName cluster
- ```
-
- # [REST](#tab/rest)
-
- ```rest
- DELETE https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>/linkedservices/cluster?api-version=2020-08-01
- Authorization: Bearer <token>
- ```
-
- ---
--- **Check workspace link status**
-
- Perform Get operation on the workspace and observe if *clusterResourceId* property is present in the response under *features*. A linked workspace will have the *clusterResourceId* property.
-
- # [Azure portal](#tab/portal)
-
- N/A
-
- # [Azure CLI](#tab/azure-cli)
-
- ```azurecli
- az monitor log-analytics cluster show --resource-group "resource-group-name" --name "cluster-name"
- ```
-
- # [PowerShell](#tab/powershell)
-
- ```powershell
- Get-AzOperationalInsightsWorkspace -ResourceGroupName "resource-group-name" -Name "workspace-name"
- ```
-
- # [REST](#tab/rest)
-
- ```rest
- GET https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>?api-version=2020-08-01
- Authorization: Bearer <token>
- ```
-
- ---
--- **Delete your cluster**-
- You need 'write' permissions on the cluster to perform this operation. A soft-delete operation is performed to allow the recovery of your cluster including its data within 14 days, whether the deletion was accidental or intentional. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. After the soft-delete period, the cluster name is released and your cluster and its data are permanently deleted and are non-recoverable. Any linked workspace gets unlinked from the cluster on delete operation. New ingested data is stored in Log Analytics storage and encrypted with Microsoft key.
-
- The unlink operation is asynchronous and can take up to 90 minutes to complete.
-
- # [Azure portal](#tab/portal)
-
- N/A
-
- # [Azure CLI](#tab/azure-cli)
-
- ```azurecli
- az monitor log-analytics cluster delete --resource-group "resource-group-name" --name "cluster-name"
- ```
-
- # [PowerShell](#tab/powershell)
-
- ```powershell
- Remove-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name"
- ```
-
- # [REST](#tab/rest)
-
- ```rst
- DELETE https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2020-08-01
- Authorization: Bearer <token>
- ```
-
- ---
-
-- **Recover your cluster and your data**
-
- A cluster that was deleted in the last 14 days is in soft-delete state and can be recovered with its data. Since all workspaces got unlinked from the cluster deletion, you need to re-link your workspaces after the cluster's recovery. The recovery operation is currently performed manually by the product group. Use your Microsoft channel or open support request for recovery of deleted cluster.
+- [Update cluster properties](../log-query/logs-dedicated-clusters.md#change-cluster-properties)
## Limitations and constraints
@@ -687,40 +439,7 @@ Learn more about [Customer Lockbox for Microsoft Azure](../../security/fundament
1. when using REST, copy the Azure-AsyncOperation URL value from the response and follow the [asynchronous operations status check](#asynchronous-operations-and-status-check). 2. Send GET request to cluster or workspace and observe the response. For example, unlinked workspace won't have the *clusterResourceId* under *features*. -- Error messages
-
- Cluster Create:
- - 400 -- Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.
- - 400 -- The body of the request is null or in bad format.
- - 400 -- SKU name is invalid. Set SKU name to capacityReservation.
- - 400 -- Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
- - 400 -- Missing Capacity in SKU. Set Capacity value to 1000 or higher in steps of 100 (GB).
- - 400 -- Capacity in SKU is not in range. Should be minimum 1000 and up to the max allowed capacity which is available under ΓÇÿUsage and estimated costΓÇÖ in your workspace.
- - 400 -- Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
- - 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 1000 or higher in steps of 100 (GB).
- - 400 -- Identity is null or empty. Set Identity with systemAssigned type.
- - 400 -- KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
- - 400 -- Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
-
- Cluster Update
- - 400 -- Cluster is in deleting state. Async operation is in progress . Cluster must complete its operation before any update operation is performed.
- - 400 -- KeyVaultProperties is not empty but has a bad format. See [key identifier update](#update-cluster-with-key-identifier-details).
- - 400 -- Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesn't exist. Verify that you [set key and access policy](#grant-key-vault-permissions) in Key Vault.
- - 400 -- Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)
- - 400 -- Operation cannot be executed now. Wait for the Async operation to complete and try again.
- - 400 -- Cluster is in deleting state. Wait for the Async operation to complete and try again.
-
- Cluster Get:
- - 404 -- Cluster not found, the cluster may have been deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it, or use another name to create a new cluster.
-
- Cluster Delete
- - 409 -- Can't delete a cluster while in provisioning state. Wait for the Async operation to complete and try again.
-
- Workspace link:
- - 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.
- - 409 -- Workspace link or unlink operation in process.
- - 400 -- Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
-
- Workspace unlink:
- - 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.
- - 409 -- Workspace link or unlink operation in process.
\ No newline at end of file
+## Next steps
+
+- Learn about [Log Analytics dedicated cluster billing](../platform/manage-cost-storage.md#log-analytics-dedicated-clusters)
+- Learn about [proper design of Log Analytics workspaces](../platform/design-logs-deployment.md)
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-definition.md
@@ -65,15 +65,14 @@ After you've prepped your ITSM tools, complete these steps to create a connectio
1. Under **Workspace Data Sources** in the left pane, select **ITSM Connections**: ![Screenshot that shows the ITSM Connections menu item.](media/itsmc-overview/add-new-itsm-connection.png)
- This page displays the list of connections.
1. Select **Add Connection**. 1. Specify the connection settings as described according to ITSM products/services: -- [ServiceNow](./itsmc-connections-servicenow.md)-- [System Center Service Manager](./itsmc-connections-scsm.md)-- [Cherwell](./itsmc-connections-cherwell.md)-- [Provance](./itsmc-connections-provance.md)
+ - [ServiceNow](./itsmc-connections-servicenow.md)
+ - [System Center Service Manager](./itsmc-connections-scsm.md)
+ - [Cherwell](./itsmc-connections-cherwell.md)
+ - [Provance](./itsmc-connections-provance.md)
> [!NOTE] >
@@ -115,6 +114,9 @@ Use the following procedure to create action groups:
4. In the notification list, select **Next: Actions**. 5. In the actions list, select **ITSM** in the **Action Type** list. Provide a **Name** for the action. Select the pen button that represents **Edit details**.+
+ ![Screenshot that shows action group definition.](media/itsmc-definition/action-group-pen.png)
+ 6. In the **Subscription** list, select the subscription in which your Log Analytics workspace is located. In the **Connection** list, select your ITSM connector name. It will be followed by your workspace name. For example, MyITSMConnector(MyWorkspace). 7. Select a **Work Item** type.
@@ -142,122 +144,6 @@ When you create or edit an Azure alert rule, use an action group, which has an I
> >- The short description field in the alert rule definition is limited to 40 characters when you send it by using the ITSM action.
-## Additional information
-
-### Data synced from your ITSM product
-
-Incidents and change requests are synced from your ITSM product to your Log Analytics workspace, based on the connection's configuration.
-
-This section shows some examples of data gathered by ITSMC.
-
-The fields in **ServiceDesk_CL** vary depending on the work item type that you import into Log Analytics. Here's a list of fields for two work item types:
-
-**Work item:** **Incidents**
-ServiceDeskWorkItemType_s="Incident"
-
-**Fields**
--- ServiceDeskConnectionName-- Service Desk ID-- State-- Urgency-- Impact-- Priority-- Escalation-- Created By-- Resolved By-- Closed By-- Source-- Assigned To-- Category-- Title-- Description-- Created Date-- Closed Date-- Resolved Date-- Last Modified Date-- Computer-
-**Work item:** **Change Requests**
-
-ServiceDeskWorkItemType_s="ChangeRequest"
-
-**Fields**
-- ServiceDeskConnectionName-- Service Desk ID-- Created By-- Closed By-- Source-- Assigned To-- Title-- Type-- Category-- State-- Escalation-- Conflict Status-- Urgency-- Priority-- Risk-- Impact-- Assigned To-- Created Date-- Closed Date-- Last Modified Date-- Requested Date-- Planned Start Date-- Planned End Date-- Work Start Date-- Work End Date-- Description-- Computer-
-## Output data for a ServiceNow incident
-
-| Log Analytics field | ServiceNow field |
-|:--- |:--- |
-| ServiceDeskId_s| Number |
-| IncidentState_s | State |
-| Urgency_s |Urgency |
-| Impact_s |Impact|
-| Priority_s | Priority |
-| CreatedBy_s | Opened by |
-| ResolvedBy_s | Resolved by|
-| ClosedBy_s | Closed by |
-| Source_s| Contact type |
-| AssignedTo_s | Assigned to |
-| Category_s | Category |
-| Title_s| Short description |
-| Description_s| Notes |
-| CreatedDate_t| Opened |
-| ClosedDate_t| closed|
-| ResolvedDate_t|Resolved|
-| Computer | Configuration item |
-
-## Output data for a ServiceNow change request
-
-| Log Analytics | ServiceNow field |
-|:--- |:--- |
-| ServiceDeskId_s| Number |
-| CreatedBy_s | Requested by |
-| ClosedBy_s | Closed by |
-| AssignedTo_s | Assigned to |
-| Title_s| Short description |
-| Type_s| Type |
-| Category_s| Category |
-| CRState_s| State|
-| Urgency_s| Urgency |
-| Priority_s| Priority|
-| Risk_s| Risk|
-| Impact_s| Impact|
-| RequestedDate_t | Requested by date |
-| ClosedDate_t | Closed date |
-| PlannedStartDate_t | Planned start date |
-| PlannedEndDate_t | Planned end date |
-| WorkStartDate_t | Actual start date |
-| WorkEndDate_t | Actual end date|
-| Description_s | Description |
-| Computer | Configuration Item |
- ## Next steps * [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-synced-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-synced-data.md new file mode 100644
@@ -0,0 +1,136 @@
+---
+title: Data synced from your ITSM product to LA Workspace
+description: This article provides an overview of the data synced from your ITSM product to your Log Analytics workspace.
+ms.subservice: logs
+ms.topic: conceptual
+author: nolavime
+ms.author: v-jysur
+ms.date: 12/29/2020
+ms.custom: references_regions
+
+---
+
+# Data synced from your ITSM product
+
+Incidents and change requests are synced from your ITSM tool to your Log Analytics workspace, based on the connection's configuration (using the "Sync Data" field):
+* [ServiceNow](./itsmc-connections-servicenow.md)
+* [System Center Service Manager](./itsmc-connections-scsm.md)
+* [Cherwell](./itsmc-connections-cherwell.md)
+* [Provance](./itsmc-connections-provance.md)
+
+## Synced data
+
+This section shows some examples of data gathered by ITSMC.
+
+The fields in **ServiceDesk_CL** vary depending on the work item type that you import into Log Analytics. Here's a list of fields for two work item types:
+
+**Work item:** **Incidents**
+ServiceDeskWorkItemType_s="Incident"
+
+**Fields**
+
+- ServiceDeskConnectionName
+- Service Desk ID
+- State
+- Urgency
+- Impact
+- Priority
+- Escalation
+- Created By
+- Resolved By
+- Closed By
+- Source
+- Assigned To
+- Category
+- Title
+- Description
+- Created Date
+- Closed Date
+- Resolved Date
+- Last Modified Date
+- Computer
+
+**Work item:** **Change Requests**
+
+ServiceDeskWorkItemType_s="ChangeRequest"
+
+**Fields**
+- ServiceDeskConnectionName
+- Service Desk ID
+- Created By
+- Closed By
+- Source
+- Assigned To
+- Title
+- Type
+- Category
+- State
+- Escalation
+- Conflict Status
+- Urgency
+- Priority
+- Risk
+- Impact
+- Assigned To
+- Created Date
+- Closed Date
+- Last Modified Date
+- Requested Date
+- Planned Start Date
+- Planned End Date
+- Work Start Date
+- Work End Date
+- Description
+- Computer
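
If you want to spot-check these records once they start flowing, you can query the **ServiceDesk_CL** table directly. The sketch below uses the Azure CLI (the `az monitor log-analytics query` command may require the `log-analytics` extension) and a placeholder workspace GUID; the projected columns are the suffixed field names shown in the mapping tables that follow.

```shell
# Sketch: list a few recently synced incidents from the ServiceDesk_CL table.
# <workspace-guid> is a placeholder for your Log Analytics workspace (customer) ID.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "ServiceDesk_CL
    | where ServiceDeskWorkItemType_s == 'Incident'
    | project ServiceDeskId_s, Title_s, Priority_s, CreatedDate_t
    | take 10"
```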
+
+## ServiceNow example
+### Output data for a ServiceNow incident
+
+| Log Analytics field | ServiceNow field |
+|:--- |:--- |
+| ServiceDeskId_s| Number |
+| IncidentState_s | State |
+| Urgency_s |Urgency |
+| Impact_s |Impact|
+| Priority_s | Priority |
+| CreatedBy_s | Opened by |
+| ResolvedBy_s | Resolved by|
+| ClosedBy_s | Closed by |
+| Source_s| Contact type |
+| AssignedTo_s | Assigned to |
+| Category_s | Category |
+| Title_s| Short description |
+| Description_s| Notes |
+| CreatedDate_t| Opened |
+| ClosedDate_t| closed|
+| ResolvedDate_t|Resolved|
+| Computer | Configuration item |
+
+### Output data for a ServiceNow change request
+
+| Log Analytics | ServiceNow field |
+|:--- |:--- |
+| ServiceDeskId_s| Number |
+| CreatedBy_s | Requested by |
+| ClosedBy_s | Closed by |
+| AssignedTo_s | Assigned to |
+| Title_s| Short description |
+| Type_s| Type |
+| Category_s| Category |
+| CRState_s| State|
+| Urgency_s| Urgency |
+| Priority_s| Priority|
+| Risk_s| Risk|
+| Impact_s| Impact|
+| RequestedDate_t | Requested by date |
+| ClosedDate_t | Closed date |
+| PlannedStartDate_t | Planned start date |
+| PlannedEndDate_t | Planned end date |
+| WorkStartDate_t | Actual start date |
+| WorkEndDate_t | Actual end date|
+| Description_s | Description |
+| Computer | Configuration Item |
+
+## Next steps
+
+* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-charts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-charts.md
@@ -57,11 +57,25 @@ For example, suppose the chart is showing the **Server Response Time** metric us
There are five basic stats aggregation types available in the metrics explorer: **Sum**, **Count**, **Min**, **Max**, and **Average**. The **Sum** aggregation is sometimes referred to as **Total** aggregation. For many metrics, Metrics Explorer will hide the aggregations that are totally irrelevant and cannot be used. -- **Sum** – the sum of all values captured over the aggregation interval-- **Count** – the number of measurements captured over the aggregation interval. Note that **Count** will be equal to **Sum** in the case where the metric is always captured with the value of 1. This is common when the metric tracks the count of distinct events, and each measurement represents one event (i.e. the code fires off a metric record every time a new request comes in)-- **Average** – the average of the metric values captured over the aggregation interval-- **Min** – the smallest value captured over the aggregation interval-- **Max** – the largest value captured over the aggregation interval
+**Sum** – the sum of all values captured over the aggregation interval
+
+![Screenshot of sum of request](./media/metrics-charts/request-sum.png)
+
+**Count** – the number of measurements captured over the aggregation interval. Note that **Count** will be equal to **Sum** in the case where the metric is always captured with the value of 1. This is common when the metric tracks the count of distinct events, and each measurement represents one event (i.e. the code fires off a metric record every time a new request comes in).
+
+![Screenshot of count of request](./media/metrics-charts/request-count.png)
+
+**Average** – the average of the metric values captured over the aggregation interval
+
+![Screenshot of average request](./media/metrics-charts/request-avg.png)
+
+**Min** – the smallest value captured over the aggregation interval
+
+![Screenshot of minimum request](./media/metrics-charts/request-min.png)
+
+**Max** – the largest value captured over the aggregation interval
+
+![Screenshot of max request](./media/metrics-charts/request-max.png)
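
The same aggregations can be requested outside the portal. As a rough sketch (the resource ID is a placeholder and the metric name follows the **Server Response Time** example above), the Azure CLI returns a chosen aggregation per time grain:

```shell
# Sketch: pull hourly Average and Maximum values for one metric on one resource.
az monitor metrics list \
  --resource "<resource-id>" \
  --metric "Server Response Time" \
  --aggregation Average Maximum \
  --interval PT1H \
  --output table
```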
## Apply filters to charts
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-troubleshoot.md
@@ -14,18 +14,6 @@ ms.subservice: metrics
Use this article if you run into issues with creating, customizing, or interpreting charts in Azure metrics explorer. If you are new to metrics, learn about [getting started with metrics explorer](metrics-getting-started.md) and [advanced features of metrics explorer](metrics-charts.md). You can also see [examples](metric-chart-samples.md) of the configured metric charts.
-## Can't find your resource to select it
-
-You clicked on the **Select a resource** button, but don't see your resource in the resource picker dialog.
-
-**Solution:** Metrics explorer requires you to select subscriptions and resource groups before listing available resources. If you don't see your resource:
-
-1. Ensure that you've selected correct subscription in the **Subscription** dropdown. If your subscription isn't listed, click on the **Directory + Subscription settings** and add a subscription with your resource.
-
-1. Ensure that you've selected the correct resource group.
- > [!WARNING]
- > For best performance, when you first open metrics explorer, the **Resource group** dropdown has no pre-selected resource groups. You must pick at least one group before you can see any resources.
- ## Chart shows no data Sometimes the charts might show no data after selecting correct resources and metrics. This behavior can be caused by several of the following reasons:
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/resource-logs-schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/resource-logs-schema.md
@@ -73,7 +73,7 @@ The schema for resource logs varies depending on the resource and log category.
| Load Balancer |[Log analytics for Azure Load Balancer](../../load-balancer/load-balancer-monitor-log.md) | | Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) | | Network Security Groups |[Log analytics for network security groups (NSGs)](../../virtual-network/virtual-network-nsg-manage-log.md) |
-| DDOS Protection | [Manage Azure DDoS Protection Standard](../../ddos-protection/reports-and-flow-logs.md#sample-log-outputs) |
+| DDOS Protection | [Manage Azure DDoS Protection Standard](../../ddos-protection/diagnostic-logging.md#log-schemas) |
| Power BI Dedicated | [Logging for Power BI Embedded in Azure](/power-bi/developer/azure-pbie-diag-logs) | | Recovery Services | [Data Model for Azure Backup](../../backup/backup-azure-reports-data-model.md)| | Search |[Enabling and using Search Traffic Analytics](../../search/search-traffic-analytics.md) |
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
@@ -2,7 +2,7 @@
title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. ms.topic: conceptual
-ms.date: 12/21/2020
+ms.date: 12/29/2020
--- # Naming rules and restrictions for Azure resources
@@ -165,7 +165,7 @@ In the following tables, the term alphanumeric refers to:
> | galleries | resource group | 1-80 | Alphanumerics and periods.<br><br>Start and end with alphanumeric. | > | galleries / applications | gallery | 1-80 | Alphanumerics, hyphens, and periods.<br><br>Start and end with alphanumeric. | > | galleries / applications/versions | application | 32-bit integer | Numbers and periods. |
-> | galleries / images | gallery | 1-80 | Alphanumerics, hyphens, and periods.<br><br>Start and end with alphanumeric. |
+> | galleries / images | gallery | 1-80 | Alphanumerics, underscores, hyphens, and periods.<br><br>Start and end with alphanumeric. |
> | galleries / images / versions | image | 32-bit integer | Numbers and periods. | > | images | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. | > | snapshots | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. |
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-script-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-script-template.md
@@ -271,11 +271,12 @@ A storage account and a container instance are needed for script execution and t
| Standard_RAGZRS | StorageV2 | | Standard_ZRS | StorageV2 |
- These combinations support file share. For more information, see [Create an Azure file share](../../storage/files/storage-how-to-create-file-share.md) and [Types of storage accounts](../../storage/common/storage-account-overview.md).
+ These combinations support file shares. For more information, see [Create an Azure file share](../../storage/files/storage-how-to-create-file-share.md) and [Types of storage accounts](../../storage/common/storage-account-overview.md).
+ - Storage account firewall rules aren't supported yet. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md). - Deployment principal must have permissions to manage the storage account, which includes read, create, delete file shares.
-To specify an existing storage account, add the following json to the property element of `Microsoft.Resources/deploymentScripts`:
+To specify an existing storage account, add the following JSON to the property element of `Microsoft.Resources/deploymentScripts`:
```json "storageAccountSettings": {
@@ -284,8 +285,8 @@ To specify an existing storage account, add the following json to the property e
}, ``` -- **storageAccountName**: specify the name of the storage account.-- **storageAccountKey"**: specify one of the storage account keys. You can use the [`listKeys()`](./template-functions-resource.md#listkeys) function to retrieve the key. For example:
+- `storageAccountName`: specify the name of the storage account.
+- `storageAccountKey`: specify one of the storage account keys. You can use the [listKeys()](./template-functions-resource.md#listkeys) function to retrieve the key. For example:
```json "storageAccountSettings": {
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/features-comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
@@ -46,7 +46,7 @@ The following table lists the major features of SQL Server and provides informat
| [Collation - server/instance](/sql/relational-databases/collations/set-or-change-the-server-collation) | No, default server collation `SQL_Latin1_General_CP1_CI_AS` is always used. | Yes, can be set when the [instance is created](../managed-instance/scripts/create-powershell-azure-resource-manager-template.md) and can't be updated later. | | [Columnstore indexes](/sql/relational-databases/indexes/columnstore-indexes-overview) | Yes - [Premium tier, Standard tier - S3 and above, General Purpose tier, Business Critical, and HyperScale tiers](/sql/relational-databases/indexes/columnstore-indexes-overview) |Yes | | [Common language runtime - CLR](/sql/relational-databases/clr-integration/common-language-runtime-clr-integration-programming-concepts) | No | Yes, but without access to file system in `CREATE ASSEMBLY` statement - see [CLR differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#clr) |
-| [Credentials](/sql/relational-databases/security/authentication-access/credentials-database-engine) | Yes, but only [database scoped credentials](/sql/t-sql/statements/create-database-scoped-credential-transact-sql). | Yes, but only **Azure Key Vault** and `SHARED ACCESS SIGNATURE` are supported see [details](../managed-instance/transact-sql-tsql-differences-sql-server.md#credential) |
+| [Credentials](/sql/relational-databases/security/authentication-access/credentials-database-engine) | Yes, but only [database scoped credentials](/sql/t-sql/statements/create-database-scoped-credential-transact-sql). | Yes, but only **Azure Key Vault** and `SHARED ACCESS SIGNATURE` are supported - see [details](../managed-instance/transact-sql-tsql-differences-sql-server.md#credential) |
| [Cross-database/three-part name queries](/sql/relational-databases/linked-servers/linked-servers-database-engine) | No - see [Elastic queries](elastic-query-overview.md) | Yes, plus [Elastic queries](elastic-query-overview.md) | | [Cross-database transactions](/sql/relational-databases/linked-servers/linked-servers-database-engine) | No | Yes, within the instance. See [Linked server differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#linked-servers) for cross-instance queries. | | [Database mail - DbMail](/sql/relational-databases/database-mail/database-mail) | No | Yes |
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/azure-vmware-solution-on-premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-on-premises.md
@@ -2,7 +2,7 @@
title: Connect Azure VMware Solution to your on-premises environment description: Learn how to connect Azure VMware Solution to your on-premises environment. ms.topic: tutorial
-ms.date: 10/02/2020
+ms.date: 12/28/2020
--- # Connect Azure VMware Solution to your on-premises environment
@@ -15,24 +15,20 @@ Before you begin, there are two prerequisites for connecting Azure VMware Soluti
- A /29 non-overlapping network address block for the ExpressRoute Global Reach peering, which you defined as part of the [planning phase](production-ready-deployment-steps.md). >[!NOTE]
-> You can connect via VPN, but that's out of scope for this quick start document.
+> You can connect through VPN, but that's out of scope for this quick start document.
## Establish an ExpressRoute Global Reach connection To establish on-premises connectivity to your Azure VMware Solution private cloud using ExpressRoute Global Reach, follow the [Peer on-premises environments to a private cloud](tutorial-expressroute-global-reach-private-cloud.md) tutorial. -- ## Verify on-premises network connectivity You should now see in your **on-premises edge router** where the ExpressRoute connects the NSX-T network segments and the Azure VMware Solution management segments.
->[!NOTE]
+>[!IMPORTANT]
>Everyone has a different environment, and some will need to allow these routes to propagate back into the on-premises network.
-Some environments will have firewalls protecting the ExpressRoute circuit. If no firewalls and no route pruning are occurring, you can ping your Azure VMware Solution vCenter server or a [VM](deploy-azure-vmware-solution.md#add-a-vm-on-the-nsx-t-network-segment) on the NSX-T segment from your on-premises environment.
-
-Additionally, from the VM on the NSX-T segment, you can reach resources in your on-premises environment.
+Some environments have firewalls protecting the ExpressRoute circuit. If no firewalls and no route pruning occur, ping your Azure VMware Solution vCenter server or a [VM on the NSX-T segment](deploy-azure-vmware-solution.md#add-a-vm-on-the-nsx-t-network-segment) from your on-premises environment. Additionally, from the VM on the NSX-T segment, you can reach resources in your on-premises environment.
## Next steps
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-upgrades https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-upgrades.md
@@ -7,8 +7,6 @@ ms.date: 09/22/2020
# Azure VMware Solution private cloud updates and upgrades
-## Overview
- One of the key benefits of Azure VMware Solution private clouds is that the platform is maintained for you. Platform maintenance includes automated updates to a VMware validated software bundle, helping to ensure you're using the latest version of the validated Azure VMware Solution private cloud software. Specifically, an Azure VMware Solution private cloud includes:
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/deploy-traffic-manager-balance-workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
@@ -1,115 +1,125 @@
---
-title: Deploy Traffic Manager to balance Azure VMware Solution (AVS) workloads
-description: Learn how to integrate Traffic Manager with Azure VMware Solution (AVS) to balance application workloads across multiple endpoints in different regions.
+title: Deploy Traffic Manager to balance Azure VMware Solution workloads
+description: Learn how to integrate Traffic Manager with Azure VMware Solution to balance application workloads across multiple endpoints in different regions.
ms.topic: how-to
-ms.date: 08/14/2020
+ms.date: 12/29/2020
---
-# Deploy Traffic Manager to balance Azure VMware Solution (AVS) workloads
+# Deploy Traffic Manager to balance Azure VMware Solution workloads
-This article walks you through integrating Traffic Manager with Azure VMware Solution (AVS) to balance application workloads across multiple endpoints. We'll look at a scenario in which Traffic Manager will direct traffic between three application gateways spanning several AVS regions: West US, West Europe, and on-premises in East US.
+This article walks you through integrating [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) with Azure VMware Solution to balance application workloads across multiple endpoints. It also shows how to configure Traffic Manager to direct traffic across three [Azure Application Gateway](../application-gateway/overview.md) instances spanning several Azure VMware Solution regions.
-Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions. It will load balance application traffic across both Azure running workloads and external public endpoints. For more information on Traffic Manager, see [What is Traffic Manager?](../traffic-manager/traffic-manager-overview.md)
+The gateways have Azure VMware Solution virtual machines (VMs) configured as backend pool members to load balance the incoming layer 7 requests. For more information, see [Use Azure Application Gateway to protect your web apps on Azure VMware Solution](protect-azure-vmware-solution-with-application-gateway.md).
-Review the [Prerequisites](#prerequisites) first; then we'll walk through the procedures to:
+The diagram shows how Traffic Manager provides load balancing for the applications at the DNS level between regional endpoints. The gateways have backend pool members configured as IIS Servers and referenced as Azure VMware Solution external endpoints. Connection over the virtual network between the two private cloud regions uses an ExpressRoute gateway.
+
+:::image type="content" source="media/traffic-manager/traffic-manager-topology.png" alt-text="Architecture diagram of the Traffic Manager integration with Azure VMware Solution" lightbox="media/traffic-manager/traffic-manager-topology.png" border="false":::
+
+Before you begin, review the [Prerequisites](#prerequisites); then we'll walk through the procedures to:
> [!div class="checklist"]
-> * Verify configuration of your application gateways
-> * Verify configuration of the NSX-T segment
+> * Verify configuration of your application gateways and the NSX-T segment
> * Create your Traffic Manager profile > * Add external endpoints into your Traffic Manager profile
-## Topology
-
-As shown in the following figure, Azure Traffic Manager provides load balancing for the applications at the DNS level between regional endpoints. The application gateways have backend pool members configured as IIS Servers and are referenced as AVS external endpoints.
-
-Connection over the virtual network between the two AVS private cloud regions, West US and West Europe, and an on-premises server in East US, uses an ExpressRoute gateway.
-
-:::image type="content" source="media/traffic-manager/traffic-manager-topology.png" alt-text="Diagram of the architecture of Traffic Manager integration with Azure VMware Solution" lightbox="media/traffic-manager/traffic-manager-topology.png" border="false":::
-
## Prerequisites -- Three virtual machines configured as Microsoft IIS Servers running in different AVS regions: West US, West Europe, and on premises.
+- Three VMs configured as Microsoft IIS Servers running in different Azure VMware Solution regions:
+ - West US
+ - West Europe
+ - East US (on-premises)
-- An application gateway with external endpoints in West US, West Europe, and on premises.
+- An application gateway with external endpoints in the Azure VMware Solution regions mentioned above.
- Host with internet connectivity for verification.
-## Verify configuration of your application gateways
+- An [NSX-T network segment created in Azure VMware Solution](tutorial-nsx-t-network-segment.md).
-[Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) is a layer 7 web traffic load balancer that enables you to manage traffic to your web applications. For more information on Application Gateway, see [What is Azure Application Gateway?](../application-gateway/overview.md)
+## Verify your application gateways configuration
-In this scenario, three application gateway instances are configured as external AVS endpoints. The application gateways have AVS virtual machines configured as backend pool members to load balance the incoming layer 7 requests. (To learn how to configure Application Gateway with AVS virtual machines as backend pools, see [Use Azure Application Gateway to protect your web apps on Azure VMware Solution](protect-azure-vmware-solution-with-application-gateway.md).)
+The following steps verify the configuration of your application gateways.
-The following steps verify the correct configuration of your application gateways.
+1. In the Azure portal, select **Application gateways** to view a list of your current application gateways:
-1. Open the Azure portal and select **Application gateways** to view a list of your current application gateways.
+ - AVS-GW-WUS
+ - AVS-GW-EUS (on-premises)
+ - AVS-GW-WEU
- For this scenario, we have configured three application gateways:
- - AVS-GW-WUS
- - AVS-GW-EUS (on premises)
- - AVS-GW-WEU
+ :::image type="content" source="media/traffic-manager/app-gateways-list-1.png" alt-text="Screenshot of Application gateway page showing list of configured application gateways." lightbox="media/traffic-manager/app-gateways-list-1.png":::
- :::image type="content" source="media/traffic-manager/app-gateways-list-1.png" alt-text="Screenshot of Application gateway page showing list of configured application gateways." lightbox="media/traffic-manager/app-gateways-list-1.png":::
+1. Select one of your previously deployed application gateways.
-2. Select one of your previously deployed application gateways. A window opens showing various information on the application gateway. Select **Backend pools** to verify the configuration of one of the backend pools.
+ A window opens showing various information on the application gateway.
:::image type="content" source="media/traffic-manager/backend-pool-config.png" alt-text="Screenshot of Application gateway page showing details of the selected application gateway." lightbox="media/traffic-manager/backend-pool-config.png":::
-
-3. In this case, we see one virtual machine backend pool member configured as a web server with an IP address of 172.29.1.10.
-
- :::image type="content" source="media/traffic-manager/backend-pool-ip-address.png" alt-text="Screenshot of Edit backend pool page with target IP address highlighted.":::
- You can similarly verify the configuration of the other application gateways and backend pool members.
+1. Select **Backend pools** to verify the configuration of one of the backend pools. You see one VM backend pool member configured as a web server with an IP address of 172.29.1.10.
+
+ :::image type="content" source="media/traffic-manager/backend-pool-ip-address.png" alt-text="Screenshot of Edit backend pool page with target IP address highlighted.":::
-## Verify configuration of the NSX-T segment
+1. Verify the configuration of the other application gateways and backend pool members.
-Network segments created in NSX-T Manager are used as networks for virtual machines in vCenter. For more information, see the tutorial, [Create an NSX-T network segment in Azure VMware Solution (AVS)](tutorial-nsx-t-network-segment.md).
+## Verify the NSX-T segment configuration
-In our scenario, an NSX-T segment is configured in the AVS environment where the backend pool member virtual machine is attached.
+The following steps verify the configuration of the NSX-T segment in the Azure VMware Solution environment.
-1. Select **Segments** to view your configured segments. In this case, we see that Contoso-segment1 is connected to Contoso-T01 gateway, a Tier-1 flexible router.
+1. Select **Segments** to view your configured segments. You see Contoso-segment1 connected to Contoso-T01 gateway, a Tier-1 flexible router.
- :::image type="content" source="media/traffic-manager/nsx-t-segment-avs.png" alt-text="Screenshot showing segment profiles in NSX-T Manager." lightbox="media/traffic-manager/nsx-t-segment-avs.png":::
+ :::image type="content" source="media/traffic-manager/nsx-t-segment-avs.png" alt-text="Screenshot showing segment profiles in NSX-T Manager." lightbox="media/traffic-manager/nsx-t-segment-avs.png":::
-2. Select **Tier-1 Gateways** to see a list of your Tier-1 gateways with the number of linked segments. Select the segment linked to Contoso-T01. A window opens showing the logical interface configured on the Tier-01 router. This serves as gateway to the backend pool member virtual machine connected to the segment.
+1. Select **Tier-1 Gateways** to see a list of Tier-1 gateways with the number of linked segments.
:::image type="content" source="media/traffic-manager/nsx-t-segment-linked-2.png" alt-text="Screenshot showing gateway address of the selected segment.":::
-3. In the VM vSphere client, select the virtual machine to view its details. Note its IP address matches what we saw in step 3 of the preceding section: 172.29.1.10.
+1. Select the segment linked to Contoso-T01. A window opens showing the logical interface configured on the Tier-01 router. It serves as a gateway to the backend pool member VM connected to the segment.
+
+1. In the vSphere client, select the VM to view its details.
- :::image type="content" source="media/traffic-manager/nsx-t-vm-details.png" alt-text="Screenshot showing VM details in VSphere Client." lightbox="media/traffic-manager/nsx-t-vm-details.png":::
+ >[!NOTE]
+ >Its IP address matches VM backend pool member configured as a web server from the preceding section: 172.29.1.10.
-4. Select the virtual machine, then click **ACTIONS > Edit Settings** to verify connection to the NSX-T segment.
+ :::image type="content" source="media/traffic-manager/nsx-t-vm-details.png" alt-text="Screenshot showing VM details in vSphere Client." lightbox="media/traffic-manager/nsx-t-vm-details.png":::
+
+4. Select the VM, then select **ACTIONS > Edit Settings** to verify connection to the NSX-T segment.
## Create your Traffic Manager profile
-1. Log in to the [Azure portal](https://rc.portal.azure.com/#home). Under **Azure Services > Networking**, select **Traffic Manager profiles**.
+1. Sign in to the [Azure portal](https://rc.portal.azure.com/#home). Under **Azure Services > Networking**, select **Traffic Manager profiles**.
2. Select **+ Add** to create a new Traffic Manager profile.
-3. Provide profile name, routing method (we will use weighted in this scenario; see [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md)), subscription, and resource group, and select **Create**.
+3. Provide the following information and then select **Create**:
+
+ - Profile name
+ - Routing method (use [weighted](../traffic-manager/traffic-manager-routing-methods.md)
+ - Subscription
+ - Resource group
## Add external endpoints into the Traffic Manager profile
-1. Select the Traffic Manager profile from the search results pane, select **Endpoints** and then **+ Add**.
+1. Select the Traffic Manager profile from the search results pane, select **Endpoints**, and then **+ Add**.
+
+1. For each of the external endpoints in the different regions, enter the required details and then select **Add**:
+ - Type
+ - Name
+ - Fully Qualified domain name (FQDN) or IP
+ - Weight (assign a weight of 1 to each endpoint).
-2. Enter the required details: Type, Name, Fully Qualified domain name
-(FQDN) or IP, and Weight (in this scenario, we're assigning a weight of 1 to each endpoint). Select **Add**. This creates the external endpoint. The monitor status must be **Online**. Repeat the same steps to create two more external endpoints, one in a different region and the other on-premises. Once created, all three will display in the Traffic Manager profile, and the status of all three should be **Online**.
+   Once created, all three endpoints appear in the Traffic Manager profile. The monitor status of all three must be **Online**.
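
The profile and its external endpoints can also be scripted. This is only a sketch with placeholder names and a hypothetical target FQDN; repeat the endpoint command for each of the three regions:

```shell
# Sketch: create a Weighted Traffic Manager profile, then add one external endpoint with weight 1.
az network traffic-manager profile create \
  --name avs-tm-profile \
  --resource-group myResourceGroup \
  --routing-method Weighted \
  --unique-dns-name avs-tm-demo-unique \
  --ttl 30 --protocol HTTP --port 80 --path "/"

az network traffic-manager endpoint create \
  --name avs-gw-weu-endpoint \
  --profile-name avs-tm-profile \
  --resource-group myResourceGroup \
  --type externalEndpoints \
  --target "avs-gw-weu.westeurope.cloudapp.azure.com" \
  --weight 1
```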
-3. Select **Overview**. Copy the URL under **DNS Name**.
+3. Select **Overview** and copy the URL under **DNS Name**.
:::image type="content" source="media/traffic-manager/traffic-manager-endpoints.png" alt-text="Screenshot showing a Traffic Manager endpoint overview with DNS name highlighted." lightbox="media/traffic-manager/traffic-manager-endpoints.png":::
-4. Paste the DNS name URL in a browser. The following screenshot shows traffic directing to the West Europe region.
+4. Paste the DNS name URL in a browser. The screenshot shows traffic directed to the West Europe region.
:::image type="content" source="media/traffic-manager/traffic-to-west-europe.png" alt-text="Screenshot of browser window showing traffic routed to West Europe.":::
-5. Refresh your browser. The following screenshot shows traffic now directing to another set of backend pool members in the West US region.
+5. Refresh your browser. The screenshot shows traffic directed to another set of backend pool members in the West US region.
:::image type="content" source="media/traffic-manager/traffic-to-west-us.png" alt-text="Screenshot of browser window showing traffic routed to West US.":::
-6. Refresh your browser again. The following screenshot shows traffic now directing to the final set of backend pool members on premises.
+6. Refresh your browser again. The screenshot shows traffic directed to the final set of backend pool members on-premises.
:::image type="content" source="media/traffic-manager/traffic-to-on-premises.png" alt-text="Screenshot of browser window showing traffic routed to on-premises.":::
@@ -117,7 +127,7 @@ In our scenario, an NSX-T segment is configured in the AVS environment where the
Learn more about: -- [Using Azure Application Gateway on Azure VMware Solution (AVS)](protect-azure-vmware-solution-with-application-gateway.md)
+- [Using Azure Application Gateway on Azure VMware Solution](protect-azure-vmware-solution-with-application-gateway.md)
- [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md) - [Combining load-balancing services in Azure](../traffic-manager/traffic-manager-load-balancing-azure.md) - [Measuring Traffic Manager performance](../traffic-manager/traffic-manager-performance-considerations.md)
batch https://docs.microsoft.com/en-us/azure/batch/batch-quota-limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-quota-limit.md
@@ -2,7 +2,7 @@
title: Service quotas and limits description: Learn about default Azure Batch quotas, limits, and constraints, and how to request quota increases ms.topic: conceptual
-ms.date: 12/16/2020
+ms.date: 12/29/2020
ms.custom: seodec18 ---
@@ -28,7 +28,7 @@ Also note that quotas are not guaranteed values. Quotas can vary based on change
### Cores quotas in batch service mode
-The enforcement of dedicated core quotas is being improved, with the changes being made available in stages and completed for all Batch accounts by the end of December 2020.
+The enforcement of dedicated core quotas is being improved, with the changes being made available in stages and completed for all Batch accounts by the end of January 2021.
Core quotas exist for each VM series supported by Batch and are displayed on the **Quotas** page in the portal. VM series quota limits can be updated with a support request, as detailed below.
@@ -65,7 +65,7 @@ Pool size limits are set by the Batch service. Unlike [resource quotas](#resourc
## Other limits
-Additional limits set by the Batch service. Unlike [resource quotas](#resource-quotas), these values cannot be changed.
+These additional limits are set by the Batch service. Unlike [resource quotas](#resource-quotas), these values cannot be changed.
| **Resource** | **Maximum Limit** | | --- | --- |
@@ -75,6 +75,7 @@ Additional limits set by the Batch service. Unlike [resource quotas](#resource-q
| Application packages per pool | 10 | | Maximum task lifetime | 180 days<sup>1</sup> | | [Mounts](virtual-file-mount.md) per compute node | 10 |
+| Certificates per pool | 12 |
<sup>1</sup> The maximum lifetime of a task, from when it is added to the job to when it completes, is 180 days. Completed tasks persist for seven days; data for tasks not completed within the maximum lifetime is not accessible.
@@ -86,7 +87,7 @@ To view your Batch account quotas in the [Azure portal](https://portal.azure.com
1. Select **Quotas** on the Batch account's menu. 1. View the quotas currently applied to the Batch account.
-:::image type="content" source="./media/batch-quota-limit/account-quota-portal.png" alt-text="Batch account quotas":::
+:::image type="content" source="./media/batch-quota-limit/account-quota-portal.png" alt-text="Screenshot showing Batch account quotas in the Azure portal.":::
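
If you prefer the command line, the region-level quota can also be read with the Azure CLI; a minimal sketch (the region is a placeholder):

```shell
# Sketch: show the Batch account quota for your subscription in one region.
az batch location quotas show --location eastus --output table
```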
## Increase a quota
@@ -95,26 +96,26 @@ You can request a quota increase for your Batch account or your subscription usi
1. Select the **Help + support** tile on your portal dashboard, or the question mark (**?**) in the upper-right corner of the portal. 1. Select **New support request** > **Basics**. 1. In **Basics**:
-
+ 1. **Issue Type** > **Service and subscription limits (quotas)**
-
+ 1. Select your subscription.
-
+ 1. **Quota type** > **Batch**
-
+ Select **Next**.
-
+ 1. In **Details**:
-
+ 1. In **Provide details**, specify the location, quota type, and Batch account.
-
- ![Batch quota increase][quota_increase]
+
+ :::image type="content" source="media/batch-quota-limit/quota-increase.png" alt-text="Screenshot of the Quota details screen when requesting a quota increase.":::
Quota types include: * **Per Batch account** Values specific to a single Batch account, including dedicated and low-priority cores, and number of jobs and pools.
-
+ * **Per region** Values that apply to all Batch accounts in a region and includes the number of Batch accounts per region per subscription.
@@ -125,11 +126,11 @@ You can request a quota increase for your Batch account or your subscription usi
Select **Next**. 1. In **Contact information**:
-
+ 1. Select a **Preferred contact method**.
-
+ 1. Verify and enter the required contact details.
-
+ Select **Create** to submit the support request. Once you've submitted your support request, Azure support will contact you. Quota requests may be completed within a few minutes or up to two business days.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/rest-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/rest-tutorial.md new file mode 100644
@@ -0,0 +1,220 @@
+---
+author: PatrickFarley
+ms.author: pafarley
+ms.service: cognitive-services
+ms.date: 12/09/2020
+ms.topic: include
+---
+
+Get started with the Custom Vision REST API. Follow these steps to call the API and build an image classification model. You'll create a project, add tags, train the project, and use the project's prediction endpoint URL to programmatically test it. Use this example as a template for building your own image recognition app.
+
+> [!NOTE]
+> If you want to build and train a classification model _without_ writing code, see the [browser-based guidance](../../getting-started-build-a-classifier.md) instead.
+
+Use the Custom Vision REST API to:
+
+* Create a new Custom Vision project
+* Add tags to the project
+* Upload and tag images
+* Train the project
+* Publish the current iteration
+* Test the prediction endpoint
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have your Azure subscription, <a href="https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision" title="Create a Custom Vision resource" target="_blank">create a Custom Vision resource <span class="docon docon-navigate-external x-hidden-focus"></span></a> in the Azure portal to create a training and prediction resource and get your keys and endpoint. Wait for it to deploy and click the **Go to resource** button.
+ * You will need the key and endpoint from the resources you create to connect your application to Custom Vision. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
++
+## Create a new Custom Vision project
+
+You'll use a command like the following to create an image classification project. The created project will show up on the [Custom Vision website](https://customvision.ai/).
++
+:::code language="shell" source="~/cognitive-services-quickstart-code/curl/custom-vision/image-classifier.sh" ID="createproject":::
+
+Copy the command to a text editor and make the following changes:
+
+* Replace `{subscription key}` with your valid Custom Vision subscription key.
+* Replace `{endpoint}` with the endpoint that corresponds to your subscription key.
+ [!INCLUDE [subdomains-note](../../../../../includes/cognitive-services-custom-subdomains-note.md)]
+* Replace `{name}` with the name of your project.
+* Optionally set other URL parameters to configure what type of model your project will use. See the [CreateProject API](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeae) for options.
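
The referenced command lives in the sample repository. As a rough idea of its shape only, a create-project call looks roughly like the sketch below; confirm the exact route and header against the CreateProject API reference linked above.

```shell
# Sketch: create a Custom Vision project (placeholders in curly braces).
curl -X POST "https://{endpoint}/customvision/v3.3/training/projects?name={name}" \
  -H "Training-key: {subscription key}" \
  -H "Content-Length: 0"
```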
+
+You'll receive a JSON response like the following. Save the `"id"` value of your project to a temporary location.
+
+```json
+{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "name": "string",
+ "description": "string",
+ "settings": {
+ "domainId": "00000000-0000-0000-0000-000000000000",
+ "classificationType": "Multiclass",
+ "targetExportPlatforms": [
+ "CoreML"
+ ],
+ "useNegativeSet": true,
+ "detectionParameters": "string",
+ "imageProcessingSettings": {
+ "augmentationMethods": {}
+ }
+ },
+ "created": "string",
+ "lastModified": "string",
+ "thumbnailUri": "string",
+ "drModeEnabled": true,
+ "status": "Succeeded"
+}
+```
+
+## Add tags to the project
+
+Use the following command to define the tags that you will train the model on.
+
+:::code language="shell" source="~/cognitive-services-quickstart-code/curl/custom-vision/image-classifier.sh" ID="createtag":::
+
+* Again, insert your own key and endpoint URL.
+* Replace `{projectId}` with your own project ID.
+* Replace `{name}` with the name of the tag you want to use.
+
+Repeat this process for all the tags you'd like to use in your project. If you're using the example images provided, add the tags `"Hemlock"` and `"Japanese Cherry"`.
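
For orientation, a create-tag call follows the same pattern as the project call. This sketch uses the placeholders described above (URL-encode tag names that contain spaces) and should be checked against the API reference.

```shell
# Sketch: create the "Hemlock" tag; repeat with name=Japanese%20Cherry for the second tag.
curl -X POST "https://{endpoint}/customvision/v3.3/training/projects/{projectId}/tags?name=Hemlock" \
  -H "Training-key: {subscription key}" \
  -H "Content-Length: 0"
```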
+
+You'll get a JSON response like the following. Save the `"id"` value of each tag to a temporary location.
+
+```json
+{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "name": "string",
+ "description": "string",
+ "type": "Regular",
+ "imageCount": 0
+}
+```
+
+## Upload and tag images
+
+Next, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.
+
+Use the following command to upload the images and apply tags: once for the "Hemlock" images, and separately for the "Japanese Cherry" images. See the [Create Images From Data](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb5) API for more options.
+
+:::code language="shell" source="~/cognitive-services-quickstart-code/curl/custom-vision/image-classifier.sh" ID="uploadimages":::
+
+* Again, insert your own key and endpoint URL.
+* Replace `{projectId}` with your own project ID.
+* Replace `{tagArray}` with the ID of a tag.
+* Then, populate the body of the request with the binary data of the images you want to tag.
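
A single upload might look like the sketch below: one local image file is sent as the binary body and tagged with one tag ID (the file name is a placeholder; verify the route against the Create Images From Data reference above).

```shell
# Sketch: upload one local image and apply the tag whose ID is in {tagArray}.
curl -X POST "https://{endpoint}/customvision/v3.3/training/projects/{projectId}/images?tagIds={tagArray}" \
  -H "Training-key: {subscription key}" \
  -H "Content-Type: application/octet-stream" \
  --data-binary "@hemlock_1.jpg"
```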
+
+## Train the project
+
+This method trains the model on the tagged images you've uploaded and returns an ID for the current project iteration.
+
+:::code language="shell" source="~/cognitive-services-quickstart-code/curl/custom-vision/image-classifier.sh" ID="trainproject":::
+
+* Again, insert your own key and endpoint URL.
+* Replace `{projectId}` with your own project ID.
+* Optionally use other URL parameters. See the [Train Project](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee1) API for options.
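
A minimal training request is sketched below; it needs no body unless you use the selected-tags option described in the tip that follows (check the Train Project reference above for the exact route).

```shell
# Sketch: queue a training iteration for the project.
curl -X POST "https://{endpoint}/customvision/v3.3/training/projects/{projectId}/train" \
  -H "Training-key: {subscription key}" \
  -H "Content-Length: 0"
```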
+
+> [!TIP]
+> Train with selected tags
+>
+> You can optionally train on only a subset of your applied tags. You may want to do this if you haven't applied enough of certain tags yet, but you do have enough of others. Add the optional JSON content to the body of your request. Populate the `"selectedTags"` array with the IDs of the tags you want to use.
+> ```json
+> {
+> "selectedTags": [
+> "00000000-0000-0000-0000-000000000000"
+> ]
+> }
+> ```
+
+The JSON response contains information about your trained project, including the iteration ID (`"id"`). Save this value for the next step.
+
+```json
+{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "name": "string",
+ "status": "string",
+ "created": "string",
+ "lastModified": "string",
+ "trainedAt": "string",
+ "projectId": "00000000-0000-0000-0000-000000000000",
+ "exportable": true,
+ "exportableTo": [
+ "CoreML"
+ ],
+ "domainId": "00000000-0000-0000-0000-000000000000",
+ "classificationType": "Multiclass",
+ "trainingType": "Regular",
+ "reservedBudgetInHours": 0,
+ "trainingTimeInMinutes": 0,
+ "publishName": "string",
+ "originalPublishResourceId": "string"
+}
+```
+
+## Publish the current iteration
+
+This method makes the current iteration of the model available for querying. You use the returned model name as a reference to send prediction requests.
+
+:::code language="shell" source="~/cognitive-services-quickstart-code/curl/custom-vision/image-classifier.sh" ID="publish":::
+
+* Again, insert your own key and endpoint URL.
+* Replace `{projectId}` with your own project ID.
+* Replace `{iterationId}` with the ID returned in the previous step.
+* Replace `{publishedName}` with the name you'd like to assign to your prediction model.
+* Replace `{predictionId}` with your own prediction resource ID. You can find it on your prediction resource's **Overview** tab in the Azure portal, listed as **Subscription ID**.
+* Optionally use other URL parameters. See the [Publish Iteration](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fdded5) API.
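
Put together, a publish call might look like this sketch; the query parameters mirror the placeholders listed above (verify the route against the Publish Iteration reference).

```shell
# Sketch: publish the trained iteration under a friendly name so it can serve predictions.
curl -X POST "https://{endpoint}/customvision/v3.3/training/projects/{projectId}/iterations/{iterationId}/publish?publishName={publishedName}&predictionId={predictionId}" \
  -H "Training-key: {subscription key}" \
  -H "Content-Length: 0"
```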
+
+## Test the prediction endpoint
+
+Finally, use this command to test your trained model by uploading a new image for it to classify with tags. You may use the image in the "Test" folder of the sample files you downloaded earlier.
+
+:::code language="shell" source="~/cognitive-services-quickstart-code/curl/custom-vision/image-classifier.sh" ID="publish":::
+
+* Again, insert your own key and endpoint URL.
+* Replace `{projectId}` with your own project ID.
+* Replace `{publishedName}` with the name you used in the previous step.
+* Add the binary data of your local image to the request body.
+* Optionally use other URL parameters. See the [Classify Image](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3) API.
+
+The returned JSON response will list each of the tags that the model applied to your image, along with probability scores for each tag.
+
+```json
+{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "project": "00000000-0000-0000-0000-000000000000",
+ "iteration": "00000000-0000-0000-0000-000000000000",
+ "created": "string",
+ "predictions": [
+ {
+ "probability": 0.0,
+ "tagId": "00000000-0000-0000-0000-000000000000",
+ "tagName": "string",
+ "boundingBox": {
+ "left": 0.0,
+ "top": 0.0,
+ "width": 0.0,
+ "height": 0.0
+ },
+ "tagType": "Regular"
+ }
+ ]
+}
+```
+
+[!INCLUDE [clean-ic-project](../../includes/clean-ic-project.md)]
+
+## Next steps
+
+Now you've done every step of the image classification process using the REST API. This sample executes a single training iteration, but often you'll need to train and test your model multiple times in order to make it more accurate.
+
+> [!div class="nextstepaction"]
+> [Test and retrain a model](../../test-your-model.md)
+
+* [What is Custom Vision?](../../overview.md)
+* [API reference documentation (training)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeae)
+* [API reference documentation (prediction)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/overview.md
@@ -48,4 +48,4 @@ As with all of the Cognitive Services, developers using the Custom Vision servic
## Next steps
-Follow the [Build a classifier](getting-started-build-a-classifier.md) guide to get started using Custom Vision on the web portal, or complete a [client library quickstart](quickstarts/image-classification.md) to implement the basic scenarios in code.
\ No newline at end of file
+Follow the [Build a classifier](getting-started-build-a-classifier.md) guide to get started using Custom Vision on the web portal, or complete a [quickstart](quickstarts/image-classification.md) to implement the basic scenarios in code.
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/quickstarts/image-classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/quickstarts/image-classification.md
@@ -1,7 +1,7 @@
---
-title: "Quickstart: Image classification with Custom Vision client library"
+title: "Quickstart: Image classification with Custom Vision client library or REST API"
titleSuffix: Azure Cognitive Services
-description: "Quickstart: Create an image classification project, add tags, upload images, train your project, and make a prediction using the Custom Vision client library"
+description: "Quickstart: Create an image classification project, add tags, upload images, train your project, and make a prediction using the Custom Vision client library or the REST API with cURL"
author: PatrickFarley ms.author: pafarley ms.service: cognitive-services
@@ -10,10 +10,10 @@ ms.topic: quickstart
ms.date: 10/25/2020 ms.custom: "devx-track-python, devx-track-js, devx-track-csharp, cog-serv-seo-aug-2020" keywords: custom vision, image recognition, image recognition app, image analysis, image recognition software
-zone_pivot_groups: programming-languages-set-one
+zone_pivot_groups: programming-languages-set-cusvis
---
-# Quickstart: Create an image classification project with the Custom Vision client library
+# Quickstart: Create an image classification project with the Custom Vision client library or REST API
::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](../includes/quickstarts/csharp-tutorial.md)]
@@ -34,3 +34,7 @@ zone_pivot_groups: programming-languages-set-one
::: zone pivot="programming-language-python" [!INCLUDE [python quickstart](../includes/quickstarts/python-tutorial.md)] ::: zone-end+
+::: zone pivot="programming-language-rest-api"
+[!INCLUDE [REST API quickstart](../includes/quickstarts/rest-tutorial.md)]
+::: zone-end
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/release-notes.md
@@ -37,7 +37,7 @@ ms.author: pafarley
- Custom Vision Service has entered General Availability on Azure! - Added Advanced Training feature with a new machine learning backend for improved performance, especially on challenging datasets and fine-grained classification. With advanced training, you can specify a compute time budget for training and Custom Vision will experimentally identify the best training and augmentation settings. For quick iterations, you can continue to use the existing fast training.-- Introduced 3.0 APIs. Announced coming deprecation of pre-3.0 APIs on October 1, 2019. See the documentation quickstarts for [.Net](./quickstarts/image-classification.md), [Python](./quickstarts/image-classification.md), [Node](./quickstarts/image-classification.md), [Java](./quickstarts/image-classification.md), or [Go](./quickstarts/image-classification.md) for examples on how to get started.
+- Introduced 3.0 APIs. Announced coming deprecation of pre-3.0 APIs on October 1, 2019. See the documentation [quickstarts](./quickstarts/image-classification.md) for examples on how to get started.
- Replaced "Default Iterations" with Publish/Unpublish in the 3.0 APIs. - New model export targets have been added. Dockerfile export has been upgraded to support ARM for Raspberry Pi 3. Export support has been added to the [Vision AI Dev Kit.](https://visionaidevkit.com/). - Increased limit of Tags per project to 500 for S0 tier. Increased limit of Images per project to 100,000 for S0 tier.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/storage-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
@@ -201,4 +201,5 @@ The `"exportStatus"` field may be either `"ExportCompleted"` or `"ExportFailed"`
## Next steps In this guide, you learned how to copy and move a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
-* [REST API reference documentation](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
+* [REST API reference documentation (training)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
+* [REST API reference documentation (prediction)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/includes/chinese-language-support-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/includes/chinese-language-support-notes.md
@@ -1,12 +1,12 @@
--- title: Chinese support notes services: cognitive-services
-author: IEvangelist
+author: aahill
manager: nitinme ms.service: cognitive-services ms.topic: include
-ms.date: 10/07/2019
-ms.author: dapine
+ms.date: 12/29/2020
+ms.author: aahi
--- ### *Chinese support notes
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/includes/get-started-get-intent-create-pizza-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/includes/get-started-get-intent-create-pizza-app.md
@@ -2,14 +2,15 @@
title: include file description: include file services: cognitive-services
-author: roy-har
-manager: diberry
+author: aahill
+manager: nitinme
ms.service: cognitive-services
-ms.date: 06/03/2020
+ms.date: 12/29/2020
ms.subservice: language-understanding ms.topic: include ms.custom: include file
-ms.author: roy-har
+ms.author: aahi
+ms.reviewer: roy-har
--- 1. Select [pizza-app-for-luis-v6.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/apps/pizza-app-for-luis-v6.json) to bring up the GitHub page for the `pizza-app-for-luis.json` file. 1. Right-click or long tap the **Raw** button and select **Save link as** to save the `pizza-app-for-luis.json` to your computer.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/includes/get-started-get-model-create-pizza-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/includes/get-started-get-model-create-pizza-app.md
@@ -2,14 +2,15 @@
title: include file description: include file services: cognitive-services
-author: roy-har
-manager: diberry
+author: aahill
+manager: nitinme
ms.service: cognitive-services ms.date: 06/04/2020 ms.subservice: language-understanding ms.topic: include ms.custom: include file
-ms.author: roy-har
+ms.author: aahi
+ms.reviewer: roy-har
--- Create the pizza app.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/includes/luis-portal-note https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/includes/luis-portal-note.md
@@ -7,7 +7,7 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: language-understanding ms.topic: include
-ms.date: 10/16/2020
+ms.date: 12/16/2020
---
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/includes/text-analytics-support-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/includes/text-analytics-support-notes.md
@@ -1,12 +1,12 @@
--- title: Chinese support notes services: cognitive-services
-author: IEvangelist
+author: aahill
manager: nitinme ms.service: cognitive-services ms.topic: include
-ms.date: 10/07/2019
-ms.author: dapine
+ms.date: 12/29/2020
+ms.author: aahi
--- ### **Text analytics support notes
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-concept-batch-test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-batch-test.md deleted file mode 100644
@@ -1,132 +0,0 @@
-title: Batch testing - LUIS
-titleSuffix: Azure Cognitive Services
-description: Use batch testing to continuously work on your application to refine it and improve its language understanding.
-services: cognitive-services
-
-manager: nitinme
-ms.custom: seodec18
-ms.service: cognitive-services
-ms.subservice: language-understanding
-ms.topic: conceptual
-ms.date: 10/25/2019
--
-# Batch testing with 1000 utterances in LUIS portal
-
-Batch testing validates your active trained version to measure its prediction accuracy. A batch test helps you view the accuracy of each intent and entity in your active version, displaying results with a chart. Review the batch test results to take appropriate action to improve accuracy, such as adding more example utterances to an intent if your app frequently fails to identify the correct intent or labeling entities within the utterance.
-
-## Group data for batch test
-
-It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained.
-
-## A data set of utterances
-
-Submit a batch file of utterances, known as a *data set*, for batch testing. The data set is a JSON-formatted file containing a maximum of 1,000 labeled **non-duplicate** utterances. You can test up to 10 data sets in an app. If you need to test more, delete a data set and then add a new one.
-
-|**Rules**|
-|--|
-|*No duplicate utterances|
-|1000 utterances or less|
-
-*Duplicates are considered exact string matches, not matches that are tokenized first.
-
-## Entities allowed in batch tests
-
-All custom entities in the model appear in the batch test entities filter even if there are no corresponding entities in the batch file data.
-
-<a name="json-file-with-no-duplicates"></a>
-<a name="example-batch-file"></a>
-
-## Batch file format
-
-The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](luis-concept-entity-types.md#types-of-entities) you expect to be detected.
-
-## Batch syntax template for intents with entities
-
-Use the following template to start your batch file:
-
-```JSON
-[
- {
- "text": "example utterance goes here",
- "intent": "intent name goes here",
- "entities":
- [
- {
- "entity": "entity name 1 goes here",
- "startPos": 14,
- "endPos": 23
- },
- {
- "entity": "entity name 2 goes here",
- "startPos": 14,
- "endPos": 23
- }
- ]
- }
-]
-```
-
-The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use startIndex and endIndex properties.
-
-[!INCLUDE [Entity roles in batch testing - currently not supported](../../../includes/cognitive-services-luis-roles-not-supported-in-batch-testing.md)]
-
-## Batch syntax template for intents without entities
-
-Use the following template to start your batch file without entities:
-
-```JSON
-[
- {
- "text": "example utterance goes here",
- "intent": "intent name goes here",
- "entities": []
- }
-]
-```
-
-If you do not want to test entities, include the `entities` property and set the value as an empty array, `[]`.
--
-## Common errors importing a batch
-
-Common errors include:
-
-> * More than 1,000 utterances
-> * An utterance JSON object that doesn't have an entities property. The property can be an empty array.
-> * Word(s) labeled in multiple entities
-> * Entity label starting or ending on a space.
-
-## Batch test state
-
-LUIS tracks the state of each data set's last test. This includes the size (number of utterances in the batch), last run date, and last result (number of successfully predicted utterances).
-
-<a name="sections-of-the-results-chart"></a>
-
-## Batch test results
-
-The batch test result is a scatter graph, known as an error matrix. This graph is a 4-way comparison of the utterances in the batch file and the current model's predicted intent and entities.
-
-Data points on the **False Positive** and **False Negative** sections indicate errors, which should be investigated. If all data points are on the **True Positive** and **True Negative** sections, then your app's accuracy is perfect on this data set.
-
-![Four sections of chart](./media/luis-concept-batch-test/chart-sections.png)
-
-This chart helps you find utterances that LUIS predicts incorrectly based on its current training. The results are displayed per region of the chart. Select individual points on the graph to review the utterance information or select region name to review utterance results in that region.
-
-![Batch testing](./media/luis-concept-batch-test/batch-testing.png)
-
-## Errors in the results
-
-Errors in the batch test indicate intents that are not predicted as noted in the batch file. Errors are indicated in the two red sections of the chart.
-
-The false positive section indicates that an utterance matched an intent or entity when it shouldn't have. The false negative indicates an utterance did not match an intent or entity when it should have.
-
-## Fixing batch errors
-
-If there are errors in the batch testing, you can either add more utterances to an intent, and/or label more utterances with the entity to help LUIS make the discrimination between intents. If you have added utterances, and labeled them, and still get prediction errors in batch testing, consider adding a [phrase list](luis-concept-feature.md) feature with domain-specific vocabulary to help LUIS learn faster.
-
-## Next steps
-
-* Learn how to [test a batch](luis-how-to-batch-test.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-how-to-batch-test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-batch-test.md
@@ -9,64 +9,137 @@ ms.custom: seodec18
ms.service: cognitive-services ms.subservice: language-understanding ms.topic: how-to
-ms.date: 05/17/2020
+ms.date: 12/29/2020
--- # Batch testing with a set of example utterances
- Batch testing is a comprehensive test on your current trained model to measure its performance in LUIS. The data sets used for batch testing should not include example utterances in the intents or utterances received from the prediction runtime endpoint.
+Batch testing validates your active trained version to measure its prediction accuracy. A batch test helps you view the accuracy of each intent and entity in your active version. Review the batch test results to take appropriate action to improve accuracy, such as adding more example utterances to an intent if your app frequently fails to identify the correct intent or labeling entities within the utterance.
+
+## Group data for batch test
+
+It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained.
+
+The batch JSON file you use should include utterances with top-level machine-learning entities labeled, including their start and end positions. The utterances should not be part of the examples already in the app. They should be utterances you want to positively predict for intent and entities.
+
+You can separate out tests by intent and/or entity or have all the tests (up to 1000 utterances) in the same file.
+
+### Common errors importing a batch
+
+If you run into errors uploading your batch file to LUIS, check for the following common issues:
+
+* More than 1,000 utterances in a batch file
+* An utterance JSON object that doesn't have an entities property. The property can be an empty array.
+* Word(s) labeled in multiple entities
+* Entity labels starting or ending on a space.
+
+## Fixing batch errors
+
+If there are errors in the batch testing, you can either add more utterances to an intent, and/or label more utterances with the entity to help LUIS make the discrimination between intents. If you have added utterances, and labeled them, and still get prediction errors in batch testing, consider adding a [phrase list](luis-concept-feature.md) feature with domain-specific vocabulary to help LUIS learn faster.
+ <a name="batch-testing"></a>
-## Import a dataset file for batch testing
+## Batch testing using the LUIS portal
-1. Select **Test** in the top bar, and then select **Batch testing panel**.
+### Import and train an example app
- ![Batch Testing Link](./media/luis-how-to-batch-test/batch-testing-link.png)
+Import an app that takes a pizza order such as `1 pepperoni pizza on thin crust`.
+
+1. Download and save [app JSON file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/apps/pizza-with-machine-learned-entity.json?raw=true).
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Select the arrow next to **New app** and click **Import as JSON** to import the JSON into a new app. Name the app `Pizza app`.
++
+1. Select **Train** in the top-right corner of the navigation to train the app.
++
+[!INCLUDE [Entity roles in batch testing - currently not supported](../../../includes/cognitive-services-luis-roles-not-supported-in-batch-testing.md)]
+
+### Batch test file
+
+The example JSON includes one utterance with a labeled entity to illustrate what a test file looks like. In your own tests, you should have many utterances with correct intent and machine-learning entity labeled.
-2. Select **Import dataset**. The **Import new dataset** dialog box appears. Select **Choose File** and locate a JSON file with the correct [JSON format](luis-concept-batch-test.md#batch-file-format) that contains *no more than 1,000* utterances to test.
+1. Create `pizza-with-machine-learned-entity-test.json` in a text editor or [download](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/batch-tests/pizza-with-machine-learned-entity-test.json?raw=true) it.
- Import errors are reported in a red notification bar at the top of the browser. When an import has errors, no dataset is created. For more information, see [Common errors](luis-concept-batch-test.md#common-errors-importing-a-batch).
+2. In the JSON-formatted batch file, add an utterance with the **Intent** you want predicted in the test.
-3. In the **Dataset Name** field, enter a name for your dataset file. The dataset file includes an **array of utterances** including the *labeled intent* and *entities*. Review the [example batch file](luis-concept-batch-test.md#batch-file-format) for syntax.
+ [!code-json[Add the intents to the batch test file](~/samples-cognitive-services-data-files/luis/batch-tests/pizza-with-machine-learned-entity-test.json "Add the intent to the batch test file")]
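+
+If you build the file yourself rather than downloading it, the portal expects a JSON array of utterances, each with an expected intent and an `entities` array (which may be empty). The intent, entity name, and character positions below are illustrative only; use values that match your own app and utterances.
+
+```JSON
+[
+    {
+        "text": "1 pepperoni pizza on thin crust",
+        "intent": "ModifyOrder",
+        "entities": [
+            {
+                "entity": "ModifyOrder",
+                "startPos": 0,
+                "endPos": 30
+            }
+        ]
+    }
+]
+```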
-4. Select **Done**. The dataset file is added.
+## Run the batch
-## Run, rename, export, or delete dataset
+1. Select **Test** in the top navigation bar.
-To run, rename, export, or delete the dataset, use the ellipsis (***...***) button at the end of the dataset row.
+2. Select **Batch testing panel** in the right-side panel.
+
+ ![Batch Testing Link](./media/luis-how-to-batch-test/batch-testing-link.png)
-> [!div class="mx-imgBorder"]
-> ![Screenshot of batch tests list with options](./media/luis-how-to-batch-test/batch-testing-options.png)
+3. Select **Import**. In the dialog box that appears, select **Choose File** and locate a JSON file with the correct JSON format that contains *no more than 1,000* utterances to test.
-## Run a batch test on your trained app
+ Import errors are reported in a red notification bar at the top of the browser. When an import has errors, no dataset is created. For more information, see [Common errors](#common-errors-importing-a-batch).
-To run the test, select the dataset name then select **Run** from the contextual toolbar. When the test completes, this row displays the test result of the dataset.
+4. Choose the file location of the `pizza-with-machine-learned-entity-test.json` file.
-The downloadable dataset is the same file that was uploaded for batch testing.
+5. Name the dataset `pizza test` and select **Done**.
-|State|Meaning|
-|--|--|
-|![Successful test green circle icon](./media/luis-how-to-batch-test/batch-test-result-green.png)|All utterances are successful.|
-|![Failing test red x icon](./media/luis-how-to-batch-test/batch-test-result-red.png)|At least one utterance intent did not match the prediction.|
-|![Ready to test icon](./media/luis-how-to-batch-test/batch-test-result-blue.png)|Test is ready to run.|
+6. Select the **Run** button. After the batch test runs, select **See results**.
+
+ > [!TIP]
+ > * Selecting **Download** will download the same file that you uploaded.
+ > * If you see the batch test failed, at least one utterance intent did not match the prediction.
<a name="access-batch-test-result-details-in-a-visualized-view"></a>
-## View batch test results
+### Review batch results for intents
+
+To review the batch test results, select **See results**. The test results show graphically how the test utterances were predicted against the active version.
+
+The batch chart displays four quadrants of results. To the right of the chart is a filter. The filter contains intents and entities. When you select a section of the chart or a point within the chart, the associated utterance(s) display below the chart.
+
+While hovering over the chart, a mouse wheel can enlarge or reduce the display in the chart. This is useful when there are many points on the chart clustered tightly together.
+
+The chart is in four quadrants, with two of the sections displayed in red.
+
+1. Select the **ModifyOrder** intent in the filter list. The utterance is predicted as a **True Positive** meaning the utterance successfully matched its positive prediction listed in the batch file.
+
+ > [!div class="mx-imgBorder"]
+ > ![Utterance successfully matched its positive prediction](./media/luis-tutorial-batch-testing/intent-predicted-true-positive.png)
+
+ The green checkmarks in the filters list also indicate the success of the test for each intent. All the other intents are listed with a 1/1 positive score because the utterance was tested against each intent, as a negative test for any intents not listed in the batch test.
+
+1. Select the **Confirmation** intent. This intent isn't listed in the batch test so this is a negative test of the utterance that is listed in the batch test.
+
+ > [!div class="mx-imgBorder"]
+ > ![Utterance successfully predicted negative for unlisted intent in batch file](./media/luis-tutorial-batch-testing/true-negative-intent.png)
+
+ The negative test was successful, as noted with the green text in the filter, and the grid.
+
+### Review batch test results for entities
-To review the batch test results, select **See results**.
+The ModifyOrder entity, as a machine-learned entity with subentities, displays whether the top-level entity matched and how the subentities are predicted.
+
+1. Select the **ModifyOrder** entity in the filter list then select the circle in the grid.
+
+1. The entity prediction displays below the chart. The display includes solid lines for predictions that match the expectation and dotted lines for predictions that don't match the expectation.
+
+ > [!div class="mx-imgBorder"]
+ > ![Entity parent successfully predicted in batch file](./media/luis-tutorial-batch-testing/labeled-entity-prediction.png)
<a name="filter-chart-results-by-intent-or-entity"></a>
-## Filter chart results
+#### Filter chart results
To filter the chart by a specific intent or entity, select the intent or entity in the right-side filtering panel. The data points and their distribution update in the graph according to your selection. ![Visualized Batch Test Result](./media/luis-how-to-batch-test/filter-by-entity.png)
-## View single-point utterance data
+### Chart result examples
+
+Using the chart in the LUIS portal, you can perform the following actions:
+
+#### View single-point utterance data
In the chart, hover over a data point to see the certainty score of its prediction. Select a data point to retrieve its corresponding utterance in the utterances list at the bottom of the page.
@@ -76,7 +149,7 @@ In the chart, hover over a data point to see the certainty score of its predicti
<a name="relabel-utterances-and-retrain"></a> <a name="false-test-results"></a>
-## View section data
+#### View section data
In the four-section chart, select the section name, such as **False Positive** at the top-right of the chart. Below the chart, all utterances in that section display below the chart in a list.
@@ -88,7 +161,100 @@ The two sections of the chart in red indicate utterances that did not match the
The two sections of the chart in green did match the expected prediction.
-[!INCLUDE [Entity roles in batch testing - currently not supported](../../../includes/cognitive-services-luis-roles-not-supported-in-batch-testing.md)]
+## Batch testing using the REST API
+
+LUIS lets you batch test using the LUIS portal and REST API. The endpoints for the REST API are listed below. For information on batch testing using the LUIS portal, see the previous section. Use the complete URLs below, replacing the placeholder values with your own LUIS Prediction key and endpoint.
+
+Remember to add your LUIS key to `Apim-Subscription-Id` in the header, and set `Content-Type` to `application/json`.
+
+### Start a batch test
+
+Start a batch test using either an app version ID or a publishing slot. Send a **POST** request to one of the following endpoint formats. Include your batch file in the body of the request.
+
+Publishing slot
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations`
+
+App version ID
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations`
+
+These endpoints will return an operation ID that you will use to check the status and get the results.
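+
+As a rough sketch, starting a batch test against a publishing slot with cURL could look like this. The endpoint and headers come from the formats above; `batch-test.json` is an assumed local file containing utterances in the format described under **Batch file of utterances** below.
+
+```bash
+# Illustrative sketch: start a batch test against a publishing slot.
+curl -v -X POST "<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations" \
+  -H "Apim-Subscription-Id: <YOUR-PREDICTION-KEY>" \
+  -H "Content-Type: application/json" \
+  --data "@batch-test.json"
+```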
++
+### Get the status of an ongoing batch test
+
+Use the operation ID from the batch test you started to get its status from the following endpoint formats:
+
+Publishing slot
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/status`
+
+App version ID
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/status`
+
+### Get the results from a batch test
+
+Use the operation ID from the batch test you started to get its results from the following endpoint formats:
+
+Publishing slot
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/result`
+
+App version ID
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/result`
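+
+For example, a sketch of polling the status and then retrieving the results for a slot-based test, using the same placeholder conventions as above:
+
+```bash
+# Illustrative sketch: check whether the batch test has finished...
+curl -v -X GET "<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/status" \
+  -H "Apim-Subscription-Id: <YOUR-PREDICTION-KEY>"
+
+# ...then fetch the results once the status reports completion.
+curl -v -X GET "<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/result" \
+  -H "Apim-Subscription-Id: <YOUR-PREDICTION-KEY>"
+```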
++
+### Batch file of utterances
+
+Submit a batch file of utterances, known as a *data set*, for batch testing. The data set is a JSON-formatted file containing a maximum of 1,000 labeled utterances. You can test up to 10 data sets in an app. If you need to test more, delete a data set and then add a new one. All custom entities in the model appear in the batch test entities filter even if there are no corresponding entities in the batch file data.
+
+The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](luis-concept-entity-types.md#types-of-entities) you expect to be detected.
+
+### Batch syntax template for intents with entities
+
+Use the following template to start your batch file:
+
+```JSON
+{
+ "LabeledTestSetUtterances": [
+ {
+ "text": "play a song",
+ "intent": "play_music",
+ "entities": [
+ {
+ "entity": "song_parent",
+ "startPos": 0,
+ "endPos": 15,
+ "children": [
+ {
+ "entity": "pre_song",
+ "startPos": 0,
+ "endPos": 3
+ },
+ {
+ "entity": "song_info",
+ "startPos": 5,
+ "endPos": 15
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+
+```
+
+The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use startIndex and endIndex properties.
+
+If you do not want to test entities, include the `entities` property and set the value as an empty array, `[]`.
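+
+For example, an intent-only test utterance wrapped in the REST API's `LabeledTestSetUtterances` object would look like the following sketch (the utterance and intent name are placeholders):
+
+```JSON
+{
+    "LabeledTestSetUtterances": [
+        {
+            "text": "play a song",
+            "intent": "play_music",
+            "entities": []
+        }
+    ]
+}
+```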
+
+### REST API batch test results
+
+There are several objects returned by the API:
+
+* Information about the intent and entity models, such as precision, recall, and F-score.
+* Information about each entity model, such as precision, recall, and F-score.
+ * Using the `verbose` flag, you can get more information about the entity, such as `entityTextFScore` and `entityTypeFScore`.
+* Provided utterances with the predicted and labeled intent names
+* A list of false positive entities, and a list of false negative entities.
## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-tutorial-batch-testing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-tutorial-batch-testing.md deleted file mode 100644
@@ -1,148 +0,0 @@
-title: "Tutorial: Batch testing to find issues - LUIS"
-description: This tutorial demonstrates how to use batch testing to validate the quality of your Language Understanding (LUIS) app.
-ms.service: cognitive-services
-ms.subservice: language-understanding
-ms.topic: tutorial
-ms.date: 05/07/2020
-
-# Tutorial: Batch test data sets
-
-This tutorial demonstrates how to use batch testing to validate the quality of your Language Understanding (LUIS) app.
-
-Batch testing allows you to validate the active, trained model's state with a known set of labeled utterances and entities. In the JSON-formatted batch file, add the utterances and set the entity labels you need predicted inside the utterance.
-
-Requirements for batch testing:
-
-* Maximum of 1000 utterances per test.
-* No duplicates.
-* Entity types allowed: only machined-learned entities.
-
-When using an app other than this tutorial, do *not* use the example utterances already added to your app.
-
-**In this tutorial, you learn how to:**
-
-<!-- green checkmark -->
-> [!div class="checklist"]
-> * Import example app
-> * Create a batch test file
-> * Run a batch test
-> * Review test results
-
-[!INCLUDE [LUIS Free account](../../../includes/cognitive-services-luis-free-key-short.md)]
-
-## Import example app
-
-Import an app that takes a pizza order such as `1 pepperoni pizza on thin crust`.
-
-1. Download and save [app JSON file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/apps/pizza-with-machine-learned-entity.json?raw=true).
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Import the JSON into a new app, name the app `Pizza app`.
--
-1. Select **Train** in the top-right corner of the navigation to train the app.
-
-## What should the batch file utterances include
-
-The batch file should include utterances with top-level machine-learning entities labeled including start and end position. The utterances should not be part of the examples already in the app. They should be utterances you want to positively predict for intent and entities.
-
-You can separate out tests by intent and/or entity or have all the tests (up to 1000 utterances) in the same file.
-
-## Batch file
-
-The example JSON includes one utterance with a labeled entity to illustrate what a test file looks like. In your own tests, you should have many utterances with correct intent and machine-learning entity labeled.
-
-1. Create `pizza-with-machine-learned-entity-test.json` in a text editor or [download](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/batch-tests/pizza-with-machine-learned-entity-test.json?raw=true) it.
-
-2. In the JSON-formatted batch file, add an utterance with the **Intent** you want predicted in the test.
-
- [!code-json[Add the intents to the batch test file](~/samples-cognitive-services-data-files/luis/batch-tests/pizza-with-machine-learned-entity-test.json "Add the intent to the batch test file")]
-
-## Run the batch
-
-1. Select **Test** in the top navigation bar.
-
-2. Select **Batch testing panel** in the right-side panel.
-
-3. Select **Import dataset**.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of LUIS app with Import dataset highlighted](./media/luis-tutorial-batch-testing/import-dataset-button.png)
-
-4. Choose the file location of the `pizza-with-machine-learned-entity-test.json` file.
-
-5. Name the dataset `pizza test` and select **Done**.
-
- > [!div class="mx-imgBorder"]
- > ![Select file](./media/luis-tutorial-batch-testing/import-dataset-modal.png)
-
-6. Select the **Run** button.
-
-7. Select **See results**.
-
-8. Review results in the graph and legend.
-
-## Review batch results for intents
-
-The test results show graphically how the test utterances were predicted against the active version.
-
-The batch chart displays four quadrants of results. To the right of the chart is a filter. The filter contains intents and entities. When you select a [section of the chart](luis-concept-batch-test.md#batch-test-results) or a point within the chart, the associated utterance(s) display below the chart.
-
-While hovering over the chart, a mouse wheel can enlarge or reduce the display in the chart. This is useful when there are many points on the chart clustered tightly together.
-
-The chart is in four quadrants, with two of the sections displayed in red.
-
-1. Select the **ModifyOrder** intent in the filter list.
-
- > [!div class="mx-imgBorder"]
- > ![Select ModifyOrder intent from filter list](./media/luis-tutorial-batch-testing/select-intent-from-filter-list.png)
-
- The utterance is predicted as a **True Positive** meaning the utterance successfully matched its positive prediction listed in the batch file.
-
- > [!div class="mx-imgBorder"]
- > ![Utterance successfully matched its positive prediction](./media/luis-tutorial-batch-testing/intent-predicted-true-positive.png)
-
- The green checkmarks in the filters list also indicate the success of the test for each intent. All the other intents are listed with a 1/1 positive score because the utterance was tested against each intent, as a negative test for any intents not listed in the batch test.
-
-1. Select the **Confirmation** intent. This intent isn't listed in the batch test so this is a negative test of the utterance that is listed in the batch test.
-
- > [!div class="mx-imgBorder"]
- > ![Utterance successfully predicted negative for unlisted intent in batch file](./media/luis-tutorial-batch-testing/true-negative-intent.png)
-
- The negative test was successful, as noted with the green text in the filter, and the grid.
-
-## Review batch test results for entities
-
-The ModifyOrder entity, as a machine entity with subentities, displays if the top-level entity matched and display how the subentities are predicted.
-
-1. Select the **ModifyOrder** entity in the filter list then select the circle in the grid.
-
-1. The entity prediction displays below the chart. The display includes solid lines for predictions that match the expectation and dotted lines for predictions that don't match the expectation.
-
- > [!div class="mx-imgBorder"]
- > ![Entity parent successfully predicted in batch file](./media/luis-tutorial-batch-testing/labeled-entity-prediction.png)
-
-## Finding errors with a batch test
-
-This tutorial showed you how to run a test and interpret results. It didn't cover test philosophy or how to respond to failing tests.
-
-* Make sure to cover both positive and negative utterances in your test, including utterances that may be predicted for a different but related intent.
-* For failing utterances, perform the following tasks then run the tests again:
- * Review current examples for intents and entities, validate the example utterances of the active version are correct both for intent and entity labeling.
- * Add features that help your app predict intents and entities
- * Add more positive example utterances
- * Review balance of example utterances across intents
-
-## Clean up resources
-
-[!INCLUDE [LUIS How to clean up resources](./includes/cleanup-resources-preview-portal.md)]
-
-## Next step
-
-The tutorial used a batch test to validate the current model.
-
-> [!div class="nextstepaction"]
-> [Learn about patterns](luis-tutorial-pattern.md)
-
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/whats-new.md
@@ -15,7 +15,7 @@ Learn what's new in the service. These items include release notes, videos, blog
### December 2020
-* All LUIS users are required to [migrate to a LUIS authorint resouurce](luis-migration-authoring.md)
+* All LUIS users are required to [migrate to a LUIS authoring resource](luis-migration-authoring.md)
### June 2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/includes/quickstart-sdk-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-csharp.md
@@ -207,7 +207,7 @@ Generate an answer from a published knowledgebase using the [RuntimeClient](/dot
Generate an answer from a published knowledgebase using the [QnAMakerClient.Knowledgebase](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.qnamakerclient.knowledgebase?view=azure-dotnet-preview).[GenerateAnswerAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.knowledgebaseextensions.generateanswerasync?view=azure-dotnet-preview) method. This method accepts the knowledge base ID and the [QueryDTO](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto?view=azure-dotnet-preview). Access additional properties of the QueryDTO, such a [Top](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto.top?view=azure-dotnet#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_Models_QueryDTO_Top), [Context](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto.context?view=azure-dotnet-preview#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_Models_QueryDTO_Context) and [AnswerSpanRequest](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto.answerspanrequest?view=azure-dotnet-preview#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_Models_QueryDTO_AnswerSpanRequest) to use in your chat bot.
-[!code-csharp[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=GenerateAnswerPreview&highlight=3)]
+[!code-csharp[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=GenerateAnswer&highlight=3)]
This is a simple example querying the knowledgebase. To understand advanced querying scenarios, review [other query examples](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md?pivots=url-test-tool-curl#use-curl-to-query-for-a-chit-chat-answer).
@@ -234,4 +234,4 @@ Run the application with the `dotnet run` command from your application director
dotnet run ```
-The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/QnAMaker/SDK-based-quickstart).
\ No newline at end of file
+The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/QnAMaker/SDK-based-quickstart).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/from-file/javascript/browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/from-file/javascript/browser.md
@@ -1,8 +1,8 @@
---
-author: IEvangelist
+author: trevorbye
ms.service: cognitive-services ms.topic: include
-ms.date: 04/03/2020
+ms.date: 12/29/2020
ms.author: trbye ms.custom: devx-track-js ---
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/from-file/javascript/nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/from-file/javascript/nodejs.md
@@ -1,5 +1,5 @@
---
-author: IEvangelist
+author: trevorbye
ms.service: cognitive-services ms.topic: include ms.date: 04/03/2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/from-microphone/javascript/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/from-microphone/javascript/javascript.md
@@ -1,8 +1,8 @@
---
-author: IEvangelist
+author: trevorbye
ms.service: cognitive-services ms.topic: include
-ms.date: 04/03/2020
+ms.date: 12/29/2020
ms.author: trbye ---
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/javascript/browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/javascript/browser.md
@@ -1,5 +1,5 @@
---
-author: IEvangelist
+author: trevorbye
ms.service: cognitive-services ms.topic: include ms.date: 04/03/2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/javascript/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/javascript/javascript.md
@@ -1,5 +1,5 @@
---
-author: IEvangelist
+author: trevorbye
ms.service: cognitive-services ms.topic: include ms.date: 04/03/2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/translate-sts/javascript/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/translate-sts/javascript/javascript.md
@@ -1,5 +1,5 @@
---
-author: IEvangelist
+author: trevorbye
ms.service: cognitive-services ms.topic: include ms.date: 04/03/2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/translate-stt-multiple-languages/javascript/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/translate-stt-multiple-languages/javascript/javascript.md
@@ -1,5 +1,5 @@
---
-author: IEvangelist
+author: trevorbye
ms.service: cognitive-services ms.topic: include ms.date: 04/03/2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/translate-stt/javascript/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/translate-stt/javascript/javascript.md
@@ -1,5 +1,5 @@
---
-author: IEvangelist
+author: trevorbye
ms.service: cognitive-services ms.topic: include ms.date: 04/03/2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/build-training-data-set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/build-training-data-set.md
@@ -70,7 +70,6 @@ If you add the following content to the request body, the API will train with do
Now that you've learned how to build a training data set, follow a quickstart to train a custom Form Recognizer model and start using it on your forms. * [Train a model and extract form data using the client library](./quickstarts/client-library.md)
-* [Train a model and extract form data using cURL](./quickstarts/curl-train-extract.md)
* [Train a model and extract form data using the REST API and Python](./quickstarts/python-train-extract.md) * [Train with labels using the sample labeling tool](./quickstarts/label-tool.md) * [Train with labels using the REST API and Python](./quickstarts/python-labeled-data.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/disaster-recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/disaster-recovery.md
@@ -13,7 +13,7 @@ ms.author: pafarley
# Back up and recover your Form Recognizer models
-When you create a Form Recognizer resource in the Azure portal, you specify a region. From then on, your resource and all of its operations stay associated with that particular Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Form Recognizer resources in different regions and the ability to sync [custom models](./quickstarts/curl-train-extract.md) across regions.
+When you create a Form Recognizer resource in the Azure portal, you specify a region. From then on, your resource and all of its operations stay associated with that particular Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Form Recognizer resources in different regions and the ability to sync custom models across regions.
The Copy API enables this scenario by allowing you to copy custom models from one Form Recognizer account or into others, which can exist in any supported geographical region. This guide shows you how to use the Copy REST API with cURL. You can also use an HTTP request service like Postman to issue the requests.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/csharp-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/csharp-sdk.md
@@ -170,7 +170,9 @@ Repeat the steps above for a new method that authenticates a training client.
You'll also need to add references to the URLs for your training and testing data. Add these to the root of your **Program** class.
-* [!INCLUDE [get SAS URL](../../includes/sas-instructions.md)]
+* [!INCLUDE [get SAS URL](../sas-instructions.md)]
+
+ :::image type="content" source="../../media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
* Then, repeat the above steps to get the SAS URL of an individual document in blob storage container. Save it to a temporary location as well. * Finally, save the URL of the sample image(s) included below (also available on [GitHub](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms)).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/java-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/java-sdk.md
@@ -124,6 +124,8 @@ In the application's **FormRecognizer** class, create variables for your resourc
In the application's **main** method, add calls for the methods used in this quickstart. You'll define these later. You'll also need to add references to the URLs for your training and testing data. * [!INCLUDE [get SAS URL](../../includes/sas-instructions.md)]
+
+ :::image type="content" source="../../media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
* To get a URL of a form to test, you can use the above steps to get the SAS URL of an individual document in blob storage. Or, take the URL of a document located elsewhere. * Use the above method to get the URL of a receipt image as well.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/javascript-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/javascript-sdk.md
@@ -116,6 +116,8 @@ Authenticate a client object using the subscription variables you defined. You'l
You'll also need to add references to the URLs for your training and testing data. * [!INCLUDE [get SAS URL](../../includes/sas-instructions.md)]
+
+ :::image type="content" source="../../media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
* Use the sample form and receipt images included in the samples below (also available on [GitHub](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/formrecognizer/ai-form-recognizer/test-assets)) or you can use the above steps to get the SAS URL of an individual document in blob storage.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/python-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/python-sdk.md
@@ -125,6 +125,8 @@ Here, you'll authenticate two client objects using the subscription variables yo
You'll need to add references to the URLs for your training and testing data. * [!INCLUDE [get SAS URL](../../includes/sas-instructions.md)]
+
+ :::image type="content" source="../../media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
* Use the sample form and receipt images included in the samples below (also available on [GitHub](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms)) or you can use the above steps to get the SAS URL of an individual document in blob storage. > [!NOTE]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/quickstarts/rest-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/quickstarts/rest-api.md new file mode 100644
@@ -0,0 +1,1622 @@
+---
+title: "Quickstart: Form Recognizer client library for .NET"
+description: Use the Form Recognizer client library for .NET to create a forms processing app that extracts key/value pairs and table data from your custom documents.
+services: cognitive-services
+author: PatrickFarley
+manager: nitinme
+ms.service: cognitive-services
+ms.subservice: forms-recognizer
+ms.topic: include
+ms.date: 12/15/2020
+ms.author: pafarley
+---
+
+## Prerequisites
+
+* [cURL](https://curl.haxx.se/windows/) installed.
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* An Azure Storage blob that contains a set of training data. See [Build a training data set for a custom model](../../build-training-data-set.md) for tips and options for putting together your training data set. For this quickstart, you can use the files under the **Train** folder of the [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*).
+* Once you have your Azure subscription, <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource <span class="docon docon-navigate-external x-hidden-focus"></span></a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
+ * You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* A URL for an image of a receipt. You can use a [sample image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg) for this quickstart.
+* A URL for an image of a business card. You can use a [sample image](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg) for this quickstart.
+* A URL for an image of an invoice. You can use a [sample document](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/forms/Invoice_1.pdf) for this quickstart.
++
+## Recognize form content
+
+You can use Form Recognizer to recognize and extract tables, lines, and words in documents, without needing to train a model. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{subscription key}` with the subscription key you copied from the previous step.
+1. Replace the URL in the request body with one of the example URLs.
+
+# [v2.0](#tab/v2-0)
+
+```bash
+curl -v -X POST "https://{Endpoint}/formrecognizer/v2.0/layout/analyze"
+-H "Content-Type: application/json"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+--data-ascii "{\"source\": \"http://example.com/test.jpg\"}"
+```
+
+# [v2.1 preview](#tab/v2-1)
+
+```bash
+curl -v -X POST "https://{Endpoint}/formrecognizer/v2.1-preview.2/layout/analyze"
+-H "Content-Type: application/json"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+--data-ascii "{\"source\": \"http://example.com/test.jpg\"}"
+```
+---
+
+You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains an operation ID that you can use to query the status of the asynchronous operation and get the results. In the following example, the string after `analyzeResults/` is the operation ID.
+
+```console
+https://cognitiveservice/formrecognizer/v2.1-preview.2/layout/analyzeResults/54f0b076-4e38-43e5-81bd-b85b8835fdfb
+```
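+
+If you're scripting these calls, you can capture that header and parse out the operation ID. The following is only a sketch; saving the headers to a file and the `sed` parsing are shell conveniences, not part of the Form Recognizer API.
+
+```bash
+# Illustrative sketch: save the response headers, then print the ID that follows "analyzeResults/".
+curl -s -D headers.txt -o /dev/null -X POST "https://{Endpoint}/formrecognizer/v2.1-preview.2/layout/analyze" \
+  -H "Content-Type: application/json" \
+  -H "Ocp-Apim-Subscription-Key: {subscription key}" \
+  --data-ascii "{\"source\": \"http://example.com/test.jpg\"}"
+
+grep -i "operation-location" headers.txt | sed 's|.*analyzeResults/||' | tr -d '\r'
+```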
+
+### Get layout results
+
+After you've called the **[Analyze Layout](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeLayoutAsync)** API, you call the **[Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeLayoutResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{subscription key}` with the subscription key you copied from the previous step.
+1. Replace `{resultId}` with the operation ID from the previous step.
+
+# [v2.0](#tab/v2-0)
+
+```bash
+curl -v -X GET "https://{Endpoint}/formrecognizer/v2.0/layout/analyzeResults/{resultId}"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+
+# [v2.1 preview](#tab/v2-1)
+```bash
+curl -v -X GET "https://{Endpoint}/formrecognizer/v2.1-preview.2/layout/analyzeResults/{resultId}"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+---
+
+### Examine the results
+
+You'll receive a `200 (success)` response with JSON content.
+
+See the following invoice image and its corresponding JSON output. The output has been shortened for simplicity. The `"readResults"` node contains every line of text with its respective bounding box placement on the page. The `"selectionMarks"` node (in v2.1 preview) shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected". The `"pageResults"` field shows every piece of text within tables, each with its row-column coordinate.
+
+:::image type="content" source="../../media/contoso-invoice.png" alt-text="Contoso project statement document with a table.":::
+
+# [v2.0](#tab/v2-0)
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2020-08-20T20:36:52Z",
+ "lastUpdatedDateTime": "2020-08-20T20:36:58Z",
+ "analyzeResult": {
+ "version": "2.0.0",
+ "readResults": [
+ {
+ "page": 1,
+ "language": "en",
+ "angle": 0,
+ "width": 8.5,
+ "height": 11,
+ "unit": "inch",
+ "lines": [
+ {
+ "boundingBox": [
+ 0.5826,
+ 0.4411,
+ 2.3387,
+ 0.4411,
+ 2.3387,
+ 0.7969,
+ 0.5826,
+ 0.7969
+ ],
+ "text": "Contoso, Ltd.",
+ "words": [
+ {
+ "boundingBox": [
+ 0.5826,
+ 0.4411,
+ 1.744,
+ 0.4411,
+ 1.744,
+ 0.7969,
+ 0.5826,
+ 0.7969
+ ],
+ "text": "Contoso,",
+ "confidence": 1
+ },
+ {
+ "boundingBox": [
+ 1.8448,
+ 0.4446,
+ 2.3387,
+ 0.4446,
+ 2.3387,
+ 0.7631,
+ 1.8448,
+ 0.7631
+ ],
+ "text": "Ltd.",
+ "confidence": 1
+ }
+ ]
+ },
+ ...
+ ]
+ }
+ ],
+ "pageResults": [
+ {
+ "page": 1,
+ "tables": [
+ {
+ "rows": 5,
+ "columns": 5,
+ "cells": [
+ {
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "text": "Training Date",
+ "boundingBox": [
+ 0.5133,
+ 4.2167,
+ 1.7567,
+ 4.2167,
+ 1.7567,
+ 4.4492,
+ 0.5133,
+ 4.4492
+ ],
+ "elements": [
+ "#/readResults/0/lines/14/words/0",
+ "#/readResults/0/lines/14/words/1"
+ ]
+ },
+ ...
+ ]
+ },
+ ...
+ ]
+ }
+ ]
+ }
+}
+```
+
+# [v2.1 preview](#tab/v2-1)
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2020-08-20T20:40:50Z",
+ "lastUpdatedDateTime": "2020-08-20T20:40:55Z",
+ "analyzeResult": {
+ "version": "2.1.0",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 0,
+ "width": 8.5,
+ "height": 11,
+ "unit": "inch",
+ "lines": [
+ {
+ "boundingBox": [
+ 0.5826,
+ 0.4411,
+ 2.3387,
+ 0.4411,
+ 2.3387,
+ 0.7969,
+ 0.5826,
+ 0.7969
+ ],
+ "text": "Contoso, Ltd.",
+ "words": [
+ {
+ "boundingBox": [
+ 0.5826,
+ 0.4411,
+ 1.744,
+ 0.4411,
+ 1.744,
+ 0.7969,
+ 0.5826,
+ 0.7969
+ ],
+ "text": "Contoso,",
+ "confidence": 1
+ },
+ {
+ "boundingBox": [
+ 1.8448,
+ 0.4446,
+ 2.3387,
+ 0.4446,
+ 2.3387,
+ 0.7631,
+ 1.8448,
+ 0.7631
+ ],
+ "text": "Ltd.",
+ "confidence": 1
+ }
+ ]
+ },
+ ...
+ ]
+ }
+ ],
+ "selectionMarks": [
+ {
+ "boundingBox": [
+ 3.9737,
+ 3.7475,
+ 4.1693,
+ 3.7475,
+ 4.1693,
+ 3.9428,
+ 3.9737,
+ 3.9428
+ ],
+ "confidence": 0.989,
+ "state": "selected"
+ },
+ ...
+ ]
+ }
+ ],
+ "pageResults": [
+ {
+ "page": 1,
+ "tables": [
+ {
+ "rows": 5,
+ "columns": 5,
+ "cells": [
+ {
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "text": "Training Date",
+ "boundingBox": [
+ 0.5133,
+ 4.2167,
+ 1.7567,
+ 4.2167,
+ 1.7567,
+ 4.4492,
+ 0.5133,
+ 4.4492
+ ],
+ "elements": [
+ "#/readResults/0/lines/12/words/0",
+ "#/readResults/0/lines/12/words/1"
+ ]
+ },
+ ...
+ ]
+ },
+ ...
+ ]
+ }
+ ]
+ }
+}
+```
+
+---
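+
+As a convenience, and assuming `jq` is installed and you saved a response like the one above to `layout.json`, you can list the recognized text lines and table cells directly from the shell:
+
+```bash
+# Every recognized line of text, page by page.
+jq -r '.analyzeResult.readResults[].lines[].text' layout.json
+
+# Every table cell with its row/column position and text.
+jq -r '.analyzeResult.pageResults[].tables[].cells[]
+       | "\(.rowIndex),\(.columnIndex): \(.text)"' layout.json
+```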
+
+## Recognize receipts
+
+To start analyzing a receipt, call the **[Analyze Receipt](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeReceiptAsync)** API using the cURL command below. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{your receipt URL}` with the URL address of a receipt image.
+1. Replace `{subscription key}` with the subscription key you copied from the previous step.
+
+# [v2.0](#tab/v2-0)
+
+```bash
+curl -i -X POST "https://{Endpoint}/formrecognizer/v2.0/prebuilt/receipt/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ \"source\": \"{your receipt URL}\"}"
+```
+
+# [v2.1 preview](#tab/v2-1)
+
+```bash
+curl -i -X POST "https://{Endpoint}/formrecognizer/v2.1-preview.2/prebuilt/receipt/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ \"source\": \"{your receipt URL}\"}"
+```
+---
+
+You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains an operation ID that you can use to query the status of the asynchronous operation and get the results. In the following example, the string after `operations/` is the operation ID.
+
+```console
+https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/receipt/operations/54f0b076-4e38-43e5-81bd-b85b8835fdfb
+```
+
+### Get the receipt results
+
+After you've called the **Analyze Receipt** API, you call the **[Get Analyze Receipt Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeReceiptResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource **Overview** tab.
+1. Replace `{operationId}` with the operation ID from the previous step.
+1. Replace `{subscription key}` with your subscription key.
+
+# [v2.0](#tab/v2-0)
+```bash
+curl -X GET "https://{Endpoint}/formrecognizer/v2.0/prebuilt/receipt/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+# [v2.1 preview](#tab/v2-1)
+```bash
+curl -X GET "https://{Endpoint}/formrecognizer/v2.1-preview.2/prebuilt/receipt/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+---
+
+### Examine the response
+
+You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation is complete, the `"readResults"` field contains every line of text that was extracted from the receipt, and the `"documentResults"` field contains key/value information for the most relevant parts of the receipt. If the operation is not complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script, as shown in the sketch that follows. We recommend an interval of one second or more between calls.
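+
+The following is a minimal polling sketch, assuming a Unix-like shell with `jq` installed; the placeholders are the same ones used in the commands above:
+
+```bash
+while true; do
+  STATUS=$(curl -s -X GET \
+    "https://{Endpoint}/formrecognizer/v2.1-preview.2/prebuilt/receipt/analyzeResults/{operationId}" \
+    -H "Ocp-Apim-Subscription-Key: {subscription key}" | jq -r '.status')
+  echo "status: $STATUS"
+  # Stop polling once the operation has finished, successfully or not.
+  if [ "$STATUS" = "succeeded" ] || [ "$STATUS" = "failed" ]; then
+    break
+  fi
+  sleep 1   # wait at least one second between calls, as recommended above
+done
+```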
+
+See the following receipt image and its corresponding JSON output. The output has been shortened for readability.
+
+![A receipt from Contoso store](../../media/contoso-allinone.jpg)
+
+The `"readResults"` node contains all of the recognized text (if you set the optional *includeTextDetails* parameter to `true`). Text is organized by page, then by line, then by individual words. The `"documentResults"` node contains the receipt-specific values that the model discovered. This is where you'll find useful key/value pairs like the tax, total, merchant address, and so on.
+
+```json
+{
+ "status":"succeeded",
+ "createdDateTime":"2019-12-17T04:11:24Z",
+ "lastUpdatedDateTime":"2019-12-17T04:11:32Z",
+ "analyzeResult":{
+ "version":"2.1.0",
+ "readResults":[
+ {
+ "page":1,
+ "angle":0.6893,
+ "width":1688,
+ "height":3000,
+ "unit":"pixel",
+ "language":"en",
+ "lines":[
+ {
+ "text":"Contoso",
+ "boundingBox":[
+ 635,
+ 510,
+ 1086,
+ 461,
+ 1098,
+ 558,
+ 643,
+ 604
+ ],
+ "words":[
+ {
+ "text":"Contoso",
+ "boundingBox":[
+ 639,
+ 510,
+ 1087,
+ 461,
+ 1098,
+ 551,
+ 646,
+ 604
+ ],
+ "confidence":0.955
+ }
+ ]
+ },
+ ...
+ ]
+ }
+ ],
+ "documentResults":[
+ {
+ "docType":"prebuilt:receipt",
+ "pageRange":[
+ 1,
+ 1
+ ],
+ "fields":{
+ "ReceiptType":{
+ "type":"string",
+ "valueString":"Itemized",
+ "confidence":0.692
+ },
+ "MerchantName":{
+ "type":"string",
+ "valueString":"Contoso Contoso",
+ "text":"Contoso Contoso",
+ "boundingBox":[
+ 378.2,
+ 292.4,
+ 1117.7,
+ 468.3,
+ 1035.7,
+ 812.7,
+ 296.3,
+ 636.8
+ ],
+ "page":1,
+ "confidence":0.613,
+ "elements":[
+ "#/readResults/0/lines/0/words/0",
+ "#/readResults/0/lines/1/words/0"
+ ]
+ },
+ "MerchantAddress":{
+ "type":"string",
+ "valueString":"123 Main Street Redmond, WA 98052",
+ "text":"123 Main Street Redmond, WA 98052",
+ "boundingBox":[
+ 302,
+ 675.8,
+ 848.1,
+ 793.7,
+ 809.9,
+ 970.4,
+ 263.9,
+ 852.5
+ ],
+ "page":1,
+ "confidence":0.99,
+ "elements":[
+ "#/readResults/0/lines/2/words/0",
+ "#/readResults/0/lines/2/words/1",
+ "#/readResults/0/lines/2/words/2",
+ "#/readResults/0/lines/3/words/0",
+ "#/readResults/0/lines/3/words/1",
+ "#/readResults/0/lines/3/words/2"
+ ]
+ },
+ "MerchantPhoneNumber":{
+ "type":"phoneNumber",
+ "valuePhoneNumber":"+19876543210",
+ "text":"987-654-3210",
+ "boundingBox":[
+ 278,
+ 1004,
+ 656.3,
+ 1054.7,
+ 646.8,
+ 1125.3,
+ 268.5,
+ 1074.7
+ ],
+ "page":1,
+ "confidence":0.99,
+ "elements":[
+ "#/readResults/0/lines/4/words/0"
+ ]
+ },
+ "TransactionDate":{
+ "type":"date",
+ "valueDate":"2019-06-10",
+ "text":"6/10/2019",
+ "boundingBox":[
+ 265.1,
+ 1228.4,
+ 525,
+ 1247,
+ 518.9,
+ 1332.1,
+ 259,
+ 1313.5
+ ],
+ "page":1,
+ "confidence":0.99,
+ "elements":[
+ "#/readResults/0/lines/5/words/0"
+ ]
+ },
+ "TransactionTime":{
+ "type":"time",
+ "valueTime":"13:59:00",
+ "text":"13:59",
+ "boundingBox":[
+ 541,
+ 1248,
+ 677.3,
+ 1261.5,
+ 668.9,
+ 1346.5,
+ 532.6,
+ 1333
+ ],
+ "page":1,
+ "confidence":0.977,
+ "elements":[
+ "#/readResults/0/lines/5/words/1"
+ ]
+ },
+ "Items":{
+ "type":"array",
+ "valueArray":[
+ {
+ "type":"object",
+ "valueObject":{
+ "Quantity":{
+ "type":"number",
+ "text":"1",
+ "boundingBox":[
+ 245.1,
+ 1581.5,
+ 300.9,
+ 1585.1,
+ 295,
+ 1676,
+ 239.2,
+ 1672.4
+ ],
+ "page":1,
+ "confidence":0.92,
+ "elements":[
+ "#/readResults/0/lines/7/words/0"
+ ]
+ },
+ "Name":{
+ "type":"string",
+ "valueString":"Cappuccino",
+ "text":"Cappuccino",
+ "boundingBox":[
+ 322,
+ 1586,
+ 654.2,
+ 1601.1,
+ 650,
+ 1693,
+ 317.8,
+ 1678
+ ],
+ "page":1,
+ "confidence":0.923,
+ "elements":[
+ "#/readResults/0/lines/7/words/1"
+ ]
+ },
+ "TotalPrice":{
+ "type":"number",
+ "valueNumber":2.2,
+ "text":"$2.20",
+ "boundingBox":[
+ 1107.7,
+ 1584,
+ 1263,
+ 1574,
+ 1268.3,
+ 1656,
+ 1113,
+ 1666
+ ],
+ "page":1,
+ "confidence":0.918,
+ "elements":[
+ "#/readResults/0/lines/8/words/0"
+ ]
+ }
+ }
+ },
+ ...
+ ]
+ },
+ "Subtotal":{
+ "type":"number",
+ "valueNumber":11.7,
+ "text":"11.70",
+ "boundingBox":[
+ 1146,
+ 2221,
+ 1297.3,
+ 2223,
+ 1296,
+ 2319,
+ 1144.7,
+ 2317
+ ],
+ "page":1,
+ "confidence":0.955,
+ "elements":[
+ "#/readResults/0/lines/13/words/1"
+ ]
+ },
+ "Tax":{
+ "type":"number",
+ "valueNumber":1.17,
+ "text":"1.17",
+ "boundingBox":[
+ 1190,
+ 2359,
+ 1304,
+ 2359,
+ 1304,
+ 2456,
+ 1190,
+ 2456
+ ],
+ "page":1,
+ "confidence":0.979,
+ "elements":[
+ "#/readResults/0/lines/15/words/1"
+ ]
+ },
+ "Tip":{
+ "type":"number",
+ "valueNumber":1.63,
+ "text":"1.63",
+ "boundingBox":[
+ 1094,
+ 2479,
+ 1267.7,
+ 2485,
+ 1264,
+ 2591,
+ 1090.3,
+ 2585
+ ],
+ "page":1,
+ "confidence":0.941,
+ "elements":[
+ "#/readResults/0/lines/17/words/1"
+ ]
+ },
+ "Total":{
+ "type":"number",
+ "valueNumber":14.5,
+ "text":"$14.50",
+ "boundingBox":[
+ 1034.2,
+ 2617,
+ 1387.5,
+ 2638.2,
+ 1380,
+ 2763,
+ 1026.7,
+ 2741.8
+ ],
+ "page":1,
+ "confidence":0.985,
+ "elements":[
+ "#/readResults/0/lines/19/words/0"
+ ]
+ }
+ }
+ }
+ ]
+ }
+}
+```
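+
+If you want to pull just the key/value pairs out of the `"documentResults"` node shown above, here is a small hedged example, assuming `jq` is installed and the response was saved to `receipt.json`:
+
+```bash
+# Select a few receipt fields from the prebuilt receipt output.
+jq '.analyzeResult.documentResults[0].fields
+    | {MerchantName: .MerchantName.valueString,
+       TransactionDate: .TransactionDate.valueDate,
+       Total: .Total.valueNumber}' receipt.json
+```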
+
+## Recognize business cards
+
+# [v2.0](#tab/v2-0)
+
+> [!IMPORTANT]
+> This feature isn't available in the selected API version.
+
+# [v2.1 preview](#tab/v2-1)
+
+To start analyzing a business card, you call the **[Analyze Business Card](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeBusinessCardAsync)** API using the cURL command below. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{your business card URL}` with the URL address of a business card image.
+1. Replace `{subscription key}` with the subscription key you copied from the previous step.
+
+```bash
+curl -i -X POST "https://{Endpoint}/formrecognizer/v2.1-preview.2/prebuilt/businessCard/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ \"source\": \"{your business card URL}\"}"
+```
+
+You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains an operation ID that you can use to query the status of the asynchronous operation and get the results.
+
+```console
+https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/businessCard/analyzeResults/54f0b076-4e38-43e5-81bd-b85b8835fdfb
+```
+
+### Get business card results
+
+After you've called the **Analyze Business Card** API, you call the **[Get Analyze Business Card Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeBusinessCardResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource **Overview** tab.
+1. Replace `{resultId}` with the operation ID from the previous step.
+1. Replace `{subscription key}` with your subscription key.
+
+```bash
+curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/formrecognizer/v2.1-preview.2/prebuilt/businessCard/analyzeResults/{resultId}"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+
+### Examine the response
+
+You'll receive a `200 (Success)` response with JSON output. The `"readResults"` node contains all of the recognized text. Text is organized by page, then by line, then by individual words. The `"documentResults"` node contains the business-card-specific values that the model discovered. This is where you'll find useful contact information like the company name, first name, last name, phone number, and so on.
+
+![A business card from Contoso company](../../media/business-card-english.jpg)
+
+This sample illustrates the JSON output returned by Form Recognizer. It has been truncated for readability.
+
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2020-06-04T08:19:29Z",
+ "lastUpdatedDateTime": "2020-06-04T08:19:35Z",
+ "analyzeResult": {
+ "version": "2.1.1",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": -17.0956,
+ "width": 4032,
+ "height": 3024,
+ "unit": "pixel"
+ }
+ ],
+ "documentResults": [
+ {
+ "docType": "prebuilt:businesscard",
+ "pageRange": [
+ 1,
+ 1
+ ],
+ "fields": {
+ "ContactNames": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "object",
+ "valueObject": {
+ "FirstName": {
+ "type": "string",
+ "valueString": "Avery",
+ "text": "Avery",
+ "boundingBox": [
+ 703,
+ 1096,
+ 1134,
+ 989,
+ 1165,
+ 1109,
+ 733,
+ 1206
+ ],
+ "page": 1
+ },
+ "text": "Dr. Avery Smith",
+ "boundingBox": [
+ 419.3,
+ 1154.6,
+ 1589.6,
+ 877.9,
+ 1618.9,
+ 1001.7,
+ 448.6,
+ 1278.4
+ ],
+ "confidence": 0.993
+ }
+ ]
+ },
+ "Emails": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "string",
+ "valueString": "avery.smith@contoso.com",
+ "text": "avery.smith@contoso.com",
+ "boundingBox": [
+ 2107,
+ 934,
+ 2917,
+ 696,
+ 2935,
+ 764,
+ 2126,
+ 995
+ ],
+ "page": 1,
+ "confidence": 0.99
+ }
+ ]
+ },
+ "Websites": {
+ "type": "array",
+ "valueArray": [
+ {
+ "type": "string",
+ "valueString": "https://www.contoso.com/",
+ "text": "https://www.contoso.com/",
+ "boundingBox": [
+ 2121,
+ 1002,
+ 2992,
+ 755,
+ 3014,
+ 826,
+ 2143,
+ 1077
+ ],
+ "page": 1,
+ "confidence": 0.995
+ }
+ ]
+ }
+ }
+ }
+ ]
+ }
+}
+```
+
+If the operation isn't complete, call the **Get Analyze Business Card Result** API again until the `"status"` value is `"succeeded"`.
+
+---
+
+## Recognize invoices
+
+# [version 2.0](#tab/v2-0)
+
+> [!IMPORTANT]
+> This feature isn't available in the selected API version.
+
+# [version 2.1 preview](#tab/v2-1)
+
+To start analyzing an invoice, call the **[Analyze Invoice](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/5ed8c9843c2794cbb1a96291)** API using the cURL command below. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{your invoice URL}` with the URL address of an invoice document.
+1. Replace `{subscription key}` with your subscription key.
+
+```bash
+curl -v -X POST "https://{Endpoint}/formrecognizer/v2.1-preview.2/prebuilt/invoice/analyze"
+-H "Content-Type: application/json"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+--data-ascii "{ \"source\": \"{your invoice URL}\"}"
+```
+
+You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains an operation ID that you can use to query the status of the asynchronous operation and get the results.
+
+```console
+https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/invoice/analyzeResults/54f0b076-4e38-43e5-81bd-b85b8835fdfb
+```
+
+### Get invoice results
+
+After you've called the **Analyze Invoice** API, you call the **[Get Analyze Invoice Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/5ed8c9acb78c40a2533aee83)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource **Overview** tab.
+1. Replace `{resultId}` with the operation ID from the previous step.
+1. Replace `{subscription key}` with your subscription key.
+
+```bash
+curl -v -X GET "https://{Endpoint}/formrecognizer/v2.1-preview.2/prebuilt/invoice/analyzeResults/{resultId}"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+
+### Examine the response
+
+You'll receive a `200 (Success)` response with JSON output. The `"readResults"` field contains every line of text that was extracted from the invoice, the `"pageResults"` field includes the tables and selection marks extracted from the invoice, and the `"documentResults"` field contains key/value information for the most relevant parts of the invoice.
+
+See the following invoice document and its corresponding JSON output. The JSON content has been shortened for readability.
+
+* [Sample invoice](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-invoice.pdf)
+
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2020-11-06T23:32:11Z",
+ "lastUpdatedDateTime": "2020-11-06T23:32:20Z",
+ "analyzeResult": {
+ "version": "2.1.0",
+ "readResults": [{
+ "page": 1,
+ "angle": 0,
+ "width": 8.5,
+ "height": 11,
+ "unit": "inch"
+ }],
+ "pageResults": [{
+ "page": 1,
+ "tables": [{
+ "rows": 3,
+ "columns": 4,
+ "cells": [{
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "text": "QUANTITY",
+ "boundingBox": [0.4953,
+ 5.7306,
+ 1.8097,
+ 5.7306,
+ 1.7942,
+ 6.0122,
+ 0.4953,
+ 6.0122]
+ },
+ {
+ "rowIndex": 0,
+ "columnIndex": 1,
+ "text": "DESCRIPTION",
+ "boundingBox": [1.8097,
+ 5.7306,
+ 5.7529,
+ 5.7306,
+ 5.7452,
+ 6.0122,
+ 1.7942,
+ 6.0122]
+ },
+ ...
+ ],
+ "boundingBox": [0.4794,
+ 5.7132,
+ 8.0158,
+ 5.714,
+ 8.0118,
+ 6.5627,
+ 0.4757,
+ 6.5619]
+ },
+ {
+ "rows": 2,
+ "columns": 6,
+ "cells": [{
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "text": "SALESPERSON",
+ "boundingBox": [0.4979,
+ 4.963,
+ 1.8051,
+ 4.963,
+ 1.7975,
+ 5.2398,
+ 0.5056,
+ 5.2398]
+ },
+ {
+ "rowIndex": 0,
+ "columnIndex": 1,
+ "text": "P.O. NUMBER",
+ "boundingBox": [1.8051,
+ 4.963,
+ 3.3047,
+ 4.963,
+ 3.3124,
+ 5.2398,
+ 1.7975,
+ 5.2398]
+ },
+ ...
+ ],
+ "boundingBox": [0.4976,
+ 4.961,
+ 7.9959,
+ 4.9606,
+ 7.9959,
+ 5.5204,
+ 0.4972,
+ 5.5209]
+ }]
+ }],
+ "documentResults": [{
+ "docType": "prebuilt:invoice",
+ "pageRange": [1,
+ 1],
+ "fields": {
+ "AmountDue": {
+ "type": "number",
+ "valueNumber": 610,
+ "text": "$610.00",
+ "boundingBox": [7.3809,
+ 7.8153,
+ 7.9167,
+ 7.8153,
+ 7.9167,
+ 7.9591,
+ 7.3809,
+ 7.9591],
+ "page": 1,
+ "confidence": 0.875
+ },
+ "BillingAddress": {
+ "type": "string",
+ "valueString": "123 Bill St, Redmond WA, 98052",
+ "text": "123 Bill St, Redmond WA, 98052",
+ "boundingBox": [0.594,
+ 4.3724,
+ 2.0125,
+ 4.3724,
+ 2.0125,
+ 4.7125,
+ 0.594,
+ 4.7125],
+ "page": 1,
+ "confidence": 0.997
+ },
+ "BillingAddressRecipient": {
+ "type": "string",
+ "valueString": "Microsoft Finance",
+ "text": "Microsoft Finance",
+ "boundingBox": [0.594,
+ 4.1684,
+ 1.7907,
+ 4.1684,
+ 1.7907,
+ 4.2837,
+ 0.594,
+ 4.2837],
+ "page": 1,
+ "confidence": 0.998
+ },
+ ...
+ }
+ }]
+ }
+}
+```
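+
+As with the other prebuilt models, you can pick individual fields out of the `"documentResults"` node from the shell. A small sketch, assuming `jq` is installed and the response above was saved to `invoice.json`:
+
+```bash
+# A couple of invoice fields, plus the size of each extracted table.
+jq '.analyzeResult.documentResults[0].fields
+    | {AmountDue: .AmountDue.valueNumber,
+       BillingAddress: .BillingAddress.valueString}' invoice.json
+jq '.analyzeResult.pageResults[0].tables[] | {rows, columns}' invoice.json
+```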
+
+---
+
+## Train a custom model
+
+To train a custom model, you'll need a set of training data in an Azure Storage blob. You should have a minimum of five filled-in forms (PDF documents and/or images) of the same type/structure as your main input data. Or, you can use a single empty form with two filled-in forms. The empty form's file name needs to include the word "empty." See [Build a training data set for a custom model](../../build-training-data-set.md) for tips and options for putting together your training data.
+
+> [!NOTE]
+> You can use the labeled data feature to manually label some or all of your training data beforehand. This is a more complex process but results in a better trained model. See the [Train with labels](../../overview.md#train-with-labels) section of the overview to learn more about this feature.
+
+To train a Form Recognizer model with the documents in your Azure blob container, call the **[Train Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/TrainCustomModelAsync)** API by running the following cURL command. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{subscription key}` with the subscription key you copied from the previous step.
+1. Replace `{SAS URL}` with the Azure Blob storage container's shared access signature (SAS) URL. [!INCLUDE [get SAS URL](../sas-instructions.md)]
+
+ :::image type="content" source="../../media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
+
+# [v2.0](#tab/v2-0)
+```bash
+curl -i -X POST "https://{Endpoint}/formrecognizer/v2.0/custom/models" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ \"source\": \""{SAS URL}"\"}"
+```
+# [v2.1 preview](#tab/v2-1)
+```bash
+curl -i -X POST "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ \"source\": \""{SAS URL}"\"}"
+```
+
+---
+
+You'll receive a `201 (Success)` response with a **Location** header. The value of this header is the ID of the new model being trained.
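+
+If you'd rather capture the model ID programmatically, the following is a minimal sketch (not part of the original quickstart), assuming a Unix-like shell with `grep` and `awk`. It keeps the last path segment of the **Location** value, or the value itself if it is already just the ID:
+
+```bash
+# Start training and keep the response headers (-i) so we can read Location.
+LOCATION=$(curl -s -i -X POST \
+  "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models" \
+  -H "Content-Type: application/json" \
+  -H "Ocp-Apim-Subscription-Key: {subscription key}" \
+  --data-ascii "{ \"source\": \"{SAS URL}\"}" \
+  | tr -d '\r' | grep -i '^location:' | awk '{print $2}')
+
+MODEL_ID=${LOCATION##*/}
+echo "$MODEL_ID"
+```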
+
+### Get training results
+
+After you've started the train operation, use the **[Get Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/GetCustomModel)** operation to check the training status. Pass the model ID into this API call:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{subscription key}` with your subscription key.
+1. Replace `{model ID}` with the model ID you received in the previous step.
+
+# [v2.0](#tab/v2-0)
+```bash
+curl -X GET "https://{Endpoint}/formrecognizer/v2.0/custom/models/{model ID}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+# [v2.1 preview](#tab/v2-1)
+```bash
+curl -X GET "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models/{model ID}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+
+---
+
+You'll receive a `200 (Success)` response with a JSON body in the following format. Notice the `"status"` field. This will have the value `"ready"` once training is complete. If the model is not finished training, you'll need to query the service again by rerunning the command. We recommend an interval of one second or more between calls.
+
+The `"modelId"` field contains the ID of the model you're training. You'll need this for the next step.
+
+```json
+{
+ "modelInfo":{
+ "status":"ready",
+ "createdDateTime":"2019-10-08T10:20:31.957784",
+ "lastUpdatedDateTime":"2019-10-08T14:20:41+00:00",
+ "modelId":"1cfb372bab404ba3aa59481ab2c63da5"
+ },
+ "trainResult":{
+ "trainingDocuments":[
+ {
+ "documentName":"invoices\\Invoice_1.pdf",
+ "pages":1,
+ "errors":[
+
+ ],
+ "status":"succeeded"
+ },
+ {
+ "documentName":"invoices\\Invoice_2.pdf",
+ "pages":1,
+ "errors":[
+
+ ],
+ "status":"succeeded"
+ },
+ {
+ "documentName":"invoices\\Invoice_3.pdf",
+ "pages":1,
+ "errors":[
+
+ ],
+ "status":"succeeded"
+ },
+ {
+ "documentName":"invoices\\Invoice_4.pdf",
+ "pages":1,
+ "errors":[
+
+ ],
+ "status":"succeeded"
+ },
+ {
+ "documentName":"invoices\\Invoice_5.pdf",
+ "pages":1,
+ "errors":[
+
+ ],
+ "status":"succeeded"
+ }
+ ],
+ "errors":[
+
+ ]
+ },
+ "keys":{
+ "0":[
+ "Address:",
+ "Invoice For:",
+ "Microsoft",
+ "Page"
+ ]
+ }
+}
+```
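+
+A minimal polling sketch for the training status, assuming `jq` is installed and the v2.1 preview endpoint (replace the placeholders as in the commands above):
+
+```bash
+STATUS="creating"
+while [ "$STATUS" = "creating" ]; do
+  sleep 1   # wait at least one second between calls, as recommended above
+  STATUS=$(curl -s "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models/{model ID}" \
+    -H "Ocp-Apim-Subscription-Key: {subscription key}" | jq -r '.modelInfo.status')
+  echo "training status: $STATUS"
+done
+```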
+
+## Analyze forms with a custom model
+
+Next, you'll use your newly trained model to analyze a document and extract key-value pairs and tables from it. Call the **[Analyze Form](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm)** API by running the following cURL command. Before you run the command, make these changes:
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource **Overview** tab.
+1. Replace `{model ID}` with the model ID that you received in the previous section.
+1. Replace `{SAS URL}` with a SAS URL to your file in Azure storage. Follow the steps in the Training section, but instead of getting a SAS URL for the whole blob container, get one for the specific file you want to analyze.
+1. Replace `{subscription key}` with your subscription key.
+
+# [v2.0](#tab/v2-0)
+
+```bash
+curl -v "https://{Endpoint}/formrecognizer/v2.0/custom/models/{model ID}/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" -d "{ \"source\": \""{SAS URL}"\" } "
+```
+
+# [v2.1 preview](#tab/v2-1)
+```bash
+curl -v "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models/{model ID}/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" -d "{ \"source\": \""{SAS URL}"\" } "
+```
+
+---
+
+You'll receive a `202 (Success)` response with an **Operation-Location** header. The value of this header includes a results ID you use to track the results of the Analyze operation. Save this results ID for the next step.
+
+### Get the Analyze results
+
+Use the following API to query the results of the Analyze operation.
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription. You can find it on your Form Recognizer resource **Overview** tab.
+1. Replace `{result ID}` with the ID that you received in the previous section.
+1. Replace `{subscription key}` with your subscription key.
+
+# [v2.0](#tab/v2-0)
+```bash
+curl -X GET "https://{Endpoint}/formrecognizer/v2.0/custom/models/{model ID}/analyzeResults/{result ID}" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+# [v2.1 preview](#tab/v2-1)
+```bash
+curl -X GET "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models/{model ID}/analyzeResults/{result ID}" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+---
+
+You'll receive a `200 (Success)` response with a JSON body in the following format. The output has been shortened for simplicity. Notice the `"status"` field. This will have the value `"succeeded"` when the Analyze operation is complete. If the Analyze operation hasn't completed, you'll need to query the service again by rerunning the command. We recommend an interval of one second or more between calls.
+
+The main key/value pair associations and tables are in the `"pageResults"` node. If you also specified plain text extraction through the *includeTextDetails* URL parameter, then the `"readResults"` node will show the content and positions of all the text in the document.
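+
+If you want that plain-text detail, you can pass *includeTextDetails* as a query string parameter on the Analyze Form call from the previous section, for example (v2.1 preview shown; adjust the version segment for v2.0):
+
+```bash
+curl -v "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models/{model ID}/analyze?includeTextDetails=true" \
+  -H "Content-Type: application/json" \
+  -H "Ocp-Apim-Subscription-Key: {subscription key}" \
+  -d "{ \"source\": \"{SAS URL}\" }"
+```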
+
+This sample JSON output has been shortened for simplicity.
+
+# [v2.0](#tab/v2-0)
+```JSON
+{
+ "status": "succeeded",
+ "createdDateTime": "2020-08-21T00:46:25Z",
+ "lastUpdatedDateTime": "2020-08-21T00:46:32Z",
+ "analyzeResult": {
+ "version": "2.0.0",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 0,
+ "width": 8.5,
+ "height": 11,
+ "unit": "inch",
+ "lines": [
+ {
+ "text": "Project Statement",
+ "boundingBox": [
+ 5.0153,
+ 0.275,
+ 8.0944,
+ 0.275,
+ 8.0944,
+ 0.7125,
+ 5.0153,
+ 0.7125
+ ],
+ "words": [
+ {
+ "text": "Project",
+ "boundingBox": [
+ 5.0153,
+ 0.275,
+ 6.2278,
+ 0.275,
+ 6.2278,
+ 0.7125,
+ 5.0153,
+ 0.7125
+ ]
+ },
+ {
+ "text": "Statement",
+ "boundingBox": [
+ 6.3292,
+ 0.275,
+ 8.0944,
+ 0.275,
+ 8.0944,
+ 0.7125,
+ 6.3292,
+ 0.7125
+ ]
+ }
+ ]
+ },
+ ...
+ ]
+ }
+ ],
+ "pageResults": [
+ {
+ "page": 1,
+ "keyValuePairs": [
+ {
+ "key": {
+ "text": "Date:",
+ "boundingBox": [
+ 6.9722,
+ 1.0264,
+ 7.3417,
+ 1.0264,
+ 7.3417,
+ 1.1931,
+ 6.9722,
+ 1.1931
+ ],
+ "elements": [
+ "#/readResults/0/lines/2/words/0"
+ ]
+ },
+ "confidence": 1
+ },
+ ...
+ ],
+ "tables": [
+ {
+ "rows": 4,
+ "columns": 5,
+ "cells": [
+ {
+ "text": "Training Date",
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "boundingBox": [
+ 0.6931,
+ 4.2444,
+ 1.5681,
+ 4.2444,
+ 1.5681,
+ 4.4125,
+ 0.6931,
+ 4.4125
+ ],
+ "confidence": 1,
+ "rowSpan": 1,
+ "columnSpan": 1,
+ "elements": [
+ "#/readResults/0/lines/15/words/0",
+ "#/readResults/0/lines/15/words/1"
+ ],
+ "isHeader": true,
+ "isFooter": false
+ },
+ ...
+ ]
+ }
+ ],
+ "clusterId": 0
+ }
+ ],
+ "documentResults": [],
+ "errors": []
+ }
+}
+```
+# [v2.1 preview](#tab/v2-1)
+```JSON
+{
+ "status": "succeeded",
+ "createdDateTime": "2020-08-21T01:13:28Z",
+ "lastUpdatedDateTime": "2020-08-21T01:13:42Z",
+ "analyzeResult": {
+ "version": "2.1.0",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 0,
+ "width": 8.5,
+ "height": 11,
+ "unit": "inch",
+ "lines": [
+ {
+ "text": "Project Statement",
+ "boundingBox": [
+ 5.0444,
+ 0.3613,
+ 8.0917,
+ 0.3613,
+ 8.0917,
+ 0.6718,
+ 5.0444,
+ 0.6718
+ ],
+ "words": [
+ {
+ "text": "Project",
+ "boundingBox": [
+ 5.0444,
+ 0.3587,
+ 6.2264,
+ 0.3587,
+ 6.2264,
+ 0.708,
+ 5.0444,
+ 0.708
+ ]
+ },
+ {
+ "text": "Statement",
+ "boundingBox": [
+ 6.3361,
+ 0.3635,
+ 8.0917,
+ 0.3635,
+ 8.0917,
+ 0.6396,
+ 6.3361,
+ 0.6396
+ ]
+ }
+ ]
+ },
+ ...
+ ]
+ }
+ ],
+ "pageResults": [
+ {
+ "page": 1,
+ "keyValuePairs": [
+ {
+ "key": {
+ "text": "Date:",
+ "boundingBox": [
+ 6.9833,
+ 1.0615,
+ 7.3333,
+ 1.0615,
+ 7.3333,
+ 1.1649,
+ 6.9833,
+ 1.1649
+ ],
+ "elements": [
+ "#/readResults/0/lines/2/words/0"
+ ]
+ },
+ "value": {
+ "text": "9/10/2020",
+ "boundingBox": [
+ 7.3833,
+ 1.0802,
+ 7.925,
+ 1.0802,
+ 7.925,
+ 1.174,
+ 7.3833,
+ 1.174
+ ],
+ "elements": [
+ "#/readResults/0/lines/3/words/0"
+ ]
+ },
+ "confidence": 1
+ },
+ ...
+ ],
+ "tables": [
+ {
+ "rows": 5,
+ "columns": 5,
+ "cells": [
+ {
+ "text": "Training Date",
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "boundingBox": [
+ 0.6944,
+ 4.2779,
+ 1.5625,
+ 4.2779,
+ 1.5625,
+ 4.4005,
+ 0.6944,
+ 4.4005
+ ],
+ "confidence": 1,
+ "rowSpan": 1,
+ "columnSpan": 1,
+ "elements": [
+ "#/readResults/0/lines/15/words/0",
+ "#/readResults/0/lines/15/words/1"
+ ],
+ "isHeader": true,
+ "isFooter": false
+ },
+ ...
+ ]
+ }
+ ],
+ "clusterId": 0
+ }
+ ],
+ "documentResults": [],
+ "errors": []
+ }
+}
+```
+---
+
+### Improve results
+
+[!INCLUDE [improve results](../improve-results-unlabeled.md)]
+
+## Manage custom models
+
+### Get a list of custom models
+
+Use the following command to return a list of all the custom models that belong to your subscription.
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{subscription key}` with the subscription key you copied from the previous step.
+
+# [v2.0](#tab/v2-0)
+```bash
+curl -v -X GET "https://{Endpoint}/formrecognizer/v2.0/custom/models?op=full"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+
+# [v2.1 preview](#tab/v2-1)
+
+```bash
+curl -v -X GET "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models?op=full"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+```
+---
+
+You'll receive a `200` success response, with JSON data like the following. The `"modelList"` element contains all of your created models and their information.
+
+```json
+{
+ "summary": {
+ "count": 0,
+ "limit": 0,
+ "lastUpdatedDateTime": "string"
+ },
+ "modelList": [
+ {
+ "modelId": "string",
+ "status": "creating",
+ "createdDateTime": "string",
+ "lastUpdatedDateTime": "string"
+ }
+ ],
+ "nextLink": "string"
+}
+```
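+
+The list is paged; when the `"nextLink"` value is not empty, you can request it to get the next page. A minimal sketch, assuming `jq` is installed:
+
+```bash
+URL="https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models?op=full"
+while [ -n "$URL" ] && [ "$URL" != "null" ]; do
+  PAGE=$(curl -s "$URL" -H "Ocp-Apim-Subscription-Key: {subscription key}")
+  echo "$PAGE" | jq -r '.modelList[].modelId'   # print each model ID on this page
+  URL=$(echo "$PAGE" | jq -r '.nextLink')       # empty or null when there are no more pages
+done
+```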
+
+### Get a specific model
+
+To retrieve detailed information about a specific custom model, use the following command.
+
+1. Replace `{Endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
+1. Replace `{subscription key}` with the subscription key you copied from the previous step.
+1. Replace `{modelId}` with the ID of the custom model you want to look up.
+
+# [v2.0](#tab/v2-0)
+
+```bash
+curl -v -X GET "https://{Endpoint}/formrecognizer/v2.0/custom/models/{modelId}"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+--data-ascii "{body}"
+```
+
+# [v2.1 preview](#tab/v2-1)
+
+```bash
+curl -v -X GET "https://{Endpoint}/formrecognizer/v2.1-preview.2/custom/models/{modelId}"
+-H "Ocp-Apim-Subscription-Key: {subscription key}"
+--data-ascii "{body}"
+```
+---
+
+You'll receive a `200` success response, with JSON data like the following.
+
+```json
+{
+ "modelInfo": {
+ "modelId": "string",
+ "status": "creating",
+ "createdDateTime": "string",
+ "lastUpdatedDateTime": "string"
+ },
+ "keys": {
+ "clusters": {}
+ },
+ "trainResult": {
+ "trainingDocuments": [
+ {
+ "documentName": "string",
+ "pages": 0,
+ "errors": [
+ "string"
+ ],
+ "status": "succeeded"
+ }
+ ],
+ "fields": [
+ {
+ "fieldName": "string",
+ "accuracy": 0.0
+ }
+ ],
+ "averageModelAccuracy": 0.0,
+ "errors": [
+ {
+ "message": "string"
+ }
+ ]
+ }
+}
+```
+
+## Next steps
+
+In this quickstart, you used the Form Recognizer REST API to train models and analyze forms in different ways. Next, see the reference documentation to explore the Form Recognizer API in more depth.
+
+> [!div class="nextstepaction"]
+> [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm)
+
+* [What is Form Recognizer?](../../overview.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/includes/sas-instructions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/includes/sas-instructions.md
@@ -7,6 +7,4 @@ ms.date: 12/11/2020
ms.author: pafarley ---
-To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read** and **List** permissions are checked, and click **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
- > [!div class="mx-imgBorder"]
- > ![alt-text](../media/quickstarts/get-sas-url.png)
\ No newline at end of file
+To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read** and **List** permissions are checked, and click **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/overview.md
@@ -112,13 +112,11 @@ Use the [Sample Form Recognizer tool](https://fott.azurewebsites.net/) or follow
* Extract text, selection marks and table structure from documents * [Extract layout data - Python](quickstarts/python-layout.md) * Train custom models and extract form data
- * [Train without labels - cURL](quickstarts/curl-train-extract.md)
* [Train without labels - Python](quickstarts/python-train-extract.md) * [Train with labels - Python](quickstarts/python-labeled-data.md) * Extract data from invoices * [Extract invoice data - Python](quickstarts/python-invoices.md) * Extract data from sales receipts
- * [Extract receipt data - cURL](quickstarts/curl-receipts.md)
* [Extract receipt data - Python](quickstarts/python-receipts.md) * Extract data from business cards * [Extract business card data - Python](quickstarts/python-business-cards.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/client-library.md
@@ -10,7 +10,7 @@ ms.subservice: forms-recognizer
ms.topic: quickstart ms.date: 09/21/2020 ms.author: pafarley
-zone_pivot_groups: programming-languages-set-ten
+zone_pivot_groups: programming-languages-set-formre
ms.custom: "devx-track-js, devx-track-csharp, cog-serv-seo-aug-2020" keywords: forms processing, automated data processing
@@ -18,12 +18,14 @@ keywords: forms processing, automated data processing
# Quickstart: Use the Form Recognizer client library
-Get started with the Form Recognizer client library in the language of your choice. Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs and table data from your form documents&mdash;the service outputs structured data that includes the relationships in the original file. Follow these steps to install the SDK package and try out the example code for basic tasks. The Form Recognizer client library currently targets v2.0 of the From Recognizer service.
+Get started with Form Recognizer using the language of your choice. Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, and table data from your form documents&mdash;the service outputs structured data that includes the relationships in the original file. Follow these steps to install the SDK package and try out the example code for basic tasks. The Form Recognizer client library currently targets v2.0 of the Form Recognizer service.
Use the Form Recognizer client library to: * [Recognize form content](#recognize-form-content) * [Recognize receipts](#recognize-receipts)
+* [Recognize business cards](#recognize-business-cards)
+* [Recognize invoices](#recognize-invoices)
* [Train a custom model](#train-a-custom-model) * [Analyze forms with a custom model](#analyze-forms-with-a-custom-model) * [Manage your custom models](#manage-your-custom-models)
@@ -50,4 +52,10 @@ Use the Form Recognizer client library to:
[!INCLUDE [Python SDK quickstart](../includes/quickstarts/python-sdk.md)]
+::: zone-end
+
+::: zone pivot="programming-language-rest-api"
+
+[!INCLUDE [REST API quickstart](../includes/quickstarts/rest-api.md)]
+ ::: zone-end\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/curl-receipts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/curl-receipts.md deleted file mode 100644
@@ -1,402 +0,0 @@
-title: "Quickstart: Extract receipt data using cURL - Form Recognizer"
-titleSuffix: Azure Cognitive Services
-description: In this quickstart, you'll use the Form Recognizer REST API with cURL to extract data from images of USA sales receipts.
-author: PatrickFarley
-manager: nitinme
-
-ms.service: cognitive-services
-ms.subservice: forms-recognizer
-ms.topic: quickstart
-ms.date: 10/05/2020
-ms.author: pafarley
-#Customer intent: As a developer or data scientist familiar with cURL, I want to learn how to use a prebuilt Form Recognizer model to extract my receipt data.
-
-# Quickstart: Extract receipt data using the Form Recognizer REST API with cURL
-
-In this quickstart, you'll use the Azure Form Recognizer REST API with cURL to extract and identify relevant information from USA sales receipts.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-To complete this quickstart, you must have:
-- [cURL](https://curl.haxx.se/windows/) installed.-- A URL for an image of a receipt. You can use a [sample image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg) for this quickstart.-
-## Create a Form Recognizer resource
-
-[!INCLUDE [create resource](../includes/create-resource.md)]
-
-## Analyze a receipt
-
-To start analyzing a receipt, you call the **[Analyze Receipt](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeReceiptAsync)** API using the cURL command below. Before you run the command, make these changes:
-
-1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription.
-1. Replace `<your receipt URL>` with the URL address of a receipt image.
-1. Replace `<subscription key>` with the subscription key you copied from the previous step.
-
-```bash
-curl -i -X POST "https://<Endpoint>/formrecognizer/v2.1-preview.2/prebuilt/receipt/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription key>" --data-ascii "{ \"source\": \"<your receipt URL>\"}"
-```
-
-You'll receive a `202 (Success)` response that includes am **Operation-Location** header. The value of this header contains an operation ID that you can use to query the status of the asynchronous operation and get the results. In the following example, the string after `operations/` is the operation ID.
-
-```console
-https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/receipt/operations/54f0b076-4e38-43e5-81bd-b85b8835fdfb
-```
-
-## Get the receipt results
-
-After you've called the **Analyze Receipt** API, you call the **[Get Analyze Receipt Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeReceiptResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
-
-1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription key. You can find it on your Form Recognizer resource **Overview** tab.
-1. Replace `<operationId>` with the operation ID from the previous step.
-1. Replace `<subscription key>` with your subscription key.
-
-```bash
-curl -X GET "https://<Endpoint>/formrecognizer/v2.1-preview.2/prebuilt/receipt/analyzeResults/<operationId>" -H "Ocp-Apim-Subscription-Key: <subscription key>"
-```
-
-### Examine the response
-
-You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation is complete, the `"readResults"` field contains every line of text that was extracted from the receipt, and the `"documentResults"` field contains key/value information for the most relevant parts of the receipt. If the operation is not complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
-
-See the following receipt image and its corresponding JSON output. The output has been shortened for readability.
-
-![A receipt from Contoso store](../media/contoso-allinone.jpg)
-
-The `"readResults"` node contains all of the recognized text (if you set the optional *includeTextDetails* parameter to `true`). Text is organized by page, then by line, then by individual words. The `"documentResults"` node contains the receipt-specific values that the model discovered. This is where you'll find useful key/value pairs like the tax, total, merchant address, and so on.
-
-```json
-{
- "status":"succeeded",
- "createdDateTime":"2019-12-17T04:11:24Z",
- "lastUpdatedDateTime":"2019-12-17T04:11:32Z",
- "analyzeResult":{
- "version":"2.1.0",
- "readResults":[
- {
- "page":1,
- "angle":0.6893,
- "width":1688,
- "height":3000,
- "unit":"pixel",
- "language":"en",
- "lines":[
- {
- "text":"Contoso",
- "boundingBox":[
- 635,
- 510,
- 1086,
- 461,
- 1098,
- 558,
- 643,
- 604
- ],
- "words":[
- {
- "text":"Contoso",
- "boundingBox":[
- 639,
- 510,
- 1087,
- 461,
- 1098,
- 551,
- 646,
- 604
- ],
- "confidence":0.955
- }
- ]
- },
- ...
- ]
- }
- ],
- "documentResults":[
- {
- "docType":"prebuilt:receipt",
- "pageRange":[
- 1,
- 1
- ],
- "fields":{
- "ReceiptType":{
- "type":"string",
- "valueString":"Itemized",
- "confidence":0.692
- },
- "MerchantName":{
- "type":"string",
- "valueString":"Contoso Contoso",
- "text":"Contoso Contoso",
- "boundingBox":[
- 378.2,
- 292.4,
- 1117.7,
- 468.3,
- 1035.7,
- 812.7,
- 296.3,
- 636.8
- ],
- "page":1,
- "confidence":0.613,
- "elements":[
- "#/readResults/0/lines/0/words/0",
- "#/readResults/0/lines/1/words/0"
- ]
- },
- "MerchantAddress":{
- "type":"string",
- "valueString":"123 Main Street Redmond, WA 98052",
- "text":"123 Main Street Redmond, WA 98052",
- "boundingBox":[
- 302,
- 675.8,
- 848.1,
- 793.7,
- 809.9,
- 970.4,
- 263.9,
- 852.5
- ],
- "page":1,
- "confidence":0.99,
- "elements":[
- "#/readResults/0/lines/2/words/0",
- "#/readResults/0/lines/2/words/1",
- "#/readResults/0/lines/2/words/2",
- "#/readResults/0/lines/3/words/0",
- "#/readResults/0/lines/3/words/1",
- "#/readResults/0/lines/3/words/2"
- ]
- },
- "MerchantPhoneNumber":{
- "type":"phoneNumber",
- "valuePhoneNumber":"+19876543210",
- "text":"987-654-3210",
- "boundingBox":[
- 278,
- 1004,
- 656.3,
- 1054.7,
- 646.8,
- 1125.3,
- 268.5,
- 1074.7
- ],
- "page":1,
- "confidence":0.99,
- "elements":[
- "#/readResults/0/lines/4/words/0"
- ]
- },
- "TransactionDate":{
- "type":"date",
- "valueDate":"2019-06-10",
- "text":"6/10/2019",
- "boundingBox":[
- 265.1,
- 1228.4,
- 525,
- 1247,
- 518.9,
- 1332.1,
- 259,
- 1313.5
- ],
- "page":1,
- "confidence":0.99,
- "elements":[
- "#/readResults/0/lines/5/words/0"
- ]
- },
- "TransactionTime":{
- "type":"time",
- "valueTime":"13:59:00",
- "text":"13:59",
- "boundingBox":[
- 541,
- 1248,
- 677.3,
- 1261.5,
- 668.9,
- 1346.5,
- 532.6,
- 1333
- ],
- "page":1,
- "confidence":0.977,
- "elements":[
- "#/readResults/0/lines/5/words/1"
- ]
- },
- "Items":{
- "type":"array",
- "valueArray":[
- {
- "type":"object",
- "valueObject":{
- "Quantity":{
- "type":"number",
- "text":"1",
- "boundingBox":[
- 245.1,
- 1581.5,
- 300.9,
- 1585.1,
- 295,
- 1676,
- 239.2,
- 1672.4
- ],
- "page":1,
- "confidence":0.92,
- "elements":[
- "#/readResults/0/lines/7/words/0"
- ]
- },
- "Name":{
- "type":"string",
- "valueString":"Cappuccino",
- "text":"Cappuccino",
- "boundingBox":[
- 322,
- 1586,
- 654.2,
- 1601.1,
- 650,
- 1693,
- 317.8,
- 1678
- ],
- "page":1,
- "confidence":0.923,
- "elements":[
- "#/readResults/0/lines/7/words/1"
- ]
- },
- "TotalPrice":{
- "type":"number",
- "valueNumber":2.2,
- "text":"$2.20",
- "boundingBox":[
- 1107.7,
- 1584,
- 1263,
- 1574,
- 1268.3,
- 1656,
- 1113,
- 1666
- ],
- "page":1,
- "confidence":0.918,
- "elements":[
- "#/readResults/0/lines/8/words/0"
- ]
- }
- }
- },
- ...
- ]
- },
- "Subtotal":{
- "type":"number",
- "valueNumber":11.7,
- "text":"11.70",
- "boundingBox":[
- 1146,
- 2221,
- 1297.3,
- 2223,
- 1296,
- 2319,
- 1144.7,
- 2317
- ],
- "page":1,
- "confidence":0.955,
- "elements":[
- "#/readResults/0/lines/13/words/1"
- ]
- },
- "Tax":{
- "type":"number",
- "valueNumber":1.17,
- "text":"1.17",
- "boundingBox":[
- 1190,
- 2359,
- 1304,
- 2359,
- 1304,
- 2456,
- 1190,
- 2456
- ],
- "page":1,
- "confidence":0.979,
- "elements":[
- "#/readResults/0/lines/15/words/1"
- ]
- },
- "Tip":{
- "type":"number",
- "valueNumber":1.63,
- "text":"1.63",
- "boundingBox":[
- 1094,
- 2479,
- 1267.7,
- 2485,
- 1264,
- 2591,
- 1090.3,
- 2585
- ],
- "page":1,
- "confidence":0.941,
- "elements":[
- "#/readResults/0/lines/17/words/1"
- ]
- },
- "Total":{
- "type":"number",
- "valueNumber":14.5,
- "text":"$14.50",
- "boundingBox":[
- 1034.2,
- 2617,
- 1387.5,
- 2638.2,
- 1380,
- 2763,
- 1026.7,
- 2741.8
- ],
- "page":1,
- "confidence":0.985,
- "elements":[
- "#/readResults/0/lines/19/words/0"
- ]
- }
- }
- }
- ]
- }
-}
-```
-
-## Next steps
-
-In this quickstart, you used the Form Recognizer REST API with cURL to extract the content of a sales receipt. Next, see the reference documentation to explore the Form Recognizer API in more depth.
-
-> [!div class="nextstepaction"]
-> [REST API reference documentation](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeReceiptAsync)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/curl-train-extract https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/curl-train-extract.md deleted file mode 100644
@@ -1,489 +0,0 @@
-title: "Quickstart: Train a model and extract form data using cURL - Form Recognizer"
-titleSuffix: Azure Cognitive Services
-description: In this quickstart, you'll use the Form Recognizer REST API with cURL to train a model and extract data from forms.
-author: PatrickFarley
-manager: nitinme
-
-ms.service: cognitive-services
-ms.subservice: forms-recognizer
-ms.topic: quickstart
-ms.date: 10/05/2020
-ms.author: pafarley
-#Customer intent: As a developer or data scientist familiar with cURL, I want to learn how to use Form Recognizer to extract my form data.
-
-# Quickstart: Train a Form Recognizer model and extract form data by using the REST API with cURL
-
-In this quickstart, you'll use the Azure Form Recognizer REST API with cURL to train and score forms to extract key-value pairs and tables.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-To complete this quickstart, you must have:
-- [cURL](https://curl.haxx.se/windows/) installed.-- A set of at least six forms of the same type. You will use five of these to train the model, and then you'll test it with the sixth form. Your forms can be of different file types but must be the same type of document. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*) for this quickstart. Upload the training files to the root of a blob storage container in a standard-performance-tier Azure Storage account. You can put the testing files in a separate folder.-
-## Create a Form Recognizer resource
-
-[!INCLUDE [create resource](../includes/create-resource.md)]
-
-## Train a Form Recognizer model
-
-First, you'll need a set of training data in an Azure Storage blob. You should have a minimum of five filled-in forms (PDF documents and/or images) of the same type/structure as your main input data. Or, you can use a single empty form with two filled-in forms. The empty form's file name needs to include the word "empty." See [Build a training data set for a custom model](../build-training-data-set.md) for tips and options for putting together your training data.
-
-> [!NOTE]
-> You can use the labeled data feature to manually label some or all of your training data beforehand. This is a more complex process but results in a better trained model. See the [Train with labels](../overview.md#train-with-labels) section of the overview to learn more about this feature.
---
-To train a Form Recognizer model with the documents in your Azure blob container, call the **[Train Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/TrainCustomModelAsync)** API by running the following cURL command. Before you run the command, make these changes:
-
-1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription.
-1. Replace `<subscription key>` with the subscription key you copied from the previous step.
-1. Replace `<SAS URL>` with the Azure Blob storage container's shared access signature (SAS) URL. [!INCLUDE [get SAS URL](../includes/sas-instructions.md)]
--
- # [v2.0](#tab/v2-0)
- ```bash
- curl -i -X POST "https://<Endpoint>/formrecognizer/v2.0/custom/models" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription key>" --data-ascii "{ \"source\": \""<SAS URL>"\"}"
- ```
- # [v2.1 preview](#tab/v2-1)
- ```bash
- curl -i -X POST "https://<Endpoint>/formrecognizer/v2.1-preview.2/custom/models" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription key>" --data-ascii "{ \"source\": \""<SAS URL>"\"}"
- ```
-
- ---
---
-You'll receive a `201 (Success)` response with a **Location** header. The value of this header is the ID of the new model being trained.
-
-## Get training results
-
-After you've started the train operation, you use a new operation, **[Get Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/GetCustomModel)** to check the training status. Pass the model ID into this API call to check the training status:
-
-1. Replace `<Endpoint>` with the endpoint that you obtained with your Form Recognizer subscription key.
-1. Replace `<subscription key>` with your subscription key
-1. Replace `<model ID>` with the model ID you received in the previous step
---
-# [v2.0](#tab/v2-0)
-```bash
-curl -X GET "https://<Endpoint>/formrecognizer/v2.0/custom/models/<model ID>" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription key>"
-```
-# [v2.1 preview](#tab/v2-1)
-```bash
-curl -X GET "https://<Endpoint>/formrecognizer/v2.1-preview.2/custom/models/<model ID>" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription key>"
-```ce\": \""<SAS URL>"\"}"
-```
-
-
-You'll receive a `200 (Success)` response with a JSON body in the following format. Notice the `"status"` field. This will have the value `"ready"` once training is complete. If the model is not finished training, you'll need to query the service again by rerunning the command. We recommend an interval of one second or more between calls.
-
-The `"modelId"` field contains the ID of the model you're training. You'll need this for the next step.
-
-```json
-{
- "modelInfo":{
- "status":"ready",
- "createdDateTime":"2019-10-08T10:20:31.957784",
- "lastUpdatedDateTime":"2019-10-08T14:20:41+00:00",
- "modelId":"1cfb372bab404ba3aa59481ab2c63da5"
- },
- "trainResult":{
- "trainingDocuments":[
- {
- "documentName":"invoices\\Invoice_1.pdf",
- "pages":1,
- "errors":[
-
- ],
- "status":"succeeded"
- },
- {
- "documentName":"invoices\\Invoice_2.pdf",
- "pages":1,
- "errors":[
-
- ],
- "status":"succeeded"
- },
- {
- "documentName":"invoices\\Invoice_3.pdf",
- "pages":1,
- "errors":[
-
- ],
- "status":"succeeded"
- },
- {
- "documentName":"invoices\\Invoice_4.pdf",
- "pages":1,
- "errors":[
-
- ],
- "status":"succeeded"
- },
- {
- "documentName":"invoices\\Invoice_5.pdf",
- "pages":1,
- "errors":[
-
- ],
- "status":"succeeded"
- }
- ],
- "errors":[
-
- ]
- },
- "keys":{
- "0":[
- "Address:",
- "Invoice For:",
- "Microsoft",
- "Page"
- ]
- }
-}
-```
-
-## Analyze forms for key-value pairs and tables
-
-Next, you'll use your newly trained model to analyze a document and extract key-value pairs and tables from it. Call the **[Analyze Form](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm)** API by running the following cURL command. Before you run the command, make these changes:
-
-1. Replace `<Endpoint>` with the endpoint that you obtained from your Form Recognizer subscription key. You can find it on your Form Recognizer resource **Overview** tab.
-1. Replace `<model ID>` with the model ID that you received in the previous section.
-1. Replace `<SAS URL>` with an SAS URL to your file in Azure storage. Follow the steps in the Training section, but instead of getting a SAS URL for the whole blob container, get one for the specific file you want to analyze.
-1. Replace `<subscription key>` with your subscription key.
-
-# [v2.0](#tab/v2-0)
-```bash
-curl -v "https://<Endpoint>/formrecognizer/v2.0/custom/models/<model ID>/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription key>" -d "{ \"source\": \""<SAS URL>"\" } "
-```
-# [v2.1 preview](#tab/v2-1)
-```bash
-curl -v "https://<Endpoint>/formrecognizer/v2.1-preview.2/custom/models/<model ID>/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription key>" -d "{ \"source\": \""<SAS URL>"\" } "
-```
-
---
-You'll receive a `202 (Success)` response with an **Operation-Location** header. The value of this header includes a results ID you use to track the results of the Analyze operation. Save this results ID for the next step.
-
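If you're scripting the workflow, the same call can be made with `requests`; the sketch below captures the **Operation-Location** header for the next step (placeholder values as before).

```python
# Submit the Analyze Form request and keep the Operation-Location header.
import requests

endpoint = "https://<Endpoint>"
subscription_key = "<subscription key>"
model_id = "<model ID>"
file_sas_url = "<SAS URL>"

response = requests.post(
    f"{endpoint}/formrecognizer/v2.1-preview.2/custom/models/{model_id}/analyze",
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
    json={"source": file_sas_url},
)
response.raise_for_status()                  # expect 202 (Success)
print("Results URL:", response.headers["Operation-Location"])
```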
-## Get the Analyze results
-
-Use the following API to query the results of the Analyze operation.
-
-1. Replace `<Endpoint>` with the endpoint that you obtained from your Form Recognizer subscription key. You can find it on your Form Recognizer resource **Overview** tab.
-1. Replace `<result ID>` with the ID that you received in the previous section.
-1. Replace `<subscription key>` with your subscription key.
--
-# [v2.0](#tab/v2-0)
-```bash
-curl -X GET "https://<Endpoint>/formrecognizer/v2.0/custom/models/<model ID>/analyzeResults/<result ID>" -H "Ocp-Apim-Subscription-Key: <subscription key>"
-```
-# [v2.1 preview](#tab/v2-1)
-```bash
-curl -X GET "https://<Endpoint>/formrecognizer/v2.1-preview/custom/models/<model ID>/analyzeResults/<result ID>" -H "Ocp-Apim-Subscription-Key: <subscription key>"
-```
-
-
-You'll receive a `200 (Success)` response with a JSON body in the following format. The output has been shortened for simplicity. Notice the `"status"` field near the bottom. This will have the value `"succeeded"` when the Analyze operation is complete. If the Analyze operation hasn't completed, you'll need to query the service again by rerunning the command. We recommend an interval of one second or more between calls.
-
-The main key/value pair associations and tables are in the `"pageResults"` node. If you also specified plain text extraction through the *includeTextDetails* URL parameter, then the `"readResults"` node will show the content and positions of all the text in the document.
-
-This sample JSON output has been shortened for simplicity.
-
-# [v2.0](#tab/v2-0)
-```JSON
-{
- "status": "succeeded",
- "createdDateTime": "2020-08-21T00:46:25Z",
- "lastUpdatedDateTime": "2020-08-21T00:46:32Z",
- "analyzeResult": {
- "version": "2.0.0",
- "readResults": [
- {
- "page": 1,
- "angle": 0,
- "width": 8.5,
- "height": 11,
- "unit": "inch",
- "lines": [
- {
- "text": "Project Statement",
- "boundingBox": [
- 5.0153,
- 0.275,
- 8.0944,
- 0.275,
- 8.0944,
- 0.7125,
- 5.0153,
- 0.7125
- ],
- "words": [
- {
- "text": "Project",
- "boundingBox": [
- 5.0153,
- 0.275,
- 6.2278,
- 0.275,
- 6.2278,
- 0.7125,
- 5.0153,
- 0.7125
- ]
- },
- {
- "text": "Statement",
- "boundingBox": [
- 6.3292,
- 0.275,
- 8.0944,
- 0.275,
- 8.0944,
- 0.7125,
- 6.3292,
- 0.7125
- ]
- }
- ]
- },
- ...
- ]
- }
- ],
- "pageResults": [
- {
- "page": 1,
- "keyValuePairs": [
- {
- "key": {
- "text": "Date:",
- "boundingBox": [
- 6.9722,
- 1.0264,
- 7.3417,
- 1.0264,
- 7.3417,
- 1.1931,
- 6.9722,
- 1.1931
- ],
- "elements": [
- "#/readResults/0/lines/2/words/0"
- ]
- },
- "confidence": 1
- },
- ...
- ],
- "tables": [
- {
- "rows": 4,
- "columns": 5,
- "cells": [
- {
- "text": "Training Date",
- "rowIndex": 0,
- "columnIndex": 0,
- "boundingBox": [
- 0.6931,
- 4.2444,
- 1.5681,
- 4.2444,
- 1.5681,
- 4.4125,
- 0.6931,
- 4.4125
- ],
- "confidence": 1,
- "rowSpan": 1,
- "columnSpan": 1,
- "elements": [
- "#/readResults/0/lines/15/words/0",
- "#/readResults/0/lines/15/words/1"
- ],
- "isHeader": true,
- "isFooter": false
- },
- ...
- ]
- }
- ],
- "clusterId": 0
- }
- ],
- "documentResults": [],
- "errors": []
- }
-}
-```
-# [v2.1 preview](#tab/v2-1)
-```JSON
-{
- "status": "succeeded",
- "createdDateTime": "2020-08-21T01:13:28Z",
- "lastUpdatedDateTime": "2020-08-21T01:13:42Z",
- "analyzeResult": {
- "version": "2.1.0",
- "readResults": [
- {
- "page": 1,
- "angle": 0,
- "width": 8.5,
- "height": 11,
- "unit": "inch",
- "lines": [
- {
- "text": "Project Statement",
- "boundingBox": [
- 5.0444,
- 0.3613,
- 8.0917,
- 0.3613,
- 8.0917,
- 0.6718,
- 5.0444,
- 0.6718
- ],
- "words": [
- {
- "text": "Project",
- "boundingBox": [
- 5.0444,
- 0.3587,
- 6.2264,
- 0.3587,
- 6.2264,
- 0.708,
- 5.0444,
- 0.708
- ]
- },
- {
- "text": "Statement",
- "boundingBox": [
- 6.3361,
- 0.3635,
- 8.0917,
- 0.3635,
- 8.0917,
- 0.6396,
- 6.3361,
- 0.6396
- ]
- }
- ]
- },
- ...
- ]
- }
- ],
- "pageResults": [
- {
- "page": 1,
- "keyValuePairs": [
- {
- "key": {
- "text": "Date:",
- "boundingBox": [
- 6.9833,
- 1.0615,
- 7.3333,
- 1.0615,
- 7.3333,
- 1.1649,
- 6.9833,
- 1.1649
- ],
- "elements": [
- "#/readResults/0/lines/2/words/0"
- ]
- },
- "value": {
- "text": "9/10/2020",
- "boundingBox": [
- 7.3833,
- 1.0802,
- 7.925,
- 1.0802,
- 7.925,
- 1.174,
- 7.3833,
- 1.174
- ],
- "elements": [
- "#/readResults/0/lines/3/words/0"
- ]
- },
- "confidence": 1
- },
- ...
- ],
- "tables": [
- {
- "rows": 5,
- "columns": 5,
- "cells": [
- {
- "text": "Training Date",
- "rowIndex": 0,
- "columnIndex": 0,
- "boundingBox": [
- 0.6944,
- 4.2779,
- 1.5625,
- 4.2779,
- 1.5625,
- 4.4005,
- 0.6944,
- 4.4005
- ],
- "confidence": 1,
- "rowSpan": 1,
- "columnSpan": 1,
- "elements": [
- "#/readResults/0/lines/15/words/0",
- "#/readResults/0/lines/15/words/1"
- ],
- "isHeader": true,
- "isFooter": false
- },
- ...
- ]
- }
- ],
- "clusterId": 0
- }
- ],
- "documentResults": [],
- "errors": []
- }
-}
-```
---
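To give a sense of how you might consume this response programmatically, here is a rough Python sketch that walks the `"pageResults"` node and prints the key/value pairs and table cells. It assumes you saved the JSON body to a local file named `analyze_results.json` (a hypothetical name), and it tolerates the v2.0 shape, where a key/value pair may have no `"value"` field.

```python
# Walk the pageResults node of a saved Get Analyze Results response.
import json

with open("analyze_results.json") as f:      # hypothetical saved response body
    result = json.load(f)

if result["status"] == "succeeded":
    for page in result["analyzeResult"]["pageResults"]:
        for pair in page.get("keyValuePairs", []):
            key_text = pair["key"]["text"]
            value_text = pair.get("value", {}).get("text", "")
            print(f"{key_text} {value_text}")
        for table in page.get("tables", []):
            for cell in table["cells"]:
                print(f"row {cell['rowIndex']}, col {cell['columnIndex']}: {cell['text']}")
else:
    print("Analysis not finished yet; status:", result["status"])
```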
-## Improve results
-
-[!INCLUDE [improve results](../includes/improve-results-unlabeled.md)]
-
-## Next steps
-
-In this quickstart, you used the Form Recognizer REST API with cURL to train a model and run it in a sample scenario. Next, see the reference documentation to explore the Form Recognizer API in more depth.
-
-> [!div class="nextstepaction"]
-> [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/label-tool.md
@@ -135,6 +135,7 @@ Fill in the fields with the following values:
* **Description** - Your project description. * **SAS URL** - The shared access signature (SAS) URL of your Azure Blob Storage container. [!INCLUDE [get SAS URL](../includes/sas-instructions.md)]
+ :::image type="content" source="../media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
:::image type="content" source="../media/label-tool/connections.png" alt-text="Connection settings of sample labeling tool.":::
@@ -263,7 +264,7 @@ The following value types and variations are currently supported:
Click the Train icon on the left pane to open the Training page. Then click the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
-* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./curl-train-extract.md) or [client library](./client-library.md).
+* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./client-library.md?pivots=programming-language-rest-api) or [client library](./client-library.md).
* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling additional forms and training again to create a new model. We recommend starting by labeling five forms and adding more forms as needed. * The list of tags, and the estimated accuracy per tag.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-labeled-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/python-labeled-data.md
@@ -252,6 +252,8 @@ To train a model with labeled data, call the **[Train Custom Model](https://west
1. Replace `<Endpoint>` with the endpoint URL for your Form Recognizer resource. 1. Replace `<SAS URL>` with the Azure Blob storage container's shared access signature (SAS) URL. [!INCLUDE [get SAS URL](../includes/sas-instructions.md)]+
+ :::image type="content" source="../media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
1. Replace `<Blob folder name>` with the folder name in your blob container where the input data is located. Or, if your data is at the root, leave this blank and remove the `"prefix"` field from the body of the HTTP request. # [v2.0](#tab/v2-0)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/python-train-extract.md
@@ -45,6 +45,7 @@ To train a Form Recognizer model with the documents in your Azure blob container
1. Replace `<SAS URL>` with the Azure Blob storage container's shared access signature (SAS) URL. [!INCLUDE [get SAS URL](../includes/sas-instructions.md)]
+ :::image type="content" source="../media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
1. Replace `<subscription key>` with the subscription key you copied from the previous step. 1. Replace `<endpoint>` with the endpoint URL for your Form Recognizer resource. 1. Replace `<Blob folder name>` with the path to the folder in blob storage where your forms are located. If your forms are at the root of your container, leave this string empty.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/includes/quickstarts/python-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/includes/quickstarts/python-sdk.md
@@ -56,6 +56,7 @@ pip install --upgrade azure-ai-textanalytics
> [!TIP] > Want to view the whole quickstart code file at once? You can find it [on GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/TextAnalytics/python-v3-client-library.py), which contains the code examples in this quickstart. + # [Version 2.1](#tab/version-2) ```console
@@ -166,6 +167,7 @@ client = authenticate_client()
Create a function to instantiate the `TextAnalyticsClient` object with your `key` AND `endpoint` created above. Then create a new client. Note that `api_version=TextAnalyticsApiVersion.V3_0` should be defined for using version 3.0. ```python
+# use this code if you're using SDK version 5.0.0
from azure.ai.textanalytics import TextAnalyticsClient from azure.core.credentials import AzureKeyCredential
@@ -173,13 +175,30 @@ def authenticate_client():
ta_credential = AzureKeyCredential(key) text_analytics_client = TextAnalyticsClient( endpoint=endpoint,
- credential=ta_credential,
- api_version=TextAnalyticsApiVersion.V3_0)
+ credential=ta_credential)
return text_analytics_client client = authenticate_client() ```
+If you installed v5.1.0 of the client library using `pip install azure-ai-textanalytics --pre`, you can specify v3.0 of the Text Analytics API with the client's `api_version` parameter. Only use the following `authenticate_client()` method if your client is v5.1.0 or later.
+
+```python
+# Only use the following code sample if you're using v5.1.0 of the client library,
+# and are looking to specify v3.0 of the Text Analytics API for your client
+from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiVersion
+from azure.core.credentials import AzureKeyCredential
+def authenticate_client():
+ ta_credential = AzureKeyCredential(key)
+ text_analytics_client = TextAnalyticsClient(
+ endpoint=endpoint,
+ credential=ta_credential,
+ api_version=TextAnalyticsApiVersion.V3_0
+    )
+    return text_analytics_client
+
+client = authenticate_client()
+```
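Once the client is authenticated with either method, subsequent calls look the same. Purely as an illustration (the sample sentence is an arbitrary example, not taken from the quickstart), a sentiment-analysis call against the client might look like this:

```python
# Illustrative call against the authenticated client object.
documents = ["The restaurant was great, but the wait for a table was far too long."]

result = client.analyze_sentiment(documents=documents)
for doc in result:
    if not doc.is_error:
        print(doc.sentiment, doc.confidence_scores)
```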
+ # [Version 2.1](#tab/version-2) [!code-python[imports statements](~/samples-cognitive-services-python-sdk/samples/language/text_analytics_samples.py?name=imports)]
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
@@ -29,9 +29,9 @@ Learn more about [event handling in Azure Communication Services](./event-handli
## Deliver push notifications via Azure Notification Hubs
-You can connect an Azure Notification Hub to your Communication Services resource in order to automatically send push notifications to a user's mobile device when they receive an incoming call. You should used these push notifications to wake up your application from the background and display UI that let's the user accept or decline the call.
+You can connect an Azure Notification Hub to your Communication Services resource in order to automatically send push notifications to a user's mobile device when they receive an incoming call. You should use these push notifications to wake up your application from the background and display UI that lets the user accept or decline the call.
-:::image type="content" source="./media/notifications/acs-anh-int.png" alt-text="Diagram showing how communication services integrates with Azure Notifications Hub.":::
+:::image type="content" source="./media/notifications/acs-anh-int.png" alt-text="Diagram showing how communication services integrates with Azure Notification Hubs.":::
Communication Services uses Azure Notification Hub as a pass-through service to communicate with the various platform-specific push notification services using the [Direct Send](/rest/api/notificationhubs/direct-send) API. This allows you to reuse your existing Azure Notification Hub resources and configurations to deliver low latency, reliable calling notifications to your applications.
@@ -40,15 +40,15 @@ Communication Services uses Azure Notification Hub as a pass-through service to
### Notification Hub provisioning
-To deliver push notifications to client devices using Notification Hubs, [create a Notification Hub](../../notification-hubs/create-notification-hub-portal.md) within the same subscription as your Communication Services resource. Azure Notification Hubs must be configured for the Platform Notifications Service you want to use. To learn how to get push notifications in your client app from Notification Hubs, see [Getting started with Notification Hubs](../../notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.md) and select your target client platform from the drop-down list near the top of the page.
+To deliver push notifications to client devices using Notification Hubs, [create a Notification Hub](../../notification-hubs/create-notification-hub-portal.md) within the same subscription as your Communication Services resource. You must configure the Azure Notification Hub for the Platform Notification System you want to use. To learn how to get push notifications in your client app from Notification Hubs, see [Getting started with Notification Hubs](../../notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.md) and select your target client platform from the drop-down list near the top of the page.
> [!NOTE] > Currently the APNs and FCM platforms are supported.
-The APNs platform needs to be configured with token authentication mode. Certificate authentication mode is not supported as of now.
+The APNs platform needs to be configured with token authentication mode. Certificate authentication mode isn't supported as of now.
-Once your Notification hub is configured, you can associate it to your Communication Services resource by supplying a connection string for the hub using the Azure Resource Manager Client or through the Azure portal. The connection string should contain "Send" permissions. We recommend creating another access policy with "Send" only permissions specifically for your hub. Learn more about [Notification Hubs security and access policies](../../notification-hubs/notification-hubs-push-notification-security.md)
+Once your Notification hub is configured, you can associate it to your Communication Services resource by supplying a connection string for the hub using the Azure Resource Manager Client or through the Azure portal. The connection string should contain `Send` permissions. We recommend creating another access policy with `Send` only permissions specifically for your hub. Learn more about [Notification Hubs security and access policies](../../notification-hubs/notification-hubs-push-notification-security.md)
-#### Using the Azure Resource Manager client to configure the Notification Hub
+#### Using the Azure Resource Manager client to link your Notification Hub
To log into Azure Resource Manager, execute the following and sign in using your credentials.
@@ -62,20 +62,62 @@ armclient login
armclient POST /subscriptions/<sub_id>/resourceGroups/<resource_group>/providers/Microsoft.Communication/CommunicationServices/<resource_id>/linkNotificationHub?api-version=2020-08-20-preview "{'connectionString': '<connection_string>','resourceId': '<resource_id>'}" ```
-#### Using the Azure portal to configure the Notification Hub
+#### Using the Azure portal to link your Notification Hub
-In the portal, navigate to your Azure Communication Services resource. Inside the Communication Services resource, select Push Notifications from the left menu of the Communication Services page and connect the Notification Hub that you provisioned earlier. You'll need to provide your connection string and resource ID here:
+In the portal, navigate to your Azure Communication Services resource. Inside the Communication Services resource, select Push Notifications from the left menu of the Communication Services page and connect the Notification Hub that you provisioned earlier. You'll need to provide your connection string and resourceId here:
-:::image type="content" source="./media/notifications/acs-anh-portal-int.png" alt-text="Screenshot showing the Push Notifications settings within the Azure Portal.":::
+:::image type="content" source="./media/notifications/acs-anh-portal-int.png" alt-text="Screenshot showing the Push Notifications settings within the Azure portal.":::
> [!NOTE] > If the Azure Notification Hub connection string is updated, the Communication Services resource has to be updated as well. Any change to how the hub is linked is reflected in the data plane (that is, when sending a notification) within a maximum of ``10`` minutes. This also applies when the hub is linked for the first time **if** notifications were sent before.
-#### Device registration
+### Device registration
Refer to the [voice calling quickstart](../quickstarts/voice-video-calling/getting-started-with-calling.md) to learn how to register your device handle with Communication Services.
+### Troubleshooting guide for push notifications
+
+When you don't see push notifications on your device, there are three places where the notifications could have been dropped:
+
+- Azure Notification Hubs didn't accept the notification from Azure Communication Services
+- The Platform Notification System (for example APNs and FCM) didn't accept the notification from Azure Notification Hubs
+- The Platform Notification System didn't deliver the notification to the device.
+
+The first place where a notification can be dropped (Azure Notification Hubs didn't accept the notifications from Azure Communication Services) is covered below. For the other two places, see [Diagnose dropped notifications in Azure Notification Hubs](../../notification-hubs/notification-hubs-push-notification-fixer.md).
+
+One way to see if your Communication Services resource sends notifications to Azure Notification Hubs is by looking at the `incoming messages` metric from the linked [Azure Notification Hub metrics](../../azure-monitor/platform/metrics-supported.md#microsoftnotificationhubsnamespacesnotificationhubs).
+
+The following are some common misconfigurations that might cause Azure Notification Hubs to reject the notifications from your Communication Services resource.
+
+#### Azure Notification Hub not linked to the Communication Services resource
+
+You might not have linked your Azure Notification Hub to your Communication Services resource. See the [Notification Hub provisioning section](#notification-hub-provisioning) to learn how to link them.
+
+#### The linked Azure Notification Hub isn't configured
+
+You have to configure the linked Notification Hub with the Platform Notification System credentials for the platform (for example, iOS or Android) that you want to use. For details on how to do this, see [Set up push notifications in a notification hub](../../notification-hubs/configure-notification-hub-portal-pns-settings.md).
+
+#### The linked Azure Notification Hub doesn't exist
+
+The Azure Notification Hub linked to your Communication Services resource doesn't exist anymore. Check that the linked Notification Hub still exists.
+
+#### The Azure Notification Hub APNs platform is configured with certificate authentication mode
+
+The APNs platform with certificate authentication mode isn't currently supported. Configure the APNs platform with token authentication mode instead, as described in [Set up push notifications in a notification hub](../../notification-hubs/configure-notification-hub-portal-pns-settings.md).
+
+#### The linked connection string doesn't have `Send` permission
+
+The connection string that you used to link your Notification Hub to your Communication Services resource needs to have the `Send` permission. For details about how to create a new connection string or view the current one for your Azure Notification Hub, see [Notification Hubs security and access policies](../../notification-hubs/notification-hubs-push-notification-security.md).
+
+#### The linked connection string or Azure Notification Hub resourceId aren't valid
+
+Make sure that you configure your Communication Services resource with the correct connection string and Azure Notification Hub resourceId.
+
+#### The linked connection string is regenerated
+
+If you regenerate the connection string of your linked Azure Notification Hub, you have to update the connection string in your Communication Services resource by [relinking the Notification Hub](#notification-hub-provisioning).
+ ## Next steps * For an introduction to Azure Event Grid, see [What is Event Grid?](../../event-grid/overview.md)
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-github-action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-github-action.md
@@ -51,7 +51,7 @@ In the GitHub workflow, you need to supply Azure credentials to authenticate to
First, get the resource ID of your resource group. Substitute the name of your group in the following [az group show][az-group-show] command: ```azurecli
-$groupId=$(az group show \
+groupId=$(az group show \
--name <resource-group-name> \ --query id --output tsv) ```
data-factory https://docs.microsoft.com/en-us/azure/data-factory/introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/introduction.md
@@ -51,8 +51,7 @@ After data is present in a centralized data store in the cloud, process or trans
If you prefer to code transformations by hand, ADF supports external activities for executing your transformations on compute services such as HDInsight Hadoop, Spark, Data Lake Analytics, and Machine Learning. ### CI/CD and publish
-Data Factory offers full support for CI/CD of your data pipelines using Azure DevOps and GitHub. This allows you to incrementally develop and deliver your ETL processes before publishing the finished product. After the raw data has been refined into a business-ready consumable form, load the data into Azure Synapse Analytics, Azure SQL Database, Azure CosmosDB, or whichever analytics engine your business users can point to from their business intelligence tools.
-
+[Data Factory offers full support for CI/CD](continuous-integration-deployment.md) of your data pipelines using Azure DevOps and GitHub. This allows you to incrementally develop and deliver your ETL processes before publishing the finished product. After the raw data has been refined into a business-ready consumable form, load the data into Azure Data Warehouse, Azure SQL Database, Azure CosmosDB, or whichever analytics engine your business users can point to from their business intelligence tools.
### Monitor After you have successfully built and deployed your data integration pipeline, providing business value from refined data, monitor the scheduled activities and pipelines for success and failure rates. Azure Data Factory has built-in support for pipeline monitoring via Azure Monitor, API, PowerShell, Azure Monitor logs, and health panels on the Azure portal.
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/alerts.md new file mode 100644
@@ -0,0 +1,100 @@
+---
+title: View and configure DDoS protection alerts for Azure DDoS Protection Standard
+description: Learn how to view and configure DDoS protection alerts for Azure DDoS Protection Standard.
+services: ddos-protection
+documentationcenter: na
+author: yitoh
+ms.service: ddos-protection
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: infrastructure-services
+ms.date: 12/28/2020
+ms.author: yitoh
+
+---
+# View and configure DDoS protection alerts
+
+Azure DDoS Protection Standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports and mitigation flow logs. Rich telemetry is exposed via Azure Monitor, including detailed metrics for the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Azure Sentinel](../sentinel/connect-azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Configure alerts through Azure Monitor
+> * Configure alerts through portal
+> * View alerts in Azure Security Center
+> * Validate and test your alerts
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Standard protection plan](manage-ddos-protection.md), and DDoS Protection Standard must be enabled on a virtual network.
+- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments and Azure VPN Gateway. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.  
+
+## Configure alerts through Azure Monitor
+
+With these templates, you can configure alerts for all public IP addresses that you have enabled diagnostic logging on. To use these alert templates, you first need a Log Analytics workspace with diagnostic settings enabled. See [View and configure DDoS diagnostic logging](diagnostic-logging.md).
+
+### Azure Monitor alert rule
+This [Azure Monitor alert rule](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Monitor%20Alert%20-%20DDoS%20Mitigation%20Started) will run a simple query to detect when an active DDoS mitigation is occurring. This indicates a potential attack. Action groups can be used to invoke actions as a result of the alert.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAzure%2520Monitor%2520Alert%2520-%2520DDoS%2520Mitigation%2520Started%2FDDoSMitigationStarted.json)
+
+### Azure Monitor alert rule with Logic App
+
+This [template](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/DDoS%20Mitigation%20Alert%20Enrichment) deploys the necessary components of an enriched DDoS mitigation alert: Azure Monitor alert rule, action group, and Logic App. The result of the process is an email alert with details about the IP address under attack, including information about the resource associated with the IP. The owner of the resource is added as a recipient of the email, along with the security team. A basic application availability test is also performed and the results are included in the email alert.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FDDoS%2520Mitigation%2520Alert%2520Enrichment%2FEnrich-DDoSAlert.json)
+
+## Configure alerts through portal
+
+You can select any of the available DDoS protection metrics to alert you when there's an active mitigation during an attack, using the Azure Monitor alert configuration.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your DDoS Protection Plan.
+2. Under **Monitoring**, select **Metrics**.
+3. In the gray navigation bar, select **New alert rule**.
+4. Enter or select your own values, or use the following example values, accept the remaining defaults, and then select **Create alert rule**:
+
+ |Setting |Value |
+ |--------- |--------- |
+ | Scope | Select **Select resource**. </br> Select the **Subscription** that contains the public IP address you want to log, select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for. </br> Select **Done**. |
+ | Condition | Select **Select condition**. </br> Under signal name, select **Under DDoS attack or not**. </br> Under **Operator**, select **Greater than or equal to**. </br> Under **Aggregation type**, select **Maximum**. </br> Under **Threshold value**, enter *1*. For the **Under DDoS attack or not** metric, **0** means you are not under attack while **1** means you are under attack. </br> Select **Done**. |
+ | Actions | Select **Add action groups**. </br> Select **Create action group**. </br> Under **Notifications**, under **Notification type**, select **Email/SMS message/Push/Voice**. </br> Under **Name**, enter _MyUnderAttackEmailAlert_. </br> Click the edit button, then select **Email** and as many of the following options as you require, and then select **OK**. </br> Select **Review + create**. |
+ | Alert rule details | Under **Alert rule name**, Enter _MyDdosAlert_. |
+
+Within a few minutes of attack detection, you should receive an email from Azure Monitor metrics that looks similar to the following picture:
+
+![Attack alert](./media/manage-ddos-protection/ddos-alert.png)
+
+You can also learn more about [configuring webhooks](../azure-monitor/platform/alerts-webhooks.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [logic apps](../logic-apps/logic-apps-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for creating alerts.
+
+## View alerts in Azure Security Center
+
+Azure Security Center provides a list of [security alerts](../security-center/security-center-managing-and-responding-alerts.md), with information to help investigate and remediate problems. With this feature, you get a unified view of alerts, including DDoS attack-related alerts and the actions taken to mitigate the attack in near-time.
+There are two specific alerts that you will see for any DDoS attack detection and mitigation:
+
+- **DDoS Attack detected for Public IP**: This alert is generated when the DDoS protection service detects that one of your public IP addresses is the target of a DDoS attack.
+- **DDoS Attack mitigated for Public IP**: This alert is generated when an attack on the public IP address has been mitigated.
+To view the alerts, open **Security Center** in the Azure portal. Under **Threat Protection**, select **Security alerts**. The following screenshot shows an example of the DDoS attack alerts.
+
+![DDoS Alert in Azure Security Center](./media/manage-ddos-protection/ddos-alert-asc.png)
+
+The alerts include general information about the public IP address that's under attack, geo and threat intelligence information, and remediation steps.
+
+## Validate and test
+
+To simulate a DDoS attack to validate your alerts, see [Validate DDoS detection](test-through-simulations.md).
+
+## Next steps
+
+In this tutorial, you learned how to:
+
+- Configure alerts through Azure Monitor
+- Configure alerts through portal
+- View alerts in Azure Security Center
+- Validate and test your alerts
+
+To learn how to test and simulate a DDoS attack, see the simulation testing guide:
+
+> [!div class="nextstepaction"]
+> [Test through simulations](test-through-simulations.md)
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-protection-partner-onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-partner-onboarding.md
@@ -34,8 +34,8 @@ For every protected application, Azure DDoS Protection Standard automatically tu
Azure DDoS Protection identifies and mitigates DDoS attacks without any user intervention. - If the protected resource is in the subscription covered under Azure Security Center, DDoS Protection Standard automatically sends an alert to Security Center whenever a DDoS attack is detected and mitigated against the protected application.-- Alternatively, to get notified when thereΓÇÖs an active mitigation for a protected public IP, you can [configure an alert](telemetry-monitoring-alerting.md#configure-alerts-for-ddos-protection-metrics) on the metric Under DDoS attack or not.-- You can additionally choose to create alerts for the other DDoS metrics and [configure attack analytics](telemetry-monitoring-alerting.md) to understand the scale of the attack, traffic being dropped, attack vectors, top contributors, and other details.
+- Alternatively, to get notified when there's an active mitigation for a protected public IP, you can [configure an alert](alerts.md) on the metric Under DDoS attack or not.
+- You can additionally choose to create alerts for the other DDoS metrics and [configure attack telemetry](telemetry.md) to understand the scale of the attack, traffic being dropped, attack vectors, top contributors, and other details.
![DDoS metrics](./media/ddos-protection-partner-onboarding/ddos-metrics.png)
@@ -65,7 +65,7 @@ The following steps are required for partners to configure integration with Azur
3. Enable Azure DDoS Protection Standard on the virtual network of the service that has public endpoints using the DDoS Protection Plan created in the first step. For step-by-step instructions, see [Enable DDoS Standard Protection plan](manage-ddos-protection.md#enable-ddos-protection-for-an-existing-virtual-network) > [!IMPORTANT] > After Azure DDoS Protection Standard is enabled on a virtual network, all public IPs within that virtual network are automatically protected. The origin of these public IPs can be either within Azure (client subscription) or outside of Azure.
-4. Optionally, integrate Azure DDoS Protection Standard telemetry and attack analytics in your application-specific customer-facing dashboard. For more information about using telemetry, see [View and configure DDoS protection telemetry](telemetry-monitoring-alerting.md).
+4. Optionally, integrate Azure DDoS Protection Standard telemetry and attack analytics in your application-specific customer-facing dashboard. For more information about using telemetry, see [View and configure DDoS protection telemetry](telemetry.md).
### Onboarding guides and technical documentation
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-rapid-response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-rapid-response.md
@@ -15,7 +15,7 @@ ms.author: yitoh
--- # Azure DDoS Rapid Response
-During an active access, Azure DDoS Protection Standard customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack as well as post-attack analysis.
+During an active attack, Azure DDoS Protection Standard customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis.
## Prerequisites
@@ -34,7 +34,7 @@ You should only engage DRR if:
1. From Azure portal while creating a new support request, choose **Issue Type** as Technical. 2. Choose **Service** as **DDOS Protection**.
-3. Choose a resource in the resource drop down menu. _You must select a DDoS Plan thatΓÇÖs linked to the virtual network being protected by DDoS Protection Standard to engage DRR._
+3. Choose a resource in the resource drop-down menu. _You must select a DDoS Plan that's linked to the virtual network being protected by DDoS Protection Standard to engage DRR._
![Choose Resource](./media/ddos-rapid-response/choose-resource.png)
@@ -44,12 +44,12 @@ You should only engage DRR if:
5. Complete additional details and submit the support request.
-DRR follows the Azure Rapid Response support model. Refer to [Support scope and responsiveness](https://azure.microsoft.com/en-us/support/plans/response/) for more information on Rapid Response..
+DRR follows the Azure Rapid Response support model. Refer to [Support scope and responsiveness](https://azure.microsoft.com/en-us/support/plans/response/) for more information on Rapid Response.
To learn more, read the [DDoS Protection Standard documentation](./ddos-protection-overview.md). ## Next steps - Learn how to [test through simulations](test-through-simulations.md).-- Learn how to [view and configure DDoS protection telemetry](telemetry-monitoring-alerting.md).-- Learn how to [configure DDoS attack mitigation reports and flow logs](reports-and-flow-logs.md).\ No newline at end of file
+- Learn how to [view and configure DDoS protection telemetry](telemetry.md).
+- Learn how to [view and configure DDoS diagnostic logging](diagnostic-logging.md).
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-response-strategy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-response-strategy.md
@@ -51,7 +51,7 @@ We recommend that you use simulation exercises as a normal part of your service
## Alerts during an attack
-Azure DDoS Protection Standard identifies and mitigates DDoS attacks without any user intervention. To get notified when thereΓÇÖs an active mitigation for a protected public IP, you can [configure an alert](telemetry-monitoring-alerting.md) on the metric **Under DDoS attack or not**. You can choose to create alerts for the other DDoS metrics to understand the scale of the attack, traffic being dropped, and other details.
+Azure DDoS Protection Standard identifies and mitigates DDoS attacks without any user intervention. To get notified when there's an active mitigation for a protected public IP, you can [configure alerts](alerts.md).
### When to contact Microsoft support
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/diagnostic-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/diagnostic-logging.md new file mode 100644
@@ -0,0 +1,158 @@
+---
+title: Azure DDoS Protection Standard reports and flow logs
+description: Learn how to configure reports and flow logs.
+services: ddos-protection
+documentationcenter: na
+author: yitoh
+ms.service: ddos-protection
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: infrastructure-services
+ms.date: 12/28/2020
+ms.author: yitoh
+
+---
+
+# View and configure DDoS diagnostic logging
+
+Azure DDoS Protection Standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports and mitigation flow logs. Rich telemetry is exposed via Azure Monitor, including detailed metrics for the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Azure Sentinel](../sentinel/connect-azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+
+The following diagnostic logs are available for Azure DDoS Protection Standard:
+
+- **DDoSProtectionNotifications**: Notifies you anytime a public IP resource is under attack, and when attack mitigation is over.
+- **DDoSMitigationFlowLogs**: Attack mitigation flow logs allow you to review the dropped traffic, forwarded traffic, and other interesting data points during an active DDoS attack in near-real time. You can ingest the constant stream of this data into Azure Sentinel or your third-party SIEM systems via Event Hubs for near-real time monitoring, so you can take action and address the needs of your defense operations.
+- **DDoSMitigationReports**: Attack mitigation reports use aggregated Netflow protocol data to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, report generation starts as soon as the mitigation starts. An incremental report is generated every 5 minutes, and a post-mitigation report covers the whole mitigation period. This ensures that if a DDoS attack continues for a longer duration, you can view the most current snapshot of the mitigation report every 5 minutes and a complete summary once the attack mitigation is over.
+- **AllMetrics**: Provides all possible metrics available for the duration of a DDoS attack.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Configure DDoS diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
+> * Enable diagnostic logging on all public IPs in a defined scope.
+> * View log data in workbooks.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Standard protection plan](manage-ddos-protection.md), and DDoS Protection Standard must be enabled on a virtual network.
+- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments and Azure VPN Gateway. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.  
+
+## Configure DDoS diagnostic logs
+
+If you want to automatically enable diagnostic logging on all public IPs within an environment, skip to [Enable diagnostic logging on all public IPs](#enable-diagnostic-logging-on-all-public-ips).
+
+1. Select **All services** on the top, left of the portal.
+2. Enter *Monitor* in the **Filter** box. When **Monitor** appears in the results, select it.
+3. Under **Settings**, select **Diagnostic Settings**.
+4. Select the **Subscription** and **Resource group** that contain the public IP address you want to log.
+5. Select **Public IP Address** for **Resource type**, then select the specific public IP address you want to enable logs for.
+6. Select **Add diagnostic setting**. Under **Category Details**, select as many of the following options as you require, and then select **Save**.
+
+ ![DDoS Diagnostic Settings](./media/ddos-attack-telemetry/ddos-diagnostic-settings.png)
+
+7. Under **Destination details**, select as many of the following options as you require:
+
+ - **Archive to a storage account**: Data is written to an Azure Storage account. To learn more about this option, see [Archive resource logs](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage).
+ - **Stream to an event hub**: Allows a log receiver to pick up logs using an Azure Event Hub. Event hubs enable integration with Splunk or other SIEM systems. To learn more about this option, see [Stream resource logs to an event hub](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
+ - **Send to Log Analytics**: Writes logs to the Azure Monitor service. To learn more about this option, see [Collect logs for use in Azure Monitor logs](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-log-analytics-workspace).
+
+### Log schemas
+
+The following table lists the field names and descriptions:
+
+# [DDoSProtectionNotifications](#tab/DDoSProtectionNotifications)
+
+| Field name | Description |
+| --- | --- |
+| **TimeGenerated** | The date and time in UTC when the notification was created. |
+| **ResourceId** | The resource ID of your public IP address. |
+| **Category** | For notifications, this will be `DDoSProtectionNotifications`.|
+| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
+| **SubscriptionId** | Your DDoS protection plan subscription ID. |
+| **Resource** | The name of your public IP address. |
+| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
+| **OperationName** | For notifications, this will be `DDoSProtectionNotifications`. |
+| **Message** | Details of the attack. |
+| **Type** | Type of notification. Possible values include `MitigationStarted`. `MitigationStopped`. |
+| **PublicIpAddress** | Your public IP address. |
+
+# [DDoSMitigationFlowLogs](#tab/DDoSMitigationFlowLogs)
+
+| Field name | Description |
+| --- | --- |
+| **TimeGenerated** | The date and time in UTC when the flow log was created. |
+| **ResourceId** | The resource ID of your public IP address. |
+| **Category** | For flow logs, this will be `DDoSMitigationFlowLogs`.|
+| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
+| **SubscriptionId** | Your DDoS protection plan subscription ID. |
+| **Resource** | The name of your public IP address. |
+| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
+| **OperationName** | For flow logs, this will be `DDoSMitigationFlowLogs`. |
+| **Message** | Details of the attack. |
+| **SourcePublicIpAddress** | The public IP address of the client generating traffic to your public IP address. |
+| **SourcePort** | Port number ranging from 0 to 65535. |
+| **DestPublicIpAddress** | Your public IP address. |
+| **DestPort** | Port number ranging from 0 to 65535. |
+| **Protocol** | Type of protocol. Possible values include `tcp`, `udp`, `other`.|
+
+# [DDoSMitigationReports](#tab/DDoSMitigationReports)
+
+| Field name | Description |
+| --- | --- |
+| **TimeGenerated** | The date and time in UTC when the report was created. |
+| **ResourceId** | The resource ID of your public IP address. |
+| **Category** | For mitigation reports, this will be `DDoSMitigationReports`.|
+| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
+| **SubscriptionId** | Your DDoS protection plan subscription ID. |
+| **Resource** | The name of your public IP address. |
+| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
+| **OperationName** | For mitigation reports, this will be `DDoSMitigationReports`. |
+| **ReportType** | Possible values include `Incremental`, `PostMitigation`.|
+| **MitigationPeriodStart** | The date and time in UTC when the mitigation started. |
+| **MitigationPeriodEnd** | The date and time in UTC when the mitigation ended. |
+| **IPAddress** | Your public IP address. |
+| **AttackVectors** | Breakdown of attack types. Keys include `TCP SYN flood`, `TCP flood`, `UDP flood`, `UDP reflection`, `Other packet flood`.|
+| **TrafficOverview** | Breakdown of attack traffic. Keys include `Total packets`, `Total packets dropped`, `Total TCP packets`, `Total TCP packets dropped`, `Total UDP packets`, `Total UDP packets dropped`, `Total Other packets`, `Total Other packets dropped`. |
+| **Protocols** | Breakdown of protocols involved. Keys include `TCP`, `UDP`, `Other`. |
+| **DropReasons** | Breakdown of reasons for dropped packets. Keys include `Protocol violation invalid TCP syn`, `Protocol violation invalid TCP`, `Protocol violation invalid UDP`, `UDP reflection`, `TCP rate limit exceeded`, `UDP rate limit exceeded`, `Destination limit exceeded`, `Other packet flood`, `Rate limit exceeded`, `Packet was forwarded to service`. |
+| **TopSourceCountries** | Breakdown of top 10 source countries of incoming traffic. |
+| **TopSourceCountriesForDroppedPackets** | Breakdown of top 10 source countries of attack traffic that is/was mitigated. |
+| **TopSourceASNs** | Breakdown of top 10 source autonomous system numbers (ASN) of the incoming traffic. |
+| **SourceContinents** | Breakdown of the source continents of incoming traffic. |
+***
+
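If you archive these logs to a storage account, a rough Python sketch for summarizing the flow-log entries could look like the following. The blob file name, the `records` wrapper, and the lower-cased property names are assumptions based on the schema above and the usual layout of archived diagnostic logs; adjust them to match your actual export.

```python
# Rough sketch: summarize archived DDoSMitigationFlowLogs entries by protocol and source IP.
import json
from collections import Counter

with open("PT1H.json") as f:                 # hypothetical downloaded log blob
    records = json.load(f).get("records", [])

protocols = Counter()
sources = Counter()
for record in records:
    if record.get("category") != "DDoSMitigationFlowLogs":
        continue
    props = record.get("properties", {})     # property layout may vary by export
    protocols[props.get("protocol", "unknown")] += 1
    sources[props.get("sourcePublicIpAddress", "unknown")] += 1

print("Flows by protocol:", protocols.most_common())
print("Top source IPs:", sources.most_common(10))
```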
+## Enable diagnostic logging on all public IPs
+
+This [template](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Enable%20Diagnostic%20Logging/Azure%20Policy) creates an Azure Policy definition to automatically enable diagnostic logging on all public IP logs in a defined scope.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FEnable%2520Diagnostic%2520Logging%2FAzure%2520Policy%2FDDoSLogs.json)
+
+## View log data in workbooks
+
+### Azure Sentinel data connector
+
+You can connect logs to Azure Sentinel, view and analyze your data in workbooks, create custom alerts, and incorporate it into investigation processes. To connect to Azure Sentinel, see [Connect to Azure Sentinel](../sentinel/connect-azure-ddos-protection.md).
+
+![Azure Sentinel DDoS Connector](./media/ddos-attack-telemetry/azure-sentinel-ddos.png)
+
+### Azure DDoS Protection Workbook
+
+You can use this Azure Resource Manager (ARM) template to deploy an attack analytics workbook. This workbook allows you to visualize attack data across several filterable panels to easily understand what's at stake.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAzure%2520DDoS%2520Protection%2520Workbook%2FAzureDDoSWorkbook_ARM.json)
+
+![DDoS Protection Workbook](./media/ddos-attack-telemetry/ddos-attack-analytics-workbook.png)
+
+## Next steps
+
+In this tutorial, you learned how to:
+
+- Configure DDoS diagnostic logs, including notifications, mitigation reports and mitigation flow logs.
+- Enable diagnostic logging on all public IPs in a defined scope.
+- View log data in workbooks.
+
+To learn how to configure attack alerts, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [View and configure DDoS protection alerts](alerts.md)
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/manage-ddos-protection-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection-cli.md
@@ -129,4 +129,4 @@ If you want to delete a DDoS protection plan, you must first dissociate all virt
To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials. > [!div class="nextstepaction"]
-> [View and configure DDoS protection telemetry](telemetry-monitoring-alerting.md)
\ No newline at end of file
+> [View and configure DDoS protection telemetry](telemetry.md)
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/manage-ddos-protection-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection-powershell.md
@@ -109,4 +109,4 @@ If you want to delete a DDoS protection plan, you must first dissociate all virt
To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials. > [!div class="nextstepaction"]
-> [View and configure DDoS protection telemetry](telemetry-monitoring-alerting.md)
\ No newline at end of file
+> [View and configure DDoS protection telemetry](telemetry.md)
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/manage-ddos-protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection.md
@@ -72,6 +72,10 @@ You cannot move a virtual network to another resource group or subscription when
4. Select **DDoS protection**, under **SETTINGS**. 5. Select **Standard**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
+### Enable DDoS protection for all virtual networks
+
+This [policy](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Policy%20-%20Virtual%20Networks%20should%20be%20associated%20with%20an%20Azure%20DDoS%20Protection%20Standard%20plan) will detect any virtual networks in a defined scope that do not have DDoS Protection Standard enabled, then optionally create a remediation task that will create the association to protect the VNet. For detailed step-by-step instructions on how to deploy this policy, see https://aka.ms/ddosvnetpolicy-techcommunity.
+ ## Validate and test First, check the details of your DDoS protection plan:
@@ -109,4 +113,4 @@ If you want to delete a DDoS protection plan, you must first dissociate all virt
To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials. > [!div class="nextstepaction"]
-> [View and configure DDoS protection telemetry](telemetry-monitoring-alerting.md)
+> [View and configure DDoS protection telemetry](telemetry.md)
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/manage-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-permissions.md
@@ -47,4 +47,4 @@ For customers who have various subscriptions, and who want to ensure a single pl
To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials. > [!div class="nextstepaction"]
-> [View and configure DDoS protection telemetry](telemetry-monitoring-alerting.md)
\ No newline at end of file
+> [View and configure DDoS protection telemetry](telemetry.md)
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/reports-and-flow-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/reports-and-flow-logs.md deleted file mode 100644
@@ -1,125 +0,0 @@
-title: Azure DDoS Protection Standard reports and flow logs
-description: Learn how to configure reports and flow logs.
-services: ddos-protection
-documentationcenter: na
-author: yitoh
-ms.service: ddos-protection
-ms.devlang: na
-ms.topic: article
-ms.tgt_pltfrm: na
-ms.workload: infrastructure-services
-ms.date: 09/08/2020
-ms.author: yitoh
--
-# Configure DDoS attack mitigation reports and flow logs
-
-Azure DDoS Protection standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Azure Sentinel](../sentinel/connect-azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
-
-In this tutorial, you'll learn how to:
-
-> [!div class="checklist"]
-> * View and configure DDoS attack mitigation reports
-> * View and configure DDoS attack mitigation flow logs
-
-## Prerequisites
-
-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md).
-
-## View and configure DDoS attack mitigation reports
-
-Attack mitigation reports uses the Netflow protocol data which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, the report generation will start as soon as the mitigation starts. There will be an incremental report generated every 5 mins and a post-mitigation report for the whole mitigation period. This is to ensure that in an event the DDoS attack continues for a longer duration of time, you will be able to view the most current snapshot of mitigation report every 5 minutes and a complete summary once the attack mitigation is over.
-
-1. Select **All services** on the top, left of the portal.
-2. Enter *Monitor* in the **Filter** box. When **Monitor** appears in the results, select it.
-3. Under **SETTINGS**, select **Diagnostic Settings**.
-4. Select the **Subscription** and **Resource group** that contain the public IP address you want to log.
-5. Select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for.
-6. Select **Turn on diagnostics to collect the DDoSMitigationReports log** and then select as many of the following options as you require:
-
- - **Archive to a storage account**: Data is written to an Azure Storage account. To learn more about this option, see [Archive resource logs](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage).
- - **Stream to an event hub**: Allows a log receiver to pick up logs using an Azure Event Hub. Event hubs enable integration with Splunk or other SIEM systems. To learn more about this option, see [Stream resource logs to an event hub](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
- - **Send to Log Analytics**: Writes logs to the Azure Monitor service. To learn more about this option, see [Collect logs for use in Azure Monitor logs](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-log-analytics-workspace).
-
-Both the incremental & post-attack mitigation reports include the following fields
-- Attack vectors
-- Traffic statistics
-- Reason for dropped packets
-- Protocols involved
-- Top 10 source countries or regions
-- Top 10 source ASNs
-
-## View and configure DDoS attack mitigation flow logs
-Attack mitigation flow logs allow you to review the dropped traffic, forwarded traffic and other interesting datapoints during an active DDoS attack in near-real time. You can ingest the constant stream of this data into Azure Sentinel or to your third-party SIEM systems via event hub for near-real time monitoring, take potential actions and address the need of your defense operations.
-
-1. Select **All services** on the top, left of the portal.
-2. Enter *Monitor* in the **Filter** box. When **Monitor** appears in the results, select it.
-3. Under **SETTINGS**, select **Diagnostic Settings**.
-4. Select the **Subscription** and **Resource group** that contain the public IP address you want to log.
-5. Select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for.
-6. Select **Turn on diagnostics to collect the DDoSMitigationFlowLogs log** and then select as many of the following options as you require:
-
- - **Archive to a storage account**: Data is written to an Azure Storage account. To learn more about this option, see [Archive resource logs](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage).
- - **Stream to an event hub**: Allows a log receiver to pick up logs using an Azure Event Hub. Event hubs enable integration with Splunk or other SIEM systems. To learn more about this option, see [Stream resource logs to an event hub](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
- - **Send to Log Analytics**: Writes logs to the Azure Monitor service. To learn more about this option, see [Collect logs for use in Azure Monitor logs](../azure-monitor/platform/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-log-analytics-workspace).
-
-This [template](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Enable%20Diagnostic%20Logging/Azure%20Policy) creates an Azure Policy definition to enable diagnostic logging.
-
-### Azure Sentinel data connector
-
-You can connect attack mitigation flow logs to Azure Sentinel, view and analyze your data in workbooks, create custom alerts, and incorporate it into investigation processes. To connect to Azure Sentinel, see [Connect to Azure Sentinel](../sentinel/connect-azure-ddos-protection.md).
-
-![Azure Sentinel DDoS Connector](./media/ddos-attack-telemetry/azure-sentinel-ddos.png)
-
-### Azure DDoS Protection Workbook
-
-You can use this Azure Resource Manager (ARM) template to deploy an attack analytics workbook. This workbook allows you to visualize attack data across several filterable panels to easily understand what's at stake. When deploying this ARM Template, you'll be required to fill in the following:
-
-* Workspace Name
-* Workspace ResourceGroup
-* Workspace Subscription ID
-
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAzure%2520DDoS%2520Protection%2520Workbook%2FAzureDDoSWorkbook_ARM.json)
-
-The flow logs will have the following fields:
-- Source IP
-- Destination IP
-- Source Port
-- Destination port
-- Protocol type
-- Action taken during mitigation
-
-![DDoS Protection Workbook](./media/ddos-attack-telemetry/ddos-attack-analytics-workbook.png)
-
-Attack analytics will only work if DDoS Protection Standard is enabled on the virtual network of the public IP address.
-
-## Sample log outputs
-
-The following screenshots are example log outputs:
-
-### DDoSMitigationFlowLogs
-
-![DDoS Protection DDoSMitigationFlowLogs](./media/ddos-attack-telemetry/ddos-mitigation-flow-logs.png)
-
-### DDoSProtectionNotifications
-
-![DDoS Protection DDoSProtectionNotifications](./media/ddos-attack-telemetry/ddos-protection-notifications.png)
-
-### DDoSMitigationReports
-
-![DDoS Protection DDoSMitigationReports](./media/ddos-attack-telemetry/ddos-mitigation-reports.png)
-
-## Next steps
-
-In this tutorial, you learned how to:
-
-- View and configure DDoS attack mitigation reports
-- View and configure DDoS attack mitigation flow logs
-
-To learn how to test and simulate a DDoS attack, see the simulation testing guide:
-
-> [!div class="nextstepaction"]
-> [Test through simulations](test-through-simulations.md)
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/telemetry-monitoring-alerting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/telemetry-monitoring-alerting.md deleted file mode 100644
@@ -1,116 +0,0 @@
-title: View and configure DDoS protection telemetry
-description: Learn how to view and configure DDoS protection telemetry.
-services: ddos-protection
-documentationcenter: na
-author: yitoh
-ms.service: ddos-protection
-ms.devlang: na
-ms.topic: article
-ms.tgt_pltfrm: na
-ms.workload: infrastructure-services
-ms.date: 09/08/2020
-ms.author: yitoh
-
-# View and configure DDoS protection telemetry
-
-Azure DDoS Protection standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Azure Sentinel](../sentinel/connect-azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
-
-In this tutorial, you'll learn how to:
-
-> [!div class="checklist"]
-> * Configure alerts for DDoS protection metrics
-> * Use DDoS protection telemetry
-> * View DDoS mitigation policies
-> * View DDoS protection alerts in Azure Security Center
-
-## Prerequisites
-
-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md).
-
-## Configure alerts for DDoS protection metrics
-
-You can select any of the available DDoS protection metrics to alert you when there's an active mitigation during an attack, using the Azure Monitor alert configuration. When the conditions are met, the address specified receives an alert email:
-
-1. Select **All services** on the top, left of the portal.
-2. Enter *Monitor* in the **Filter** box. When **Monitor** appears in the results, select it.
-3. Select **Metrics** under **SHARED SERVICES**.
-4. Enter, or select your own values, or enter the following example values, accept the remaining defaults, and then select **OK**:
-
- |Setting |Value |
- |--------- |--------- |
- |Name | Enter _MyDdosAlert_. |
- |Subscription | Select the subscription that contains the public IP address you want to receive alerts for. |
- |Resource group | Select the resource group that contains the public IP address you want to receive alerts for. |
- |Resource | Select the public IP address that contains the public IP address you want to receive alerts for. DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments and Azure VPN Gateway. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. |
- |Metric | Select **Under DDoS attack or not**. |
- |Threshold | 1 - **1** means you are under attack. **0** means you are not under attack. |
- |Period | Select whatever value you choose. |
- |Notify via Email | Check the checkbox. |
- |Additional administrator | Enter your email address if you're not an email owner, contributor, or reader for the subscription. |
-
- Within a few minutes of attack detection, you receive an email from Azure Monitor metrics that looks similar to the following picture:
-
- ![Attack alert](./media/manage-ddos-protection/ddos-alert.png)
--
-To simulate a DDoS attack to validate your alert, see [Validate DDoS detection](test-through-simulations.md).
-
-You can also learn more about [configuring webhooks](../azure-monitor/platform/alerts-webhooks.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [logic apps](../logic-apps/logic-apps-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for creating alerts.
-
-## Use DDoS protection telemetry
-
-Telemetry for an attack is provided through Azure Monitor in real time. The telemetry is available only for the duration that a public IP address is under mitigation. You don't see telemetry before or after an attack is mitigated.
-
-1. Select **All services** on the top, left of the portal.
-2. Enter *Monitor* in the **Filter** box. When **Monitor** appears in the results, select it.
-3. Select **Metrics**, under **SHARED SERVICES**.
-4. Select the **Subscription** and **Resource group** that contain the public IP address that you want telemetry for.
-5. Select **Public IP Address** for **Resource type**, then select the specific public IP address you want telemetry for.
-6. A series of **Available Metrics** appear on the left side of the screen. These metrics, when selected, are graphed in the **Azure Monitor Metrics Chart** on the overview screen.
-7. Select the **aggregation** type as **Max**
-
-The metric names present different packet types, and bytes vs. packets, with a basic construct of tag names on each metric as follows:
-
-- **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.
-- **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP (traffic that was not filtered).
-- **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system, representing the sum of the packets dropped and forwarded.
-
-This [Azure Monitor alert rule](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Monitor%20Alert%20-%20DDoS%20Mitigation%20Started) will run a simple query to detect when an active DDoS mitigation is occurring. To simulate a DDoS attack to validate telemetry, see [Validate DDoS detection](test-through-simulations.md).
-
-## View DDoS mitigation policies
-
-DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP & UDP) for each public IP address of the protected resource, in the virtual network that has DDoS enabled. You can view the policy thresholds by selecting the **Inbound TCP packets to trigger DDoS mitigation** and **Inbound UDP packets to trigger DDoS mitigation** metrics with **aggregation** type as 'Max', as shown in the following picture:
-
-![View mitigation policies](./media/manage-ddos-protection/view-mitigation-policies.png)
-
-Policy thresholds are auto-configured via Azure machine learning-based network traffic profiling. Only when the policy threshold is breached does DDoS mitigation occur for the IP address under attack.
-
-## View DDoS protection alerts in Azure Security Center
-
-Azure Security Center provides a list of [security alerts](../security-center/security-center-managing-and-responding-alerts.md), with information to help investigate and remediate problems. With this feature, you get a unified view of alerts, including DDoS attack-related alerts and the actions taken to mitigate the attack in near-time.
-There are two specific alerts that you will see for any DDoS attack detection and mitigation:
-
-- **DDoS Attack detected for Public IP**: This alert is generated when the DDoS protection service detects that one of your public IP addresses is the target of a DDoS attack.
-- **DDoS Attack mitigated for Public IP**: This alert is generated when an attack on the public IP address has been mitigated.
-To view the alerts, open **Security Center** in the Azure portal. Under **Threat Protection**, select **Security alerts**. The following screenshot shows an example of the DDoS attack alerts.
-
-![DDoS Alert in Azure Security Center](./media/manage-ddos-protection/ddos-alert-asc.png)
-
-The alerts include general information about the public IP address that's under attack, geo and threat intelligence information, and remediation steps.
-
-## Next steps
-
-In this tutorial, you learned how to:
-
-- Configure alerts for DDoS protection metrics
-- Use DDoS protection telemetry
-- View DDoS mitigation policies
-- View DDoS protection alerts in Azure Security Center
-
-To learn how to configure attack mitigation reports and flow logs, continue to the next tutorial.
-
-> [!div class="nextstepaction"]
-> [Configure DDoS attack mitigation reports and flow logs](reports-and-flow-logs.md)
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/telemetry.md new file mode 100644
@@ -0,0 +1,111 @@
+---
+title: View and configure DDoS protection telemetry for Azure DDoS Protection Standard
+description: Learn how to view and configure DDoS protection telemetry for Azure DDoS Protection Standard.
+services: ddos-protection
+documentationcenter: na
+author: yitoh
+ms.service: ddos-protection
+ms.devlang: na
+ms.topic: article
+ms.tgt_pltfrm: na
+ms.workload: infrastructure-services
+ms.date: 12/28/2020
+ms.author: yitoh
+
+---
+# View and configure DDoS protection telemetry
+
+Azure DDoS Protection standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Azure Sentinel](../sentinel/connect-azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * View DDoS protection telemetry
+> * View DDoS mitigation policies
+> * Validate and test DDoS protection telemetry
+
+## Metrics
+
+> [!NOTE]
+> While multiple options for **Aggregation** are displayed on Azure portal, only the aggregation types listed in the table below are supported for each metric. We apologize for this confusion and we are working to resolve it.
+
+The following [metrics](https://docs.microsoft.com/azure/azure-monitor/platform/metrics-supported#microsoftnetworkpublicipaddresses) are available for Azure DDoS Protection Standard. These metrics are also exportable via diagnostic settings (see [View and configure DDoS diagnostic logging](diagnostic-logging.md)).
++
+| Metric | Metric Display Name | Unit | Aggregation Type | Description |
+| --- | --- | --- | --- | --- |
+| ByteCount | Byte Count | Count | Total | Total number of Bytes transmitted within time period |
+| BytesDroppedDDoS | Inbound bytes dropped DDoS | BytesPerSecond | Maximum | Inbound bytes dropped DDoS |
+| BytesForwardedDDoS | Inbound bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound bytes forwarded DDoS |
+| BytesInDDoS | Inbound bytes DDoS | BytesPerSecond | Maximum | Inbound bytes DDoS |
+| DDoSTriggerSYNPackets | Inbound SYN packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound SYN packets to trigger DDoS mitigation |
+| DDoSTriggerTCPPackets | Inbound TCP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound TCP packets to trigger DDoS mitigation |
+| DDoSTriggerUDPPackets | Inbound UDP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound UDP packets to trigger DDoS mitigation |
+| IfUnderDDoSAttack | Under DDoS attack or not | Count | Maximum | Under DDoS attack or not |
+| PacketCount | Packet Count | Count | Total | Total number of Packets transmitted within time period |
+| PacketsDroppedDDoS | Inbound packets dropped DDoS | CountPerSecond | Maximum | Inbound packets dropped DDoS |
+| PacketsForwardedDDoS | Inbound packets forwarded DDoS | CountPerSecond | Maximum | Inbound packets forwarded DDoS |
+| PacketsInDDoS | Inbound packets DDoS | CountPerSecond | Maximum | Inbound packets DDoS |
+| SynCount | SYN Count | Count | Total | Total number of SYN Packets transmitted within time period |
+| TCPBytesDroppedDDoS | Inbound TCP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound TCP bytes dropped DDoS |
+| TCPBytesForwardedDDoS | Inbound TCP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound TCP bytes forwarded DDoS |
+| TCPBytesInDDoS | Inbound TCP bytes DDoS | BytesPerSecond | Maximum | Inbound TCP bytes DDoS |
+| TCPPacketsDroppedDDoS | Inbound TCP packets dropped DDoS | CountPerSecond | Maximum | Inbound TCP packets dropped DDoS |
+| TCPPacketsForwardedDDoS | Inbound TCP packets forwarded DDoS | CountPerSecond | Maximum | Inbound TCP packets forwarded DDoS |
+| TCPPacketsInDDoS | Inbound TCP packets DDoS | CountPerSecond | Maximum | Inbound TCP packets DDoS |
+| UDPBytesDroppedDDoS | Inbound UDP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound UDP bytes dropped DDoS |
+| UDPBytesForwardedDDoS | Inbound UDP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound UDP bytes forwarded DDoS |
+| UDPBytesInDDoS | Inbound UDP bytes DDoS | BytesPerSecond | Maximum | Inbound UDP bytes DDoS |
+| UDPPacketsDroppedDDoS | Inbound UDP packets dropped DDoS | CountPerSecond | Maximum | Inbound UDP packets dropped DDoS |
+| UDPPacketsForwardedDDoS | Inbound UDP packets forwarded DDoS | CountPerSecond | Maximum | Inbound UDP packets forwarded DDoS |
+| UDPPacketsInDDoS | Inbound UDP packets DDoS | CountPerSecond | Maximum | Inbound UDP packets DDoS |
+| VipAvailability | Data Path Availability | Count | Average | Average IP Address availability per time duration |
+
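+To confirm which of these metrics a given public IP address emits, you can list its metric definitions from the Azure CLI. This is only a sketch; the resource ID below is a placeholder for your own public IP address.
+
+```azurecli
+# List metric definitions (names, units, and supported aggregations) for a public IP address.
+az monitor metrics list-definitions \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>" \
+  --output table
+```
+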
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Standard protection plan](manage-ddos-protection.md), and DDoS Protection Standard must be enabled on a virtual network.
+- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments and Azure VPN Gateway. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
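+
+If you don't yet have a resource with a public IP address in the protected virtual network, the Azure CLI sketch below creates a small test VM. It's illustrative only; the resource group, virtual network, subnet, VM name, and image are placeholder values.
+
+```azurecli
+# Create a small Linux VM in the DDoS-protected virtual network; a public IP address is attached by default.
+az vm create --resource-group MyResourceGroup --name MyTestVm \
+  --vnet-name MyVnet --subnet MySubnet \
+  --image UbuntuLTS --size Standard_B1s --generate-ssh-keys
+```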
+
+## View DDoS protection telemetry
+
+Telemetry for an attack is provided through Azure Monitor in real time. The telemetry is available only for the duration that a public IP address is under mitigation. You don't see telemetry before or after an attack is mitigated.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your DDoS Protection Plan.
+2. Under **Monitoring**, select **Metrics**.
+3. Select **Scope**. Select the **Subscription** that contains the public IP address you want to log, select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for, and then select **Apply**.
+4. Select the **Aggregation** type as **Max**.
+
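+If you prefer to pull the same telemetry programmatically, the Azure CLI sketch below queries one of the attack metrics with the same Max aggregation; the public IP resource ID is a placeholder.
+
+```azurecli
+# Retrieve the "Inbound packets dropped DDoS" metric at one-minute granularity.
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>" \
+  --metric PacketsDroppedDDoS --aggregation Maximum --interval PT1M
+```
+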
+The metric names present different packet types, and bytes vs. packets, with a basic construct of tag names on each metric as follows:
+
+- **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.
+- **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP (traffic that was not filtered).
+- **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system, representing the sum of the packets dropped and forwarded.
+
+This [Azure Monitor alert rule](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Monitor%20Alert%20-%20DDoS%20Mitigation%20Started) will run a simple query to detect when an active DDoS mitigation is occurring. To simulate a DDoS attack to validate telemetry, see [Validate DDoS detection](test-through-simulations.md).
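+
+If you'd rather define the alert directly instead of importing that alert rule, a minimal Azure CLI sketch is shown below. The alert name, resource group, public IP resource ID, and action group ID are placeholders.
+
+```azurecli
+# Fire an alert whenever the public IP address is under active DDoS mitigation.
+az monitor metrics alert create --name DdosMitigationStarted --resource-group MyResourceGroup \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>" \
+  --condition "max IfUnderDDoSAttack > 0" \
+  --window-size 5m --evaluation-frequency 1m \
+  --action "<action-group-resource-id>" \
+  --description "Active DDoS mitigation detected"
+```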
+
+## View DDoS mitigation policies
+
+DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP & UDP) for each public IP address of the protected resource, in the virtual network that has DDoS enabled. You can view the policy thresholds by selecting the **Inbound TCP packets to trigger DDoS mitigation** and **Inbound UDP packets to trigger DDoS mitigation** metrics with **aggregation** type as 'Max', as shown in the following picture:
+
+![View mitigation policies](./media/manage-ddos-protection/view-mitigation-policies.png)
+
+Policy thresholds are auto-configured via Azure machine learning-based network traffic profiling. Only when the policy threshold is breached does DDoS mitigation occur for the IP address under attack.
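+
+The same trigger thresholds can be read outside the portal. The sketch below queries the TCP trigger metric; swap in `DDoSTriggerUDPPackets` or `DDoSTriggerSYNPackets` for the other policies. The public IP resource ID is a placeholder.
+
+```azurecli
+# Read the auto-tuned TCP mitigation trigger threshold for a public IP address.
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/publicIPAddresses/<public-ip-name>" \
+  --metric DDoSTriggerTCPPackets --aggregation Maximum --interval PT1M
+```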
+
+## Validate and test
+
+To simulate a DDoS attack to validate DDoS protection telemetry, see [Validate DDoS detection](test-through-simulations.md).
+
+## Next steps
+
+In this tutorial, you learned how to:
+
+- Configure alerts for DDoS protection metrics
+- View DDoS protection telemetry
+- View DDoS mitigation policies
+- Validate and test DDoS protection telemetry
+
+To learn how to configure attack mitigation reports and flow logs, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [View and configure DDoS diagnostic logging](diagnostic-logging.md)
\ No newline at end of file
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/test-through-simulations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/test-through-simulations.md
@@ -57,8 +57,12 @@ Once the resource is under attack, you should see that the value changes from **
![DDoS Attack Simulation Example: Portal](./media/ddos-attack-simulation/ddos-attack-simulation-example-2.png)
+### BreakingPoint Cloud API Script
+
+This [API script](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Breaking%20Point%20SDK) can be used to automate DDoS testing by running it once or by using cron to schedule regular tests. It's useful to validate that your logging is configured properly and that your detection and response procedures are effective. The scripts require a Linux OS (tested with Ubuntu 18.04 LTS) and Python 3. Install the prerequisites and the API client by using the included script or by following the documentation on the [BreakingPoint Cloud](http://breakingpoint.cloud/) website.
+
## Next steps

-- Learn how to [view and configure DDoS protection telemetry](telemetry-monitoring-alerting.md).
-- Learn how to [configure DDoS attack mitigation reports and flow logs](reports-and-flow-logs.md).
-- Learn how to [engage DDoS rapid response](ddos-rapid-response.md).
\ No newline at end of file
+- Learn how to [view and configure DDoS protection telemetry](telemetry.md).
+- Learn how to [view and configure DDoS diagnostic logging](diagnostic-logging.md).
+- Learn how to [engage DDoS rapid response](ddos-rapid-response.md).
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-move https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-move.md
@@ -1,37 +1,37 @@
--- title: 'ExpressRoute: Move circuits from classic to Azure Resource Manager'
-description: Learn about what it means to move an Azure ExpressRoute circuit from the classic to the Azure Resource Manager deployment model.
+description: Learn about what happens when you move an Azure ExpressRoute circuit from the classic to the Azure Resource Manager deployment model.
services: expressroute author: duongau ms.service: expressroute ms.topic: how-to
-ms.date: 12/07/2018
+ms.date: 12/15/2020
ms.author: duau --- # Moving ExpressRoute circuits from the classic to the Resource Manager deployment model
-This article provides an overview of what it means to move an Azure ExpressRoute circuit from the classic to the Azure Resource Manager deployment model.
+This article provides an overview of what happens when you move an Azure ExpressRoute circuit from the classic to the Azure Resource Manager deployment model.
-You can use a single ExpressRoute circuit to connect to virtual networks that are deployed both in the classic and the Resource Manager deployment models. An ExpressRoute circuit, regardless of how it is created, can now link to virtual networks across both deployment models.
+You can use a single ExpressRoute circuit to connect virtual networks that are deployed both in the classic and the Resource Manager deployment models.
![An ExpressRoute circuit that links to virtual networks across both deployment models](./media/expressroute-move/expressroute-move-1.png) ## ExpressRoute circuits that are created in the classic deployment model
-ExpressRoute circuits that are created in the classic deployment model need to be moved to the Resource Manager deployment model first to enable connectivity to both the classic and the Resource Manager deployment models. There isn't connectivity loss or disruption when a connection is being moved. All circuit-to-virtual network links in the classic deployment model (within the same subscription and cross-subscription) are preserved.
+ExpressRoute circuits created in the classic deployment model need to be migrated to the Resource Manager deployment model first. Only then can they connect to both the classic and the Resource Manager deployment models. Connectivity isn't lost or disrupted when a connection is being moved. All circuit-to-virtual network links in the classic deployment model, within the same subscription and cross-subscription, are preserved.
-After the move is completed successfully, the ExpressRoute circuit looks, performs, and feels exactly like an ExpressRoute circuit that was created in the Resource Manager deployment model. You can now create connections to virtual networks in the Resource Manager deployment model.
+After the move has successfully completed, the ExpressRoute circuit will behave exactly like an ExpressRoute circuit that was created in the Resource Manager deployment model. You can now create connections to virtual networks in the Resource Manager deployment model.
-After an ExpressRoute circuit has been moved to the Resource Manager deployment model, you can manage the life cycle of the ExpressRoute circuit only by using the Resource Manager deployment model. This means that you can perform operations like adding/updating/deleting peerings, updating circuit properties (such as bandwidth, SKU, and billing type), and deleting circuits only in the Resource Manager deployment model. Refer to the section below on circuits that are created in the Resource Manager deployment model for further details on how you can manage access to both deployment models.
+Once you've moved the ExpressRoute circuit to the Resource Manager deployment model, you can only manage it in the Resource Manager deployment model. Operations for managing peerings, updating circuit properties, and deleting circuits will only be available through the Resource Manager deployment model. Refer to the following section for further details on how you can manage access to both deployment models.
-You do not have to involve your connectivity provider to perform the move.
+You don't have to involve your connectivity provider to move your circuit to the Resource Manager deployment model.
## ExpressRoute circuits that are created in the Resource Manager deployment model
-You can enable ExpressRoute circuits that are created in the Resource Manager deployment model to be accessible from both deployment models. Any ExpressRoute circuit in your subscription can be enabled to be accessed from both deployment models.
+You can enable ExpressRoute circuits that are created in the Resource Manager deployment model to be accessible from both deployment models. Any ExpressRoute circuit in your subscription can be configured to have access from both deployment models.
-* ExpressRoute circuits that were created in the Resource Manager deployment model do not have access to the classic deployment model by default.
-* ExpressRoute circuits that have been moved from the classic deployment model to the Resource manager deployment model are accessible from both deployment models by default.
-* An ExpressRoute circuit always has access to the Resource Manager deployment model, regardless of whether it was created in the Resource Manager or classic deployment model. This means that you can create connections to virtual networks created in the Resource Manager deployment model by following instructions on [how to link virtual networks](expressroute-howto-linkvnet-arm.md).
+* ExpressRoute circuits that were created in the Resource Manager deployment model don't have access to the classic deployment model by default.
+* ExpressRoute circuits that have been moved from the classic deployment model to the Resource Manager deployment model are accessible from both deployment models by default.
+* An ExpressRoute circuit always has access to the Resource Manager deployment model, whether it was created in the Resource Manager or classic deployment model. You can create connections to virtual networks by following instructions on [how to link virtual networks](expressroute-howto-linkvnet-arm.md).
* Access to the classic deployment model is controlled by the **allowClassicOperations** parameter in the ExpressRoute circuit. > [!IMPORTANT]
@@ -40,11 +40,13 @@ You can enable ExpressRoute circuits that are created in the Resource Manager de
> ## Controlling access to the classic deployment model
-You can enable a single ExpressRoute circuit to link to virtual networks in both deployment models by setting the **allowClassicOperations** parameter of the ExpressRoute circuit.
+You can enable an ExpressRoute circuit to link to virtual networks in both deployment models. To do so, set the **allowClassicOperations** parameter on the ExpressRoute circuit.
-Setting **allowClassicOperations** to TRUE enables you to link virtual networks from both deployment models to the ExpressRoute circuit. You can link to virtual networks in the classic deployment model by following guidance on [how to link virtual networks in the classic deployment model](expressroute-howto-linkvnet-classic.md). You can link to virtual networks in the Resource Manager deployment model by following guidance on [how to link virtual networks in the Resource Manager deployment model](expressroute-howto-linkvnet-arm.md).
+Setting **allowClassicOperations** to TRUE enables you to link virtual networks from both deployment models to the ExpressRoute circuit.
+* To link virtual networks in the classic deployment model, see [how to link virtual networks for classic deployment model](expressroute-howto-linkvnet-classic.md).
+* To link virtual networks in the Resource Manager deployment model, see [how to link virtual networks in the Resource Manager deployment model](expressroute-howto-linkvnet-arm.md).
-Setting **allowClassicOperations** to FALSE blocks access to the circuit from the classic deployment model. However, all virtual network links in the classic deployment model are preserved. In this case, the ExpressRoute circuit is not visible in the classic deployment model.
+Setting **allowClassicOperations** to FALSE blocks access to the circuit from the classic deployment model. However, all virtual network links created in the classic deployment model are still preserved. The ExpressRoute circuit isn't visible in the classic deployment model.
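+
+As an illustration only, one way to toggle the flag from the Azure CLI is sketched below; the circuit and resource group names are placeholders.
+
+```azurecli
+# Allow (or, with false, block) classic operations on an existing circuit.
+az network express-route update --resource-group MyResourceGroup --name MyCircuit \
+  --allow-classic-operations true
+```
+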
## Supported operations in the classic deployment model

The following classic operations are supported on an ExpressRoute circuit when **allowClassicOperations** is set to TRUE:
@@ -53,15 +55,15 @@ The following classic operations are supported on an ExpressRoute circuit when *
* Create/update/get/delete virtual network links to classic virtual networks * Create/update/get/delete virtual network link authorizations for cross-subscription connectivity
-However, when **allowClassicOperations** is set to TRUE, you cannot perform the following classic operations:
+However, when **allowClassicOperations** is set to TRUE, you can't execute the following classic operations:
* Create/update/get/delete Border Gateway Protocol (BGP) peerings for Azure private, Azure public, and Microsoft peerings * Delete ExpressRoute circuits ## Communication between the classic and the Resource Manager deployment models
-The ExpressRoute circuit acts like a bridge between the classic and the Resource Manager deployment models. Traffic between virtual machines in virtual networks in the classic deployment model and those in virtual networks in the Resource Manager deployment model flows through ExpressRoute if both virtual networks are linked to the same ExpressRoute circuit.
+The ExpressRoute circuit acts like a bridge between the classic and the Resource Manager deployment models. Traffic between virtual networks for both deployment models can pass through the ExpressRoute circuit if both virtual networks are linked to the same circuit.
-Aggregate throughput is limited by the throughput capacity of the virtual network gateway. Traffic does not enter the connectivity provider's networks or your networks in such cases. Traffic flow between the virtual networks is fully contained within the Microsoft network.
+Aggregate throughput is limited by the throughput capacity of the virtual network gateway. Traffic doesn't enter the connectivity provider's networks or your networks in such cases. Traffic flow between the virtual networks is fully contained within the Microsoft network.
## Access to Azure public and Microsoft peering resources You can continue to access resources that are typically accessible through Azure public peering and Microsoft peering without any disruption.
@@ -70,10 +72,10 @@ You can continue to access resources that are typically accessible through Azure
This section describes what's supported for ExpressRoute circuits: * You can use a single ExpressRoute circuit to access virtual networks that are deployed in the classic and the Resource Manager deployment models.
-* You can move an ExpressRoute circuit from the classic to the Resource Manager deployment model. After it is moved, the ExpressRoute circuit looks, feels, and performs like any other ExpressRoute circuit that is created in the Resource Manager deployment model.
+* You can move an ExpressRoute circuit from the classic to the Resource Manager deployment model. Once moved, the ExpressRoute circuit will continue to operate like any other ExpressRoute circuit that is created in the Resource Manager deployment model.
* You can move only the ExpressRoute circuit. Circuit links, virtual networks, and VPN gateways cannot be moved through this operation.
-* After an ExpressRoute circuit has been moved to the Resource Manager deployment model, you can manage the life cycle of the ExpressRoute circuit only by using the Resource Manager deployment model. This means that you can perform operations like adding/updating/deleting peerings, updating circuit properties (such as bandwidth, SKU, and billing type), and deleting circuits only in the Resource Manager deployment model.
-* The ExpressRoute circuit acts like a bridge between the classic and the Resource Manager deployment models. Traffic between virtual machines in virtual networks in the classic deployment model and those in virtual networks in the Resource Manager deployment model flows through ExpressRoute if both virtual networks are linked to the same ExpressRoute circuit.
+* After an ExpressRoute circuit has been moved to the Resource Manager deployment model, you can manage the life cycle of the ExpressRoute circuit only by using the Resource Manager deployment model. This means that you can run operations like adding/updating/deleting peerings, updating circuit properties (such as bandwidth, SKU, and billing type), and deleting circuits only in the Resource Manager deployment model.
+* The ExpressRoute circuit acts like a bridge between the classic and the Resource Manager deployment models. Virtual machines in classic virtual networks and virtual machines in Resource Manager virtual networks can communicate through ExpressRoute if both virtual networks are linked to the same ExpressRoute circuit.
* Cross-subscription connectivity is supported in both the classic and the Resource Manager deployment models. * After you move an ExpressRoute circuit from the classic model to the Azure Resource Manager model, you can [migrate the virtual networks linked to the ExpressRoute circuit](expressroute-migration-classic-resource-manager.md).
@@ -81,7 +83,7 @@ This section describes what's supported for ExpressRoute circuits:
This section describes what's not supported for ExpressRoute circuits: * Managing the life cycle of an ExpressRoute circuit from the classic deployment model.
-* Azure role-based access control (Azure RBAC) support for the classic deployment model. You cannot perform Azure RBAC controls to a circuit in the classic deployment model. Any administrator/coadministrator of the subscription can link or unlink virtual networks to the circuit.
+* Azure role-based access control (Azure RBAC) support for the classic deployment model. You can't apply Azure RBAC controls to a circuit in the classic deployment model. Any administrator/coadministrator of the subscription can link or unlink virtual networks to the circuit.
## Configuration Follow the instructions that are described in [Move an ExpressRoute circuit from the classic to the Resource Manager deployment model](expressroute-howto-move-arm.md).
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
@@ -90,7 +90,7 @@ The systems listed in the following table are considered compatible with Azure I
<sup>1</sup> Debian 10 systems, including Raspberry Pi OS Buster, use a version of OpenSSL that IoT Edge doesn't support. Use the following command to install an earlier version before installing IoT Edge:

```bash
-sudo apt-get install libssl1.0.2
+sudo apt-get install libssl1.1
```

<sup>2</sup> The Debian 9 packages from the [Azure IoT Edge releases repo](https://github.com/Azure/azure-iotedge/releases) should work out of the box with Ubuntu 20.04.
@@ -141,4 +141,4 @@ Experience while prototyping will help guide your final device selection. Questi
* How much data will your modules be processing? * Do your modules need any specialized hardware for accelerating their workloads? * What are the desired performance characteristics of your solution?
-* What is your hardware budget?
\ No newline at end of file
+* What is your hardware budget?
key-vault https://docs.microsoft.com/en-us/azure/key-vault/secrets/tutorial-rotation-dual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/tutorial-rotation-dual.md
@@ -35,14 +35,15 @@ In this solution, Azure Key Vault stores storage account individual access keys
## Prerequisites * An Azure subscription. [Create one for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* Azure [Cloud Shell](https://shell.azure.com/). This tutorial uses Cloud Shell in the Azure portal with the PowerShell environment.
* Azure Key Vault. * Two Azure storage accounts. You can use this deployment link if you don't have an existing key vault and existing storage accounts:
-[![Link that's labelled Deploy to Azure.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjlichwa%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2Farm-templates%2FInitial-Setup%2Fazuredeploy.json)
+[![Link that's labelled Deploy to Azure.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FInitial-Setup%2Fazuredeploy.json)
-1. Under **Resource group**, select **Create new**. Name the group **akvrotation** and then select **OK**.
+1. Under **Resource group**, select **Create new**. Name the group **vaultrotation** and then select **OK**.
1. Select **Review + create**. 1. Select **Create**.
@@ -51,7 +52,7 @@ You can use this deployment link if you don't have an existing key vault and exi
You'll now have a key vault and two storage accounts. You can verify this setup in the Azure CLI by running this command: ```azurecli
-az resource list -o table -g akvrotation
+az resource list -o table -g vaultrotation
``` The result will look something like this output:
@@ -59,9 +60,9 @@ The result will look something like this output:
```console Name ResourceGroup Location Type Status ----------------------- -------------------- ---------- --------------------------------- --------
-akvrotation-kv akvrotation eastus Microsoft.KeyVault/vaults
-akvrotationstorage akvrotation eastus Microsoft.Storage/storageAccounts
-akvrotationstorage2 akvrotation eastus Microsoft.Storage/storageAccounts
+vaultrotation-kv vaultrotation westus Microsoft.KeyVault/vaults
+vaultrotationstorage vaultrotation westus Microsoft.Storage/storageAccounts
+vaultrotationstorage2 vaultrotation westus Microsoft.Storage/storageAccounts
``` ## Create and deploy the key rotation function
@@ -78,72 +79,79 @@ The function app rotation function requires the following components and configu
1. Select the Azure template deployment link:
- [![Azure template deployment link.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjlichwa%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2Farm-templates%2FFunction%2Fazuredeploy.json)
+ [![Azure template deployment link.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FFunction%2Fazuredeploy.json)
-1. In the **Resource group** list, select **akvrotation**.
+1. In the **Resource group** list, select **vaultrotation**.
1. In the **Storage Account RG** box, enter the name of the resource group in which your storage account is located. Keep the default value **[resourceGroup().name]** if your storage account is already located in the same resource group where you'll deploy the key rotation function.
-1. In the **Storage Account Name** box, enter the name of the storage account that contains the access keys to rotate.
+1. In the **Storage Account Name** box, enter the name of the storage account that contains the access keys to rotate. Keep the default value **[concat(resourceGroup().name, 'storage')]** if you use the storage account created in [Prerequisites](#prerequisites).
1. In the **Key Vault RG** box, enter the name of resource group in which your key vault is located. Keep the default value **[resourceGroup().name]** if your key vault already exists in the same resource group where you'll deploy the key rotation function.
-1. In the **Key Vault Name** box, enter the name of the key vault.
+1. In the **Key Vault Name** box, enter the name of the key vault. Keep the default value **[concat(resourceGroup().name, '-kv')]** if you use the key vault created in [Prerequisites](#prerequisites).
+1. In the **App Service Plan Type** box, select the hosting plan. **Premium Plan** is needed only when your key vault is behind a firewall.
1. In the **Function App Name** box, enter the name of the function app. 1. In the **Secret Name** box, enter the name of the secret where you'll store access keys.
-1. In the **Repo URL** box, enter the GitHub location of the function code: **https://github.com/jlichwa/KeyVault-Rotation-StorageAccountKey-PowerShell.git**.
+1. In the **Repo URL** box, enter the GitHub location of the function code. In this tutorial, you can use **https://github.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell.git**.
1. Select **Review + create**. 1. Select **Create**.
- ![Screenshot that shows how to create the first storage account.](../media/secrets/rotation-dual/dual-rotation-2.png)
+ ![Screenshot that shows how to create and deploy function.](../media/secrets/rotation-dual/dual-rotation-2.png)
After you complete the preceding steps, you'll have a storage account, a server farm, a function app, and Application Insights. When the deployment is complete, you'll see this page:+ ![Screenshot that shows the Your deployment is complete page.](../media/secrets/rotation-dual/dual-rotation-3.png) > [!NOTE] > If you encounter a failure, you can select **Redeploy** to finish the deployment of the components.
-You can find deployment templates and code for the rotation function on [GitHub](https://github.com/jlichwa/KeyVault-Rotation-StorageAccountKey-PowerShell).
+You can find deployment templates and code for the rotation function in [Azure Samples](https://github.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell).
## Add the storage account access keys to Key Vault
-First, set your access policy to grant **manage secrets** permissions to users:
+First, set your access policy to grant **manage secrets** permissions to your user principal:
```azurecli
-az keyvault set-policy --upn <email-address-of-user> --name akvrotation-kv --secret-permissions set delete get list
+az keyvault set-policy --upn <email-address-of-user> --name vaultrotation-kv --secret-permissions set delete get list
``` You can now create a new secret with a storage account access key as its value. You'll also need the storage account resource ID, secret validity period, and key ID to add to the secret so the rotation function can regenerate the key in the storage account. Determine the storage account resource ID. You can find this value in the `id` property.+ ```azurecli
-az storage account show -n akvrotationstorage
+az storage account show -n vaultrotationstorage
``` List the storage account access keys so you can get the key values: ```azurecli
-az storage account keys list -n akvrotationstorage
+az storage account keys list -n vaultrotationstorage
```
-Run this command, using your retrieved values for `key1Value` and `storageAccountResourceId`:
+Add a secret to the key vault with the expiration date set to tomorrow, a validity period of 60 days, and the storage account resource ID. Run this command, using your retrieved values for `key1Value` and `storageAccountResourceId`:
```azurecli $tomorrowDate = (get-date).AddDays(+1).ToString("yyy-MM-ddTHH:mm:ssZ")
-az keyvault secret set --name storageKey --vault-name akvrotation-kv --value <key1Value> --tags "CredentialId=key1" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate
+az keyvault secret set --name storageKey --vault-name vaultrotation-kv --value <key1Value> --tags "CredentialId=key1" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate
```
-If you create a secret with a short expiration date, a `SecretNearExpiry` event will publish within several minutes. This event will in turn trigger the function to rotate the secret.
+The secret created above will trigger a `SecretNearExpiry` event within several minutes. This event will in turn trigger the function to rotate the secret, with its expiration set to 60 days. In that configuration, the `SecretNearExpiry` event is triggered every 30 days (30 days before expiry), and the rotation function alternates between key1 and key2.
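+
+If you want to confirm the Event Grid subscription that delivers the `SecretNearExpiry` event to the rotation function, a sketch is shown below; the key vault resource ID is a placeholder built from the names used in this tutorial.
+
+```azurecli
+# List event subscriptions on the key vault that feed the rotation function.
+az eventgrid event-subscription list \
+  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/vaultrotation/providers/Microsoft.KeyVault/vaults/vaultrotation-kv" \
+  --output table
+```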
-You can verify that access keys have regenerated by retrieving the storage account key and the Key Vault secret and comparing them.
+You can verify that access keys have regenerated by retrieving the storage account key and the Key Vault secret and comparing them.
Use this command to get the secret information: ```azurecli
-az keyvault secret show --vault-name akvrotation-kv --name storageKey
+az keyvault secret show --vault-name vaultrotation-kv --name storageKey
```+ Notice that `CredentialId` is updated to the alternate `keyName` and that `value` is regenerated:+ ![Screenshot that shows the output of the a z keyvault secret show command for the first storage account.](../media/secrets/rotation-dual/dual-rotation-4.png) Retrieve the access keys to compare the values: ```azurecli
-az storage account keys list -n akvrotationstorage
+az storage account keys list -n vaultrotationstorage
```
+Notice that the `value` of the key is the same as the secret in the key vault:
+ ![Screenshot that shows the output of the a z storage account keys list command for the first storage account.](../media/secrets/rotation-dual/dual-rotation-5.png) ## Add storage accounts for rotation
@@ -156,10 +164,12 @@ To add storage account keys to an existing function for rotation, you need:
1. Select the Azure template deployment link:
- [![Azure template deployment link.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjlichwa%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2Farm-templates%2FAdd-Event-Subscriptions%2Fazuredeploy.json)
+ [![Azure template deployment link.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FAdd-Event-Subscriptions%2Fazuredeploy.json)
-1. In the **Resource group** list, select **akvrotation**.
+1. In the **Resource group** list, select **vaultrotation**.
+1. In the **Storage Account RG** box, enter the name of the resource group in which your storage account is located. Keep the default value **[resourceGroup().name]** if your storage account is already located in the same resource group where you'll deploy the key rotation function.
1. In the **Storage Account Name** box, enter the name of the storage account that contains the access keys to rotate.
+1. In the **Key Vault RG** box, enter the name of the resource group in which your key vault is located. Keep the default value **[resourceGroup().name]** if your key vault already exists in the same resource group where you'll deploy the key rotation function.
1. In the **Key Vault Name** box, enter the name of the key vault. 1. In the **Function App Name** box, enter the name of the function app. 1. In the **Secret Name** box, enter the name of the secret where you'll store access keys.
@@ -172,41 +182,48 @@ To add storage account keys to an existing function for rotation, you need:
Determine the storage account resource ID. You can find this value in the `id` property. ```azurecli
-az storage account show -n akvrotationstorage2
+az storage account show -n vaultrotationstorage2
``` List the storage account access keys so you can get the key2 value: ```azurecli
-az storage account keys list -n akvrotationstorage2
+az storage account keys list -n vaultrotationstorage2
```
-Run this command, using your retrieved values for `key2Value` and `storageAccountResourceId`:
+Add a secret to the key vault with the expiration date set to tomorrow, a validity period of 60 days, and the storage account resource ID. Run this command, using your retrieved values for `key2Value` and `storageAccountResourceId`:
```azurecli
-tomorrowDate=`date -d tomorrow -Iseconds -u | awk -F'+' '{print $1"Z"}'`
-az keyvault secret set --name storageKey2 --vault-name akvrotation-kv --value <key2Value> --tags "CredentialId=key2" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate
+$tomorrowDate = (get-date).AddDays(+1).ToString("yyyy-MM-ddTHH:mm:ssZ")
+az keyvault secret set --name storageKey2 --vault-name vaultrotation-kv --value <key2Value> --tags "CredentialId=key2" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate
``` Use this command to get the secret information: ```azurecli
-az keyvault secret show --vault-name akvrotation-kv --name storageKey2
+az keyvault secret show --vault-name vaultrotation-kv --name storageKey2
```+ Notice that `CredentialId` is updated to the alternate `keyName` and that `value` is regenerated:+ ![Screenshot that shows the output of the a z keyvault secret show command for the second storage account.](../media/secrets/rotation-dual/dual-rotation-8.png) Retrieve the access keys to compare the values: ```azurecli
-az storage account keys list -n akvrotationstorage
+az storage account keys list -n vaultrotationstorage
```+
+Notice that the `value` of the key is the same as the secret in the key vault:
+ ![Screenshot that shows the output of the a z storage account keys list command for the second storage account.](../media/secrets/rotation-dual/dual-rotation-9.png)
-## Key Vault dual credential rotation functions
+## Key Vault rotation functions for two sets of credentials
- [Storage account](https://github.com/jlichwa/KeyVault-Rotation-StorageAccountKey-PowerShell) - [Redis cache](https://github.com/jlichwa/KeyVault-Rotation-RedisCacheKey-PowerShell) ## Next steps+
+- Tutorial: [Secrets rotation for one set of credentials](https://docs.microsoft.com/azure/key-vault/secrets/tutorial-rotation)
- Overview: [Monitoring Key Vault with Azure Event Grid](../general/event-grid-overview.md) - How to: [Create your first function in the Azure portal](../../azure-functions/functions-create-first-azure-function.md) - How to: [Receive email when a Key Vault secret changes](../general/event-grid-logicapps.md)
lab-services https://docs.microsoft.com/en-us/azure/lab-services/connect-virtual-machine-chromebook-remote-desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/connect-virtual-machine-chromebook-remote-desktop.md
@@ -9,22 +9,26 @@ ms.author: nicolela
--- # Connect to a VM using Remote Desktop Protocol on a Chromebook+ This section shows how a student can connect to a classroom lab VM from a Chromebook by using RDP. ## Install Microsoft Remote Desktop on a Chromebook+ 1. Open the App Store on your Chromebook, and search for **Microsoft Remote Desktop**. ![Microsoft Remote Desktop](./media/how-to-use-classroom-lab/install-ms-remote-desktop-chromebook.png)
+
1. Install the latest version of Microsoft Remote Desktop. ## Access the VM from your Chromebook using RDP+ 1. Open the **RDP** file that's downloaded on your computer with **Microsoft Remote Desktop** installed. It should start connecting to the VM. ![Connect to VM](./media/how-to-use-classroom-lab/connect-vm-chromebook.png) 1. When prompted, enter your password.
- ![Screenshot that shows the Logon screen where you enter your username and password.](./media/how-to-use-classroom-lab/password-chromebook.png)
+ ![Screenshot that shows the Logon screen where you enter your username and password.](./media/how-to-use-classroom-lab/password-chromebook.png)
1. Select **Continue** if you receive the following warning.
@@ -33,6 +37,6 @@ This section shows how a student can connect to a classroom lab VM from a Chrome
1. You should see the desktop of the VM that you are connecting to. ## Next steps
-To learn more about connecting to Linux VMs, see [Connect to Linux virtual machines](how-to-use-remote-desktop-linux-student.md)
+To learn more about connecting to Linux VMs, see [Connect to Linux virtual machines](how-to-use-remote-desktop-linux-student.md)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-power-bi-automated-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-power-bi-automated-model.md
@@ -1,7 +1,7 @@
---
-title: "Tutorial: Create the predictive model using automated ML (part 1 of 2)"
+title: "Tutorial: Create the predictive model by using automated ML (part 1 of 2)"
titleSuffix: Azure Machine Learning
-description: Learn how to build and deploy automated ML models, so you can use the best model to predict outcomes in Microsoft Power BI.
+description: Learn how to build and deploy automated machine learning models so you can use the best model to predict outcomes in Microsoft Power BI.
services: machine-learning ms.service: machine-learning ms.subservice: core
@@ -12,123 +12,123 @@ ms.reviewer: sdgilley
ms.date: 12/11/2020 ---
-# Tutorial: Power BI integration - create the predictive model using automated machine learning (part 1 of 2)
+# Tutorial: Power BI integration - Create the predictive model by using automated machine learning (part 1 of 2)
-In the first part of this tutorial, you train and deploy a predictive machine learning model using automated machine learning in the Azure Machine Learning studio. In part 2, you'll then use the best performing model to predict outcomes in Microsoft Power BI.
+In part 1 of this tutorial, you train and deploy a predictive machine learning model. You use automated machine learning (ML) in Azure Machine Learning Studio. In part 2, you'll use the best-performing model to predict outcomes in Microsoft Power BI.
In this tutorial, you: > [!div class="checklist"]
-> * Create an Azure Machine Learning compute cluster
-> * Create a dataset
-> * Create an automated ML run
-> * Deploy the best model to a real-time scoring endpoint
+> * Create an Azure Machine Learning compute cluster.
+> * Create a dataset.
+> * Create an automated machine learning run.
+> * Deploy the best model to a real-time scoring endpoint.
-There are three different ways to create and deploy the model you'll use in Power BI. This article covers Option C: Train and deploy models using automated ML in the studio. This option shows a no-code authoring experience that fully automates the data preparation and model training.
+There are three ways to create and deploy the model you'll use in Power BI. This article covers "Option C: Train and deploy models by using automated machine learning in the studio." This option is a no-code authoring experience. It fully automates data preparation and model training.
-You could instead use:
+But you could instead use one of the other options:
-* [Option A: Train and deploy models using Notebooks](tutorial-power-bi-custom-model.md) - a code-first authoring experience using Jupyter notebooks hosted in Azure Machine Learning studio.
-* [Option B: Train and deploy models using designer](tutorial-power-bi-designer-model.md)- a low-code authoring experience using Designer (a drag-and-drop user interface).
+* [Option A: Train and deploy models by using Jupyter Notebooks](tutorial-power-bi-custom-model.md). This code-first authoring experience uses Jupyter Notebooks that are hosted in Azure Machine Learning Studio.
+* [Option B: Train and deploy models by using the Azure Machine Learning designer](tutorial-power-bi-designer-model.md). This low-code authoring experience uses a drag-and-drop user interface.
## Prerequisites -- An Azure subscription ([a free trial is available](https://aka.ms/AMLFree)). -- An Azure Machine Learning workspace. If you do not already have a workspace, follow [how to create an Azure Machine Learning Workspace](./how-to-manage-workspace.md#create-a-workspace).
+- An Azure subscription. If you don't already have a subscription, you can use a [free trial](https://aka.ms/AMLFree).
+- An Azure Machine Learning workspace. If you don't already have a workspace, see [Create and manage Azure Machine Learning workspaces](./how-to-manage-workspace.md#create-a-workspace).
-## Create compute cluster
+## Create a compute cluster
-Automated ML automatically trains lots of different machine learning models to find the "best" algorithm and parameters. Azure Machine Learning parallelizes the execution of the model training over a compute cluster.
+Automated machine learning trains many machine learning models to find the "best" algorithm and parameters. Azure Machine Learning parallelizes the running of the model training over a compute cluster.
-In the [Azure Machine Learning Studio](https://ml.azure.com), select **Compute** in the left-hand menu followed by **Compute Clusters** tab. Select **New**:
+To begin, in [Azure Machine Learning Studio](https://ml.azure.com), in the menu on the left, select **Compute**. Open the **Compute clusters** tab. Then select **New**:
-:::image type="content" source="media/tutorial-power-bi/create-compute-cluster.png" alt-text="Screenshot showing how to create a compute cluster":::
+:::image type="content" source="media/tutorial-power-bi/create-compute-cluster.png" alt-text="Screenshot showing how to create a compute cluster.":::
-In the **Create compute cluster** screen:
+On the **Create compute cluster** page:
-1. Select a VM size (for the purposes of this tutorial a `Standard_D11_v2` machine is fine).
-1. Select **Next**
-1. Provide a valid compute name
-1. Keep **Minimum number of nodes** at 0
-1. Change **Maximum number of nodes** to 4
-1. Select **Create**
+1. Select a VM size. For this tutorial, a **Standard_D11_v2** machine is fine.
+1. Select **Next**.
+1. Provide a valid compute name.
+1. Keep **Minimum number of nodes** at `0`.
+1. Change **Maximum number of nodes** to `4`.
+1. Select **Create**.
-You can see that the status of your cluster has changed to **Creating**.
+The status of your cluster changes to **Creating**.
>[!NOTE]
-> When the cluster is created it will have 0 nodes, which means no compute costs are incurred. You only incur costs when the automated ML job runs. The cluster will scale back to 0 automatically for you after 120 seconds of idle time.
+> The new cluster has 0 nodes, so no compute costs are incurred. You incur costs only when the automated machine learning job runs. The cluster scales back to 0 automatically after 120 seconds of idle time.
-## Create dataset
+## Create a dataset
-In this tutorial, you use the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html), which is made available in [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
+In this tutorial, you use the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset is available in [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
-To create the dataset, select **Datasets** left-hand menu followed by **Create Dataset** - you will see the following options:
+To create the dataset, in the menu on the left, select **Datasets**. Then select **Create dataset**. You see the following options:
-:::image type="content" source="media/tutorial-power-bi/create-dataset.png" alt-text="Screenshot showing how to create a new dataset":::
+:::image type="content" source="media/tutorial-power-bi/create-dataset.png" alt-text="Screenshot showing how to create a new dataset.":::
-Select **From Open Datasets**, and then in **Create dataset from Open Datasets** screen:
+Select **From Open Datasets**. Then on the **Create dataset from Open Datasets** page:
-1. Search for *diabetes* using the search bar
-1. Select **Sample: Diabetes**
-1. Select **Next**
-1. Provide a name for your dataset - *diabetes*
-1. Select **Create**
+1. Use the search bar to find *diabetes*.
+1. Select **Sample: Diabetes**.
+1. Select **Next**.
+1. Name your dataset *diabetes*.
+1. Select **Create**.
-You can explore the data by selecting the Dataset followed by **Explore**:
+To explore the data, select the dataset and then select **Explore**:
-:::image type="content" source="media/tutorial-power-bi/explore-dataset.png" alt-text="Screenshot showing how to explore dataset":::
+:::image type="content" source="media/tutorial-power-bi/explore-dataset.png" alt-text="Screenshot showing how to explore a dataset.":::
-The data has 10 baseline input variables (such as age, sex, body mass index, average blood pressure, and six blood serum measurements), and one target variable named **Y** (a quantitative measure of diabetes progression one year after baseline).
+The data has 10 baseline input variables, such as age, sex, body mass index, average blood pressure, and six blood serum measurements. It also has one target variable, named **Y**. This target variable is a quantitative measure of diabetes progression one year after the baseline.
-## Create automated ML run
+## Create an automated machine learning run
-In the [Azure Machine Learning Studio](https://ml.azure.com) select **Automated ML** in the left-hand menu followed by **New Automated ML Run**:
+In [Azure Machine Learning Studio](https://ml.azure.com), in the menu on the left, select **Automated ML**. Then select **New Automated ML run**:
-:::image type="content" source="media/tutorial-power-bi/create-new-run.png" alt-text="Screenshot showing how to create a new automated ML run":::
+:::image type="content" source="media/tutorial-power-bi/create-new-run.png" alt-text="Screenshot showing how to create a new automated machine learning run.":::
-Next, select the **diabetes** dataset you created earlier and select **Next**:
+Next, select the **diabetes** dataset you created earlier. Then select **Next**:
-:::image type="content" source="media/tutorial-power-bi/select-dataset.png" alt-text="Screenshot showing how to select a dataset":::
+:::image type="content" source="media/tutorial-power-bi/select-dataset.png" alt-text="Screenshot showing how to select a dataset.":::
-In the **Configure run** screen:
+On the **Configure run** page:
-1. Under **Experiment name,** select **Create new**
-1. Provide an experiment a name
-1. In the Target column field, select **Y**
-1. In the **Select compute cluster** field select the compute cluster you created earlier.
+1. Under **Experiment name**, select **Create new**.
+1. Name the experiment.
+1. In the **Target column** field, select **Y**.
+1. In the **Select compute cluster** field, select the compute cluster you created earlier.
-Your completed form should look similar to:
+Your completed form should look like this:
-:::image type="content" source="media/tutorial-power-bi/configure-automated.png" alt-text="Screenshot showing how to configure automated ML":::
+:::image type="content" source="media/tutorial-power-bi/configure-automated.png" alt-text="Screenshot showing how to configure automated machine learning.":::
-Finally, you need to select the machine learning task to perform, which is **Regression**:
+Finally, select a machine learning task. In this case, the task is **Regression**:
-:::image type="content" source="media/tutorial-power-bi/configure-task.png" alt-text="Screenshot showing how to configure task":::
+:::image type="content" source="media/tutorial-power-bi/configure-task.png" alt-text="Screenshot showing how to configure a task.":::
Select **Finish**. > [!IMPORTANT]
-> It will take around 30 minutes for automated ML to finish training the 100 different models.
+> Automated machine learning takes around 30 minutes to finish training the 100 models.
## Deploy the best model
-Once the automated ML run has completed, you can see the list of all the different machine learning models that have been tried by selecting the **Models** tab. The models are ordered in performance order - the best performing model will be shown first. When you select the best model, the **Deploy** button will be enabled:
+When automated machine learning finishes, you can see all the machine learning models that have been tried by selecting the **Models** tab. The models are ordered by performance; the best-performing model is shown first. After you select the best model, the **Deploy** button is enabled:
-:::image type="content" source="media/tutorial-power-bi/list-models.png" alt-text="Screenshot showing the list of models":::
+:::image type="content" source="media/tutorial-power-bi/list-models.png" alt-text="Screenshot showing the list of models.":::
-Selecting **Deploy**, will present a **Deploy a model** screen:
+Select **Deploy** to open a **Deploy a model** window:
-1. Provide a name for your model service - use **diabetes-model**
-1. Select **Azure Container Service**
-1. Select **Deploy**
+1. Name your model service *diabetes-model*.
+1. Select **Azure Container Service**.
+1. Select **Deploy**.
-You should see a message that states the model has been deployed successfully.
+You should see a message that states that the model was deployed successfully.
## Next steps
-In this tutorial, you saw how to train and deploy a machine learning model using automated ML. In the next tutorial you are shown how to consume (score) this model from Power BI.
+In this tutorial, you saw how to train and deploy a machine learning model by using automated machine learning. In the next tutorial, you'll learn how to consume (score) this model in Power BI.
> [!div class="nextstepaction"]
-> [Tutorial: Consume model in Power BI](/power-bi/connect-data/service-aml-integrate?context=azure/machine-learning/context/ml-context)
+> [Tutorial: Consume a model in Power BI](/power-bi/connect-data/service-aml-integrate?context=azure/machine-learning/context/ml-context)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-power-bi-custom-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-power-bi-custom-model.md
@@ -1,7 +1,7 @@
---
-title: "Tutorial: Create the predictive model with a Notebook (part 1 of 2)"
+title: "Tutorial: Create the predictive model by using a notebook (part 1 of 2)"
titleSuffix: Azure Machine Learning
-description: Learn how to build and deploy a machine learning model using code in a Jupyter Notebook, so you can use it to predict outcomes in Microsoft Power BI.
+description: Learn how to build and deploy a machine learning model by using code in a Jupyter Notebook. You can use the model to predict outcomes in Microsoft Power BI.
services: machine-learning ms.service: machine-learning ms.subservice: core
@@ -12,62 +12,63 @@ ms.reviewer: sdgilley
ms.date: 12/11/2020 ---
-# Tutorial: Power BI integration - create the predictive model with a Notebook (part 1 of 2)
+# Tutorial: Power BI integration - Create the predictive model by using a Jupyter Notebook (part 1 of 2)
-In the first part of this tutorial, you train and deploy a predictive machine learning model using code in a Jupyter Notebook. In part 2, you'll then use the model to predict outcomes in Microsoft Power BI.
+In part 1 of this tutorial, you train and deploy a predictive machine learning model by using code in a Jupyter Notebook. In part 2, you'll use the model to predict outcomes in Microsoft Power BI.
In this tutorial, you: > [!div class="checklist"]
-> * Create a Jupyter Notebook
-> * Create an Azure Machine Learning compute instance
-> * Train a regression model using scikit-learn
-> * Deploy the model to a real-time scoring endpoint
+> * Create a Jupyter Notebook.
+> * Create an Azure Machine Learning compute instance.
+> * Train a regression model by using scikit-learn.
+> * Deploy the model to a real-time scoring endpoint.
-There are three different ways to create and deploy the model you'll use in Power BI. This article covers Option A: Train and deploy models using Notebooks. This option shows a code-first authoring experience using Jupyter notebooks hosted in Azure Machine Learning studio.
+There are three ways to create and deploy the model that you'll use in Power BI. This article covers "Option A: Train and deploy models by using notebooks." This option is a code-first authoring experience. It uses Jupyter notebooks that are hosted in Azure Machine Learning Studio.
-You could instead use:
+But you could instead use one of the other options:
-* [Option B: Train and deploy models using designer](tutorial-power-bi-designer-model.md)- a low-code authoring experience using Designer (a drag-and-drop user interface).
-* [Option C: Train and deploy models using automated ML](tutorial-power-bi-automated-model.md) - a no-code authoring experience that fully automates the data preparation and model training.
+* [Option B: Train and deploy models by using the Azure Machine Learning designer](tutorial-power-bi-designer-model.md). This low-code authoring experience uses a drag-and-drop user interface.
+* [Option C: Train and deploy models by using automated machine learning](tutorial-power-bi-automated-model.md). This no-code authoring experience fully automates data preparation and model training.
## Prerequisites -- An Azure subscription ([a free trial is available](https://aka.ms/AMLFree)). -- An Azure Machine Learning workspace. If you do not already have a workspace, follow [how to create an Azure Machine Learning Workspace](./how-to-manage-workspace.md#create-a-workspace).
+- An Azure subscription. If you don't already have a subscription, you can use a [free trial](https://aka.ms/AMLFree).
+- An Azure Machine Learning workspace. If you don't already have a workspace, see [Create and manage Azure Machine Learning workspaces](./how-to-manage-workspace.md#create-a-workspace).
- Introductory knowledge of the Python language and machine learning workflows. ## Create a notebook and compute
-In the [Azure Machine Learning Studio](https://ml.azure.com) homepage select **Create new** followed by **Notebook**:
+On the [**Azure Machine Learning Studio**](https://ml.azure.com) home page, select **Create new** > **Notebook**:
-:::image type="content" source="media/tutorial-power-bi/create-new-notebook.png" alt-text="Screenshot showing how to create a notebook":::
+:::image type="content" source="media/tutorial-power-bi/create-new-notebook.png" alt-text="Screenshot showing how to create a notebook.":::
-You are shown a dialog box to **Create a new file** enter:
+On the **Create a new file** page:
-1. A filename for your notebook (for example `my_model_notebook`)
-1. Change the **File Type** to **Notebook**
-
-Select **Create**. Next, you need to create some compute and attach it to your notebook in order to run code cells. To do this, select the plus icon at the top of the notebook:
+1. Name your notebook (for example, *my_model_notebook*).
+1. Change the **File Type** to **Notebook**.
+1. Select **Create**.
+
+Next, to run code cells, create a compute instance and attach it to your notebook. Start by selecting the plus icon at the top of the notebook:
-:::image type="content" source="media/tutorial-power-bi/create-compute.png" alt-text="Screenshot showing how to create compute instance":::
+:::image type="content" source="media/tutorial-power-bi/create-compute.png" alt-text="Screenshot showing how to create a compute instance.":::
-Next, on the **Create compute instance** page:
+On the **Create compute instance** page:
-1. Choose a CPU virtual machine size - for the purposes of this tutorial a **Standard_D11_v2** (two cores, 14-GB RAM) will be fine.
+1. Choose a CPU virtual machine size. For this tutorial, you can choose a **Standard_D11_v2**, with 2 cores and 14 GB of RAM.
1. Select **Next**.
-1. On the **Configure Settings** page provide a valid **Compute name** (valid characters are upper and lower case letters, digits, and the - character).
+1. On the **Configure Settings** page, provide a valid **Compute name**. Valid characters are uppercase and lowercase letters, digits, and hyphens (-).
1. Select **Create**.
-You may notice on the notebook that the circle next to **Compute** has turned cyan, indicating the compute instance is being created:
+In the notebook, you might notice the circle next to **Compute** turned cyan. This color change indicates that the compute instance is being created:
-:::image type="content" source="media/tutorial-power-bi/creating.png" alt-text="Screenshot showing compute being created":::
+:::image type="content" source="media/tutorial-power-bi/creating.png" alt-text="Screenshot showing a compute being created.":::
> [!NOTE]
-> It can take around 2-4 minutes for the compute to be provisioned.
+> The compute instance can take 2 to 4 minutes to be provisioned.
-Once the compute is provisioned, you can use the notebook to execute code cells. For example, type into the cell:
+After the compute is provisioned, you can use the notebook to run code cells. For example, in the cell you can type the following code:
```python import numpy as np
@@ -75,20 +76,20 @@ import numpy as np
np.sin(3) ```
-Followed by **Shift-Enter** (or **Control-Enter** or select the play button next to the cell). You should see the following output:
+Then select Shift + Enter (or select Control + Enter or select the **Play** button next to the cell). You should see the following output:
-:::image type="content" source="media/tutorial-power-bi/simple-sin.png" alt-text="Screenshot showing cell execution":::
+:::image type="content" source="media/tutorial-power-bi/simple-sin.png" alt-text="Screenshot showing the output of a cell.":::
-You are now ready to build a Machine Learning model!
+Now you're ready to build a machine learning model.
-## Build a model using scikit-learn
+## Build a model by using scikit-learn
-In this tutorial, you use the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html), which is made available in [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
+In this tutorial, you use the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset is available in [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
### Import data
-To import your data, copy-and-paste the code below into a new **code cell** in your notebook:
+To import your data, copy the following code and paste it into a new *code cell* in your notebook.
```python from azureml.opendatasets import Diabetes
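# Sketch of the middle of this cell (assumed, reconstructed from the description
# below): load the Diabetes open dataset and split it into features X and target y.
diabetes = Diabetes.get_tabular_dataset()
X = diabetes.drop_columns("Y")
y = diabetes.keep_columns("Y")
X_df = X.to_pandas_dataframe()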
@@ -101,11 +102,11 @@ y_df = y.to_pandas_dataframe()
X_df.info() ```
-The `X_df` pandas data frame contains 10 baseline input variables (such as age, sex, body mass index, average blood pressure, and six blood serum measurements). The `y_df` pandas data frame is the target variable containing a quantitative measure of disease progression one year after baseline. There are a total of 442 records.
+The `X_df` pandas data frame contains 10 baseline input variables. These variables include age, sex, body mass index, average blood pressure, and six blood serum measurements. The `y_df` pandas data frame is the target variable. It contains a quantitative measure of disease progression one year after the baseline. The data frame contains 442 records.
-### Train model
+### Train the model
-Create a new **code cell** in your notebook and copy-and-paste the code snippet below, which constructs a ridge regression model and serializes the model using Python's pickle format:
+Create a new *code cell* in your notebook. Then copy the following code and paste it into the cell. This code snippet constructs a ridge regression model and serializes the model by using the Python pickle format.
```python import joblib
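from sklearn.linear_model import Ridge

# Sketch of the rest of this cell (assumed): fit a ridge regression on the
# features and target loaded earlier, then serialize it in pickle format.
model = Ridge().fit(X_df, y_df)
joblib.dump(model, 'sklearn_regression_model.pkl')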
@@ -117,9 +118,11 @@ joblib.dump(model, 'sklearn_regression_model.pkl')
### Register the model
-In addition to the content of the model file itself, your registered model will also store model metadata - model description, tags, and framework information - that will be useful when managing and deploying models in your workspace. Using tags, for instance, you can categorize your models and apply filters when listing models in your workspace. Also, marking this model with the scikit-learn framework will simplify deploying it as a web service, as we'll see later.
+In addition to the content of the model file itself, your registered model will store metadata. The metadata includes the model description, tags, and framework information.
+
+Metadata is useful when you're managing and deploying models in your workspace. By using tags, for instance, you can categorize your models and apply filters when you list models in your workspace. Also, if you mark this model with the scikit-learn framework, you'll simplify deploying it as a web service.
-Copy-and-paste the code below into a new **code cell** in your notebook:
+Copy the following code and then paste it into a new *code cell* in your notebook.
```python import sklearn
@@ -145,21 +148,21 @@ print('Name:', model.name)
print('Version:', model.version) ```
-You can also view the model in Azure Machine Learning Studio by navigating to **Endpoints** in the left hand-menu:
+You can also view the model in Azure Machine Learning Studio. In the menu on the left, select **Models**:
-:::image type="content" source="media/tutorial-power-bi/model.png" alt-text="Screenshot showing model":::
+:::image type="content" source="media/tutorial-power-bi/model.png" alt-text="Screenshot showing how to view a model.":::
### Define the scoring script
-When deploying a model to be integrated into Microsoft Power BI, you need to define a Python *scoring script* and custom environment. The scoring script contains two functions:
+When you deploy a model that will be integrated into Power BI, you need to define a Python *scoring script* and custom environment. The scoring script contains two functions:
-- `init()` - this function is executed once the service starts. This function loads the model (note that the model is automatically downloaded from the model registry) and deserializes it.-- `run(data)` - this function is executed when a call is made to the service with some input data that needs scoring.
+- The `init()` function runs when the service starts. It loads the model (which is automatically downloaded from the model registry) and deserializes it.
+- The `run(data)` function runs when a call to the service includes input data that needs to be scored.
>[!NOTE]
-> We use Python decorators to define the schema of the input and output data, which is important for the Microsoft Power BI integration to work.
+> This article uses Python decorators to define the schema of the input and output data. This setup is important for the Power BI integration.
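To make the shape of such a script concrete, here is a minimal sketch. The registered model name, the sample rows, and the schema decorators shown here are illustrative assumptions, not the exact file used by this tutorial.

```python
import joblib
import numpy as np
import pandas as pd
from azureml.core.model import Model
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType

def init():
    # Runs once when the service starts: locate and deserialize the registered model.
    global model
    model_path = Model.get_model_path('my-sklearn-model')  # assumed registered name
    model = joblib.load(model_path)

# Sample rows that define the input and output schema the Power BI integration reads.
input_sample = pd.DataFrame(data=[{
    "AGE": 5, "SEX": 2, "BMI": 3.1, "BP": 3.1,
    "S1": 3.1, "S2": 3.1, "S3": 3.1, "S4": 3.1, "S5": 3.1, "S6": 3.1
}])
output_sample = np.array([0.0])

@input_schema('data', PandasParameterType(input_sample))
@output_schema(NumpyParameterType(output_sample))
def run(data):
    # Runs on every request: score the incoming rows and return the predictions.
    try:
        result = model.predict(data)
        return result.tolist()
    except Exception as e:
        return {"error": str(e)}
```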
-Copy-and-paste the code below into a new **code cell** in your notebook. The code snippet below has a cell magic that will write the code to a filed named score.py.
+Copy the following code and paste it into a new *code cell* in your notebook. The following code snippet has cell magic that writes the code to a file named *score.py*.
```python %%writefile score.py
@@ -214,7 +217,7 @@ def run(data):
result = model.predict(data) print("result.....") print(result)
- # You can return any data type, as long as it is JSON serializable.
+ # You can return any data type, as long as it can be serialized by JSON.
return result.tolist() except Exception as e: error = str(e)
@@ -223,9 +226,9 @@ def run(data):
### Define the custom environment
-Next we need to define the environment to score the model - we need to define in this environment the Python packages required by the scoring script (score.py) defined above such as pandas, scikit-learn, etc.
+Next, define the environment to score the model. In the environment, define the Python packages, such as pandas and scikit-learn, that the scoring script (*score.py*) requires.
-To define the environment, copy-and-paste the code below into a new **code cell** in your notebook:
+To define the environment, copy the following code and paste it into a new *code cell* in your notebook.
```python from azureml.core.model import InferenceConfig
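from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# Sketch of the rest of this cell (assumed): build a custom environment that
# lists the pip packages score.py imports, then bind it to the scoring script.
environment = Environment('my-sklearn-environment')
environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[
    'azureml-defaults',
    'inference-schema[numpy-support]',
    'joblib',
    'numpy',
    'pandas',
    'scikit-learn'
])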
@@ -247,7 +250,7 @@ inference_config = InferenceConfig(entry_script='./score.py',environment=environ
### Deploy the model
-To deploy the model, copy-and-paste the code below into a new **code cell** in your notebook:
+To deploy the model, copy the following code and paste it into a new *code cell* in your notebook:
```python service_name = 'my-diabetes-model'
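from azureml.core.model import Model

# Sketch of the rest of this cell (assumed): deploy the model registered earlier.
# ws, model, and inference_config come from the previous cells; with no deployment
# configuration specified, Azure Machine Learning uses a default container target.
service = Model.deploy(ws, service_name, [model], inference_config, overwrite=True)
service.wait_for_deployment(show_output=True)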
@@ -257,9 +260,9 @@ service.wait_for_deployment(show_output=True)
``` >[!NOTE]
-> It can take 2-4 minutes for the service to be deployed.
+> The service can take 2 to 4 minutes to deploy.
-You should see the following output of a successfully deployed service:
+If the service deploys successfully, you should see the following output:
```txt Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
@@ -268,11 +271,11 @@ Succeeded
ACI service creation operation finished, operation "Succeeded" ```
-You can also view the service in Azure Machine Learning Studio by navigating to **Endpoints** in the left hand-menu:
+You can also view the service in Azure Machine Learning Studio. In the menu on the left, select **Endpoints**:
-:::image type="content" source="media/tutorial-power-bi/endpoint.png" alt-text="Screenshot showing endpoint":::
+:::image type="content" source="media/tutorial-power-bi/endpoint.png" alt-text="Screenshot showing how to view the service.":::
-It is recommended that you test the webservice to ensure that it works as expected. Navigate back to your notebook by selecting **Notebooks** in the left-hand menu in Azure Machine Learning Studio. Copy-and-paste the code below into a new **code cell** in your notebook to test the service:
+We recommend that you test the web service to ensure it works as expected. To return to your notebook, in Azure Machine Learning Studio, in the menu on the left, select **Notebooks**. Then copy the following code and paste it into a new *code cell* in your notebook to test the service.
```python import json
@@ -288,11 +291,11 @@ output = service.run(input_payload)
print(output) ```
-The output should look like the following json structure: `{'predict': [[205.59], [68.84]]}`.
+The output should look like this JSON structure: `{'predict': [[205.59], [68.84]]}`.
## Next steps
-In this tutorial, you saw how to build and deploy a model in such a way that they can be consumed by Microsoft Power BI. In the next part, you learn how to consume this model from a Power BI report.
+In this tutorial, you saw how to build and deploy a model so that it can be consumed by Power BI. In the next part, you'll learn how to consume this model in a Power BI report.
> [!div class="nextstepaction"]
-> [Tutorial: Consume model in Power BI](/power-bi/connect-data/service-aml-integrate?context=azure/machine-learning/context/ml-context)
+> [Tutorial: Consume a model in Power BI](/power-bi/connect-data/service-aml-integrate?context=azure/machine-learning/context/ml-context)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-power-bi-designer-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-power-bi-designer-model.md
@@ -1,7 +1,7 @@
---
-title: "Tutorial: Drag-and-drop to create the predictive model (part 1 of 2)"
+title: "Tutorial: Drag and drop to create the predictive model (part 1 of 2)"
titleSuffix: Azure Machine Learning
-description: Learn how to build and deploy a machine learning predictive model with designer, so you can use it to predict outcomes in Microsoft Power BI.
+description: Learn how to build and deploy a machine learning predictive model by using the designer. Later, you can use it to predict outcomes in Microsoft Power BI.
services: machine-learning ms.service: machine-learning ms.subservice: core
@@ -13,153 +13,169 @@ ms.date: 12/11/2020
ms.custom: designer ---
-# Tutorial: Power BI integration - drag-and-drop to create the predictive model (part 1 of 2)
+# Tutorial: Power BI integration - Drag and drop to create the predictive model (part 1 of 2)
-In the first part of this tutorial, you train and deploy a predictive machine learning model using Azure Machine Learning designer - a low-code drag-and-drop user interface. In part 2, you'll then use the model to predict outcomes in Microsoft Power BI.
+In part 1 of this tutorial, you train and deploy a predictive machine learning model by using the Azure Machine Learning designer. The designer is a low-code drag-and-drop user interface. In part 2, you'll use the model to predict outcomes in Microsoft Power BI.
In this tutorial, you: > [!div class="checklist"]
-> * Create an Azure Machine Learning compute instance
-> * Create an Azure Machine Learning inference cluster
-> * Create a dataset
-> * Train a regression model
-> * Deploy the model to a real-time scoring endpoint
+> * Create an Azure Machine Learning compute instance.
+> * Create an Azure Machine Learning inference cluster.
+> * Create a dataset.
+> * Train a regression model.
+> * Deploy the model to a real-time scoring endpoint.
-There are three different ways to create and deploy the model you'll use in Power BI. This article covers Option B: Train and deploy models using designer. This option shows a low-code authoring experience using designer (a drag-and-drop user interface).
+There are three ways to create and deploy the model you'll use in Power BI. This article covers "Option B: Train and deploy models by using the designer." This option is a low-code authoring experience that uses the designer interface.
-You could instead use:
+But you could instead use one of the other options:
-* [Option A: Train and deploy models using Notebooks](tutorial-power-bi-custom-model.md) - a code-first authoring experience using Jupyter notebooks hosted in Azure Machine Learning studio.
-* [Option C: Train and deploy models using automated ML](tutorial-power-bi-automated-model.md) - a no-code authoring experience that fully automates the data preparation and model training.
+* [Option A: Train and deploy models by using Jupyter Notebooks](tutorial-power-bi-custom-model.md). This code-first authoring experience uses Jupyter Notebooks that are hosted in Azure Machine Learning Studio.
+* [Option C: Train and deploy models by using automated machine learning](tutorial-power-bi-automated-model.md). This no-code authoring experience fully automates data preparation and model training.
## Prerequisites -- An Azure subscription ([a free trial is available](https://aka.ms/AMLFree)). -- An Azure Machine Learning workspace. If you do not already have a workspace, follow [how to create an Azure Machine Learning Workspace](./how-to-manage-workspace.md#create-a-workspace).
+- An Azure subscription. If you don't already have a subscription, you can use a [free trial](https://aka.ms/AMLFree).
+- An Azure Machine Learning workspace. If you don't already have a workspace, see [Create and manage Azure Machine Learning workspaces](./how-to-manage-workspace.md#create-a-workspace).
- Introductory knowledge of machine learning workflows.
-## Create compute for training and scoring
+## Create compute to train and score
-In this section, you create a *compute instance*, which is used for training machine learning models. Also, you create an *inference cluster* that will be used to host the deployed model for real-time scoring.
+In this section, you create a *compute instance*. Compute instances are used to train machine learning models. You also create an *inference cluster* to host the deployed model for real-time scoring.
-Log into the [Azure Machine Learning studio](https://ml.azure.com) and select **Compute** from the left-hand menu followed by **New**:
+Sign in to [Azure Machine Learning Studio](https://ml.azure.com). In the menu on the left, select **Compute** and then **New**:
-:::image type="content" source="media/tutorial-power-bi/create-new-compute.png" alt-text="Screenshot showing how to create compute instance":::
+:::image type="content" source="media/tutorial-power-bi/create-new-compute.png" alt-text="Screenshot showing how to create a compute instance.":::
-On the resulting **Create compute instance** screen, select a VM size (for this tutorial, select a `Standard_D11_v2`) followed by **Next**. In the Setting page, provide a valid name for your compute instance followed by selecting **Create**.
+On the **Create compute instance** page, select a VM size. For this tutorial, select a **Standard_D11_v2** VM. Then select **Next**.
+
+On the **Settings** page, name your compute instance. Then select **Create**.
>[!TIP]
-> The compute instance can also be used to create and execute notebooks.
+> You can also use the compute instance to create and run notebooks.
+
+Your compute instance **Status** is now **Creating**. The machine takes around 4 minutes to provision.
-You can now see your compute instance **Status** is **Creating** - it will take around 4 minutes for the machine to be provisioned. While you are waiting, select **Inference Cluster** tab on the compute page followed by **New**:
+While you wait, on the **Compute** page, select the **Inference clusters** tab. Then select **New**:
-:::image type="content" source="media/tutorial-power-bi/create-cluster.png" alt-text="Screenshot showing how to create an inference cluster":::
+:::image type="content" source="media/tutorial-power-bi/create-cluster.png" alt-text="Screenshot showing how to create an inference cluster.":::
-In the resulting **Create inference cluster** page, select a region followed by a VM size (for this tutorial, select a `Standard_D11_v2`), then select **Next**. On the **Configure Settings** page:
+On the **Create inference cluster** page, select a region and a VM size. For this tutorial, select a **Standard_D11_v2** VM. Then select **Next**.
-1. Provide a valid compute name
-1. Select **Dev-test** as the cluster purpose (creates a single node to host the deployed model)
-1. Select **Create**
+On the **Configure Settings** page:
-You can now see your inference cluster **Status** is **Creating** - it will take around 4 minutes for your single node cluster to deploy.
+1. Provide a valid compute name.
+1. Select **Dev-test** as the cluster purpose. This option creates a single node to host the deployed model.
+1. Select **Create**.
+
+Your inference cluster **Status** is now **Creating**. Your single node cluster takes around 4 minutes to deploy.
## Create a dataset
-In this tutorial, you use the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html), which is made available in [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
+In this tutorial, you use the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset is available in [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
-To create the dataset, in the left-hand menu select **Datasets**, then **Create Dataset** - you will see the following options:
+To create the dataset, in the menu on the left, select **Datasets**. Then select **Create dataset**. You see the following options:
-:::image type="content" source="media/tutorial-power-bi/create-dataset.png" alt-text="Screenshot showing how to create a new dataset":::
+:::image type="content" source="media/tutorial-power-bi/create-dataset.png" alt-text="Screenshot showing how to create a new dataset.":::
-Select **From Open Datasets**, and then in **Create dataset from Open Datasets** screen:
+Select **From Open Datasets**. On the **Create dataset from Open Datasets** page:
-1. Search for *diabetes* using the search bar
-1. Select **Sample: Diabetes**
-1. Select **Next**
-1. Provide a name for your dataset - *diabetes*
-1. Select **Create**
+1. Use the search bar to find *diabetes*.
+1. Select **Sample: Diabetes**.
+1. Select **Next**.
+1. Name your dataset *diabetes*.
+1. Select **Create**.
-You can explore the data by selecting the Dataset followed by **Explore**:
+To explore the data, select the dataset and then select **Explore**:
-:::image type="content" source="media/tutorial-power-bi/explore-dataset.png" alt-text="Screenshot showing how to dataset explore":::
+:::image type="content" source="media/tutorial-power-bi/explore-dataset.png" alt-text="Screenshot showing how to explore a dataset.":::
-The data has 10 baseline input variables (such as age, sex, body mass index, average blood pressure, and six blood serum measurements), and one target variable named **Y** (a quantitative measure of diabetes progression one year after baseline).
+The data has 10 baseline input variables, such as age, sex, body mass index, average blood pressure, and six blood serum measurements. It also has one target variable, named **Y**. This target variable is a quantitative measure of diabetes progression one year after the baseline.
-## Create a Machine Learning model using designer
+## Create a machine learning model by using the designer
-Once you have created the compute and datasets, you can move on to creating the machine learning model using designer. In the Azure Machine Learning studio, select **Designer** followed by **New Pipeline**:
+After you create the compute and datasets, you can use the designer to create the machine learning model. In Azure Machine Learning Studio, select **Designer** and then **New pipeline**:
-:::image type="content" source="media/tutorial-power-bi/create-designer.png" alt-text="Screenshot showing how to create a new pipeline":::
+:::image type="content" source="media/tutorial-power-bi/create-designer.png" alt-text="Screenshot showing how to create a new pipeline.":::
-You see a blank *canvas* where you can also see a **Settings menu**:
+You see a blank *canvas* and a **Settings** menu:
-:::image type="content" source="media/tutorial-power-bi/select-compute.png" alt-text="Screenshot showing how to select a compute target":::
+:::image type="content" source="media/tutorial-power-bi/select-compute.png" alt-text="Screenshot showing how to select a compute target.":::
-On the **Settings menu**, **Select compute target** and then select the compute instance you created earlier followed by **Save**. Rename your **Draft name** to something more memorable (for example *diabetes-model*) and enter a description.
+On the **Settings** menu, choose **Select compute target**. Select the compute instance you created earlier, and then select **Save**. Change the **Draft name** to something more memorable, such as *diabetes-model*. Finally, enter a description.
-Next, in listed assets expand **Datasets** and locate the **diabetes** dataset - drag-and-drop this module onto the canvas:
+In the list of assets, expand **Datasets** and locate the **diabetes** dataset. Drag this component onto the canvas:
-:::image type="content" source="media/tutorial-power-bi/drag-component.png" alt-text="Screenshot showing how to drag a component on":::
+:::image type="content" source="media/tutorial-power-bi/drag-component.png" alt-text="Screenshot showing how to drag a component onto the canvas.":::
-Next, drag-and-drop the following components on to the canvas:
+Next, drag the following components onto the canvas:
-1. Linear regression (located in **Machine Learning Algorithms**)
-1. Train model (located in **Model Training**)
+1. **Linear Regression** (located in **Machine Learning Algorithms**)
+1. **Train Model** (located in **Model Training**)
-Your canvas should look like (notice that the top-and-bottom of the components has little circles called ports - highlighted in red below):
+On your canvas, notice the circles at the top and bottom of the components. These circles are ports.
-:::image type="content" source="media/tutorial-power-bi/connections.png" alt-text="Screenshot showing how unconnected components":::
+:::image type="content" source="media/tutorial-power-bi/connections.png" alt-text="Screenshot showing the ports on unconnected components.":::
-Next, you need to *wire* these components together. Select the port at the bottom of the **diabetes** dataset and drag it to the right-hand port at the top of the **Train model** component. Select the port at the bottom of the **Linear regression** component and drag onto the left-hand port at the top of the **Train model** port.
+Now *wire* the components together. Select the port at the bottom of the **diabetes** dataset. Drag it to the port on the upper-right side of the **Train Model** component. Select the port at the bottom of the **Linear Regression** component. Drag it to the port on the upper-left side of the **Train Model** component.
+
+Choose the dataset column to use as the label (target) variable to predict. Select the **Train Model** component and then select **Edit column**.
-Choose the column in the dataset to be used as the label (target) variable to predict. Select the **Train model** component followed by **Edit column**. From the dialog box - Select the **Enter Column name** followed by **Y** in the drop-down list:
+In the dialog box, select **Enter column name** > **Y**:
-:::image type="content" source="media/tutorial-power-bi/label-columns.png" alt-text="Screenshot select label column":::
+:::image type="content" source="media/tutorial-power-bi/label-columns.png" alt-text="Screenshot showing how to select a label column.":::
-Select **Save**. Your machine learning *workflow* should look as follows:
+Select **Save**. Your machine learning *workflow* should look like this:
-:::image type="content" source="media/tutorial-power-bi/connected-diagram.png" alt-text="Screenshot showing how connected components":::
+:::image type="content" source="media/tutorial-power-bi/connected-diagram.png" alt-text="Screenshot showing connected components.":::
-Select **Submit** and then **Create new** under experiment. Provide a name for the experiment followed by **Submit**.
+Select **Submit**. Under **Experiment**, select **Create new**. Name the experiment, and then select **Submit**.
>[!NOTE]
-> It should take around 5-minutes for your experiment to complete on the first run. Subsequent runs are much quicker - designer caches already run components to reduce latency.
+> Your experiment's first run should take around 5 minutes. Subsequent runs are much quicker because the designer caches components that have been run to reduce latency.
-When the experiment is completed, you see:
+When the experiment finishes, you see this view:
-:::image type="content" source="media/tutorial-power-bi/completed-run.png" alt-text="Screenshot showing completed run":::
+:::image type="content" source="media/tutorial-power-bi/completed-run.png" alt-text="Screenshot showing a completed run.":::
-You can inspect the logs of the experiment by selecting **Train model** followed by **Outputs + logs**.
+To inspect the experiment logs, select **Train Model** and then select **Outputs + logs**.
## Deploy the model
-To deploy the model, select **Create Inference Pipeline** (located at the top of the canvas) followed by **Real-time inference pipeline**:
+To deploy the model, at the top of the canvas, select **Create inference pipeline** > **Real-time inference pipeline**:
-:::image type="content" source="media/tutorial-power-bi/pipeline.png" alt-text="Screenshot showing real-time inference pipeline":::
+:::image type="content" source="media/tutorial-power-bi/pipeline.png" alt-text="Screenshot showing where to select a real-time inference pipeline.":::
-The pipeline condenses to just the components necessary to do the model scoring. When you score the data you will not know the target variable values, therefore we can remove **Y** from the dataset. To remove, add to the canvas a **Select columns in Dataset** component. Wire the component so the diabetes dataset is the input, and the results are the output into the **Score Model** component:
+The pipeline condenses to just the components necessary to score the model. When you score the data, you won't know the target variable values. So you can remove **Y** from the dataset.
+
+To remove **Y**, add a **Select Columns in Dataset** component to the canvas. Wire the component so that the diabetes dataset is its input and its output feeds into the **Score Model** component:
+
+:::image type="content" source="media/tutorial-power-bi/remove-column.png" alt-text="Screenshot showing how to remove a column.":::
+
+On the canvas, select the **Select Columns in Dataset** component, and then select **Edit Columns**.
+
+In the **Select columns** dialog box, choose **By name**. Then ensure that all the input variables are selected but the target is *not* selected:
-:::image type="content" source="media/tutorial-power-bi/remove-column.png" alt-text="Screenshot showing removal of a column":::
+:::image type="content" source="media/tutorial-power-bi/removal-settings.png" alt-text="Screenshot showing how to remove column settings.":::
-Select the **Select Columns in Dataset** component on the canvas followed by **Edit Columns**. In the Select columns dialog, select **By name** and then ensure all the input variables are selected but **not** the target:
+Select **Save**.
-:::image type="content" source="media/tutorial-power-bi/removal-settings.png" alt-text="Screenshot showing removal of a column settings":::
+Finally, select the **Score Model** component and ensure the **Append score columns to output** check box is cleared. To reduce latency, the predictions are sent back without the inputs.
-Select **Save**. Finally, select the **Score Model** component and ensure the **Append score columns to output** checkbox is unchecked (only the predictions are sent back, rather than the inputs *and* predictions, reducing latency):
+:::image type="content" source="media/tutorial-power-bi/score-settings.png" alt-text="Screenshot showing settings for the Score Model component.":::
-:::image type="content" source="media/tutorial-power-bi/score-settings.png" alt-text="Screenshot showing score model component settings":::
+At the top of the canvas, select **Submit**.
-Select **Submit** at the top of the canvas.
+After you successfully run the inference pipeline, you can deploy the model to your inference cluster. Select **Deploy**.
-When you have successfully run the inference pipeline, you can then deploy the model to your inference cluster. Select **Deploy**, which will show the **Set-up real-time endpoint** dialog box. Select **Deploy new real-time endpoint**, name the endpoint **my-diabetes-model**, select the inference you created earlier, select **Deploy**:
+In the **Set-up real-time endpoint** dialog box, select **Deploy new real-time endpoint**. Name the endpoint *my-diabetes-model*. Select the inference cluster you created earlier, and then select **Deploy**:
-:::image type="content" source="media/tutorial-power-bi/endpoint-settings.png" alt-text="Screenshot showing real-time endpoint settings":::
+:::image type="content" source="media/tutorial-power-bi/endpoint-settings.png" alt-text="Screenshot showing real-time endpoint settings.":::
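After the endpoint is deployed, you can test it outside the designer. The following is a minimal sketch that scores one row with curl; the scoring URI, key, and input field names shown here are placeholders, so copy the real values and the sample payload from the endpoint's **Consume** tab in the studio.

```bash
# Minimal sketch: call the deployed real-time endpoint with curl.
# SCORING_URI and API_KEY are placeholders; copy the real values from the
# endpoint's Consume tab. The input field names below are illustrative and
# must match the columns your pipeline expects.
SCORING_URI="https://<your-scoring-uri>/score"
API_KEY="<your-endpoint-key>"

curl -sS -X POST "$SCORING_URI" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
        "Inputs": {
          "WebServiceInput0": [
            { "AGE": 59, "SEX": 2, "BMI": 32.1, "BP": 101, "S1": 157, "S2": 93.2, "S3": 38, "S4": 4, "S5": 4.86, "S6": 87 }
          ]
        },
        "GlobalParameters": {}
      }'
```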
## Next steps
-In this tutorial, you saw how to train and deploy a designer model. In the next part, you learn how to consume (score) this model from Power BI.
+In this tutorial, you saw how to train and deploy a designer model. In the next part, you learn how to consume (score) this model in Power BI.
> [!div class="nextstepaction"]
-> [Tutorial: Consume model in Power BI](/power-bi/connect-data/service-aml-integrate?context=azure/machine-learning/context/ml-context)
+> [Tutorial: Consume a model in Power BI](/power-bi/connect-data/service-aml-integrate?context=azure/machine-learning/context/ml-context)
marketplace https://docs.microsoft.com/en-us/azure/marketplace/azure-consumption-commitment-benefit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-consumption-commitment-benefit.md
@@ -26,7 +26,7 @@ To take advantage of this benefit, simply purchase a qualifying offer on Azure M
## Determine if your organization has an Azure consumption commitment (MACC/CtC)
-If you are unsure if your organization has a qualifying agreement, sign in to the Marketplace experience within the [Azure portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) under a tenant associated with your organization. If you see the **Azure benefit eligible** option within the Pricing filter, you have a qualifying Azure consumption commitment. Qualifying Azure Marketplace purchases will contribute towards your organization's Azure consumption commitment if purchased directly through the Azure Marketplace.
+If you are unsure if your organization has a qualifying agreement, sign in to the Marketplace experience within the [Azure portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) under a tenant associated with your organization. If you see the option to select **Azure benefit eligible only** as a filter option, you have a qualifying Azure consumption commitment. Qualifying Azure Marketplace purchases will contribute towards your organization's Azure consumption commitment if purchased directly through the Azure Marketplace.
[![Azure benefit eligible menu option.](media/azure-benefit/azure-benefit-eligible.png)](media/azure-benefit/azure-benefit-eligible.png#lightbox)
@@ -52,4 +52,4 @@ If you are unsure if your organization has a qualifying agreement, sign in to th
## Next steps -- To learn more about how your organization can leverage the Azure Marketplace, complete our Microsoft Learn module: [Simplify cloud procurement and governance with Azure Marketplace](/learn/modules/simplify-cloud-procurement-governance-azure-marketplace/).\ No newline at end of file
+- To learn more about how your organization can leverage the Azure Marketplace, complete our Microsoft Learn module: [Simplify cloud procurement and governance with Azure Marketplace](/learn/modules/simplify-cloud-procurement-governance-azure-marketplace/).
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
@@ -373,7 +373,7 @@ https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecurity
**Enable NSG Flow Logging on all NSGs attached to a resource**: Flow logging in Azure is configured on the NSG resource. A flow will only be associated with one NSG rule. In scenarios where multiple NSGs are utilized, we recommend enabling NSG flow logs on all NSGs applied at the resource's subnet or network interface to ensure that all traffic is recorded. For more information, see [how traffic is evaluated](../virtual-network/network-security-group-how-it-works.md) in Network Security Groups. A few common scenarios:
-1. **Multiple NSG at a NIC**: In case multiple NSGs are attached to a NIC, flow logging must be enabled on all of them
+1. **Multiple NICs at a VM**: If multiple NICs are attached to a virtual machine, flow logging must be enabled on all of them.
1. **Having NSG at both NIC and Subnet Level**: In case NSG is configured at the NIC as well as the Subnet level, then flow logging must be enabled at both the NSGs. **Storage provisioning**: Storage should be provisioned in tune with expected Flow Log volume.
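As an illustration of that recommendation, the sketch below enables flow logging on both the subnet-level and NIC-level NSGs with the Azure CLI. The resource group, NSG, and storage account names are placeholders, and parameter names may differ slightly between CLI versions.

```azurecli
# Illustrative sketch only; names are placeholders. Verify the flow-log
# parameters against your installed Azure CLI version.
RG=my-resource-group
STORAGE=myflowlogstorage

# Enable flow logs on the NSG applied at the subnet ...
az network watcher flow-log configure \
  --resource-group $RG \
  --nsg my-subnet-nsg \
  --storage-account $STORAGE \
  --enabled true

# ... and on the NSG applied at the network interface, so all traffic is recorded.
az network watcher flow-log configure \
  --resource-group $RG \
  --nsg my-nic-nsg \
  --storage-account $STORAGE \
  --enabled true
```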
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/traffic-analytics-schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics-schema.md
@@ -36,11 +36,11 @@ Traffic Analytics is a cloud-based solution that provides visibility into user a
5. FlowStartTime_t field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between "FlowIntervalStartTime_t" and "FlowIntervalEndTime_t". 6. For any resource in TA, the flows indicated in the UI are total flows seen by the NSG, but in Log Analytics the user will see only the single, reduced record. To see all the flows, use the blob_id field, which can be referenced from Storage. The total flow count for that record will match the individual flows seen in the blob.
-The below query helps you looks at all flow logs from on-premises in the last 30 days.
+The query below helps you look at all subnets interacting with non-Azure public IPs in the last 30 days.
``` AzureNetworkAnalytics_CL | where SubType_s == "FlowLog" and FlowStartTime_t >= ago(30d) and FlowType_s == "ExternalPublic"
-| project Subnet_s
+| project Subnet1_s, Subnet2_s
``` To view the blob path for the flows in the above mentioned query, use the query below:
openshift https://docs.microsoft.com/en-us/azure/openshift/howto-create-a-storageclass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-create-a-storageclass.md
@@ -48,14 +48,14 @@ az storage account create \
## Set permissions ### Set resource group permissions
-The ARO service principal requires 'listKeys' permission on the new Azure storage account resource group. Assign the 'Contributor' role to achieve this.
+The ARO service principal requires 'listKeys' permission on the new Azure storage account resource group. Assign the 'Contributor' role to achieve this.
```bash ARO_RESOURCE_GROUP=aro-rg CLUSTER=cluster
-ARO_SERVICE_PRINCIPAL_ID=$(az aro show -g $ARO_RESOURCE_GROUP -n $CLUSTER –-query servicePrincipalProfile.clientId -o tsv)
+ARO_SERVICE_PRINCIPAL_ID=$(az aro show -g $ARO_RESOURCE_GROUP -n $CLUSTER --query servicePrincipalProfile.clientId -o tsv)
-az role assignment create –-role Contributor -–assignee $ARO_SERVICE_PRINCIPAL_ID -g $AZURE_FILES_RESOURCE_GROUP
+az role assignment create --role Contributor --assignee $ARO_SERVICE_PRINCIPAL_ID -g $AZURE_FILES_RESOURCE_GROUP
``` ### Set ARO cluster permissions
@@ -86,7 +86,7 @@ metadata:
provisioner: kubernetes.io/azure-file parameters: location: $LOCATION
- skuName: Standard_LRS
+ skuName: Standard_LRS
storageAccount: $AZURE_STORAGE_ACCOUNT_NAME resourceGroup: $AZURE_FILES_RESOURCE_GROUP reclaimPolicy: Delete
@@ -127,7 +127,7 @@ oc exec $POD -- bash -c "echo 'azure file storage' >> /data/test.txt"
oc exec $POD -- bash -c "cat /data/test.txt" azure file storage ```
-The test.txt file will also be visible via the Storage Explorer in the Azure portal.
+The test.txt file will also be visible via the Storage Explorer in the Azure portal.
## Next steps
@@ -139,4 +139,4 @@ In this article, you created dynamic persistent storage using Microsoft Azure Fi
Advance to the next article to learn about Azure Red Hat OpenShift 4 supported resources.
-* [Azure Red Hat OpenShift support policy](support-policies-v4.md)
\ No newline at end of file
+* [Azure Red Hat OpenShift support policy](support-policies-v4.md)
private-link https://docs.microsoft.com/en-us/azure/private-link/troubleshoot-private-endpoint-connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/troubleshoot-private-endpoint-connectivity.md
@@ -107,9 +107,11 @@ Review these steps to make sure all the usual configurations are as expected to
- In this case, review the configuration of the private link resource associated with the private endpoint. For more information, see the [Azure Private Link troubleshooting guide](troubleshoot-private-link-connectivity.md). 1. It is always good to narrow down the issue before raising the support ticket. + a. If the source is on-premises and is having issues connecting to a private endpoint in Azure, then try to connect: - To another virtual machine from on-premises, and check whether you have IP connectivity to the virtual network from on-premises. - From a virtual machine in the virtual network to the private endpoint.
+
b. If the source is in Azure and the private endpoint is in a different virtual network, then try to connect: - To the private endpoint from a different source. By doing this, you can isolate any virtual machine-specific issues. - To any virtual machine that is part of the same virtual network as the private endpoint.
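Before opening the support ticket, two quick checks from the source machine often isolate the problem. The commands below are a generic sketch; the FQDN, IP address, and port are placeholders for your own private endpoint.

```bash
# Generic connectivity checks from the source machine (placeholder names and port).
# 1. Confirm the private-link FQDN resolves to the private endpoint's private IP,
#    not to the public IP of the service.
nslookup mystorageaccount.blob.core.windows.net

# 2. Test TCP reachability to the private endpoint's private IP on the service port.
nc -vz -w 5 10.0.1.4 443
```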
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-document-extraction.md
@@ -95,11 +95,11 @@ This file reference object can be generated one of 3 ways:
"outputs": [ { "name": "content",
- "targetName": "content"
+ "targetName": "extracted_content"
}, { "name": "normalized_images",
- "targetName": "normalized_images"
+ "targetName": "extracted_normalized_images"
} ] }
security-center https://docs.microsoft.com/en-us/azure/security-center/quickstart-onboard-aws https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/quickstart-onboard-aws.md
@@ -3,7 +3,7 @@ title: Connect your AWS account to Azure Security Center
description: Monitoring your AWS resources from Azure Security Center author: memildin ms.author: memildin
-ms.date: 9/22/2020
+ms.date: 12/29/2020
ms.topic: quickstart ms.service: security-center manager: rkarlin
@@ -37,7 +37,7 @@ In the screenshot below you can see AWS accounts displayed in Security Center's
|----|:----| |Release state:|Preview<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)] | |Pricing:|Requires [Azure Defender for servers](defender-for-servers-introduction.md)|
-|Required roles and permissions:|**Owner** or **Contributor** on the relevant Azure Subscription|
+|Required roles and permissions:|**Owner** on the relevant Azure subscription<br>**Contributor** can also connect an AWS account if an owner provides the service principal details|
|Clouds:|![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![No](./media/icons/no-icon.png) National/Sovereign (US Gov, China Gov, Other Gov)| |||
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-enable-data-collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-enable-data-collection.md
@@ -27,6 +27,17 @@ Data is collected using:
:::image type="content" source="./media/security-center-enable-data-collection/auto-provisioning-options.png" alt-text="Security Center's auto provisioning settings page":::
+## Availability
+
+| Aspect | Details |
+|-------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Release state: | **Feature**: Auto provisioning is generally available (GA)<br>**Agent and extensions**: Log Analytics agent for Azure VMs is GA, Microsoft Dependency agent is in preview, Policy Add-on for Kubernetes is GA |
+| Pricing: | Free |
+| Supported destinations: | ![Yes](./media/icons/yes-icon.png) Azure machines<br>![No](./media/icons/no-icon.png) Azure Arc machines<br>![No](./media/icons/no-icon.png) Kubernetes nodes<br>![No](./media/icons/no-icon.png) Virtual Machine Scale Sets |
+| Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![Yes](./media/icons/yes-icon.png) US Gov, China Gov, Other Gov |
+| | |
++ ## Why use auto provisioning? Any of the agents and extensions described on this page *can* be installed manually (see [Manual installation of the Log Analytics agent](#manual-agent)). However, **auto provisioning** reduces management overhead by installing all required agents and extensions on existing - and new - machines to ensure faster security coverage for all supported resources.
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-get-started.md
@@ -12,16 +12,23 @@ ms.topic: quickstart
ms.custom: mvc ms.tgt_pltfrm: na ms.workload: na
-ms.date: 09/22/2020
+ms.date: 12/30/2020
ms.author: memildin ---
-# Quickstart: Setting up Azure Security Center
+# Quickstart: Set up Azure Security Center
Azure Security Center provides unified security management and threat protection across your hybrid cloud workloads. While the free features offer limited security for your Azure resources only, enabling Azure Defender extends these capabilities to on-premises and other clouds. Azure Defender helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. You can try Azure Defender at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
-In this article, you upgrade to Azure Defender for added security and install the Log Analytics agent on your machines to monitor for security vulnerabilities and threats.
+This quickstart walks you through enabling Azure Defender for added security and installing the Log Analytics agent on your machines to monitor for security vulnerabilities and threats.
+
+You'll take the following steps:
+
+> [!div class="checklist"]
+> * Enable Security Center on your Azure subscription
+> * Enable Azure Defender on your Azure subscription
+> * Enable automatic data collection
## Prerequisites To get started with Security Center, you must have a subscription to Microsoft Azure. If you do not have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-remediate-recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-remediate-recommendations.md
@@ -1,6 +1,6 @@
--- title: Remediate recommendations in Azure Security Center | Microsoft Docs
-description: This article explains how to remediate recommendations in Azure Security Center to protect your resources and comply with security policies.
+description: This article explains how to respond to recommendations in Azure Security Center to protect your resources and satisfy security policies.
services: security-center documentationcenter: na author: memildin
@@ -21,41 +21,41 @@ Recommendations give you suggestions on how to better secure your resources. You
## Remediation steps <a name="remediation-steps"></a>
-After reviewing all the recommendations, decide which one to remediate first. We recommend that you use the [Secure Score impact](security-center-recommendations.md#monitor-recommendations) to help prioritize what to do first.
+After reviewing all the recommendations, decide which one to remediate first. We recommend that you prioritize the security controls with the highest potential to increase your secure score.
-1. From the list, click the recommendation.
+1. From the list, select a recommendation.
1. Follow the instructions in the **Remediation steps** section. Each recommendation has its own set of instructions. The following screenshot shows remediation steps for configuring applications to only allow traffic over HTTPS.
- ![Recommendation details](./media/security-center-remediate-recommendations/security-center-remediate-recommendation.png)
+ :::image type="content" source="./media/security-center-remediate-recommendations/security-center-remediate-recommendation.png" alt-text="Manual remediation steps for a recommendation" lightbox="./media/security-center-remediate-recommendations/security-center-remediate-recommendation.png":::
-1. Once completed, a notification appears informing you if the remediation succeeded.
+1. Once completed, a notification appears informing you whether the issue is resolved.
-## Quick fix remediation<a name="one-click"></a>
+## Quick fix remediation
-Quick fix simplifies remediation and enables you to quickly increase your secure score, improving your environment's security.
+To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a quick fix option.
-Quick fix enables you to quickly remediate a recommendation on multiple resources.
+Quick fix helps you to quickly remediate a recommendation on multiple resources.
> [!TIP]
-> Quick fix is only available for specific recommendations. To find the recommendations that have the quick fix option, use the dedicated filter at the top of the list of recommendations:
+> Quick fix solutions are only available for specific recommendations. To find the recommendations that have an available quick fix, use the **Response actions** filter for the list of recommendations:
> > :::image type="content" source="media/security-center-remediate-recommendations/quick-fix-filter.png" alt-text="Use the filters above the recommendations list to find recommendations that have the quick fix option":::
-To implement a quick fix remediation:
+To implement a quick fix solution:
-1. From the list of recommendations that have the **Quick Fix!** label, click on the recommendation.
+1. From the list of recommendations that have the **Quick Fix!** label, select a recommendation.
- [![Select Quick Fix!](media/security-center-remediate-recommendations/security-center-one-click-fix-select.png)](media/security-center-remediate-recommendations/security-center-one-click-fix-select.png#lightbox)
+ [![Select Quick Fix!](media/security-center-remediate-recommendations/security-center-quick-fix-select.png)](media/security-center-remediate-recommendations/security-center-quick-fix-select.png#lightbox)
-1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and click **Remediate**.
+1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and select **Remediate**.
> [!NOTE] > Some of the listed resources might be disabled, because you don't have the appropriate permissions to modify them. 1. In the confirmation box, read the remediation details and implications.
- ![Quick fix](./media/security-center-remediate-recommendations/security-center-one-click-fix-view.png)
+ ![Quick fix](./media/security-center-remediate-recommendations/security-center-quick-fix-view.png)
> [!NOTE] > The implications are listed in the grey box in the **Remediate resources** window that opens after clicking **Remediate**. They list what changes happen when proceeding with the quick fix remediation.
@@ -74,7 +74,7 @@ The remediation operation uses a template deployment or REST PATCH API call to a
## Next steps
-In this document, you were shown how to remediate recommendations in Security Center. To learn more about Security Center, see the following topics:
+In this document, you were shown how to remediate recommendations in Security Center. To learn more about Security Center, see the following pages:
* [Setting security policies in Azure Security Center](tutorial-security-policy.md) - Learn how to configure security policies for your Azure subscriptions and resource groups. * [Security health monitoring in Azure Security Center](security-center-monitoring.md) - Learn how to monitor the health of your Azure resources.
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
@@ -37,10 +37,10 @@ Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security
|---------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Release state: | Generally available (GA) | | Pricing: | Requires [Azure Defender for servers](security-center-pricing.md) |
-| Supported platforms: | ![Yes](./media/icons/yes-icon.png) Azure machines running Windows<br>![Yes](./media/icons/yes-icon.png) Azure Arc machines running Windows|
-| Supported versions of Windows: | Defender for Endpoint is built into Windows 10 1703 (and newer) and Windows Server 2019.<br>Security Center supports detection on Windows Server 2016, 2012 R2, and 2008 R2 SP1.<br>Server endpoint monitoring using this integration has been disabled for Office 365 GCC customers. |
+| Supported platforms: | Azure machines running Windows<br>Azure Arc machines running Windows|
+| Supported versions of Windows: | • Defender for Endpoint is built into Windows 10 1703 (and newer) and Windows Server 2019.<br> • Security Center supports detection on Windows Server 2016, 2012 R2, and 2008 R2 SP1.<br> • Server endpoint monitoring using this integration has been disabled for Office 365 GCC customers.<br> • No support for Windows Server 2019 or Linux|
| Required roles and permissions: | To enable/disable the integration: **Security admin** or **Owner**<br>To view MDATP alerts in Security Center: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** |
-| Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds.<br>![No](./media/icons/no-icon.png) GCC customers running workloads in global Azure clouds<br>![Yes](./media/icons/yes-icon.png) US Gov<br>![No](./media/icons/no-icon.png) China Gov, Other Gov |
+| Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![No](./media/icons/no-icon.png) GCC customers running workloads in global Azure clouds<br>![Yes](./media/icons/yes-icon.png) US Gov<br>![No](./media/icons/no-icon.png) China Gov, Other Gov |
| | |
security-center https://docs.microsoft.com/en-us/azure/security-center/tutorial-security-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/tutorial-security-policy.md
@@ -16,7 +16,7 @@ ms.date: 11/04/2019
ms.author: memildin ---
-# Working with security policies
+# Manage security policies
This article explains how security policies are configured, and how to view them in Security Center.
security https://docs.microsoft.com/en-us/azure/security/fundamentals/ddos-protection-security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/ddos-protection-security-baseline.md
@@ -36,7 +36,7 @@ This security baseline applies guidance from the [Azure Security Benchmark](../b
Enable Azure Activity Log diagnostic settings and send the logs to a Log Analytics workspace, Azure event hub, or Azure storage account for archive. Activity logs provide insight into the operations that were performed on your Azure DDoS Protection instances at the control plane level. Using Azure Activity Log data, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) performed at the control plane level for your Azure DDoS Protection instances. -- [How to configure alerts for DDoS protection metrics](../../ddos-protection/telemetry-monitoring-alerting.md#configure-alerts-for-ddos-protection-metrics)
+- [View and configure DDoS diagnostic logging](../../ddos-protection/diagnostic-logging.md)
- [How to enable Diagnostic Settings for Azure Activity Log](../../azure-monitor/platform/activity-log.md)
@@ -58,7 +58,7 @@ Enable Azure Activity Log diagnostic settings and send the logs to a Log Analyti
**Guidance**: Enable Azure Activity Log diagnostic settings and send the logs to a Log Analytics workspace. Perform queries in Log Analytics to search for terms, identify trends, analyze patterns, and provide many other insights based on the Activity Log data that may have been collected for Azure DDoS Protection. -- [Information on how to access telemetry, logs and attack analytics for DDoS Protection Standard service](../../ddos-protection/telemetry-monitoring-alerting.md#configure-alerts-for-ddos-protection-metrics)
+- [Information on how to access telemetry, logs and attack analytics for DDoS Protection Standard service](../../ddos-protection/telemetry.md)
- [How to enable diagnostic settings for Azure Activity Log](../../azure-monitor/platform/activity-log.md)
@@ -74,7 +74,7 @@ Enable Azure Activity Log diagnostic settings and send the logs to a Log Analyti
Onboard a Log Analytics workspace to Azure Sentinel as it provides a security orchestration automated response (SOAR) solution. This allows for playbooks (automated solutions) to be created and used to remediate security issues. Additionally, you can create custom log alerts in your Log Analytics workspace using Azure Monitor. -- [How configure alerts for DDoS metrics](https://azure.microsoft.com/blog/holiday-season-is-ddos-season/)
+- [How to configure alerts for DDoS metrics](../../ddos-protection/alerts.md)
- [How to onboard Azure Sentinel](../../sentinel/quickstart-onboard.md)
@@ -140,7 +140,7 @@ Additionally, to help you keep track of dedicated administrative accounts, you c
- [How to register your client application (service principal) with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad) -- [Azure DDos Protection API information](/rest/api/virtual-network/)
+- [Azure DDoS Protection API information](/rest/api/virtual-network/)
**Azure Security Center monitoring**: Not applicable
@@ -382,7 +382,7 @@ Use Azure Resource Graph to query for and discover resources within their subscr
### 7.1: Establish secure configurations for all Azure resources
-**Guidance**: Define and implement standard security configurations for Azure DDos Protection with Azure Policy. Use Azure Policy aliases in the "Microsoft.Network" namespace to create custom policies to audit or enforce the configuration of your Recovery Services vaults.
+**Guidance**: Define and implement standard security configurations for Azure DDoS Protection with Azure Policy. Use Azure Policy aliases in the "Microsoft.Network" namespace to create custom policies to audit or enforce the configuration of your Azure DDoS Protection resources.
- [How to view available Azure Policy aliases](/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0)
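Besides the PowerShell cmdlet linked above, the same aliases can be browsed with the Azure CLI. A brief sketch; the query path reflects the usual provider response shape and may need adjusting for your CLI version:

```azurecli
# Sketch: list the Azure Policy aliases exposed by the Microsoft.Network provider,
# which you can reference when authoring custom policy definitions.
az provider show --namespace Microsoft.Network \
  --expand "resourceTypes/aliases" \
  --query "resourceTypes[].aliases[].name" -o tsv
```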
@@ -522,7 +522,7 @@ Test your assumptions about how your services will respond to an attack by gener
Select any of the available DDoS protection metrics to alert you when there's an active mitigation during an attack, using the Azure Monitor alert configuration. When the conditions are met, the address specified receives an alert email -- [Configure alerts for DDoS protection metrics](../../ddos-protection/telemetry-monitoring-alerting.md#configure-alerts-for-ddos-protection-metrics)
+- [Configure alerts for DDoS protection metrics](../../ddos-protection/alerts.md)
- [How to configure continuous export](../../security-center/continuous-export.md)
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-performance-tuning-guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-performance-tuning-guidance.md
@@ -16,11 +16,11 @@ Azure Data Lake Storage Gen2 supports high-throughput for I/O intensive analytic
![Data Lake Storage Gen2 performance](./media/data-lake-storage-performance-tuning-guidance/throughput.png)
-Data Lake Storage Gen2 can scale to provide the necessary throughput for all analytics scenario. By default, a Data Lake Storage Gen2 account provides automatically enough throughput to meet the needs of a broad category of use cases. For the cases where customers run into the default limit, the Data Lake Storage Gen2 account can be configured to provide more throughput by contacting [Azure Support](https://azure.microsoft.com/support/faq/).
+Data Lake Storage Gen2 can scale to provide the necessary throughput for all analytics scenarios. By default, a Data Lake Storage Gen2 account provides enough throughput in its default configuration to meet the needs of a broad category of use cases. For the cases where customers run into the default limit, the Data Lake Storage Gen2 account can be configured to provide more throughput by contacting [Azure Support](https://azure.microsoft.com/support/faq/).
## Data ingestion
-When ingesting data from a source system to Data Lake Storage Gen2, it is important to consider that the source hardware, source network hardware, and network connectivity to Data Lake Storage Gen2 can be the bottleneck.
+When ingesting data from a source system to Data Lake Storage Gen2, it is important to consider that the source hardware, source network hardware, or network connectivity to Data Lake Storage Gen2 can be the bottleneck.
![Diagram that shows the factors to consider when ingesting data from a source system to Data Lake Storage Gen2.](./media/data-lake-storage-performance-tuning-guidance/bottleneck.png)
@@ -32,7 +32,7 @@ Whether you are using on-premises machines or VMs in Azure, you should carefully
### Network connectivity to Data Lake Storage Gen2
-The network connectivity between your source data and Data Lake Storage Gen2 can sometimes be the bottleneck. When your source data is On-Premises, consider using a dedicated link with [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) . If your source data is in Azure, the performance will be best when the data is in the same Azure region as the Data Lake Storage Gen2 account.
+The network connectivity between your source data and Data Lake Storage Gen2 can sometimes be the bottleneck. When your source data is On-Premises, consider using a dedicated link with [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/). If your source data is in Azure, the performance will be best when the data is in the same Azure region as the Data Lake Storage Gen2 account.
### Configure data ingestion tools for maximum parallelization
@@ -134,4 +134,4 @@ In addition to the general guidelines above, each application has different para
| [Storm on HDInsight](data-lake-storage-performance-tuning-storm.md)| <ul><li>Number of worker processes</li><li>Number of spout executor instances</li><li>Number of bolt executor instances </li><li>Number of spout tasks</li><li>Number of bolt tasks</li></ul>| ## See also
-* [Overview of Azure Data Lake Storage Gen2](data-lake-storage-introduction.md)
\ No newline at end of file
+* [Overview of Azure Data Lake Storage Gen2](data-lake-storage-introduction.md)
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-initiate-account-failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-initiate-account-failover.md
@@ -7,7 +7,7 @@ author: tamram
ms.service: storage ms.topic: how-to
-ms.date: 06/11/2020
+ms.date: 12/29/2020
ms.author: tamram ms.reviewer: artek ms.subservice: common
@@ -34,6 +34,13 @@ Before you can perform an account failover on your storage account, make sure th
For more information about Azure Storage redundancy, see [Azure Storage redundancy](storage-redundancy.md).
+Keep in mind that the following features and services are not supported for account failover:
+
+- Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files.
+- ADLS Gen2 storage accounts (accounts that have hierarchical namespace enabled) are not supported at this time.
+- A storage account containing premium block blobs cannot be failed over. Storage accounts that support premium block blobs do not currently support geo-redundancy.
+- A storage account containing any [WORM immutability policy](../blobs/storage-blob-immutable-storage.md) enabled containers cannot be failed over. Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.
+ ## Initiate the failover ## [Portal](#tab/azure-portal)
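If none of the blocking conditions above apply, you can check the replication status and start the failover from the command line as well as from the portal. A sketch with the Azure CLI; the account and resource group names are placeholders:

```azurecli
# Sketch: check the Last Sync Time, then initiate an account failover.
# Placeholders: mystorageaccount and my-rg. After failover completes, the
# account is configured as locally redundant (LRS) in the new primary region.
az storage account show \
  --name mystorageaccount \
  --resource-group my-rg \
  --expand geoReplicationStats \
  --query geoReplicationStats.lastSyncTime

az storage account failover \
  --name mystorageaccount \
  --resource-group my-rg \
  --yes
```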
storage https://docs.microsoft.com/en-us/azure/storage/scripts/storage-blobs-container-calculate-billing-size-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/scripts/storage-blobs-container-calculate-billing-size-powershell.md
@@ -8,7 +8,7 @@ ms.service: storage
ms.subservice: blobs ms.devlang: powershell ms.topic: sample
-ms.date: 11/07/2017
+ms.date: 12/29/2020
ms.author: fryu ---
@@ -16,6 +16,9 @@ ms.author: fryu
This script calculates the size of a container in Azure Blob storage for the purpose of estimating billing costs. The script totals the size of the blobs in the container.
+> [!IMPORTANT]
+> The sample script provided in this article may not accurately calculate the billing size for blob snapshots.
+ [!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
@@ -40,6 +43,7 @@ For-Each Signed Identifier[512 bytes]
``` Following is the breakdown:+ * 48 bytes of overhead for each container includes the Last Modified Time, Permissions, Public Settings, and some system metadata. * The container name is stored as Unicode, so take the number of characters and multiply by two.
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/overview-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/overview-faq.md
@@ -84,7 +84,7 @@ A: At this time, you must manually recreate your Azure Data Factory pipelines an
### Q: What is the difference between Apache Spark for Synapse and Apache Spark?
-A: Apache Spark for Synapse IS Apache Spark with added support for integrations with other services (AAD, AzureML, etc.) and additional libraries (mssparktuils, Hummingbird) and pre-tuned performance configurations.
+A: Apache Spark for Synapse is Apache Spark with added support for integrations with other services (AAD, AzureML, etc.), additional libraries (mssparkutils, Hummingbird), and pre-tuned performance configurations.
Any workload that is currently running on Apache Spark will run on Apache Spark for Azure Synapse without change.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/deployment-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/deployment-guide.md
@@ -1067,20 +1067,34 @@ The new VM Extension for SAP uses a Managed Identity assigned to the VM to acces
Example: ```azurecli
+ # Azure CLI on Linux
spID=$(az resource show -g <resource-group-name> -n <vm name> --query identity.principalId --out tsv --resource-type Microsoft.Compute/virtualMachines) rgId=$(az group show -g <resource-group-name> --query id --out tsv) az role assignment create --assignee $spID --role 'Reader' --scope $rgId+
+ # Azure CLI on Windows/PowerShell
+ $spID=az resource show -g <resource-group-name> -n <vm name> --query identity.principalId --out tsv --resource-type Microsoft.Compute/virtualMachines
+ $rgId=az group show -g <resource-group-name> --query id --out tsv
+ az role assignment create --assignee $spID --role 'Reader' --scope $rgId
``` 1. Run the following Azure CLI command to install the Azure Extension for SAP. The extension is currently only supported in AzureCloud. Azure China 21Vianet, Azure Government or any of the other special environments are not yet supported. ```azurecli
- # For Linux machines
+ # Azure CLI on Linux
+ ## For Linux machines
az vm extension set --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Linux --version 1.0 -g <resource-group-name> --vm-name <vm name> --settings '{"system":"SAP"}'
- #For Windows machines
+ ## For Windows machines
az vm extension set --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Windows --version 1.0 -g <resource-group-name> --vm-name <vm name> --settings '{"system":"SAP"}'+
+ # Azure CLI on Windows/PowerShell
+ ## For Linux machines
+ az vm extension set --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Linux --version 1.0 -g <resource-group-name> --vm-name <vm name> --settings '{\"system\":\"SAP\"}'
+
+ ## For Windows machines
+ az vm extension set --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Windows --version 1.0 -g <resource-group-name> --vm-name <vm name> --settings '{\"system\":\"SAP\"}'
``` ## <a name="564adb4f-5c95-4041-9616-6635e83a810b"></a>Checks and Troubleshooting
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/troubleshoot-outbound-smtp-connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/troubleshoot-outbound-smtp-connectivity.md
@@ -1,6 +1,6 @@
--- title: Troubleshoot outbound SMTP connectivity in Azure | Microsoft Docs
-description: Learn the recommended method for sending email and how to troubleshoot issues of outbound SMTP connectivity in Azure.
+description: Learn the recommended method for sending email and how to troubleshoot problems with outbound SMTP connectivity in Azure.
services: virtual-network author: genlin manager: dcscontentpm
@@ -16,53 +16,68 @@ ms.author: genli
---
-# Troubleshoot outbound SMTP connectivity issues in Azure
+# Troubleshoot outbound SMTP connectivity problems in Azure
-Starting on November 15, 2017, outbound email messages that are sent directly to external domains (such as outlook.com and gmail.com) from a virtual machine (VM) are made available only to certain subscription types in Microsoft Azure. Outbound SMTP connections that use TCP port 25 were blocked. (Port 25 is primarily used for unauthenticated email delivery.)
+Starting on November 15, 2017, the ability to send outbound email messages directly to external domains (like outlook.com and gmail.com) from a virtual machine (VM) is available only to certain subscription types in Azure. Outbound SMTP connections that use TCP port 25 were blocked. (Port 25 is used mainly for unauthenticated email delivery.)
-This change in behavior applies only to new subscriptions and new deployments since November 15, 2017.
+This change in behavior applies only to subscriptions and deployments that were created after November 15, 2017.
## Recommended method of sending email
-We recommend you use authenticated SMTP relay services (that typically connect through TCP port 587 or 443 but support other ports, too) to send email from Azure VMs or from Azure App Services. These services are used to maintain IP or domain reputation to minimize the possibility that third-party email providers will reject the message. Such SMTP relay services include but aren't limited to [SendGrid](https://sendgrid.com/partners/azure/). It's also possible you have a secure SMTP relay service that's running on-premises that you can use.
+We recommend you use authenticated SMTP relay services to send email from Azure VMs or from Azure App Service. (These relay services typically connect through TCP port 587 or 443, but they support other ports.) These services are used to maintain IP or domain reputation to minimize the possibility that third-party email providers will reject messages. [SendGrid](https://sendgrid.com/partners/azure/) is one such SMTP relay service, but there are others. You might also have a secure SMTP relay service running on-premises that you can use.
Using these email delivery services isn't restricted in Azure, regardless of the subscription type. ## Enterprise Agreement
-For Enterprise Agreement Azure users, there's no change in the technical ability to send email without using an authenticated relay. Both new and existing Enterprise Agreement users can try outbound email delivery from Azure VMs directly to external email providers without any restrictions from the Azure platform. Although it's not guaranteed that email providers will accept incoming email from any given user, delivery attempts won't be blocked by the Azure platform for VMs within Enterprise Agreement subscriptions. You'll have to work directly with email providers to fix any message delivery or SPAM filtering issues that involve specific providers.
+For Enterprise Agreement Azure users, there's no change in the technical ability to send email without using an authenticated relay. Both new and existing Enterprise Agreement users can try outbound email delivery from Azure VMs directly to external email providers without any restrictions from the Azure platform. There's no guarantee that email providers will accept incoming email from any given user. But the Azure platform won't block delivery attempts for VMs within Enterprise Agreement subscriptions. You'll have to work directly with email providers to fix any message delivery or SPAM filtering problems that involve specific providers.
-## Pay-As-You-Go
+## Pay-as-you-go
-If you signed up before November 15, 2017 for the Pay-As-You-Go subscription, there will be no change in the technical ability to try outbound email delivery. You'll continue to be able to try outbound email delivery from Azure VMs within these subscriptions directly to external email providers without any restrictions from the Azure platform. Again, it's not guaranteed that email providers will accept incoming email from any given user, and users will have to work directly with email providers to fix any message delivery or SPAM filtering issues that involve specific providers.
+If you signed up before November 15, 2017, for a pay-as-you-go subscription, there will be no change in your technical ability to try outbound email delivery. You'll still be able to try outbound email delivery from Azure VMs within these subscriptions directly to external email providers without any restrictions from the Azure platform. Again, there's no guarantee that email providers will accept incoming email from any given user. Users will have to work directly with email providers to fix any message delivery or SPAM filtering issues that involve specific providers.
-For Pay-As-You-Go subscriptions that were created after November 15, 2017, there will be technical restrictions that block email that's sent directly from VMs within these subscriptions. If you want the ability to send email from Azure VMs directly to external email providers (not using an authenticated SMTP relay) and you have an account in good standing with a payment history, you can make a request to remove the restriction in the **Connectivity** section of the **Diagnose and Solve** blade for an Azure Virtual Network resource in the Azure portal. If qualified, your subscription will be enabled or you will receive instructions on next steps.
+For pay-as-you-go subscriptions that were created after November 15, 2017, there will be technical restrictions that block email that's sent directly from VMs within the subscriptions. If you want to be able to send email from Azure VMs directly to external email providers (without using an authenticated SMTP relay) and you have an account in good standing with a payment history, you can request to have the restriction removed. You can do so in the **Connectivity** section of the **Diagnose and Solve** blade for an Azure Virtual Network resource in the Azure portal. If your request is accepted, your subscription will be enabled or you'll receive instructions for next steps.
-After a Pay-As-You-Go subscription is exempted and the VMs have been stopped and started in the Azure portal, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that's routed directly to the internet.
+After a pay-as-you-go subscription is exempted and the VMs are stopped and restarted in the Azure portal, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that's routed directly to the internet.
> [!NOTE]
-> Microsoft reserves the right to revoke this exemption if it's determined that a violation of terms of service has occurred.
+> Microsoft reserves the right to revoke these exemptions if it's determined that a violation of terms of service has occurred.
-## MSDN, Azure Pass, Azure in Open, Education, Azure for Students, Vistual Studio, and Free Trial
+## MSDN, Azure Pass, Azure in Open, Education, Azure for Students, Visual Studio, and Free Trial
-If you created an MSDN, Azure Pass, Azure in Open, Education, Azure Student, Free Trial, or any Visual Studio subscription after November 15, 2017, you'll have technical restrictions that block email that's sent from VMs within these subscriptions directly to email providers. The restrictions are done to prevent abuse. No requests to remove this restriction will be granted.
+If you created one of the following subscription types after November 15, 2017, you'll have technical restrictions that block email that's sent from VMs within the subscription directly to email providers:
+- MSDN
+- Azure Pass
+- Azure in Open
+- Education
+- Azure for Students
+- Free Trial
+- Any Visual Studio subscription
-If you're using these subscription types, you're encouraged to use SMTP relay services, as outlined earlier in this article or change your subscription type.
+The restrictions are in place to prevent abuse. Requests to remove these restrictions won't be granted.
-## Cloud Service Provider (CSP)
+If you're using these subscription types, we encourage you to use SMTP relay services, as outlined earlier in this article, or to change your subscription type.
-If you're using Azure resources through CSP, you can make a request to remove the restriction in the **Connectivity** section of the **Diagnose and Solve** blade for a Virtual Network resource in the Azure Portal. If qualified, your subscription will be enabled or you will receive instructions on next steps.
+## Cloud Solution Provider
-## Microsoft Partner Network (MPN), BizSpark Plus, or Azure Sponsorship
+If you're using Azure resources through a Cloud Solution Provider, you can make a request to remove the restriction in the **Connectivity** section of the **Diagnose and Solve** pane for a virtual network resource in the Azure portal. If your request is accepted, your subscription will be enabled or you'll receive instructions for next steps.
-For Microsoft Partner Network (MPN), BizSpark Plus, or Azure Sponsorship subscriptions that were created after November 15, 2017, there will be technical restrictions that block email that's sent directly from VMs within these subscriptions. If you want the ability to send email from Azure VMs directly to external email providers (not using an authenticated SMTP relay), you can make a request by opening a support case by using the following issue type: **Technical** > **Virtual Network** > **Connectivity** > **Cannot send email (SMTP/Port 25)**. Make sure that you add details about why your deployment has to send mail directly to mail providers instead of using an authenticated relay. Requests will be reviewed and approved at the discretion of Microsoft. Requests may be granted only after additional anti-fraud checks are completed.
+## Microsoft Partner Network, BizSpark Plus, or Azure Sponsorship
-After a subscription is exempted and the VMs have been stopped and started in the Azure portal, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that's routed directly to the internet.
+For subscriptions of the following types that were created after November 15, 2017, there will be technical restrictions that block email that's sent directly from VMs within the subscriptions:
+
+- Microsoft Partner Network (MPN)
+- BizSpark Plus
+- Azure Sponsorship
+
+If you want to be able to send email from Azure VMs directly to external email providers (without using an authenticated SMTP relay), you can make a request by opening a support case by using the following issue type: **Technical** > **Virtual Network** > **Connectivity** > **Cannot send email (SMTP/Port 25)**. Be sure to add details about why your deployment has to send mail directly to mail providers instead of using an authenticated relay. Requests will be reviewed and approved at the discretion of Microsoft. Requests will be granted only after additional antifraud checks are completed.
+
+After a subscription is exempted and the VMs have been stopped and restarted in the Azure portal, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that's routed directly to the internet.
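Whether you use an authenticated relay or an exempted subscription, you can verify from the VM whether the relevant outbound port is reachable. A generic sketch; the relay host name is a placeholder:

```bash
# Generic outbound-port checks from an Azure VM (relay host name is a placeholder).
# Port 587 (authenticated relay) should normally be reachable from any subscription type.
nc -vz -w 5 smtp.example.com 587

# Port 25 will only connect if the subscription is exempted from the block.
nc -vz -w 5 smtp.example.com 25
```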
## Restrictions and limitations -- Routing port 25 traffic via Azure PaaS services like [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) is not supported.
+Routing port 25 traffic via Azure PaaS services like [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) isn't supported.
## Need help? Contact support
-If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your issue resolved quickly by using the following issue type: **Technical** > **Virtual Network** > **Connectivity** > **Cannot send email (SMTP/Port 25)**.
+If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. Use this issue type: **Technical** > **Virtual Network** > **Connectivity** > **Cannot send email (SMTP/Port 25)**.