Updates from: 04/03/2021 03:10:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure AD External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-ad-external-identities-videos.md
Learn how to perform various use cases in Azure AD B2C.
| Video title | Video | Video title | Video |
|:---|:---|:---|:---|
|[Azure AD: Monitoring and reporting Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) 6:57 | [:::image type="icon" source="./media/external-identities-videos/monitoring-reporting-aad-b2c.png" border="false":::](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) | [Azure AD B2C user migration using Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) 7:09 | [:::image type="icon" source="./media/external-identities-videos/user-migration-msgraph-api.png" border="false":::](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) |
-| [Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) 8:22 | [:::image type="icon" source="./media/external-identities-videos/user-migration-stratagies.png" border="false":::](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) | [How to localize or customize language using Azure AD B2C](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) 20:41 | [:::image type="icon" source="./media/external-identities-videos/language-localization.png" border="false":::](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) |
+| [Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) 8:22 | [:::image type="icon" source="./media/external-identities-videos/user-migration-stratagies.png" border="false":::](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) | [How to localize or customize language using Azure AD B2C](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) 20:41 | [:::image type="icon" source="./media/external-identities-videos/language-localization.png" border="false":::](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) |
+|[Configure monitoring: Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=tF2JS6TGc3g&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=14) 17:23 | [:::image type="icon" source="./media/external-identities-videos/configure-monitoring.png" border="false":::](https://www.youtube.com/watch?v=tF2JS6TGc3g&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=14) |
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
If you'd like to share feedback or encounter issues with this feature, share via
Administrator provisioning and de-provisioning of security keys is not available.
+**Note:** FIDO2 cached logon fails on hybrid Azure AD joined machines running Windows 10 version 20H2 when line of sight (LOS) to the domain controller is unavailable. This issue is currently under investigation with engineering.
+ ### UPN changes We are working on a feature that allows a UPN change on hybrid Azure AD joined and Azure AD joined devices. If a user's UPN changes, you can no longer modify FIDO2 security keys to account for the change. The resolution is to reset the device, and the user has to re-register.
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
Title: "Tutorial: Sign-in users in a Node.js & Express web app | Azure"
+ Title: "Tutorial: Sign in users in a Node.js & Express web app | Azure"
description: In this tutorial, you add support for signing in users in a web app.
Last updated 02/17/2021
-# Tutorial: Sign-in users in a Node.js & Express web app
+# Tutorial: Sign in users in a Node.js & Express web app
In this tutorial, you build a web app that signs in users. The web app you build uses the [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node).
In this tutorial, you initialized an MSAL Node [ConfidentialClientApplication](h
If you'd like to dive deeper into Node.js & Express web application development on the Microsoft identity platform, see our multi-part scenario series: > [!div class="nextstepaction"]
-> [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md)
+> [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md)
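The tutorial above centers on MSAL Node's `ConfidentialClientApplication`, which drives the OAuth 2.0 authorization code flow against the Microsoft identity platform. As a language-neutral sketch of that flow's first leg — the authorization-request URL that MSAL builds for you — here is a minimal Python illustration; the tenant, client ID, and redirect URI are placeholder values, not from the tutorial:

```python
from urllib.parse import urlencode

# Placeholder values for illustration only; a real app takes these
# from its Azure AD app registration.
TENANT = "common"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"
REDIRECT_URI = "http://localhost:3000/redirect"

def auth_code_url(scopes):
    """Construct the v2.0 authorization-request URL that an MSAL
    confidential client sends the user to in the auth code flow."""
    params = {
        "client_id": CLIENT_ID,
        "response_type": "code",      # asks for an authorization code
        "redirect_uri": REDIRECT_URI,
        "scope": " ".join(scopes),
    }
    return (
        f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/authorize?"
        + urlencode(params)
    )
```

In MSAL Node itself, `getAuthCodeUrl` and `acquireTokenByCode` handle this construction and the follow-up token exchange for you; the sketch only shows what the first request contains.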
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-organization.md
For example, suppose you work at Woodgrove Bank and you want to collaborate with
- Graphic Design Institute uses Azure AD, and their users have a user principal name that ends with *graphicdesigninstitute.com*. - Contoso does not yet use Azure AD. Contoso users have a user principal name that ends with *contoso.com*.
-In this case, you can configure two connected organizations. You create one connected organization for Graphic Design Institute and one for Contoso. If you then add the two connected organizations to a policy, users from each organization with a user principal name that matches the policy can request access packages. Users with a user principal name that has a domain of *graphicdesigninstitute.com* would match the Graphic Design Institute-connected organization and be allowed to submit requests. Users with a user principal name that has a domain of *contoso.com* would match the Contoso-connected organization and would also be allowed to request packages. And, because Graphic Design Institute uses Azure AD, any users with a principal name that matches a [verified domain](../fundamentals/add-custom-domain.md#verify-your-custom-domain-name) that's added to their tenant, such as *graphicdesigninstitute.example*, would also be able to request access packages by using the same policy.
+In this case, you can configure two connected organizations. You create one connected organization for Graphic Design Institute and one for Contoso. If you then add the two connected organizations to a policy, users from each organization with a user principal name that matches the policy can request access packages. Users with a user principal name that has a domain of *contoso.com* would match the Contoso-connected organization and would be allowed to request packages. Users with a user principal name that has a domain of *graphicdesigninstitute.com* would match the Graphic Design Institute-connected organization and be allowed to submit requests. And, because Graphic Design Institute uses Azure AD, any users with a principal name that matches a [verified domain](../fundamentals/add-custom-domain.md#verify-your-custom-domain-name) that's added to their tenant, such as *graphicdesigninstitute.example*, would also be able to request access packages by using the same policy. If you have [email one-time passcode (OTP) authentication](../external-identities/one-time-passcode.md) turned on, this includes users from those domains who do not yet have Azure AD accounts; they will authenticate by using email OTP when accessing your resources.
![Connected organization example](./media/entitlement-management-organization/connected-organization-example.png)
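The matching behavior described above can be sketched as a small lookup: a requestor's UPN domain is compared against each connected organization's domains (including, for Azure AD tenants, any verified domains). This is an illustrative approximation, not Azure AD's actual implementation; the organization names and domains are the ones from the example:

```python
from typing import Optional

# Illustrative approximation of connected-organization matching;
# not Azure AD's actual implementation.
connected_orgs = {
    "Graphic Design Institute": {
        "graphicdesigninstitute.com",       # UPN suffix
        "graphicdesigninstitute.example",   # verified domain on their tenant
    },
    "Contoso": {"contoso.com"},
}

def matching_org(upn: str) -> Optional[str]:
    """Return the connected organization whose domains include the UPN's domain."""
    domain = upn.rsplit("@", 1)[-1].lower()
    for org, domains in connected_orgs.items():
        if domain in domains:
            return org
    return None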
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Users in this role can read and update basic information of users, groups, and s
> | microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials | > | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs | > | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service pricipal direct access to a group's data |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | > | microsoft.directory/users/assignLicense | Manage user licenses | > | microsoft.directory/users/create | Add users |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the Getavailableextentionproperties service action | > | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete service principals, and read and update all properties | > | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-company-admin | Grant consent for any permission to any application |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service pricipal direct access to a group's data |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.directory/subscribedSkus/allProperties/allTasks | Buy and manage subscriptions and delete subscriptions |
Users in this role can create/manage groups and its settings like naming and exp
> | microsoft.directory/groups/owners/update | Update owners of groups, excluding role-assignable groups | > | microsoft.directory/groups/settings/update | Update settings of groups | > | microsoft.directory/groups/visibility/update | Update the visibility property of groups |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service pricipal direct access to a group's data |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> | microsoft.directory/groups.unified/basic/update | Update basic properties on Microsoft 365 groups with the exclusion of role-assignable groups | > | microsoft.directory/groups.unified/members/update | Update members of Microsoft 365 groups with the exclusion of role-assignable groups | > | microsoft.directory/groups.unified/owners/update | Update owners of Microsoft 365 groups with the exclusion of role-assignable groups |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service pricipal direct access to a group's data |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
active-directory Adobecaptivateprime Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/adobecaptivateprime-tutorial.md
Previously updated : 01/17/2019 Last updated : 03/31/2021 # Tutorial: Azure Active Directory integration with Adobe Captivate Prime
-In this tutorial, you learn how to integrate Adobe Captivate Prime with Azure Active Directory (Azure AD).
-Integrating Adobe Captivate Prime with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Adobe Captivate Prime with Azure Active Directory (Azure AD). When you integrate Adobe Captivate Prime with Azure AD, you can:
-* You can control in Azure AD who has access to Adobe Captivate Prime.
-* You can enable your users to be automatically signed-in to Adobe Captivate Prime (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Adobe Captivate Prime.
+* Enable your users to be automatically signed in to Adobe Captivate Prime with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Adobe Captivate Prime, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Adobe Captivate Prime single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Adobe Captivate Prime single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Adobe Captivate Prime supports **IDP** initiated SSO
-
-## Adding Adobe Captivate Prime from the gallery
-
-To configure the integration of Adobe Captivate Prime into Azure AD, you need to add Adobe Captivate Prime from the gallery to your list of managed SaaS apps.
-
-**To add Adobe Captivate Prime from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Adobe Captivate Prime supports **IDP** initiated SSO.
-4. In the search box, type **Adobe Captivate Prime**, select **Adobe Captivate Prime** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![Adobe Captivate Prime in the results list](common/search-new-app.png)
+## Add Adobe Captivate Prime from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Adobe Captivate Prime based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Adobe Captivate Prime needs to be established.
-
-To configure and test Azure AD single sign-on with Adobe Captivate Prime, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Adobe Captivate Prime Single Sign-On](#configure-adobe-captivate-prime-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Adobe Captivate Prime test user](#create-adobe-captivate-prime-test-user)** - to have a counterpart of Britta Simon in Adobe Captivate Prime that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Adobe Captivate Prime into Azure AD, you need to add Adobe Captivate Prime from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Adobe Captivate Prime** in the search box.
+1. Select **Adobe Captivate Prime** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Adobe Captivate Prime
-To configure Azure AD single sign-on with Adobe Captivate Prime, perform the following steps:
+Configure and test Azure AD SSO with Adobe Captivate Prime using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Adobe Captivate Prime.
-1. In the [Azure portal](https://portal.azure.com/), on the **Adobe Captivate Prime** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Adobe Captivate Prime, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Adobe Captivate Prime SSO](#configure-adobe-captivate-prime-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Adobe Captivate Prime test user](#create-adobe-captivate-prime-test-user)** - to have a counterpart of B.Simon in Adobe Captivate Prime that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Adobe Captivate Prime** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
- ![Adobe Captivate Prime Domain and URLs single sign-on information](common/idp-intiated.png)
-
- a. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the URL:
`https://captivateprime.adobe.com`
- b. In the **Reply URL** text box, type a URL:
+ b. In the **Reply URL** text box, type the URL:
`https://captivateprime.adobe.com/saml/SSO` 5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
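Steps 4a and 4b above set the service provider's SAML entity ID (**Identifier**) and its assertion consumer service endpoint (**Reply URL**). A minimal sanity-check sketch, assuming only that for this app the two values share an origin (true here, though SAML does not require it in general):

```python
from urllib.parse import urlparse

# The two values from the Basic SAML Configuration step above.
identifier = "https://captivateprime.adobe.com"
reply_url = "https://captivateprime.adobe.com/saml/SSO"

def same_origin(a: str, b: str) -> bool:
    """True when two URLs share scheme and host, a common (but not
    universal) expectation for an SP's entity ID and ACS endpoint."""
    pa, pb = urlparse(a), urlparse(b)
    return (pa.scheme, pa.netloc) == (pb.scheme, pb.netloc)
```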
To configure Azure AD single sign-on with Adobe Captivate Prime, perform the fol
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
- 7. Go to **Properties** tab, copy the **User access URL** and paste it in Notepad.
- ![The user access link](./media/adobecaptivateprime-tutorial/tutorial_adobecaptivateprime_appprop.png)
-
-### Configure Adobe Captivate Prime Single Sign-On
-
-To configure single sign-on on **Adobe Captivate Prime** side, you need to send the downloaded **Federation Metadata XML**, copied **User access URL** and appropriate copied URLs from Azure portal to [Adobe Captivate Prime support team](mailto:captivateprimesupport@adobe.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![The user access link](./media/adobecaptivateprime-tutorial/adobe.png)
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Adobe Captivate Prime.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Adobe Captivate Prime.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Adobe Captivate Prime**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Adobe Captivate Prime**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Adobe Captivate Prime SSO
-2. In the applications list, select **Adobe Captivate Prime**.
-
- ![The Adobe Captivate Prime link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Adobe Captivate Prime** side, you need to send the downloaded **Federation Metadata XML**, the copied **User access URL**, and the appropriate copied URLs from the Azure portal to the [Adobe Captivate Prime support team](mailto:captivateprimesupport@adobe.com). They configure this setting so that the SAML SSO connection is set up properly on both sides.
### Create Adobe Captivate Prime test user In this section, you create a user called B.Simon in Adobe Captivate Prime. Work with the [Adobe Captivate Prime support team](mailto:captivateprimesupport@adobe.com) to add the users in the Adobe Captivate Prime platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Adobe Captivate Prime tile in the Access Panel, you should be automatically signed in to the Adobe Captivate Prime for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the Adobe Captivate Prime for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Adobe Captivate Prime tile in My Apps, you should be automatically signed in to the Adobe Captivate Prime for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Adobe Captivate Prime, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Anaplan Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/anaplan-tutorial.md
Previously updated : 01/17/2019 Last updated : 04/02/2021 # Tutorial: Azure Active Directory integration with Anaplan
-In this tutorial, you learn how to integrate Anaplan with Azure Active Directory (Azure AD).
-Integrating Anaplan with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Anaplan with Azure Active Directory (Azure AD). When you integrate Anaplan with Azure AD, you can:
-* You can control in Azure AD who has access to Anaplan.
-* You can enable your users to be automatically signed-in to Anaplan (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Anaplan.
+* Enable your users to be automatically signed in to Anaplan with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Anaplan, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Anaplan single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Anaplan single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Anaplan supports **SP** initiated SSO
+* Anaplan supports **SP** initiated SSO.
-## Adding Anaplan from the gallery
+## Add Anaplan from the gallery
To configure the integration of Anaplan into Azure AD, you need to add Anaplan from the gallery to your list of managed SaaS apps.
-**To add Anaplan from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Anaplan**, select **Anaplan** from result panel then click **Add** button to add the application.
-
- ![Anaplan in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Anaplan** in the search box.
+1. Select **Anaplan** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Anaplan based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Anaplan needs to be established.
+## Configure and test Azure AD SSO for Anaplan
-To configure and test Azure AD single sign-on with Anaplan, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Anaplan using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Anaplan.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Anaplan Single Sign-On](#configure-anaplan-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Anaplan test user](#create-anaplan-test-user)** - to have a counterpart of Britta Simon in Anaplan that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Anaplan, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Anaplan SSO](#configure-anaplan-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Anaplan test user](#create-anaplan-test-user)** - to have a counterpart of B.Simon in Anaplan that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Anaplan, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Anaplan** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Anaplan** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Anaplan Domain and URLs single sign-on information](common/sp-identifier.png)
- a. In the **Sign on URL** text box, type a URL using the following pattern: `https://sdp.anaplan.com/frontdoor/saml/<tenant name>`
![Copy configuration URLs](common/copy-configuration-urls.png)
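The **Sign on URL** pattern above substitutes your Anaplan tenant name into a fixed base URL. A trivial, hypothetical helper makes the substitution explicit (`contoso` is a placeholder, not a real tenant name):

```python
def anaplan_sign_on_url(tenant_name: str) -> str:
    """Build the Anaplan SP-initiated Sign on URL from a tenant name placeholder."""
    return f"https://sdp.anaplan.com/frontdoor/saml/{tenant_name}"

url = anaplan_sign_on_url("contoso")
# → https://sdp.anaplan.com/frontdoor/saml/contoso
```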
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Anaplan Single Sign-On
-
-To configure single sign-on on **Anaplan** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Anaplan support team](mailto:support@anaplan.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Anaplan.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Anaplan.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Anaplan**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Anaplan**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Anaplan SSO
-2. In the applications list, select **Anaplan**.
-
- ![The Anaplan link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Anaplan** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Anaplan support team](mailto:support@anaplan.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
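Before sending the Federation Metadata XML to the support team, it can be useful to confirm which values it carries. This sketch parses a minimal, illustrative sample of Azure AD federation metadata (the tenant ID and endpoints are placeholders, not real values) to pull out the entity ID (the Azure AD Identifier) and the single sign-on endpoint (the Login URL):

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative sample of the downloaded federation metadata;
# TENANT-ID is a placeholder, not real tenant data.
SAMPLE_METADATA = """\
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/TENANT-ID/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/TENANT-ID/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
root = ET.fromstring(SAMPLE_METADATA)
entity_id = root.attrib["entityID"]  # corresponds to the Azure AD Identifier
sso_url = root.find(".//md:SingleSignOnService", NS).attrib["Location"]  # the Login URL
```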
### Create Anaplan test user

In this section, you create a user called B.Simon in Anaplan. Work with the [Anaplan support team](mailto:support@anaplan.com) to add the users in the Anaplan platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Anaplan tile in the Access Panel, you should be automatically signed in to the Anaplan for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the Anaplan Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the Anaplan Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Anaplan tile in My Apps, you're redirected to the Anaplan Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Anaplan, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Clickup Productivity Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/clickup-productivity-platform-tutorial.md
Previously updated : 02/21/2019
Last updated : 03/30/2021

# Tutorial: Azure Active Directory integration with ClickUp Productivity Platform
-In this tutorial, you learn how to integrate ClickUp Productivity Platform with Azure Active Directory (Azure AD).
-Integrating ClickUp Productivity Platform with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ClickUp Productivity Platform with Azure Active Directory (Azure AD). When you integrate ClickUp Productivity Platform with Azure AD, you can:
-* You can control in Azure AD who has access to ClickUp Productivity Platform.
-* You can enable your users to be automatically signed-in to ClickUp Productivity Platform (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ClickUp Productivity Platform.
+* Enable your users to be automatically signed-in to ClickUp Productivity Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with ClickUp Productivity Platform, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* ClickUp Productivity Platform single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ClickUp Productivity Platform single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ClickUp Productivity Platform supports **SP** initiated SSO
+* ClickUp Productivity Platform supports **SP** initiated SSO.
-## Adding ClickUp Productivity Platform from the gallery
+## Add ClickUp Productivity Platform from the gallery
To configure the integration of ClickUp Productivity Platform into Azure AD, you need to add ClickUp Productivity Platform from the gallery to your list of managed SaaS apps.
-**To add ClickUp Productivity Platform from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **ClickUp Productivity Platform**, select **ClickUp Productivity Platform** from result panel then click **Add** button to add the application.
-
- ![ClickUp Productivity Platform in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ClickUp Productivity Platform based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ClickUp Productivity Platform needs to be established.
-
-To configure and test Azure AD single sign-on with ClickUp Productivity Platform, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ClickUp Productivity Platform Single Sign-On](#configure-clickup-productivity-platform-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ClickUp Productivity Platform test user](#create-clickup-productivity-platform-test-user)** - to have a counterpart of Britta Simon in ClickUp Productivity Platform that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ClickUp Productivity Platform** in the search box.
+1. Select **ClickUp Productivity Platform** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-### Configure Azure AD single sign-on
+## Configure and test Azure AD SSO for ClickUp Productivity Platform
-In this section, you enable Azure AD single sign-on in the Azure portal.
+Configure and test Azure AD SSO with ClickUp Productivity Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ClickUp Productivity Platform.
-To configure Azure AD single sign-on with ClickUp Productivity Platform, perform the following steps:
+To configure and test Azure AD SSO with ClickUp Productivity Platform, perform the following steps:
-1. In the [Azure portal](https://portal.azure.com/), on the **ClickUp Productivity Platform** application integration page, select **Single sign-on**.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ClickUp Productivity Platform SSO](#configure-clickup-productivity-platform-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create ClickUp Productivity Platform test user](#create-clickup-productivity-platform-test-user)** - to have a counterpart of B.Simon in ClickUp Productivity Platform that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Configure single sign-on link](common/select-sso.png)
+## Configure Azure AD SSO
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Single sign-on select mode](common/select-saml-option.png)
+1. In the Azure portal, on the **ClickUp Productivity Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![ClickUp Productivity Platform Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL:
+ a. In the **Sign on URL** text box, type the URL:
`https://app.clickup.com/login/sso`

   b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
![The Certificate download link](common/copy-metadataurl.png)
-### Configure ClickUp Productivity Platform Single Sign-On
-
-1. In a different web browser window, sign-on to your ClickUp Productivity Platform tenant as an administrator.
-
-2. Click on the **User profile**, and then select **Settings**.
-
- ![Screenshot shows the ClickUp Productivity tenant with the Settings icon selected.](./media/clickup-productivity-platform-tutorial/configure0.png)
-
- ![Screenshot shows Settings.](./media/clickup-productivity-platform-tutorial/configure1.png)
-
-3. Select **Microsoft**, under Single Sign-On (SSO) Provider.
-
- ![Screenshot shows the Authentication pane with Microsoft selected.](./media/clickup-productivity-platform-tutorial/configure2.png)
-
-4. On the **Configure Microsoft Single Sign On** page, perform the following steps:
-
- ![Screenshot shows the Configure Microsoft Single Sign On page where you can copy the Entity I D and save the Azure Federation Metadata U R L.](./media/clickup-productivity-platform-tutorial/configure3.png)
-
- a. Click **Copy** to copy the Entity ID value and paste it into the **Identifier (Entity ID)** textbox in the **Basic SAML Configuration** section in the Azure portal.
-
- b. In the **Azure Federation Metadata URL** textbox, paste the App Federation Metadata Url value, which you have copied from the Azure portal and then click **Save**.
-
-5. To complete the setup, click **Authenticate With Microsoft to complete setup** and authenticate with microsoft account.
-
- ![Screenshot shows the Authenticate with Microsoft to complete setup button.](./media/clickup-productivity-platform-tutorial/configure4.png)
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
-2. Select **New user** at the top of the screen.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
+### Assign the Azure AD test user
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ClickUp Productivity Platform.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ClickUp Productivity Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- d. Click **Create**.
+## Configure ClickUp Productivity Platform SSO
-### Assign the Azure AD test user
+1. In a different web browser window, sign in to your ClickUp Productivity Platform tenant as an administrator.
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ClickUp Productivity Platform.
+2. Click on the **User profile**, and then select **Settings**.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ClickUp Productivity Platform**.
+ ![Screenshot shows the ClickUp Productivity tenant with the Settings icon selected.](./media/clickup-productivity-platform-tutorial/configure-0.png)
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot shows Settings.](./media/clickup-productivity-platform-tutorial/configure-1.png)
-2. In the applications list, select **ClickUp Productivity Platform**.
+3. Under **Single Sign-On (SSO) Provider**, select **Microsoft**.
- ![The ClickUp Productivity Platform link in the Applications list](common/all-applications.png)
+ ![Screenshot shows the Authentication pane with Microsoft selected.](./media/clickup-productivity-platform-tutorial/configure-2.png)
-3. In the menu on the left, select **Users and groups**.
+4. On the **Configure Microsoft Single Sign On** page, perform the following steps:
- ![The "Users and groups" link](common/users-groups-blade.png)
+ ![Screenshot shows the Configure Microsoft Single Sign On page where you can copy the Entity I D and save the Azure Federation Metadata U R L.](./media/clickup-productivity-platform-tutorial/configure-3.png)
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+ a. Click **Copy** to copy the Entity ID value and paste it into the **Identifier (Entity ID)** textbox in the **Basic SAML Configuration** section in the Azure portal.
- ![The Add Assignment pane](common/add-assign-user.png)
+ b. In the **Azure Federation Metadata URL** textbox, paste the **App Federation Metadata Url** value that you copied from the Azure portal, and then click **Save**.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+5. To complete the setup, click **Authenticate With Microsoft to complete setup** and authenticate with your Microsoft account.
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+ ![Screenshot shows the Authenticate with Microsoft to complete setup button.](./media/clickup-productivity-platform-tutorial/configure-4.png)
-7. In the **Add Assignment** dialog click the **Assign** button.
### Create ClickUp Productivity Platform test user
2. Click on the **User profile**, and then select **People**.
- ![Screenshot shows the ClickUp Productivity tenant.](./media/clickup-productivity-platform-tutorial/configure0.png)
+ ![Screenshot shows the ClickUp Productivity tenant.](./media/clickup-productivity-platform-tutorial/configure-0.png)
- ![Screenshot shows the People link selected.](./media/clickup-productivity-platform-tutorial/user1.png)
+ ![Screenshot shows the People link selected.](./media/clickup-productivity-platform-tutorial/user-1.png)
3. Enter the email address of the user in the textbox and click **Invite**.
- ![Screenshot shows Team Users Settings where you can invite people by email.](./media/clickup-productivity-platform-tutorial/user2.png)
+ ![Screenshot shows Team Users Settings where you can invite people by email.](./media/clickup-productivity-platform-tutorial/user-2.png)
> [!NOTE]
> The user will receive the notification and must accept the invitation to activate the account.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the ClickUp Productivity Platform tile in the Access Panel, you should be automatically signed in to the ClickUp Productivity Platform for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the ClickUp Productivity Platform Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the ClickUp Productivity Platform Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ClickUp Productivity Platform tile in My Apps, you're redirected to the ClickUp Productivity Platform Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ClickUp Productivity Platform, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Eab Navigate Impl Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/eab-navigate-impl-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with EAB Navigate IMPL | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and EAB Navigate IMPL.
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with EAB Implementation | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and EAB Implementation.
Previously updated : 10/22/2019
Last updated : 03/26/2021
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with EAB Navigate IMPL
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with EAB Implementation
-In this tutorial, you'll learn how to integrate EAB Navigate IMPL with Azure Active Directory (Azure AD). When you integrate EAB Navigate IMPL with Azure AD, you can:
+In this tutorial, you'll learn how to integrate EAB Implementation with Azure Active Directory (Azure AD). When you integrate EAB Implementation with Azure AD, you can:
-* Control in Azure AD who has access to EAB Navigate IMPL.
-* Enable your users to be automatically signed-in to EAB Navigate IMPL with their Azure AD accounts.
+* Control in Azure AD who has access to EAB Implementation.
+* Enable your users to be automatically signed-in to EAB Implementation with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* EAB Navigate IMPL single sign-on (SSO) enabled subscription.
+* EAB Implementation single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* EAB Navigate IMPL supports **SP** initiated SSO
+* EAB Implementation supports **SP** initiated SSO.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding EAB Navigate IMPL from the gallery
+## Add EAB Implementation from the gallery
-To configure the integration of EAB Navigate IMPL into Azure AD, you need to add EAB Navigate IMPL from the gallery to your list of managed SaaS apps.
+To configure the integration of EAB Implementation into Azure AD, you need to add EAB Implementation from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **EAB Navigate IMPL** in the search box.
-1. Select **EAB Navigate IMPL** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **EAB Implementation** in the search box.
+1. Select **EAB Implementation** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for EAB Navigate IMPL
+## Configure and test Azure AD SSO for EAB Implementation
-Configure and test Azure AD SSO with EAB Navigate IMPL using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in EAB Navigate IMPL.
+Configure and test Azure AD SSO with EAB Implementation using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in EAB Implementation.
-To configure and test Azure AD SSO with EAB Navigate IMPL, complete the following building blocks:
+To configure and test Azure AD SSO with EAB Implementation, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure EAB Navigate IMPL SSO](#configure-eab-navigate-impl-sso)** - to configure the single sign-on settings on application side.
- * **[Create EAB Navigate IMPL test user](#create-eab-navigate-impl-test-user)** - to have a counterpart of B.Simon in EAB Navigate IMPL that is linked to the Azure AD representation of user.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure EAB Implementation SSO](#configure-eab-implementation-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create EAB Implementation test user](#create-eab-implementation-test-user)** - to have a counterpart of B.Simon in EAB Implementation that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **EAB Navigate IMPL** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **EAB Implementation** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)

1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- In the **Identifier (Entity ID)** text box, enter exactly the following value:
+
+ a. In the **Identifier (Entity ID)** text box, enter exactly the following value:
`https://impl.bouncer.eab.com`
- In the **Reply URL (Assertion Consumer Service URL)** text box, enter both the following values as separate rows:
- `https://impl.bouncer.eab.com/sso/saml2/acs`
- `https://impl.bouncer.eab.com/sso/saml2/acs/`
+ b. In the **Reply URL (Assertion Consumer Service URL)** text box, enter both the following values as separate rows:
+
+ | Reply URL|
+ | -- |
+ | `https://impl.bouncer.eab.com/sso/saml2/acs` |
+ | `https://impl.bouncer.eab.com/sso/saml2/acs/` |
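The two rows above differ only by a trailing slash because reply (ACS) URLs are generally matched as literal strings, so the slash and no-slash variants are distinct values. A minimal sketch of that comparison, assuming a plain set lookup rather than Azure AD's actual matching logic:

```python
# Hypothetical check illustrating why both Reply URL rows are registered:
# ACS URLs are compared as exact strings, so "acs" and "acs/" differ.
CONFIGURED_REPLY_URLS = {
    "https://impl.bouncer.eab.com/sso/saml2/acs",
    "https://impl.bouncer.eab.com/sso/saml2/acs/",
}

def is_allowed_acs(url: str) -> bool:
    """Exact-match lookup against the configured Reply URLs."""
    return url in CONFIGURED_REPLY_URLS
```

Registering both variants means a SAML response posted to either form of the URL is accepted.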
- In the **Sign-on URL** text box, type a URL using the following pattern:
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.navigate.impl.eab.com/`

> [!NOTE]
- > The value is not real. Update the value with the actual Sign-On URL. Contact [EAB Navigate IMPL Client support team](mailto:EABTechSupport@eab.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [EAB Implementation Client support team](mailto:EABTechSupport@eab.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
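The App Federation Metadata Url returns an XML document in the SAML 2.0 metadata format. The values below are hypothetical, but a sketch of pulling the identity provider's entity ID and sign-on endpoint out of such a document might look like:

```python
import xml.etree.ElementTree as ET

# Minimal federation metadata stub (hypothetical values) in the shape defined
# by the SAML 2.0 metadata spec; the real document is served from the
# App Federation Metadata Url copied in the step above.
METADATA = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/common/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}

def parse_metadata(xml_text: str):
    """Return (entity_id, sso_url) from a federation metadata document."""
    root = ET.fromstring(xml_text)
    entity_id = root.get("entityID")
    sso = root.find("md:IDPSSODescriptor/md:SingleSignOnService", NS)
    return entity_id, sso.get("Location")

entity_id, sso_url = parse_metadata(METADATA)
```

This is the information the application side consumes when you hand the metadata URL to the support team.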
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to EAB Navigate IMPL.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to EAB Implementation.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **EAB Navigate IMPL**.
+1. In the applications list, select **EAB Implementation**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure EAB Navigate IMPL SSO
+## Configure EAB Implementation SSO
-To configure single sign-on on **EAB Navigate IMPL** side, you need to send the **App Federation Metadata Url** to [EAB Navigate IMPL support team](mailto:EABTechSupport@eab.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **EAB Implementation** side, you need to send the **App Federation Metadata Url** to the [EAB Implementation support team](mailto:EABTechSupport@eab.com). They use this value to set up the SAML SSO connection properly on both sides.
-### Create EAB Navigate IMPL test user
+### Create EAB Implementation test user
-In this section, you create a user called B.Simon in EAB Navigate IMPL. Work with [EAB Navigate IMPL support team](mailto:EABTechSupport@eab.com) to add the users in the EAB Navigate IMPL platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in EAB Implementation. Work with the [EAB Implementation support team](mailto:EABTechSupport@eab.com) to add the users to the EAB Implementation platform. Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the EAB Navigate IMPL tile in the Access Panel, you should be automatically signed in to the EAB Navigate IMPL for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the EAB Implementation Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the EAB Implementation Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the EAB Implementation tile in My Apps, you will be redirected to the EAB Implementation Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try EAB Navigate IMPL with Azure AD](https://aad.portal.azure.com/)
+Once you configure EAB Implementation, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Eab Navigate Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/eab-navigate-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with EAB Navigate | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and EAB Navigate.
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with EAB | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and EAB.
Previously updated : 01/29/2020 Last updated : 03/30/2021
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with EAB Navigate
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with EAB
-In this tutorial, you'll learn how to integrate EAB Navigate with Azure Active Directory (Azure AD). When you integrate EAB Navigate with Azure AD, you can:
+In this tutorial, you'll learn how to integrate EAB with Azure Active Directory (Azure AD). When you integrate EAB with Azure AD, you can:
-* Control in Azure AD who has access to EAB Navigate.
-* Enable your users to be automatically signed-in to EAB Navigate with their Azure AD accounts.
+* Control in Azure AD who has access to EAB.
+* Enable your users to be automatically signed-in to EAB with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* EAB Navigate single sign-on (SSO) enabled subscription.
+* EAB single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* EAB Navigate supports **SP** initiated SSO
+* EAB supports **SP** initiated SSO.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
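As a rough illustration of what **SP** initiated SSO involves under the hood, the service provider's AuthnRequest is deflated, base64-encoded, and URL-encoded before being sent as a query parameter, per the SAML HTTP-Redirect binding. The request XML below is a stub for illustration, not EAB's actual request:

```python
import base64
import zlib
from urllib.parse import quote, unquote

# Stub AuthnRequest; a real one carries an issuer, timestamps, and a
# destination taken from the identity provider's metadata.
AUTHN_REQUEST = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_example" Version="2.0" '
    'AssertionConsumerServiceURL="https://bouncer.eab.com/sso/saml2/acs"/>'
)

def to_redirect_param(xml_text: str) -> str:
    """Raw DEFLATE + base64 + URL-encode, per the SAML HTTP-Redirect binding."""
    deflater = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 = raw DEFLATE
    deflated = deflater.compress(xml_text.encode()) + deflater.flush()
    return quote(base64.b64encode(deflated).decode())

def from_redirect_param(param: str) -> str:
    """Inverse transform, as the identity provider would apply it."""
    return zlib.decompress(base64.b64decode(unquote(param)), -15).decode()
```

The encoded value travels as the `SAMLRequest` query parameter on the redirect to the identity provider.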
-## Adding EAB Navigate from the gallery
+## Adding EAB from the gallery
-To configure the integration of EAB Navigate into Azure AD, you need to add EAB Navigate from the gallery to your list of managed SaaS apps.
+To configure the integration of EAB into Azure AD, you need to add EAB from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **EAB Navigate** in the search box.
-1. Select **EAB Navigate** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **EAB** in the search box.
+1. Select **EAB** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for EAB Navigate
+## Configure and test Azure AD SSO for EAB
-Configure and test Azure AD SSO with EAB Navigate using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in EAB Navigate.
+Configure and test Azure AD SSO with EAB using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in EAB.
-To configure and test Azure AD SSO with EAB Navigate, complete the following building blocks:
+To configure and test Azure AD SSO with EAB, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
    * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure EAB Navigate SSO](#configure-eab-navigate-sso)** - to configure the single sign-on settings on application side.
- * **[Create EAB Navigate test user](#create-eab-navigate-test-user)** - to have a counterpart of B.Simon in EAB Navigate that is linked to the Azure AD representation of user.
+1. **[Configure EAB SSO](#configure-eab-sso)** - to configure the single sign-on settings on application side.
+ * **[Create EAB test user](#create-eab-test-user)** - to have a counterpart of B.Simon in EAB that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **EAB Navigate** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **EAB** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)

1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- In the **Identifier (Entity ID)** text box, enter exactly the following value:
+ a. In the **Identifier (Entity ID)** text box, enter exactly the following value:
`https://bouncer.eab.com`
- In the **Reply URL (Assertion Consumer Service URL)** text box, enter both the following values as separate rows:
- `https://bouncer.eab.com/sso/saml2/acs`
- `https://bouncer.eab.com/sso/saml2/acs/`
+ b. In the **Reply URL (Assertion Consumer Service URL)** text box, enter both the following values as separate rows:
+
+ | Reply URL |
+ |--|
+ | `https://bouncer.eab.com/sso/saml2/acs` |
+ | `https://bouncer.eab.com/sso/saml2/acs/` |
- In the **Sign-on URL** text box, type a URL using the following pattern:
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.navigate.eab.com/`

> [!NOTE]
- > The value is not real. Update the value with the actual Sign-On URL. Contact [EAB Navigate Client support team](mailto:EABTechSupport@eab.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [EAB Client support team](mailto:EABTechSupport@eab.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to EAB Navigate.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to EAB.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **EAB Navigate**.
+1. In the applications list, select **EAB**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure EAB Navigate SSO
+## Configure EAB SSO
-To configure single sign-on on **EAB Navigate** side, you need to send the **App Federation Metadata Url** to [EAB Navigate support team](mailto:EABTechSupport@eab.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **EAB** side, you need to send the **App Federation Metadata Url** to the [EAB support team](mailto:EABTechSupport@eab.com). They use this value to set up the SAML SSO connection properly on both sides.
-### Create EAB Navigate test user
+### Create EAB test user
-In this section, you create a user called B.Simon in EAB Navigate. Work with [EAB Navigate support team](mailto:EABTechSupport@eab.com) to add the users in the EAB Navigate platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in EAB. Work with the [EAB support team](mailto:EABTechSupport@eab.com) to add the users to the EAB platform. Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the EAB Navigate tile in the Access Panel, you should be automatically signed in to the EAB Navigate for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal. This will redirect to the EAB Sign-on URL, where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to the EAB Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the EAB tile in My Apps, you will be redirected to the EAB Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try EAB Navigate with Azure AD](https://aad.portal.azure.com/) -- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect EAB Navigate with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure EAB, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Envoy Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/envoy-tutorial.md
Previously updated : 08/29/2019 Last updated : 04/01/2021
In this tutorial, you'll learn how to integrate Envoy with Azure Active Directory (Azure AD).
* Enable your users to be automatically signed-in to Envoy with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Envoy supports **SP** initiated SSO
+* Envoy supports **SP** initiated SSO.
-* Envoy supports **Just In Time** user provisioning
+* Envoy supports **Just In Time** user provisioning.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
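As a rough sketch of what **Just In Time** user provisioning means, the application creates its local user record on the user's first successful SSO sign-in instead of requiring accounts to be pre-created. The store and attribute names below are illustrative, not Envoy's actual schema:

```python
# Toy in-memory user store sketching Just In Time provisioning: the
# application-side user record is created on first successful SSO sign-in.
# "nameid" and "displayname" are hypothetical assertion attribute names.
users = {}

def jit_sign_in(assertion: dict) -> dict:
    """Return the local user for a SAML assertion, provisioning on first use."""
    email = assertion["nameid"]  # NameID carried in the SAML assertion
    if email not in users:       # first sign-in: create the counterpart user
        users[email] = {"email": email,
                        "name": assertion.get("displayname", email)}
    return users[email]
```

Subsequent sign-ins by the same NameID reuse the existing record, so no manual user creation step is needed on the application side.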
-## Adding Envoy from the gallery
+## Add Envoy from the gallery
To configure the integration of Envoy into Azure AD, you need to add Envoy from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Envoy** in the search box.
1. Select **Envoy** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Envoy
+## Configure and test Azure AD SSO for Envoy
Configure and test Azure AD SSO with Envoy using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Envoy.
-To configure and test Azure AD SSO with Envoy, complete the following building blocks:
+To configure and test Azure AD SSO with Envoy, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Envoy, complete the following building blocks:
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Envoy** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Envoy** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Envoy.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Envoy**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Envoy SSO
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. In the toolbar on the top, click **Settings**.
- ![Envoy](./media/envoy-tutorial/ic776782.png "Envoy")
+ ![Envoy](./media/envoy-tutorial/envoy-1.png "Envoy")
5. Click **Company**.
- ![Company](./media/envoy-tutorial/ic776783.png "Company")
+ ![Company](./media/envoy-tutorial/envoy-2.png "Company")
6. Click **SAML**.
- ![SAML](./media/envoy-tutorial/ic776784.png "SAML")
+ ![SAML](./media/envoy-tutorial/envoy-3.png "SAML")
7. In the **SAML Authentication** configuration section, perform the following steps:
- ![SAML authentication](./media/envoy-tutorial/ic776785.png "SAML authentication")
+ ![SAML authentication](./media/envoy-tutorial/envoy-4.png "SAML authentication")
> [!NOTE]
> The value for the HQ location ID is auto-generated by the application.
In this section, a user called Britta Simon is created in Envoy. Envoy supports just-in-time user provisioning, which is enabled by default.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Envoy tile in the Access Panel, you should be automatically signed in to the Envoy for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Envoy Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the Envoy Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Envoy tile in My Apps, you will be redirected to the Envoy Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Envoy with Azure AD](https://aad.portal.azure.com/)
+Once you configure Envoy, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Five9 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/five9-tutorial.md
Previously updated : 04/04/2019 Last updated : 03/17/2021

# Tutorial: Azure Active Directory integration with Five9 Plus Adapter (CTI, Contact Center Agents)
-In this tutorial, you learn how to integrate Five9 Plus Adapter (CTI, Contact Center Agents) with Azure Active Directory (Azure AD).
-Integrating Five9 Plus Adapter (CTI, Contact Center Agents) with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Five9 Plus Adapter (CTI, Contact Center Agents) with Azure Active Directory (Azure AD). When you integrate Five9 Plus Adapter (CTI, Contact Center Agents) with Azure AD, you can:
-* You can control in Azure AD who has access to Five9 Plus Adapter (CTI, Contact Center Agents).
-* You can enable your users to be automatically signed-in to Five9 Plus Adapter (CTI, Contact Center Agents) (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Five9 Plus Adapter (CTI, Contact Center Agents).
+* Enable your users to be automatically signed-in to Five9 Plus Adapter (CTI, Contact Center Agents) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Five9 Plus Adapter (CTI, Contact Center Agents), you need the following items:

* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
-* Five9 Plus Adapter (CTI, Contact Center Agents) single sign-on enabled subscription
+* Five9 Plus Adapter (CTI, Contact Center Agents) single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Five9 Plus Adapter (CTI, Contact Center Agents) supports **IDP** initiated SSO
-
-## Adding Five9 Plus Adapter (CTI, Contact Center Agents) from the gallery
-
-To configure the integration of Five9 Plus Adapter (CTI, Contact Center Agents) into Azure AD, you need to add Five9 Plus Adapter (CTI, Contact Center Agents) from the gallery to your list of managed SaaS apps.
-
-**To add Five9 Plus Adapter (CTI, Contact Center Agents) from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Five9 Plus Adapter (CTI, Contact Center Agents)**, select **Five9 Plus Adapter (CTI, Contact Center Agents)** from result panel then click **Add** button to add the application.
+* Five9 Plus Adapter (CTI, Contact Center Agents) supports **IDP** initiated SSO.
- ![Five9 Plus Adapter (CTI, Contact Center Agents) in the results list](common/search-new-app.png)
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
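For contrast with the SP-initiated tutorials above, **IDP** initiated SSO means there is no AuthnRequest from the application: the identity provider sends an unsolicited SAMLResponse straight to the application's ACS URL using the HTTP-POST binding, where the XML is base64-encoded (no DEFLATE) in a hidden form field. The response XML below is a stub:

```python
import base64

# Stub SAMLResponse; a real one carries a signed assertion for the user.
SAML_RESPONSE_XML = (
    '<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_example" Version="2.0"/>'
)

def to_post_field(xml_text: str) -> str:
    """Encode a SAMLResponse for the SAML HTTP-POST binding (base64 only)."""
    return base64.b64encode(xml_text.encode()).decode()

# The encoded value is carried in a hidden form field that the browser
# auto-submits to the application's ACS URL.
hidden_field = f'<input type="hidden" name="SAMLResponse" value="{to_post_field(SAML_RESPONSE_XML)}"/>'
```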
-## Configure and test Azure AD single sign-on
+## Add Five9 Plus Adapter (CTI, Contact Center Agents) from the gallery
-In this section, you configure and test Azure AD single sign-on with Five9 Plus Adapter (CTI, Contact Center Agents) based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Five9 Plus Adapter (CTI, Contact Center Agents) needs to be established.
-
-To configure and test Azure AD single sign-on with Five9 Plus Adapter (CTI, Contact Center Agents), you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Five9 Plus Adapter (CTI, Contact Center Agents) Single Sign-On](#configure-five9-plus-adapter-cti-contact-center-agents-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Five9 Plus Adapter (CTI, Contact Center Agents) test user](#create-five9-plus-adapter-cti-contact-center-agents-test-user)** - to have a counterpart of Britta Simon in Five9 Plus Adapter (CTI, Contact Center Agents) that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Five9 Plus Adapter (CTI, Contact Center Agents) into Azure AD, you need to add Five9 Plus Adapter (CTI, Contact Center Agents) from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Five9 Plus Adapter (CTI, Contact Center Agents)** in the search box.
+1. Select **Five9 Plus Adapter (CTI, Contact Center Agents)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Five9 Plus Adapter (CTI, Contact Center Agents)
-To configure Azure AD single sign-on with Five9 Plus Adapter (CTI, Contact Center Agents), perform the following steps:
+Configure and test Azure AD SSO with Five9 Plus Adapter (CTI, Contact Center Agents) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Five9 Plus Adapter (CTI, Contact Center Agents).
-1. In the [Azure portal](https://portal.azure.com/), on the **Five9 Plus Adapter (CTI, Contact Center Agents)** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Five9 Plus Adapter (CTI, Contact Center Agents), perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Five9 Plus Adapter (CTI, Contact Center Agents) SSO](#configure-five9-plus-adapter-cti-contact-center-agents-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Five9 Plus Adapter (CTI, Contact Center Agents) test user](#create-five9-plus-adapter-cti-contact-center-agents-test-user)** - to have a counterpart of B.Simon in Five9 Plus Adapter (CTI, Contact Center Agents) that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Five9 Plus Adapter (CTI, Contact Center Agents)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
- ![Five9 Plus Adapter (CTI, Contact Center Agents) Domain and URLs single sign-on information](common/idp-intiated.png)
-
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type one of the following URLs:
| Environment | URL |
| :-- | :-- |
To configure Azure AD single sign-on with Five9 Plus Adapter (CTI, Contact Cente
| For “Five9 Plus Adapter for Zendesk” | `https://app.five9.com/appsvcs/saml/metadata/alias/zd` |
| For “Five9 Plus Adapter for Agent Desktop Toolkit” | `https://app.five9.com/appsvcs/saml/metadata/alias/adt` |
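For reference, the fixed Identifier values in the table above can be captured in a small lookup. This is only an illustrative sketch; the adapter names are used as plain dictionary keys:

```python
# Fixed SAML Identifier (Entity ID) metadata URLs per Five9 Plus Adapter,
# taken from the table above. Because these are fixed string values, only
# one instance of the application can be configured in one tenant.
FIVE9_IDENTIFIERS = {
    "Five9 Plus Adapter for Zendesk": "https://app.five9.com/appsvcs/saml/metadata/alias/zd",
    "Five9 Plus Adapter for Agent Desktop Toolkit": "https://app.five9.com/appsvcs/saml/metadata/alias/adt",
}

def identifier_for(adapter: str) -> str:
    """Return the SAML Identifier URL for a given adapter flavor."""
    return FIVE9_IDENTIFIERS[adapter]
```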
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type one of the following URLs:
| Environment | URL |
| :-- | :-- |
To configure Azure AD single sign-on with Five9 Plus Adapter (CTI, Contact Cente
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
- b. Azure AD Identifier
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
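The test-user steps above can also be done programmatically through Microsoft Graph (`POST https://graph.microsoft.com/v1.0/users`). The sketch below only builds the request body; the UPN, domain, and password shown are placeholders, and you still need an authenticated Graph client to send the request:

```python
def build_test_user_payload(display_name: str, upn: str, password: str) -> dict:
    """Build the Microsoft Graph request body for creating a user
    (POST /v1.0/users). All values passed in here are placeholders."""
    return {
        "accountEnabled": True,
        "displayName": display_name,
        # mailNickname is derived from the local part of the UPN for brevity.
        "mailNickname": upn.split("@")[0],
        "userPrincipalName": upn,
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": password,
        },
    }
```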
- c. Logout URL
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Five9 Plus Adapter (CTI, Contact Center Agents).
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Five9 Plus Adapter (CTI, Contact Center Agents)**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
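The assignment above corresponds to creating an app role assignment in Microsoft Graph (`POST /v1.0/users/{id}/appRoleAssignments`). A sketch of the request body follows; all IDs are placeholders, and the all-zero GUID is the `appRoleId` used when only the "Default Access" role exists:

```python
DEFAULT_ACCESS_ROLE_ID = "00000000-0000-0000-0000-000000000000"

def build_app_role_assignment(user_object_id: str,
                              service_principal_id: str,
                              app_role_id: str = DEFAULT_ACCESS_ROLE_ID) -> dict:
    """Build the request body for POST /v1.0/users/{id}/appRoleAssignments.
    The user and service principal object IDs are placeholders here."""
    return {
        "principalId": user_object_id,       # the assigned user's object ID
        "resourceId": service_principal_id,  # the app's service principal ID
        "appRoleId": app_role_id,            # the role granted to the user
    }
```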
-### Configure Five9 Plus Adapter (CTI, Contact Center Agents) Single Sign-On
+## Configure Five9 Plus Adapter (CTI, Contact Center Agents) SSO
1. To configure single sign-on on **Five9 Plus Adapter (CTI, Contact Center Agents)** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URL(s) to [Five9 Plus Adapter (CTI, Contact Center Agents) support team](https://www.five9.com/about/contact). Additionally, to configure SSO further, follow the steps below according to the adapter:
To configure Azure AD single sign-on with Five9 Plus Adapter (CTI, Contact Cente
c. “Five9 Plus Adapter for Zendesk” Admin Guide: [https://webapps.five9.com/assets/files/for_customers/documentation/integrations/zendesk/zendesk-plus-administrators-guide.pdf](https://webapps.five9.com/assets/files/for_customers/documentation/integrations/zendesk/zendesk-plus-administrators-guide.pdf)
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Five9 Plus Adapter (CTI, Contact Center Agents).
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Five9 Plus Adapter (CTI, Contact Center Agents)**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Five9 Plus Adapter (CTI, Contact Center Agents)**.
-
- ![The Five9 Plus Adapter (CTI, Contact Center Agents) link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
### Create Five9 Plus Adapter (CTI, Contact Center Agents) test user

In this section, you create a user called Britta Simon in Five9 Plus Adapter (CTI, Contact Center Agents). Work with [Five9 Plus Adapter (CTI, Contact Center Agents) support team](https://www.five9.com/about/contact) to add the users in the Five9 Plus Adapter (CTI, Contact Center Agents) platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Five9 Plus Adapter (CTI, Contact Center Agents tile in the Access Panel, you should be automatically signed in to the Five9 Plus Adapter (CTI, Contact Center Agents) for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Five9 Plus Adapter (CTI, Contact Center Agents) for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Five9 Plus Adapter (CTI, Contact Center Agents) tile in the My Apps, you should be automatically signed in to the Five9 Plus Adapter (CTI, Contact Center Agents) for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Five9 Plus Adapter (CTI, Contact Center Agents) you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Greenhouse Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/greenhouse-tutorial.md
Previously updated : 11/25/2020 Last updated : 03/26/2021

# Tutorial: Azure Active Directory integration with Greenhouse
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Greenhouse supports **SP** initiated SSO
+* Greenhouse supports **SP and IDP** initiated SSO.
## Adding Greenhouse from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Greenhouse** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, perform the following steps:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<companyname>.greenhouse.io`
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<COMPANYNAME>.greenhouse.io`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<companyname>.greenhouse.io`
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL|
+ | -- |
+ | `https://<COMPANYNAME>.greenhouse.io/users/saml/consume` |
+ | `https://app.greenhouse.io/<ENTITY ID>/users/saml/consume` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<COMPANYNAME>.greenhouse.io`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Greenhouse Client support team](https://www.greenhouse.io/contact) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Greenhouse Client support team](https://www.greenhouse.io/contact) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
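If you script your tenant configuration, the Greenhouse URL patterns above can be derived from the company name. This is only a sketch — `<COMPANYNAME>` is a placeholder, and the actual values must still come from the Greenhouse support team:

```python
def greenhouse_saml_urls(company: str) -> dict:
    """Build the Basic SAML Configuration values for Greenhouse from the
    patterns shown above. The company name is a placeholder; confirm the
    real values with Greenhouse support."""
    base = f"https://{company}.greenhouse.io"
    return {
        "identifier": base,                       # Identifier (Entity ID)
        "reply_url": f"{base}/users/saml/consume",  # Reply URL (ACS)
        "sign_on_url": base,                      # Sign-on URL (SP mode)
    }
```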
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
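If you later need the signing certificate on its own (for example, to send it to the application's support team), it can be extracted from the downloaded Federation Metadata XML. A minimal sketch using only the standard library, assuming the usual XML-DSig namespace in the metadata document:

```python
import xml.etree.ElementTree as ET

DS_NS = "http://www.w3.org/2000/09/xmldsig#"

def signing_cert_from_metadata(metadata_xml: str) -> str:
    """Return the first Base64 X509Certificate value found in a SAML
    metadata document. Assumes the standard XML-DSig namespace."""
    root = ET.fromstring(metadata_xml)
    cert = root.find(f".//{{{DS_NS}}}X509Certificate")
    if cert is None or cert.text is None:
        raise ValueError("no X509Certificate element in metadata")
    return cert.text.strip()
```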
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![screenshot for the sso page](./media/greenhouse-tutorial/configure.png)
-1. Perform the following steps in the Single Sign-On page.
+1. Perform the following steps in the **Single Sign-On** page.
![screenshot for the sso configuration page](./media/greenhouse-tutorial/sso-page.png)
In order to enable Azure AD users to log into Greenhouse, they must be provision
>[!NOTE]
>The Azure Active Directory account holders will receive an email including a link to confirm the account before it becomes active.
-### Test SSO
+## Test SSO
In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to Greenhouse Sign-on URL where you can initiate the login flow.
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Greenhouse Sign on URL where you can initiate the login flow.
* Go to Greenhouse Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Greenhouse tile in the My Apps, this will redirect to Greenhouse Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Greenhouse for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Greenhouse tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Greenhouse for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+ ## Next steps
active-directory Netskope Cloud Security Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/netskope-cloud-security-tutorial.md
Previously updated : 12/17/2020 Last updated : 04/02/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Netskope Administrator Console supports **SP and IDP** initiated SSO
+* Netskope Administrator Console supports **SP and IDP** initiated SSO.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Netskope Administrator Console from the gallery
+## Add Netskope Administrator Console from the gallery
To configure the integration of Netskope Administrator Console into Azure AD, you need to add Netskope Administrator Console from the gallery to your list of managed SaaS apps.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Click on the **Settings** tab from the left navigation pane.
- ![Screenshot shows Setting selected in the navigation pane.](./media/netskope-cloud-security-tutorial/config-settings.png)
+ ![Screenshot shows Setting selected in the navigation pane.](./media/netskope-cloud-security-tutorial/configure-settings.png)
1. Click **Administration** tab.
- ![Screenshot shows Administration selected from Settings.](./media/netskope-cloud-security-tutorial/config-administration.png)
+ ![Screenshot shows Administration selected from Settings.](./media/netskope-cloud-security-tutorial/administration.png)
1. Click **SSO** tab.
- ![Screenshot shows S S O selected in Administration.](./media/netskope-cloud-security-tutorial/config-sso.png)
+ ![Screenshot shows S S O selected in Administration.](./media/netskope-cloud-security-tutorial/tab.png)
1. On the **Network Settings** section, perform the following steps:
- ![Screenshot shows Network Settings where you can enter the values described.](./media/netskope-cloud-security-tutorial/config-pasteurls.png)
+ ![Screenshot shows Network Settings where you can enter the values described.](./media/netskope-cloud-security-tutorial/network.png)
a. Copy **Assertion Consumer Service URL** value and paste it into the **Reply URL** textbox in the **Basic SAML Configuration** section in the Azure portal.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Click on the **EDIT SETTINGS** under the **SSO/SLO Settings** section.
- ![Screenshot shows S S O / S L O Settings where you can select EDIT SETTINGS.](./media/netskope-cloud-security-tutorial/config-editsettings.png)
+ ![Screenshot shows S S O / S L O Settings where you can select EDIT SETTINGS.](./media/netskope-cloud-security-tutorial/settings.png)
1. On the **Settings** popup window, perform the following steps:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Click on the **Settings** tab from the left navigation pane.
- ![Screenshot shows Settings selected.](./media/netskope-cloud-security-tutorial/config-settings.png)
+ ![Screenshot shows Settings selected.](./media/netskope-cloud-security-tutorial/configure-settings.png)
1. Click **Active Platform** tab.
- ![Screenshot shows Active Platform selected from Settings.](./media/netskope-cloud-security-tutorial/user1.png)
+ ![Screenshot shows Active Platform selected from Settings.](./media/netskope-cloud-security-tutorial/user-1.png)
1. Click **Users** tab.
active-directory Oracle Peoplesoft Protected By F5 Big Ip Apm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial.md
Previously updated : 09/14/2020 Last updated : 03/22/2021
To get started, you need the following items:
2. F5 BIG-IP Access Policy Manager™ (APM) standalone license
3. F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM).
4. In addition to the above license, the F5 system may also be licensed with:
- * A URL Filtering subscription to use the URL category database
- * An F5 IP Intelligence subscription to detect and block known attackers and malicious traffic
- * A network hardware security module (HSM) to safeguard and manage digital keys for strong authentication
-1. F5 BIG-IP system is provisioned with APM modules (LTM is optional)
+ * A URL Filtering subscription to use the URL category database.
+ * An F5 IP Intelligence subscription to detect and block known attackers and malicious traffic.
+ * A network hardware security module (HSM) to safeguard and manage digital keys for strong authentication.
+1. F5 BIG-IP system is provisioned with APM modules (LTM is optional).
1. Although optional, it is highly recommended to deploy the F5 systems in a [sync/failover device group](https://techdocs.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/big-ip-device-service-clustering-administration-14-1-0.html) (S/F DG), which includes the active standby pair, with a floating IP address for high availability (HA). Further interface redundancy can be achieved using the Link Aggregation Control Protocol (LACP). LACP manages the connected physical interfaces as a single virtual interface (aggregate group) and detects any interface failures within the group.

## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Oracle PeopleSoft - Protected by F5 BIG-IP APM supports **SP and IDP** initiated SSO
+* Oracle PeopleSoft - Protected by F5 BIG-IP APM supports **SP and IDP** initiated SSO.
-## Adding Oracle PeopleSoft - Protected by F5 BIG-IP APM from the gallery
+## Add Oracle PeopleSoft - Protected by F5 BIG-IP APM from the gallery
To configure the integration of Oracle PeopleSoft - Protected by F5 BIG-IP APM into Azure AD, you need to add Oracle PeopleSoft - Protected by F5 BIG-IP APM from the gallery to your list of managed SaaS apps.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Oracle PeopleSoft - Protected by F5 BIG-IP APM** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Navigate to **Local Traffic > Profiles > SSL > Client > +**, complete the follow
>[!Note]
> Reference https://docs.oracle.com/cd/E12530_01/oam.1014/e10356/people.htm
-1. Logon to Peoplesoft Console `https://<FQDN>.peoplesoft.f5.com/:8000/psp/ps/?cmd=start` using Admin credentials(Example: PS/PS)
+1. Logon to Peoplesoft Console `https://<FQDN>.peoplesoft.f5.com:8000/psp/ps/?cmd=start` using Admin credentials (Example: PS/PS).
![Manager self services](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/people-soft-console.png)
To add single logout support for all PeopleSoft users, please follow these steps:
* Navigate to **Local Traffic > Virtual Servers > Virtual Server List > PeopleSoftApp > Resources**. Click the **Manage…** button:
- * Specify `<Name>` as Enabled iRule and click **Finished**
+ * Specify `<Name>` as Enabled iRule and click **Finished**.
![_iRule_PeopleSoftApp ](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/irule-people-soft.png)
In this section, you test your Azure AD single sign-on configuration with follow
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Oracle PeopleSoft-Protected by F5 BIG-IP APM for which you set up the SSO
-
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Oracle PeopleSoft-Protected by F5 BIG-IP APM tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Oracle PeopleSoft-Protected by F5 BIG-IP APM for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Oracle PeopleSoft-Protected by F5 BIG-IP APM for which you set up the SSO.
+You can also use Microsoft My Apps to test the application in any mode. When you click the Oracle PeopleSoft-Protected by F5 BIG-IP APM tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Oracle PeopleSoft-Protected by F5 BIG-IP APM for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
## Next steps
-Once you configure Oracle PeopleSoft - Protected by F5 BIG-IP APM you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Oracle PeopleSoft-Protected by F5 BIG-IP APM you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Paloaltonetworks Aperture Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/paloaltonetworks-aperture-tutorial.md
Previously updated : 09/10/2020 Last updated : 03/22/2021

# Tutorial: Azure Active Directory integration with Palo Alto Networks - Aperture
-In this tutorial, you learn how to integrate Palo Alto Networks - Aperture with Azure Active Directory (Azure AD).
-Integrating Palo Alto Networks - Aperture with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Palo Alto Networks - Aperture with Azure Active Directory (Azure AD). When you integrate Palo Alto Networks - Aperture with Azure AD, you can:
-* You can control in Azure AD who has access to Palo Alto Networks - Aperture.
-* You can enable your users to be automatically signed-in to Palo Alto Networks - Aperture (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to Palo Alto Networks - Aperture.
+* Enable your users to be automatically signed-in to Palo Alto Networks - Aperture with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Palo Alto Networks - Aperture, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Palo Alto Networks - Aperture single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Palo Alto Networks - Aperture single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Palo Alto Networks - Aperture supports **SP** and **IDP** initiated SSO
+* Palo Alto Networks - Aperture supports **SP** and **IDP** initiated SSO.
-## Adding Palo Alto Networks - Aperture from the gallery
+## Add Palo Alto Networks - Aperture from the gallery
To configure the integration of Palo Alto Networks - Aperture into Azure AD, you need to add Palo Alto Networks - Aperture from the gallery to your list of managed SaaS apps.
For single sign-on to work, a link relationship between an Azure AD user and the
To configure and test Azure AD single sign-on with Palo Alto Networks - Aperture, perform the following steps: 1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
2. **[Configure Palo Alto Networks - Aperture SSO](#configure-palo-alto-networksaperture-sso)** - to configure the Single Sign-On settings on application side.
- * **[Create Palo Alto Networks - Aperture test user](#create-palo-alto-networksaperture-test-user)** - to have a counterpart of Britta Simon in Palo Alto Networks - Aperture that is linked to the Azure AD representation of user.
+ 1. **[Create Palo Alto Networks - Aperture test user](#create-palo-alto-networksaperture-test-user)** - to have a counterpart of Britta Simon in Palo Alto Networks - Aperture that is linked to the Azure AD representation of user.
3. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot that shows the "Basic S A M L Configuration" with the "Identifier" and "Reply U R L" text boxes highlighted, and the "Save" action selected.](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `https://<subdomain>.aperture.paloaltonetworks.com/d/users/saml/metadata`
Follow these steps to enable Azure AD SSO in the Azure portal.
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Palo Alto Networks - Aperture Domain and URLs single sign-on information SP](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<subdomain>.aperture.paloaltonetworks.com/d/users/saml/sign_in`
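The Identifier and Sign-on URL patterns above share the same base and differ only in their final path segment. A minimal sketch, assuming a hypothetical `contoso` subdomain, shows how both resolve for a given tenant:

```python
# Build the tenant-specific SAML URLs from the documented patterns.
# The subdomain "contoso" is a hypothetical placeholder, not a real tenant.
BASE = "https://{subdomain}.aperture.paloaltonetworks.com/d/users/saml"

def aperture_urls(subdomain: str) -> dict:
    root = BASE.format(subdomain=subdomain)
    return {
        "identifier": f"{root}/metadata",  # Identifier (Entity ID) pattern
        "sign_on": f"{root}/sign_in",      # SP-initiated Sign-on URL pattern
    }

print(aperture_urls("contoso")["identifier"])
# https://contoso.aperture.paloaltonetworks.com/d/users/saml/metadata
```

Substitute your own Aperture subdomain when filling in the **Basic SAML Configuration** text boxes.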
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
2. On the top menu bar, click **SETTINGS**.
- ![The settings tab](./media/paloaltonetworks-aperture-tutorial/tutorial_paloaltonetwork_settings.png)
+ ![The settings tab](./media/paloaltonetworks-aperture-tutorial/settings.png)
3. Navigate to the **APPLICATION** section and click **Authentication** from the left side of the menu.
- ![The Auth tab](./media/paloaltonetworks-aperture-tutorial/tutorial_paloaltonetwork_auth.png)
+ ![The Auth tab](./media/paloaltonetworks-aperture-tutorial/authentication.png)
4. On the **Authentication** page, perform the following steps:
- ![The authentication tab](./media/paloaltonetworks-aperture-tutorial/tutorial_paloaltonetwork_singlesignon.png)
+ ![The authentication tab](./media/paloaltonetworks-aperture-tutorial/tab.png)
a. Check **Enable Single Sign-On(Supported SSP Providers are Okta, One login)** in the **Single Sign-On** field.
active-directory Rsa Archer Suite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/rsa-archer-suite-tutorial.md
Previously updated : 09/01/2020 Last updated : 04/02/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* RSA Archer Suite supports **SP** initiated SSO
-* RSA Archer Suite supports **Just In Time** user provisioning
+* RSA Archer Suite supports **SP** initiated SSO.
+* RSA Archer Suite supports **Just In Time** user provisioning.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding RSA Archer Suite from the gallery
+## Add RSA Archer Suite from the gallery
To configure the integration of RSA Archer Suite into Azure AD, you need to add RSA Archer Suite from the gallery to your list of managed SaaS apps.
To configure the integration of RSA Archer Suite into Azure AD, you need to add
1. In the **Add from the gallery** section, type **RSA Archer Suite** in the search box.
1. Select **RSA Archer Suite** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for RSA Archer Suite

Configure and test Azure AD SSO with RSA Archer Suite using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in RSA Archer Suite.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **RSA Archer Suite** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Perform the following steps on the following page.
- ![Configure RSA Archer Suite SSO](./media/rsa-archer-suite-tutorial/configuring-saml-sso.png)
+ ![Configure RSA Archer Suite SSO](./media/rsa-archer-suite-tutorial/configuration.png)
a. Go to the **Single Sign-On** tab and select **SAML** as a **Single Sign-On Mode** from the dropdown.
active-directory Skyhighnetworks Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/skyhighnetworks-tutorial.md
Previously updated : 06/23/2020 Last updated : 03/31/2021

# Tutorial: Integrate MVISION Cloud Azure AD SSO Configuration with Azure Active Directory
In this tutorial, you'll learn how to integrate MVISION Cloud Azure AD SSO Confi
* Enable your users to be automatically signed-in to MVISION Cloud Azure AD SSO Configuration with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* MVISION Cloud Azure AD SSO Configuration single sign-on (SSO) enabled subscription.

## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* MVISION Cloud Azure AD SSO Configuration supports **SP and IDP** initiated SSO
-* Once you configure Dropbox you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real-time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+* MVISION Cloud Azure AD SSO Configuration supports **SP and IDP** initiated SSO.
-## Adding MVISION Cloud Azure AD SSO Configuration from the gallery
+## Add MVISION Cloud Azure AD SSO Configuration from the gallery
To configure the integration of MVISION Cloud Azure AD SSO Configuration into Azure AD, you need to add MVISION Cloud Azure AD SSO Configuration from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **MVISION Cloud Azure AD SSO Configuration** in the search box.
1. Select **MVISION Cloud Azure AD SSO Configuration** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for MVISION Cloud Azure AD SSO Configuration
Configure and test Azure AD SSO with MVISION Cloud Azure AD SSO Configuration using a test user called **Britta Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in MVISION Cloud Azure AD SSO Configuration.
-To configure and test Azure AD SSO with MVISION Cloud Azure AD SSO Configuration, complete the following building blocks:
+To configure and test Azure AD SSO with MVISION Cloud Azure AD SSO Configuration, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the [Azure portal](https://portal.azure.com/), on the **MVISION Cloud Azure AD SSO Configuration** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)

4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:

    a. In the **Identifier** text box, type a URL using the following pattern:
Follow these steps to enable Azure AD SSO in the Azure portal.
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![MVISION Cloud Azure AD SSO Configuration Domain and URLs single sign-on information](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<ENV>.myshn.net/shndash/saml/Azure_SSO`
Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)

### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
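If you prefer to script the test-user step, the same user can be created through the Microsoft Graph `POST /users` endpoint. A minimal sketch of building that request body follows; the domain and password shown are placeholders, not values from this tutorial:

```python
import json

def build_user_payload(display_name: str, upn: str, password: str) -> str:
    """Build the JSON body for Microsoft Graph POST /users (sketch only)."""
    payload = {
        "accountEnabled": True,
        "displayName": display_name,
        # mailNickname must not contain "@" or "."; derive it from the UPN.
        "mailNickname": upn.split("@")[0].replace(".", ""),
        "userPrincipalName": upn,
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": password,  # placeholder; use a strong generated value
        },
    }
    return json.dumps(payload)

# Hypothetical tenant domain, mirroring the B.Simon example above.
body = build_user_payload("B.Simon", "B.Simon@contoso.com", "<strong-password>")
```

Sending this body (with an appropriate bearer token) to `https://graph.microsoft.com/v1.0/users` creates the same test user the portal steps produce.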
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to MVISION Cloud Azure AD SSO Configuration.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **MVISION Cloud Azure AD SSO Configuration**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **MVISION Cloud Azure AD SSO Configuration**.
-
- ![The MVISION Cloud Azure AD SSO Configuration link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to MVISION Cloud Azure AD SSO Configuration.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **MVISION Cloud Azure AD SSO Configuration**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
## Configure MVISION Cloud Azure AD SSO Configuration SSO

To configure single sign-on on the **MVISION Cloud Azure AD SSO Configuration** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [MVISION Cloud Azure AD SSO Configuration support team](mailto:support@skyhighnetworks.com). They set this setting to have the SAML SSO connection set properly on both sides.

### Create MVISION Cloud Azure AD SSO Configuration test user

In this section, you create a user called B.Simon in MVISION Cloud Azure AD SSO Configuration. Work with the [MVISION Cloud Azure AD SSO Configuration support team](mailto:support@skyhighnetworks.com) to add the users in the MVISION Cloud Azure AD SSO Configuration platform. Users must be created and activated before you use single sign-on.
-### Test SSO
+## Test SSO
+
In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+#### SP initiated:
-When you click the MVISION Cloud Azure AD SSO Configuration tile in the Access Panel, you should be automatically signed in to the MVISION Cloud Azure AD SSO Configuration for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the MVISION Cloud Azure AD SSO Configuration Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the MVISION Cloud Azure AD SSO Configuration Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+#### IDP initiated:
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the MVISION Cloud Azure AD SSO Configuration for which you set up the SSO.
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the MVISION Cloud Azure AD SSO Configuration tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you are automatically signed in to the MVISION Cloud Azure AD SSO Configuration for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
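Behind the SP-initiated flow described above, the service provider redirects the browser to Azure AD with a deflated, base64-encoded SAML `AuthnRequest` in the query string. A rough illustration of that encoding follows; the request XML is a simplified stand-in, not what MVISION Cloud actually emits:

```python
import base64
import zlib
from urllib.parse import urlencode

# Simplified stand-in for a SAML AuthnRequest; real requests carry
# Issuer, AssertionConsumerServiceURL, timestamps, and more.
authn_request = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_example" Version="2.0"/>'
)

# HTTP-Redirect binding: raw DEFLATE (strip the zlib header and checksum),
# then base64-encode, then URL-encode into the query string.
deflated = zlib.compress(authn_request.encode())[2:-4]
saml_request = base64.b64encode(deflated).decode()
query = urlencode({"SAMLRequest": saml_request})
```

The identity provider reverses the steps (URL-decode, base64-decode, inflate) to recover the request; the IDP-initiated flow skips this request entirely and starts from an unsolicited SAML response.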
-- [Try MVISION Cloud Azure AD SSO Configuration with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+Once you configure MVISION Cloud Azure AD SSO Configuration, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Statuspage Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/statuspage-tutorial.md
Previously updated : 12/18/2020 Last updated : 03/31/2021

# Tutorial: Azure Active Directory single sign-on (SSO) integration with StatusPage
-In this tutorial, you learn how to integrate StatusPage with Azure Active Directory (Azure AD).
-Integrating StatusPage with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate StatusPage with Azure Active Directory (Azure AD). When you integrate StatusPage with Azure AD, you can:
-* You can control in Azure AD who has access to StatusPage.
-* You can enable your users to be automatically signed-in to StatusPage (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to StatusPage.
+* Enable your users to be automatically signed-in to StatusPage with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with StatusPage, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* StatusPage single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* StatusPage single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* StatusPage supports **IDP** initiated SSO
+* StatusPage supports **IDP** initiated SSO.
-## Adding StatusPage from the gallery
+## Add StatusPage from the gallery
To configure the integration of StatusPage into Azure AD, you need to add StatusPage from the gallery to your list of managed SaaS apps.
To configure and test Azure AD SSO with StatusPage, perform the following steps:
1. **[Create StatusPage test user](#create-statuspage-test-user)** - to have a counterpart of Britta Simon in StatusPage that is linked to the Azure AD representation of the user.
6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.

1. In the Azure portal, on the **StatusPage** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
| `https://<subdomain>.statuspage.io/` | |
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
| Reply URL |
|--|
In this section, you enable Britta Simon to use Azure single sign-on by granting
1. In the main toolbar, click **Manage Account**.
- ![Screenshot shows Manage Account selected from the StatusPage company site.](./media/statuspage-tutorial/tutorial_statuspage_06.png)
+ ![Screenshot shows Manage Account selected from the StatusPage company site.](./media/statuspage-tutorial/account.png)
1. Click the **Single Sign-on** tab.
- ![Screenshot shows the Single Sign-on tab.](./media/statuspage-tutorial/tutorial_statuspage_07.png)
+ ![Screenshot shows the Single Sign-on tab.](./media/statuspage-tutorial/tab.png)
1. On the SSO Setup page, perform the following steps:
- ![Screenshot shows the S S O Setup page where you can enter the values described.](./media/statuspage-tutorial/tutorial_statuspage_08.png)
+ ![Screenshot shows the S S O Setup page where you can enter the values described.](./media/statuspage-tutorial/setup.png)
- ![Screenshot shows the Save Configuration button.](./media/statuspage-tutorial/tutorial_statuspage_09.png)
+ ![Screenshot shows the Save Configuration button.](./media/statuspage-tutorial/configuration.png)
a. In the **SSO Target URL** textbox, paste the value of **Login URL**, which you have copied from the Azure portal.
StatusPage supports just-in-time provisioning. You have already enabled it in [C
1. In the menu on the top, click **Manage Account**.
- ![Screenshot shows Manage Account selected from the StatusPage company site.](./media/statuspage-tutorial/tutorial_statuspage_06.png)
+ ![Screenshot shows Manage Account selected from the StatusPage company site.](./media/statuspage-tutorial/account.png)
1. Click the **Team Members** tab.
- ![Screenshot shows the Team Members tab.](./media/statuspage-tutorial/tutorial_statuspage_10.png)
+ ![Screenshot shows the Team Members tab.](./media/statuspage-tutorial/sandbox.png)
1. Click **ADD TEAM MEMBER**.
- ![Screenshot shows the Add Team Member button.](./media/statuspage-tutorial/tutorial_statuspage_11.png)
+ ![Screenshot shows the Add Team Member button.](./media/statuspage-tutorial/team.png)
1. Type the **Email Address**, **First Name**, and **Surname** of a valid user you want to provision into the related textboxes.
- ![Screenshot shows the Add a User dialog box where you can enter the values described.](./media/statuspage-tutorial/tutorial_statuspage_12.png)
+ ![Screenshot shows the Add a User dialog box where you can enter the values described.](./media/statuspage-tutorial/user.png)
1. As **Role**, choose **Client Administrator**.
1. Click **CREATE ACCOUNT**.
-### Test SSO
+## Test SSO
In this section, you test your Azure AD single sign-on configuration with the following options.
active-directory Veritas Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/veritas-tutorial.md
Previously updated : 03/28/2019 Last updated : 03/22/2021

# Tutorial: Azure Active Directory integration with Veritas Enterprise Vault.cloud SSO
-In this tutorial, you learn how to integrate Veritas Enterprise Vault.cloud SSO with Azure Active Directory (Azure AD).
-Integrating Veritas Enterprise Vault.cloud SSO with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Veritas Enterprise Vault.cloud SSO with Azure Active Directory (Azure AD). When you integrate Veritas Enterprise Vault.cloud SSO with Azure AD, you can:
-* You can control in Azure AD who has access to Veritas Enterprise Vault.cloud SSO.
-* You can enable your users to be automatically signed-in to Veritas Enterprise Vault.cloud SSO (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Veritas Enterprise Vault.cloud SSO.
+* Enable your users to be automatically signed-in to Veritas Enterprise Vault.cloud SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Veritas Enterprise Vault.cloud SSO, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Veritas Enterprise Vault.cloud SSO single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Veritas Enterprise Vault.cloud SSO single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Veritas Enterprise Vault.cloud SSO supports **SP** initiated SSO
-
-## Adding Veritas Enterprise Vault.cloud SSO from the gallery
-
-To configure the integration of Veritas Enterprise Vault.cloud SSO into Azure AD, you need to add Veritas Enterprise Vault.cloud SSO from the gallery to your list of managed SaaS apps.
-
-**To add Veritas Enterprise Vault.cloud SSO from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* Veritas Enterprise Vault.cloud SSO supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-4. In the search box, type **Veritas Enterprise Vault.cloud SSO**, select **Veritas Enterprise Vault.cloud SSO** from result panel then click **Add** button to add the application.
+## Add Veritas Enterprise Vault.cloud SSO from the gallery
- ![Veritas Enterprise Vault.cloud SSO in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Veritas Enterprise Vault.cloud SSO based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Veritas Enterprise Vault.cloud SSO needs to be established.
-
-To configure and test Azure AD single sign-on with Veritas Enterprise Vault.cloud SSO, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Veritas Enterprise Vault.cloud SSO Single Sign-On](#configure-veritas-enterprise-vaultcloud-sso-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Veritas Enterprise Vault.cloud SSO test user](#create-veritas-enterprise-vaultcloud-sso-test-user)** - to have a counterpart of Britta Simon in Veritas Enterprise Vault.cloud SSO that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Veritas Enterprise Vault.cloud SSO into Azure AD, you need to add Veritas Enterprise Vault.cloud SSO from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Veritas Enterprise Vault.cloud SSO** in the search box.
+1. Select **Veritas Enterprise Vault.cloud SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Veritas Enterprise Vault.cloud SSO
-To configure Azure AD single sign-on with Veritas Enterprise Vault.cloud SSO, perform the following steps:
+Configure and test Azure AD SSO with Veritas Enterprise Vault.cloud SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Veritas Enterprise Vault.cloud SSO.
-1. In the [Azure portal](https://portal.azure.com/), on the **Veritas Enterprise Vault.cloud SSO** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Veritas Enterprise Vault.cloud SSO, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Veritas Enterprise Vault.cloud SSO SSO](#configure-veritas-enterprise-vaultcloud-sso-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Veritas Enterprise Vault.cloud SSO test user](#create-veritas-enterprise-vaultcloud-sso-test-user)** - to have a counterpart of B.Simon in Veritas Enterprise Vault.cloud SSO that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Veritas Enterprise Vault.cloud SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Veritas Enterprise Vault.cloud SSO Domain and URLs single sign-on information](common/sp-identifier-reply.png)
- a. In the **Sign-on URL** text box, type a URL using the following pattern: `https://personal.ap.archive.veritas.com/CID=<CUSTOMERID>`
- b. In the **Identifier** box, use the URL as per the Datacenter:
+ b. In the **Identifier** box, type one of the URLs as per the Datacenter:
| Datacenter| URL |
|-|-|
To configure Azure AD single sign-on with Veritas Enterprise Vault.cloud SSO, pe
| Europe | `https://auth.ams.archivecloud.net` |
| Asia Pacific| `https://auth.syd.archivecloud.net`|
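The datacenter-to-Identifier mapping in the table above is small enough to capture as a lookup. A minimal sketch, covering only the two datacenters listed here:

```python
# Identifier (Entity ID) per datacenter, as listed in the table above.
# Other datacenters may exist; this sketch covers only the ones shown.
VERITAS_IDENTIFIERS = {
    "Europe": "https://auth.ams.archivecloud.net",
    "Asia Pacific": "https://auth.syd.archivecloud.net",
}

def identifier_for(datacenter: str) -> str:
    """Return the Identifier URL for a named datacenter."""
    try:
        return VERITAS_IDENTIFIERS[datacenter]
    except KeyError:
        raise ValueError(f"Unknown datacenter: {datacenter!r}") from None
```

Pick the entry matching your tenant's datacenter when filling in the **Identifier** box.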
- c. In the **Reply URL** text box, use the URL as per the Datacenter:
+ c. In the **Reply URL** text box, type one of the URLs as per the Datacenter:
| Datacenter| URL |
|-|-|
To configure Azure AD single sign-on with Veritas Enterprise Vault.cloud SSO, pe
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Veritas Enterprise Vault.cloud SSO Single Sign-On
-
-To configure single sign-on on **Veritas Enterprise Vault.cloud SSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Veritas Enterprise Vault.cloud SSO support team](https://www.veritas.com/support/.html). They set this setting to have the SAML SSO connection set properly on both sides.
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Veritas Enterprise Vault.cloud SSO.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Veritas Enterprise Vault.cloud SSO.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Veritas Enterprise Vault.cloud SSO**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Veritas Enterprise Vault.cloud SSO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Veritas Enterprise Vault.cloud SSO SSO
-2. In the applications list, select **Veritas Enterprise Vault.cloud SSO**.
-
- ![The Veritas Enterprise Vault.cloud SSO link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Veritas Enterprise Vault.cloud SSO** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Veritas Enterprise Vault.cloud SSO support team](https://www.veritas.com/support/.html). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
### Create Veritas Enterprise Vault.cloud SSO test user

In this section, you create a user called B.Simon in Veritas Enterprise Vault.cloud SSO. Work with the [Veritas Enterprise Vault.cloud SSO support team](https://www.veritas.com/support/.html) to add the users in the Veritas Enterprise Vault.cloud SSO platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Veritas Enterprise Vault.cloud SSO tile in the Access Panel, you should be automatically signed in to the Veritas Enterprise Vault.cloud SSO for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect you to the Veritas Enterprise Vault.cloud SSO Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Veritas Enterprise Vault.cloud SSO Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Veritas Enterprise Vault.cloud SSO tile in My Apps, you will be redirected to the Veritas Enterprise Vault.cloud SSO Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Veritas Enterprise Vault.cloud SSO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/credential-design.md
Verifiable credentials are made up of two components, the rules and display file
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Rules File: Requirements from the user
+## Rules file: Requirements from the user
The rules file is a simple JSON file that describes important properties of verifiable credentials. In particular, it describes how claims are used to populate your verifiable credential.
There are currently three input types that are available to configure in the rules file:

- ID token
- Verifiable credentials via a verifiable presentation
- Self-attested claims
-**ID Token:** The sample App and Tutorial use the ID Token. When this option is configured, you will need to provide an Open ID Connect configuration URI and include the claims that should be included in the VC. The user will be prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account.
+**ID token:** The sample app and tutorial use the ID token. When this option is configured, you will need to provide an OpenID Connect configuration URI and include the claims that should be included in the VC. The user will be prompted to 'Sign in' in the Authenticator app to meet this requirement and add the associated claims from their account.
-**Verifiable Credentials:** The end result of an issuance flow is to produce a Verifiable Credential but you may also ask the user to Present a Verifiable Credential in order to issue one. The Rules File is able to take specific claims from the presented Verifiable Credential and include those claims in the newly issued Verifiable Credential from your organization.
+**Verifiable credentials:** The end result of an issuance flow is to produce a verifiable credential, but you may also ask the user to present a verifiable credential in order to issue one. The rules file is able to take specific claims from the presented verifiable credential and include those claims in the newly issued verifiable credential from your organization.
-**Self Attested Claims:** When this option is selected, the user will be able to directly type information into Authenticator. At this time, strings are the only supported input for self attested claims.
+**Self-attested claims:** When this option is selected, the user will be able to directly type information into Authenticator. At this time, strings are the only supported input for self-attested claims.
![detailed view of verifiable credential card](media/credential-design/issuance-doc.png)
-**Static Claims:** Additionally we are able declare a static claim in the Rules file, however this input does not come from the user. The Issuer defines a static claim in the Rules file and would look like any other claim in the Verifiable Credential. Simply add a credentialSubject after vc.type and declare the attribute and the claim.
+**Static claims:** Additionally we are able declare a static claim in the Rules file, however this input does not come from the user. The Issuer defines a static claim in the Rules file and would look like any other claim in the Verifiable Credential. Simply add a credentialSubject after vc.type and declare the attribute and the claim.
```json
"vc": {
```
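Building on the description above, a fuller rules-file fragment with a static claim might look like the following sketch. The `VerifiedCredentialExpert` type and the `staticClaimExample` attribute name and value are illustrative placeholders, not values from the official schema:

```json
"vc": {
  "type": ["VerifiedCredentialExpert"],
  "credentialSubject": {
    "staticClaimExample": "A value defined by the issuer, not supplied by the user"
  }
}
```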
-## Input Type: ID Token
+## Input type: ID token
To get an ID token as input, the rules file needs to configure the well-known endpoint of the OIDC-compatible identity system. In that system, you need to register an application with the correct information from [Issuer service communication examples](issuer-openid.md). Additionally, the client_id needs to be put in the rules file, and the scope parameter needs to be filled in with the correct scopes. For example, Azure Active Directory needs the email scope if you want to return an email claim in the ID token.

```json
By declaring all three types, Contoso University's diplomas can be used to satis
To ensure interoperability of your credentials, it's recommended that you work closely with related organizations to define credential types, schemas, and URIs for use in your industry. Many industry bodies provide guidance on the structure of official documents that can be repurposed for defining the contents of verifiable credentials. You should also work closely with the verifiers of your credentials to understand how they intend to request and consume your verifiable credentials.
-## Input Type: Verifiable Credential
+## Input type: Verifiable credential
>[!NOTE]
>Rules files that ask for a verifiable credential do not use the presentation exchange format for requesting credentials. This will be updated when the Issuing Service supports the standard, Credential Manifest.
To ensure interoperability of your credentials, it's recommended that you work c
| `vc.type` | An array of strings indicating the schema(s) that your verifiable credential satisfies. |
-## Input Type: Self-Attested Claims
+## Input type: Self-attested claims
During the issuance flow, the user can be asked to input some self-attested information. As of now, the only input type is a 'string'.

```json
During the issuance flow, the user can be asked to input some self-attested info
| `vc.type` | An array of strings indicating the schema(s) that your Verifiable Credential satisfies. |
-## Display File: verifiable credentials in Microsoft Authenticator
+## Display file: Verifiable credentials in Microsoft Authenticator
Verifiable credentials offer a limited set of options that can be used to reflect your brand. This article provides instructions how to customize your credentials, and best practices for designing credentials that look great once issued to users.
The display file has the following structure.
| `claims.{attribute}.type` | Indicates the attribute type. Currently we only support 'String'. |
| `claims.{attribute}.label` | The value that should be used as a label for the attribute, which will show up in Authenticator. This may be different than the label that was used in the rules file. Recommended maximum length of 40 characters. |
->[!note]
+>[!NOTE]
>If a claim is included in the rules file and then omitted in the display file, there are two different types of experiences. On iOS, the claim will not be displayed in the details section shown in the above image, while on Android the claim will be shown.

## Next steps
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
Our digital and physical lives are increasingly linked to the apps, services, an
But identity data has too often been exposed in security breaches. These breaches affect our social, professional, and financial lives. Microsoft believes that there's a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This primer explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations.
-## Why we need Decentralized Identity
+## Why we need Decentralized Identity
Today we use our digital identity at work, at home, and across every app, service, and device we use. It's made up of everything we say, do, and experience in our lives: purchasing tickets for an event, checking into a hotel, or even ordering lunch. Currently, our identity and all our digital interactions are owned and controlled by other parties, some of whom we aren't even aware of.
Generally, users grant consent to several apps and devices. This approach requir
We believe a standards-based Decentralized Identity system can unlock a new set of experiences that give users and organizations greater control over their data, and deliver a higher degree of trust and security for apps, devices, and service providers.
-## Lead with open standards
+## Lead with open standards
We're committed to working closely with customers, partners, and the community to unlock the next generation of Decentralized Identity-based experiences, and we're excited to partner with the individuals and organizations that are making incredible contributions in this space. If the DID ecosystem is to grow, standards, technical components, and code deliverables must be open source and accessible to all.
Microsoft is actively collaborating with members of the Decentralized Identity F
* [DIF Presentation Exchange](https://identity.foundation/presentation-exchange/)
-## What are DIDs
+## What are DIDs?
Before we can understand DIDs, it helps to compare them with current identity systems. Email addresses and social network IDs are human-friendly aliases for collaboration but are now overloaded to serve as the control points for data access across many scenarios beyond collaboration. This creates a potential problem, because access to these IDs can be removed at any time by external parties.

Decentralized Identifiers (DIDs) are different. DIDs are user-generated, self-owned, globally unique identifiers rooted in decentralized systems like ION. They possess unique characteristics, like greater assurance of immutability, censorship resistance, and tamper evasiveness. These attributes are critical for any ID system that is intended to provide self-ownership and user control.

Microsoft's verifiable credential solution uses decentralized credentials (DIDs) to cryptographically sign as proof that a relying party (verifier) is attesting to information proving they are the owners of a verifiable credential. Therefore, a basic understanding of decentralized identifiers is recommended for anyone creating a verifiable credential solution based on the Microsoft offering.
-## What are Verifiable Credentials
+## What are Verifiable Credentials?
We use IDs in our daily lives. We have driver's licenses that we use as evidence of our ability to operate a car. Universities issue diplomas that prove we attained a level of education. We use passports to prove who we are to authorities as we arrive in other countries. The data model describes how we could handle these types of scenarios when working over the internet, but in a secure manner that respects users' privacy. You can get additional information in the [Verifiable Credentials Data Model 1.0](https://www.w3.org/TR/vc-data-model/).

In short, verifiable credentials are data objects consisting of claims made by the issuer attesting information about a subject. These claims are identified by schema and include the DIDs of the issuer and subject. The issuer's DID creates a digital signature as proof that they attest to this information.
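As a concrete illustration, a verifiable credential in the W3C data model cited above is a JSON object along the following lines. The issuer and subject DIDs, the credential type, and the claim values here are placeholders, and the `proof` is abbreviated:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "UniversityDegreeCredential"],
  "issuer": "did:example:university",
  "issuanceDate": "2021-04-01T00:00:00Z",
  "credentialSubject": {
    "id": "did:example:student",
    "degree": "Bachelor of Science"
  },
  "proof": {
    "type": "Ed25519Signature2018",
    "verificationMethod": "did:example:university#key-1",
    "jws": "..."
  }
}
```

The issuer's signature in `proof` is what makes the claims in `credentialSubject` verifiable by a third party.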
-## How does Decentralized Identity work?
+## How does Decentralized Identity work?
We need a new form of identity. We need an identity that brings together technologies and standards to deliver key identity attributes like self-ownership, and censorship resistance. These capabilities are difficult to achieve using existing systems.
active-directory Enable Your Tenant Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/enable-your-tenant-verifiable-credentials.md
Title: "Tutorial: Configure your Azure Active Directory to issue verifiable credentials (Preview)"
+ Title: Tutorial - Configure your Azure Active Directory to issue verifiable credentials (Preview)
description: In this tutorial, you build the environment needed to deploy verifiable credentials in your tenant documentationCenter: ''
Previously updated : 03/31/2021 Last updated : 04/01/2021
-# Tutorial: Configure your Azure Active Directory to issue verifiable credentials (Preview)
+# Tutorial - Configure your Azure Active Directory to issue verifiable credentials (Preview)
In this tutorial, we build on the work done as part of the [get started](get-started-verifiable-credentials.md) article and set up your Azure Active Directory (Azure AD) with its own [decentralized identifier](https://www.microsoft.com/security/business/identity-access-management/decentralized-identity-blockchain?rtc=1#:~:text=Decentralized%20identity%20is%20a%20trust,protect%20privacy%20and%20secure%20transactions.) (DID). We use the decentralized identifier to issue a verifiable credential using the sample app and your issuer; however, in this tutorial, we still use the sample Azure B2C tenant for authentication. In our next tutorial, we will take additional steps to get the app configured to work with your Azure AD.
Take note of the two properties listed below:
>[!IMPORTANT] > During the Azure Active Directory Verifiable Credentials preview, keys and secrets created in your vault should not be modified once created. Deleting, disabling, or updating your keys and secrets invalidates any issued credentials. Do not modify your keys or secrets during the preview.
-## Create a Modified Rules and Display File
+## Create a modified rules and display file
In this section, we use the rules and display files from the Sample issuer app and modify them slightly to create your tenant's first verifiable credential.
Now we need to take the last step to set up your tenant for verifiable credentia
Congratulations, your tenant is now enabled for the Verifiable Credential preview!
-## Create your VC in the Portal
+## Create your VC in the portal
The previous step leaves you in the **Create a new credential** page.
active-directory Get Started Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/get-started-verifiable-credentials.md
Title: "Tutorial: Get started with verifiable credentials using a sample app (preview)"
+ Title: Tutorial - Get started with Azure Active Directory Verifiable Credentials using a sample app (preview)
description: In this tutorial, you learn how to issue verifiable credentials using our sample app and test tenant Previously updated : 03/31/2021 Last updated : 04/01/2021 # Customer intent: As an enterprise we want to enable customers to manage information about themselves using verifiable credentials -
-# Tutorial: Get started with verifiable credentials using a sample app (preview)
+# Tutorial - Get started with Azure Active Directory Verifiable Credentials using a sample app (preview)
In this tutorial, we go over the steps needed to issue your first verifiable credential: a Verified Credential expert card. You can then use this card to prove to a verifier that you are a verified credential expert, mastered in the art of digital credentialing. Get started with Azure Active Directory Verifiable Credentials by using the Verifiable Credentials sample app to issue your first verifiable credential.
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
There are two easy ways to create a free Azure Active Directory with a P2 trial
If you decide to sign up for the free Microsoft 365 developer program, you need to follow a few easy steps:
-1. Click on the Join Now button on the screen
+1. Click on the **Join Now** button on the screen.
2. Sign in with a new Microsoft Account or use an existing (work) account you already have.
-3. On the sign-up page select your region, enter a company name and accept the terms and conditions of the program before you click next
+3. On the sign-up page, select your region, enter a company name, and accept the terms and conditions of the program before you click **Next**.
-4. Click on set up subscription. Specify the region where you want to create your new tenant, create a username, domain, and enter a password. This will create a new tenant and the first administrator of the tenant
+4. Click on **set up subscription**. Specify the region where you want to create your new tenant, create a username, domain, and enter a password. This will create a new tenant and the first administrator of the tenant.
-5. Enter the security information needed to protect the administrator account of your new tenant. This will setup MFA authentication for the account
+5. Enter the security information needed to protect the administrator account of your new tenant. This will set up multi-factor authentication (MFA) for the account.
At this point, you have created a tenant with 25 E5 user licenses. The E5 licenses include Azure AD P2 licenses. Optionally, you can add sample data packs with users, groups, mail, and SharePoint to help you test in your development environment. For the Verifiable Credential Issuing service, they are not required.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
Title: Link your Domain to your Decentralized Identifier (DID) (preview)
+ Title: Link your Domain to your Decentralized Identifier (DID) (preview) - Azure Active Directory Verifiable Credentials
description: Learn how to DNS Bind? documentationCenter: ''
#Customer intent: Why are we doing this?
-# Link your Domain to your Decentralized Identifier (DID)
+# Link your domain to your Decentralized Identifier (DID)
> [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview.
After you have the well-known configuration file, you need to make the file avai
>[!IMPORTANT]
>Microsoft Authenticator does not honor redirects; the URL specified must be the final destination URL.
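Per the Decentralized Identity Foundation's Well Known DID Configuration spec, the file is served at `https://<your-domain>/.well-known/did-configuration.json`. A minimal sketch of its shape, based on that spec (the domain-linkage credential JWT in `linked_dids` is elided):

```json
{
  "@context": "https://identity.foundation/.well-known/did-configuration/v1",
  "linked_dids": ["eyJhbGciOi..."]
}
```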
-## User Experience
+## User experience
When a user is going through an issuance flow or presenting a verifiable credential, they should know something about the organization and its DID. Our verifiable credential wallet, Microsoft Authenticator, validates a DID's relationship with the domain in the DID document and presents users with two different experiences depending on the outcome.
-## Verified Domain
+## Verified domain
Before Microsoft Authenticator displays a **Verified** icon, a few things need to be true:
If all of the previously mentioned are true, then Microsoft Authenticator displa
![new permission request](media/how-to-dnsbind/new-permission-request.png)
-## Unverified Domain
+## Unverified domain
If any of the above are not true, Microsoft Authenticator displays a full-page warning that the domain is unverified, that the user is in the middle of a potentially risky transaction, and that they should proceed with caution. We have chosen to take this route because:
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Title: How to Revoke a Verifiable Credential as an Issuer
+ Title: How to Revoke a Verifiable Credential as an Issuer - Azure Active Directory Verifiable Credentials
description: Learn how to revoke a Verifiable Credential that you've issued documentationCenter: ''
Once an index claim has been set and verifiable credentials have been issued to
Now whenever a relying party calls to check the status of this specific verifiable credential, Microsoft's status API, acting on behalf of the tenant, returns a 'false' response.
-## Next Steps
+## Next steps
Test out the functionality on your own with a test credential to get used to the flow. You can see information on how to configure your tenant to issue verifiable credentials by [reviewing our tutorials](get-started-verifiable-credentials.md).
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-opt-out.md
Title: Opt out of the verifiable credentials (Preview)
+ Title: Opt out of the Azure Active Directory Verifiable Credentials (Preview)
description: Learn how to Opt Out of the Verifiable Credentials Preview documentationCenter: ''
When you complete opting out of the Azure Active Directory Verifiable Credential
Once an opt-out takes place, you will not be able to recover your DID or conduct any operations on your DID. This step is a one-way operation, and you need to opt in again, which results in a new DID being created.
-## Effect on existing verifiable credentials.
+## Effect on existing verifiable credentials
All verifiable credentials already issued will continue to exist. They will not be cryptographically invalidated as your DID will remain resolvable through ION. However, when relying parties call the status API, they will always receive back a failure message.
active-directory Issue Verify Verifiable Credentials Your Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/issue-verify-verifiable-credentials-your-tenant.md
Title: Tutorial - Issue and verify verifiable credentials using your tenant (preview)
+ Title: Tutorial - Issue and verify verifiable credentials using your Azure tenant (preview)
description: Change the Verifiable Credential code sample to work with your Azure tenant documentationCenter: ''
-# Tutorial: Issue and verify verifiable credentials using your tenant (preview)
+# Tutorial - Issue and verify verifiable credentials using your tenant (preview)
> [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview.
Any identity provider that supports the OpenID Connect protocol is supported. Ex
This tutorial assumes you've already completed the steps in the [previous tutorial](enable-your-tenant-verifiable-credentials.md) and have access to the environment you used.
-## Register an App to enable DID Wallets to sign in users
+## Register an app to enable DID wallets to sign in users
To issue a verifiable credential, you need to register an app so Authenticator, or any other verifiable credential wallet, is allowed to sign in users.
Register an application called 'VC Wallet App' in Azure AD and obtain a client I
![issuer endpoints](media/issue-verify-verifable-credentials-your-tenant/application-endpoints.png)
-## Set up your node app with access to Key Vault
+## Set up your node app with access to Azure Key Vault
To authenticate a user's credential issuance request, the issuer website uses your cryptographic keys in Azure Key Vault. To access Azure Key Vault, your website needs a client ID and client secret that can be used to authenticate to Azure Key Vault.
There are a few other values we need to get before we can make the changes one t
![sign in key identifier](media/issue-verify-verifable-credentials-your-tenant/issuer-signing-key-ion.png)
-### DID Document
+### DID document
1. Open the [DIF ION Network Explorer](https://identity.foundation/ion/explorer/)
Now we have everything we need to make the changes in our sample code.
Now you have everything in place to issue and verify your own Verifiable Credential from your Azure Active Directory tenant with your own keys.
-## Issue and Verify the VC
+## Issue and verify the VC
Follow the same steps we followed in the previous tutorial to issue the verifiable credential and validate it with your app. Once you successfully complete the verification process, you are ready to continue learning about verifiable credentials.
active-directory Issuer Openid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/issuer-openid.md
Title: Issuer service communication examples (preview)
+ Title: Issuer service communication examples (preview) - Azure Active Directory Verifiable Credentials
description: Details of communication between identity provider and issuer service
The ID token must use the JWT compact serialization format, and must not be encr
| `exp` | Must contain the expiry time of the ID token. |
| `iat` | Must contain the time at which the ID token was issued. |
| `nonce` | The value included in the authorization request. |
-| Additional claims | The ID token should contain any additional claims whose values will be included in the Verifiable Credential that will be issued. This section is where you should include any attributes about the user, such as their name. |
+| Additional claims | The ID token should contain any additional claims whose values will be included in the Verifiable Credential that will be issued. This section is where you should include any attributes about the user, such as their name. |
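Putting the claims in the table above together, a decoded ID token payload might look like the following sketch. The `iss`, `sub`, and `aud` values are illustrative assumptions, and `given_name` and `family_name` stand in for the additional claims whose values would be included in the verifiable credential:

```json
{
  "iss": "https://example.com/oidc",
  "sub": "a1b2c3d4",
  "aud": "issuer-service-client-id",
  "exp": 1617260400,
  "iat": 1617256800,
  "nonce": "abc123nonce",
  "given_name": "Megan",
  "family_name": "Bowen"
}
```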
+
+## Next steps
+
+- [How to customize your Azure Active Directory Verifiable Credentials](credential-design.md)
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
Credentials are a part of our daily lives; driver's licenses are used to assert that we're capable of operating a motor vehicle.
There are multiple ways of offering a recovery mechanism to users, each with their own tradeoffs. We're currently evaluating options and designing approaches to recovery that offer convenience and security while respecting a user's privacy and self-sovereignty.
-### Why does validation of a verifiable credential require a query to a credential status endpoint? Is this not a privacy concern?
-
-The `credentialStatus` property in a verifiable credential requires the verifier to query the credential's issuer during validation. This is a convenient and efficient way for the issuer to be able to revoke a credential that has been previously issued. This also means that the issuer can track which verifiers have accessed a user's credentials. In some use cases this is desirable, but in many, this would be considered a serious violation of user privacy. We are exploring alternative means of credential revocation that will allow an issuer to revoke a verifiable credential without being able to trace a credential's usage.
-
-<!-- Additionally, an issuer can issuer a Verifiable Credential without a 'credentialStatus' endpoint. Please follow the instructions in [How to customize your verifiable credentials article.](credential-design.md) -->
- ### How can a user trust a request from an issuer or verifier? How do they know a DID is the real DID for an organization?

We have implemented [the Decentralized Identity Foundation's Well Known DID Configuration spec](https://identity.foundation/.well-known/resources/did-configuration/) in order to connect a DID to a well-known existing system: domain names. Each DID created using Azure Active Directory Verifiable Credentials has the option of including a root domain name that will be encoded in the DID Document. Follow the article titled [Link your Domain to your Distributed Identifier](how-to-dnsbind.md) to learn more.
-### Does a user need to periodically rotate their DID keys?
-
-The DID methods used in verifiable credential exchanges support the ability for a user to update the keys associated with their DID. Currently, Microsoft Authenticator does not change the user's keys after a DID has been created.
- ### Why does the Verifiable Credential preview use ION as its DID method, and therefore Bitcoin to provide decentralized public key infrastructure?

ION is a decentralized, permissionless, scalable decentralized identifier Layer 2 network that runs atop Bitcoin. It achieves scalability without including a special cryptoasset token, trusted validators, or centralized consensus mechanisms. We use Bitcoin for the base Layer 1 substrate because of the strength of the decentralized network to provide a high degree of immutability for a chronological event record system.
Yes! The following repositories are the open-sourced components of our services.
2. The [VC SDK for Node, on GitHub](https://github.com/microsoft/VerifiableCredentials-Verification-SDK-Typescript)
3. An [Android SDK for building decentralized identity wallets, on GitHub](https://github.com/microsoft/VerifiableCredential-SDK-Android)
4. An [iOS SDK for building decentralized identity wallets, on GitHub](https://github.com/microsoft/VerifiableCredential-SDK-iOS)
+## What are the licensing requirements?
+
+An Azure AD P2 license is required to use the preview of Verifiable Credentials. This is a temporary requirement, as we expect pricing for this service to be billed based on usage.
+## Next steps
+
+- [How to customize your Azure Active Directory Verifiable Credentials](credential-design.md)
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
To upgrade from *1.12.x* -> *1.14.x*, first upgrade from *1.12.x* -> *1.13.x*, then upgrade from *1.13.x* -> *1.14.x*.
Skipping multiple versions can only be done when upgrading from an unsupported version back into a supported version. For example, an upgrade from an unsupported *1.10.x* to a supported *1.15.x* can be completed.
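The two-step upgrade path described above can be sketched with the Azure CLI. The resource group, cluster name, and exact patch versions below are placeholders; substitute versions offered in your region:

```shell
# Step 1: upgrade one minor version at a time (1.12.x -> 1.13.x).
az aks upgrade --resource-group myResourceGroup --name myAKSCluster \
    --kubernetes-version 1.13.12 --yes

# Step 2: upgrade again to reach the target minor version (1.13.x -> 1.14.x).
az aks upgrade --resource-group myResourceGroup --name myAKSCluster \
    --kubernetes-version 1.14.8 --yes
```

Each `az aks upgrade` call upgrades the control plane and node pools to the requested version before the next hop is attempted.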
+**Can I create a new 1.xx.x cluster during its 30 day support window?**
+
+No. Once a version is deprecated or removed, you cannot create a cluster with that version. As the change rolls out, you will see the old version removed from your version list. This process may take up to two weeks from the announcement, rolling out progressively by region.
+
+**I am on a freshly deprecated version, can I still add new node pools? Or will I have to upgrade?**
+
+No. You will not be allowed to add node pools of the deprecated version to your cluster. You can add node pools of a newer version, but doing so may require you to upgrade the control plane first.
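Before adding a pool, you can check which versions are still offered in your region and pin the new node pool to a supported version. This is a sketch; the names, region, and version below are placeholders:

```shell
# List the Kubernetes versions currently offered in a region.
az aks get-versions --location eastus --output table

# Add a node pool pinned to a still-supported version.
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster \
    --name nodepool3 --kubernetes-version 1.19.7
```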
+ ## Next steps

For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
For existing AKS clusters, you can also add a new node pool, and attach a public IP for your nodes.
az aks nodepool add -g MyResourceGroup2 --cluster-name MyManagedCluster -n nodepool2 --enable-node-public-ip
```
+### Use a public IP prefix
+
+#### Install the `aks-preview` Azure CLI extension
+
+You will need the *aks-preview* Azure CLI extension. Install it by using the [az extension add][az-extension-add] command, or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+There are a number of [benefits to using a public IP prefix][public-ip-prefix-benefits]. AKS supports using addresses from an existing public IP prefix for your nodes by passing the resource ID with the flag `node-public-ip-prefix` when creating a new cluster or adding a node pool.
+
+First, create a public IP prefix using [az network public-ip prefix create][az-public-ip-prefix-create]:
+
+```azurecli-interactive
+az network public-ip prefix create --length 28 --location eastus --name MyPublicIPPrefix --resource-group MyResourceGroup3
+```
+
+View the output, and take note of the `id` for the prefix:
+
+```output
+{
+ ...
+ "id": "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup3/providers/Microsoft.Network/publicIPPrefixes/MyPublicIPPrefix",
+ ...
+}
+```
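Rather than copying the `id` out of the JSON by hand, you can capture it with a JMESPath `--query`. This is a sketch assuming a bash shell and the resource names used above:

```shell
# Store the prefix's resource ID in a variable for later use.
IPPREFIX_ID=$(az network public-ip prefix show \
    --resource-group MyResourceGroup3 \
    --name MyPublicIPPrefix \
    --query id --output tsv)

echo "$IPPREFIX_ID"
```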
+
+Finally, when creating a new cluster or adding a new node pool, use the flag `node-public-ip-prefix` and pass in the prefix's resource ID:
+
+```azurecli-interactive
+az aks create -g MyResourceGroup3 -n MyManagedCluster -l eastus --enable-node-public-ip --node-public-ip-prefix /subscriptions/<subscription-id>/resourcegroups/MyResourceGroup3/providers/Microsoft.Network/publicIPPrefixes/MyPublicIPPrefix
+```
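The same flag also works when adding a node pool to an existing cluster. A sketch, using the flag name shown above and placeholder cluster and pool names:

```shell
az aks nodepool add -g MyResourceGroup3 --cluster-name MyManagedCluster -n nodepool2 \
    --enable-node-public-ip \
    --node-public-ip-prefix /subscriptions/<subscription-id>/resourcegroups/MyResourceGroup3/providers/Microsoft.Network/publicIPPrefixes/MyPublicIPPrefix
```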
+
+### Locate public IPs for nodes
+ You can locate the public IPs for your nodes in various ways:
+
+* Use the Azure CLI command [az vmss list-instance-public-ips][az-list-ips].
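For example, to list the node public IPs with the Azure CLI, first find the auto-generated node resource group that holds the scale set, then query it. The cluster names are placeholders, and the scale set name varies per cluster:

```shell
# Find the auto-generated node resource group (MC_...) for the cluster.
NODE_RG=$(az aks show -g MyResourceGroup2 -n MyManagedCluster \
    --query nodeResourceGroup -o tsv)

# List the scale sets in that group, then the public IPs of one of them.
az vmss list -g "$NODE_RG" --query "[].name" -o tsv
az vmss list-instance-public-ips -g "$NODE_RG" -n aks-nodepool1-12345678-vmss
```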
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[vmss-commands]: ../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine
[az-list-ips]: /cli/azure/vmss?view=azure-cli-latest&preserve-view=true#az_vmss_list_instance_public_ips
[reduce-latency-ppg]: reduce-latency-ppg.md
+[public-ip-prefix-benefits]: ../virtual-network/public-ip-address-prefix.md#why-create-a-public-ip-address-prefix
+[az-public-ip-prefix-create]: /cli/azure/network/public-ip/prefix?view=azure-cli-latest&preserve-view=true#az_network_public_ip_prefix_create
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-system-pools.md
System node pools have the following restrictions:
* System pools osType must be Linux.
* User node pools osType may be Linux or Windows.
* System pools must contain at least one node, and user node pools may contain zero or more nodes.
-* System node pools require a VM SKU of at least 2 vCPUs and 4GB memory.
+* System node pools require a VM SKU of at least 2 vCPUs and 4 GB of memory. Burstable VM SKUs (B series) are not recommended.
+* A minimum of two nodes with 4 vCPUs each is recommended (for example, Standard_DS4_v2), especially for large clusters (multiple CoreDNS pod replicas, 3-4+ add-ons, and so on).
* System node pools must support at least 30 pods as described by the [minimum and maximum value formula for pods][maximum-pods].
* Spot node pools require user node pools.
* Adding an additional system node pool, or changing which node pool is a system node pool, will *NOT* automatically move system pods. System pods can continue to run on the same node pool even if you change it to a user node pool. If you delete or scale down a node pool that runs system pods and was previously a system node pool, those system pods are redeployed with preferred scheduling to the new system node pool.
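As a sketch of the sizing guidance above, the following adds a dedicated system node pool (resource group, cluster, and pool names are placeholders):

```shell
# Add a dedicated system node pool: 2 nodes, a 4-vCPU SKU, Linux (required for system pools).
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster \
    --name systempool --mode System --node-count 2 --node-vm-size Standard_DS4_v2
```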
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/firewall-integration.md
description: Learn how to integrate with Azure Firewall to secure outbound traffic
ms.assetid: 955a4d84-94ca-418d-aa79-b57a5eb8cb85 Previously updated : 09/24/2020 Last updated : 03/25/2021
With an Azure Firewall, you automatically get everything below configured with t
|login.windows.com:443 |
|login.windows.net:443 |
|login.microsoftonline.com:443 |
+|\*.login.microsoftonline.com:443|
+|\*.login.microsoft.com:443|
|client.wns.windows.com:443 |
|definitionupdates.microsoft.com:443 |
|go.microsoft.com:80 |
app-service Scenario Secure App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scenario-secure-app-authentication-app-service.md
Previously updated : 11/09/2020 Last updated : 04/02/2021
In this tutorial, you learn how to:
For this tutorial, you need a web app deployed to App Service. You can use an existing web app, or you can follow the [ASP.NET Core quickstart](quickstart-dotnetcore.md) to create and publish a new web app to App Service.
-Whether you use an existing web app or create a new one, take note of the web app name and the name of the resource group that the web app is deployed to. You need these names throughout this tutorial. Throughout this tutorial, example names in procedures and screenshots contain *SecureWebApp*.
+Whether you use an existing web app or create a new one, take note of the web app name and the name of the resource group that the web app is deployed to. You need these names throughout this tutorial.
## Configure authentication and authorization
In **Resource groups**, find and select your resource group. In **Overview**, select your app's management page.
:::image type="content" alt-text="Screenshot that shows selecting your app's management page." source="./media/scenario-secure-app-authentication-app-service/select-app-service.png":::
-On your app's left menu, select **Authentication / Authorization**, and then enable App Service Authentication by selecting **On**.
+On your app's left menu, select **Authentication**, and then click **Add identity provider**.
-In **Action to take when request is not authenticated**, select **Log in with Azure Active Directory**.
+In the **Add an identity provider** page, select **Microsoft** as the **Identity provider** to sign in with Microsoft and Azure AD identities.
-Under **Authentication Providers**, select **Azure Active Directory**. Select **Express**, and then accept the default settings to create a new Active Directory app. Select **OK**.
+For **App registration** > **App registration type**, select **Create new app registration**.
+For **App registration** > **Supported account types**, select **Current tenant-single tenant**.
-On the **Authentication / Authorization** page, select **Save**.
+In the **App Service authentication settings** section, leave **Authentication** set to **Require authentication** and **Unauthenticated requests** set to **HTTP 302 Found redirect: recommended for websites**.
-When you see the notification with the message `Successfully saved the Auth Settings for <app-name> App`, refresh the portal page.
+At the bottom of the **Add an identity provider** page, click **Add** to enable authentication for your web app.
+ You now have an app that's secured by App Service authentication and authorization.
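If you prefer scripting over the portal, the classic `az webapp auth update` command can apply an equivalent policy. This is a sketch: the resource group, `<app-name>`, and client ID are placeholders, and the newer portal experience above exposes options this command does not:

```shell
# Require authentication and redirect unauthenticated requests to Azure AD sign-in.
az webapp auth update --resource-group myResourceGroup --name <app-name> \
    --enabled true --action LoginWithAzureActiveDirectory \
    --aad-client-id 00000000-0000-0000-0000-000000000000
```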
app-service Scenario Secure App Clean Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scenario-secure-app-clean-up-resources.md
Previously updated : 10/27/2020 Last updated : 04/02/2021
app-service Scenario Secure App Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scenario-secure-app-overview.md
Previously updated : 11/09/2020 Last updated : 04/02/2021
application-gateway Application Gateway Configure Listener Specific Ssl Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-configure-listener-specific-ssl-policy.md
+
+ Title: Configure listener-specific SSL policies on Azure Application Gateway through portal
+description: Learn how to configure listener-specific SSL policies on Application Gateway through portal
+ Last updated : 03/30/2021
+# Configure listener-specific SSL policies on Application Gateway through portal (Preview)
+
+This article describes how to use the Azure portal to configure listener-specific SSL policies on your Application Gateway. Listener-specific SSL policies allow you to configure specific listeners to use different SSL policies from each other. You'll still be able to set a default SSL policy that all listeners will use unless overwritten by the listener-specific SSL policy.
+
+> [!NOTE]
+> Only Standard_v2 and WAF_v2 SKUs support listener-specific policies, because listener-specific policies are part of SSL profiles, and SSL profiles are supported only on v2 gateways.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Create a new Application Gateway
+
+First, create a new Application Gateway through the portal as you usually would - no additional steps are needed during creation to configure listener-specific SSL policies. For more information on how to create an Application Gateway in the portal, check out our [portal quickstart tutorial](./quick-create-portal.md).
+
+## Set up a listener-specific SSL policy
+
+To set up a listener-specific SSL policy, first go to the **SSL settings (Preview)** tab in the portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **SSL Policy** tab is where you configure a listener-specific SSL policy. The **Client Authentication** tab is where you upload client certificates for mutual authentication - for more information, check out [Configuring mutual authentication](./mutual-authentication-portal.md).
+
+> [!NOTE]
+> We recommend using TLS 1.2 as TLS 1.2 will be mandated in the future.
+
+1. Search for **Application Gateway** in portal, select **Application gateways**, and click on your existing Application Gateway.
+
+2. Select **SSL settings (Preview)** from the left-side menu.
+
+3. Click on the plus sign next to **SSL Profiles** at the top to create a new SSL profile.
+
+4. Enter a name under **SSL Profile Name**. In this example, we call our SSL profile *applicationGatewaySSLProfile*.
+
+5. Go to the **SSL Policy** tab and check the **Enable listener-specific SSL Policy** box.
+
+6. Set up your listener-specific SSL policy given your requirements. You can choose between predefined SSL policies and customizing your own SSL policy. For more information on SSL policies, visit [SSL policy overview](./application-gateway-ssl-policy-overview.md). We recommend using TLS 1.2.
+
+7. Select **Add** to save.
+
+ > [!NOTE]
+ > You don't have to configure client authentication on an SSL profile to associate it to a listener. You can have only client authentication configured, only a listener-specific SSL policy configured, or both configured in your SSL profile.
+
+ ![Add listener specific SSL policy to SSL profile](./media/application-gateway-configure-listener-specific-ssl-policy/listener-specific-ssl-policy-ssl-profile.png)
+
+## Associate the SSL profile with a listener
+
+Now that we've created an SSL profile with a listener-specific SSL policy, we need to associate the SSL profile to the listener to put the listener-specific policy in action.
+
+1. Navigate to your existing Application Gateway. If you just completed the steps above, you don't need to do anything here.
+
+2. Select **Listeners** from the left-side menu.
+
+3. Click on **Add listener** if you don't already have an HTTPS listener set up. If you already have an HTTPS listener, click on it from the list.
+
+4. Fill out the **Listener name**, **Frontend IP**, **Port**, **Protocol**, and other **HTTPS Settings** to fit your requirements.
+
+5. Check the **Enable SSL Profile** checkbox so that you can select which SSL Profile to associate with the listener.
+
+6. Select the SSL profile you created from the dropdown list. In this example, we choose the SSL profile we created from the earlier steps: *applicationGatewaySSLProfile*.
+
+7. Continue configuring the remainder of the listener to fit your requirements.
+
+8. Click **Add** to save your new listener with the SSL profile associated to it.
+
+ ![Associate SSL profile to new listener](./media/mutual-authentication-portal/mutual-authentication-listener-portal.png)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
application-gateway Application Gateway Configure Ssl Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-configure-ssl-policy-powershell.md
Learn how to configure TLS/SSL policy versions and cipher suites on Application Gateway.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
+> [!NOTE]
+> We recommend using TLS 1.2 as your minimum TLS protocol version for better security on your Application Gateway.
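The steps below use PowerShell; as a rough Azure CLI equivalent, you can apply a predefined policy whose minimum protocol version is TLSv1.2 (gateway and resource group names are placeholders):

```shell
# Apply the predefined policy that enforces a minimum of TLSv1.2.
az network application-gateway ssl-policy set \
    --resource-group myResourceGroup --gateway-name myAppGateway \
    --policy-type Predefined --name AppGwSslPolicy20170401S
```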
+ ## Get available TLS options

The `Get-AzApplicationGatewayAvailableSslOptions` cmdlet provides a listing of available pre-defined policies, available cipher suites, and protocol versions that can be configured. The following example shows example output from running the cmdlet.
$SetGW = Set-AzApplicationGateway -ApplicationGateway $AppGW
## Next steps
-Visit [Application Gateway redirect overview](./redirect-overview.md) to learn how to redirect HTTP traffic to an HTTPS endpoint.
+Visit [Application Gateway redirect overview](./redirect-overview.md) to learn how to redirect HTTP traffic to an HTTPS endpoint.
+
+To learn how to set up listener-specific SSL policies, see [Setting up a listener-specific SSL policy through the portal](./application-gateway-configure-listener-specific-ssl-policy.md).
application-gateway Application Gateway Faq Md https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-faq-md.md
For more information, check documentation [here](./end-to-end-ssl-portal.md#add-
If you are using the V2 SKU of the Application Gateway/WAF service, you don't have to upload the new certificate in the HTTP settings since the V2 SKU uses "trusted root certificates" and no action needs to be taken here.
+## Configuration - mutual authentication
+
+### What is mutual authentication?
+
+Mutual authentication is two-way authentication between a client and a server. Mutual authentication with Application Gateway currently allows the gateway to verify the client sending the request, which is client authentication. Typically, the client is the only one that authenticates the Application Gateway. Because Application Gateway can now also authenticate the client, it becomes mutual authentication where Application Gateway and the client are mutually authenticating each other.
+
+### Is mutual authentication available between Application Gateway and its backend pools?
+
+No, mutual authentication is currently only between the frontend client and the Application Gateway. Backend mutual authentication is currently not supported.
## Configuration - ingress controller for AKS

### What is an Ingress Controller?
application-gateway Mutual Authentication Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/mutual-authentication-certificate-management.md
+
+ Title: Export trusted client CA certificate chain for client authentication
+
+description: Learn how to export a trusted client CA certificate chain for client authentication on Azure Application Gateway
+ Last updated : 03/31/2021
+# Export a trusted client CA certificate chain to use with client authentication
+In order to configure mutual authentication with the client, or client authentication, Application Gateway requires a trusted client CA certificate chain to be uploaded to the gateway. If you have multiple certificate chains, you'll need to create the chains separately and upload them as different files on the Application Gateway. In this article, you'll learn how to export a trusted client CA certificate chain that you can use in your client authentication configuration on your gateway.
+
+## Prerequisites
+
+An existing client certificate is required to generate the trusted client CA certificate chain.
+
+## Export trusted client CA certificate
+
+A trusted client CA certificate is required to allow client authentication on Application Gateway. In this example, we will use a TLS/SSL certificate for the client certificate, export its public key, and then export the CA certificates from the public key to get the trusted client CA certificates. We'll then concatenate all the client CA certificates into one trusted client CA certificate chain.
+
+The following steps help you export the .pem or .cer file for your certificate:
+
+### Export public certificate
+
+1. To obtain a .cer file from the certificate, open **Manage user certificates**. Locate the certificate, typically in 'Certificates - Current User\Personal\Certificates', and right-click it. Click **All Tasks**, and then click **Export**. This opens the **Certificate Export Wizard**. If you can't find the certificate under Current User\Personal\Certificates, you may have accidentally opened "Certificates - Local Computer" rather than "Certificates - Current User". To open Certificate Manager in current-user scope using PowerShell, type *certmgr* in the console window.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the Certificate Manager with Certificates selected and a contextual menu with All tasks, then Export selected.](./media/certificates-for-backend-authentication/export.png)
+
+2. In the Wizard, click **Next**.
+ > [!div class="mx-imgBorder"]
+ > ![Export certificate](./media/certificates-for-backend-authentication/exportwizard.png)
+
+3. Select **No, do not export the private key**, and then click **Next**.
+ > [!div class="mx-imgBorder"]
+ > ![Do not export the private key](./media/certificates-for-backend-authentication/notprivatekey.png)
+
+4. On the **Export File Format** page, select **Base-64 encoded X.509 (.CER).**, and then click **Next**.
+ > [!div class="mx-imgBorder"]
+ > ![Base-64 encoded](./media/certificates-for-backend-authentication/base64.png)
+
+5. For **File to Export**, **Browse** to the location to which you want to export the certificate. For **File name**, name the certificate file. Then, click **Next**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the Certificate Export Wizard where you specify a file to export.](./media/certificates-for-backend-authentication/browse.png)
+
+6. Click **Finish** to export the certificate.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the Certificate Export Wizard after you complete the file export.](./media/certificates-for-backend-authentication/finish.png)
+
+7. Your certificate is successfully exported.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the Certificate Export Wizard with a success message.](./media/certificates-for-backend-authentication/success.png)
+
+ The exported certificate looks similar to this:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows a certificate symbol.](./media/certificates-for-backend-authentication/exported.png)
+
+### Export CA certificate(s) from the public certificate
+
+Now that you've exported your public certificate, you will export the CA certificate(s) from it. If you only have a root CA, you'll only need to export that certificate. However, if you have one or more intermediate CAs, you'll need to export each of those as well.
+
+1. Once the public key has been exported, open the file.
+
+ > [!div class="mx-imgBorder"]
+ > ![Open authorization certificate](./media/certificates-for-backend-authentication/openAuthcert.png)
+
+ > [!div class="mx-imgBorder"]
+ > ![about certificate](./media/mutual-authentication-certificate-management/general.png)
+
+1. Select the Certification Path tab to view the certification authority.
+
+ > [!div class="mx-imgBorder"]
+ > ![cert details](./media/mutual-authentication-certificate-management/cert-details.png)
+
+1. Select the root certificate and click on **View Certificate**.
+
+ > [!div class="mx-imgBorder"]
+ > ![cert path](./media/mutual-authentication-certificate-management/root-cert.png)
+
+ You should see the root certificate details.
+
+ > [!div class="mx-imgBorder"]
+ > ![cert info](./media/mutual-authentication-certificate-management/root-cert-details.png)
+
+1. Select the **Details** tab and click **Copy to File...**
+
+ > [!div class="mx-imgBorder"]
+ > ![copy root cert](./media/mutual-authentication-certificate-management/root-cert-copy-to-file.png)
+
+1. At this point, you've extracted the details of the root CA certificate from the public certificate. You'll see the **Certificate Export Wizard**. Follow steps 2-7 from the previous section ([Export public certificate](./mutual-authentication-certificate-management.md#export-public-certificate)) to complete the Certificate Export Wizard.
+
+1. Now repeat steps 2-6 from this current section ([Export CA certificate(s) from the public certificate](./mutual-authentication-certificate-management.md#export-ca-certificates-from-the-public-certificate)) for all intermediate CAs to export all intermediate CA certificates in the Base-64 encoded X.509(.CER) format.
+
+ > [!div class="mx-imgBorder"]
+ > ![intermediate cert](./media/mutual-authentication-certificate-management/intermediate-cert.png)
+
+ For example, you would repeat steps 2-6 from this section on the *MSIT CAZ2* intermediate CA to extract it as its own certificate.
+
+### Concatenate all your CA certificates into one file
+
+1. Run the following command with all the CA certificates you extracted earlier.
+
+ Windows:
+ ```console
+ type intermediateCA.cer rootCA.cer > combined.cer
+ ```
+
+ Linux:
+ ```console
+ cat intermediateCA.cer rootCA.cer > combined.cer
+ ```
+
+ Your resulting combined certificate should look something like the following:
+
+ > [!div class="mx-imgBorder"]
+ > ![combined cert](./media/mutual-authentication-certificate-management/combined-cert.png)
+
+## Next steps
+
+Now you have the trusted client CA certificate chain. You can add this to your client authentication configuration on the Application Gateway to allow mutual authentication with your gateway. See [configure mutual authentication using Application Gateway with Portal](./mutual-authentication-portal.md) or [configure mutual authentication using Application Gateway with PowerShell](./mutual-authentication-powershell.md).
+
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/mutual-authentication-overview.md
+
+ Title: Overview of mutual authentication on Azure Application Gateway
+description: This article is an overview of mutual authentication on Application Gateway.
+ Last updated : 03/30/2021
+# Overview of mutual authentication with Application Gateway (Preview)
+
+Mutual authentication, or client authentication, allows for the Application Gateway to authenticate the client sending requests. Usually only the client is authenticating the Application Gateway; mutual authentication allows for both the client and the Application Gateway to authenticate each other.
+
+> [!NOTE]
+> We recommend using TLS 1.2 with mutual authentication as TLS 1.2 will be mandated in the future.
+
+## Mutual authentication
+
+Application Gateway supports certificate based mutual authentication where you can upload a trusted client CA certificate(s) to the Application Gateway and the gateway will use that certificate to authenticate the client sending a request to the gateway. With the rise in IoT use cases and increased security requirements across industries, mutual authentication provides a way for you to manage and control which clients can talk to your Application Gateway.
+
+To configure mutual authentication, a trusted client CA certificate is required to be uploaded as part of the client authentication portion of an SSL profile. The SSL profile will then need to be associated to a listener in order to complete configuration of mutual authentication. There must always be a root CA certificate in the client certificate that you upload. You can upload a certificate chain as well, but the chain must include a root CA certificate in addition to as many intermediate CA certificates as you'd like.
+
+For example, if your client certificate contains a root CA certificate, multiple intermediate CA certificates, and a leaf certificate, make sure that the root CA certificate and all the intermediate CA certificates are uploaded onto Application Gateway in one file. For more information on how to extract a trusted client CA certificate, see [how to extract trusted client CA certificates](./mutual-authentication-certificate-management.md).
+
+If you're uploading a certificate chain with root CA and intermediate CA certificates, the certificate chain must be uploaded as a PEM or CER file to the gateway.
+
+> [!NOTE]
+> Mutual authentication is only available on Standard_v2 and WAF_v2 SKUs.
+
+### Certificates supported for mutual authentication
+
+Application Gateway supports the following types of certificates:
+
+- CA (Certificate Authority) certificate: A CA certificate is a digital certificate issued by a certificate authority (CA)
+- Self-signed CA certificates: Client browsers do not trust these certificates and will warn the user that the virtual service's certificate is not part of a trust chain. Self-signed CA certificates are good for testing or environments where administrators control the clients and can safely bypass the browser's security alerts. Production workloads should never use self-signed CA certificates.
+
+For more information on how to set up mutual authentication, see [configure mutual authentication with Application Gateway](./mutual-authentication-portal.md).
+
+> [!IMPORTANT]
+> Make sure you upload the entire trusted client CA certificate chain to the Application Gateway when using mutual authentication.
+
+## Additional client authentication validation
+
+### Verify client certificate DN
+
+You have the option to verify the client certificate's immediate issuer and only allow the Application Gateway to trust that issuer. This option is off by default, but you can enable it through the portal, PowerShell, or the Azure CLI.
+
+If you choose to enable the Application Gateway to verify the client certificate's immediate issuer, here's how the client certificate issuer DN is determined from the certificates you upload.
+* **Scenario 1:** Certificate chain includes: root certificate - intermediate certificate - leaf certificate
+ * *Intermediate certificate's* subject name is what Application Gateway will extract as the client certificate issuer DN and will be verified against.
+* **Scenario 2:** Certificate chain includes: root certificate - intermediate1 certificate - intermediate2 certificate - leaf certificate
+ * *Intermediate2 certificate's* subject name will be what's extracted as the client certificate issuer DN and will be verified against.
+* **Scenario 3:** Certificate chain includes: root certificate - leaf certificate
+ * *Root certificate's* subject name will be extracted and used as client certificate issuer DN.
+* **Scenario 4:** Multiple certificate chains of the same length in the same file. Chain 1 includes: root certificate - intermediate1 certificate - leaf certificate. Chain 2 includes: root certificate - intermediate2 certificate - leaf certificate.
+ * *Intermediate1 certificate's* subject name will be extracted as client certificate issuer DN.
+* **Scenario 5:** Multiple certificate chains of different lengths in the same file. Chain 1 includes: root certificate - intermediate1 certificate - leaf certificate. Chain 2 includes root certificate - intermediate2 certificate - intermediate3 certificate - leaf certificate.
+ * *Intermediate3 certificate's* subject name will be extracted as client certificate issuer DN.
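To preview which subject names the gateway will see, you can list the subject DN of every certificate in your chain file with OpenSSL. The sketch below creates a single demo certificate standing in for a real chain file; substitute your own file name:

```shell
# Create a demo CA certificate standing in for a real chain file.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout demo.key -out chain.pem -subj "/CN=Demo Issuing CA" \
  -addext "basicConstraints=critical,CA:TRUE"

# List the subject and issuer DN of every certificate in the file. Per
# the scenarios above, the deepest intermediate CA's subject is the value
# Application Gateway extracts as the client certificate issuer DN.
openssl crl2pkcs7 -nocrl -certfile chain.pem | openssl pkcs7 -print_certs -noout
```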
+
+> [!IMPORTANT]
+> We recommend uploading only one certificate chain per file. This is especially important if you enable verify client certificate DN. Uploading multiple certificate chains in one file puts you in scenario four or five, and you may run into issues later when the presented client certificate doesn't match the client certificate issuer DN that Application Gateway extracted from the chains.
+
+For more information on how to extract trusted client CA certificate chains, see [how to extract trusted client CA certificate chains](./mutual-authentication-certificate-management.md).
+
+## Server variables
+
+With mutual authentication, there are additional server variables that you can use to pass information about the client certificate to the backend servers behind the Application Gateway. For more information about which server variables are available and how to use them, check out [server variables](./rewrite-http-headers-url.md#mutual-authentication-server-variables-preview).
+
+## Next steps
+
+After learning about mutual authentication, go to [Configure Application Gateway with mutual authentication in PowerShell](./mutual-authentication-powershell.md) to create an Application Gateway using mutual authentication.
+
application-gateway Mutual Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/mutual-authentication-portal.md
+
+ Title: Configure mutual authentication on Azure Application Gateway through portal
+description: Learn how to configure an Application Gateway to have mutual authentication through portal
++++ Last updated : 04/02/2021+++
+# Configure mutual authentication with Application Gateway through portal (Preview)
+
+This article describes how to use the Azure portal to configure mutual authentication on your Application Gateway. Mutual authentication means Application Gateway authenticates the client sending the request using the client certificate you upload onto the Application Gateway.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Before you begin
+
+To configure mutual authentication with an Application Gateway, you need a trusted client CA certificate to upload to the gateway. This certificate is used to validate the certificate that the client presents to Application Gateway. For testing purposes, you can use a self-signed certificate. However, self-signed certificates aren't advised for production workloads, because they're harder to manage and aren't completely secure.
+
+To learn more, especially about what kind of client certificates you can upload, see [Overview of mutual authentication with Application Gateway](./mutual-authentication-overview.md#certificates-supported-for-mutual-authentication).
+
+## Create a new Application Gateway
+
+First create a new Application Gateway as you would usually through the portal - there are no additional steps needed in the creation to enable mutual authentication. For more information on how to create an Application Gateway in portal, check out our [portal quickstart tutorial](./quick-create-portal.md).
+
+## Configure mutual authentication
+
+To configure an existing Application Gateway with mutual authentication, go first to the **SSL settings (Preview)** tab in the portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **Client Authentication** tab is where you upload your client certificate(s). The **SSL Policy** tab is where you configure a listener-specific SSL policy - for more information, check out [Configuring a listener specific SSL policy](./application-gateway-configure-listener-specific-ssl-policy.md).
+
+> [!IMPORTANT]
+> Please ensure that you upload the entire client CA certificate chain in one file, and only one chain per file.
+
+1. Search for **Application Gateway** in portal, select **Application gateways**, and click on your existing Application Gateway.
+
+2. Select **SSL settings (Preview)** from the left-side menu.
+
+3. Click on the plus sign next to **SSL Profiles** at the top to create a new SSL profile.
+
+4. Enter a name under **SSL Profile Name**. In this example, we call our SSL profile *applicationGatewaySSLProfile*.
+
+5. Stay in the **Client Authentication** tab. Upload the PEM certificate you intend to use for mutual authentication between the client and the Application Gateway using the **Upload a new certificate** button.
+
+ For more information on how to extract trusted client CA certificate chains to upload here, see [how to extract trusted client CA certificate chains](./mutual-authentication-certificate-management.md).
+
+ > [!NOTE]
+ > If this isn't your first SSL profile and you've uploaded other client certificates onto your Application Gateway, you can choose to reuse an existing certificate on your gateway through the dropdown menu.
+
+6. Check the **Verify client certificate issuer's DN** box only if you want Application Gateway to verify the client certificate's immediate issuer Distinguished Name.
+
+7. Consider adding a listener specific policy. See instructions at [setting up listener specific SSL policies](./application-gateway-configure-listener-specific-ssl-policy.md).
+
+8. Select **Add** to save.
+ > [!div class="mx-imgBorder"]
+ > ![Add client authentication to SSL profile](./media/mutual-authentication-portal/mutual-authentication-portal.png)
+
+## Associate the SSL profile with a listener
+
+Now that we've created an SSL profile with mutual authentication configured, we need to associate the SSL profile with the listener to complete the mutual authentication setup.
+
+1. Navigate to your existing Application Gateway. If you just completed the steps above, you don't need to do anything here.
+
+2. Select **Listeners** from the left-side menu.
+
+3. Click on **Add listener** if you don't already have an HTTPS listener set up. If you already have an HTTPS listener, click on it from the list.
+
+4. Fill out the **Listener name**, **Frontend IP**, **Port**, **Protocol**, and other **HTTPS Settings** to fit your requirements.
+
+5. Check the **Enable SSL Profile** checkbox so that you can select which SSL Profile to associate with the listener.
+
+6. Select the SSL profile you just created from the dropdown list. In this example, we choose the SSL profile we created from the earlier steps: *applicationGatewaySSLProfile*.
+
+7. Continue configuring the remainder of the listener to fit your requirements.
+
+8. Click **Add** to save your new listener with the SSL profile associated to it.
+
+ > [!div class="mx-imgBorder"]
+ > ![Associate SSL profile to new listener](./media/mutual-authentication-portal/mutual-authentication-listener-portal.png)
+
+## Renew expired client CA certificates
+
+If your client CA certificate has expired, you can update the certificate on your gateway through the following steps:
+
+1. Navigate to your Application Gateway and go to the **SSL settings (Preview)** tab in the left-hand menu.
+
+1. Select the existing SSL profile(s) with the expired client certificate.
+
+1. Select **Upload a new certificate** in the **Client Authentication** tab and upload your new client certificate.
+
+1. Select the trash can icon next to the expired certificate. This will remove the association of that certificate from the SSL profile.
+
+1. Repeat steps 2-4 above with any other SSL profile that was using the same expired client certificate. You will be able to choose the new certificate you uploaded in step 3 from the dropdown menu in other SSL profiles.
+
+## Next steps
+
+- [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
application-gateway Mutual Authentication Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/mutual-authentication-powershell.md
+
+ Title: Configure mutual authentication on Azure Application Gateway through PowerShell
+description: Learn how to configure an Application Gateway to have mutual authentication through PowerShell
++++ Last updated : 04/02/2021+++
+# Configure mutual authentication with Application Gateway through PowerShell (Preview)
+This article describes how to use Azure PowerShell to configure mutual authentication on your Application Gateway. Mutual authentication means Application Gateway authenticates the client sending the request using the client certificate you upload onto the Application Gateway.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
++
+This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+
+## Before you begin
+
+To configure mutual authentication with an Application Gateway, you need a trusted client CA certificate to upload to the gateway. This certificate is used to validate the certificate that the client presents to Application Gateway. For testing purposes, you can use a self-signed certificate. However, self-signed certificates aren't advised for production workloads, because they're harder to manage and aren't completely secure.
+
+To learn more, especially about what kind of client certificates you can upload, see [Overview of mutual authentication with Application Gateway](./mutual-authentication-overview.md#certificates-supported-for-mutual-authentication).
+
+## Create a resource group
+
+First create a new resource group in your subscription.
+
+```azurepowershell
+$resourceGroup = New-AzResourceGroup -Name $rgname -Location $location -Tags @{ testtag = "APPGw tag"}
+```
+## Create a virtual network
+
+Deploy a virtual network for your Application Gateway to be deployed in.
+
+```azurepowershell
+$gwSubnet = New-AzVirtualNetworkSubnetConfig -Name $gwSubnetName -AddressPrefix 10.0.0.0/24
+$vnet = New-AzVirtualNetwork -Name $vnetName -ResourceGroupName $rgname -Location $location -AddressPrefix 10.0.0.0/16 -Subnet $gwSubnet
+$vnet = Get-AzVirtualNetwork -Name $vnetName -ResourceGroupName $rgname
+$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name $gwSubnetName -VirtualNetwork $vnet
+```
+
+## Create a public IP
+
+Create a public IP to use with your Application Gateway.
+
+```azurepowershell
+$publicip = New-AzPublicIpAddress -ResourceGroupName $rgname -name $publicIpName -location $location -AllocationMethod Static -sku Standard
+```
+
+## Create the Application Gateway IP configuration
+
+Create the IP configurations and frontend port.
+
+```azurepowershell
+$gipconfig = New-AzApplicationGatewayIPConfiguration -Name $gipconfigname -Subnet $gwSubnet
+$fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name $fipconfigName -PublicIPAddress $publicip
+$port = New-AzApplicationGatewayFrontendPort -Name $frontendPortName -Port 443
+```
+
+## Configure frontend SSL
+
+Configure the SSL certificates for your Application Gateway.
+
+```azurepowershell
+$password = ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force
+$sslCertPath = $basedir + "/ScenarioTests/Data/ApplicationGatewaySslCert1.pfx"
+$sslCert = New-AzApplicationGatewaySslCertificate -Name $sslCertName -CertificateFile $sslCertPath -Password $password
+```
+
+## Configure client authentication
+
+Configure client authentication on your Application Gateway. For more information on how to extract trusted client CA certificate chains to use here, see [how to extract trusted client CA certificate chains](./mutual-authentication-certificate-management.md).
+
+> [!IMPORTANT]
+> Please ensure that you upload the entire client CA certificate chain in one file, and only one chain per file.
+
+> [!NOTE]
+> We recommend using TLS 1.2 with mutual authentication as TLS 1.2 will be mandated in the future.
+
+```azurepowershell
+$clientCertFilePath = $basedir + "/ScenarioTests/Data/TrustedClientCertificate.cer"
+$trustedClient01 = New-AzApplicationGatewayTrustedClientCertificate -Name $trustedClientCert01Name -CertificateFile $clientCertFilePath
+$sslPolicy = New-AzApplicationGatewaySslPolicy -PolicyType Predefined -PolicyName "AppGwSslPolicy20170401S"
+$clientAuthConfig = New-AzApplicationGatewayClientAuthConfiguration -VerifyClientCertIssuerDN
+$sslProfile01 = New-AzApplicationGatewaySslProfile -Name $sslProfile01Name -SslPolicy $sslPolicy -ClientAuthConfiguration $clientAuthConfig -TrustedClientCertificates $trustedClient01
+$listener = New-AzApplicationGatewayHttpListener -Name $listenerName -Protocol Https -SslCertificate $sslCert -FrontendIPConfiguration $fipconfig -FrontendPort $port -SslProfile $sslProfile01
+```
+
+## Configure the backend pool and settings
+
+Set up backend pool and settings for your Application Gateway. Optionally, set up the backend trusted root certificate for end-to-end SSL encryption.
+
+```azurepowershell
+$certFilePath = $basedir + "/ScenarioTests/Data/ApplicationGatewayAuthCert.cer"
+$trustedRoot = New-AzApplicationGatewayTrustedRootCertificate -Name $trustedRootCertName -CertificateFile $certFilePath
+$pool = New-AzApplicationGatewayBackendAddressPool -Name $poolName -BackendIPAddresses www.microsoft.com, www.bing.com
+$poolSetting = New-AzApplicationGatewayBackendHttpSettings -Name $poolSettingName -Port 443 -Protocol Https -CookieBasedAffinity Enabled -PickHostNameFromBackendAddress -TrustedRootCertificate $trustedRoot
+```
+
+## Configure the rule
+
+Set up a rule on your Application Gateway.
+
+```azurepowershell
+$rule = New-AzApplicationGatewayRequestRoutingRule -Name $ruleName -RuleType basic -BackendHttpSettings $poolSetting -HttpListener $listener -BackendAddressPool $pool
+```
+
+## Set up default SSL policy for future listeners
+
+You've set up a listener specific SSL policy while setting up mutual authentication. In this step, you can optionally set the default SSL policy for future listeners you create.
+
+```azurepowershell
+$sslPolicyGlobal = New-AzApplicationGatewaySslPolicy -PolicyType Predefined -PolicyName "AppGwSslPolicy20170401"
+```
+
+## Create the Application Gateway
+
+Using everything we created above, deploy your Application Gateway.
+
+```azurepowershell
+$sku = New-AzApplicationGatewaySku -Name Standard_v2 -Tier Standard_v2
+$appgw = New-AzApplicationGateway -Name $appgwName -ResourceGroupName $rgname -Zone 1,2 -Location $location -BackendAddressPools $pool -BackendHttpSettingsCollection $poolSetting -FrontendIpConfigurations $fipconfig -GatewayIpConfigurations $gipconfig -FrontendPorts $port -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku -SslPolicy $sslPolicyGlobal -TrustedRootCertificate $trustedRoot -AutoscaleConfiguration $autoscaleConfig -TrustedClientCertificates $trustedClient01 -SslProfiles $sslProfile01 -SslCertificates $sslCert
+```
+
+## Clean up resources
+
+When no longer needed, remove the resource group, application gateway, and all related resources using [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup).
+
+```azurepowershell
+Remove-AzResourceGroup -Name $rgname
+```
+
+## Renew expired client CA certificates
+
+If your client CA certificate has expired, you can update the certificate on your gateway through the following steps:
+
+1. Sign in to Azure
+ ```azurepowershell
+ Connect-AzAccount
+ Select-AzSubscription -Subscription "<sub name>"
+ ```
+2. Get your Application Gateway configuration
+ ```azurepowershell
+ $gateway = Get-AzApplicationGateway -Name "<gateway-name>" -ResourceGroupName "<resource-group-name>"
+ ```
+3. Remove the trusted client certificate from the gateway
+ ```azurepowershell
+ Remove-AzApplicationGatewayTrustedClientCertificate -Name "<name-of-client-certificate>" -ApplicationGateway $gateway
+ ```
+4. Add the new certificate onto the gateway
+ ```azurepowershell
+ Add-AzApplicationGatewayTrustedClientCertificate -ApplicationGateway $gateway -Name "<name-of-new-cert>" -CertificateFile "<path-to-certificate-file>"
+ ```
+5. Update the gateway with the new certificate
+ ```azurepowershell
+ Set-AzApplicationGateway -ApplicationGateway $gateway
+ ```
+
+## Next steps
+
+- [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
application-gateway Mutual Authentication Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/mutual-authentication-troubleshooting.md
+
+ Title: Troubleshoot mutual authentication on Azure Application Gateway
+description: Learn how to troubleshoot mutual authentication on Application Gateway
++++ Last updated : 04/02/2021+++
+# Troubleshooting mutual authentication errors in Application Gateway (Preview)
+
+Learn how to troubleshoot problems with mutual authentication when using Application Gateway.
+
+## Overview
+
+After configuring mutual authentication on an Application Gateway, there can be a number of errors that appear when trying to use mutual authentication. Some common causes for errors include:
+
+* Uploaded a certificate or certificate chain without a root CA certificate
+* Uploaded a certificate chain with multiple root CA certificates
+* Uploaded a certificate chain that only contained a leaf certificate without a CA certificate
+* Validation errors due to issuer DN mismatch
+
+We'll go through different scenarios that you might run into and how to troubleshoot them. We'll then address the error codes and explain the likely causes of certain errors you might be seeing with mutual authentication.
+
+## Scenario troubleshooting - configuration problems
+There are a few scenarios that you might be facing when trying to configure mutual authentication. We'll walk through how to troubleshoot some of the most common pitfalls.
+
+### Self-signed certificate
+
+#### Problem
+
+The client certificate you uploaded is a self-signed certificate and is resulting in the error code ApplicationGatewayTrustedClientCertificateDoesNotContainAnyCACertificate.
+
+#### Solution
+
+Double check that the self-signed certificate you're using has the extension *BasicConstraintsOid* = "2.5.29.19", which indicates that the subject can act as a CA. This ensures that the certificate used is a CA certificate. For more information about how to generate self-signed client certificates, check out [trusted client certificates](./mutual-authentication-certificate-management.md).
+
+## Scenario troubleshooting - connectivity problems
+
+You might have been able to configure mutual authentication without any problems but you're running into problems when sending requests to your Application Gateway. We address some common problems and solutions in the following section. You can find the *sslClientVerify* property in the access logs of your Application Gateway.
+
+### SslClientVerify is NONE
+
+#### Problem
+
+The property *sslClientVerify* is appearing as "NONE" in your access logs.
+
+#### Solution
+
+This is seen when the client doesn't send a client certificate when sending a request to the Application Gateway. This could happen if the client sending the request to the Application Gateway isn't configured correctly to use client certificates. One way to verify that the client authentication setup on Application Gateway is working as expected is through the following OpenSSL command:
+
+```bash
+openssl s_client -connect <hostname:port> -cert <path-to-certificate> -key <client-private-key-file>
+```
+
+The `-cert` flag specifies the leaf certificate, and the `-key` flag specifies the client private key file.
+
+For more information on how to use the OpenSSL `s_client` command, check out their [manual page](https://www.openssl.org/docs/man1.0.2/man1/openssl-s_client.html).
+
+### SslClientVerify is FAILED
+
+#### Problem
+
+The property *sslClientVerify* is appearing as "FAILED" in your access logs.
+
+#### Solution
+
+There are a number of potential causes for failures in the access logs. Below is a list of common causes for failure:
+* **Unable to get issuer certificate:** The issuer certificate of the client certificate couldn't be found. This normally means the trusted client CA certificate chain is not complete on the Application Gateway. Validate that the trusted client CA certificate chain uploaded on the Application Gateway is complete.
+* **Unable to get local issuer certificate:** Similar to unable to get issuer certificate, the issuer certificate of the client certificate couldn't be found. This normally means the trusted client CA certificate chain is not complete on the Application Gateway. Validate that the trusted client CA certificate chain uploaded on the Application Gateway is complete.
+* **Unable to verify the first certificate:** Unable to verify the client certificate. This error occurs specifically when the client presents only the leaf certificate, whose issuer is not trusted. Validate that the trusted client CA certificate chain uploaded on the Application Gateway is complete.
+* **Unable to verify the client certificate issuer:** This error occurs when the configuration *VerifyClientCertIssuerDN* is set to true. This typically happens when the Issuer DN of the client certificate doesn't match the *ClientCertificateIssuerDN* extracted from the trusted client CA certificate chain uploaded by the customer. For more information about how Application Gateway extracts the *ClientCertificateIssuerDN*, check out [Application Gateway extracting issuer DN](./mutual-authentication-overview.md#verify-client-certificate-dn). As best practice, make sure you're uploading one certificate chain per file to Application Gateway.
+
+For more information on how to extract the entire trusted client CA certificate chain to upload to Application Gateway, see [how to extract trusted client CA certificate chains](./mutual-authentication-certificate-management.md).
+
+## Error code troubleshooting
+If you're seeing any of the following error codes, we have a few recommended solutions to help resolve the problem you might be facing.
+
+### Error code: ApplicationGatewayTrustedClientCertificateMustSpecifyData
+
+#### Cause
+
+Certificate data is missing. The uploaded certificate might have been an empty file without any certificate data.
+
+#### Solution
+
+Validate that the certificate file uploaded does not have any missing data.
+
+### Error code: ApplicationGatewayTrustedClientCertificateMustNotHavePrivateKey
+
+#### Cause
+
+The certificate chain contains a private key. The certificate chain must not include a private key.
+
+#### Solution
+
+Double check the certificate chain that was uploaded and remove the private key that was part of the chain. Reupload the chain without the private key.
+
+### Error code: ApplicationGatewayTrustedClientCertificateInvalidData
+
+#### Cause
+
+There are two potential causes behind this error code.
+1. The parsing failed due to the chain not being presented in the right format. Application Gateway expects a certificate chain to be in PEM format and also expects individual certificate data to be delimited.
+2. The parser didn't find anything to parse. The file uploaded could potentially only have had the delimiters but no certificate data.
+
+#### Solution
+
+Depending on the cause of this error, there are two potential solutions.
+* Validate that the certificate chain uploaded was in the right format (PEM) and that the certificate data was properly delimited.
+* Check that the certificate file uploaded contained the certificate data in addition to the delimiters.
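As a rough pre-upload format check, you can count the PEM delimiters in the file; a demo file is created here in place of your real chain file:

```shell
# Demo chain file; substitute your real PEM file.
printf -- "-----BEGIN CERTIFICATE-----\nMIIBdemo\n-----END CERTIFICATE-----\n" > chain.pem

# A well-formed PEM chain has at least one certificate block, and the
# BEGIN and END delimiter counts match.
begin=$(grep -c -e "-----BEGIN CERTIFICATE-----" chain.pem)
end=$(grep -c -e "-----END CERTIFICATE-----" chain.pem)
echo "$begin $end"
```

A delimiter count of zero, or mismatched counts, suggests the file isn't valid PEM; note this check doesn't validate the certificate data between the delimiters.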
+
+### Error code: ApplicationGatewayTrustedClientCertificateDoesNotContainAnyCACertificate
+
+#### Cause
+
+The certificate uploaded contained only a leaf certificate without a CA certificate. Uploading a certificate chain with CA certificates and a leaf certificate is acceptable because the leaf certificate is ignored, but the file must contain at least one CA certificate.
+
+#### Solution
+
+Double check that the uploaded certificate chain contains more than just the leaf certificate. The *BasicConstraintsOid* = "2.5.29.19" extension should be present and indicate that the subject can act as a CA.
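The Basic Constraints extension can be inspected with OpenSSL. The sketch below generates a demo self-signed CA certificate; substitute your own file (`-addext` requires OpenSSL 1.1.1 or later):

```shell
# Demo self-signed CA certificate; substitute your own file.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout demo.key -out demoCA.pem -subj "/CN=Demo CA" \
  -addext "basicConstraints=critical,CA:TRUE"

# The X509v3 Basic Constraints extension (OID 2.5.29.19) must report CA:TRUE.
openssl x509 -in demoCA.pem -noout -text | grep "CA:"
```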
+
+### Error code: ApplicationGatewayOnlyOneRootCAAllowedInTrustedClientCertificate
+
+#### Cause
+
+The certificate chain contained multiple root CA certificates *or* contained zero root CA certificates.
+
+#### Solution
+
+Certificates uploaded must contain exactly one root CA certificate (and as many intermediate CA certificates as needed).
++
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/rewrite-http-headers-url.md
Application gateway supports the following server variables:
| ssl_enabled | "On" if the connection operates in TLS mode. Otherwise, an empty string. | | uri_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, uri_path value will be `/article.aspx` |
-
+### Mutual authentication server variables (Preview)
+
+Application Gateway supports the following server variables for mutual authentication scenarios. Use these server variables in the same way as the server variables above.
+
+| Variable name | Description |
+| - | - |
+| client_certificate | The client certificate in PEM format for an established SSL connection. |
+| client_certificate_end_date| The end date of the client certificate. |
+| client_certificate_fingerprint| The SHA1 fingerprint of the client certificate for an established SSL connection. |
+| client_certificate_issuer | The "issuer DN" string of the client certificate for an established SSL connection. |
+| client_certificate_serial | The serial number of the client certificate for an established SSL connection. |
+| client_certificate_start_date| The start date of the client certificate. |
+| client_certificate_subject| The "subject DN" string of the client certificate for an established SSL connection. |
+| client_certificate_verification| The result of the client certificate verification: *SUCCESS*, *FAILED:<reason>*, or *NONE* if a certificate was not present. |
## Rewrite configuration
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/tutorial-ssl-cli.md
# Create an application gateway with TLS termination using the Azure CLI
-You can use the Azure CLI to create an [application gateway](overview.md) with a certificate for [TLS termination](ssl-overview.md). For backend servers, you can use a [virtual machine scale set](../virtual-machine-scale-sets/overview.md) . In this example, the scale set contains two virtual machine instances that are added to the default backend pool of the application gateway.
+You can use the Azure CLI to create an [application gateway](overview.md) with a certificate for [TLS termination](ssl-overview.md). For backend servers, you can use a [virtual machine scale set](../virtual-machine-scale-sets/overview.md). In this example, the scale set contains two virtual machine instances that are added to the default backend pool of the application gateway.
In this article, you learn how to:
automanage Automanage Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-linux.md
Automanage supports the following Linux distributions and versions:
|Service |Description |Environments Supported<sup>1</sup> |Preferences supported<sup>1</sup> | |--||-|-|
-|VM Insights Monitoring |Azure Monitor for VMs monitors the performance and health of your virtual machines, including their running processes and dependencies on other resources. Learn [more](../azure-monitor/vm/vminsights-overview.md). |Production |No |
-|Backup |Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Learn [more](../backup/backup-azure-vms-introduction.md). Charges are based on the number and size of VMs being protected. Learn [more](https://azure.microsoft.com/pricing/details/backup/). |Production |Yes |
-|Azure Security Center |Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud. Learn [more](../security-center/security-center-introduction.md). Automanage will configure the subscription where your VM resides to the free-tier offering of Azure Security Center. If your subscription is already onboarded to Azure Security Center, then Automanage will not reconfigure it. |Production, Dev/Test |No |
-|Update Management |You can use Update Management in Azure Automation to manage operating system updates for your virtual machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test |No |
-|Change Tracking & Inventory |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No |
-|Azure Guest Configuration | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the Guest Configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
-|Azure Automation Account |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No |
-|Log Analytics Workspace |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
+|[VM Insights Monitoring](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-overview) |Azure Monitor for VMs monitors the performance and health of your virtual machines, including their running processes and dependencies on other resources. Learn [more](../azure-monitor/vm/vminsights-overview.md). |Production |No |
+|[Backup](https://docs.microsoft.com/azure/backup/backup-overview) |Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Learn [more](../backup/backup-azure-vms-introduction.md). Charges are based on the number and size of VMs being protected. Learn [more](https://azure.microsoft.com/pricing/details/backup/). |Production |Yes |
+|[Azure Security Center](https://docs.microsoft.com/azure/security-center/security-center-introduction) |Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud. Learn [more](../security-center/security-center-introduction.md). Automanage will configure the subscription where your VM resides to the free-tier offering of Azure Security Center. If your subscription is already onboarded to Azure Security Center, then Automanage will not reconfigure it. |Production, Dev/Test |No |
+|[Update Management](https://docs.microsoft.com/azure/automation/update-management/overview) |You can use Update Management in Azure Automation to manage operating system updates for your virtual machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test |No |
+|[Change Tracking & Inventory](https://docs.microsoft.com/azure/automation/change-tracking/overview) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons, software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No |
+|[Azure Guest Configuration](https://docs.microsoft.com/azure/governance/policy/concepts/guest-configuration) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the Guest Configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
+|[Boot Diagnostics](https://docs.microsoft.com/azure/virtual-machines/boot-diagnostics) | Boot diagnostics is a debugging feature for Azure virtual machines (VMs) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. |Production, Dev/Test |No |
+|[Azure Automation Account](https://docs.microsoft.com/azure/automation/automation-create-standalone-account) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No |
+|[Log Analytics Workspace](https://docs.microsoft.com/azure/azure-monitor/logs/log-analytics-overview) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
<sup>1</sup> The environment selection is available when you are enabling Automanage. Learn [more](automanage-virtual-machines.md#environment-configuration). You can also adjust the default settings of the environment and set your own preferences within the best practices constraints.
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-virtual-machines.md
If you are enabling Automanage with a new Automanage account:
If you are enabling Automanage with an existing Automanage account:
* **Contributor** role on the resource group containing your VMs
+The Automanage account will be granted **Contributor** and **Resource Policy Contributor** permissions to perform actions on Automanaged machines.
+ > [!NOTE] > If you want to use Automanage on a VM that is connected to a workspace in a different subscription, you must have the permissions described above on each subscription.
If it is your first time enabling Automanage for your VM, you can search in the
The only time you might need to interact with this VM to manage these services is in the event we attempted to remediate your VM, but failed to do so. If we successfully remediate your VM, we will bring it back into compliance without even alerting you. For more details, see [Status of VMs](#status-of-vms).
+## Enabling Automanage for VMs using Azure Policy
+You can also enable Automanage on VMs at scale using the built-in Azure Policy. The policy has a DeployIfNotExists effect, which means that all eligible VMs located within the scope of the policy will be automatically onboarded to Automanage VM Best Practices.
+
+A direct link to the policy is [here](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F270610db-8c04-438a-a739-e8e6745b22d3).
+
+### How to apply the policy
+1. Click the **Assign** button when viewing the policy definition
+1. Select the scope at which you want to apply the policy (can be management group, subscription, or resource group)
+1. Under **Parameters**, specify parameters for the Automanage account, Configuration profile, and Effect (the effect should usually be DeployIfNotExists)
+ 1. If you don't have an Automanage account, you will have to [create one](#create-an-automanage-account).
+1. Under **Remediation**, check the "Create a remediation task" checkbox. This will perform onboarding to Automanage.
+1. Click **Review + create** and ensure that all settings look good.
+1. Click **Create**.
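The portal steps above can also be sketched with the Azure CLI. This is a minimal, hedged sketch: the assignment name, scope, region, and the policy's parameter names shown here are illustrative placeholders — check the policy definition's **Parameters** tab for the exact parameter names before running anything.

```shell
# Sketch only: assign the built-in Automanage policy at subscription scope.
# A DeployIfNotExists policy needs a managed identity and a location.
az policy assignment create \
  --name "enable-automanage" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "270610db-8c04-438a-a739-e8e6745b22d3" \
  --location "<region>" \
  --assign-identity

# Existing (non-compliant) VMs are only onboarded via a remediation task:
az policy remediation create \
  --name "automanage-remediation" \
  --policy-assignment "enable-automanage"
```

Creating the remediation task from the CLI corresponds to checking the remediation checkbox in the portal flow above.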
## Environment configuration
If you are enabling Automanage with an existing Automanage Account, you need to
> [!NOTE] > When you disable Automanage Best Practices, the Automanage Account's permissions on any associated subscriptions will remain. Manually remove the permissions by going to the subscription's IAM page or delete the Automanage Account. The Automanage Account cannot be deleted if it is still managing any machines.
+### Create an Automanage Account
+You may create an Automanage Account using the portal or an ARM template.
+
+#### Portal
+1. Navigate to the **Automanage** blade in the portal
+1. Click **Enable on existing machine**
+1. Under **Advanced**, click "Create a new account"
+1. Fill in the required fields and click **Create**
+
+#### ARM template
+Save the following ARM template as `azuredeploy.json` and run the following command:
+`az deployment group create --resource-group <resource group name> --template-file azuredeploy.json`
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "automanageAccountName": {
+ "type": "String"
+ },
+ "location": {
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "apiVersion": "2020-06-30-preview",
+ "type": "Microsoft.Automanage/accounts",
+ "name": "[parameters('automanageAccountName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ }
+ }
+ ]
+}
+```
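Because the template above declares two required parameters, the deployment command needs values for both. A hedged variant of the command shown earlier, with placeholder values for the resource group, account name, and region:

```shell
# Placeholder values — substitute your own resource group, name, and region.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters automanageAccountName=myAutomanageAccount location=eastus
```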
## Status of VMs
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server.md
Automanage supports the following Windows Server versions:
|Service |Description |Environments Supported<sup>1</sup> |Preferences supported<sup>1</sup> |
|--|--|--|--|
-|VM Insights Monitoring |Azure Monitor for VMs monitors the performance and health of your virtual machines, including their running processes and dependencies on other resources. Learn [more](../azure-monitor/vm/vminsights-overview.md). |Production |No |
-|Backup |Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Learn [more](../backup/backup-azure-vms-introduction.md). Charges are based on the number and size of VMs being protected. Learn [more](https://azure.microsoft.com/pricing/details/backup/). |Production |Yes |
-|Azure Security Center |Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud. Learn [more](../security-center/security-center-introduction.md). Automanage will configure the subscription where your VM resides to the free-tier offering of Azure Security Center. If your subscription is already onboarded to Azure Security Center, then Automanage will not reconfigure it. |Production, Dev/Test |No |
-|Microsoft Antimalware |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. Learn [more](../security/fundamentals/antimalware.md). |Production, Dev/Test |Yes |
-|Update Management |You can use Update Management in Azure Automation to manage operating system updates for your virtual machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test |No |
-|Change Tracking & Inventory |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No |
-|Azure Guest Configuration | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the Guest Configuration extension. For Windows machines, the guest configuration service will automatically reapply the baseline settings if they are out of compliance. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
-|Azure Automation Account |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No |
-|Log Analytics Workspace |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
+|[VM Insights Monitoring](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-overview) |Azure Monitor for VMs monitors the performance and health of your virtual machines, including their running processes and dependencies on other resources. Learn [more](../azure-monitor/vm/vminsights-overview.md). |Production |No |
+|[Backup](https://docs.microsoft.com/azure/backup/backup-overview) |Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Learn [more](../backup/backup-azure-vms-introduction.md). Charges are based on the number and size of VMs being protected. Learn [more](https://azure.microsoft.com/pricing/details/backup/). |Production |Yes |
+|[Azure Security Center](https://docs.microsoft.com/azure/security-center/security-center-introduction) |Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud. Learn [more](../security-center/security-center-introduction.md). Automanage will configure the subscription where your VM resides to the free-tier offering of Azure Security Center. If your subscription is already onboarded to Azure Security Center, then Automanage will not reconfigure it. |Production, Dev/Test |No |
+|[Microsoft Antimalware](https://docs.microsoft.com/azure/security/fundamentals/antimalware) |Microsoft Antimalware for Azure is free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. Learn [more](../security/fundamentals/antimalware.md). |Production, Dev/Test |Yes |
+|[Update Management](https://docs.microsoft.com/azure/automation/update-management/overview) |You can use Update Management in Azure Automation to manage operating system updates for your virtual machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test |No |
+|[Change Tracking & Inventory](https://docs.microsoft.com/azure/automation/change-tracking/overview) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons, software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No |
+|[Azure Guest Configuration](https://docs.microsoft.com/azure/governance/policy/concepts/guest-configuration) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the Guest Configuration extension. For Windows machines, the guest configuration service will automatically reapply the baseline settings if they are out of compliance. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
+|[Boot Diagnostics](https://docs.microsoft.com/azure/virtual-machines/boot-diagnostics) | Boot diagnostics is a debugging feature for Azure virtual machines (VMs) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. |Production, Dev/Test |No |
+|[Azure Automation Account](https://docs.microsoft.com/azure/automation/automation-create-standalone-account) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No |
+|[Log Analytics Workspace](https://docs.microsoft.com/azure/azure-monitor/logs/log-analytics-overview) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
<sup>1</sup> The environment selection is available when you are enabling Automanage. Learn [more](automanage-virtual-machines.md#environment-configuration). You can also adjust the default settings of the environment and set your own preferences within the best practices constraints.
automanage Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/common-errors.md
Automanage may fail to onboard a machine onto the service. This document explains how to troubleshoot deployment failures, shares some common reasons why deployments may fail, and describes potential next steps on mitigation.

## Troubleshooting deployment failures
-Onboarding a machine to Automanage will result in an Azure Resource Manager deployment being created. If onboarding fails, it may be helpful to consult the deployment for further details as to why it failed. There are links to the deployments in the failure detail flyout, pictured below.
+Onboarding a machine to Automanage creates an Azure Resource Manager deployment. If onboarding fails, consult that deployment for details about why it failed. There are links to the deployments in the failure detail flyout, pictured below.
:::image type="content" source="media\common-errors\failure-flyout.png" alt-text="Automanage failure detail flyout.":::

### Check the deployments for the resource group containing the failed VM
-The failure flyout will contain a link to the deployments within the resource group that contains the machine that failed onboarding and a prefix name you can use to filter deployments with. Clicking the link will take you to the deployments blade, where you can then filter deployments to see Automanage deployments to your machine. If you're deploying across multiple regions, ensure that you click on the deployment in the correct region.
+The failure flyout contains a link to the deployments in the resource group containing the machine that failed onboarding, along with a prefix name you can use to filter those deployments. Clicking the deployment link takes you to the deployments blade, where you can filter deployments to see Automanage deployments to your machine. If you're deploying across multiple regions, ensure that you click the deployment in the correct region.
### Check the deployments for the subscription containing the failed VM
If you don't see any failures in the resource group deployment, then your next step would be to look at the deployments in your subscription containing the VM that failed onboarding. Click the **Deployments for subscription** link in the failure flyout and filter deployments using the **Automanage-DefaultResourceGroup** filter. Use the resource group name from the failure blade to filter deployments. The deployment name will be suffixed with a region name. If you're deploying across multiple regions, ensure that you click on the deployment in the correct region.
If you don't see any failed deployments in the resource group or subscription co
Error | Mitigation
:--|:--
-Automanage account insufficient permissions error | This may happen if you have recently moved a subscription containing a new Automanage Account into a new tenant. Steps to resolve this are located [here](./repair-automanage-account.md).
-Workspace region not matching region mapping requirements | Automanage was unable to onboard your machine but the Log Analytics workspace that the machine is currently linked to is not mapped to a supported Automation region. Ensure that your existing Log Analytics workspace and Automation account are located in a [supported region mapping](../automation/how-to/region-mappings.md).
-"Access denied because of the deny assignment with name 'System deny assignment created by managed application'" | A [denyAssignment](../role-based-access-control/deny-assignments.md) was created on your resource which prevented Automanage from accessing your resource. This may have been caused by either a [Blueprint](../governance/blueprints/concepts/resource-locking.md) or a [Managed Application](../azure-resource-manager/managed-applications/overview.md).
-"OS Information: Name='(null)', ver='(null)', agent status='Not Ready'." | Ensure that you're running a [minimum supported agent version](/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](../virtual-machines/extensions/update-linux-agent.md) and [Windows](../virtual-machines/extensions/agent-windows.md)).
+Automanage account insufficient permissions error | This error may occur if you have recently moved a subscription containing a new Automanage Account into a new tenant. Steps to resolve this error are located [here](./repair-automanage-account.md).
+Workspace region not matching region mapping requirements | Automanage was unable to onboard your machine because the Log Analytics workspace that the machine is currently linked to is not mapped to a supported Automation region. Ensure that your existing Log Analytics workspace and Automation account are located in a [supported region mapping](../automation/how-to/region-mappings.md).
+"Access denied because of the deny assignment with name 'System deny assignment created by managed application'" | A [denyAssignment](https://docs.microsoft.com/azure/role-based-access-control/deny-assignments) was created on your resource, which prevented Automanage from accessing your resource. This denyAssignment may have been created by either a [Blueprint](https://docs.microsoft.com/azure/governance/blueprints/concepts/resource-locking) or a [Managed Application](https://docs.microsoft.com/azure/azure-resource-manager/managed-applications/overview).
+"OS Information: Name='(null)', ver='(null)', agent status='Not Ready'." | Ensure that you're running a [minimum supported agent version](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](https://docs.microsoft.com/azure/virtual-machines/extensions/update-linux-agent) and [Windows](https://docs.microsoft.com/azure/virtual-machines/extensions/agent-windows)).
+"Unable to determine the OS for the VM OS Name:, ver . Please check that the VM Agent is running, the current status is Ready." | Ensure that you're running a [minimum supported agent version](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](https://docs.microsoft.com/azure/virtual-machines/extensions/update-linux-agent) and [Windows](https://docs.microsoft.com/azure/virtual-machines/extensions/agent-windows)).
"VM has reported a failure when processing extension 'IaaSAntimalware'" | Ensure you don't have another antimalware/antivirus offering already installed on your VM. If that fails, contact support.
ASC workspace: Automanage does not currently support the Log Analytics service in _location_. | Check that your VM is located in a [supported region](./automanage-virtual-machines.md#supported-regions).
The template deployment failed because of policy violation. Please see details for more information. | There is a policy preventing Automanage from onboarding your VM. Check the policies applied to the subscription or resource group containing the VM you want to onboard to Automanage.
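Several of the agent-related mitigations above ask you to confirm that the VM agent is present, running, and up to date. One hedged way to check the agent state the platform reports is through the instance view (the resource group and VM names below are placeholders):

```shell
# Show the VM agent version and per-extension-handler statuses
# as reported in the VM's instance view.
az vm get-instance-view \
  --resource-group myResourceGroup \
  --name myVM \
  --query "instanceView.vmAgent.{version:vmAgentVersion, statuses:statuses}"
```

If the agent reports `Not Ready` here, follow the per-OS troubleshooting links in the table above before retrying onboarding.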
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/overview.md
Machines connected to the Log Analytics workspace use the [Log Analytics agent](
> [!NOTE] > Change Tracking and Inventory requires linking a Log Analytics workspace to your Automation account. For a definitive list of supported regions, see [Azure Workspace mappings](../how-to/region-mappings.md). The region mappings don't affect the ability to manage VMs in a separate region from your Automation account.
+As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../../lighthouse/overview.md). Azure Lighthouse allows you to perform operations at scale across several Azure Active Directory (Azure AD) tenants at once, making management tasks like Change Tracking and Inventory more efficient across those tenants you're responsible for. Change Tracking and Inventory can manage machines in multiple subscriptions in the same tenant, or across tenants using [Azure delegated resource management](../../lighthouse/concepts/azure-delegated-resource-management.md).
+## Current limitations
+Change Tracking and Inventory has the following limitations and unsupported scenarios:
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/region-mappings.md
Title: Supported regions for linked Log Analytics workspace description: This article describes the supported region mappings between an Automation account and a Log Analytics workspace as it relates to certain features of Azure Automation. Previously updated : 02/17/2021 Last updated : 04/01/2021
The following table shows the supported mappings:
|EastUS2<sup>2</sup>|EastUS|
|WestUS|WestUS|
|WestUS2|WestUS2|
+|NorthCentralUS|NorthCentralUS|
|CentralUS|CentralUS|
|SouthCentralUS|SouthCentralUS|
|WestCentralUS|WestCentralUS|
+|**Brazil**||
+|BrazilSouth|BrazilSouth|
|**Canada**||
|CanadaCentral|CanadaCentral|
+|**China**||
+|ChinaEast2<sup>3</sup>|ChinaEast2|
|**Asia Pacific**||
-|AustraliaEast|AustraliaEast|
-|AustraliaSoutheast|AustraliaSoutheast|
|EastAsia|EastAsia|
|SoutheastAsia|SoutheastAsia|
+|**India**||
|CentralIndia|CentralIndia|
-|ChinaEast2<sup>3</sup>|ChinaEast2|
+|**Japan**||
|JapanEast|JapanEast|
+|**Australia**||
+|AustraliaEast|AustraliaEast|
+|AustraliaSoutheast|AustraliaSoutheast|
+|**Korea**||
+|KoreaCentral|KoreaCentral|
+|**Norway**||
+|NorwayEast|NorwayEast|
|**Europe**||
|NorthEurope|NorthEurope|
+|WestEurope|WestEurope|
+|**France**||
|FranceCentral|FranceCentral|
+|**United Kingdom**||
|UKSouth|UKSouth|
-|WestEurope|WestEurope|
+|**Switzerland**||
|SwitzerlandNorth|SwitzerlandNorth|
+|**United Arab Emirates**||
+|UAENorth|UAENorth|
|**US Gov**||
|USGovVirginia|USGovVirginia|
|USGovArizona<sup>3</sup>|USGovArizona|

<sup>1</sup> EastUS mapping for Log Analytics workspaces to Automation accounts isn't an exact region-to-region mapping, but is the correct mapping.

<sup>2</sup> EastUS2 mapping for Log Analytics workspaces to Automation accounts isn't an exact region-to-region mapping, but is the correct mapping.
azure-cache-for-redis Cache Aspnet Output Cache Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-aspnet-output-cache-provider.md
Configure the attributes with the values from your cache blade in the Microsoft
| *settingsClassName*<br/>*settingsMethodName* | string<br/>string | *n/a* | *These attributes can be specified only through either web.config or AppSettings.*<br/><br/>Use these attributes to provide a connection string. *settingsClassName* should be an assembly qualified class name that contains the method specified by *settingsMethodName*.<br/><br/>The method specified by *settingsMethodName* should be public, static, and void (not take any parameters), with a return type of **string**. This method returns the actual connection string. |
| *loggingClassName*<br/>*loggingMethodName* | string<br/>string | *n/a* | *These attributes can be specified only through either web.config or AppSettings.*<br/><br/>Use these attributes to debug your application by providing logs from Session State/Output Cache along with logs from StackExchange.Redis. *loggingClassName* should be an assembly qualified class name that contains the method specified by *loggingMethodName*.<br/><br/>The method specified by *loggingMethodName* should be public, static, and void (not take any parameters), with a return type of **System.IO.TextWriter**. |
| *applicationName* | string | The module name of the current process or "/" | *SessionStateProvider only*<br/>*This attribute can be specified only through either web.config or AppSettings.*<br/><br/>The app name prefix to use in Redis cache. The customer may use the same Redis cache for different purposes. To ensure that the session keys do not collide, it can be prefixed with the application name. |
-| *throwOnError* | boolean | true | *SessionStateProvider only*<br/>*This attribute can be specified only through either web.config or AppSettings.*<br/><br/>Whether to throw an exception when an error occurs.<br/><br/>For more about *throwOnError*, see [Notes on *throwOnError*](#notes-on-throwonerror) in the [Attribute notes](#attribute-notes) section. |>*Microsoft.Web.Redis.RedisSessionStateProvider.LastException*. |
+| *throwOnError* | boolean | true | *SessionStateProvider only*<br/>*This attribute can be specified only through either web.config or AppSettings.*<br/><br/>Whether to throw an exception when an error occurs.<br/><br/>For more about *throwOnError*, see [Notes on *throwOnError*](#notes-on-throwonerror) in the [Attribute notes](#attribute-notes) section. |
| *retryTimeoutInMilliseconds* | positive integer | 5000 | *SessionStateProvider only*<br/>*This attribute can be specified only through either web.config or AppSettings.*<br/><br/>How long to retry when an operation fails. If this value is less than *operationTimeoutInMilliseconds*, the provider will not retry.<br/><br/>For more about *retryTimeoutInMilliseconds*, see [Notes on *retryTimeoutInMilliseconds*](#notes-on-retrytimeoutinmilliseconds) in the [Attribute notes](#attribute-notes) section. |
| *redisSerializerType* | string | *n/a* | Specifies the assembly qualified type name of a class that implements Microsoft.Web.Redis.ISerializer and that contains the custom logic to serialize and deserialize the values. For more information, see [About *redisSerializerType*](#about-redisserializertype) in the [Attribute notes](#attribute-notes) section. |
Once these steps are performed, your application is configured to use the Redis
## Next steps
-Check out the [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md).
+Check out the [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md).
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-java.md
The archetype also generates a unit test for your function. When you change your
> [!NOTE]
> If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the project. In that case, use **Ctrl**+**C** to stop the host, navigate to the project's root folder, and run the previous command again.
-1. Copy the URL of your `HttpExample` function from this output to a browser and append the query string `?name=<YOUR_NAME>`, making the full URL like `http://localhost:7071/api/HttpExample?name=Functions`. The browser should display a message like `Hello Functions`:
-
- ![Result of the function run locally in the browser](./media/functions-create-first-azure-function-azure-cli/function-test-local-browser.png)
-
- The terminal in which you started your project also shows log output as you make requests.
+1. Copy the URL of your `HttpExample` function from this output to a browser and append the query string `?name=<YOUR_NAME>`, making the full URL like `http://localhost:7071/api/HttpExample?name=Functions`. The browser should display a message that echoes back your query string value. The terminal in which you started your project also shows log output as you make requests.
1. When you're done, use **Ctrl**+**C** and choose `y` to stop the functions host.
azure-functions Durable Functions Http Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-http-api.md
GET /admin/extensions/DurableTaskExtension/instances/{instanceId}
&showHistory=[true|false] &showHistoryOutput=[true|false] &showInput=[true|false]
+ &returnInternalServerErrorOnFailure=[true|false]
```
In version 2.x of the Functions runtime, the URL format has all the same parameters but with a slightly different prefix:
GET /runtime/webhooks/durabletask/instances/{instanceId}
&showHistory=[true|false] &showHistoryOutput=[true|false] &showInput=[true|false]
+ &returnInternalServerErrorOnFailure=[true|false]
```
Request parameters for this API include the default set mentioned previously as well as the following unique parameters:
| **`createdTimeFrom`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or after the given ISO8601 timestamp.|
| **`createdTimeTo`** | Query string | Optional parameter. When specified, filters the list of returned instances that were created at or before the given ISO8601 timestamp.|
| **`runtimeStatus`** | Query string | Optional parameter. When specified, filters the list of returned instances based on their runtime status. To see the list of possible runtime status values, see the [Querying instances](durable-functions-instance-management.md) article. |
+| **`returnInternalServerErrorOnFailure`** | Query string | Optional parameter. If set to `true`, this API will return an HTTP 500 response instead of a 200 if the instance is in a failure state. This parameter is intended for automated status polling scenarios. |
### Response
Several possible status code values can be returned.
-* **HTTP 200 (OK)**: The specified instance is in a completed state.
+* **HTTP 200 (OK)**: The specified instance is in a completed or failed state.
* **HTTP 202 (Accepted)**: The specified instance is in progress.
* **HTTP 400 (Bad Request)**: The specified instance failed or was terminated.
* **HTTP 404 (Not Found)**: The specified instance doesn't exist or has not started running.
-* **HTTP 500 (Internal Server Error)**: The specified instance failed with an unhandled exception.
+* **HTTP 500 (Internal Server Error)**: Returned only when `returnInternalServerErrorOnFailure` is set to `true` and the specified instance failed with an unhandled exception.
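The status codes above map naturally onto a polling decision. A minimal sketch in plain Python (not the Durable Functions SDK; it assumes the caller opted into `returnInternalServerErrorOnFailure=true`, so a failed instance surfaces as HTTP 500 rather than HTTP 200):

```python
def next_action(status_code: int) -> str:
    """Map a status-query HTTP response code to a polling decision."""
    actions = {
        200: "finished",               # instance completed
        202: "poll-again",             # instance still in progress
        400: "failed-or-terminated",   # instance failed or was terminated
        404: "not-found",              # unknown or not-yet-started instance
        500: "failed",                 # unhandled exception in the instance
    }
    return actions.get(status_code, "unexpected")
```

An automated poller would loop while `next_action` returns `"poll-again"`, honoring the `Retry-After` interval between requests.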
The response payload for the **HTTP 200** and **HTTP 202** cases is a JSON object with the following fields:
The response JSON may look like the following (formatted for readability):
## Next steps
> [!div class="nextstepaction"]
-> [Learn how to use Application Insights to monitor your durable functions](durable-functions-diagnostics.md)
+> [Learn how to use Application Insights to monitor your durable functions](durable-functions-diagnostics.md)
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/overview.md
An HTTP trigger endpoint function is created to support the schedule and sequenc
|Name |Trigger |Description |
|--|--|--|
-|AlertAvailabilityTest |Timer |This function is performs the availability test to make sure the primary function **AutoStopVM** is always available.|
+|AlertAvailabilityTest |Timer |This function performs the availability test to make sure the primary function **AutoStopVM** is always available.|
|AutoStop |HTTP |This function supports the **AutoStop** scenario, which is the entry point function that is called from Logic App.|
|AutoStopAvailabilityTest |Timer |This function performs the availability test to make sure the primary function **AutoStop** is always available.|
|AutoStopVM |HTTP |This function is triggered automatically by the VM alert when the alert condition is true.|
Specifying a list of VMs can be used when you need to perform the start and stop
- Your account has been granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) permission in the subscription.
-- Start/Stop VMs v2 (preview) is available in all Azure global regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=functions) page for Azure Functions. For the Azure Government cloud, it is available only in the US Government Virginia region.
+- Start/Stop VMs v2 (preview) is available in all Azure global and US Government cloud regions that are listed on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=functions) page for Azure Functions.
## Next steps
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/azure-secure-isolation-guidance.md
Table 5 summarizes available security guidance for customer virtual machines pro
**Table 5.** Security guidance for Azure virtual machines
-|VM|||Security guidance||
-||||||
-|**Windows**|[Secure policies](../virtual-machines/security-policy.md)|[Azure Disk Encryption](../virtual-machines/windows/disk-encryption-overview.md)|[Built-in security controls](../virtual-machines/windows/security-baseline.md)|[Security recommendations](../virtual-machines/security-recommendations.md)|
-|**Linux**|[Secure policies](../virtual-machines/security-policy.md)|[Azure Disk Encryption](../virtual-machines/linux/disk-encryption-overview.md)|[Built-in security controls](../virtual-machines/linux/security-baseline.md)|[Security recommendations](../virtual-machines/security-recommendations.md)|
+|VM|Security guidance|
+|||
+|**Windows**|[Secure policies](../virtual-machines/security-policy.md) <br/>[Azure Disk Encryption](../virtual-machines/windows/disk-encryption-overview.md) <br/> [Built-in security controls](../virtual-machines/windows/security-baseline.md) <br/> [Security recommendations](../virtual-machines/security-recommendations.md)|
+|**Linux**|[Secure policies](../virtual-machines/security-policy.md) <br/> [Azure Disk Encryption](../virtual-machines/linux/disk-encryption-overview.md) <br/> [Built-in security controls](../virtual-machines/linux/security-baseline.md) <br/> [Security recommendations](../virtual-machines/security-recommendations.md)|
#### Isolated Virtual Machines
Azure Compute offers virtual machine sizes that are [isolated to a specific hardware type](../virtual-machines/isolation.md) and dedicated to a single customer. These VM instances allow customer workloads to be deployed on dedicated physical servers. Utilizing Isolated VMs essentially guarantees that a customer VM will be the only one running on that specific server node. Customers can also choose to further subdivide the resources on these Isolated VMs by using [Azure support for nested Virtual Machines](https://azure.microsoft.com/blog/nested-virtualization-in-azure/).
Transferring large volumes of data across the Internet is inherently unreliable.
:::image type="content" source="./media/secure-isolation-fig13.png" alt-text="Block blob partitioning of data into individual blocks":::
**Figure 13.** Block blob partitioning of data into individual blocks
-Customers can upload blocks in any order and determine their sequence in the final block list commitment step. Customers can also upload a new block to replace an existing uncommitted block of the same block ID.
+Customers can upload blocks in any order and determine their sequence in the final blocklist commitment step. Customers can also upload a new block to replace an existing uncommitted block of the same block ID.
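The block semantics described above — out-of-order upload, replacement of an uncommitted block by ID, and ordering fixed only at commit time — can be sketched with a toy model. This is plain Python for illustration, not the Azure Storage SDK; the class and method names are invented:

```python
class BlockBlobSketch:
    """Toy model of block blob staging and commit semantics."""

    def __init__(self):
        self.uncommitted = {}   # block ID -> staged data
        self.committed = []     # ordered block IDs after commit

    def stage_block(self, block_id: str, data: bytes) -> None:
        # Blocks can be staged in any order; re-staging an uncommitted
        # block with the same ID replaces the earlier data.
        self.uncommitted[block_id] = data

    def commit_block_list(self, block_ids: list) -> None:
        # The caller chooses the final sequence, independent of upload order.
        self.committed = list(block_ids)

    def content(self) -> bytes:
        return b"".join(self.uncommitted[b] for b in self.committed)
```

Staging `B2`, then `B1`, then re-staging `B2` and committing `["B1", "B2"]` yields the blocks in the committed order with the replacement applied.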
#### Partition layer
The partition layer is responsible for a) managing higher-level data abstractions (Blob, Table, Queue), b) providing a scalable object namespace, c) providing transaction ordering and strong consistency for objects, d) storing object data on top of the stream layer, and e) caching object data to reduce disk I/O. This layer also provides asynchronous geo-replication of data and is focused on replicating data across stamps. Inter-stamp replication is done in the background to keep a copy of the data in two locations for disaster recovery purposes.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Title: Azure Services in FedRAMP and DoD SRG Audit Scope
description: This article contains tables for Azure Public and Azure Government that illustrate what FedRAMP (Moderate vs. High) and DoD SRG (Impact level 2, 4, 5 or 6) audit scope a given service has reached. Previously updated : 01/29/2021 Last updated : 04/01/2021
This article provides a detailed list of in-scope cloud services across Azure Pu
* Planned 2021 = indicates the service will be reviewed by 3PAO and JAB in 2021. Once the service is authorized, status will be updated.
## Azure public services by audit scope
-| _Last Updated: January 2021_ |
+| _Last Updated: April 2021_ |
| Azure Service| DoD CC SRG IL 2 | FedRAMP Moderate | FedRAMP High | Planned 2021 |
| - |:-:|:-:|:-:|:-:|
| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure DevOps (formerly VSTS)](https://azure.microsoft.com/services/devops/) | | | | |
-| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | | | | :heavy_check_mark: |
+| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure DNS](https://azure.microsoft.com/services/dns/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure for Education](https://azure.microsoft.com/developer/students/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure File Sync](https://azure.microsoft.com/services/storage/files/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Azure VMware Solution](https://azure.microsoft.com/services/azure-vmware/) | | | | :heavy_check_mark: |
| [Backup](https://azure.microsoft.com/services/backup/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Batch](https://azure.microsoft.com/services/batch/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Health Bot](/healthbot/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Managed Desktop](https://www.microsoft.com/en-us/microsoft-365/modern-desktop/enterprise/microsoft-managed-desktop) | | | | |
| [Microsoft PowerApps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Microsoft PowerApps Portal](https://powerapps.microsoft.com/portals/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Stream](/stream/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Threat Experts](/windows/security/threat-protection/microsoft-defender-atp/microsoft-threat-experts) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
**&ast;&ast;** FedRAMP High certification for Azure Databricks is applicable for limited regions in Azure Commercial. To configure Azure Databricks for FedRAMP High use, please reach out to your Microsoft or Databricks Representative.
## Azure Government services by audit scope
-| _Last Updated: January 2021_ |
+| _Last Updated: April 2021_ |
| Azure Service | DoD CC SRG IL 2 | DoD CC SRG IL 4 | DoD CC SRG IL 5 (Azure Gov)**&ast;** | DoD CC SRG IL 5 (Azure DoD) **&ast;&ast;** | FedRAMP High | DoD CC SRG IL 6 |
| - |:-:|:-:|:-:|:-:|:-:|:-:|
| [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Microsoft Graph](/graph/overview) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Microsoft PowerApps](/powerapps/powerapps-overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
+| [Microsoft PowerApps Portal](https://powerapps.microsoft.com/portals/) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
| [Microsoft Stream](/stream/overview) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
| [Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
**&ast;** DoD CC SRG IL5 (Azure Gov) column shows DoD CC SRG IL5 certification status of services in Azure Government. For details, please refer to [Azure Government Isolation Guidelines for Impact Level 5](../documentation-government-impact-level-5.md)
-**&ast;&ast;** DoD CC SRG IL5 (Azure DoD) column shows DoD CC SRG IL5 certification status for services in Azure Government DoD regions.
+**&ast;&ast;** DoD CC SRG IL5 (Azure DoD) column shows DoD CC SRG IL5 certification status for services in Azure Government DoD regions.
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-stig-linux-vm.md
Previously updated : 03/16/2021 Last updated : 04/02/2021
# Deploy STIG-compliant Linux Virtual Machines (Preview)
Sign in at the [Azure portal](https://ms.portal.azure.com/) or [Azure Government
b. Enter Log Analytics workspace (optional, required to store log analytics).
- c. Enter Custom data (optional, only applicable for RHEL 7.7/7.8, CentOS 7.7/7.8/7.9 and Ubuntu 18.04).
+ c. Enter Custom data (optional, only applicable for RHEL 7.7/7.8, CentOS 7.7/7.8/7.9, and Ubuntu 18.04).
:::image type="content" source="./media/stig-linux-diagnostic-settings.png" alt-text="Management section showing where you select the diagnostic settings for the virtual machine" border="false":::
1. The deployed virtual machine can be found in the resource group used for the deployment. Since inbound RDP is disallowed, Azure Bastion must be used to connect to the VM.
+## High availability and resiliency
+
+Our solution template creates a single-instance virtual machine with a premium or standard operating system disk, which supports the [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
+
+We recommend that you deploy multiple instances of virtual machines configured behind Azure Load Balancer and/or Azure Traffic Manager for higher availability and resiliency.
+
+## Business continuity and disaster recovery (BCDR)
+
+As an organization, you need to adopt a business continuity and disaster recovery (BCDR) strategy that keeps your data safe, and your apps and workloads online, when planned and unplanned outages occur.
+
+[Azure Site Recovery](../site-recovery/site-recovery-overview.md) helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to the secondary location, and access apps from there. After the primary location is running again, you can fail back to it.
+
+Site Recovery can manage replication for:
+
+- Azure VMs replicating between Azure regions.
+- On-premises VMs, Azure Stack VMs, and physical servers.
+
+To learn more about backup and restore options for virtual machines in Azure, continue to [Overview of backup options for VMs](../virtual-machines/backup-recovery.md).
+
## Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources.
azure-government Documentation Government Stig Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-stig-windows-vm.md
Previously updated : 03/16/2021 Last updated : 04/02/2021
# Deploy STIG-compliant Windows Virtual Machines (Preview)
Sign in at the [Azure portal](https://ms.portal.azure.com/) or [Azure Government
1. The deployed virtual machine can be found in the resource group used for the deployment. Since inbound RDP is disallowed, Azure Bastion must be used to connect to the VM.
+## High availability and resiliency
+
+Our solution template creates a single-instance virtual machine with a premium or standard operating system disk, which supports the [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
+
+We recommend that you deploy multiple instances of virtual machines configured behind Azure Load Balancer and/or Azure Traffic Manager for higher availability and resiliency.
+
+## Business continuity and disaster recovery (BCDR)
+
+As an organization, you need to adopt a business continuity and disaster recovery (BCDR) strategy that keeps your data safe, and your apps and workloads online, when planned and unplanned outages occur.
+
+[Azure Site Recovery](../site-recovery/site-recovery-overview.md) helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to the secondary location, and access apps from there. After the primary location is running again, you can fail back to it.
+
+Site Recovery can manage replication for:
+
+- Azure VMs replicating between Azure regions.
+- On-premises VMs, Azure Stack VMs, and physical servers.
+
+To learn more about backup and restore options for virtual machines in Azure, continue to [Overview of backup options for VMs](../virtual-machines/backup-recovery.md).
+
## Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources.
azure-monitor Container Insights Azure Redhat4 Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-azure-redhat4-setup.md
Container insights supports monitoring Azure Red Hat OpenShift v4.x as described
- The [Helm 3](https://helm.sh/docs/intro/install/) CLI tool
+- Latest version of [OpenShift CLI](https://docs.openshift.com/container-platform/4.7/cli_reference/openshift_cli/getting-started-cli.html)
+
- [Bash version 4](https://www.gnu.org/software/bash/)
- The [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
To enable monitoring of your cluster using the PowerShell or bash script you dow
$azureArcClusterResourceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
```
-3. Configure the `$kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`. If you want to use the current context, set the value to `""`.
+3. Configure the `$kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`.
```powershell $kubeContext = "<kubeContext name of your k8s cluster>"
Perform the following steps to enable monitoring using the provided bash script.
export azureArcClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
```
-3. Configure the `kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`. If you want to use the current context, set the value to `""`.
+3. Configure the `kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`.
```bash export kubeContext="<kubeContext name of your k8s cluster>"
export proxyEndpoint=https://<user>:<password>@<proxyhost>:<port>
- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md) -- To learn how to stop monitoring your Arc enabled Kubernetes cluster with Container insights, see [How to stop monitoring your hybrid cluster](container-insights-optout-hybrid.md#how-to-stop-monitoring-on-arc-enabled-kubernetes).
+- To learn how to stop monitoring your Arc enabled Kubernetes cluster with Container insights, see [How to stop monitoring your hybrid cluster](container-insights-optout-hybrid.md#how-to-stop-monitoring-on-arc-enabled-kubernetes).
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
For important additional information, see [Monitoring Agents Overview](../agents
|IoTConnectorMeasurementIngestionLatencyMs|Yes|Average Group Stage Latency|Milliseconds|Average|The time period between when the IoT Connector received the device data and when the data is processed by the FHIR conversion stage.|Operation, ConnectorName|
|IoTConnectorNormalizedEvent|Yes|Number of Normalized Messages|Count|Sum|The total number of mapped normalized values outputted from the normalization stage of the Azure IoT Connector for FHIR.|Operation, ConnectorName|
|IoTConnectorTotalErrors|Yes|Total Error Count|Count|Sum|The total number of errors logged by the Azure IoT Connector for FHIR|Name, Operation, ErrorType, ErrorSeverity, ConnectorName|
-|ServiceApiErrors|Yes|Service Errors|Count|Sum|The total number of internal server errors generated by the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
-|ServiceApiLatency|Yes|Service Latency|Milliseconds|Average|The response latency of the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
-|ServiceApiRequests|Yes|Service Requests|Count|Sum|The total number of requests received by the service.|Protocol, Authentication, Operation, ResourceType, StatusCode, StatusCodeClass, StatusCodeText|
|TotalErrors|Yes|Total Errors|Count|Sum|The total number of internal server errors encountered by the service.|Protocol, StatusCode, StatusCodeClass, StatusCodeText|
|TotalLatency|Yes|Total Latency|Milliseconds|Average|The response latency of the service.|Protocol|
|TotalRequests|Yes|Total Requests|Count|Sum|The total number of requests received by the service.|Protocol|
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 03/03/2021 Last updated : 03/28/2021
In addition to the Pay-As-You-Go model, Log Analytics has **Capacity Reservation
In all pricing tiers, an event's data size is calculated from a string representation of the properties which are stored in Log Analytics for this event, whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the AzureActivity, Heartbeat and Usage types. To determine whether an event was excluded from billing for data ingestion, you can use the `_IsBillable` property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (1.0E9 bytes).
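The `_IsBillable` and `_BilledSize` properties mentioned above can be inspected directly in Log Analytics. A sketch of such a query (the 24-hour window and per-table grouping are illustrative choices):

```kusto
union withsource = tt *
| where TimeGenerated > ago(24h)
| summarize BilledGB = sum(_BilledSize) / 1.0E9 by tt, _IsBillable
| order by BilledGB desc
```

Rows where `_IsBillable` is `false` (for example, the AzureActivity, Heartbeat, and Usage types) incur no data ingestion charge.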
-Also, note that some solutions, such as [Azure Security Center](https://azure.microsoft.com/pricing/details/security-center/), [Azure Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/) and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
+Also, note that some solutions, such as [Azure Defender (Security Center)](https://azure.microsoft.com/pricing/details/azure-defender/), [Azure Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/) and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
### Log Analytics Dedicated Clusters
The cluster capacity reservation level is configured via programmatically with A
There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when [configuring your cluster](customer-managed-keys.md#customer-managed-key-operations). The two modes are:
-1. **Cluster**: in this case (which is the default), billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster is aggregated to calculate the daily bill for the cluster. Note that per-node allocations from [Azure Security Center](../../security-center/index.yml) are applied at the workspace level prior to this aggregation of aggregated data across all workspaces in the cluster.
+1. **Cluster**: in this case (which is the default), billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster. Note that per-node allocations from [Azure Defender (Security Center)](../../security-center/index.yml) are applied at the workspace level prior to this aggregation of data across all workspaces in the cluster.
-2. **Workspaces**: the Capacity Reservation costs for your Cluster are attributed proportionately to the workspaces in the Cluster (after accounting for per-node allocations from [Azure Security Center](../../security-center/index.yml) for each workspace.) If the total data volume ingested into a workspace for a day is less than the Capacity Reservation, then each workspace is billed for its ingested data at the effective per-GB Capacity Reservation rate by billing them a fraction of the Capacity Reservation, and the unused part of the Capacity Reservation is billed to the cluster resource. If the total data volume ingested into a workspace for a day is more than the Capacity Reservation, then each workspace is billed for a fraction of the Capacity Reservation based on its fraction of the ingested data that day, and each workspace for a fraction of the ingested data above the Capacity Reservation. There is nothing billed to the cluster resource if the total data volume ingested into a workspace for a day is over the Capacity Reservation.
+2. **Workspaces**: the Capacity Reservation costs for your Cluster are attributed proportionately to the workspaces in the Cluster (after accounting for per-node allocations from [Azure Defender (Security Center)](../../security-center/index.yml) for each workspace.) If the total data volume ingested into a workspace for a day is less than the Capacity Reservation, then each workspace is billed for its ingested data at the effective per-GB Capacity Reservation rate by billing them a fraction of the Capacity Reservation, and the unused part of the Capacity Reservation is billed to the cluster resource. If the total data volume ingested into a workspace for a day is more than the Capacity Reservation, then each workspace is billed for a fraction of the Capacity Reservation based on its fraction of the ingested data that day, and each workspace for a fraction of the ingested data above the Capacity Reservation. There is nothing billed to the cluster resource if the total data volume ingested into a workspace for a day is over the Capacity Reservation.
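The **Workspaces** attribution rule above can be made concrete with a small numeric sketch. This is plain Python with hypothetical prices and data volumes, not the actual billing engine:

```python
def attribute_daily_cost(ingested_gb, reservation_gb, reservation_cost, overage_rate):
    """Split one day's cluster charges across workspaces (illustrative model).

    ingested_gb: dict mapping workspace name -> GB ingested that day.
    Returns (per-workspace charges, amount billed to the cluster resource).
    """
    total = sum(ingested_gb.values())
    effective_rate = reservation_cost / reservation_gb  # per-GB reservation rate
    if total <= reservation_gb:
        # Under the reservation: each workspace pays the effective per-GB rate
        # for its own data; the unused remainder is billed to the cluster.
        charges = {ws: gb * effective_rate for ws, gb in ingested_gb.items()}
        cluster_charge = reservation_cost - sum(charges.values())
    else:
        # Over the reservation: the reservation cost and the overage are both
        # split in proportion to each workspace's share of the day's data,
        # and nothing is billed to the cluster resource.
        overage_cost = (total - reservation_gb) * overage_rate
        charges = {ws: (gb / total) * (reservation_cost + overage_cost)
                   for ws, gb in ingested_gb.items()}
        cluster_charge = 0.0
    return charges, cluster_charge
```

For example, with a 100 GB/day reservation costing 200 (currency units) and workspaces ingesting 30 GB and 50 GB, each pays the effective rate of 2 per GB and the unused 20 GB portion is billed to the cluster resource.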
In cluster billing options, data retention is billed per workspace. Note that cluster billing starts when the cluster is created, regardless of whether workspaces have been associated to the cluster. Also, note that workspaces associated to a cluster no longer have a pricing tier.
Log Analytics charges are added to your Azure bill. You can see details of your
## Viewing Log Analytics usage on your Azure bill
-Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=%2fazure%2fbilling%2fTOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spends for Azure resources. First, add a filter by "Resource type" (to microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics Clusters) will allow you to track your Log Analytics spend. Then for "Group by" select "Meter category" or "Meter". Note that other services such as Azure Security Center and Azure Sentinel also bill their usage against Log Analytics workspace resources. To see the mapping to Service name, you can select the Table view instead of a chart.
+Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=%2fazure%2fbilling%2fTOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spend for Azure resources. First, add a filter by "Resource type" (microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics Clusters) to track your Log Analytics spend. Then for "Group by", select "Meter category" or "Meter". Note that other services such as Azure Defender (Security Center) and Azure Sentinel also bill their usage against Log Analytics workspace resources. To see the mapping to service name, you can select the Table view instead of a chart.
More understanding of your usage can be gained by [downloading your usage from the Azure portal](../../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md#download-usage-in-azure-portal). In the downloaded spreadsheet you can see usage per Azure resource (e.g. Log Analytics workspace) per day. In this Excel spreadsheet, usage from your Log Analytics workspaces can be found by first filtering on the "Meter Category" column to show "Log Analytics", "Insight and Analytics" (used by some of the legacy pricing tiers) and "Azure Monitor" (used by Capacity Reservation pricing tiers), and then adding a filter on the "Instance ID" column which is "contains workspace" or "contains cluster" (the latter to include Log Analytics Cluster usage). The usage is shown in the "Consumed Quantity" column and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../../cost-management-billing/understand/review-individual-bill.md).
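The spreadsheet filtering described above can be approximated in a few lines of plain Python. The rows below stand in for the downloaded usage export (the resource paths are made up); the column names come from the export format described here:

```python
# Rows standing in for the downloaded usage export; column names follow the article.
rows = [
    {"Meter Category": "Log Analytics", "Instance ID": "/subs/1/workspaces/prod-ws", "Consumed Quantity": 12.5},
    {"Meter Category": "Virtual Machines", "Instance ID": "/subs/1/virtualMachines/vm1", "Consumed Quantity": 720.0},
    {"Meter Category": "Azure Monitor", "Instance ID": "/subs/1/clusters/prod-cluster", "Consumed Quantity": 40.0},
]

# Keep only Log Analytics-related meter categories, then only workspace/cluster resources.
categories = {"Log Analytics", "Insight and Analytics", "Azure Monitor"}
log_analytics = [
    r for r in rows
    if r["Meter Category"] in categories
    and ("workspace" in r["Instance ID"].lower() or "cluster" in r["Instance ID"].lower())
]

# Sum of the "Consumed Quantity" column (units are given by "Unit of Measure").
total_quantity = sum(r["Consumed Quantity"] for r in log_analytics)
```

This mirrors the two manual filters: the meter-category filter, then the "Instance ID contains workspace or cluster" filter.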
You can also [set the pricing tier via Azure Resource Manager](./resource-manage
## Legacy pricing tiers
-Subscriptions who had a Log Analytics workspace or Application Insights resource in it before April 2, 2018, or are linked to an Enterprise Agreement that started prior to February 1, 2019, will continue to have access to use the legacy pricing tiers: **Free**, **Standalone (Per GB)** and **Per Node (OMS)**. Workspaces in the Free pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Azure Security Center](../../security-center/index.yml)) and the data retention is limited to 7 days. The Free pricing tier is intended only for evaluation purposes. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days.
+Subscriptions that contained a Log Analytics workspace or Application Insights resource before April 2, 2018, or that are linked to an Enterprise Agreement that started prior to February 1, 2019, continue to have access to the legacy pricing tiers: **Free**, **Standalone (Per GB)** and **Per Node (OMS)**. Workspaces in the Free pricing tier have daily data ingestion limited to 500 MB (except for security data types collected by [Azure Defender (Security Center)](../../security-center/index.yml)) and data retention limited to 7 days. The Free pricing tier is intended only for evaluation purposes. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days.
Usage on the Standalone pricing tier is billed by the ingested data volume. It is reported in the **Log Analytics** service and the meter is named "Data Analyzed".
The Per Node pricing tier charges per monitored VM (node) on an hour granularity
1. Node: this is usage for the number of monitored nodes (VMs) in units of node*months.
2. Data Overage per Node: this is the number of GB of data ingested in excess of the aggregated data allocation.
-3. Data Included per Node: this is the amount of ingested data that was covered by the aggregated data allocation. This meter is also used when the workspace is in all pricing tiers to show the amount of data covered by the Azure Security Center.
+3. Data Included per Node: this is the amount of ingested data that was covered by the aggregated data allocation. This meter is also used across all pricing tiers to show the amount of data covered by Azure Defender (Security Center).
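The split between the included and overage meters can be sketched as follows. The helper is hypothetical, and the 500 MB/node/day default reflects the per-node allowance discussed in this article; verify your workspace's actual allocation:

```python
def per_node_meters(node_count, ingested_gb, allocation_gb_per_node=0.5):
    """Hypothetical helper splitting one day's ingestion into the
    'Data Included per Node' and 'Data Overage per Node' meters.
    The 0.5 GB (500 MB) per-node daily allocation is an assumption;
    check your workspace's actual entitlement."""
    allocation = node_count * allocation_gb_per_node  # aggregated data allocation
    included = min(ingested_gb, allocation)
    overage = max(0.0, ingested_gb - allocation)
    return included, overage
```

For example, 10 monitored nodes ingesting 7 GB in a day would give a 5 GB aggregated allocation, so 5 GB lands on the included meter and 2 GB on the overage meter.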
> [!TIP] > If your workspace has access to the **Per Node** pricing tier, but you're wondering whether it would cost less in a Pay-As-You-Go tier, you can [use the query below](#evaluating-the-legacy-per-node-pricing-tier) to easily get a recommendation. Workspaces created prior to April 2016 can also access the original **Standard** and **Premium** pricing tiers which have fixed data retention of 30 and 365 days respectively. New workspaces cannot be created in the **Standard** or **Premium** pricing tiers, and if a workspace is moved out of these tiers, it cannot be moved back. Data ingestion meters for these legacy tiers are called "Data analyzed".
-There are also some behaviors between the use of legacy Log Analytics tiers and how usage is billed for [Azure Security Center](../../security-center/index.yml).
+There are also some differences in how usage is billed for [Azure Defender (Security Center)](../../security-center/index.yml) depending on the legacy Log Analytics pricing tier in use.
-1. If the workspace is in the legacy Standard or Premium tier, Azure Security Center will be billed only for Log Analytics data ingestion, not per node.
-2. If the workspace is in the legacy Per Node tier, Azure Security Center will be billed using the current [Azure Security Center node-based pricing model](https://azure.microsoft.com/pricing/details/security-center/).
-3. In other pricing tiers (including Capacity Reservations), if Azure Security Center was enabled before June 19, 2017, Azure Security Center will be billed only for Log Analytics data ingestion. Otherwise Azure Security Center will be billed using the current Azure Security Center node-based pricing model.
+1. If the workspace is in the legacy Standard or Premium tier, Azure Defender will be billed only for Log Analytics data ingestion, not per node.
+2. If the workspace is in the legacy Per Node tier, Azure Defender will be billed using the current [Azure Defender node-based pricing model](https://azure.microsoft.com/pricing/details/security-center/).
+3. In other pricing tiers (including Capacity Reservations), if Azure Defender was enabled before June 19, 2017, Azure Defender will be billed only for Log Analytics data ingestion. Otherwise Azure Defender will be billed using the current Azure Defender node-based pricing model.
More details of pricing tier limitations are available at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
None of the legacy pricing tiers has regional-based pricing.
> [!NOTE] > To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
-## Log Analytics and Security Center
+## Log Analytics and Azure Defender (Security Center)
-[Azure Security Center](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Security Center provides 500 MB/node/day allocation against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution is not running on the workspace or solution targeting is enabled. If the workspace is in the legacy Per Node pricing tier, the Security Center and Log Analytics allocations are combined and applied jointly to all billable ingested data.
+[Azure Defender (Security Center)](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Azure Defender provides a 500 MB/node/day allocation against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution is not running on the workspace or solution targeting is enabled ([learn more](https://docs.microsoft.com/azure/security-center/security-center-pricing#what-data-types-are-included-in-the-500-mb-free-data-limit)). If the workspace is in the legacy Per Node pricing tier, the Azure Defender and Log Analytics allocations are combined and applied jointly to all billable ingested data.
## Change the data retention period
Soon after the daily limit is reached, the collection of billable data types sto
> The daily cap cannot stop data collection at precisely the specified cap level, and some excess data is expected, particularly if the workspace is receiving high volumes of data. See [below](#view-the-effect-of-the-daily-cap) for a query that is helpful in studying the daily cap behavior. > [!WARNING]
-> The daily cap does not stop the collection of data types WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus, Update and UpdateSummary, except for workspaces in which Azure Security Center was installed before June 19, 2017.
+> The daily cap does not stop the collection of data types WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus, Update and UpdateSummary, except for workspaces in which Azure Defender (Security Center) was installed before June 19, 2017.
### Identify what daily data limit to define
The following steps describe how to configure a limit to manage the volume of da
The daily cap can be configured via ARM by setting the `dailyQuotaGb` parameter under `WorkspaceCapping` as described at [Workspaces - Create Or Update](/rest/api/loganalytics/workspaces/createorupdate#workspacecapping).
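As a reference point, a workspace resource with a daily cap set might look like the following ARM template fragment. The API version and surrounding properties shown here are illustrative; consult the linked reference for the current schema:

```json
{
  "type": "Microsoft.OperationalInsights/workspaces",
  "apiVersion": "2020-08-01",
  "name": "[parameters('workspaceName')]",
  "location": "[parameters('location')]",
  "properties": {
    "sku": { "name": "PerGB2018" },
    "retentionInDays": 30,
    "workspaceCapping": {
      "dailyQuotaGb": 10
    }
  }
}
```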
+You can track changes made to the daily cap using this query:
+
+```kusto
+_LogOperation
+| where Operation == "Workspace Configuration"
+| where Detail contains "Daily quota"
+```
+
+Learn more about the [_LogOperation](https://docs.microsoft.com/azure/azure-monitor/logs/monitor-workspace) function.
+ ### View the effect of the Daily Cap To view the effect of the daily cap, it's important to account for the security data types not included in the daily cap, and the reset hour for your workspace. The daily cap reset hour is visible in the **Daily Cap** page. The following query can be used to track the data volumes subject to the Daily Cap between daily cap resets. In this example, the workspace's reset hour is 14:00. You'll need to update this for your workspace.
Usage
While we present a visual cue in the Azure portal when your data limit threshold is met, this behavior doesn't necessarily align to how you manage operational issues requiring immediate attention. To receive an alert notification, you can create a new alert rule in Azure Monitor. To learn more, see [how to create, view, and manage alerts](../alerts/alerts-metric.md).
-To get you started, here are the recommended settings for the alert querying the `Operation` table using the `_LogOperation` function.
+To get you started, here are the recommended settings for the alert querying the `Operation` table using the `_LogOperation` function ([learn more](https://docs.microsoft.com/azure/azure-monitor/logs/monitor-workspace)).
- Target: Select your Log Analytics resource - Criteria:
Note that the clause `where _IsBillable = true` filters out data types from cert
### Data volume by solution
-The query used to view the billable data volume by solution over the last month (excluding the last partial day) is:
+The billable data volume by solution over the last month (excluding the last partial day) can be viewed using the [Usage](https://docs.microsoft.com/azure/azure-monitor/reference/tables/usage) data type:
```kusto Usage
Usage
### Data volume by computer
-The `Usage` data type does not include information at the computer level. To see the **size** of ingested data per computer, use the `_BilledSize` [property](./log-standard-columns.md#_billedsize), which provides the size in bytes:
+The `Usage` data type does not include information at the computer level. To see the **size** of ingested billable data per computer, use the `_BilledSize` [property](./log-standard-columns.md#_billedsize), which provides the size in bytes:
```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer
-| where _IsBillable == true
+find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer, Type
+| where _IsBillable == true and Type != "Usage"
| extend computerName = tolower(tostring(split(Computer, '.')[0])) | summarize BillableDataBytes = sum(_BilledSize) by computerName
-| sort by BillableDataBytes nulls last
+| sort by BillableDataBytes desc nulls last
```
-The `_IsBillable` [property](./log-standard-columns.md#_isbillable) specifies whether the ingested data will incur charges.
+The `_IsBillable` [property](./log-standard-columns.md#_isbillable) specifies whether the ingested data will incur charges. The Usage type is omitted because it's used only for analyzing data trends.
To see the **count** of billable events ingested per computer, use ```kusto find where TimeGenerated > ago(24h) project _IsBillable, Computer
-| where _IsBillable == true
+| where _IsBillable == true and Type != "Usage"
| extend computerName = tolower(tostring(split(Computer, '.')[0])) | summarize eventCount = count() by computerName
-| sort by eventCount nulls last
+| sort by eventCount desc nulls last
``` > [!TIP]
Some suggestions for reducing the volume of logs collected include:
| Source of high data volume | How to reduce data volume | | -- | - |
+| Data Collection Rules | The [Azure Monitor Agent](https://docs.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-overview) uses Data Collection Rules to manage the collection of data. You can [limit the collection of data](https://docs.microsoft.com/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent#limit-data-collection-with-custom-xpath-queries) using custom XPath queries. |
| Container Insights | [Configure Container Insights](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to collect only the data you required. | | Security events | Select [common or minimal security events](../../security-center/security-center-enable-data-collection.md#data-collection-tier) <br> Change the security audit policy to collect only needed events. In particular, review the need to collect events for <br> - [audit filtering platform](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772749(v=ws.10)) <br> - [audit registry](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941614(v%3dws.10))<br> - [audit file system](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772661(v%3dws.10))<br> - [audit kernel object](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941615(v%3dws.10))<br> - [audit handle manipulation](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772626(v%3dws.10))<br> - audit removable storage | | Performance counters | Change [performance counter configuration](../agents/data-sources-performance-counters.md) to: <br> - Reduce the frequency of collection <br> - Reduce number of performance counters |
The decision of whether workspaces with access to the legacy **Per Node** pricin
To facilitate this assessment, the following query can be used to make a recommendation for the optimal pricing tier based on a workspace's usage patterns. This query looks at the monitored nodes and data ingested into a workspace in the last 7 days, and for each day evaluates which pricing tier would have been optimal. To use the query, you need to specify
-1. whether the workspace is using Azure Security Center by setting `workspaceHasSecurityCenter` to `true` or `false`,
+1. whether the workspace is using Azure Defender (Security Center) by setting `workspaceHasSecurityCenter` to `true` or `false`,
2. update the prices if you have specific discounts, and
3. specify the number of days to look back and analyze by setting `daysToEvaluate`. This is useful if the query is taking too long trying to look at 7 days of data.
Here is the pricing tier recommendation query:
```kusto // Set these parameters before running query
-// Pricing details available at https://azure.microsoft.com/en-us/pricing/details/monitor/
+// Pricing details available at https://azure.microsoft.com/pricing/details/monitor/
let daysToEvaluate = 7; // Enter number of previous days to analyze (reduce if the query is taking too long) let workspaceHasSecurityCenter = false; // Specify if the workspace has Azure Security Center let PerNodePrice = 15.; // Enter your monthly price per monitored node
When data collection stops, the OperationStatus is **Warning**. When data collec
To be notified when data collection stops, use the steps described in *Create daily data cap* alert. Use the steps described in [create an action group](../alerts/action-groups.md) to configure an e-mail, webhook, or runbook action for the alert rule.
+## Late arriving data
+
+Situations can arise where data is ingested with very old timestamps, for instance if an agent cannot communicate with Log Analytics because of a connectivity issue, or when a host has an incorrectly set date/time. To diagnose these issues, use the `_TimeReceived` column ([learn more](https://docs.microsoft.com/azure/azure-monitor/logs/log-standard-columns#_timereceived)) in addition to the `TimeGenerated` column. `_TimeReceived` is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud.
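A minimal sketch of that comparison, assuming records exported with both columns (the records here are made up):

```python
from datetime import datetime, timedelta

# Flag records whose ingestion lagged far behind their generation time.
# Column names follow the article; the records themselves are illustrative.
records = [
    {"TimeGenerated": datetime(2021, 4, 1, 10, 0), "_TimeReceived": datetime(2021, 4, 1, 10, 1)},
    {"TimeGenerated": datetime(2021, 3, 29, 8, 0), "_TimeReceived": datetime(2021, 4, 1, 10, 2)},
]

# Records received more than an hour after they were generated arrived late;
# a negative gap would instead suggest the host clock was set in the future.
late_threshold = timedelta(hours=1)
late = [r for r in records if r["_TimeReceived"] - r["TimeGenerated"] > late_threshold]
```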
+ ## Limits summary There are some additional Log Analytics limits, some of which depend on the Log Analytics pricing tier. These are documented at [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md#log-analytics-workspaces).
There are some additional Log Analytics limits, some of which depend on the Log
- See [Log searches in Azure Monitor Logs](../logs/log-query-overview.md) to learn how to use the search language. You can use search queries to perform additional analysis on the usage data. - Use the steps described in [create a new log alert](../alerts/alerts-metric.md) to be notified when a search criteria is met. - Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers.-- To configure an effective event collection policy, review [Azure Security Center filtering policy](../../security-center/security-center-enable-data-collection.md).
+- To configure an effective event collection policy, review [Azure Defender (Security Center) filtering policy](../../security-center/security-center-enable-data-collection.md).
- Change [performance counter configuration](../agents/data-sources-performance-counters.md). - To modify your event collection settings, review [event log configuration](../agents/data-sources-windows-events.md). - To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).-- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
+- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
azure-resource-manager Template Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-best-practices.md
When your template works as expected, we recommend you continue using the same A
Don't use a parameter for the API version. Resource properties and values can vary by API version. IntelliSense in a code editor can't determine the correct schema when the API version is set to a parameter. If you pass in an API version that doesn't match the properties in your template, the deployment will fail.
-Don't use variables for the API version. In particular, don't use the [providers function](template-functions-resource.md#providers) to dynamically get API versions during deployment. The dynamically retrieved API version might not match the properties in your template.
+Don't use variables for the API version.
## Resource dependencies
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 02/10/2021 Last updated : 04/01/2021 # Resource functions for ARM templates
Resource Manager provides the following functions for getting resource values in
* [extensionResourceId](#extensionresourceid) * [list*](#list) * [pickZones](#pickzones)
-* [providers](#providers)
* [reference](#reference) * [resourceGroup](#resourcegroup) * [resourceId](#resourceid)
You can use the response from pickZones to determine whether to provide null for
-## providers
-
-`providers(providerNamespace, [resourceType])`
-
-Returns information about a resource provider and its supported resource types. If you don't provide a resource type, the function returns all the supported types for the resource provider.
-
-### Parameters
-
-| Parameter | Required | Type | Description |
-|: |: |: |: |
-| providerNamespace |Yes |string |Namespace of the provider |
-| resourceType |No |string |The type of resource within the specified namespace. |
-
-### Return value
-
-Each supported type is returned in the following format:
-
-```json
-{
- "resourceType": "{name of resource type}",
- "locations": [ all supported locations ],
- "apiVersions": [ all supported API versions ]
-}
-```
-
-Array ordering of the returned values isn't guaranteed.
-
-### Providers example
-
-The following [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/functions/providers.json) shows how to use the provider function:
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "providerNamespace": {
- "type": "string"
- },
- "resourceType": {
- "type": "string"
- }
- },
- "resources": [],
- "outputs": {
- "providerOutput": {
- "type": "object",
- "value": "[providers(parameters('providerNamespace'), parameters('resourceType'))]"
- }
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-param providerNamespace string
-param resourceType string
-
-output providerOutput array = providers(providerNamespace, resourceType)
-```
---
-For the **Microsoft.Web** resource provider and **sites** resource type, the preceding example returns an object in the following format:
-
-```json
-{
- "resourceType": "sites",
- "locations": [
- "South Central US",
- "North Europe",
- "West Europe",
- "Southeast Asia",
- ...
- ],
- "apiVersions": [
- "2016-08-01",
- "2016-03-01",
- "2015-08-01-preview",
- "2015-08-01",
- ...
- ]
-}
-```
- ## reference `reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])`
azure-resource-manager Template Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions.md
Resource Manager provides the following functions for getting resource values:
* [listSecrets](template-functions-resource.md#list) * [list*](template-functions-resource.md#list) * [pickZones](template-functions-resource.md#pickzones)
-* [providers](template-functions-resource.md#providers)
* [reference](template-functions-resource.md#reference) * [resourceGroup](template-functions-resource.md#resourcegroup) - can only be used in deployments to a resource group. * [resourceId](template-functions-resource.md#resourceid) - can be used at any scope, but the valid parameters change depending on the scope.
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/release-notes.md
Last updated 11/24/2020
This article describes what's new and what has changed with every new build of Azure SQL Edge.
+## Azure SQL Edge 1.0.3
+
+SQL engine build 15.0.2000.1554
+
+### Fixes
+
+- Upgrade ONNX runtime to 1.5.3
+- Update to Microsoft.SqlServer.DACFx version 150.5084.2
+- Miscellaneous bug fixes
+
## Azure SQL Edge 1.0.2 SQL engine build 15.0.2000.1554
azure-sql Availability Group Manually Configure Prerequisites Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial.md
You need an Azure account. You can [open a free Azure account](https://signup.az
Azure creates the resource group and pins a shortcut to the resource group in the portal.
-## Create the network and subnets
+## Create the network and subnet
-The next step is to create the networks and subnets in the Azure resource group.
+The next step is to create the network and subnet in the Azure resource group.
-The solution uses one virtual network with two subnets. The [Virtual network overview](../../../virtual-network/virtual-networks-overview.md) provides more information about networks in Azure.
+The solution uses one virtual network and one subnet. The [Virtual network overview](../../../virtual-network/virtual-networks-overview.md) provides more information about networks in Azure.
To create the virtual network in the Azure portal:
To create the virtual network in the Azure portal:
Your address space and subnet address range might be different from the table. Depending on your subscription, the portal suggests an available address space and corresponding subnet address range. If no sufficient address space is available, use a different subscription.
- The example uses the subnet name **Admin**. This subnet is for the domain controllers.
+ The example uses the subnet name **Admin**. This subnet is for the domain controllers and SQL Server VMs.
5. Select **Create**.
To create the virtual network in the Azure portal:
Azure returns you to the portal dashboard and notifies you when the new network is created.
-### Create a second subnet
-
-The new virtual network has one subnet, named **Admin**. The domain controllers use this subnet. The SQL Server VMs use a second subnet named **SQL**. To configure this subnet:
-
-1. On your dashboard, select the resource group that you created, **SQL-HA-RG**. Locate the network in the resource group under **Resources**.
-
- If **SQL-HA-RG** isn't visible, find it by selecting **Resource Groups** and filtering by the resource group name.
-
-2. Select **autoHAVNET** on the list of resources.
-3. On the **autoHAVNET** virtual network, under **Settings** select **Subnets**.
-
- Note the subnet that you already created.
-
- ![Note the subnet that you already created](./media/availability-group-manually-configure-prerequisites-tutorial-/07-addsubnet.png)
-
-5. To create a second subnet, select **+ Subnet**.
-6. On **Add subnet**, configure the subnet by typing **sqlsubnet** under **Name**. Azure automatically specifies a valid **Address range**. Verify that this address range has at least 10 addresses in it. In a production environment, you might require more addresses.
-7. Select **OK**.
-
- ![Configure subnet](./media/availability-group-manually-configure-prerequisites-tutorial-/08-configuresubnet.png)
-
-The following table summarizes the network configuration settings:
-
-| **Field** | Value |
-| | |
-| **Name** |**autoHAVNET** |
-| **Address space** |This value depends on the available address spaces in your subscription. A typical value is 10.0.0.0/16. |
-| **Subnet name** |**admin** |
-| **Subnet address range** |This value depends on the available address ranges in your subscription. A typical value is 10.0.0.0/24. |
-| **Subnet name** |**sqlsubnet** |
-| **Subnet address range** |This value depends on the available address ranges in your subscription. A typical value is 10.0.1.0/24. |
-| **Subscription** |Specify the subscription that you intend to use. |
-| **Resource Group** |**SQL-HA-RG** |
-| **Location** |Specify the same location that you chose for the resource group. |
- ## Create availability sets Before you create virtual machines, you need to create availability sets. Availability sets reduce the downtime for planned or unplanned maintenance events. An Azure availability set is a logical group of resources that Azure places on physical fault domains and update domains. A fault domain ensures that the members of the availability set have separate power and network resources. An update domain ensures that members of the availability set aren't brought down for maintenance at the same time. For more information, see [Manage the availability of virtual machines](../../../virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
After you create the availability sets, return to the resource group in the Azur
## Create domain controllers
-After you've created the network, subnets, and availability sets, you're ready to create the virtual machines for the domain controllers.
+After you've created the network, subnet, and availability sets, you're ready to create the virtual machines for the domain controllers.
### Create virtual machines for the domain controllers
azure-sql Sql Agent Extension Manually Register Single Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
Registering with the [SQL Server IaaS Agent extension](sql-server-iaas-agent-ext
Deploying an Azure Marketplace SQL Server VM image through the Azure portal automatically registers the SQL Server VM with the extension. However, if you choose to self-install SQL Server on an Azure virtual machine, or provision an Azure virtual machine from a custom VHD, then you must register your SQL Server VM with the SQL IaaS Agent extension to unlock full feature benefits and manageability.
-To utilize the SQL IaaS Agent extension, you must first [register your subscription with the **Microsoft.SqlVirtualMachine** provider](#register-subscription-with-rp), which gives the SQL IaaS extension the ability to create resources within that specific subscription.
+To utilize the SQL IaaS Agent extension, you must first [register your subscription with the **Microsoft.SqlVirtualMachine** provider](#register-subscription-with-resource-provider), which gives the SQL IaaS extension the ability to create resources within that specific subscription.
> [!IMPORTANT] > The SQL IaaS Agent extension collects data for the express purpose of giving customers optional benefits when using SQL Server within Azure Virtual Machines. Microsoft will not use this data for licensing audits without the customer's advance consent. See the [SQL Server privacy supplement](/sql/sql-server/sql-server-privacy#non-personal-data) for more information.
To register your SQL Server VM with the extension, you'll need:
- The latest version of [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell (5.0 minimum)](/powershell/azure/install-az-ps).
-## Register subscription with RP
+## Register subscription with resource provider
-To register your SQL Server VM with the SQL IaaS Agent extension, you must first register your subscription with **Microsoft.SqlVirtualMachine** provider. This gives the SQL IaaS Agent extension the ability to create resources within your subscription. You can do so by using the Azure portal, the Azure CLI, or Azure PowerShell.
+To register your SQL Server VM with the SQL IaaS Agent extension, you must first register your subscription with the **Microsoft.SqlVirtualMachine** resource provider. This gives the SQL IaaS Agent extension the ability to create resources within your subscription. You can do so by using the Azure portal, the Azure CLI, or Azure PowerShell.
### Azure portal
azure-sql Sql Agent Extension Manually Register Vms Bulk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk.md
The registration process carries no risk, has no downtime, and will not restart
To register your SQL Server VM with the extension, you'll need the following: -- An [Azure subscription](https://azure.microsoft.com/free/) that has been [registered with the **Microsoft.SqlVirtualMachine** provider](sql-agent-extension-manually-register-single-vm.md#register-subscription-with-rp) and contains unregistered SQL Server virtual machines.
+- An [Azure subscription](https://azure.microsoft.com/free/) that has been [registered with the **Microsoft.SqlVirtualMachine** provider](sql-agent-extension-manually-register-single-vm.md#register-subscription-with-resource-provider) and contains unregistered SQL Server virtual machines.
- The client credentials used to register the virtual machines exist in any of the following Azure roles: **Virtual Machine contributor**, **Contributor**, or **Owner**. - The latest version of [Az PowerShell (5.0 minimum)](/powershell/azure/new-azureps-module-az).
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-apis-tools.md
Your applications and services can issue direct REST API calls or use one or mor
| **Batch REST** |[Azure REST API - Docs](/rest/api/batchservice/) |N/A |- |- | [Supported versions](/rest/api/batchservice/batch-service-rest-api-versioning) | | **Batch .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Batch/) |[Tutorial](tutorial-parallel-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) | [Release notes](https://aka.ms/batch-net-dataplane-changelog) | | **Batch Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/batch/client) |[PyPI](https://pypi.org/project/azure-batch/) |[Tutorial](tutorial-parallel-python.md)|[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Python/Batch) | [Readme](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/batch/azure-batch/README.md) |
-| **Batch Node.js** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch/client) |[npm](https://www.npmjs.com/package/azure-batch) |[Tutorial](batch-nodejs-get-started.md) |- | [Readme](https://github.com/Azure/azure-sdk-for-node/tree/master/lib/services/batch) |
+| **Batch JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch/client) |[npm](https://www.npmjs.com/package/azure-batch) |[Tutorial](batch-js-get-started.md) |- | [Readme](https://github.com/Azure/azure-sdk-for-node/tree/master/lib/services/batch) |
| **Batch Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Java) | [Readme](https://github.com/Azure/azure-batch-sdk-for-java)| ## Batch Management APIs
The Azure Resource Manager APIs for Batch provide programmatic access to Batch a
| **Batch Management REST** |[Azure REST API - Docs](/rest/api/batchmanagement/) |- |- |[GitHub](https://github.com/Azure-Samples/batch-dotnet-manage-batch-accounts) | | **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch/management) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) | | **Batch Management Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/batch/management) |[PyPI](https://pypi.org/project/azure-mgmt-batch/) |- |- |
-| **Batch Management Node.js** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch/management) |[npm](https://www.npmjs.com/package/azure-arm-batch) |- |- |
+| **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch/management) |[npm](https://www.npmjs.com/package/azure-arm-batch) |- |- |
| **Batch Management Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch/management) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |- | ## Batch command-line tools
These additional tools may be helpful for building and debugging your Batch appl
## Next steps - Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.-- [Get started with the Azure Batch library for .NET](tutorial-parallel-dotnet.md) to learn how to use C# and the Batch .NET library to execute a simple workload using a common Batch workflow. A [Python version](tutorial-parallel-python.md) and a [Node.js tutorial](batch-nodejs-get-started.md) are also available.
+- [Get started with the Azure Batch library for .NET](tutorial-parallel-dotnet.md) to learn how to use C# and the Batch .NET library to execute a simple workload using a common Batch workflow. A [Python version](tutorial-parallel-python.md) and a [JavaScript tutorial](batch-js-get-started.md) are also available.
- Download the [code samples on GitHub](https://github.com/Azure-Samples/azure-batch-samples) to see how both C# and Python can interface with Batch to schedule and process sample workloads.
batch Batch Job Schedule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-job-schedule.md
To manage a job using the Azure CLI, see [az batch job-schedule](/cli/azure/batc
-[1]: ./media/batch-job-schedule/add_job_schedule-02.png
-[2]: ./media/batch-job-schedule/add_job_schedule-03.png
+[1]: ./media/batch-job-schedule/add-job-schedule-02.png
+[2]: ./media/batch-job-schedule/add-job-schedule-03.png
batch Batch Js Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-js-get-started.md
+
+ Title: Use the Azure Batch client library for JavaScript
+description: Learn the basic concepts of Azure Batch and build a simple solution using JavaScript.
+ Last updated : 01/01/2021
+# Get started with Batch SDK for JavaScript
+
+Learn the basics of building a Batch client in JavaScript using the [Azure Batch JavaScript SDK](https://github.com/Azure/azure-sdk-for-js/). This article takes a step-by-step approach: first understanding a scenario for a batch application, and then setting it up using JavaScript.
+
+## Prerequisites
+
+This article assumes that you have a working knowledge of JavaScript and familiarity with Linux. It also assumes that you have an Azure account set up with access rights to create Batch and Storage services.
+
+We recommend reading the [Azure Batch technical overview](batch-technical-overview.md) before you go through the steps outlined in this article.
+
+## Understand the scenario
+
+Here, we have a simple script written in Python that downloads all CSV files from an Azure Blob storage container and converts them to JSON. To process multiple storage account containers in parallel, we can deploy the script as an Azure Batch job.
+
+## Azure Batch architecture
+
+The following diagram depicts how we can scale the Python script using Azure Batch and a client.
+
+![Diagram showing scenario architecture.](./media/batch-js-get-started/batch-scenario.png)
+
+The JavaScript sample deploys a batch job with a preparation task (explained in detail later) and a set of tasks depending on the number of containers in the storage account. You can download the scripts from the GitHub repository.
+
+- [Sample Code](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/sample.js)
+- [Preparation task shell scripts](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/startup_prereq.sh)
+- [Python csv to JSON processor](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/processcsv.py)
+
+> [!TIP]
+> The JavaScript sample linked above does not contain specific code for deployment as an Azure function app. You can refer to the following links for instructions to create one.
+> - [Create function app](../azure-functions/functions-get-started.md)
+> - [Create timer trigger function](../azure-functions/functions-bindings-timer.md)
+
+## Build the application
+
+Now, let's walk through the process of building the JavaScript client step by step:
+
+### Step 1: Install Azure Batch SDK
+
+You can install Azure Batch SDK for JavaScript using the npm install command.
+
+`npm install azure-batch`
+
+This command installs the latest version of azure-batch JavaScript SDK.
+
+>[!Tip]
+> In an Azure function app, you can go to the Kudu console in the Azure function's Settings tab to run npm install commands, such as the one above that installs the Azure Batch SDK for JavaScript.
+
+### Step 2: Create an Azure Batch account
+
+You can create a Batch account from the [Azure portal](batch-account-create-portal.md) or from the command line ([PowerShell](batch-powershell-cmdlets-get-started.md)/[Azure CLI](/cli/azure)).
+
+Following are the commands to create one through Azure CLI.
+
+Create a resource group; skip this step if you already have one where you want to create the Batch account:
+
+`az group create -n "<resource-group-name>" -l "<location>"`
+
+Next, create an Azure Batch account.
+
+`az batch account create -l "<location>" -g "<resource-group-name>" -n "<batch-account-name>"`
+
+Each Batch account has corresponding access keys, which are needed to create further resources in the Batch account. A good practice for production environments is to use Azure Key Vault to store these keys. You can then create a service principal for the application; using this service principal, the application can obtain an OAuth token to access keys from the key vault.
+
+`az batch account keys list -g "<resource-group-name>" -n "<batch-account-name>"`
+
+Copy and store the key to be used in the subsequent steps.
+
+### Step 3: Create an Azure Batch service client
+
+The following code snippet first imports the azure-batch JavaScript module and then creates a Batch service client. You need to first create a SharedKeyCredentials object with the Batch account key copied in the previous step.
+
+```javascript
+// Initializing Azure Batch variables
+
+var batch = require('azure-batch');
+
+var accountName = '<azure-batch-account-name>';
+
+var accountKey = '<account-key-downloaded>';
+
+var accountUrl = '<account-url>'
+
+// Create Batch credentials object using account name and account key
+
+var credentials = new batch.SharedKeyCredentials(accountName,accountKey);
+
+// Create Batch service client
+
+var batch_client = new batch.ServiceClient(credentials,accountUrl);
+
+```
+
+The Azure Batch URI can be found in the Overview tab of the Azure portal. It is of the format:
+
+`https://accountname.location.batch.azure.com`
+
+Refer to the screenshot:
+
+![Azure batch uri](./media/batch-js-get-started/batch-uri.png)
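The URL format can be expressed as a tiny helper. This is illustrative only; `batchAccountUrl` is a hypothetical function, not part of the azure-batch SDK:

```javascript
// Hypothetical helper: assemble the Batch account URL from its name and region,
// matching the https://accountname.location.batch.azure.com format shown above.
function batchAccountUrl(accountName, location) {
  return `https://${accountName}.${location}.batch.azure.com`;
}

console.log(batchAccountUrl("mybatch", "centralus"));
// https://mybatch.centralus.batch.azure.com
```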
+
+### Step 4: Create an Azure Batch pool
+
+An Azure Batch pool consists of multiple VMs (also known as Batch nodes). The Azure Batch service deploys tasks on these nodes and manages them. You can define the following configuration parameters for your pool.
+
+- Type of Virtual Machine image
+- Size of Virtual Machine nodes
+- Number of Virtual Machine nodes
+
+> [!TIP]
+> The size and number of Virtual Machine nodes largely depend on the number of tasks you want to run in parallel and also the task itself. We recommend testing to determine the ideal number and size.
+
+The following code snippet creates the configuration parameter objects.
+
+```javascript
+// Creating Image reference configuration for Ubuntu Linux VM
+var imgRef = {publisher:"Canonical",offer:"UbuntuServer",sku:"14.04.2-LTS",version:"latest"}
+
+// Creating the VM configuration object with the SKUID
+var vmconfig = {imageReference:imgRef,nodeAgentSKUId:"batch.node.ubuntu 14.04"}
+
+// Setting the VM size to Standard F4
+var vmSize = "STANDARD_F4"
+
+//Setting number of VMs in the pool to 4
+var numVMs = 4
+```
+
+> [!TIP]
+> For the list of Linux VM images available for Azure Batch and their SKU IDs, see [List of virtual machine images](batch-linux-nodes.md#list-of-virtual-machine-images).
+
+Once the pool configuration is defined, you can create the Azure Batch pool. The pool add command creates Azure virtual machine nodes and prepares them to receive tasks. Each pool should have a unique ID for reference in subsequent steps.
+
+The following code snippet creates an Azure Batch pool.
+
+```javascript
+// Create a unique Azure Batch pool ID
+var poolid = "pool" + customerDetails.customerid; // customerDetails is assumed to be defined by your application
+var poolConfig = {id:poolid, displayName:poolid, vmSize:vmSize, virtualMachineConfiguration:vmconfig, targetDedicatedComputeNodes:numVMs, enableAutoScale:false};
+// Creating the Pool for the specific customer
+var pool = batch_client.pool.add(poolConfig,function(error,result){
+ if(error!=null){console.log(error.response)};
+});
+```
+
+You can check the status of the created pool and make sure that its state is "active" before submitting a job to it.
+
+```javascript
+var cloudPool = batch_client.pool.get(poolid,function(error,result,request,response){
+ if(error == null)
+ {
+
+ if(result.state == "active")
+ {
+ console.log("Pool is active");
+ }
+ }
+ else
+ {
+ if(error.statusCode==404)
+ {
+ console.log("Pool not found yet returned 404...");
+
+ }
+ else
+ {
+ console.log("Error occurred while retrieving pool data");
+ }
+ }
+ });
+```
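If the pool is still resizing, a single get call may not be enough. One way to wait is a generic poll loop; the sketch below is an assumption on my part and not part of the azure-batch SDK:

```javascript
// Generic poll loop (sketch, SDK-agnostic): retry an async check until it
// returns true or the attempt budget runs out.
async function pollUntil(checkFn, intervalMs, maxAttempts) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (await checkFn()) return true; // e.g. pool state reported as "active"
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  return false; // condition never held within maxAttempts checks
}
```

You would pass a checkFn that wraps the pool.get call above and resolves true once result.state is "active".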
+
+The following is a sample result object returned by the pool.get function:
+
+```
+{ id: 'processcsv_201721152',
+ displayName: 'processcsv_201721152',
+ url: 'https://<batch-account-name>.centralus.batch.azure.com/pools/processcsv_201721152',
+ eTag: '<eTag>',
+ lastModified: 2017-03-27T10:28:02.398Z,
+ creationTime: 2017-03-27T10:28:02.398Z,
+ state: 'active',
+ stateTransitionTime: 2017-03-27T10:28:02.398Z,
+ allocationState: 'resizing',
+ allocationStateTransitionTime: 2017-03-27T10:28:02.398Z,
+ vmSize: 'standard_a1',
+ virtualMachineConfiguration:
+ { imageReference:
+ { publisher: 'Canonical',
+ offer: 'UbuntuServer',
+ sku: '14.04.2-LTS',
+ version: 'latest' },
+ nodeAgentSKUId: 'batch.node.ubuntu 14.04' },
+ resizeTimeout:
+ { [Number: 900000]
+ _milliseconds: 900000,
+ _days: 0,
+ _months: 0,
+ _data:
+ { milliseconds: 0,
+ seconds: 0,
+ minutes: 15,
+ hours: 0,
+ days: 0,
+ months: 0,
+ years: 0 },
+ _locale:
+ Locale {
+ _calendar: [Object],
+ _longDateFormat: [Object],
+ _invalidDate: 'Invalid date',
+ ordinal: [Function: ordinal],
+ _ordinalParse: /\d{1,2}(th|st|nd|rd)/,
+ _relativeTime: [Object],
+ _months: [Object],
+ _monthsShort: [Object],
+ _week: [Object],
+ _weekdays: [Object],
+ _weekdaysMin: [Object],
+ _weekdaysShort: [Object],
+ _meridiemParse: /[ap]\.?m?\.?/i,
+ _abbr: 'en',
+ _config: [Object],
+ _ordinalParseLenient: /\d{1,2}(th|st|nd|rd)|\d{1,2}/ } },
+ currentDedicated: 0,
+ targetDedicated: 4,
+ enableAutoScale: false,
+ enableInterNodeCommunication: false,
+ taskSlotsPerNode: 1,
+ taskSchedulingPolicy: { nodeFillType: 'Spread' } }
+```
+
+### Step 5: Submit an Azure Batch job
+
+An Azure Batch job is a logical group of similar tasks. In our scenario, it is "Process CSV to JSON." Each task processes the CSV files present in one Azure Storage container.
+
+These tasks run in parallel and are deployed across multiple nodes, orchestrated by the Azure Batch service.
+
+> [!TIP]
+> You can use the [taskSlotsPerNode](https://azure.github.io/azure-sdk-for-node/azure-batch/latest/Pool.html#add) property to specify the maximum number of tasks that can run concurrently on a single node.
+
+#### Preparation task
+
+The VM nodes created are blank Ubuntu nodes. Often, you need to install a set of programs as prerequisites.
+Typically, for Linux nodes you can have a shell script that installs the prerequisites before the actual tasks run. However, it could be any programmable executable.
+
+The [shell script](https://github.com/shwetams/azure-batchclient-sample-nodejs/blob/master/startup_prereq.sh) in this example installs Python-pip and the Azure Storage SDK for Python.
+
+You can upload the script to an Azure Storage account and generate a SAS URI to access it. This process can also be automated using the Azure Storage JavaScript SDK.
+
+> [!TIP]
+> A preparation task for a job runs only on the VM nodes where the specific task needs to run. If you want prerequisites to be installed on all nodes irrespective of the tasks that run on it, you can use the [startTask](https://azure.github.io/azure-sdk-for-node/azure-batch/latest/Pool.html#add) property while adding a pool. You can use the following preparation task definition for reference.
+
+A preparation task is specified during the submission of an Azure Batch job. The preparation task configuration parameters are:
+
+- **ID**: A unique identifier for the preparation task
+- **commandLine**: Command line to execute the task executable
+- **resourceFiles**: Array of objects that describe the files that must be downloaded for this task to run. Its options are:
+ - blobSource: The SAS URI of the file
+ - filePath: Local path to download and save the file
+ - fileMode: Applicable only to Linux nodes; fileMode is in octal format with a default value of 0770
+- **waitForSuccess**: If set to true, the task does not run on preparation task failures
+- **runElevated**: Set it to true if elevated privileges are needed to run the task.
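As a side note on fileMode: the octal string maps to standard Unix permission bits. A quick illustration in plain JavaScript (not an SDK call):

```javascript
// "770" in octal: owner and group get read/write/execute, others get nothing.
var mode = parseInt("770", 8);
console.log(mode); // 504 (decimal value of octal 770)
console.log(mode.toString(2)); // 111111000 (rwxrwx---)
```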
+
+The following code snippet shows a sample preparation task configuration:
+
+```javascript
+var job_prep_task_config = {id:"installprereq",commandLine:"sudo sh startup_prereq.sh > startup.log",resourceFiles:[{'blobSource':'Blob SAS URI','filePath':'startup_prereq.sh'}],waitForSuccess:true,runElevated:true}
+```
+
+If there are no prerequisites to install for your tasks to run, you can skip the preparation task. The following code creates a job with the display name "process csv files."
+
+ ```javascript
+ // Setting up Batch pool configuration
+ var pool_config = {poolId:poolid}
+ // Setting up Job configuration along with preparation task
+ var jobId = "processcsvjob"
+ var job_config = {id:jobId,displayName:"process csv files",jobPreparationTask:job_prep_task_config,poolInfo:pool_config}
+ // Adding Azure batch job to the pool
+ var job = batch_client.job.add(job_config,function(error,result){
+ if(error != null)
+ {
+ console.log("Error submitting job : " + error.response);
+ }});
+```
+
+### Step 6: Submit Azure Batch tasks for a job
+
+Now that our process csv job is created, let us create tasks for that job. Assuming we have four containers, we have to create four tasks, one for each container.
+
+If we look at the [Python script](https://github.com/shwetams/azure-batchclient-sample-nodejs/blob/master/processcsv.py), it accepts two parameters:
+
+- container name: The Storage container to download files from
+- pattern: An optional file name pattern to filter the files to process
+
+Assuming we have four containers "con1", "con2", "con3", and "con4", the following code shows submitting four tasks to the Azure Batch job "process csv files" we created earlier.
+
+```javascript
+// storing container names in an array
+var container_list = ["con1","con2","con3","con4"]
+ container_list.forEach(function(val,index){
+
+ var container_name = val;
+ var taskID = container_name + "_process";
+ var task_config = {id:taskID,displayName:'process csv in ' + container_name,commandLine:'python processcsv.py --container ' + container_name,resourceFiles:[{'blobSource':'<blob SAS URI>','filePath':'processcsv.py'}]}
+ var task = batch_client.task.add(jobId,task_config,function(error,result){
+ if(error != null)
+ {
+ console.log(error.response);
+ }
+ else
+ {
+ console.log("Task for container : " + container_name + " submitted successfully");
+ }
+
+ });
+
+ });
+```
+
+The code adds multiple tasks to the job, and each task is executed on a node in the pool of VMs created. If the number of tasks exceeds the number of available task slots (nodes times the taskSlotsPerNode property), the extra tasks wait until a node becomes available. Azure Batch handles this orchestration automatically.
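The concurrency ceiling is simple arithmetic. The helper below is illustrative only, not an SDK call:

```javascript
// Maximum tasks a pool can run at once: node count times slots per node.
// Any tasks beyond this number queue until a slot frees up.
function concurrentTaskCapacity(nodeCount, taskSlotsPerNode) {
  return nodeCount * taskSlotsPerNode;
}

// With the 4-node pool above and the default 1 slot per node:
console.log(concurrentTaskCapacity(4, 1)); // 4
```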
+
+The portal provides detailed views of task and job statuses. You can also use the list and get functions in the Azure Batch JavaScript SDK; details are provided in the [documentation](https://azure.github.io/azure-sdk-for-node/azure-batch/latest/Job.html).
+
+## Next steps
+
+- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
+- See the [Batch JavaScript reference](/javascript/api/overview/azure/batch) to explore the Batch API.
batch Batch Mpi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-mpi.md
Multi-instance tasks allow you to run an Azure Batch task on multiple compute no
In Batch, each task is normally executed on a single compute node--you submit multiple tasks to a job, and the Batch service schedules each task for execution on a node. However, by configuring a task's **multi-instance settings**, you tell Batch to instead create one primary task and several subtasks that are then executed on multiple nodes. When you submit a task with multi-instance settings to a job, Batch performs several steps unique to multi-instance tasks:
cloud-services-extended-support Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/extensions.md
With advanced monitoring, additional metrics are sampled and collected at interv
For more information, see [Apply the Windows Azure diagnostics extension in Cloud Services (extended support)](enable-wad.md)
+## Antimalware extension
+An Azure application or service can enable and configure Microsoft Antimalware for Azure Cloud Services using PowerShell cmdlets. Microsoft Antimalware is installed in a disabled state in the Cloud Services platform running Windows Server 2012 R2 and older, which requires an action by an Azure application to enable it. For Windows Server 2016 and above, Windows Defender is enabled by default, so these cmdlets can be used to configure Antimalware.
+
+For more information, see [Add Microsoft Antimalware to Azure Cloud Services using extended support (CS-ES)](https://docs.microsoft.com/azure/security/fundamentals/antimalware-code-samples#add-microsoft-antimalware-to-azure-cloud-service-using-extended-support)
+
+To learn more about Microsoft Antimalware in Azure, see [Microsoft Antimalware for Azure](https://docs.microsoft.com/azure/security/fundamentals/antimalware)
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
cloud-services-extended-support In Place Migration Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-common-errors.md
+
+ Title: Common errors and known issues when migrating to Azure Cloud Services (extended support)
+description: Overview of common errors when migrating from Cloud Services (classic) to Cloud Services (extended support)
+ Last updated : 2/08/2021
+# Common errors and known issues when migrating to Azure Cloud Services (extended support)
+
+This article covers known issues and common errors you might encounter when migrating from Cloud Services (classic) to Cloud Services (extended support).
+
+## Known issues
+The following issues are known and are being addressed.
+
+| Known issues | Mitigation |
+|||
+| Role Instances restarting UD by UD after successful commit. | Restart operation follows the same method as monthly guest OS rollouts. Do not commit migration of cloud services with single role instance or impacted by restart.|
+| Azure portal cannot read migration state after browser refresh. | Rerun validate and prepare operation to get back to the original migration state. |
+| Certificate displayed as secret resource in key vault. | After migration, reupload the certificate as a certificate resource to simplify update operation on Cloud Services (extended support). |
+| Deployment labels not getting saved as tags as part of migration. | Manually create the tags after migration to maintain this information. |
+| Resource Group name is in all caps. | Non-impacting. Solution not yet available. |
+| Name of the lock on Cloud Services (extended support) lock is incorrect. | Non-impacting. Solution not yet available. |
+| IP address name is incorrect on Cloud Services (extended support) portal blade. | Non-impacting. Solution not yet available. |
+| Invalid DNS name shown for virtual IP address after on update operation on a migrated cloud service. | Non-impacting. Solution not yet available. |
+| After successful prepare, linking a new Cloud Services (extended support) deployment as swappable is not allowed. | Do not link a new cloud service as swappable to a prepared cloud service. |
+| Error messages need to be updated. | Non-impacting. |
+
+## Common migration errors
+Common migration errors and mitigation steps.
+
+| Error message | Details |
+|||
+| The resource type could not be found in the namespace `Microsoft.Compute` for api version '2020-10-01-preview'. | [Register the subscription](in-place-migration-overview.md#setup-access-for-migration) for CloudServices feature flag to access public preview. |
+| The server encountered an internal error. Retry the request. | Retry the operation, use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
+| The server encountered an unexpected error while trying to allocate network resources for the cloud service. Retry the request. | Retry the operation, use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
+| Deployment deployment-name in cloud service cloud-service-name must be within a virtual network to be migrated. | Deployment is not located in a virtual network. Refer [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. |
+| Migration of deployment deployment-name in cloud service cloud-service-name is not supported because it is in region region-name. Allowed regions: [list of available regions]. | Region is not yet supported for migration. |
+| The Deployment deployment-name in cloud service cloud-service-name cannot be migrated because there are no subnets associated with the role(s) role-name. Associate all roles with a subnet, then retry the migration of the cloud service. | Update the cloud service (classic) deployment by placing it in a subnet before migration. |
+| The deployment deployment-name in cloud service cloud-service-name cannot be migrated because the deployment requires at least one feature that not registered on the subscription in Azure Resource Manager. Register all required features to migrate this deployment. Missing feature(s): [list of missing features]. | Contact support to get the feature flags registered. |
+| The deployment cannot be migrated because the deployment's cloud service has two occupied slots. Migration of cloud services is only supported for deployments that are the only deployment in their cloud service. Delete the other deployment in the cloud service to proceed with the migration of this deployment. | Refer to the [unsupported scenario](in-place-migration-overview.md#unsupported-configurations--migration-scenarios) list for more details. |
+| Deployment deployment-name in HostedService cloud-service-name is in intermediate state: state. Migration not allowed. | Deployment is either being created, deleted or updated. Wait for the operation to complete and retry. |
+| The deployment deployment-name in hosted service cloud-service-name has reserved IP(s) but no reserved IP name. To resolve this issue, update reserved IP name or contact the Microsoft Azure service desk. | Update cloud service deployment. |
+| The deployment deployment-name in hosted service cloud-service-name has reserved IP(s) reserved-ip-name but no endpoint on the reserved IP. To resolve this issue, add at least one endpoint to the reserved IP. | Add endpoint to reserved IP. |
+| Migration of Deployment {0} in HostedService {1} is in the process of being committed and cannot be changed until it completes successfully. | Wait or retry operation. |
+| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait or retry operation. |
+| One or more VMs in Deployment {0} in HostedService {1} is undergoing an update operation. It can't be migrated until the previous operation completes successfully. Retry after sometime. | Wait for operation to complete. |
+| Migration is not supported for Deployment {0} in HostedService {1} because it uses following features not yet supported for migration: Non-vnet deployment.| Deployment is not located in a virtual network. Refer [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. |
+| The virtual network name cannot be null or empty. | Provide the virtual network name in the REST request body. |
+| The Subnet Name cannot be null or empty. | Provide the subnet name in the REST request body. |
+| DestinationVirtualNetwork must be set to one of the following values: Default, New, or Existing. | Provide DestinationVirtualNetwork property in the REST request body. |
+| Default VNet destination option not implemented. | "Default" value is not supported for the DestinationVirtualNetwork property in the REST request body. |
+| The deployment {0} cannot be migrated because the CSPKG is not available. | Upgrade the deployment and try again. |
+| The subnet with ID '{0}' is in a different location than deployment '{1}' in hosted service '{2}'. The location for the subnet is '{3}' and the location for the hosted service is '{4}'. Specify a subnet in the same location as the deployment. | Update the cloud service to have both subnet and cloud service in the same location before migration. |
+| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait for abort to complete or retry abort. Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or Contact support otherwise. |
+| Deployment {0} in HostedService {1} has not been prepared for Migration. | Run prepare on the cloud service before running the commit operation. |
+| UnknownExceptionInEndExecute: Contract.Assert failed: rgName is null or empty: Exception received in EndExecute that is not an RdfeException. | Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
+| UnknownExceptionInEndExecute: A task was canceled: Exception received in EndExecute that is not an RdfeException. | Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
+| XrpVirtualNetworkMigrationError: Virtual network migration failure. | Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
+| Deployment {0} in HostedService {1} belongs to Virtual Network {2}. Migrate Virtual Network {2} to migrate this HostedService {1}. | Refer to [Virtual Network migration](in-place-migration-technical-details.md#virtual-network-migration). |
+| The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota has been raised. | Follow appropriate channels to request quota increase: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) |
+
+## Next steps
+For more information on the requirements of migration, see [Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md).
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-overview.md
+
+ Title: Migrate Azure Cloud Services (classic) to Azure Cloud Services (extended support)
+description: Overview of migration from Cloud Services (classic) to Cloud Services (extended support)
+ Last updated : 2/08/2021
+
+# Migrate Azure Cloud Services (classic) to Azure Cloud Services (extended support)
+
+This article provides an overview of the platform-supported migration tool and how to use it to migrate [Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) to [Azure Cloud Services (extended support)](overview.md).
+
+The migration tool uses the same APIs and provides the same experience as the [Virtual Machines (classic) migration](https://docs.microsoft.com/azure/virtual-machines/migration-classic-resource-manager-overview).
+
+> [!IMPORTANT]
+> Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Refer to the following resources if you need assistance with your migration:
+
+- [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html): Microsoft and community support for migration.
+- [Azure Migration Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%22pesId%22:%22e79dcabe-5f77-3326-2112-74487e1e5f78%22,%22supportTopicId%22:%22fca528d2-48bd-7c9f-5806-ce5d5b1d226f%22%7D): Dedicated support team for technical assistance during migration. Customers without technical support can use [free support capability](https://aka.ms/cs-migration-errors) provided specifically for this migration.
+- If your company or organization has partnered with Microsoft or works with Microsoft representatives such as cloud solution architects or technical account managers, reach out to them for additional migration resources.
+- Complete [this survey](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR--AgudUMwJKgRGMO84rHQtUQzZYNklWUk4xOTFXVFBPOFdGOE85RUIwVC4u) to provide feedback or raise issues to the Cloud Services (extended support) product team.
+
+## Migration benefits
+The platform-supported migration provides the following key benefits:
+- The migration is fully orchestrated by the platform, moving the entire deployment and associated resources to Azure Resource Manager.
+- The migration involves no downtime.
+- Migration is easier and faster than other migration paths because manual tasks are minimized.
+- The Cloud Service's IP address and DNS label are retained as part of the migration.
+
+For other benefits and why you should migrate, see [Cloud Services (extended support)](overview.md) and [Azure Resource Manager](../azure-resource-manager/management/overview.md).
+
+## Set up access for migration
+
+To perform this migration, you must be added as a coadministrator for the subscription and register the required providers.
+
+1. Sign in to the Azure portal.
+2. On the Hub menu, select Subscription. If you don't see it, select All services.
+3. Find the appropriate subscription entry, and then look at the MY ROLE field. For a coadministrator, the value should be Account admin. If you're not able to add a coadministrator, contact a service administrator or coadministrator for the subscription to get yourself added.
+
+4. Register your subscription for the Microsoft.ClassicInfrastructureMigrate namespace using the [Portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [PowerShell](../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell), or [CLI](../azure-resource-manager/management/resource-providers-and-types.md#azure-cli).
+
+ ```powershell
+ Register-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate
+ ```
+
+5. Register your subscription for the Cloud Services migration preview feature using the [Portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [PowerShell](../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell), or [CLI](../azure-resource-manager/management/resource-providers-and-types.md#azure-cli).
+
+ ```powershell
+ Register-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute
+ ```
+
+6. Check the status of your registration. Registration can take a few minutes to complete.
+
+ ```powershell
+ Get-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute
+ ```
+
+## How is migration for Cloud Services (classic) different from Virtual Machines (classic)?
+Azure Service Manager supports two different compute products: [Azure Virtual Machines (classic)](https://docs.microsoft.com/previous-versions/azure/virtual-machines/windows/classic/tutorial-classic) and [Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md), also known as Web/Worker roles. The two products differ in the deployment type contained within the cloud service: Azure Cloud Services (classic) uses a cloud service containing deployments with Web/Worker roles, whereas Azure Virtual Machines (classic) uses a cloud service containing deployments with IaaS VMs.
+
+The list of supported scenarios differs between Cloud Services (classic) and Virtual Machines (classic) because of differences in the deployment types.
+
+## Migration steps
+
+Customers can migrate their Cloud Services (classic) deployments using the same four operations used to migrate Virtual Machines (classic).
+
+1. **Validate Migration** - Validates that the migration will not be prevented by common unsupported scenarios.
+2. **Prepare Migration** - Duplicates the resource metadata in Azure Resource Manager. All resources are locked for create/update/delete operations to ensure resource metadata is in sync across Azure Service Manager and Azure Resource Manager. All read operations will work using both Cloud Services (classic) and Cloud Services (extended support) APIs.
+3. **Abort Migration** - Removes resource metadata from Azure Resource Manager. Unlocks all resources for create/update/delete operations.
+4. **Commit Migration** - Removes resource metadata from Azure Service Manager. Unlocks the resource for create/update/delete operations. Abort is no longer allowed after commit has been attempted.
+
+>[!NOTE]
+> Prepare, Abort, and Commit are idempotent; if an operation fails, a retry should fix the issue.
+
+For more information, see [Overview of Platform-supported migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-overview.md).
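+The four operations above can be sketched with the classic Azure PowerShell module. This is a sketch only, assuming the `Move-AzureService` cmdlet behaves as it does for the equivalent Virtual Machines (classic) migration; the service and deployment names below are placeholders.
+
+```powershell
+$serviceName = "contoso-classic"   # placeholder Cloud Service (classic) name
+$deploymentName = "production"     # placeholder deployment slot name
+
+# Validate: reports unsupported scenarios without changing anything
+Move-AzureService -Validate -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork
+
+# Prepare: duplicates metadata into Azure Resource Manager and locks write operations
+Move-AzureService -Prepare -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork
+
+# Abort: rolls back a prepared migration...
+Move-AzureService -Abort -ServiceName $serviceName -DeploymentName $deploymentName
+
+# ...or Commit: finalizes the migration (abort is no longer possible afterward)
+Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName
+```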
+
+## Supported resources and features available for migration associated with Cloud Services (classic)
+- Storage Accounts
+- Virtual Networks
+- Network Security Groups
+- Reserved Public IP addresses
+- Endpoint Access Control Lists
+- User Defined Routes
+- Internal load balancer
+- Certificate migration to key vault
+- Plugins and Extensions (XML and JSON based)
+- On Start / On Stop Tasks
+- Deployments with Accelerated Networking
+- Deployments using single or multiple roles
+- Basic load balancer
+- Input, Instance Input, Internal Endpoints
+- Dynamic Public IP addresses
+- DNS Name
+- Network Traffic Rules
+- Hypernet virtual network
+
+## Supported configurations / migration scenarios
+These are the top scenarios involving combinations of resources, features, and Cloud Services. This list is not exhaustive.
+
+| Service | Configuration | Comments |
+||||
+| [Azure AD Domain Services](https://docs.microsoft.com/azure/active-directory-domain-services/migrate-from-classic-vnet) | Virtual networks that contain Azure Active Directory Domain Services. | A virtual network containing both a Cloud Service deployment and Azure AD Domain Services is supported. Customers first need to migrate Azure AD Domain Services separately, and then migrate the virtual network that is left with only the Cloud Service deployment. |
+| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing either a production or a staging slot deployment can be migrated. |
+| Cloud Service | Deployment not in a publicly visible virtual network (default virtual network deployment). | A Cloud Service can be in a publicly visible virtual network, in a hidden virtual network, or not in any virtual network. Cloud Services in hidden virtual networks and publicly visible virtual networks are supported for migration. Customers can use the Validate API to determine whether a deployment is inside a default virtual network, and therefore whether it can be migrated. |
+| Cloud Service | XML extensions (BGInfo, Visual Studio Debugger, Web Deploy, and Remote Debugging). | All XML extensions are supported for migration. |
+| Virtual Network | Virtual network containing multiple Cloud Services. | A virtual network containing multiple Cloud Services is supported for migration. The virtual network and all the Cloud Services within it are migrated together to Azure Resource Manager. |
+| Virtual Network | Migration of virtual networks created via the Portal (requires using "Group Resource-group-name VNet-Name" in the .cscfg file). | As part of migration, the virtual network name in the .cscfg file is changed to use the Azure Resource Manager ID of the virtual network (subscription/subscription-id/resource-group/resource-group-name/resource/vnet-name). <br><br>To manage the deployment after migration, update the local copy of the .cscfg file to start using the Azure Resource Manager ID instead of the virtual network name. <br><br>A .cscfg file that uses the old naming scheme will not pass validation. |
+| Virtual Network | Migration of a deployment with roles in different subnets. | A Cloud Service with different roles in different subnets is supported for migration. |
+
+
+## Resources and features not available for migration
+These are the top scenarios involving combinations of resources, features, and Cloud Services. This list is not exhaustive.
+
+| Resource | Next steps / work-around |
+|||
+| Auto Scale Rules | Migration goes through but rules are dropped. [Recreate the rules](https://docs.microsoft.com/azure/cloud-services-extended-support/configure-scaling) after migration on Cloud Services (extended support). |
+| Alerts | Migration goes through but alerts are dropped. [Recreate the alerts](https://docs.microsoft.com/azure/cloud-services-extended-support/enable-alerts) after migration on Cloud Services (extended support). |
+| VPN Gateway | Remove the VPN Gateway before beginning migration and then recreate the VPN Gateway once migration is complete. |
+| Express Route Gateway (in the same subscription as Virtual Network only) | Remove the Express Route Gateway before beginning migration and then recreate the Gateway once migration is complete. |
+| Quota | Quota is not migrated. [Request new quota](https://docs.microsoft.com/azure/azure-resource-manager/templates/error-resource-quota#solution) on Azure Resource Manager prior to migration for the validation to be successful. |
+| Affinity Groups | Not supported. Remove any affinity groups before migration. |
+| Virtual networks using [virtual network peering](https://docs.microsoft.com/azure/virtual-network/virtual-network-peering-overview)| Before migrating a virtual network that is peered to another virtual network, delete the peering, migrate the virtual network to Resource Manager and re-create peering. This can cause downtime depending on the architecture. |
+| Virtual networks that contain App Service environments | Not supported |
+| Virtual networks that contain HDInsight services | Not supported. |
+| Virtual networks that contain Azure API Management deployments | Not supported. <br><br> To migrate the virtual network, change the virtual network of the API Management deployment. This is a no downtime operation. |
+| Classic Express Route circuits | Not supported. <br><br>These circuits need to be migrated to Azure Resource Manager before beginning PaaS migration. To learn more, see [Moving ExpressRoute circuits from the classic to the Resource Manager deployment model](../expressroute/expressroute-howto-move-arm.md). |
+| Role-Based Access Control | Post migration, the URI of the resource changes from `Microsoft.ClassicCompute` to `Microsoft.Compute`. RBAC policies need to be updated after migration. |
+| Application Gateway | Not supported. <br><br> Remove the Application Gateway before beginning migration and then recreate the Application Gateway once migration is completed to Azure Resource Manager. |
+
+## Unsupported configurations / migration scenarios
+
+| Configuration / Scenario | Next steps / work-around |
+|||
+| Migration of some older deployments not in a virtual network | Some Cloud Service deployments not in a virtual network are not supported for migration. <br><br> 1. Use the Validate API to check if the deployment is eligible to migrate. <br> 2. If eligible, the deployments will be moved to Azure Resource Manager under a virtual network with the prefix "DefaultRdfeVnet". |
+| Migration of deployments containing both production and staging slot deployments using dynamic IP addresses | Migration of a two-slot Cloud Service requires deletion of the staging slot. Once the staging slot is deleted, migrate the production slot as an independent Cloud Service (extended support) in Azure Resource Manager. Then redeploy the staging environment as a new Cloud Service (extended support) and make it swappable with the first one. |
+| Migration of deployments containing both production and staging slot deployments using Reserved IP addresses | Not supported. |
+| Migration of production and staging deployments in different virtual networks | Migration of a two-slot Cloud Service requires deleting the staging slot. Once the staging slot is deleted, migrate the production slot as an independent Cloud Service (extended support) in Azure Resource Manager. A new Cloud Services (extended support) deployment can then be linked to the migrated deployment with the swappable property enabled. Deployment files of the old staging slot deployment can be reused to create this new swappable deployment. |
+| Migration of empty Cloud Service (Cloud Service with no deployment) | Not supported. |
+| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins are not recommended](https://docs.microsoft.com/azure/cloud-services-extended-support/deploy-prerequisite#required-service-definition-file-csdef-updates) for use on Cloud Services (extended support).|
+| Virtual networks with both PaaS and IaaS deployments | Not supported. <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This will cause downtime. |
+| Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge) | The migration will complete, but the role sizes will be updated to use modern role sizes. There is no change in cost or SKU properties, and the virtual machine will not be rebooted for this change. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md). |
+| Migration of a Cloud Service to a different virtual network | Not supported. <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager. <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. |
+| Cloud Service in a virtual network but without an explicit subnet assigned | Not supported. Mitigation involves moving the role into a subnet, which requires a role restart (downtime). |
+
+## Post-migration changes
+The Cloud Services (classic) deployment is converted to a Cloud Service (extended support) deployment. Refer to [Cloud Services (extended support) documentation](deploy-prerequisite.md) for more details.
+
+### Changes to deployment files
+
+Minor changes are made to the customer's .csdef and .cscfg files to make the deployment files conform to the Azure Resource Manager and Cloud Services (extended support) requirements. After migration, retrieve your new deployment files or update the existing files; this is needed for update/delete operations.
+
+- The virtual network is referenced by its full Azure Resource Manager resource ID instead of just the resource name in the NetworkConfiguration section of the .cscfg file. For example, `/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name`. For virtual networks belonging to the same resource group as the cloud service, you can choose to update the .cscfg file back to using just the virtual network name.
+
+- Classic sizes like Small, Large, and ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in the .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-service-definition-file-csdef-updates).
+
+- Use the Get API to get the latest copy of the deployment files.
+ - Get the template using [Portal](https://docs.microsoft.com/azure/azure-resource-manager/templates/export-template-portal), [PowerShell](https://docs.microsoft.com/azure/azure-resource-manager/management/manage-resource-groups-powershell#export-resource-groups-to-templates), [CLI](https://docs.microsoft.com/azure/azure-resource-manager/management/manage-resource-groups-cli#export-resource-groups-to-templates), and [Rest API](https://docs.microsoft.com/rest/api/resources/resourcegroups/exporttemplate)
+ - Get the .csdef file using [PowerShell](https://docs.microsoft.com/powershell/module/az.cloudservice/?view=azps-5.4.0#cloudservice&preserve-view=true) or [Rest API](https://docs.microsoft.com/rest/api/compute/cloudservices/rest-get-package).
+ - Get the .cscfg file using [PowerShell](https://docs.microsoft.com/powershell/module/az.cloudservice/?view=azps-5.4.0#cloudservice&preserve-view=true) or [Rest API](https://docs.microsoft.com/rest/api/compute/cloudservices/rest-get-package).
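+As an illustration of the first two changes (all names below are placeholders), the relevant fragments of the deployment files look roughly like this before and after migration:
+
+```xml
+<!-- .cscfg before migration: virtual network referenced by name -->
+<NetworkConfiguration>
+  <VirtualNetworkSite name="vnet-name" />
+</NetworkConfiguration>
+
+<!-- .cscfg after migration: full Azure Resource Manager resource ID -->
+<NetworkConfiguration>
+  <VirtualNetworkSite name="/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name" />
+</NetworkConfiguration>
+
+<!-- .csdef: classic size name replaced by a modern size name -->
+<WorkerRole name="WorkerRole1" vmsize="Standard_A1">  <!-- was vmsize="Small" -->
+</WorkerRole>
+```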
+
+
+
+### Changes to the customer's automation, CI/CD pipeline, custom scripts, custom dashboards, custom tooling, etc.
+
+Customers need to update their tooling and automation to start using the new APIs and commands to manage their deployments. Customers can easily adopt new features and capabilities of Azure Resource Manager and Cloud Services (extended support) as part of this change.
+
+- Changes to resource and resource group names post migration
+    - As part of migration, the names of a few resources (such as the Cloud Service and public IP addresses) change. These changes might need to be reflected in the deployment files before updating the Cloud Service. [Learn more about the names of resources changing](in-place-migration-technical-details.md#translation-of-resources-and-naming-convention-post-migration).
+
+- Recreate rules and policies required to manage and scale cloud services
+ - [Auto Scale rules](configure-scaling.md) are not migrated. After migration, recreate the auto scale rules.
+ - [Alerts](enable-alerts.md) are not migrated. After migration, recreate the alerts.
+    - The Key Vault is created without any access policies. [Create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault to view or manage your certificates. Certificates will be visible in the portal under **Settings**, on the **Secrets** tab.
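+As a sketch of the last step (the vault name and user principal below are placeholders), access to the migrated certificates can be granted with:
+
+```powershell
+# Grant a user permission to view and manage the migrated certificates,
+# which are stored as secrets in the migrated Key Vault
+Set-AzKeyVaultAccessPolicy -VaultName "migrated-vault-name" `
+    -UserPrincipalName "user@contoso.com" `
+    -PermissionsToCertificates get,list,import `
+    -PermissionsToSecrets get,list
+```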
+
+## Next steps
+- [Overview of Platform-supported migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-overview.md)
+- Migrate to Cloud Services (extended support) using the [Azure portal](in-place-migration-portal.md)
+- Migrate to Cloud Services (extended support) using [PowerShell](in-place-migration-powershell.md)
cloud-services-extended-support In Place Migration Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-portal.md
+
+ Title: How to migrate - portal
+description: How to migrate to Cloud Services (extended support) using the Azure portal
+ Last updated : 2/08/2021
+# Migrate to Cloud Services (extended support) using the Azure portal
+
+This article shows you how to use the Azure portal to migrate from [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) to [Cloud Services (extended support)](overview.md).
+
+> [!IMPORTANT]
+> Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Before you begin
+
+**Ensure you are an administrator for the subscription.**
+
+To perform this migration, you must be added as a coadministrator for the subscription in the [Azure portal](https://portal.azure.com).
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. On the **Hub** menu, select **Subscription**. If you don't see it, select **All services**.
+3. Find the appropriate subscription entry, and then look at the **MY ROLE** field. For a coadministrator, the value should be *Account admin*.
+
+If you're not able to add a co-administrator, contact a service administrator or [co-administrator](../role-based-access-control/classic-administrators.md) for the subscription to get yourself added.
+
+**Sign up for Migration resource provider**
+
+1. Register with the migration resource provider `Microsoft.ClassicInfrastructureMigrate` and the preview feature `CloudServices` under the Microsoft.Compute namespace using the [Azure portal](https://docs.microsoft.com/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider-1).
+1. Wait five minutes for the registration to complete, and then check the status of the approval.
+
+## Migrate your Cloud Service resources
+
+1. Go to [Cloud Services (classic) portal blade](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.classicCompute%2FdomainNames).
+2. Select the Cloud Service you want to migrate.
+3. Select the **Migrate to ARM** blade.
+
+ > [!NOTE]
+    > If migrating a Cloud Services (classic) deployment located inside a virtual network (classic), a banner message will appear prompting you to move to the virtual network (classic) blade.
+    > You will be brought to the virtual network (classic) blade to complete the migration of both the virtual network (classic) and the Cloud Services (classic) deployments within it.
+ > :::image type="content" source="media/in-place-migration-portal-2.png" alt-text="Image shows moving a virtual network classic in the Azure portal.":::
+
+
+4. Validate the migration.
+
+ If validation is successful, then all deployments are supported and ready to be prepared.
+
+ :::image type="content" source="media/in-place-migration-portal-1.png" alt-text="Image shows the Migrate to ARM blade in the Azure portal.":::
+
+    If validation fails, a list of unsupported scenarios is displayed; these must be fixed before migration can continue.
+
+ :::image type="content" source="media/in-place-migration-portal-3.png" alt-text="Image shows validation error in the Azure portal.":::
+
+5. Prepare for the migration.
+
+ If the prepare is successful, the migration is ready for commit.
+
+ :::image type="content" source="media/in-place-migration-portal-4.png" alt-text="Image shows validation passing in the Azure portal.":::
+
+ If the prepare fails, review the error, address any issues, and retry the prepare.
+
+ :::image type="content" source="media/in-place-migration-portal-5.png" alt-text="Image shows validation failure error.":::
+
+    After Prepare, all Cloud Services in a virtual network are available for read operations using both the Cloud Services (classic) and Cloud Services (extended support) Azure portal blades. The Cloud Service (extended support) deployment can now be tested to ensure proper functioning before finalizing the migration.
+
+ :::image type="content" source="media/in-place-migration-portal-6.png" alt-text="Image shows testing APIs in portal blade.":::
+
+6. **(Optional)** Abort migration.
+
+    If you choose to discontinue the migration, use the **Abort** button to roll back the previous steps. The Cloud Services (classic) deployment is then unlocked for all CRUD operations.
+
+ :::image type="content" source="media/in-place-migration-portal-7.png" alt-text="Image shows validation passing.":::
+
+ If abort fails, select **Retry abort**. A retry should fix the issue. If not, contact support.
+
+ :::image type="content" source="media/in-place-migration-portal-8.png" alt-text="Image shows validation failure error message.":::
+
+7. Commit migration.
+
+ >[!IMPORTANT]
+ > Once you commit to the migration, there is no option to roll back.
+
+    Type in "yes" to confirm and commit the migration. The migration is now complete. The migrated Cloud Services (extended support) deployment is unlocked for all operations.
+
+## Next steps
+Review the [Post migration changes](in-place-migration-overview.md#post-migration-changes) section to see changes in deployment files, automation, and other attributes of your new Cloud Services (extended support) deployment.
cloud-services-extended-support In Place Migration Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-powershell.md
+
+ Title: Migrate to Azure Cloud Services (extended support) using PowerShell
+description: How to migrate from Azure Cloud Services (classic) to Azure Cloud Services (extended support) using PowerShell
+++
+ms.reviewer: mimckitt
+ Last updated : 02/06/2020
+# Migrate to Azure Cloud Services (extended support) using PowerShell
+
+These steps show you how to use Azure PowerShell commands to migrate from [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) to [Cloud Services (extended support)](overview.md).
+
+> [!IMPORTANT]
+> Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## 1) Plan for migration
+Planning is the most important step for a successful migration experience. Review the [Cloud Services (extended support) overview](overview.md) and [Planning for migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-plan.md) prior to beginning any migration steps.
+
+## 2) Install the latest version of PowerShell
+There are two main options to install Azure PowerShell: [PowerShell Gallery](https://www.powershellgallery.com/profiles/azure-sdk/) or [Web Platform Installer (WebPI)](https://aka.ms/webpi-azps). WebPI receives monthly updates. PowerShell Gallery receives updates on a continuous basis. This article is based on Azure PowerShell version 2.1.0.
+
+For installation instructions, see [How to install and configure Azure PowerShell](https://docs.microsoft.com/powershell/azure/servicemanagement/install-azure-ps?view=azuresmps-4.0.0&preserve-view=true).
+
+## 3) Ensure Admin permissions
+To perform this migration, you must be added as a coadministrator for the subscription in the [Azure portal](https://portal.azure.com).
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. On the **Hub** menu, select **Subscription**. If you don't see it, select **All services**.
+3. Find the appropriate subscription entry, and then look at the **MY ROLE** field. For a coadministrator, the value should be *Account admin*.
+
+If you're not able to add a co-administrator, contact a service administrator or co-administrator for the subscription to get yourself added.
+
+## 4) Register the classic provider and CloudService feature
+First, start a PowerShell prompt. For migration, set up your environment for both classic and Resource Manager.
+
+Sign in to your account for the Resource Manager model.
+
+```powershell
+Connect-AzAccount
+```
+
+Get the available subscriptions by using the following command:
+
+```powershell
+Get-AzSubscription | Sort Name | Select Name
+```
+
+Set your Azure subscription for the current session. This example sets the default subscription name to **My Azure Subscription**. Replace the example subscription name with your own.
+
+```powershell
+Select-AzSubscription -SubscriptionName "My Azure Subscription"
+```
+
+Register with the migration resource provider by using the following command:
+
+```powershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate
+```
+> [!NOTE]
+> Registration is a one-time step, but you must do it once before you attempt migration. Without registering, you see the following error message:
+>
+> *BadRequest : Subscription is not registered for migration.*
+
+Register the CloudServices feature for your subscription. The registrations may take several minutes to complete.
+
+```powershell
+Register-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute
+```
+
+Wait five minutes for the registration to finish.
+
+Check the status of the classic provider approval by using the following command:
+
+```powershell
+Get-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate
+```
+
+Check the status of the feature registration by using the following command:
+
+```powershell
+Get-AzProviderFeature -FeatureName CloudServices
+```
+
+Make sure that the `RegistrationState` is `Registered` for both the provider and the feature before you proceed.
+
+Before switching to the classic deployment model, make sure that you have enough Azure Resource Manager vCPU quota in the Azure region of your current deployment or virtual network. You can use the following PowerShell command to check the current number of vCPUs you have in Azure Resource Manager. To learn more about vCPU quotas, see [Limits and the Azure Resource Manager](../azure-resource-manager/management/azure-subscription-service-limits.md#managing-limits).
+
+This example checks the availability in the **West US** region. Replace the example region name with your own.
+
+```powershell
+Get-AzVMUsage -Location "West US"
+```
+
+Now, sign in to your account for the classic deployment model.
+
+```powershell
+Add-AzureAccount
+```
+
+Get the available subscriptions by using the following command:
+
+```powershell
+Get-AzureSubscription | Sort SubscriptionName | Select SubscriptionName
+```
+
+Set your Azure subscription for the current session. This example sets the default subscription to **My Azure Subscription**. Replace the example subscription name with your own.
+
+```powershell
+Select-AzureSubscription -SubscriptionName "My Azure Subscription"
+```
+## 5) Migrate your Cloud Services
+* [Migrate a Cloud Service not in a virtual network](#51-option-1migrate-a-cloud-service-not-in-a-virtual-network)
+* [Migrate a Cloud Service in a virtual network](#51-option-2migrate-a-cloud-service-in-a-virtual-network)
+
+> [!NOTE]
+> All the operations described here are idempotent. If you have a problem other than an unsupported feature or a configuration error, we recommend that you retry the prepare, abort, or commit operation. The platform then tries the action again.
+### 5.1) Option 1 - Migrate a Cloud Service not in a virtual network
+Get the list of cloud services by using the following command. Then pick the cloud service that you want to migrate.
+
+```powershell
+Get-AzureService | ft Servicename
+```
+
+Get the deployment name for the Cloud Service. In this example, the service name is **My Service**. Replace the example service name with your own service name.
+
+```powershell
+$serviceName = "My Service"
+$deployment = Get-AzureDeployment -ServiceName $serviceName
+$deploymentName = $deployment.DeploymentName
+```
+
+First, validate that you can migrate the Cloud Service by using the following commands:
+
+```powershell
+$validate = Move-AzureService -Validate -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork
+$validate.ValidationMessages
+```
+
+The preceding commands display any warnings and errors that block migration. If validation is successful, you can move on to the Prepare step:
+
+```powershell
+Move-AzureService -Prepare -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork
+```
+
+### 5.1) Option 2 - Migrate a Cloud Service in a virtual network
+
+To migrate a Cloud Service in a virtual network, you migrate the virtual network. The Cloud Service automatically migrates with the virtual network.
+
+> [!NOTE]
+> The virtual network name might be different from what is shown in the new portal. The new Azure portal displays the name as `[vnet-name]`, but the actual virtual network name is of type `Group [resource-group-name] [vnet-name]`. Before you start the migration, look up the actual virtual network name by using the command `Get-AzureVnetSite | Select -Property Name` or view it in the old Azure portal.
+
+This example sets the virtual network name to **myVnet**. Replace the example virtual network name with your own.
+
+```powershell
+$vnetName = "myVnet"
+```
+
+First, validate that you can migrate the virtual network by using the following command:
+
+```powershell
+Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName
+```
+
+The preceding command displays any warnings and errors that block migration. If validation is successful, you can proceed with the following Prepare step:
+
+```powershell
+Move-AzureVirtualNetwork -Prepare -VirtualNetworkName $vnetName
+```
+
+Check the configuration for the prepared Cloud Service by using either Azure PowerShell or the Azure portal. If you're not ready for migration and you want to go back to the old state, use the following command:
+
+```powershell
+Move-AzureVirtualNetwork -Abort -VirtualNetworkName $vnetName
+```
+
+If the prepared configuration looks good, you can move forward and commit the resources by using the following command:
+
+```powershell
+Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName
+```
+## Next steps
+Review the [Post migration changes](in-place-migration-overview.md#post-migration-changes) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-technical-details.md
+
+ Title: Technical details and requirements for migrating to Azure Cloud Services (extended support)
+description: Provides technical details and requirements for migrating from Azure Cloud Services (classic) to Azure Cloud Services (extended support)
+ms.reviewer: mimckitt
+ Last updated : 02/06/2020
+# Technical details of migrating to Azure Cloud Services (extended support)
+
+This article describes the technical details of the migration tool as it pertains to Cloud Services (classic).
+
+> [!IMPORTANT]
+> Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Details about features and scenarios supported for migration
+
+### Extensions and plugin migration
+- All enabled and supported extensions will be migrated.
+- Disabled extensions will not be migrated.
+- Plugins are a legacy concept and should be removed before migration. They are supported for migration, but after migration, if an extension needs to be enabled, the plugin must be removed before the extension is installed. Remote desktop plugins and extensions are most affected by this.
+
+### Certificate migration
+- In Cloud Services (extended support), certificates are stored in a Key Vault. As part of migration, we create a Key Vault named after the Cloud Service and transfer all certificates from Azure Service Manager to that Key Vault.
+- The reference to this Key Vault is specified in the template or passed through PowerShell or Azure CLI.
+
+### Service Configuration and Service Definition files
+- The .cscfg and .csdef files need minor updates for Cloud Services (extended support).
+- The names of resources like the virtual network and VM SKU are different. See [Translation of resources and naming convention post migration](#translation-of-resources-and-naming-convention-post-migration).
+- Customers can retrieve their new deployments through [PowerShell](https://docs.microsoft.com/powershell/module/az.cloudservice/?view=azps-5.4.0#cloudservice&preserve-view=true) and the [REST API](https://docs.microsoft.com/rest/api/compute/cloudservices/get).
+
+### Cloud Service and deployments
+- Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployments are no longer grouped into a cloud service by using slots.
+- If you have two slots in your Cloud Service (classic), you need to delete one slot (staging) and use the migration tool to move the other (production) slot to Azure Resource Manager.
+- The public IP address on the Cloud Service deployment remains the same after migration to Azure Resource Manager and is exposed as a Basic SKU IP (dynamic or static) resource.
+- The DNS name and domain (cloudapp.azure.net) for the migrated cloud service remains the same.
+
+### Virtual network migration
+- If a Cloud Services deployment is in a virtual network, then during migration all Cloud Services and associated virtual network resources are migrated together.
+- After migration, the virtual network is placed in a different resource group than the Cloud Service.
+- For virtual networks with multiple Cloud Services, each Cloud Service is migrated one after the other.
+
+### Migration of deployments not in a virtual network
+- In 2017, Azure started automatically creating new deployments (without a customer-specified virtual network) in a platform-created "default" virtual network. These default virtual networks are hidden from customers.
+- As part of the migration, this default virtual network is exposed to customers in Azure Resource Manager. To manage or update the deployment in Azure Resource Manager, customers need to add this virtual network information in the NetworkConfiguration section of the .cscfg file.
+- The default virtual network, when migrated to Azure Resource Manager, is placed in the same resource group as the Cloud Service.
+- Cloud Services created before this time will not be in any virtual network and cannot be migrated using the tool. Consider redeploying these Cloud Services directly in Azure Resource Manager.
+- To check whether a deployment is eligible to migrate, run the Validate API on the deployment. The result of the Validate API contains an error message that explicitly states whether the deployment is eligible to migrate.
+
+### Load Balancer
+- For a Cloud Service using a public endpoint, a platform-created load balancer associated with the Cloud Service is exposed inside the customer's subscription in Azure Resource Manager. The load balancer is a read-only resource, and updates are restricted only through the Service Configuration (.cscfg) and Service Definition (.csdef) files.
+
+### Key Vault
+- As part of migration, Azure automatically creates a new Key Vault and migrates all the certificates to it. The tool does not allow you to use an existing Key Vault.
+- Cloud Services (extended support) require a Key Vault located in the same region and subscription. This Key Vault is automatically created as part of the migration.
+## Translation of resources and naming convention post migration
+As part of migration, the resource names are changed, and a few Cloud Services features are exposed as Azure Resource Manager resources. The following table summarizes the changes specific to Cloud Services migration.
+
+| Cloud Services (classic) <br><br> Resource name | Cloud Services (classic) <br><br> Syntax| Cloud Services (extended support) <br><br> Resource name| Cloud Services (extended support) <br><br> Syntax |
+|||||
+| Cloud Service | `cloudservicename` | Not associated| Not associated |
+| Deployment (portal created) <br><br> Deployment (non-portal created) | `deploymentname` | Cloud Services (extended support) | `deploymentname` |
+| Virtual Network | `vnetname` <br><br> `Group resourcegroupname vnetname` <br><br> Not associated | Virtual Network (not portal created) <br><br> Virtual Network (portal created) <br><br> Virtual Networks (Default) | `vnetname` <br><br> `group-resourcegroupname-vnetname` <br><br> `DefaultRdfevirtualnetwork_vnetid`|
+| Not associated | Not associated | Key Vault | `cloudservicename` |
+| Not associated | Not associated | Resource Group for Cloud Service Deployments | `cloudservicename-migrated` |
+| Not associated | Not associated | Resource Group for Virtual Network | `vnetname-migrated` <br><br> `group-resourcegroupname-vnetname-migrated`|
+| Not associated | Not associated | Public IP (Dynamic) | `cloudservicenameContractContract` |
+| Reserved IP Name | `reservedipname` | Reserved IP (non-portal created) <br><br> Reserved IP (portal created) | `reservedipname` <br><br> `group-resourcegroupname-reservedipname` |
+| Not associated| Not associated | Load Balancer | `deploymentname-lb`|
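As an illustration of the virtual network rows in the table above, the following sketch shows how a portal-created classic name of the form `Group resourcegroupname vnetname` maps to `group-resourcegroupname-vnetname` after migration. The helper function is invented for this example; it is not part of the migration tooling.

```python
# Hypothetical helper illustrating the naming translation in the table above.
# Portal-created classic virtual networks are named "Group <rg> <vnet>";
# after migration they become "group-<rg>-<vnet>". Other names keep their name.
def migrated_vnet_name(classic_name: str) -> str:
    parts = classic_name.split()
    if len(parts) == 3 and parts[0] == "Group":
        resource_group, vnet = parts[1], parts[2]
        return f"group-{resource_group}-{vnet}"
    return classic_name

print(migrated_vnet_name("Group myResourceGroup myVnet"))  # group-myResourceGroup-myVnet
```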
+## Migration issues and how to handle them
+
+### Migration is stuck in an operation for a long time
+- Commit, prepare, and abort can take a long time, depending on the number of deployments. Operations time out after 24 hours.
+- Commit, prepare, and abort operations are idempotent. Most issues are fixable by retrying. Transient errors often resolve within a few minutes, so we recommend retrying after a short delay. If you're migrating by using the Azure portal and the operation is stuck in an in-progress state, use PowerShell to retry the operation.
+- Contact support to help migrate or roll back the deployment from the back end.
+
+### Migration failed in an operation
+- If validation failed, the deployment or virtual network contains an unsupported scenario, feature, or resource. Use the list of unsupported scenarios to find the workaround in the documentation.
+- The prepare operation first runs validation, including some expensive validations that aren't covered by validate. A prepare failure could be due to an unsupported scenario. Find the scenario and the workaround in the public documents. Call abort to go back to the original state and unlock the deployment for update and delete operations.
+- If abort failed, retry the operation. If retries fail, contact support.
+- If commit failed, retry the operation. If retries fail, contact support. Even if commit fails, there should be no data plane issue with your deployment; it should be able to handle customer traffic without any issue.
+
+### Portal refreshed after Prepare; the experience restarted, and Commit or Abort is no longer visible
+- The portal stores the migration information locally, so after a refresh it starts from the validate phase even if the Cloud Service is in the prepare phase.
+- You can use the portal to go through the validate and prepare steps again to expose the Abort and Commit buttons. Doing so doesn't cause any failures.
+- You can also use PowerShell or the REST API to abort or commit.
+
+### How much time can the operations take?
+Validate is designed to be quick. Prepare is the longest-running operation; its duration depends on the total number of role instances being migrated. Abort and commit can also take time, but less than prepare. All operations time out after 24 hours.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview.md
The Anomaly Detector API enables you to monitor and detect abnormalities in your
Using the Anomaly Detector doesn't require any prior experience in machine learning, and the RESTful API enables you to easily integrate the service into your applications and processes.
+This documentation contains the following types of articles:
+* The [quickstarts](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-to/identify-anomalies.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](./concepts/anomaly-detection-best-practices.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./tutorials/batch-anomaly-detection-powerbi.md) are longer guides that show you how to use this service as a component in broader business solutions.
 ## Features

 With the Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real time.
cognitive-services Howtocallvisionapi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowToCallVisionAPI.md
Title: Call the Computer Vision API
+ Title: Call the Image Analysis API
-description: Learn how to call the Computer Vision API by using the REST API in Azure Cognitive Services.
+description: Learn how to call the Image Analysis API by using the REST API in Azure Cognitive Services.
-# Call the Computer Vision API
+# Call the Image Analysis API
-This article demonstrates how to call the Computer Vision API by using the REST API. The samples are written both in C# by using the Computer Vision API client library and as HTTP POST or GET calls. The article focuses on:
+This article demonstrates how to call the Image Analysis API by using the REST API. The samples are written both in C# by using the Image Analysis API client library and as HTTP POST or GET calls. The article focuses on:
- Getting tags, a description, and categories
- Getting domain-specific information, or "celebrities"
The features offer the following options:
## Authorize the API call
-Every call to the Computer Vision API requires a subscription key. This key must be either passed through a query string parameter or specified in the request header.
+Every call to the Image Analysis API requires a subscription key. This key must be either passed through a query string parameter or specified in the request header.
You can pass the subscription key by doing any of the following:
-* Pass it through a query string, as in this Computer Vision API example:
+* Pass it through a query string, as in this example:
``` https://westus.api.cognitive.microsoft.com/vision/v2.1/analyze?visualFeatures=Description,Tags&subscription-key=<Your subscription key>
You can pass the subscription key by doing any of the following:
} ```
-## Upload an image to the Computer Vision API service
+## Upload an image to the Image Analysis service
-The basic way to perform the Computer Vision API call is by uploading an image directly to return tags, a description, and celebrities. You do this by sending a "POST" request with the binary image in the HTTP body together with the data read from the image. The upload method is the same for all Computer Vision API calls. The only difference is the query parameters that you specify.
+The basic way to perform the Image Analysis API call is by uploading an image directly to return tags, a description, and celebrities. You do this by sending a "POST" request with the binary image in the HTTP body together with the data read from the image. The upload method is the same for all Image Analysis API calls. The only difference is the query parameters that you specify.
For a specified image, get tags and a description by using either of the following options:
These errors are identical to those in vision.analyze, with the additional NotSu
## Next steps
-To use the REST API, go to [Computer Vision API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/56f91f2e778daf14a499f21b).
+To use the REST API, go to the [Image Analysis API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/56f91f2e778daf14a499f21b).
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/call-read-api.md
+
+ Title: How to call the Read API
+
+description: Learn how to call the Read API and configure its behavior in detail.
+ Last updated : 03/31/2021
+# Call the Read API
+
+In this guide, you'll learn how to call the Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs.
+
+## Submit data to the service
+
+The Read API's [Read call](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
+
+`https://{endpoint}/vision/v3.2-preview.3/read/analyze[?language][&pages][&readingOrder]`
+
+The call returns with a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Operation ID to be used in the next step.
+
+|Response header| Example value |
+|:--|:-|
+|Operation-Location | `https://cognitiveservice/vision/v3.1/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
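The operation ID is the last path segment of the `Operation-Location` URL. As a minimal sketch (the helper below is illustrative, not part of any SDK), you can extract it like this:

```python
# Illustration only: extract the operation ID from an Operation-Location
# header value returned by the Read call. The URL path ends with the ID.
def operation_id(operation_location: str) -> str:
    """Return the trailing path segment of the Operation-Location URL."""
    return operation_location.rstrip("/").rsplit("/", 1)[-1]

header = "https://cognitiveservice/vision/v3.1/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f"
print(operation_id(header))  # 49a36324-fc4b-4387-aa06-090cfbf0064f
```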
+
+> [!NOTE]
+> **Billing**
+>
+> The [Computer Vision pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) page includes the pricing tier for Read. Each analyzed image or page is one transaction. If you call the operation with a PDF or TIFF document containing 100 pages, the Read operation will count it as 100 transactions and you will be billed for 100 transactions. If you made 50 calls to the operation and each call submitted a document with 100 pages, you will be billed for 50 X 100 = 5000 transactions.
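The billing arithmetic in the note above is simply calls multiplied by pages per document. A tiny sketch, with a function name invented purely for illustration:

```python
# Each analyzed image or page is one transaction, so total transactions
# are the number of calls times the pages submitted per call.
def read_transactions(calls: int, pages_per_document: int) -> int:
    return calls * pages_per_document

assert read_transactions(1, 100) == 100    # one 100-page PDF
assert read_transactions(50, 100) == 5000  # 50 calls of 100 pages each
```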
+
+## Determine how to process the data
+
+### Language specification
+
+The [Read](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005) call has an optional request parameter for language. Read supports auto language identification and multilingual documents, so only provide a language code if you would like to force the document to be processed as that specific language.
+
+### Natural reading order output (Latin languages only)
+With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005), you can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
+### Select page(s) or page ranges for text extraction
+With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005), for large multi-page documents, use the `pages` query parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
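The service accepts the `pages` value directly as a query parameter. Purely as an illustration of which pages a value such as `3-6` or `1,3,5-7` selects, here is a small sketch; the helper is hypothetical and not part of the API:

```python
# Expand a `pages` specification like "3-6" or "1,3,5-7" into the
# individual page numbers it selects (for illustration only).
def expand_pages(spec: str) -> list[int]:
    pages = []
    for part in spec.split(","):
        if "-" in part:
            start, end = (int(x) for x in part.split("-"))
            pages.extend(range(start, end + 1))
        else:
            pages.append(int(part))
    return pages

print(expand_pages("3-6"))  # [3, 4, 5, 6]
```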
+## Get results from the service
+
+The second step is to call [Get Read Results](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation.
+
+`https://{endpoint}/vision/v3.2-preview.3/read/analyzeResults/{operationId}`
+
+It returns a JSON response that contains a **status** field with the following possible values.
+
+|Value | Meaning |
+|:--|:-|
+| `notStarted`| The operation has not started. |
+| `running`| The operation is being processed. |
+| `failed`| The operation has failed. |
+| `succeeded`| The operation has succeeded. |
+
+You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 1 to 2 seconds to avoid exceeding the requests per second (RPS) rate.
+
+> [!NOTE]
+> The free tier limits the request rate to 20 calls per minute. The paid tier allows 10 requests per second (RPS) that can be increased upon request. Use the Azure support channel or your account team to request a higher request per second (RPS) rate.
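A minimal polling sketch under these constraints might look like the following. `fetch_status` stands in for an HTTP GET on the `analyzeResults` URL; it is an assumption of this example, not part of the Read API:

```python
import time

# Poll the Get Read Results operation until it finishes. `fetch_status`
# is any callable returning the parsed JSON response (a dict with a
# "status" field); a 1-2 second interval respects the RPS guidance above.
def wait_for_result(fetch_status, interval_seconds=1.5, timeout_seconds=60):
    """Poll until the operation reports 'succeeded' or 'failed'."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        result = fetch_status()
        if result["status"] in ("succeeded", "failed"):
            return result
        time.sleep(interval_seconds)
    raise TimeoutError("Read operation did not finish in time")
```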
+
+When the **status** field has the `succeeded` value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores.
+
+> [!NOTE]
+> The data submitted to the `Read` operation are temporarily encrypted and stored at rest for a short duration, and then deleted. This lets your applications retrieve the extracted text as part of the service response.
+
+### Sample JSON output
+
+See the following example of a successful JSON response:
+
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2020-05-28T05:13:21Z",
+ "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
+ "analyzeResult": {
+ "version": "3.1.0",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 0.8551,
+ "width": 2661,
+ "height": 1901,
+ "unit": "pixel",
+ "lines": [
+ {
+ "boundingBox": [
+ 67,
+ 646,
+ 2582,
+ 713,
+ 2580,
+ 876,
+ 67,
+ 821
+ ],
+ "text": "The quick brown fox jumps",
+ "words": [
+ {
+ "boundingBox": [
+ 143,
+ 650,
+ 435,
+ 661,
+ 436,
+ 823,
+ 144,
+ 824
+ ],
+ "text": "The",
+ "confidence": 0.958
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+```
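Because the response maintains the original line groupings, extracting plain text is a matter of walking the `readResults` array. Here is a sketch against a trimmed version of the sample response above, for illustration only:

```python
# Trimmed sample response (bounding boxes and words omitted for brevity).
response = {
    "status": "succeeded",
    "analyzeResult": {
        "readResults": [
            {"page": 1, "lines": [{"text": "The quick brown fox jumps"}]}
        ]
    },
}

# Flatten every recognized line of text across all pages, in order.
def extracted_lines(response: dict) -> list[str]:
    return [
        line["text"]
        for page in response["analyzeResult"]["readResults"]
        for line in page["lines"]
    ]

print(extracted_lines(response))  # ['The quick brown fox jumps']
```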
+
+### Handwritten classification for text lines (Latin languages only)
+The [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
+## Next steps
+
+To use the REST API, go to the [Read API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005).
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
keywords: on-premises, OCR, Docker, container
Containers enable you to run the Computer Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run Computer Vision containers.
-The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](concept-recognizing-text.md#read-api).
+The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md).
## Read 3.2-preview container
Container images for Read are available.
Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image.
-### Docker pull for the Read container
+### Docker pull for the Read OCR container
# [Version 3.2-preview](#tab/version-3-2)
ApiKey={API_KEY}
This command:
-* Runs the Read container from the container image.
+* Runs the Read OCR container from the container image.
* Allocates 8 CPU cores and 18 gigabytes (GB) of memory.
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
* Automatically removes the container after it exits. The container image is still available on the host computer.
ApiKey={API_KEY}
This command:
-* Runs the Read container from the container image.
+* Runs the Read OCR container from the container image.
* Allocates 8 CPU cores and 16 gigabytes (GB) of memory.
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
* Automatically removes the container after it exits. The container image is still available on the host computer.
The `operation-location` is the fully qualified URL and is accessed via an HTTP
> [!IMPORTANT]
-> If you deploy multiple Read containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Computer Vision Docker containers](./computer-vision-resource-container-config.md).
+> If you deploy multiple Read OCR containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Computer Vision Docker containers](./computer-vision-resource-container-config.md).
### Synchronous read
In this article, you learned concepts and workflow for downloading, installing,
* Computer Vision provides a Linux container for Docker, encapsulating Read.
* Container images are downloaded from the "Container Preview" container registry in Azure.
* Container images run in Docker.
-* You can use either the REST API or SDK to call operations in Read containers by specifying the host URI of the container.
+* You can use either the REST API or SDK to call operations in Read OCR containers by specifying the host URI of the container.
* You must specify billing information when instantiating a container.

> [!IMPORTANT]
In this article, you learned concepts and workflow for downloading, installing,
## Next steps * Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings
-* Review [Computer Vision overview](overview.md) to learn more about recognizing printed and handwritten text
-* Refer to the [Computer Vision API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/56f91f2e778daf14a499f21b) for details about the methods supported by the container.
+* Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
+* Refer to the [Read API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/56f91f2e778daf14a499f21b) for details about the methods supported by the container.
* Refer to [Frequently asked questions (FAQ)](FAQ.md) to resolve issues related to Computer Vision functionality.
* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
The container also has the following container-specific configuration settings:
|Required|Setting|Purpose|
|--|--|--|
|No|ReadEngineConfig:ResultExpirationPeriod| v2.0 containers only. Result expiration period in hours. The default is 48 hours. The setting specifies when the system should clear recognition results. For example, if `resultExpirationPeriod=1`, the system clears the recognition result 1 hour after the process. If `resultExpirationPeriod=0`, the system clears the recognition result after the result is retrieved.|
-|No|Cache:Redis| v2.0 containers only. Enables Redis storage for storing results. A cache is *required* if multiple read containers are placed behind a load balancer.|
-|No|Queue:RabbitMQ|v2.0 containers only. Enables RabbitMQ for dispatching tasks. The setting is useful when multiple read containers are placed behind a load balancer.|
+|No|Cache:Redis| v2.0 containers only. Enables Redis storage for storing results. A cache is *required* if multiple read OCR containers are placed behind a load balancer.|
+|No|Queue:RabbitMQ|v2.0 containers only. Enables RabbitMQ for dispatching tasks. The setting is useful when multiple read OCR containers are placed behind a load balancer.|
|No|Queue:Azure:QueueVisibilityTimeoutInMilliseconds | v3.x containers only. The time for a message to be invisible when another worker is processing it. |
|No|Storage::DocumentStore::MongoDB|v2.0 containers only. Enables MongoDB for permanent result storage. |
|No|Storage:ObjectStore:AzureBlob:ConnectionString| v3.x containers only. Azure blob storage connection string. |
Replace {_argument_name_} with your own values:
## Container Docker examples
-The following Docker examples are for the Read container.
+The following Docker examples are for the Read OCR container.
# [Version 3.2-preview](#tab/version-3-2)
cognitive-services Concept Recognizing Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-recognizing-text.md
- Title: Optical Character Recognition (OCR) - Computer Vision-
-description: Concepts related to optical character recognition (OCR) of images and documents with printed and handwritten text using the Computer Vision API.
- Previously updated : 08/11/2020
-# Optical Character Recognition (OCR)
-
-Azure's Computer Vision API includes Optical Character Recognition (OCR) capabilities that extract printed or handwritten text from images. You can extract text from images, such as photos of license plates or containers with serial numbers, as well as from documents - invoices, bills, financial reports, articles, and more.
-
-## Read API
-
-The Computer Vision [Read API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents. It's optimized to extract text from text-heavy images and multi-page PDF documents with mixed languages. It supports detecting both printed and handwritten text in the same image or document.
-
-![How OCR converts images and documents into structured output with extracted text](./Images/how-ocr-works.svg)
-
-## Input requirements
-The **Read** call takes images and documents as its input. They have the following requirements:
-
-* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
-* For PDF and TIFF files, up to 2000 pages (only first two pages for the free tier) are processed.
-* The file size must be less than 50 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
-* The PDF dimensions must be at most 17 x 17 inches, corresponding to legal or A3 paper sizes and smaller.
-
-> [!NOTE]
-> **Language input**
->
-> The [Read call](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005) has an optional request parameter for language. Read supports auto language identification and multilingual documents, so only provide a language code if you would like to force the document to be processed as that specific language.
-
-## OCR demo (examples)
-
-![OCR demos](./Images/ocr-demo.gif)
-
-## Step 1: The Read operation
-
-The Read API's [Read call](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously. The call returns with a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Operation ID to be used in the next step.
-
-|Response header| Result URL |
-|:--|:-|
-|Operation-Location | `https://cognitiveservice/vision/v3.1/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
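As a sketch of how a client might consume this header (the header value is the sample from the table above; `operation_id_from_header` is an illustrative helper, not part of any SDK):

```python
def operation_id_from_header(operation_location: str) -> str:
    # The Operation-Location URL's last path segment is the operation ID
    # that the Get Read Results call in step 2 expects.
    return operation_location.rstrip("/").rsplit("/", 1)[-1]

header = ("https://cognitiveservice/vision/v3.1/read/analyzeResults/"
          "49a36324-fc4b-4387-aa06-090cfbf0064f")
print(operation_id_from_header(header))  # 49a36324-fc4b-4387-aa06-090cfbf0064f
```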
-
-> [!NOTE]
-> **Billing**
->
-> The [Computer Vision pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) page includes the pricing tier for Read. Each analyzed image or page is one transaction. If you call the operation with a PDF or TIFF document containing 100 pages, the Read operation will count it as 100 transactions and you will be billed for 100 transactions. If you made 50 calls to the operation and each call submitted a document with 100 pages, you will be billed for 50 x 100 = 5,000 transactions.
-
-## Step 2: The Get Read Results operation
-
-The second step is to call the [Get Read Results](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 1 to 2 seconds to avoid exceeding the requests per second (RPS) rate.
-
-|Field| Type | Possible values |
-|:--|:-:|:-|
-|status | string | notStarted: The operation has not started. |
-| | | running: The operation is being processed. |
-| | | failed: The operation has failed. |
-| | | succeeded: The operation has succeeded. |
-
-> [!NOTE]
-> The free tier limits the request rate to 20 calls per minute. The paid tier allows 10 requests per second (RPS), which can be increased upon request. Use the Azure support channel or your account team to request a higher rate.
-
-When the **status** field has the **succeeded** value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores.
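The polling described above can be sketched as follows. The `fetch_result` callable stands in for whatever HTTP call retrieves the Get Read Results JSON; it is an assumption of this example, not an SDK function:

```python
import time

def poll_read_result(fetch_result, interval_seconds=1.5, max_attempts=120):
    # Poll until the operation reaches a terminal state, waiting 1-2 seconds
    # between calls to stay under the requests-per-second limit.
    for _ in range(max_attempts):
        result = fetch_result()
        if result.get("status") in ("succeeded", "failed"):
            return result
        time.sleep(interval_seconds)
    raise TimeoutError("Read operation did not complete in time")
```

For example, calling it with a stub that eventually returns `{"status": "succeeded"}` yields that final response.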
-
-> [!NOTE]
-> The data submitted to the `Read` operation is temporarily encrypted, stored at rest for a short duration, and then deleted. This lets your applications retrieve the extracted text as part of the service response.
-
-## Sample JSON output
-
-See the following example of a successful JSON response:
-
-```json
-{
- "status": "succeeded",
- "createdDateTime": "2020-05-28T05:13:21Z",
- "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
- "analyzeResult": {
- "version": "3.1.0",
- "readResults": [
- {
- "page": 1,
- "angle": 0.8551,
- "width": 2661,
- "height": 1901,
- "unit": "pixel",
- "lines": [
- {
- "boundingBox": [
- 67,
- 646,
- 2582,
- 713,
- 2580,
- 876,
- 67,
- 821
- ],
- "text": "The quick brown fox jumps",
- "words": [
- {
- "boundingBox": [
- 143,
- 650,
- 435,
- 661,
- 436,
- 823,
- 144,
- 824
- ],
- "text": "The",
- "confidence": 0.958
- }
- ]
- }
- ]
- }
- ]
- }
-}
-```
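To illustrate walking this structure, the following sketch collects the text of every line from a response shaped like the sample above (`sample` abbreviates that JSON; the helper is illustrative, not an SDK call):

```python
def extract_lines(response: dict) -> list:
    # Flatten analyzeResult -> readResults (one entry per page) -> lines.
    return [
        line["text"]
        for page in response["analyzeResult"]["readResults"]
        for line in page["lines"]
    ]

sample = {
    "status": "succeeded",
    "analyzeResult": {
        "readResults": [
            {"page": 1, "lines": [{"text": "The quick brown fox jumps"}]}
        ]
    },
}
print(extract_lines(sample))  # ['The quick brown fox jumps']
```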
-
-## Natural reading order output (Latin only)
-With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005), specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
--
-## Handwritten classification for text lines (Latin only)
-The [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
--
-## Select page(s) or page ranges for text extraction
-With the [Read 3.2 preview API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005), for large multi-page documents, use the `pages` query parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
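Both preview options are plain query parameters on the Read call. As a sketch (the helper name is illustrative, not an SDK function), a client might assemble them like this:

```python
def read_query_params(reading_order="basic", pages=None):
    # readingOrder: "basic" (default) or "natural"; pages: e.g. "3-6" or "1,3".
    params = {"readingOrder": reading_order}
    if pages:
        params["pages"] = pages
    return params

print(read_query_params(reading_order="natural", pages="3-6"))
```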
--
-## Supported languages
-The Read APIs support a total of 73 languages for print style text. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr). Handwritten style OCR is supported exclusively for English.
-
-## Use the cloud API or deploy on-premise
-The Read 3.x cloud APIs are the preferred option for most customers because of ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
-
-For on-premise deployment, the [Read Docker container (preview)](./computer-vision-how-to-install-containers.md) enables you to deploy the new OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
-
-## OCR API
-
-The [OCR API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g#optical-character-recognition-ocr) is an older, synchronous operation; for most scenarios, use the newer Read API instead.
-
-> [!NOTE]
-> The Computer Vision 2.0 RecognizeText operations are being deprecated in favor of the new Read API covered in this article. Existing customers should [transition to using Read operations](upgrade-api-versions.md).
-
-## Next steps
-- Get started with the [Computer Vision REST API or client library quickstarts](./quickstarts-sdk/client-library.md).
-- Learn about the [Read 3.1 REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005).
-- Learn about the [Read 3.2 public preview REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) with support for a total of 73 languages.
cognitive-services Deploy Computer Vision On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/deploy-computer-vision-on-premises.md
read:
  # resultExpirationPeriod=0, the system will clear the recognition result after result retrieval.
  resultExpirationPeriod: 1
- # Redis storage, if configured, will be used by read container to store result records.
- # A cache is required if multiple read containers are placed behind load balancer.
+ # Redis storage, if configured, will be used by read OCR container to store result records.
+ # A cache is required if multiple read OCR containers are placed behind load balancer.
  redis:
    enabled: false # {true/false}
    password: password
- # RabbitMQ is used for dispatching tasks. This can be useful when multiple read containers are
+ # RabbitMQ is used for dispatching tasks. This can be useful when multiple read OCR containers are
  # placed behind load balancer.
  rabbitmq:
    enabled: false # {true/false}
read:
> [!IMPORTANT]
> - If the `billing` and `apikey` values aren't provided, the services expire after 15 minutes. Likewise, verification fails because the services aren't available.
>
-> - If you deploy multiple Read containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Computer Vision Docker containers](./computer-vision-resource-container-config.md).
+> - If you deploy multiple Read OCR containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Computer Vision Docker containers](./computer-vision-resource-container-config.md).
Create a *templates* folder under the *read* directory. Copy and paste the following YAML into a file named `deployment.yaml`. The `deployment.yaml` file will serve as a Helm template.
spec:
In the same *templates* folder, copy and paste the following helper functions into `helpers.tpl`. `helpers.tpl` defines useful functions to help generate Helm templates.

> [!NOTE]
-> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we’ll remove it from this article.
+> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
```yaml {{- define "rabbitmq.hostname" -}}
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
Title: Overview of Spatial Analysis
+ Title: What is Spatial Analysis?
-description: This document explains the basic concepts and features of a Computer Vision spatial analysis container.
+description: This document explains the basic concepts and features of a Computer Vision Spatial Analysis container.
- Previously updated : 02/01/2021
+ Last updated : 03/29/2021
-# Overview of Computer Vision spatial analysis
+# What is Spatial Analysis?
-Computer Vision spatial analysis is a new feature of Azure Cognitive Services Computer Vision that helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, run AI operations to extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, an AI operation can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines.
+The Spatial Analysis service helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, run AI operations to extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, an AI operation can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines.
-## The basics of Spatial Analysis
+<!--This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./Vision-API-How-to-Topics/HowToCallVisionAPI.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](tbd) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.-->
-Today the core operations of spatial analysis are all built on a pipeline that ingests video, detects people in the video, tracks the people as they move around over time, and generates events as people interact with regions of interest.
+## What it does
-## Spatial Analysis terms
+The core operations of Spatial Analysis are all built on a pipeline that ingests video, detects people in the video, tracks the people as they move around over time, and generates events as people interact with regions of interest.
-| Term | Definition |
+## Spatial Analysis features
+
+| Feature | Definition |
|||
-| People Detection | This component answers the question "where are the people in this image"? It finds humans in an image and passes a bounding box indicating the location of each person to the people tracking component. |
-| People Tracking | This component connects the people detections over time as the people move around in front of a camera. It uses temporal logic about how people typically move and basic information about the overall appearance of the people to do this. It does not track people across multiple cameras. If a person exists the field of view from a camera for longer than approximately a minute and then re-enters the camera view, the system will perceive this as a new person. People Tracking does not uniquely identify individuals across cameras. It does not use facial recognition or gait tracking. |
-| Face Mask Detection | This component detects the location of a person’s face in the camera’s field of view and identifies the presence of a face mask. To do so, the AI operation scans images from video; where a face is detected the service provides a bounding box around the face. Using object detection capabilities, it identifies the presence of face masks within the bounding box. Face Mask detection does not involve distinguishing one face from another face, predicting or classifying facial attributes or performing facial recognition. |
-| Region of Interest | This is a zone or line defined in the input video as part of configuration. When a person interacts with the region of the video the system generates an event. For example, for the PersonCrossingLine operation, a line is defined in the video. When a person crosses that line an event is generated. |
-| Event | An event is the primary output of spatial analysis. Each operation emits a specific event either periodically (ex. once per minute) or when a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the PeopleCount operation can emit an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
+| **People Detection** | This component answers the question, "Where are the people in this image?" It finds people in an image and passes a bounding box indicating the location of each person to the people tracking component. |
+| **People Tracking** | This component connects the people detections over time as people move around in front of a camera. It uses temporal logic about how people typically move and basic information about the overall appearance of the people. It does not track people across multiple cameras. If a person exits the field of view for longer than approximately a minute and then re-enters the camera view, the system will perceive this as a new person. People Tracking does not uniquely identify individuals across cameras. It does not use facial recognition or gait tracking. |
+| **Face Mask Detection** | This component detects the location of a person's face in the camera's field of view and identifies the presence of a face mask. The AI operation scans images from video; where a face is detected the service provides a bounding box around the face. Using object detection capabilities, it identifies the presence of face masks within the bounding box. Face Mask detection does not involve distinguishing one face from another face, predicting or classifying facial attributes or performing facial recognition. |
+| **Region of Interest** | This is a user-defined zone or line in the input video frame. When a person interacts with this region on the video, the system generates an event. For example, for the PersonCrossingLine operation, a line is defined in the video. When a person crosses that line an event is generated. |
+| **Event** | An event is the primary output of Spatial Analysis. Each operation emits a specific event either periodically (like once per minute) or whenever a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the PeopleCount operation can emit an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
-## Responsible use of Spatial Analysis technology
+## Get started
+
+### Public preview gating
-To learn how to use Spatial Analysis technology responsibly, see the [transparency note](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext). Microsoft’s transparency notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment.
+To ensure Spatial Analysis is used for scenarios it was designed for, we are making this technology available to customers through an application process. To get access to Spatial Analysis, you'll need to start by filling out our online intake form. [Begin your application here](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRyQZ7B8Cg2FEjpibPziwPcZUNlQ4SEVORFVLTjlBSzNLRlo0UzRRVVNPVy4u).
-## Spatial Analysis gating for public preview
+Access to the Spatial Analysis public preview is subject to Microsoft's sole discretion based on our eligibility criteria, vetting process, and availability to support a limited number of customers during this gated preview. In public preview, we are looking for customers who have a significant relationship with Microsoft, are interested in working with us on the recommended use cases, and additional scenarios that are in keeping with our responsible AI commitments.
-To ensure spatial analysis is used for scenarios it was designed for, we are making this technology available to customers through an application process. To get access to spatial analysis, you will need to start by filling out our online intake form. [Begin your application here](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRyQZ7B8Cg2FEjpibPziwPcZUNlQ4SEVORFVLTjlBSzNLRlo0UzRRVVNPVy4u).
+### Follow a quickstart
+
+Once you're granted access to Spatial Analysis, follow the [quickstart](spatial-analysis-container.md) to set up the container and begin analyzing video.
+
+## Responsible use of Spatial Analysis technology
-Access to the spatial analysis public preview is subject to Microsoft's sole discretion based on our eligibility criteria, vetting process, and availability to support a limited number of customers during this gated preview. In public preview, we are looking for customers who have a significant relationship with Microsoft, are interested in working with us on the recommended use cases, and additional scenarios that are in keeping with our responsible AI commitments.
+To learn how to use Spatial Analysis technology responsibly, see the [transparency note](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext). Microsoft's transparency notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment.
## Next steps

> [!div class="nextstepaction"]
-> [Get started with Spatial Analysis Container](spatial-analysis-container.md)
+> [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/language-support.md
Some features of Computer Vision support multiple languages; any features not me
## Optical Character Recognition (OCR)
-Computer Vision's OCR APIs support several languages. They do not require you to specify a language code. See [Optical Character Recognition (OCR)](concept-recognizing-text.md) for more information.
+Computer Vision's OCR APIs support several languages. They do not require you to specify a language code. See the [Optical Character Recognition (OCR) overview](overview-ocr.md) for more information.
|Language| Language code | OCR API | Read 3.0/3.1 | Read v3.2 preview |
|:--|:-:|:--:|::|::|
Computer Vision's OCR APIs support several languages. They do not require you to
|Italian | `it` |✔ |✔ |✔ |
|Japanese | `ja` |✔ | |✔ |
|Javanese | `jv` | | |✔ |
-|K’iche’ | `quc` | | |✔ |
+|K'iche' | `quc` | | |✔ |
|Kabuverdianu | `kea` | | |✔ |
|Kachin (Latin) | `kac` | | |✔ |
|Kara-Kalpak | `kaa` | | |✔ |
Computer Vision's OCR APIs support several languages. They do not require you to
## Image analysis
-Some actions of the [Analyze - Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) for a list of all the actions you can do with image analysis.
+Some actions of the [Analyze - Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) for a list of all the actions you can do with image analysis.
|Language | Language code | Categories | Tags | Description | Adult | Brands | Color | Faces | ImageType | Objects | Celebrities | Landmarks | |:|::|:-:|::|::|::|::|::|::|::|::|::|::|
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview-image-analysis.md
+
+ Title: What is Image Analysis?
+
+description: The Image Analysis service uses pre-trained AI models to extract many different visual features from images.
+ Last updated : 03/30/2021
+keywords: computer vision, computer vision applications, computer vision service
++
+# What is Image Analysis?
++
+The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.
+
+You can use Image Analysis through a client library SDK or by calling the [REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) directly.
+
+This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/image-analysis-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./Vision-API-How-to-Topics/HowToCallVisionAPI.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
+
+## Image Analysis features
+
+You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) API.
++
+### Tag visual features
+
+Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. [Tag visual features](concept-tagging-images.md)
+
+### Detect objects
+
+Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image. [Detect objects](concept-object-detection.md)
+
+### Detect brands
+
+Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement. [Detect brands](concept-brand-detection.md)
+
+### Categorize an image
+
+Identify and categorize an entire image, using a [category taxonomy](Category-Taxonomy.md) with parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.<br/>Currently, English is the only supported language for tagging and categorizing images. [Categorize an image](concept-categorizing-images.md)
+
+### Describe an image
+
+Generate a description of an entire image in human-readable language, using complete sentences. Computer Vision's algorithms generate various descriptions based on the objects identified in the image. The descriptions are each evaluated and a confidence score generated. A list is then returned ordered from highest confidence score to lowest. [Describe an image](concept-describing-images.md)
+
+### Detect faces
+
+Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/>Computer Vision provides a subset of the [Face](../face/index.yml) service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection. [Detect faces](concept-detecting-faces.md)
+
+### Detect image types
+
+Detect characteristics about an image, such as whether an image is a line drawing or the likelihood of whether an image is clip art. [Detect image types](concept-detecting-image-types.md)
+
+### Detect domain-specific content
+
+Use domain models to detect and identify domain-specific content in an image, such as celebrities and landmarks. For example, if an image contains people, Computer Vision can use a domain model for celebrities to determine if the people detected in the image are known celebrities. [Detect domain-specific content](concept-detecting-domain-content.md)
+
+### Detect the color scheme
+
+Analyze color usage within an image. Computer Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors. [Detect the color scheme](concept-detecting-color-schemes.md)
+
+### Generate a thumbnail
+
+Analyze the contents of an image to generate an appropriate thumbnail for that image. Computer Vision first generates a high-quality thumbnail and then analyzes the objects within the image to determine the *area of interest*. Computer Vision then crops the image to fit the requirements of the area of interest. The generated thumbnail can be presented using an aspect ratio that is different from the aspect ratio of the original image, depending on your needs. [Generate a thumbnail](concept-generating-thumbnails.md)
+
+### Get the area of interest
+
+Analyze the contents of an image to return the coordinates of the *area of interest*. Instead of cropping the image and generating a thumbnail, Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. [Get the area of interest](concept-generating-thumbnails.md#area-of-interest)
+
+## Moderate content in images
+
+You can use Computer Vision to [detect adult content](concept-detecting-adult-content.md) in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.
+
+## Image requirements
+
+Image Analysis works on images that meet the following requirements:
+
+- The image must be presented in JPEG, PNG, GIF, or BMP format
+- The file size of the image must be less than 4 megabytes (MB)
+- The dimensions of the image must be greater than 50 x 50 pixels
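These limits could be validated client-side before uploading. The helper below is an illustrative sketch of such a pre-check, not part of the service or any SDK:

```python
def meets_image_requirements(fmt, size_bytes, width, height):
    # Checks format, file size (< 4 MB), and minimum dimensions
    # against the documented Image Analysis input limits.
    return (
        fmt.upper() in {"JPEG", "PNG", "GIF", "BMP"}
        and size_bytes < 4 * 1024 * 1024
        and width > 50
        and height > 50
    )

print(meets_image_requirements("png", 1_500_000, 640, 480))  # True
```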
+
+## Data privacy and security
+
+As with all of the Cognitive Services, developers using the Computer Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+
+## Next steps
+
+Get started with Image Analysis by following the quickstart guide in your preferred development language:
+
+- [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md)
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview-ocr.md
+
+ Title: What is Optical character recognition?
+
+description: The optical character recognition (OCR) service extracts visible text in an image and returns it as structured strings.
+ Last updated : 03/29/2021
+# What is Optical character recognition?
+
+The Optical character recognition (OCR) service allows you to extract printed or handwritten text from images, such as photos of license plates or containers with serial numbers, as well as from documents&mdash;invoices, bills, financial reports, articles, and more. It uses deep-learning-based models and works with text on a variety of surfaces and backgrounds.
+
+The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow a [quickstart](./quickstarts-sdk/client-library.md) to get started.
+
+![OCR demos](./Images/ocr-demo.gif)
+
+This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./Vision-API-How-to-Topics/call-read-api.md) contain instructions for using the service in more specific or customized ways.
+<!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. -->
+
+## Supported languages
+The OCR APIs support a total of 73 languages for print style text. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr). Handwritten-style OCR is supported exclusively for English.
+
+## Input requirements
+
+The **Read** call takes images and documents as its input. They have the following requirements:
+
+* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
+* For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.
+* The file size must be less than 50 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
+* The PDF dimensions must be at most 17 x 17 inches, corresponding to legal or A3 paper sizes and smaller.
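The tiered limits above can be checked client-side before submitting a file. A hedged sketch (helper name is ours; the 17 x 17 inch PDF page-size rule is not modeled, and the pixel-dimension check is applied uniformly for simplicity):

```python
def validate_read_input(fmt: str, size_bytes: int, width: int, height: int,
                        pages: int = 1, free_tier: bool = False):
    """Return (accepted, pages_processed) for a Read API input, per the
    limits listed above. Megabytes are read as 1024 * 1024 bytes."""
    if fmt.upper() not in {"JPEG", "PNG", "BMP", "PDF", "TIFF"}:
        return False, 0
    max_bytes = (4 if free_tier else 50) * 1024 * 1024
    if size_bytes >= max_bytes:
        return False, 0
    if not (50 <= width <= 10000 and 50 <= height <= 10000):
        return False, 0
    if fmt.upper() in {"PDF", "TIFF"}:
        page_cap = 2 if free_tier else 2000
        return True, min(pages, page_cap)
    return True, 1
```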
+
+## Read API
+
+The Computer Vision [Read API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents. It's optimized to extract text from text-heavy images and multi-page PDF documents with mixed languages. It supports detecting both printed and handwritten text in the same image or document.
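The Read API is asynchronous: the initial call returns an `Operation-Location` header, whose URL you poll until the operation reaches a terminal status. A sketch of that flow (the key is a placeholder, helper names are ours, and the `status` values follow the v3.x `analyzeResults` response):

```python
import json
import time
import urllib.request

def operation_id_from_header(operation_location: str) -> str:
    """The trailing path segment of the Operation-Location URL is the
    operation ID used when polling for results."""
    return operation_location.rstrip("/").rsplit("/", 1)[-1]

def poll_read_result(operation_location: str, key: str,
                     interval: float = 1.0) -> dict:
    """Poll the analyzeResults URL until the operation finishes
    (network call; requires a real endpoint and key)."""
    while True:
        req = urllib.request.Request(
            operation_location,
            headers={"Ocp-Apim-Subscription-Key": key})
        with urllib.request.urlopen(req) as resp:
            body = json.loads(resp.read())
        if body.get("status") in ("succeeded", "failed"):
            return body
        time.sleep(interval)
```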
+
+![How OCR converts images and documents into structured output with extracted text](./Images/how-ocr-works.svg)
+## Use the cloud API or deploy on-premises
+The Read 3.x cloud APIs are the preferred option for most customers because of ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
+
+For on-premises deployment, the [Read Docker container (preview)](./computer-vision-how-to-install-containers.md) enables you to deploy the new OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
+
+## OCR API
+
+See the legacy [OCR API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g#optical-character-recognition-ocr) reference for a list of supported languages.
+
+## RecognizeText API
+
+> [!WARNING]
+> The Computer Vision 2.0 RecognizeText operations are in the process of being deprecated in favor of the new [Read API](#read-api) covered in this article. Existing customers should [transition to using Read operations](upgrade-api-versions.md).
+
+## Data privacy and security
+
+As with all of the Cognitive Services, developers using the Computer Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+
+## Next steps
+
+- Get started with the [OCR REST API or client library quickstarts](./quickstarts-sdk/client-library.md).
+- Learn about the [Read 3.1 REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005).
+- Learn about the [Read 3.2 public preview REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005) with support for a total of 73 languages.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview.md
Previously updated : 11/23/2020
Last updated : 03/29/2021
keywords: computer vision, computer vision applications, computer vision service
[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
-Azure's Computer Vision service gives you access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.
+Azure's Computer Vision service gives you access to advanced algorithms that process images and return information based on the visual features you're interested in.
-You can create Computer Vision applications through a [client library SDK](./quickstarts-sdk/client-library.md) or by calling the [REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005) directly. This page broadly covers what you can do with Computer Vision.
-
-This documentation contains the following types of articles:
-* The [quickstarts](./quickstarts-sdk/client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./Vision-API-How-to-Topics/HowToCallVisionAPI.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles](concept-recognizing-text.md) provide in-depth explanations of the service's functionality and features.
-* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
-
-## Optical Character Recognition (OCR)
-
-Computer Vision includes [Optical Character Recognition (OCR)](concept-recognizing-text.md) capabilities. You can use the new Read API to extract printed and handwritten text from images and documents. It uses deep learning based models and works with text on a variety of surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow a [quickstart](./quickstarts-sdk/client-library.md) to get started.
+| Service|Description|
+|||
+|[Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on a variety of surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.|
+|[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library.md) to get started.|
+| [Spatial Analysis](intro-to-spatial-analysis-public-preview.md)| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.|
## Computer Vision for digital asset management

Computer Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. For an all-in-one DAM solution using Cognitive Services, Azure Cognitive Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Computer Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository.
-## Analyze images for insight
-
-You can analyze images to provide insights about their visual features and characteristics. All of the features in the table below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) API. Follow a quickstart to get started.
-### Tag visual features
-
-Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. [Tag visual features](concept-tagging-images.md)
-
-### Detect objects
-
-Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image. [Detect objects](concept-object-detection.md)
-
-### Detect brands
-
-Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement. [Detect brands](concept-brand-detection.md)
-
-### Categorize an image
-
-Identify and categorize an entire image, using a [category taxonomy](Category-Taxonomy.md) with parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.<br/>Currently, English is the only supported language for tagging and categorizing images. [Categorize an image](concept-categorizing-images.md)
-
-### Describe an image
-
-Generate a description of an entire image in human-readable language, using complete sentences. Computer Vision's algorithms generate various descriptions based on the objects identified in the image. The descriptions are each evaluated and a confidence score generated. A list is then returned ordered from highest confidence score to lowest. [Describe an image](concept-describing-images.md)
-
-### Detect faces
-
-Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/>Computer Vision provides a subset of the [Face](../face/index.yml) service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection. [Detect faces](concept-detecting-faces.md)
-
-### Detect image types
-
-Detect characteristics about an image, such as whether an image is a line drawing or the likelihood of whether an image is clip art. [Detect image types](concept-detecting-image-types.md)
-
-### Detect domain-specific content
-
-Use domain models to detect and identify domain-specific content in an image, such as celebrities and landmarks. For example, if an image contains people, Computer Vision can use a domain model for celebrities to determine if the people detected in the image are known celebrities. [Detect domain-specific content](concept-detecting-domain-content.md)
-
-### Detect the color scheme
-
-Analyze color usage within an image. Computer Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors. [Detect the color scheme](concept-detecting-color-schemes.md)
-
-### Generate a thumbnail
-
-Analyze the contents of an image to generate an appropriate thumbnail for that image. Computer Vision first generates a high-quality thumbnail and then analyzes the objects within the image to determine the *area of interest*. Computer Vision then crops the image to fit the requirements of the area of interest. The generated thumbnail can be presented using an aspect ratio that is different from the aspect ratio of the original image, depending on your needs. [Generate a thumbnail](concept-generating-thumbnails.md)
-
-### Get the area of interest
-
-Analyze the contents of an image to return the coordinates of the *area of interest*. Instead of cropping the image and generating a thumbnail, Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. [Get the area of interest](concept-generating-thumbnails.md#area-of-interest)
-
-## Moderate content in images
-
-You can use Computer Vision to [detect adult content](concept-detecting-adult-content.md) in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.
-
-## Deploy on premises using Docker containers
-
-Use Computer Vision containers to deploy API features on-premises. These Docker containers enable you to bring the service closer to your data for compliance, security or other operational reasons. Computer Vision offers the following containers:
-
-* The [Computer Vision read OCR container (preview)](computer-vision-how-to-install-containers.md) lets you recognize printed and handwritten text in images.
-* The [Computer Vision spatial analysis container (preview)](spatial-analysis-container.md) lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments.
## Image requirements

Computer Vision can analyze images that meet the following requirements:
As with all of the Cognitive Services, developers using the Computer Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
## Next steps
-Get started with Computer Vision by following the quickstart guide in your preferred development language:
+Follow a quickstart to implement and run a service in your preferred development language.
-- [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md)
+* [Quickstart: Optical character recognition (OCR)](quickstarts-sdk/client-library.md)
+* [Quickstart: Image Analysis](quickstarts-sdk/image-analysis-client-library.md)
+* [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
Title: "Quickstart: Computer Vision client library"
+ Title: "Quickstart: Read client library or REST API"
-description: Learn how to use Azure Computer Vision in your application through a native client library in the language of your choice.
+description: Learn how to use Optical character recognition (OCR) in your application through a native client library in the language of your choice.
Previously updated : 12/15/2020
Last updated : 03/21/2020
zone_pivot_groups: programming-languages-computer-vision
keywords: computer vision, computer vision service
-# Quickstart: Use the Computer Vision client library
+# Quickstart: Use the Read client library or REST API
-Get started with the Computer Vision REST API or client libraries. The Computer Vision service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code for basic tasks.
+Get started with the Read REST API or client libraries. The Read service provides you with AI algorithms for extracting visible text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
cognitive-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library.md
+
+ Title: "Quickstart: Image Analysis client library or REST API"
+
+description: Learn how to use Image Analysis in your application through a native client library in the language of your choice.
+ Last updated : 03/21/2020
+zone_pivot_groups: programming-languages-computer-vision
+keywords: computer vision, computer vision service
++
+# Quickstart: Use the Image Analysis client library or REST API
+
+Get started with the Image Analysis REST API or client libraries. The Analyze Image service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code for basic tasks.
cognitive-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/read-container-migration-guide.md
Title: Migrating to the Read v3.x OCR containers
+ Title: Migrating to the Read v3.x containers
description: Learn how to migrate to the v3 Read OCR containers
If you're using version 2 of the Computer Vision Read OCR container, use this article to migrate to the v3.x containers.
## Configuration changes
-* `ReadEngineConfig:ResultExpirationPeriod` is no longer supported. The Read container has a built Cron job that removes the results and metadata associated with a request after 48 hours.
+* `ReadEngineConfig:ResultExpirationPeriod` is no longer supported. The Read OCR container has a built-in cron job that removes the results and metadata associated with a request after 48 hours.
* `Cache:Redis:Configuration` is no longer supported. The cache is not used in the v3.x containers, so you do not need to set it.

## API changes
See the [Computer Vision v3 REST API migration guide](./upgrade-api-versions.md)
## Memory requirements
-The requirements and recommendations are based on benchmarks with a single request per second, using an 8-MB image of a scanned business letter that contains 29 lines and a total of 803 characters. The following table describes the minimum and recommended allocation of resources for each Read container.
+The requirements and recommendations are based on benchmarks with a single request per second, using an 8-MB image of a scanned business letter that contains 29 lines and a total of 803 characters. The following table describes the minimum and recommended allocation of resources for each Read OCR container.
|Container |Minimum | Recommended |
|---|---|---|
Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which set
## Next steps

* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings
-* Review [Computer Vision overview](overview.md) to learn more about recognizing printed and handwritten text
-* Refer to the [Computer Vision API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container.
+* Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
+* Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container.
* Refer to [Frequently asked questions (FAQ)](FAQ.md) to resolve issues related to Computer Vision functionality.
* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Spatial Analysis Camera Placement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-camera-placement.md
# Camera placement guide
-This article provides camera placement recommendations for spatial analysis (public preview). It includes general guidelines as well as specific recommendations for height, angle, and camera-to-focal-point-distance for all the included operations.
+This article provides camera placement recommendations for Spatial Analysis (public preview). It includes general guidelines as well as specific recommendations for height, angle, and camera-to-focal-point-distance for all the included operations.
> [!NOTE]
> This guide is designed for the Axis M3045-V camera. This camera will use resolution 1920x1080, 106 degree horizontal field of view, 59 degree vertical field of view and a fixed 2.8mm focal length. The principles below will apply to all cameras, but specific guidelines around camera height and camera-to-focal-point distance will need to be adjusted for use with other cameras.

## General guidelines
-Consider the following general guidelines when positioning cameras for spatial analysis:
+Consider the following general guidelines when positioning cameras for Spatial Analysis:
* **Lighting height.** Place cameras below lighting fixtures so the fixtures don't block the cameras.
* **Obstructions.** To avoid obstructing camera views, take note of obstructions such as poles, signage, shelving, walls, and existing LP cameras.
Consider the following general guidelines when positioning cameras for spatial analysis:
## Height, focal-point distance, and angle
-You need to consider three things when deciding how to install a camera for spatial analysis:
+You need to consider three things when deciding how to install a camera for Spatial Analysis:
- Camera height
- Camera-to-focal-point distance
- The angle of the camera relative to the floor plane
Organic queue lines form organically. This style of queue is acceptable if queue
## Next steps

* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure spatial analysis operations](./spatial-analysis-operations.md)
+* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
* [Logging and troubleshooting](spatial-analysis-logging.md)
* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
Last updated 01/12/2021
-# Install and run the spatial analysis container (Preview)
+# Install and run the Spatial Analysis container (Preview)
-The spatial analysis container enables you to analyze real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. Containers are great for specific security and data governance requirements.
+The Spatial Analysis container enables you to analyze real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. Containers are great for specific security and data governance requirements.
## Prerequisites

* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
- * You will need the key and endpoint from the resource you create to run the spatial analysis container. You'll use your key and endpoint later.
+ * You will need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
-### Spatial analysis container requirements
+### Spatial Analysis container requirements
-To run the spatial analysis container, you need a compute device with a [NVIDIA Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration, however the container runs on any other desktop machine that meets the minimum requirements. We will refer to this device as the host computer.
+To run the Spatial Analysis container, you need a compute device with an [NVIDIA Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration; however, the container runs on any other desktop machine that meets the minimum requirements. We will refer to this device as the host computer.
#### [Azure Stack Edge device](#tab/azure-stack-edge)
In our example, we will utilize an [NC series VM](../../virtual-machines/nc-seri
| Requirement | Description |
|--|--|
-| Camera | The spatial analysis container is not tied to a specific camera brand. The camera device needs to: support Real-Time Streaming Protocol(RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15FPS and 1080p resolution. |
+| Camera | The Spatial Analysis container is not tied to a specific camera brand. The camera device needs to: support Real-Time Streaming Protocol (RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15 FPS and 1080p resolution. |
| Linux OS | [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) must be installed on the host computer. |
It is recommended that you use an Azure Stack Edge device for your host computer
### Configure compute on the Azure Stack Edge portal
-Spatial analysis uses the compute features of the Azure Stack Edge to run an AI solution. To enable the compute features, make sure that:
+Spatial Analysis uses the compute features of the Azure Stack Edge to run an AI solution. To enable the compute features, make sure that:
* You've [connected and activated](../../databox-online/azure-stack-edge-deploy-connect-setup-activate.md) your Azure Stack Edge device.
* You have a Windows client system running PowerShell 5.0 or later, to access the device.
When the Edge compute role is set up on the Edge device, it creates two devices:
$ip = "<device-IP-address>"
```
-4. To add the IP address of your device to the clientΓÇÖs trusted hosts list, use the following command:
-4. To add the IP address of your device to the client's trusted hosts list, use the following command:
```powershell
Set-Item WSMan:\localhost\Client\TrustedHosts $ip -Concatenate -Force
```
sudo systemctl --now enable nvidia-mps.service
## Configure Azure IoT Edge on the host computer
-To deploy the spatial analysis container on the host computer, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
+To deploy the Spatial Analysis container on the host computer, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the [Azure portal](https://portal.azure.com/).
Run this command to restart the IoT Edge service on the host computer.
sudo systemctl restart iotedge
```
-Deploy the spatial analysis container as an IoT Module on the host computer, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows). If you're using the portal, set the image URI to the location of your Azure Container Registry.
+Deploy the Spatial Analysis container as an IoT Module on the host computer, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows). If you're using the portal, set the image URI to the location of your Azure Container Registry.
Use the below steps to deploy the container using the Azure CLI.

#### [Azure VM with GPU](#tab/virtual-machine)
-An Azure Virtual Machine with a GPU can also be used to run spatial analysis. The example below will use an [NC series](../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) VM that has one K80 GPU.
+An Azure Virtual Machine with a GPU can also be used to run Spatial Analysis. The example below will use an [NC series](../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) VM that has one K80 GPU.
#### Create the VM
Now that you have set up and configured your VM, follow the steps below to confi
## Configure Azure IoT Edge on the VM
-To deploy the spatial analysis container on the VM, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
+To deploy the Spatial Analysis container on the VM, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the [Azure portal](https://portal.azure.com/).
Run this command to restart the IoT Edge service on the VM.
sudo systemctl restart iotedge
```
-Deploy the spatial analysis container as an IoT Module on the VM, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows). If you're using the portal, set the image URI to the location of your Azure Container Registry.
+Deploy the Spatial Analysis container as an IoT Module on the VM, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows). If you're using the portal, set the image URI to the location of your Azure Container Registry.
Use the below steps to deploy the container using the Azure CLI.
sudo az iot edge set-modules --hub-name "<iothub-name>" --device-id "<device-nam
| `--target-condition` | Your IoT Edge device name for the host computer. | | `--subscription` | Subscription ID or name. |
-This command will start the deployment. Navigate to the page of your Azure IoT Hub instance in the Azure portal to see the deployment status. The status may show as *417 ΓÇô The deviceΓÇÖs deployment configuration is not set* until the device finishes downloading the container images and starts running.
+This command will start the deployment. Navigate to the page of your Azure IoT Hub instance in the Azure portal to see the deployment status. The status may show as *417 - The device's deployment configuration is not set* until the device finishes downloading the container images and starts running.
## Validate that the deployment is successful
-There are several ways to validate that the container is running. Locate the *Runtime Status* in the **IoT Edge Module Settings** for the spatial analysis module in your Azure IoT Hub instance on the Azure portal. Validate that the **Desired Value** and **Reported Value** for the *Runtime Status* is *Running*.
+There are several ways to validate that the container is running. Locate the *Runtime Status* in the **IoT Edge Module Settings** for the Spatial Analysis module in your Azure IoT Hub instance on the Azure portal. Validate that the **Desired Value** and **Reported Value** for the *Runtime Status* is *Running*.
![Example deployment verification](./media/spatial-analysis/deployment-verification.png)
-Once the deployment is complete and the container is running, the **host computer** will start sending events to the Azure IoT Hub. If you used the `.debug` version of the operations, youΓÇÖll see a visualizer window for each camera you configured in the deployment manifest. You can now define the lines and zones you want to monitor in the deployment manifest and follow the instructions to deploy again.
+Once the deployment is complete and the container is running, the **host computer** will start sending events to the Azure IoT Hub. If you used the `.debug` version of the operations, you'll see a visualizer window for each camera you configured in the deployment manifest. You can now define the lines and zones you want to monitor in the deployment manifest and follow the instructions to deploy again.
-## Configure the operations performed by spatial analysis
+## Configure the operations performed by Spatial Analysis
-You will need to use [spatial analysis operations](spatial-analysis-operations.md) to configure the container to use connected cameras, configure the operations, and more. For each camera device you configure, the operations for spatial analysis will generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
+You will need to use [Spatial Analysis operations](spatial-analysis-operations.md) to configure the container to use connected cameras, configure the operations, and more. For each camera device you configure, the operations for Spatial Analysis will generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
## Use the output generated by the container
If you want to start consuming the output generated by the container, see the following articles:
-* Use the Azure Event Hub SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. See [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md) for more information.
-* Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information.
+* Use the Azure Event Hub SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. See [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md) for more information.
+* Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information.
-## Running spatial analysis with a recorded video file
+## Running Spatial Analysis with a recorded video file
-You can use spatial analysis with both recorded or live video. To use spatial analysis for recorded video, try recording a video file and save it as an mp4 file. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
- 1. Change **Secure transfer required** to **Disabled**
- 2. Change **Allow Blob public access** to **Enabled**
+You can use Spatial Analysis with either recorded or live video. To use Spatial Analysis with recorded video, record a video and save it as an MP4 file. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
+ 1. Change **Secure transfer required** to **Disabled**
+ 2. Change **Allow Blob public access** to **Enabled**
Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported). Click on **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
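The scheme swap and expiry check can be scripted; a minimal stdlib-only sketch (the storage account, container, and token values below are placeholders, not real credentials):

```python
from urllib.parse import urlparse, parse_qs

def prepare_sas_url(blob_sas_url: str) -> str:
    """Swap the https scheme for http, since HTTPS is not supported for VIDEO_URL."""
    if blob_sas_url.startswith("https://"):
        blob_sas_url = "http://" + blob_sas_url[len("https://"):]
    return blob_sas_url

def sas_expiry(blob_sas_url: str) -> str:
    """Return the 'se' (signed expiry) query parameter so you can confirm
    the token covers your testing period."""
    query = parse_qs(urlparse(blob_sas_url).query)
    return query.get("se", ["<no expiry found>"])[0]

# Placeholder SAS URL for illustration only.
url = "https://mystorage.blob.core.windows.net/videos/test.mp4?sv=2020-02-10&se=2021-12-31T23%3A59Z&sig=abc"
print(prepare_sas_url(url))
print(sas_expiry(url))
```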
-Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the spatial analysis container with the updated manifest. See the example below.
+Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
-The spatial analysis module will start consuming video file and will continuously auto replay as well.
+The Spatial Analysis module will start consuming the video file and will automatically replay it in a continuous loop.
```json
"zonecrossing": {
- "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "Replace http url here",
- "VIDEO_SOURCE_ID": "personcountgraph",
- "VIDEO_IS_LIVE": false,
+ "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "Replace http url here",
+ "VIDEO_SOURCE_ID": "personcountgraph",
+ "VIDEO_IS_LIVE": false,
"VIDEO_DECODE_GPU_INDEX": 0,
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
- "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
- }
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
+ }
},
```
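Hand-escaping nested JSON strings such as `SPACEANALYTICS_CONFIG` is error-prone. One way to generate the escaped values, sketched here with Python's standard `json` module (the parameter names mirror the manifest fragment above; the surrounding manifest plumbing is omitted):

```python
import json

# Build the zone configuration as a plain Python structure first.
space_analytics = {
    "zones": [{
        "name": "queue",
        "polygon": [[0.3, 0.3], [0.3, 0.9], [0.6, 0.9], [0.6, 0.3], [0.3, 0.3]],
        "events": [{
            "type": "zonecrossing",
            "config": {"threshold": 16.0, "focus": "footprint"},
        }],
    }]
}

# The manifest expects several values as JSON-encoded strings, so serialize
# the nested configs with json.dumps; serializing the parent object again
# takes care of the escaping shown in the fragment above.
parameters = {
    "VIDEO_URL": "Replace http url here",
    "VIDEO_SOURCE_ID": "personcountgraph",
    "VIDEO_IS_LIVE": False,
    "VIDEO_DECODE_GPU_INDEX": 0,
    "DETECTOR_NODE_CONFIG": json.dumps({"gpu_index": 0, "do_calibration": True}),
    "SPACEANALYTICS_CONFIG": json.dumps(space_analytics),
}
print(json.dumps(parameters, indent=2))
```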
If you encounter issues when starting or running the container, see [telemetry a
## Billing
-The spatial analysis container sends billing information to Azure, using a Computer Vision resource on your Azure account. The use of spatial analysis in public preview is currently free.
+The Spatial Analysis container sends billing information to Azure, using a Computer Vision resource on your Azure account. The use of Spatial Analysis in public preview is currently free.
Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
## Summary
-In this article, you learned concepts and workflow for downloading, installing, and running the spatial analysis container. In summary:
+In this article, you learned concepts and workflow for downloading, installing, and running the Spatial Analysis container. In summary:
-* Spatial analysis is a Linux container for Docker.
+* Spatial Analysis is a Linux container for Docker.
* Container images are downloaded from the Microsoft Container Registry.
* Container images run as IoT Modules in Azure IoT Edge.
* How to configure the container and deploy it on a host machine.
In this article, you learned concepts and workflow for downloading, installing,
## Next steps
* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure spatial analysis operations](spatial-analysis-operations.md)
+* [Configure Spatial Analysis operations](spatial-analysis-operations.md)
* [Logging and troubleshooting](spatial-analysis-logging.md)
* [Camera placement guide](spatial-analysis-camera-placement.md)
* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
# Telemetry and troubleshooting
-Spatial analysis includes a set of features to monitor the health of the system and help with diagnosing issues.
+Spatial Analysis includes a set of features to monitor the health of the system and help with diagnosing issues.
## Enable visualizations
-To enable a visualization of AI Insights events in a video frame, you need to use the `.debug` version of a [spatial analysis operation](spatial-analysis-operations.md) on a desktop machine. The visualization is not possible on Azure Stack Edge devices. There are four debug operations available.
+To enable a visualization of AI Insights events in a video frame, you need to use the `.debug` version of a [Spatial Analysis operation](spatial-analysis-operations.md) on a desktop machine. The visualization is not possible on Azure Stack Edge devices. There are four debug operations available.
If your device is not an Azure Stack Edge device, edit the deployment manifest file for [desktop machines](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) to use the correct value for the `DISPLAY` environment variable. It needs to match the `$DISPLAY` variable on the host computer. After updating the deployment manifest, redeploy the container.
xhost +
## Collect system health telemetry
-Telegraf is an open source image that works with spatial analysis, and is available in the Microsoft Container Registry. It takes the following inputs and sends them to Azure Monitor. The telegraf module can be built with desired custom inputs and outputs. The telegraf module configuration in spatial analysis is part of the deployment manifest (linked above). This module is optional and can be removed from the manifest if you don't need it.
+Telegraf is an open source image that works with Spatial Analysis, and is available in the Microsoft Container Registry. It takes the following inputs and sends them to Azure Monitor. The telegraf module can be built with desired custom inputs and outputs. The telegraf module configuration in Spatial Analysis is part of the deployment manifest (linked above). This module is optional and can be removed from the manifest if you don't need it.
Inputs:
-1. Spatial analysis Metrics
+1. Spatial Analysis Metrics
2. Disk Metrics
3. CPU Metrics
4. Docker Metrics
Inputs:
Outputs:
1. Azure Monitor
-The supplied spatial analysis telegraf module will publish all the telemetry data emitted by the spatial analysis container to Azure Monitor. See the [Azure Monitor](../../azure-monitor/overview.md) for information on adding Azure Monitor to your subscription.
+The supplied Spatial Analysis telegraf module will publish all the telemetry data emitted by the Spatial Analysis container to Azure Monitor. See the [Azure Monitor](../../azure-monitor/overview.md) for information on adding Azure Monitor to your subscription.
After setting up Azure Monitor, you will need to create credentials that enable the module to send telemetry. You can use the Azure portal to create a new Service Principal, or use the Azure CLI command below to create one.
Once the telegraf module is deployed, the reported metrics can be accessed eithe
| Event Name | Description |
|--|-|
-| archon_exit | Sent when a user changes the spatial analysis module status from *running* to *stopped*. |
+| archon_exit | Sent when a user changes the Spatial Analysis module status from *running* to *stopped*. |
| archon_error | Sent when any of the processes inside the container crash. This is a critical error. |
| InputRate | The rate at which the graph processes video input. Reported every 5 minutes. |
| OutputRate | The rate at which the graph outputs AI insights. Reported every 5 minutes. |
You can use `iotedge` command line tool to check the status and logs of the runn
## Collect log files with the diagnostics container
-Spatial analysis generates Docker debugging logs that you can use to diagnose runtime issues, or include in support tickets. The spatial analysis diagnostics module is available in the Microsoft Container Registry for you to download. In the manifest deployment file for your [Azure Stack Edge Device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the *diagnostics* module.
+Spatial Analysis generates Docker debugging logs that you can use to diagnose runtime issues, or include in support tickets. The Spatial Analysis diagnostics module is available in the Microsoft Container Registry for you to download. In the manifest deployment file for your [Azure Stack Edge Device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the *diagnostics* module.
In the "env" section add the following configuration:
From the IoT Edge portal, select your device and then the **diagnostics** module
1. Create your own Azure Blob Storage account, if you haven't already.
2. Get the **Connection String** for your storage account from the Azure portal. It will be located in **Access Keys**.
-3. Spatial analysis logs will be automatically uploaded into a Blob Storage container named *rtcvlogs* with the following file name format: `{CONTAINER_NAME}/{START_TIME}-{END_TIME}-{QUERY_TIME}.log`.
+3. Spatial Analysis logs will be automatically uploaded into a Blob Storage container named *rtcvlogs* with the following file name format: `{CONTAINER_NAME}/{START_TIME}-{END_TIME}-{QUERY_TIME}.log`.
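If you need to sort or filter the uploaded logs programmatically, the blob name format above can be parsed; a sketch, assuming the embedded timestamps are epoch milliseconds (an assumption, consistent with the `TimeFilter` values used elsewhere in this article):

```python
from datetime import datetime, timezone

def parse_log_blob_name(blob_name: str):
    """Split a log blob name of the form
    {CONTAINER_NAME}/{START_TIME}-{END_TIME}-{QUERY_TIME}.log.
    Timestamps are assumed to be epoch milliseconds."""
    container, rest = blob_name.split("/", 1)
    start_ms, end_ms, query_ms = rest[:-len(".log")].split("-")
    to_utc = lambda ms: datetime.fromtimestamp(int(ms) / 1000, tz=timezone.utc)
    return container, to_utc(start_ms), to_utc(end_ms), to_utc(query_ms)

# Hypothetical blob name for illustration.
name = "spatialanalysis/1573255761112-1573259361112-1573259400000.log"
container, start, end, queried = parse_log_blob_name(name)
print(container, start.isoformat(), end.isoformat())
```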
```json
"env":{
From the IoT Edge portal, select your device and then the **diagnostics** module
}
```
-### Uploading spatial analysis logs
+### Uploading Spatial Analysis logs
Logs are uploaded on-demand with the `getRTCVLogs` IoT Edge method, in the `diagnostics` module.
The below table lists the parameters you can use when querying logs.
| ContainerId | Target container for fetching logs.| `null`, when there is no container ID. The API returns all available containers information with IDs.|
| DoPost | Perform the upload operation. When this is set to `false`, it performs the requested operation and returns the upload size without performing the upload. When set to `true`, it will initiate the asynchronous upload of the selected logs | `false`, do not upload.|
| Throttle | Indicate how many lines of logs to upload per batch | `1000`, Use this parameter to adjust post speed. |
-| Filters | Filters logs to be uploaded | `null`, filters can be specified as key value pairs based on the spatial analysis logs structure: `[UTC, LocalTime, LOGLEVEL,PID, CLASS, DATA]`. For example: `{"TimeFilter":[-1,1573255761112]}, {"TimeFilter":[-1,1573255761112]}, {"CLASS":["myNode"]`|
+| Filters | Filters logs to be uploaded | `null`, filters can be specified as key-value pairs based on the Spatial Analysis logs structure: `[UTC, LocalTime, LOGLEVEL,PID, CLASS, DATA]`. For example: `{"TimeFilter":[-1,1573255761112]}, {"CLASS":["myNode"]}`|
The following table lists the attributes in the query response.
kubectl logs <pod-name> -n <namespace> --all-containers
| `Enable-HcsSupportAccess` | Generates access credentials to start a support session. |
-## How to file a support ticket for spatial analysis
+## How to file a support ticket for Spatial Analysis
-If you need more support in finding a solution to a problem you're having with the spatial analysis container, follow these steps to fill out and submit a support ticket. Our team will get back to you with additional guidance.
+If you need more support in finding a solution to a problem you're having with the Spatial Analysis container, follow these steps to fill out and submit a support ticket. Our team will get back to you with additional guidance.
### Fill out the basics
Create a new support ticket at the [New support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page. Follow the prompts to fill in the following parameters:
Create a new support ticket at the [New support request](https://ms.portal.azure
![Support basics](./media/support-ticket-page-1-final.png)
1. Set **Issue Type** to be `Technical`.
-2. Select the subscription that you are utilizing to deploy the spatial analysis container.
+2. Select the subscription that you are utilizing to deploy the Spatial Analysis container.
3. Select `My services` and select `Cognitive Services` as the service.
-4. Select the resource that you are utilizing to deploy the spatial analysis container.
+4. Select the resource that you are utilizing to deploy the Spatial Analysis container.
5. Write a brief description detailing the problem you are facing.
6. Select `Spatial Analysis` as your problem type.
7. Select the appropriate subtype from the drop-down.
Review the details of your support request to ensure everything is accurate and
## Next steps
* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure spatial analysis operations](./spatial-analysis-operations.md)
+* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
* [Camera placement guide](spatial-analysis-camera-placement.md)
* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
Last updated 01/12/2021
-# Spatial analysis operations
+# Spatial Analysis operations
-Spatial analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for spatial analysis will generate an output stream of JSON messages sent to your instance of Azure IoT Hub.
+Spatial Analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for Spatial Analysis will generate an output stream of JSON messages sent to your instance of Azure IoT Hub.
-The spatial analysis container implements the following operations:
+The Spatial Analysis container implements the following operations:
| Operation Identifier| Description|
|||
All of the above operations are also available in the `.debug` version, which have
| cognitiveservices.vision.spatialanalysis-persondistance.debug | Tracks when people violate a distance rule. <br> Emits a _personDistanceEvent_ periodically with the location of each distance violation. |
| cognitiveservices.vision.spatialanalysis.debug | Generic operation which can be used to run all scenarios mentioned above. This option is more useful when you want to run multiple scenarios on the same camera or use system resources (e.g. GPU) more efficiently. |
-Spatial analysis can also be run with [Live Video Analytics](../../media-services/live-video-analytics-edge/spatial-analysis-tutorial.md) as their Video AI module.
+Spatial Analysis can also be run with [Live Video Analytics](../../media-services/live-video-analytics-edge/spatial-analysis-tutorial.md) as their Video AI module.
<!--more details on the setup can be found in the [LVA Setup page](LVA-Setup.md). Below is the list of the operations supported with Live Video Analytics. -->
Live Video Analytics operations are also available in the `.debug` version (e.g.
> [!IMPORTANT]
> The computer vision AI models detect and locate human presence in video footage and output by using a bounding box around a human body. The AI models do not attempt to discover the identities or demographics of individuals.
-These are the parameters required by each of these spatial analysis operations.
+These are the parameters required by each of these Spatial Analysis operations.
| Operation parameters| Description|
|||
| Operation ID | The Operation Identifier from the table above.|
| enabled | Boolean: true or false|
-| VIDEO_URL| The RTSP url for the camera device (Example: `rtsp://username:password@url`). Spatial analysis supports H.264 encoded stream either through RTSP, http, or mp4. Video_URL can be provided as an obfuscated base64 string value using AES encryption, and if the video url is obfuscated then `KEY_ENV` and `IV_ENV` need to be provided as environment variables. Sample utility to generate keys and encryption can be found [here](/dotnet/api/system.security.cryptography.aesmanaged). |
+| VIDEO_URL| The RTSP url for the camera device (Example: `rtsp://username:password@url`). Spatial Analysis supports H.264 encoded stream either through RTSP, http, or mp4. Video_URL can be provided as an obfuscated base64 string value using AES encryption, and if the video url is obfuscated then `KEY_ENV` and `IV_ENV` need to be provided as environment variables. Sample utility to generate keys and encryption can be found [here](/dotnet/api/system.security.cryptography.aesmanaged). |
| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This will be returned with the event JSON output.|
| VIDEO_IS_LIVE| True for camera devices; false for recorded videos.|
| VIDEO_DECODE_GPU_INDEX| Which GPU to decode the video frame. By default it is 0. Should be the same as the `gpu_index` in other node config like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.|
These are the parameters required by each of these spatial analysis operations.
| ENABLE_FACE_MASK_CLASSIFIER | `True` to enable detecting people wearing face masks in the video stream, `False` to disable it. By default this is disabled. Face mask detection requires input video width parameter to be 1920 `"INPUT_VIDEO_WIDTH": 1920`. The face mask attribute will not be returned if detected people are not facing the camera or are too far from it. Refer to the [camera placement](spatial-analysis-camera-placement.md) guide for more information | ### Detector Node Parameter Settings
-This is an example of the DETECTOR_NODE_CONFIG parameters for all spatial analysis operations.
+This is an example of the DETECTOR_NODE_CONFIG parameters for all Spatial Analysis operations.
```json
{
This is an example of the DETECTOR_NODE_CONFIG parameters for all spatial analys
| `calibration_quality_check_queue_max_size` | int | Maximum number of data samples to store when camera model is calibrated. Default is `1000`. Only used when `enable_recalibration=True`.| | `enable_breakpad`| bool | Indicates whether you want to enable breakpad, which is used to generate crash dump for debug use. It is `false` by default. If you set it to `true`, you also need to add `"CapAdd": ["SYS_PTRACE"]` in the `HostConfig` part of container `createOptions`. By default, the crash dump is uploaded to the [RealTimePersonTracking](https://appcenter.ms/orgs/Microsoft-Organization/apps/RealTimePersonTracking/crashes/errors?version=&appBuild=&period=last90Days&status=&errorType=all&sortCol=lastError&sortDir=desc) AppCenter app, if you want the crash dumps to be uploaded to your own AppCenter app, you can override the environment variable `RTPT_APPCENTER_APP_SECRET` with your app's app secret.
-## Spatial analysis operations configuration and output
+## Spatial Analysis operations configuration and output
### Zone configuration for cognitiveservices.vision.spatialanalysis-personcount
This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that configures a zone. You may configure multiple zones for this operation.
This is an example of the DETECTOR_NODE_CONFIG parameters for all spatial analys
```json
{
"zones":[{
- "name": "lobbycamera",
- "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
- "events":[{
- "type": "count",
- "config":{
- "trigger": "event",
+ "name": "lobbycamera",
+ "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+ "events":[{
+ "type": "count",
+ "config":{
+ "trigger": "event",
"threshold": 16.00, "focus": "footprint" }
- }]
+ }]
}
```
This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
"name": "lobbycamera", "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]], "events":[{
- "type": "persondistance",
- "config":{
- "trigger": "event",
- "output_frequency":1,
- "minimum_distance_threshold":6.0,
- "maximum_distance_threshold":35.0,
- "aggregation_method": "average"
+ "type": "persondistance",
+ "config":{
+ "trigger": "event",
+ "output_frequency":1,
+ "minimum_distance_threshold":6.0,
+ "maximum_distance_threshold":35.0,
+ "aggregation_method": "average"
"threshold": 16.00, "focus": "footprint"
- }
- }]
+ }
+ }]
}]
}
```
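To build intuition for these parameters, here is an illustrative sketch of the kind of pairwise check the thresholds describe. This is not the container's actual algorithm (which is not documented here); it only shows how `minimum_distance_threshold`, `maximum_distance_threshold`, and an `average` aggregation could interact:

```python
from itertools import combinations
from math import dist

def distance_violations(footprints, minimum=6.0, maximum=35.0):
    """Illustrative only: flag pairs closer than minimum_distance_threshold,
    skip pairs beyond maximum_distance_threshold, and report an average over
    the measured pairs (one reading of aggregation_method: "average")."""
    violations, measured = [], []
    for a, b in combinations(footprints, 2):
        d = dist(a, b)
        if d > maximum:
            continue  # treat as outside the measurable range
        measured.append(d)
        if d < minimum:
            violations.append((a, b, d))
    average = sum(measured) / len(measured) if measured else None
    return violations, average

# Three hypothetical footprint positions in ground-plane units.
people = [(0.0, 0.0), (4.0, 3.0), (20.0, 0.0)]
violations, avg = distance_violations(people)
print(len(violations), avg)
```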
This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
See the [camera placement](spatial-analysis-camera-placement.md) guidelines to learn more about how to configure zones and lines.
-## Spatial analysis Operation Output
+## Spatial Analysis Operation Output
The events from each operation are egressed to Azure IoT Hub in JSON format.
Sample JSON for an event output by this operation.
"y": 0.0 }, "metadata": {
- "attributes": {
- "face_mask": 0.99
- }
- }
+ "attributes": {
+ "face_mask": 0.99
+ }
+ }
}, { "type": "person",
Sample JSON for an event output by this operation.
"y": 0.0 }, "metadata":{
- "attributes": {
- "face_nomask": 0.99
- }
+ "attributes": {
+ "face_nomask": 0.99
+ }
}
- }
+ }
], "schemaVersion": "1.0" }
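On the receiving side, the mask attributes can be pulled out of the payload; a minimal sketch using only the detection fields shown in the samples above (it assumes the message has already been read from IoT Hub and parsed from JSON):

```python
def mask_status(event: dict):
    """Summarize face-mask attributes from a Spatial Analysis event payload,
    using the metadata.attributes fields shown in the JSON samples above."""
    summary = []
    for detection in event.get("detections", []):
        attrs = detection.get("metadata", {}).get("attributes", {})
        if "face_mask" in attrs:
            summary.append(("mask", attrs["face_mask"]))
        elif "face_nomask" in attrs:
            summary.append(("no_mask", attrs["face_nomask"]))
        else:
            summary.append(("unknown", None))  # e.g. person not facing the camera
    return summary

# Trimmed-down sample mirroring the payloads above.
sample = {
    "detections": [
        {"type": "person", "metadata": {"attributes": {"face_mask": 0.99}}},
        {"type": "person", "metadata": {"attributes": {"face_nomask": 0.99}}},
    ],
    "schemaVersion": "1.0",
}
print(mask_status(sample))
```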
Sample JSON for detections output by this operation.
}, "confidence": 0.9005028605461121, "metadata": {
- "attributes": {
- "face_mask": 0.99
- }
- }
+ "attributes": {
+ "face_mask": 0.99
+ }
+ }
} ], "schemaVersion": "1.0"
Sample JSON for detections output by this operation with `zonecrossing` type SPA
] }, "confidence": 0.6267998814582825,
- "metadata": {
- "attributes": {
- "face_mask": 0.99
- }
- }
+ "metadata": {
+ "attributes": {
+ "face_mask": 0.99
+ }
+ }
} ],
Sample JSON for detections output by this operation with `zonedwelltime` type SP
"trackingId": "afcc2e2a32a6480288e24381f9c5d00e", "status": "Exit", "side": "1",
- "durationMs": 7132.0
+ "durationMs": 7132.0
}, "zone": "queuecamera" }
Output of this operation depends on configured `events`, for example if there
## Use the output generated by the container
-You may want to integrate spatial analysis detection or events into your application. Here are a few approaches to consider:
+You may want to integrate Spatial Analysis detection or events into your application. Here are a few approaches to consider:
* Use the Azure Event Hub SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. See [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md) for more information.
* Set up **Message Routing** on your Azure IoT Hub to send the events to other endpoints or save the events to your data storage. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information.
* Set up an Azure Stream Analytics job to process the events in real time as they arrive and create visualizations.
-## Deploying spatial analysis operations at scale (multiple cameras)
+## Deploying Spatial Analysis operations at scale (multiple cameras)
-In order to get the best performance and utilization of the GPUs, you can deploy any spatial analysis operations on multiple cameras using graph instances. Below is a sample for running the `cognitiveservices.vision.spatialanalysis-personcrossingline` operation on fifteen cameras.
+In order to get the best performance and utilization of the GPUs, you can deploy any Spatial Analysis operations on multiple cameras using graph instances. Below is a sample for running the `cognitiveservices.vision.spatialanalysis-personcrossingline` operation on fifteen cameras.
```json
"properties.desired": {
cognitive-services Spatial Analysis Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-web-app.md
# How to: Deploy a People Counting web application
-Use this article to learn how to integrate spatial analysis into a web app that understands the movement of people, and monitors the number of people occupying a physical space.
+Use this article to learn how to integrate Spatial Analysis into a web app that understands the movement of people, and monitors the number of people occupying a physical space.
In this tutorial you will learn how to:
-* Deploy the spatial analysis container
+* Deploy the Spatial Analysis container
* Configure the operation and camera
* Configure the IoT Hub connection in the Web Application
* Deploy and test the Web Application
In this tutorial you will learn how to:
* Basic understanding of Azure IoT Edge deployment configurations, and an [Azure IoT Hub](../../iot-hub/index.yml)
* A configured [host computer](spatial-analysis-container.md).
-## Deploy the spatial analysis container
+## Deploy the Spatial Analysis container
Fill out the [request application](https://aka.ms/csgate) to get access to run the container.
az iot hub device-identity create --hub-name "<IoT Hub Name>" --device-id "<Edge
### Deploy the container on Azure IoT Edge on the host computer
-Deploy the spatial analysis container as an IoT Module on the host computer, using the Azure CLI. The deployment process requires a deployment manifest file which outlines the required containers, variables, and configurations for your deployment. You can find a sample [Azure Stack Edge specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2142179), [non-Azure Stack Edge specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2152189), and [Azure VM with GPU specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2152189) on GitHub, which include a basic deployment configuration for the *spatial-analysis* container.
+Deploy the Spatial Analysis container as an IoT Module on the host computer, using the Azure CLI. The deployment process requires a deployment manifest file which outlines the required containers, variables, and configurations for your deployment. You can find a sample [Azure Stack Edge specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2142179), [non-Azure Stack Edge specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2152189), and [Azure VM with GPU specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2152189) on GitHub, which include a basic deployment configuration for the *spatial-analysis* container.
Alternatively, you can use the Azure IoT extensions for Visual Studio Code to perform operations with your IoT hub. Go to [Deploy Azure IoT Edge Modules from Visual Studio Code](../../iot-edge/how-to-deploy-modules-vscode.md) to learn more.
If you'd like to view or modify the source code for this application, you can fi
## Next steps
-* [Configure spatial analysis operations](./spatial-analysis-operations.md)
+* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
* [Logging and troubleshooting](spatial-analysis-logging.md)
* [Camera placement guide](spatial-analysis-camera-placement.md)
* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Zone Line Placement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-zone-line-placement.md
# Zone and Line Placement Guide
-This article provides guidelines for how to define zones and lines for spatial analysis operations to achieve accurate analysis of peoples movements in a space. This applies to all operations.
+This article provides guidelines for how to define zones and lines for Spatial Analysis operations to achieve accurate analysis of people's movements in a space. This applies to all operations.
-Zones and lines are defined using the JSON SPACEANALYSIS_CONFIG parameter. See the [spatial analysis operations](spatial-analysis-operations.md) article for more information.
+Zones and lines are defined using the JSON SPACEANALYSIS_CONFIG parameter. See the [Spatial Analysis operations](spatial-analysis-operations.md) article for more information.
## Guidelines for drawing zones
If you want to see a specific section of your camera view, create the largest zo
### Example of a well-shaped zone
-The zone should be big enough to accommodate three people standing along each edge and focused on the area of interest. Spatial analysis will identify people whose feet are placed in the zone, so when drawing zones on the 2D image, imagine the zone as a carpet laying on the floor.
+The zone should be big enough to accommodate three people standing along each edge and focused on the area of interest. Spatial Analysis will identify people whose feet are placed in the zone, so when drawing zones on the 2D image, imagine the zone as a carpet lying on the floor.
![Well-shaped zone](./media/spatial-analysis/zone-good-example.png)
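Because Spatial Analysis anchors each person to a zone by their feet, zone membership amounts to a 2D point-in-polygon test on the image plane. The following is a minimal illustrative sketch of that idea using ray casting; it is not the container's internal logic, and `point_in_zone` is a hypothetical helper:

```python
def point_in_zone(point, zone):
    """Ray-casting test: is the (x, y) point inside the polygon zone?

    `zone` is a list of (x, y) vertices in image coordinates, like the
    polygons drawn on the 2D camera frame. Illustration only; not the
    Spatial Analysis container's implementation.
    """
    x, y = point
    inside = False
    n = len(zone)
    for i in range(n):
        x1, y1 = zone[i]
        x2, y2 = zone[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A person whose feet land inside the "carpet" polygon is counted in the zone.
zone = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(point_in_zone((2, 1), zone))   # → True (feet inside the zone)
print(point_in_zone((5, 1), zone))   # → False (feet outside the zone)
```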
The following examples show poorly shaped zones. In these examples, the area of
### Example of a well-shaped line
-The line should be long enough to accommodate the entire entrance. Spatial analysis will identify people whose feet cross the line, so when drawing lines on the 2D image imagine you're drawing them as if they lie on the floor.
+The line should be long enough to accommodate the entire entrance. Spatial Analysis will identify people whose feet cross the line, so when drawing lines on the 2D image imagine you're drawing them as if they lie on the floor.
If possible, extend the line wider than the actual entrance. If this will not result in extra crossings (as in the image below when the line is against a wall) then extend it.
The following examples show poorly defined lines.
## Next steps

* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure spatial analysis operations](./spatial-analysis-operations.md)
+* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
* [Logging and troubleshooting](spatial-analysis-logging.md)
* [Camera placement guide](spatial-analysis-camera-placement.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/whats-new.md
Computer Vision's Read API v3.2 public preview, available as cloud service and D
* Extract text only for selected pages for a multi-page document.
* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
-[Learn more](concept-recognizing-text.md) about the Read API.
+See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
> [!div class="nextstepaction"]
> [Use the Read API v3.2 Public Preview](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-3/operations/5d986960601faab4bf452005)
Computer Vision's Read API v3.2 public preview, available as cloud service and D
## January 2021
-### Spatial analysis container update
+### Spatial Analysis container update
-A new version of the [spatial analysis container](spatial-analysis-container.md) has been released with a new feature set. This Docker container lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments.
+A new version of the [Spatial Analysis container](spatial-analysis-container.md) has been released with a new feature set. This Docker container lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments.
-* [Spatial analysis operations](spatial-analysis-operations.md) can be now configured to detect if a person is wearing a protective face covering such as a mask.
+* [Spatial Analysis operations](spatial-analysis-operations.md) can now be configured to detect if a person is wearing a protective face covering such as a mask.
* A mask classifier can be enabled for the `personcount`, `personcrossingline` and `personcrossingpolygon` operations by configuring the `ENABLE_FACE_MASK_CLASSIFIER` parameter.
* The attributes `face_mask` and `face_noMask` will be returned as metadata with a confidence score for each person detected in the video stream.
* The *personcrossingpolygon* operation has been extended to allow the calculation of the dwell time a person spends in a zone. You can set the `type` parameter in the Zone configuration for the operation to `zonedwelltime`, and a new event of type *personZoneDwellTimeEvent* will include the `durationMs` field populated with the number of milliseconds that the person spent in the zone.
* **Breaking change**: The *personZoneEvent* event has been renamed to *personZoneEnterExitEvent*. This event is raised by the *personcrossingpolygon* operation when a person enters or exits the zone and provides directional info with the numbered side of the zone that was crossed.
* Video URL can be provided as "Private Parameter/obfuscated" in all operations. Obfuscation is now optional and will only work if `KEY` and `IV` are provided as environment variables.
* Calibration is enabled by default for all operations. Set `do_calibration: false` to disable it.
-* Added support for auto recalibration (by default disabled) via the `enable_recalibration` parameter, please refer to [Spatial analysis operations](./spatial-analysis-operations.md) for details
-* Camera calibration parameters to the `DETECTOR_NODE_CONFIG`. Refer to [Spatial analysis operations](./spatial-analysis-operations.md) for details.
+* Added support for auto recalibration (disabled by default) via the `enable_recalibration` parameter. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
+* Added camera calibration parameters to the `DETECTOR_NODE_CONFIG`. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
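The dwell-time event described above can be consumed with a small parser. The payload below is an illustrative fragment only (the real event carries additional fields), and `dwell_seconds` is a hypothetical helper:

```python
import json

# Illustrative payload only; real personZoneDwellTimeEvent messages
# include more fields than shown here.
sample_event = json.dumps({
    "type": "personZoneDwellTimeEvent",
    "properties": {"durationMs": 7340}
})

def dwell_seconds(event_json):
    """Return the dwell time in seconds for a personZoneDwellTimeEvent, else None."""
    event = json.loads(event_json)
    if event.get("type") != "personZoneDwellTimeEvent":
        return None
    # durationMs holds the milliseconds the person spent in the zone.
    return event["properties"]["durationMs"] / 1000.0

print(dwell_seconds(sample_event))  # → 7.34
```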
## October 2020
The Computer Vision API in General Availability has been upgraded to v3.1.
## September 2020
-### Spatial analysis container preview
+### Spatial Analysis container preview
-The [spatial analysis container](spatial-analysis-container.md) is now in preview. The spatial analysis feature of Computer Vision lets you to analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments. Spatial analysis is a Docker container you can use on-premises.
+The [Spatial Analysis container](spatial-analysis-container.md) is now in preview. The Spatial Analysis feature of Computer Vision lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments. Spatial Analysis is a Docker container you can use on-premises.
### Read API v3.1 Public Preview adds OCR for Japanese

Computer Vision's Read API v3.1 public preview adds these capabilities:
* This preview version of the Read API supports English, Dutch, French, German, Italian, Japanese, Portuguese, Simplified Chinese, and Spanish languages.
-See the [Read API overview](concept-recognizing-text.md) to learn more.
+See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
> [!div class="nextstepaction"]
> [Learn more about Read API v3.1 Public Preview 2](https://westus2.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-2/operations/5d986960601faab4bf452005)
Computer Vision's Read API v3.1 public preview adds support for Simplified Chine
* This preview version of the Read API supports English, Dutch, French, German, Italian, Portuguese, Simplified Chinese, and Spanish languages.
-See the [Read API overview](concept-recognizing-text.md) to learn more.
+See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
> [!div class="nextstepaction"]
> [Learn more about Read API v3.1 Public Preview 1](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-1/operations/5d986960601faab4bf452005)

## May 2020
-Computer Vision API v3.0 entered General Availability, with updates to [Read API](concept-recognizing-text.md):
+Computer Vision API v3.0 entered General Availability, with updates to the Read API:
* Support for English, Dutch, French, German, Italian, Portuguese, and Spanish
* Improved accuracy
* Confidence score for each extracted word
* New output format
+See the [OCR overview](overview-ocr.md) to learn more.
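Since each extracted word now carries its own confidence score, a client can flag uncertain words for review. A sketch of that idea follows; the response fragment is trimmed and illustrative (real Read results contain more fields), and `low_confidence_words` is a hypothetical helper:

```python
# Trimmed, illustrative Read-style result; real responses contain more fields.
read_result = {
    "analyzeResult": {
        "readResults": [
            {"lines": [
                {"text": "Total due",
                 "words": [
                     {"text": "Total", "confidence": 0.994},
                     {"text": "due", "confidence": 0.981},
                 ]}
            ]}
        ]
    }
}

def low_confidence_words(result, threshold=0.99):
    """Collect extracted words whose per-word confidence falls below a threshold."""
    flagged = []
    for page in result["analyzeResult"]["readResults"]:
        for line in page["lines"]:
            for word in line["words"]:
                if word["confidence"] < threshold:
                    flagged.append(word["text"])
    return flagged

print(low_confidence_words(read_result))  # → ['due']
```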
+## March 2020
+
+* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../cognitive-services-security.md).
cognitive-services How To Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md
If your computer has a slow connection to the Face service, that will impact the
Mitigations:

- When you create your Face subscription, make sure to choose the region closest to where your application is hosted.
- If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. See the previous section for an example.
+- If longer latencies impact the user experience, choose a timeout threshold (for example, a maximum of 5 seconds) before retrying the API call.
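The timeout-and-retry mitigation above can be sketched generically. In this sketch, `call_with_timeout_retry` and the flaky stub are hypothetical stand-ins, not Face SDK APIs; the only assumption is a callable that accepts a `timeout` argument and raises `TimeoutError` past the deadline:

```python
import time

def call_with_timeout_retry(call, timeout_s=5.0, max_attempts=3, backoff_s=0.1):
    """Retry `call` when it times out, waiting a little longer each attempt.

    `call` is a placeholder for any client method that accepts a `timeout`
    argument and raises TimeoutError when the deadline passes.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return call(timeout=timeout_s)
        except TimeoutError as err:
            last_error = err
            time.sleep(backoff_s * attempt)  # simple linear backoff between retries
    raise last_error

# Stub that times out twice before answering, to exercise the retry path.
attempts = {"n": 0}
def flaky_detect(timeout):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("detection exceeded %.1fs" % timeout)
    return {"faces": 2}

print(call_with_timeout_retry(flaky_detect))  # → {'faces': 2}
```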
## Next steps
In this guide, you learned how to mitigate latency when using the Face service.
## Related topics

- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/client/faceapi)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/client/faceapi)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Overview.md
The Azure Face service provides AI algorithms that detect, recognize, and analyz
The Face service provides several different facial analysis functions which are each outlined in the following sections.
+This documentation contains the following types of articles:
+* The [quickstarts](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./Face-API-How-to-Topics/HowtoDetectFacesinImage.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](./concepts/face-detection.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./Tutorials/FaceAPIinCSharpTutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
+## Face detection
+
+The Face service detects human faces in an image and returns the rectangle coordinates of their locations. Optionally, face detection can extract a series of face-related attributes, such as head pose, gender, age, emotion, facial hair, and glasses.
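The rectangle coordinates returned by face detection can be consumed as in this minimal sketch. The response fragment is trimmed and illustrative (real responses include a face ID and any requested attributes alongside each rectangle), and `rectangle_corners` is a hypothetical helper:

```python
# Trimmed, illustrative detection response; each entry carries a faceRectangle
# giving the top-left corner plus width and height in pixels.
detected_faces = [
    {"faceRectangle": {"top": 50, "left": 120, "width": 80, "height": 90}},
    {"faceRectangle": {"top": 40, "left": 300, "width": 60, "height": 70}},
]

def rectangle_corners(face):
    """Convert a faceRectangle into (x1, y1, x2, y2) pixel corners."""
    r = face["faceRectangle"]
    return (r["left"], r["top"], r["left"] + r["width"], r["top"] + r["height"])

for face in detected_faces:
    print(rectangle_corners(face))
# → (120, 50, 200, 140)
# → (300, 40, 360, 110)
```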
cognitive-services Client Libraries Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/client-libraries-rest-api.md
Title: "Quickstart: Language Understanding (LUIS) SDK client libraries and REST API" description: Create and query a LUIS app with the LUIS SDK client libraries and REST API. Previously updated : 12/09/2020 Last updated : 03/29/2021
Other errors - if you get an error not covered in the preceding list, let us kno
## Next steps
-* [What is the Language Understanding (LUIS) API?](what-is-luis.md)
-* [What's new?](whats-new.md)
-* [Intents](luis-concept-intent.md), [entities](luis-concept-entity-types.md), and [example utterances](luis-concept-utterance.md), and [prebuilt entities](luis-reference-prebuilt-entities.md)
-* The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code).
-* Understanding natural language: [natural language understanding (NLU) and natural language processing (NLP)](artificial-intelligence.md)
-* Bots: [AI chatbots](luis-csharp-tutorial-bf-v4.md "chatbot maker tutorial")
+> [!div class="nextstepaction"]
+> [Iterative app development for LUIS](./luis-concept-app-iteration.md)
cognitive-services Get Started Portal Build App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/get-started-portal-build-app.md
description: In this quickstart, you create the basic parts of an app, intents,
Previously updated : 11/30/2020 Last updated : 03/26/2021 # Quickstart: Create a new app in the LUIS portal
When you're done with this quickstart and aren't moving on to the next quickstar
## Next steps

> [!div class="nextstepaction"]
-> [2. Deploy an app](get-started-portal-deploy-app.md)
+> [Deploy an app](get-started-portal-deploy-app.md)
cognitive-services Get Started Portal Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/get-started-portal-deploy-app.md
description: This quickstart shows how to deploy an app by creating a prediction
Previously updated : 05/06/2020 Last updated : 03/29/2021 # Quickstart: Deploy an app in the LUIS portal
When you're done with this quickstart, select **My apps** from the top navigatio
## Next steps

> [!div class="nextstepaction"]
-> [Identify common intents and entities](./tutorial-machine-learned-entity.md)
+> [Iterative app development for LUIS](./luis-concept-app-iteration.md)
cognitive-services Luis Get Started Create App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-get-started-create-app.md
description: This quickstart shows how to create a LUIS app that uses the prebu
Previously updated : 10/13/2020 Last updated : 03/29/2021 #Customer intent: As a new user, I want to quickly get a LUIS app created so I can understand the model and actions to train, test, publish, and query.
In the window that appears, enter the following information:
|Name |Description |
|||
-|AName | A name for the your app. For example "home automation". |
+|Name | A name for your app. For example, "home automation". |
|Culture | The language that your app understands and speaks. |
|Description | A description for your app. |
|Prediction resource | The prediction resource that will receive queries. |
In order to receive a LUIS prediction in a chat bot or other client application,
## Next steps
-You can call the endpoint from code:
- > [!div class="nextstepaction"]
-> [Call a LUIS endpoint using code](./luis-get-started-get-intent-from-rest.md)
+> [Iterative app development for LUIS](./luis-concept-app-iteration.md)
cognitive-services Luis Get Started Get Intent From Browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-get-started-get-intent-from-browser.md
description: In this article, use an available public LUIS app to determine a us
Previously updated : 11/30/2020 Last updated : 03/26/2021 #Customer intent: As an developer familiar with how to use a browser but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
Learn more about:
* [Custom subdomains](../cognitive-services-custom-subdomains.md)

> [!div class="nextstepaction"]
-> [Create an app in the LUIS portal](get-started-portal-build-app.md)
+> [Use the client libraries or REST API](client-libraries-rest-api.md)
cognitive-services What Is Luis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/what-is-luis.md
keywords: Azure, artificial intelligence, ai, natural language processing, nlp,
Previously updated : 11/23/2020 Last updated : 03/22/2021
A client application for LUIS is any conversational application that communicate
![Conceptual image of 3 client applications working with Cognitive Services Language Understanding (LUIS)](./media/luis-overview/luis-entry-point.png "Conceptual image of 3 client applications working with Cognitive Services Language Understanding (LUIS)")
+This documentation contains the following article types:
+
+* [**Quickstarts**](luis-get-started-create-app.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](luis-how-to-start-new-app.md) contain instructions for using the service in more specific or customized ways.
+* [**Concepts**](artificial-intelligence.md) provide in-depth explanations of the service functionality and features.
+* [**Tutorials**](tutorial-intents-only.md) are longer guides that show you how to use the service as a component in broader business solutions.
+## Use LUIS in a chat bot
+
+<a name="Accessing-LUIS"></a>
Learn about LUIS with hands-on quickstarts using the [portal](get-started-portal
* [What's new](whats-new.md "What's new") with the service and documentation
* [Plan your app](luis-how-plan-your-app.md "Plan your app") with [intents](luis-concept-intent.md "intents") and [entities](luis-concept-entity-types.md "entities").
-* [Query the prediction endpoint](luis-get-started-get-intent-from-browser.md "Query the prediction endpoint").
-* [Developer resources](developer-reference-resource.md "Developer resources") for LUIS.
[bot-framework]: /bot-framework/
[flow]: /connectors/luis/
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Dutch (Netherlands) | `nl-NL` | Audio (20201015)<br>Text | Yes |
| English (Australia) | `en-AU` | Audio (20201019)<br>Text | Yes |
| English (Canada) | `en-CA` | Audio (20201019)<br>Text | Yes |
+| English (Ghana) | `en-GH` | Text | |
| English (Hong Kong) | `en-HK` | Text | |
| English (India) | `en-IN` | Audio (20200923)<br>Text | Yes |
| English (Ireland) | `en-IE` | Text | |
+| English (Kenya) | `en-KE` | Text | |
| English (New Zealand) | `en-NZ` | Audio (20201019)<br>Text | Yes |
| English (Nigeria) | `en-NG` | Text | |
| English (Philippines) | `en-PH` | Text | |
| English (Singapore) | `en-SG` | Text | |
| English (South Africa) | `en-ZA` | Text | |
+| English (Tanzania) | `en-TZ` | Text | |
| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
| English (United States) | `en-US` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
| Estonian (Estonia) | `et-EE` | Text | |
+| Filipino (Philippines) | `fil-PH`| Text | |
| Finnish (Finland) | `fi-FI` | Text | Yes |
| French (Canada) | `fr-CA` | Audio (20201015)<br>Text | Yes |
| French (France) | `fr-FR` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
+| French (Switzerland) | `fr-CH` | Text | |
+| German (Austria) | `de-AT` | Text | |
| German (Germany) | `de-DE` | Audio (20190701, 20200619, 20201127)<br>Text<br>Pronunciation| Yes |
| Greek (Greece) | `el-GR` | Text | |
| Gujarati (Indian) | `gu-IN` | Text | |
| Hindi (India) | `hi-IN` | Audio (20200701)<br>Text | Yes |
| Hungarian (Hungary) | `hu-HU` | Text | |
+| Indonesian (Indonesia) | `id-ID` | Text | |
| Irish (Ireland) | `ga-IE` | Text | |
| Italian (Italy) | `it-IT` | Audio (20201016)<br>Text<br>Pronunciation| Yes |
| Japanese (Japan) | `ja-JP` | Text | Yes |
| Korean (Korea) | `ko-KR` | Audio (20201015)<br>Text | Yes |
| Latvian (Latvia) | `lv-LV` | Text | |
| Lithuanian (Lithuania) | `lt-LT` | Text | |
+| Malay (Malaysia) | `ms-MY` | Text | |
| Maltese (Malta) | `mt-MT` | Text | |
| Marathi (India) | `mr-IN` | Text | |
| Norwegian (Bokmål, Norway) | `nb-NO` | Text | Yes |
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Telugu (India) | `te-IN` | Text | |
| Thai (Thailand) | `th-TH` | Text | Yes |
| Turkish (Turkey) | `tr-TR` | Text | |
+| Vietnamese (Vietnam) | `vi-VN` | Text | |
## Text-to-speech
cognitive-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/terminology.md
Title: Terminology - Custom Translator
+ Title: Key terms - Custom Translator
-description: List of the terms used in the Custom Translator articles.
+description: List of key terms used in Custom Translator articles.
Previously updated : 08/17/2020 Last updated : 04/02/2021 #Customer intent: As a Custom Translator user, I want to review and understand the terms in multiple articles.
-# Custom Translator Terminology
+# Custom Translator key terms
-The following table presents a list of terms that you may find as you work with the [Custom Translator](https://portal.customtranslator.azure.ai).
+The following table presents a list of key terms that you may find as you work with the [Custom Translator](https://portal.customtranslator.azure.ai).
| Word or Phrase|Definition|
||--|
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
Azure Cognitive Services containers provide the following set of Docker containe
| Service | Container | Description | Availability |
|--|--|--|--|
-| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/concept-recognizing-text.md). | Gated preview. [Request access][request-access]. |
+| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Gated preview. [Request access][request-access]. |
| [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Gated preview. [Request access][request-access]. |
| [Face][fa-containers] | **Face** | Detects human faces in images, and identifies attributes, including face landmarks (such as noses and eyes), gender, age, and other machine-predicted facial features. In addition to detection, Face can check if two faces in the same image or different images are the same by using a confidence score, or compare faces against a database to see if a similar-looking or identical face already exists. It can also organize similar faces into groups, using shared visual traits. | Unavailable |
| [Form Recognizer][fr-containers] | **Form Recognizer** | Form Understanding applies machine learning technology to identify and extract key-value pairs and tables from forms. | Unavailable |
cognitive-services Concept Identification Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-identification-cards.md
# Form Recognizer prebuilt identification card (ID) model
-Azure Form Recognizer can analyze and extract information from government identification cards (IDs) using its prebuilt IDs model. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with ID recognition capabilities to extract key information from Worldwide Passports and U.S. Driver's Licenses (all 50 states and D.C.). The IDs API extracts key information from these identity documents, such as first name, last name, date of birth, document number, and more. This API is available in the Form Recognizer v2.1 preview as a cloud service and as an on-premise container.
+Azure Form Recognizer can analyze and extract information from government identification cards (IDs) using its prebuilt IDs model. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/overview-ocr.md) capabilities with ID recognition capabilities to extract key information from Worldwide Passports and U.S. Driver's Licenses (all 50 states and D.C.). The IDs API extracts key information from these identity documents, such as first name, last name, date of birth, document number, and more. This API is available in the Form Recognizer v2.1 preview as a cloud service and as an on-premises container.
## What does the ID service do?
cognitive-services Concept Invoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-invoices.md
# Form Recognizer prebuilt invoice model
-Azure Form Recognizer can analyze and extract information from sales invoices using its prebuilt invoice models. The Invoice API enables customers to take invoices in a variety of formats and return structured data to automate the invoice processing. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts the text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, invoice amount due, tax amount, ship to, bill to, line items and more. The prebuilt Invoice API is publicly available in the Form Recognizer v2.1 preview.
+Azure Form Recognizer can analyze and extract information from sales invoices using its prebuilt invoice models. The Invoice API enables customers to take invoices in a variety of formats and return structured data to automate the invoice processing. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/overview-ocr.md) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts the text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, invoice amount due, tax amount, ship to, bill to, line items and more. The prebuilt Invoice API is publicly available in the Form Recognizer v2.1 preview.
## What does the Invoice service do?
cognitive-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-layout.md
# Form Recognizer Layout service
-Azure Form Recognizer can extract text, tables, selection marks, and structure information from documents using its Layout service. The Layout API enables customers to take documents in a variety of formats and return structured data representations of the documents. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with deep learning models to extract text, tables, selection marks, and document structure.
+Azure Form Recognizer can extract text, tables, selection marks, and structure information from documents using its Layout service. The Layout API enables customers to take documents in a variety of formats and return structured data representations of the documents. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/overview-ocr.md) capabilities with deep learning models to extract text, tables, selection marks, and document structure.
## What does the Layout service do?
cognitive-services Concept Receipts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-receipts.md
# Form Recognizer prebuilt receipt model
-Azure Form Recognizer can analyze and extract information from sales receipts using its prebuilt receipt model. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with deep learning models to extract key information from receipts written in English.
+Azure Form Recognizer can analyze and extract information from sales receipts using its prebuilt receipt model. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/overview-ocr.md) capabilities with deep learning models to extract key information from receipts written in English.
## Understanding Receipts
cognitive-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
The [Immersive Reader](https://www.onenote.com/learningtools) is an inclusively designed tool that implements proven techniques to improve reading comprehension.
-The [Computer Vision Cognitive Services Read API](../computer-vision/concept-recognizing-text.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream.
+The [Computer Vision Cognitive Services Read API](../computer-vision/overview-ocr.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream.
In this tutorial, you will build an iOS app from scratch and integrate the Read API, and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [here](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios).
cognitive-services Text Analytics How To Entity Linking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking.md
Version 3.0 only includes synchronous operation. The following JSON is an exampl
## Post the request
-Analysis is performed upon receipt of the request. See the [data limits](../overview.md#data-limits) section in the overview for information on the size and number of requests you can send per minute and second.
+Analysis is performed upon receipt of the request. See the [data limits](../overview.md#data-limits) article for information on the size and number of requests you can send per minute and second.
The Text Analytics API is stateless. No data is stored in your account, and results are returned immediately in the response.
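The request body that you post can be sketched as follows. This is a minimal illustration of the documented v3.0 `documents` payload shape; the resource name is a placeholder, not a value from this article:

```typescript
// Hypothetical placeholders -- substitute your own resource name.
const endpoint = "https://<your-resource-name>.cognitiveservices.azure.com";
const path = "/text/analytics/v3.0/entities/linking"; // v3.0 synchronous operation

interface TextDocument { id: string; language: string; text: string; }

// Build the JSON body expected by the Text Analytics v3.0 endpoints.
function buildRequestBody(texts: string[], language = "en"): { documents: TextDocument[] } {
  return {
    documents: texts.map((text, i) => ({ id: String(i + 1), language, text })),
  };
}

console.log(JSON.stringify(buildRequestBody(["I had a wonderful trip to Seattle last week."]), null, 2));
```

Because the service is stateless, each request carries its full set of documents; nothing from a previous request is retained.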
cognitive-services Text Analytics How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md
Previously updated : 03/25/2021 Last updated : 03/29/2021 keywords: on-premises, Docker, container, sentiment analysis, natural language processing
Containers enable you to run the Text Analytic APIs in your own environment and
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. > [!IMPORTANT]
-> The free account is limited to 5,000 transactions per month and only the **Free** and **Standard** <a href="https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics" target="_blank">pricing tiers </a> are valid for containers. For more information on transaction request rates, see [Data Limits](../overview.md#data-limits).
+> The free account is limited to 5,000 transactions per month and only the **Free** and **Standard** <a href="https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics" target="_blank">pricing tiers </a> are valid for containers. For more information on transaction request rates, see [Data Limits](../concepts/data-limits.md).
## Prerequisites
cognitive-services Text Analytics How To Keyword Extraction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-keyword-extraction.md
Previously updated : 12/17/2020 Last updated : 03/29/2021
For information about request definition, see [How to call the Text Analytics AP
## Step 2: Post the request
-Analysis is performed upon receipt of the request. For information about the size and number of requests you can send per minute or per second, see the [data limits](../overview.md#data-limits) section in the overview.
+Analysis is performed upon receipt of the request. For information about the size and number of requests you can send per minute or per second, see the [data limits](../concepts/data-limits.md) article.
Recall that the service is stateless. No data is stored in your account. Results are returned immediately in the response.
cognitive-services Text Analytics How To Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection.md
Previously updated : 12/17/2020 Last updated : 04/02/2021
For more information on request definition, see [Call the Text Analytics API](te
## Step 2: POST the request
-Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the [data limits](../overview.md#data-limits) section in the overview.
+Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the [data limits](../concepts/data-limits.md) article.
Recall that the service is stateless. No data is stored in your account. Results are returned immediately in the response.
cognitive-services Text Analytics How To Sentiment Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md
Previously updated : 03/09/2021 Last updated : 03/29/2021 # How to: Sentiment analysis and Opinion Mining
-The Text Analytics API's Sentiment Analysis feature provides two ways for detecting positive and negative sentiment. If you send a Sentiment Analysis request, the API will return sentiment labels (such as "negative", "neutral" and "positive") and confidence scores at the sentence and document-level. You can also send Opinion Mining requests using the Sentiment Analysis endpoint, which provides granular information about the opinions related to words (such as the attributes of products or services) in the text.
+The Text Analytics API's Sentiment Analysis feature provides two ways for detecting positive and negative sentiment. If you send a Sentiment Analysis request, the API will return sentiment labels (such as "negative", "neutral" and "positive") and confidence scores at the sentence and document-level. You can also send Opinion Mining requests using the Sentiment Analysis endpoint, which provides granular information about the opinions related to words (such as the attributes of products or services) in the text.
The AI models used by the API are provided by the service; you just have to send content for analysis.
Output is returned immediately. You can stream the results to an application tha
Sentiment Analysis v3.1 can return response objects for both Sentiment Analysis and Opinion Mining.
-Sentiment analysis returns a sentiment label and confidence score for the entire document, and each sentence within it. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. A document can have multiple sentences, and the confidence scores within each document or sentence add up to 1. assessments
+Sentiment analysis returns a sentiment label and confidence score for the entire document, and each sentence within it. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. A document can have multiple sentences, and the confidence scores within each document or sentence add up to 1.
Opinion Mining will locate targets (nouns or verbs) in the text, and their associated assessment (adjective). In the below response, the sentence *The restaurant had great food and our waiter was friendly* has two targets: *food* and *waiter*. Each target's `relations` property contains a `ref` value with the URI-reference to the associated `documents`, `sentences`, and `assessments` objects.
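The `ref` resolution described above can be sketched as follows. The response fragment is an illustrative, simplified stand-in for a v3.1 Opinion Mining response: the field names follow the documented shape, but the values are made up for this example:

```typescript
// Illustrative, simplified fragment of a v3.1 Opinion Mining response.
const response = {
  documents: [{
    id: "1",
    sentences: [{
      text: "The restaurant had great food and our waiter was friendly",
      targets: [
        { text: "food", relations: [{ relationType: "assessment", ref: "#/documents/0/sentences/0/assessments/0" }] },
        { text: "waiter", relations: [{ relationType: "assessment", ref: "#/documents/0/sentences/0/assessments/1" }] },
      ],
      assessments: [
        { text: "great", sentiment: "positive" },
        { text: "friendly", sentiment: "positive" },
      ],
    }],
  }],
};

// Follow a "#/documents/0/..." URI-reference into the response object.
function resolveRef(doc: unknown, ref: string): any {
  let node: any = doc;
  for (const part of ref.replace(/^#\//, "").split("/")) {
    node = node[part];
  }
  return node;
}

for (const target of response.documents[0].sentences[0].targets) {
  const assessment = resolveRef(response, target.relations[0].ref);
  console.log(`${target.text} -> ${assessment.text} (${assessment.sentiment})`);
}
```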
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/overview.md
Previously updated : 03/09/2021 Last updated : 03/29/2021 keywords: text mining, sentiment analysis, text analytics
The API is a part of [Azure Cognitive Services](../index.yml), a collection of m
> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Whats-New-in-Text-Analytics-Opinion-Mining-and-Async-API/player]
+This documentation contains the following types of articles:
+* [Quickstarts](./quickstarts/client-libraries-rest-api.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* [How-to guides](./how-tos/text-analytics-how-to-call-api.md) contain instructions for using the service in more specific or customized ways.
+* [Concepts](text-analytics-user-scenarios.md) provide in-depth explanations of the service's functionality and features.
+* [Tutorials](./tutorials/tutorial-power-bi-key-phrases.md) are longer guides that show you how to use this service as a component in broader business solutions.
+ ## Sentiment analysis Use [sentiment analysis](how-tos/text-analytics-how-to-sentiment-analysis.md) and find out what people think of your brand or topic by mining the text for clues about positive or negative sentiment.
cognitive-services Text Analytics Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/text-analytics-resource-faq.md
Previously updated : 01/05/2021 Last updated : 03/29/2021 # Frequently Asked Questions (FAQ) about the Text Analytics API Find answers to commonly asked questions about concepts, code, and scenarios related to the Text Analytics API in Azure Cognitive Services.
+## What is the maximum size and number of requests I can make to the API?
+
+See the [data limits](concepts/data-limits.md) article for information on the size and number of requests you can send per minute and second.
+ ## Can Text Analytics identify sarcasm? Analysis is for positive-negative sentiment rather than mood detection.
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
If users decide to quickly turn video on/off while call is in `Connecting` state
- If the user starts with audio and then starts and stops video while the call is in `Connecting` state. - If the user starts with audio and then starts and stops video while the call is in `Lobby` state. - #### Possible causes Under investigation.
+### Enumerating/accessing devices for Safari on macOS and iOS
+If access to devices is granted, the device permissions are reset after some time. Safari on macOS and iOS doesn't keep permissions for very long unless a stream is acquired. The simplest way to work around this is to call the DeviceManager.askDevicePermission() API before calling the device manager's device enumeration APIs (DeviceManager.getCameras(), DeviceManager.getSpeakers(), and DeviceManager.getMicrophones()). If the permissions are still in place, the user won't see anything; if not, the user is re-prompted.
+
+<br/>Devices affected: iPhone
+<br/>Client library: Calling (JavaScript)
+<br/>Browsers: Safari
+<br/>Operating System: iOS
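The workaround above can be sketched as follows. The interfaces are minimal stubs standing in for the Calling client library's DeviceManager (the real types come from the Calling JavaScript SDK); they are simplified here for illustration:

```typescript
// Minimal stubs for illustration -- the real DeviceManager comes from the
// Calling JavaScript client library and has richer types.
interface DeviceInfo { name: string; }
interface DeviceManager {
  askDevicePermission(constraints: { audio: boolean; video: boolean }): Promise<void>;
  getCameras(): Promise<DeviceInfo[]>;
  getMicrophones(): Promise<DeviceInfo[]>;
  getSpeakers(): Promise<DeviceInfo[]>;
}

// Re-request permission before enumerating, so Safari re-prompts
// if it has silently dropped the earlier grant.
async function enumerateDevices(deviceManager: DeviceManager) {
  await deviceManager.askDevicePermission({ audio: true, video: true });
  const [cameras, microphones, speakers] = await Promise.all([
    deviceManager.getCameras(),
    deviceManager.getMicrophones(),
    deviceManager.getSpeakers(),
  ]);
  return { cameras, microphones, speakers };
}
```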
+ ### Sometimes it takes a long time to render remote participant videos During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This issue could be caused by a network environment that requires further configuration. Refer to the [network requirements](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/network-requirements) documentation for network configuration guidance.
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
armclient POST /subscriptions/<sub_id>/resourceGroups/<resource_group>/providers
#### Using the Azure portal to link your Notification Hub
-In the portal, navigate to your Azure Communication Services resource. Inside the Communication Services resource, select Push Notifications from the left menu of the Communication Services page and connect the Notification Hub that you provisioned earlier. You'll need to provide your connection string and resourceId here:
+1. In the portal, go to your Azure Communication Services resource.
+1. Inside the Communication Services resource, select **Push Notifications** from the left menu of the Communication Services page, and connect the Notification Hub that you provisioned earlier.
+
+1. Select **Connect notification hub**. You'll see a list of notification hubs available to connect.
+
+1. Select the notification hub that you'd like to use for this resource.
+
+ - If you need to create a new hub, select **Create new notification hub** to get a new hub provisioned for this resource.
+
+ :::image type="content" source="./media/notifications/acs-anh-portal-int.png" alt-text="Screenshot showing the Push Notifications settings within the Azure portal.":::
+
+Now you'll see the notification hub that you linked with the connected state.
+
+If you'd like to use a different hub for the resource, select **Disconnect**, and then repeat the steps to link the different notification hub.
> [!NOTE]
-> If the Azure Notification Hub connection string is updated the Communication Services resource has to be updated as well.
-Any change on how the hub is linked will be reflected in data plane (i.e., when sending a notification) within a maximum period of ``10`` minutes. This is applicable also when the hub is linked for the first time **if** there were notifications sent before.
+> Any change on how the hub is linked is reflected in the data plane (that is, when sending a notification) within a maximum period of 10 minutes. This same behavior applies when the hub is linked for the first time, **if** notifications were sent before the change.
### Device registration
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
When a Communication Services user joins the Teams meeting, the display name pro
Communication Services Teams Interop is currently in private preview. When generally available, Communication Services users will be treated like "External access users". Learn more about external access in [Call, chat, and collaborate with people outside your organization in Microsoft Teams](/microsoftteams/communicate-with-users-from-other-organizations).
-Communication Services users can join scheduled Teams meetings as long as anonymous joins are enabled in the [meeting settings](/microsoftteams/meeting-settings-in-teams).
+Communication Services users can join scheduled Teams meetings as long as anonymous joins are enabled in the [meeting settings](/microsoftteams/meeting-settings-in-teams). If the meeting is scheduled for a channel, Communication Services users will not be able to join the chat or send and receive messages.
## Teams in Government Clouds (GCC) Azure Communication Services interoperability isn't compatible with Teams deployments using [Microsoft 365 government clouds (GCC)](/MicrosoftTeams/plan-for-government-gcc) at this time.
Azure Communication Services interoperability isn't compatible with Teams deploy
## Next steps > [!div class="nextstepaction"]
-> [Join your calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)
+> [Join your calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)
container-registry Container Registry Auth Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-auth-service-principal.md
Title: Authenticate with service principal description: Provide access to images in your private container registry by using an Azure Active Directory service principal. Previously updated : 10/04/2019 Last updated : 03/15/2021 # Azure Container Registry authentication with service principals
-You can use an Azure Active Directory (Azure AD) service principal to provide container image `docker push` and `pull` access to your container registry. By using a service principal, you can provide access to "headless" services and applications.
+You can use an Azure Active Directory (Azure AD) service principal to provide push, pull, or other access to your container registry. By using a service principal, you can provide access to "headless" services and applications.
## What is a service principal?
Once you have a service principal that you've granted access to your container r
* **User name** - service principal application ID (also called *client ID*) * **Password** - service principal password (also called *client secret*)
-Each value is a GUID of the form `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+Each value has the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
> [!TIP] > You can regenerate the password of a service principal by running the [az ad sp reset-credentials](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) command.
For example, use the credentials to pull an image from an Azure container regist
### Use with docker login
-You can run `docker login` using a service principal. In the following example, the service principal application ID is passed in the environment variable `$SP_APP_ID`, and the password in the variable `$SP_PASSWD`. For best practices to manage Docker credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference.
+You can run `docker login` using a service principal. In the following example, the service principal application ID is passed in the environment variable `$SP_APP_ID`, and the password in the variable `$SP_PASSWD`. For recommended practices to manage Docker credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference.
```bash # Log in to Docker with service principal credentials
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-authentication-managed-identity.md
Last updated 01/16/2019
# Use an Azure managed identity to authenticate to an Azure container registry
-Use a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) to authenticate to an Azure container registry from another Azure resource, without needing to provide or manage registry credentials. For example, set up a user-assigned or system-assigned managed identity on a Linux VM to access container images from your container registry, as easily as you use a public registry.
+Use a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) to authenticate to an Azure container registry from another Azure resource, without needing to provide or manage registry credentials. For example, set up a user-assigned or system-assigned managed identity on a Linux VM to access container images from your container registry, as easily as you use a public registry. Or, set up an Azure Kubernetes Service cluster to use its [managed identity](../aks/use-managed-identity.md) to pull container images from Azure Container Registry for pod deployments.
For this article, you learn more about managed identities and how to:
To set up a container registry and push a container image to it, you must also h
## Why use a managed identity?
-A managed identity for Azure resources provides Azure services with an automatically managed identity in Azure Active Directory (Azure AD). You can configure [certain Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md), including virtual machines, with a managed identity. Then, use the identity to access other Azure resources, without passing credentials in code or scripts.
+If you're not familiar with the managed identities for Azure resources feature, see this [overview](../active-directory/managed-identities-azure-resources/overview.md).
-Managed identities are of two types:
+After you set up selected Azure resources with a managed identity, give the identity the access you want to another resource, just like any security principal. For example, assign a managed identity a role with pull, push and pull, or other permissions to a private registry in Azure. (For a complete list of registry roles, see [Azure Container Registry roles and permissions](container-registry-roles.md).) You can give an identity access to one or more resources.
-* *User-assigned identities*, which you can assign to multiple resources and persist for as long as your want. User-assigned identities are currently in preview.
+Then, use the identity to authenticate to any [service that supports Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication), without any credentials in your code. Choose how to authenticate using the managed identity, depending on your scenario. To use the identity to access an Azure container registry from a virtual machine, you authenticate with Azure Resource Manager.
-* A *system-managed identity*, which is unique to a specific resource like a single virtual machine and lasts for the lifetime of that resource.
-
-After you set up an Azure resource with a managed identity, give the identity the access you want to another resource, just like any security principal. For example, assign a managed identity a role with pull, push and pull, or other permissions to a private registry in Azure. (For a complete list of registry roles, see [Azure Container Registry roles and permissions](container-registry-roles.md).) You can give an identity access to one or more resources.
-
-Then, use the identity to authenticate to any [service that supports Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication), without any credentials in your code. To use the identity to access an Azure container registry from a virtual machine, you authenticate with Azure Resource Manager. Choose how to authenticate using the managed identity, depending on your scenario:
-
-* [Acquire an Azure AD access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) programmatically using HTTP or REST calls
-
-* Use the [Azure SDKs](../active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md)
-
-* [Sign into Azure CLI or PowerShell](../active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md) with the identity.
+> [!NOTE]
+> Currently, services such as Azure Web App for Containers or Azure Container Instances can't use their managed identity to authenticate with Azure Container Registry when pulling a container image to deploy the container resource itself. The identity is only available after the container is running. To deploy these resources using images from Azure Container Registry, a different authentication method such as [service principal](container-registry-auth-service-principal.md) is recommended.
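From a VM, acquiring the identity's Azure Resource Manager token goes through the Azure Instance Metadata Service (IMDS). The following is a sketch of building that documented request URL; it only does anything useful when issued from inside an Azure VM that has a managed identity:

```typescript
// Works only from inside an Azure VM with a managed identity; shown as a
// sketch of the documented IMDS token request for Azure Resource Manager.
const IMDS = "http://169.254.169.254/metadata/identity/oauth2/token";

function imdsTokenUrl(resource = "https://management.azure.com/", apiVersion = "2018-02-01"): string {
  const query = new URLSearchParams({ "api-version": apiVersion, resource });
  return `${IMDS}?${query.toString()}`;
}

// On the VM you would issue: GET imdsTokenUrl() with the header "Metadata: true".
console.log(imdsTokenUrl());
```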
## Create a container registry
You should see a `Login succeeded` message. You can then run `docker` commands w
``` docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1 ```
-> [!NOTE]
-> System-assigned managed service identities can be used to interact with ACRs and App Service can use system-assigned managed service identities. However, you cannot combine these, as App Service cannot use MSI to talk to an ACR. The only way is to enable admin on the ACR and use the admin username/password.
## Next steps
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-authentication.md
Title: Registry authentication options description: Authentication options for a private Azure container registry, including signing in with an Azure Active Directory identity, using service principals, and using optional admin credentials. Previously updated : 01/30/2020 Last updated : 03/15/2021 # Authenticate with an Azure container registry
The following table lists available authentication methods and typical scenarios
| [Individual AD identity](#individual-login-with-azure-ad)  | `az acr login` in Azure CLI  | Interactive push/pull by developers, testers  | Yes  | AD token must be renewed every 3 hours  | | [AD service principal](#service-principal)  | `docker login`<br/><br/>`az acr login` in Azure CLI<br/><br/> Registry login settings in APIs or tooling<br/><br/> [Kubernetes pull secret](container-registry-auth-kubernetes.md)    | Unattended push from CI/CD pipeline<br/><br/> Unattended pull to Azure or external services  | Yes  | SP password default expiry is 1 year  | | [Integrate with AKS](../aks/cluster-container-registry-integration.md?toc=/azure/container-registry/toc.json&bc=/azure/container-registry/breadcrumb/toc.json)  | Attach registry when AKS cluster created or updated  | Unattended pull to AKS cluster  | No, pull access only  | Only available with AKS cluster  |
-| [Managed identity for Azure resources](container-registry-authentication-managed-identity.md)  | `docker login`<br/><br/> `az acr login` in Azure CLI | Unattended push from Azure CI/CD pipeline<br/><br/> Unattended pull to Azure services<br/><br/> | Yes  | Use only from Azure services that [support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources) |
+| [Managed identity for Azure resources](container-registry-authentication-managed-identity.md)  | `docker login`<br/><br/> `az acr login` in Azure CLI | Unattended push from Azure CI/CD pipeline<br/><br/> Unattended pull to Azure services<br/><br/> | Yes  | Use only from select Azure services that [support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources) |
| [Admin user](#admin-account)  | `docker login`  | Interactive push/pull by individual developer or tester<br/><br/>Portal deployment of image from registry to Azure App Service or Azure Container Instances | No, always pull and push access  | Single account per registry, not recommended for multiple users  | | [Repository-scoped access token](container-registry-repository-scoped-permissions.md)  | `docker login`<br/><br/>`az acr login` in Azure CLI | Interactive push/pull to repository by individual developer or tester<br/><br/> Unattended push/pull to repository by individual system or external device  | Yes  | Not currently integrated with AD identity  |
Output displays the access token, abbreviated here:
"loginServer": "myregistry.azurecr.io" } ```
+For registry authentication, we recommend that you store the token credential in a safe location and follow recommended practices to manage [docker login](https://docs.docker.com/engine/reference/commandline/login/) credentials. For example, store the token value in an environment variable:
+
+```bash
+TOKEN=$(az acr login --name <acrName> --expose-token --output tsv --query accessToken)
+```
Then, run `docker login`, passing `00000000-0000-0000-0000-000000000000` as the username and using the access token as password: ```console
-docker login myregistry.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password eyJhbGciOiJSUzI1NiIs[...]24V7wA
+docker login myregistry.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password $TOKEN
``` ## Service principal
For CLI scripts to create a service principal for authenticating with an Azure c
Each container registry includes an admin user account, which is disabled by default. You can enable the admin user and manage its credentials in the Azure portal, or by using the Azure CLI or other Azure tools. The admin account has full permissions to the registry.
-The admin account is currently required for some scenarios to deploy an image from a container registry to certain Azure services. For example, the admin account is needed when you deploy a container image in the portal from a registry directly to [Azure Container Instances](../container-instances/container-instances-using-azure-container-registry.md#deploy-with-azure-portal) or [Azure Web Apps for Containers](container-registry-tutorial-deploy-app.md).
+The admin account is currently required for some scenarios to deploy an image from a container registry to certain Azure services. For example, the admin account is needed when you use the Azure portal to deploy a container image from a registry directly to [Azure Container Instances](../container-instances/container-instances-using-azure-container-registry.md#deploy-with-azure-portal) or [Azure Web Apps for Containers](container-registry-tutorial-deploy-app.md).
> [!IMPORTANT] > The admin account is designed for a single user to access the registry, mainly for testing purposes. We do not recommend sharing the admin account credentials among multiple users. All users authenticating with the admin account appear as a single user with push and pull access to the registry. Changing or disabling this account disables registry access for all users who use its credentials. Individual identity is recommended for users and service principals for headless scenarios.
The admin account is provided with two passwords, both of which can be regenerat
docker login myregistry.azurecr.io ```
-For best practices to manage login credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference.
+For recommended practices to manage login credentials, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command reference.
To enable the admin user for an existing registry, you can use the `--admin-enabled` parameter of the [az acr update](/cli/azure/acr#az-acr-update) command in the Azure CLI:
container-registry Container Registry Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-powershell.md
In this quickstart you create a *Basic* registry, which is a cost-optimized opti
## Log in to registry
-Before pushing and pulling container images, you must log in to your registry. In production scenarios you should use an individual identity or service principal for container registry access, but to keep this quickstart brief, enable the admin user on your registry with the [Get-AzContainerRegistryCredential][Get-AzContainerRegistryCredential] command:
+Before pushing and pulling container images, you must log in to your registry. To keep this quickstart brief, enable the admin user on your registry with the [Get-AzContainerRegistryCredential][Get-AzContainerRegistryCredential] command. In production scenarios you should use an alternative [authentication method](container-registry-authentication.md) for registry access, such as a service principal.
```powershell $creds = Get-AzContainerRegistryCredential -Registry $registry
$creds.Password | docker login $registry.LoginServer -u $creds.Username --passwo
The command returns `Login Succeeded` once completed.
+> [!TIP]
+> The Azure CLI provides the `az acr login` command, a convenient way to log in to a container registry using your [individual identity](container-registry-authentication.md#individual-login-with-azure-ad), without passing docker credentials.
++ [!INCLUDE [container-registry-quickstart-docker-push](../../includes/container-registry-quickstart-docker-push.md)] [!INCLUDE [container-registry-quickstart-docker-pull](../../includes/container-registry-quickstart-docker-pull.md)]
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-helm-repos.md
Helm 3 should be used to host Helm charts in Azure Container Registry. With Helm
* Use `helm chart` commands in the Helm CLI to push, pull, and manage Helm charts in a registry * Use `helm install` to install charts to a Kubernetes cluster from a local repository cache. > [!NOTE]
-> As of Helm 3, [az acr helm][az-acr-helm] commands for use with the Helm 2 client are being deprecated. See the [product roadmap](https://github.com/Azure/acr/blob/master/docs/acr-roadmap.md#acr-helm-ga). If you've previously deployed Helm 2 charts, see [Migrating Helm v2 to v3](https://helm.sh/docs/topics/v2_v3_migration/).
+> As of Helm 3, [az acr helm][az-acr-helm] commands for use with the Helm 2 client are being deprecated. A minimum of 3 months' notice will be provided in advance of command removal. If you've previously deployed Helm 2 charts, see [Migrating Helm v2 to v3](https://helm.sh/docs/topics/v2_v3_migration/).
## Prerequisites
container-registry Container Registry Image Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-image-formats.md
Title: Supported content formats description: Learn about content formats supported by Azure Container Registry, including Docker-compatible container images, Helm charts, OCI images, and OCI artifacts. Previously updated : 08/30/2019 Last updated : 03/02/2021 # Content formats supported in Azure Container Registry
To learn more about OCI artifacts, see the [OCI Registry as Storage (ORAS)](http
## Helm charts
-Azure Container Registry can host repositories for [Helm charts](https://helm.sh/), a packaging format used to quickly manage and deploy applications for Kubernetes. [Helm client](https://docs.helm.sh/using_helm/#installing-helm) version 2 (2.11.0 or later) is supported.
+Azure Container Registry can host repositories for [Helm charts](https://helm.sh/), a packaging format used to quickly manage and deploy applications for Kubernetes. [Helm client](https://docs.helm.sh/using_helm/#installing-helm) version 3 is recommended. See [Push and pull Helm charts to an Azure container registry](container-registry-helm-repos.md).
## Next steps
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-import-images.md
Last updated 01/15/2021
You can easily import (copy) container images to an Azure container registry, without using Docker commands. For example, import images from a development registry to a production registry, or copy base images from a public registry.
-Azure Container Registry handles a number of common scenarios to copy images from an existing registry:
+Azure Container Registry handles a number of common scenarios to copy images and other artifacts from an existing registry:
-* Import from a public registry
+* Import images from a public registry
-* Import from another Azure container registry, in the same or a different Azure subscription or tenant
+* Import images or OCI artifacts including Helm 3 charts from another Azure container registry, in the same or a different Azure subscription or tenant
* Import from a non-Azure private container registry
container-registry Container Registry Troubleshoot Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-access.md
Title: Troubleshoot network issues with registry description: Symptoms, causes, and resolution of common problems when accessing an Azure container registry in a virtual network or behind a firewall Previously updated : 10/01/2020 Last updated : 03/30/2021 # Troubleshoot network issues with registry
-This article helps you troubleshoot problems you might encounter when accessing an Azure container registry in a virtual network or behind a firewall.
+This article helps you troubleshoot problems you might encounter when accessing an Azure container registry in a virtual network or behind a firewall or proxy server.
## Symptoms

May include one or more of the following:

* Unable to push or pull images and you receive error `dial tcp: lookup myregistry.azurecr.io`
+* Unable to push or pull images and you receive error `Client.Timeout exceeded while awaiting headers`
* Unable to push or pull images and you receive Azure CLI error `Could not connect to the registry login server`
* Unable to pull images from registry to Azure Kubernetes Service or another Azure service
-* Unable to access a registry behind an HTTPS proxy and you receive error `Error response from daemon: login attempt failed with status: 403 Forbidden`
+* Unable to access a registry behind an HTTPS proxy and you receive error `Error response from daemon: login attempt failed with status: 403 Forbidden` or `Error response from daemon: Get <registry>: proxyconnect tcp: EOF Login failed`
* Unable to configure virtual network settings and you receive error `Failed to save firewall and virtual network settings for container registry`
* Unable to access or view registry settings in Azure portal or manage registry using the Azure CLI
* Unable to add or modify virtual network settings or public access rules
Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command to get
See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions.
-If you're experiencing problems using the registry wih Azure Kubernetes Service, run the [az aks check-acr](/cli/azure/aks#az_aks_check_acr) command to validate that the registry is accessible from the AKS cluster.
+If you're experiencing problems using an Azure Kubernetes Service with an integrated registry, run the [az aks check-acr](/cli/azure/aks#az_aks_check_acr) command to validate that the AKS cluster can reach the registry.
> [!NOTE]
> Some network connectivity symptoms can also occur when there are issues with registry authentication or authorization. See [Troubleshoot registry login](container-registry-troubleshoot-login.md).
To access a registry from behind a client firewall or proxy server, configure fi
For a geo-replicated registry, configure access to the data endpoint for each regional replica.
-Behind an HTTPS proxy, ensure that both your Docker client and Docker daemon are configured for proxy behavior.
+Behind an HTTPS proxy, ensure that both your Docker client and Docker daemon are configured for proxy behavior. If you change your proxy settings for the Docker daemon, be sure to restart the daemon.
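As a minimal sketch of the client side, the Docker client reads proxy settings from `~/.docker/config.json`; the proxy address `proxy.example.com:3128` below is a placeholder:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```

The daemon's proxy is configured separately (for example, through environment variables in its systemd unit) and, as noted above, the daemon must be restarted to pick up changes.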
Registry resource logs in the ContainerRegistryLoginEvents table may help diagnose an attempted connection that is blocked.
container-registry Tasks Agent Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/tasks-agent-pools.md
This feature is available in the **Premium** container registry service tier. Fo
## Preview limitations

- Task agent pools currently support Linux nodes. Windows nodes aren't currently supported.
-- Task agent pools are available in preview in the following regions: West US 2, South Central US, East US 2, East US, Central US, USGov Arizona, USGov Texas, and USGov Virginia.
+- Task agent pools are available in preview in the following regions: West US 2, South Central US, East US 2, East US, Central US, West Europe, Canada Central, USGov Arizona, USGov Texas, and USGov Virginia.
- For each registry, the default total vCPU (core) quota is 16 for all standard agent pools and is 0 for isolated agent pools. Open a [support request][open-support-ticket] for additional allocation.
- You can't currently cancel a task run on an agent pool.
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-periodic-backup-restore.md
description: This article describes how to configure Azure Cosmos DB accounts wi
Previously updated : 10/13/2020 Last updated : 04/05/2021
Azure Cosmos DB automatically takes backups of your data at regular intervals. T
* The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database.
+## <a id="backup-storage-redundancy"></a>Backup storage redundancy
+
+By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../best-practices-availability-paired-regions.md).
+
+To ensure that your backup data stays within the same region where your Azure Cosmos DB account is provisioned, you can change the default geo-redundant backup storage and configure either locally redundant or zone-redundant storage. Storage redundancy mechanisms store multiple copies of your backups so that your data is protected from planned and unplanned events, including transient hardware failure, network or power outages, or massive natural disasters.
+
+Backup data in Azure Cosmos DB is replicated three times in the primary region. You can configure storage redundancy for periodic backup mode at the time of account creation or update it for an existing account. You can use the following three data redundancy options in periodic backup mode:
+
+* **Geo-redundant backup storage:** This option copies your data asynchronously across the paired region.
+
+* **Zone-redundant backup storage:** This option copies your data asynchronously across three Azure availability zones in the primary region.
+
+* **Locally-redundant backup storage:** This option copies your data asynchronously three times within a single physical location in the primary region.
+
+> [!NOTE]
+> Zone-redundant storage is currently available only in [specific regions](high-availability.md#availability-zone-support). Depending on the region you select, this option might not be available for new or existing accounts.
+>
+> Updating backup storage redundancy will not have any impact on backup storage pricing.
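For illustration only: assuming the account's Azure Resource Manager resource exposes a `backupStorageRedundancy` property under the periodic backup policy (with values such as `Geo`, `Zone`, or `Local` — verify against the current resource schema), selecting zone-redundant backup storage might look like this sketch:

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts",
  "properties": {
    "backupPolicy": {
      "type": "Periodic",
      "periodicModeProperties": {
        "backupIntervalInMinutes": 240,
        "backupRetentionIntervalInHours": 8,
        "backupStorageRedundancy": "Zone"
      }
    }
  }
}
```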
+ ## <a id="configure-backup-interval-retention"></a>Modify the backup interval and retention period

Azure Cosmos DB automatically takes a full backup of your data every 4 hours, and at any point in time only the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during Azure Cosmos account creation or after the account is created. The backup configuration is set at the Azure Cosmos account level and you need to configure it on each account. After you configure the backup options for an account, they're applied to all the containers within that account. Currently you can change the backup options from the Azure portal only. If you have accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
+### Modify backup options for an existing account
+ Use the following steps to change the default backup options for an existing Azure Cosmos account:

1. Sign in to the [Azure portal](https://portal.azure.com/).
Use the following steps to change the default backup options for an existing Azu
* **Copies of data retained** - By default, two backup copies of your data are offered at free of charge. There is an extra charge if you need more than two copies. See the Consumed Storage section in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for extra copies.
- :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-interval-retention.png" alt-text="Configure backup interval and retention for an existing Azure Cosmos account." border="true":::
+ * **Backup storage redundancy** - Choose the required storage redundancy option, see the [Backup storage redundancy](#backup-storage-redundancy) section for available options. By default, your existing periodic backup mode accounts have geo-redundant storage. You can choose other storage such as locally redundant to ensure the backup is not replicated to another region. The changes made to an existing account are applied to only future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect and **you will lose access to restore the older backups immediately.**
+
+ > [!NOTE]
+ > You must have the Azure [Cosmos DB Account Reader Role](../role-based-access-control/built-in-roles.md#cosmos-db-account-reader-role) role assigned at the subscription level to configure backup storage redundancy.
-If you configure backup options during the account creation, you can configure the **Backup policy**, which is either **Periodic** or **Continuous**. The periodic policy allows you to configure the Backup interval and Backup retention. The continuous policy is currently available by sign-up only. The Azure Cosmos DB team will assess your workload and approve your request.
+ :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Configure backup interval, retention, and storage redundancy for an existing Azure Cosmos account." border="true":::
+### Modify backup options for a new account
+
+When provisioning a new account, from the **Backup Policy** tab, select the **Periodic** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose the **locally redundant backup storage** or **zone-redundant backup storage** options to prevent backup data replication outside your region.
+ ## <a id="request-restore"></a>Request data restore from a backup
If you provision throughput at the database level, the backup and restore proces
Principals who are part of the role [CosmosBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner, or contributor are allowed to request a restore or change the retention period.

## Understanding costs of extra backups
-Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/en-us/pricing/details/cosmos-db/). For example if Backup Retention is configured to 240 hrs that is, 10 days and Backup Interval to 24 hrs. This implies 10 copies of the backup data. Assuming 1 TB of data in West US 2, the cost would be will be 0.12 * 1000 * 8 for backup storage in given month.
-
+Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). For example, if the backup retention is configured to 240 hours (that is, 10 days) and the backup interval to 24 hours, this implies 10 copies of the backup data. Assuming 1 TB of data in West US 2, the cost would be 0.12 * 1000 * 8 for backup storage in a given month.
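The arithmetic in the example above can be sketched as a small helper. The $0.12/GB rate, the two free copies, and the 1 TB figure are taken from that example; actual rates vary by region:

```python
def periodic_backup_cost(retention_hours, interval_hours, data_gb,
                         rate_per_gb=0.12, free_copies=2):
    """Approximate monthly backup storage cost in USD.

    Copies kept = retention / interval; the first two copies are free.
    """
    copies = retention_hours // interval_hours          # 240 / 24 -> 10 copies
    billable_copies = max(copies - free_copies, 0)      # 10 - 2 -> 8 billable
    return rate_per_gb * data_gb * billable_copies

# 1 TB (1000 GB) in West US 2, 10-day retention, daily backups
print(periodic_backup_cost(240, 24, 1000))  # -> 960.0
```

With the default configuration (4-hour interval, two copies retained), the helper returns 0, matching the "two backups are free" statement.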
## Options to manage your own backups
cosmos-db Continuous Backup Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-portal.md
description: Learn how to identify the restore point and configure continuous ba
Previously updated : 02/01/2021 Last updated : 04/05/2021
When creating a new Azure Cosmos DB account, for the **Backup policy** option, c
:::image type="content" source="./media/continuous-backup-restore-portal/configure-continuous-backup-portal.png" alt-text="Provision an Azure Cosmos DB account with continuous backup configuration." border="true":::
+## Backup storage redundancy
+
+By default, Azure Cosmos DB stores continuous mode backup data in locally redundant storage blobs. For the regions that have zone redundancy configured, the backup is stored in zone-redundant storage blobs. In this mode you can't update the backup storage redundancy.
+ ## <a id="restore-live-account"></a>Restore a live account from accidental modification

You can use Azure portal to restore a live account or selected databases and containers under it. Use the following steps to restore your data:
You can use Azure portal to restore a live account or selected databases and con
* **Restore Point (UTC)** - A timestamp within the last 30 days. The account should exist at that timestamp. You can specify the restore point in UTC. It can be as close to the second when you want to restore it. Select the **Click here** link to get help on [identifying the restore point](#event-feed).
- * **Location** ΓÇô The destination region where the account is restored. The account should exist in this region at the given timestamp (eg. West US or East US). An account can be restored only to the regions in which the source account existed.
+ * **Location** - The destination region where the account is restored. The account should exist in this region at the given timestamp (for example, West US or East US). An account can be restored only to the regions in which the source account existed.
* **Restore Resource** - You can either choose **Entire account** or a **selected database/container** to restore. The databases and containers should exist at the given timestamp. Based on the restore point and location selected, restore resources are populated, which allows the user to select specific databases or containers that need to be restored.
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/data-residency.md
+
+ Title: How to meet data residency requirements in Azure Cosmos DB
+description: Learn how to meet data residency requirements in Azure Cosmos DB for your data and backups to remain in a single region.
+++ Last updated : 04/05/2021+++++
+# How to meet data residency requirements in Azure Cosmos DB
+
+In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the [residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/).
+
+## Residency requirements for data
+
+In Azure Cosmos DB, you must explicitly configure cross-region data replication. Learn how to configure geo-replication using the [Azure portal](how-to-manage-database-account.md#addremove-regions-from-your-database-account) or the [Azure CLI](scripts/cli/common/regions.md). To meet data residency requirements, you can create an Azure Policy that allows only certain regions, preventing data replication to unwanted regions.
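As a hedged sketch, a policy of this kind typically denies any account with a replica location outside an allowed list. The alias and parameter name below follow the pattern of the built-in "Azure Cosmos DB allowed locations" policy and should be verified against the current policy definitions:

```json
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.DocumentDB/databaseAccounts"
      },
      {
        "not": {
          "field": "Microsoft.DocumentDB/databaseAccounts/Locations[*].locationName",
          "in": "[parameters('listOfAllowedLocations')]"
        }
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
```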
+
+## Residency requirements for backups
+
+**Continuous mode backups**: These backups are resident by default as they are stored in either locally redundant or zone-redundant storage. To learn more, see the [continuous backup](continuous-backup-restore-portal.md) article.
+
+**Periodic mode backups**: For periodic backup mode, you can configure data redundancy at the account level. There are three redundancy options for the backup storage: local redundancy, zone redundancy, or geo redundancy. To learn more, see how to [configure backup redundancy](configure-periodic-backup-restore.md#configure-backup-interval-retention) using the Azure portal.
+
+## Use Azure Policy to enforce the residency requirements
+
+If you have data residency requirements that require you to keep all your data in a single Azure region, you can enforce zone-redundant or locally redundant backups for your account by using an Azure Policy. You can also enforce a policy that the Azure Cosmos DB accounts are not geo-replicated to other regions.
+
+Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep these resources compliant with your corporate standards and service level agreements. For more information, see how to use [Azure Policy](policy.md) to implement governance and controls for Azure Cosmos DB resources.
+
+## Next steps
+
+* Configure and manage periodic backup using [Azure portal](configure-periodic-backup-restore.md)
+* Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
cosmos-db Global Dist Under The Hood https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/global-dist-under-the-hood.md
The service allows you to configure your Cosmos databases with either a single w
## Conflict resolution
-Our design for the update propagation, conflict resolution, and causality tracking is inspired from the prior work on [epidemic algorithms](https://www.cs.utexas.edu/~lorenzo/corsi/cs395t/04S/notes/naor98load.pdf) and the [Bayou](https://zoo.cs.yale.edu/classes/cs422/2013/bib/terry95managing.pdf) system. While the kernels of the ideas have survived and provide a convenient frame of reference for communicating the Cosmos DBΓÇÖs system design, they have also undergone significant transformation as we applied them to the Cosmos DB system. This was needed, because the previous systems were designed neither with the resource governance nor with the scale at which Cosmos DB needs to operate, nor to provide the capabilities (for example, bounded staleness consistency) and the stringent and comprehensive SLAs that Cosmos DB delivers to its customers.
+Our design for the update propagation, conflict resolution, and causality tracking is inspired from the prior work on [epidemic algorithms](https://www.cs.utexas.edu/~lorenzo/corsi/cs395t/04S/notes/naor98load.pdf) and the [Bayou](https://people.cs.umass.edu/~mcorner/courses/691M/papers/terry.pdf) system. While the kernels of the ideas have survived and provide a convenient frame of reference for communicating Cosmos DB's system design, they have also undergone significant transformation as we applied them to the Cosmos DB system. This was needed, because the previous systems were designed neither with the resource governance nor with the scale at which Cosmos DB needs to operate, nor to provide the capabilities (for example, bounded staleness consistency) and the stringent and comprehensive SLAs that Cosmos DB delivers to its customers.
Recall that a partition-set is distributed across multiple regions and follows Cosmos DB's (multi-region writes) replication protocol to replicate the data among the physical partitions comprising a given partition-set. Each physical partition (of a partition-set) accepts writes and serves reads typically to the clients that are local to that region. Writes accepted by a physical partition within a region are durably committed and made highly available within the physical partition before they are acknowledged to the client. These are tentative writes and are propagated to other physical partitions within the partition-set using an anti-entropy channel. Clients can request either tentative or committed writes by passing a request header. The anti-entropy propagation (including the frequency of propagation) is dynamic, based on the topology of the partition-set, regional proximity of the physical partitions, and the consistency level configured. Within a partition-set, Cosmos DB follows a primary commit scheme with a dynamically selected arbiter partition. The arbiter selection is dynamic and is an integral part of the reconfiguration of the partition-set based on the topology of the overlay. The committed writes (including multi-row/batched updates) are guaranteed to be ordered.
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-cmk.md
description: Learn how to configure customer-managed keys for your Azure Cosmos
Previously updated : 02/19/2021 Last updated : 04/01/2021 # Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-> [!NOTE]
-> Using customer-managed keys with the Azure Cosmos DB [analytical store](analytical-store-introduction.md) currently requires additional configuration on your account. Please contact [azurecosmosdbcmk@service.microsoft.com](mailto:azurecosmosdbcmk@service.microsoft.com) for details.
- Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft (**service-managed keys**). Optionally, you can choose to add a second layer of encryption with keys you manage (**customer-managed keys**). :::image type="content" source="./media/how-to-setup-cmk/cmk-intro.png" alt-text="Layers of encryption around customer data":::
If you're using an existing Azure Key Vault instance, you can verify that these
- [How to use soft-delete with PowerShell](../key-vault/general/key-vault-recovery.md)
- [How to use soft-delete with Azure CLI](../key-vault/general/key-vault-recovery.md)
-## Add an access policy to your Azure Key Vault instance
+## <a id="add-access-policy"></a> Add an access policy to your Azure Key Vault instance
1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select **Access Policies** from the left menu:
If you're using an existing Azure Key Vault instance, you can verify that these
:::image type="content" source="./media/how-to-setup-cmk/portal-akv-add-ap-perm2.png" alt-text="Selecting the right permissions":::
-1. Under **Select principal**, select **None selected**. Then, search for **Azure Cosmos DB** principal and select it (to make it easier to find, you can also search by principal ID: `a232010e-820c-4083-83bb-3ace5fc29d0b` for any Azure region except Azure Government regions where the principal ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`). Finally, choose **Select** at the bottom. If the **Azure Cosmos DB** principal isn't in the list, you might need to re-register the **Microsoft.DocumentDB** resource provider as described in the [Register the resource provider](#register-resource-provider) section of this article.
+1. Under **Select principal**, select **None selected**.
+
+1. Search for **Azure Cosmos DB** principal and select it (to make it easier to find, you can also search by principal ID: `a232010e-820c-4083-83bb-3ace5fc29d0b` for any Azure region except Azure Government regions where the principal ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`). If the **Azure Cosmos DB** principal isn't in the list, you might need to re-register the **Microsoft.DocumentDB** resource provider as described in the [Register the resource provider](#register-resource-provider) section of this article.
+
+ > [!NOTE]
+ > This registers the Azure Cosmos DB first-party-identity in your Azure Key Vault access policy. To replace this first-party identity by your Azure Cosmos DB account managed identity, see [Using a managed identity in the Azure Key Vault access policy](#using-managed-identity).
+
+1. Choose **Select** at the bottom.
:::image type="content" source="./media/how-to-setup-cmk/portal-akv-add-ap.png" alt-text="Select the Azure Cosmos DB principal":::
az cosmosdb show \
--query keyVaultKeyUri ```
+## <a id="using-managed-identity"></a> Using a managed identity in the Azure Key Vault access policy
+
+This access policy ensures that your encryption keys can be accessed by your Azure Cosmos DB account. This is done by granting access to a specific Azure Active Directory (AD) identity. Two types of identities are supported:
+
+- Azure Cosmos DB's first-party identity can be used to grant access to the Azure Cosmos DB service.
+- Your Azure Cosmos DB account's [managed identity](how-to-setup-managed-identity.md) can be used to grant access to your account specifically.
+
+Because a system-assigned managed identity can only be retrieved after the creation of your account, you still need to initially create your account using the first-party identity, as described [above](#add-access-policy). Then:
+
+1. If this wasn't done during account creation, [enable a system-assigned managed identity](how-to-setup-managed-identity.md) on your account and copy the `principalId` that got assigned.
+
+1. Add a new access policy to your Azure Key Vault account, just as described [above](#add-access-policy), but using the `principalId` you copied at the previous step instead of Azure Cosmos DB's first-party identity.
+
+1. Update your Azure Cosmos DB account to specify that you want to use the system-assigned managed identity when accessing your encryption keys in Azure Key Vault. You can do this by specifying this property in your account's Azure Resource Manager template:
+
+ ```json
+ {
+    "type": "Microsoft.DocumentDB/databaseAccounts",
+ "properties": {
+ "defaultIdentity": "SystemAssignedIdentity",
+ // ...
+ },
+ // ...
+ }
+ ```
+
+1. Optionally, you can then remove the Azure Cosmos DB first-party identity from your Azure Key Vault access policy.
+ ## Key rotation Rotating the customer-managed key used by your Azure Cosmos account can be done in two ways.
This feature is currently available only for new accounts.
### Is it possible to use customer-managed keys in conjunction with the Azure Cosmos DB [analytical store](analytical-store-introduction.md)?
-Yes, but this currently requires additional configuration on your account. Please contact [azurecosmosdbcmk@service.microsoft.com](mailto:azurecosmosdbcmk@service.microsoft.com) for details.
+Yes, but you must [use your Azure Cosmos DB account's managed identity](#using-managed-identity) in your Azure Key Vault access policy before enabling the analytical store.
### Is there a plan to support finer granularity than account-level keys?
The only operation possible when the encryption key has been revoked is account
## Next steps - Learn more about [data encryption in Azure Cosmos DB](./database-encryption-at-rest.md).-- Get an overview of [secure access to data in Cosmos DB](secure-access-to-data.md).
+- Get an overview of [secure access to data in Cosmos DB](secure-access-to-data.md).
cosmos-db How To Setup Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-managed-identity.md
+
+ Title: Configure managed identities with Azure AD for your Azure Cosmos DB account
+description: Learn how to configure managed identities with Azure Active Directory for your Azure Cosmos DB account
+++ Last updated : 04/02/2021+++
+# Configure managed identities with Azure Active Directory for your Azure Cosmos DB account
+
+Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. This article shows how to create a managed identity for Azure Cosmos DB accounts.
+
+> [!NOTE]
+> Only system-assigned managed identities are currently supported by Azure Cosmos DB.
+
+## Prerequisites
+
+- If you're unfamiliar with managed identities for Azure resources, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md). To learn about managed identity types, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+- To set up managed identities, your account needs to have the [DocumentDB Account Contributor role](../role-based-access-control/built-in-roles.md#documentdb-account-contributor).
+
+## Add a system-assigned identity
+
+### Using an Azure Resource Manager (ARM) template
+
+> [!IMPORTANT]
+> Make sure to use an `apiVersion` of `2021-03-15` or higher when working with managed identities.
+
+To enable a system-assigned identity on a new or existing Azure Cosmos DB account, include the following property in the resource definition:
+
+```json
+"identity": {
+ "type": "SystemAssigned"
+}
+```
+
+The `resources` section of your ARM template should then look like the following:
+
+```json
+"resources": [
+ {
+    "type": "Microsoft.DocumentDB/databaseAccounts",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ // ...
+ },
+ // ...
+ ]
+```
+
+After your Azure Cosmos DB account has been created or updated, it will show the following property:
+
+```json
+"identity": {
+ "type": "SystemAssigned",
+ "tenantId": "<azure-ad-tenant-id>",
+ "principalId": "<azure-ad-principal-id>"
+}
+```
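The `principalId` in this response is the value you grant access to elsewhere (for example, in a Key Vault access policy). A minimal sketch of extracting it with Python's `json` module, using placeholder GUIDs:

```python
import json

# Placeholder response shaped like the `identity` property shown above.
account = json.loads("""
{
  "identity": {
    "type": "SystemAssigned",
    "tenantId": "00000000-0000-0000-0000-000000000000",
    "principalId": "11111111-1111-1111-1111-111111111111"
  }
}
""")

principal_id = account["identity"]["principalId"]
print(principal_id)  # -> 11111111-1111-1111-1111-111111111111
```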
+
+## Next steps
+
+- Learn more about [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+- Learn more about [customer-managed keys on Azure Cosmos DB](how-to-setup-cmk.md)
cosmos-db How To Use Change Feed Estimator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-use-change-feed-estimator.md
Previously updated : 08/15/2019 Last updated : 04/01/2021
This article describes how you can monitor the progress of your [change feed pro
## Why is monitoring progress important?
-The change feed processor acts as a pointer that moves forward across your [change feed](./change-feed.md) and delivers the changes to a delegate implementation.
+The change feed processor acts as a pointer that moves forward across your [change feed](./change-feed.md) and delivers the changes to a delegate implementation.
Your change feed processor deployment can process changes at a particular rate based on its available resources like CPU, memory, network, and so on.
Identifying this scenario helps understand if we need to scale our change feed p
## Implement the change feed estimator
-Like the [change feed processor](./change-feed-processor.md), the change feed estimator works as a push model. The estimator will measure the difference between the last processed item (defined by the state of the leases container) and the latest change in the container, and push this value to a delegate. The interval at which the measurement is taken can also be customized with a default value of 5 seconds.
+### As a push model for automatic notifications
+
+Like the [change feed processor](./change-feed-processor.md), the change feed estimator can work as a push model. The estimator will measure the difference between the last processed item (defined by the state of the leases container) and the latest change in the container, and push this value to a delegate. The interval at which the measurement is taken can also be customized with a default value of 5 seconds.
As an example, if your change feed processor is defined like this:
An example of a delegate that receives the estimation is:
You can send this estimation to your monitoring solution and use it to understand how your progress is behaving over time.
+### As an on-demand detailed estimation
+
+In contrast with the push model, there's an alternative that lets you obtain the estimation on demand. This model also provides more detailed information:
+
+* The estimated lag per lease.
+* The instance owning and processing each lease, so you can identify if there's an issue on an instance.
+
+If your change feed processor is defined like this:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartProcessorEstimatorDetailed)]
+
+You can create the estimator with the same lease configuration:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartEstimatorDetailed)]
+
+And whenever you need it, at the frequency you require, you can obtain the detailed estimation:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=GetIteratorEstimatorDetailed)]
+
+Each `ChangeFeedProcessorState` will contain the lease and lag information, as well as which instance currently owns it.
+ > [!NOTE]
-> The change feed estimator does not need to be deployed as part of your change feed processor, nor be part of the same project. It can be independent and run in a completely different instance. It just needs to use the same name and lease configuration.
+> The change feed estimator does not need to be deployed as part of your change feed processor, nor be part of the same project. It can be independent and run in a completely different instance, which is recommended. It just needs to use the same name and lease configuration.
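Conceptually, both models reduce to the same measurement: the gap between the newest change in the monitored container and the last position recorded by each lease. A minimal Python sketch of that idea (an illustration only; the real estimator is part of the .NET SDK, and names like `estimate_lag` are hypothetical):

```python
# Conceptual sketch of change feed lag estimation (not the .NET SDK API).
# Each lease records the last position its owner has processed; the lag is
# how far that position trails the newest change in the container.

def estimate_lag(latest_change_position: int, lease_positions: dict) -> dict:
    """Return the estimated lag per lease, keyed by lease id."""
    return {
        lease_id: max(0, latest_change_position - processed)
        for lease_id, processed in lease_positions.items()
    }

def push_estimation(latest: int, lease_positions: dict, delegate) -> None:
    """Push model: hand the aggregate lag to a delegate callback."""
    delegate(sum(estimate_lag(latest, lease_positions).values()))

leases = {"lease-0": 95, "lease-1": 100}
per_lease = estimate_lag(100, leases)   # detailed, on-demand view per lease
push_estimation(100, leases, print)     # push model: prints 5
```

In the push model the SDK performs this measurement on a timer (5 seconds by default) and invokes your delegate; the detailed model instead returns the per-lease view on demand.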
## Additional resources
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-java-application.md
This Java application tutorial shows you how to create a web-based task-manageme
:::image type="content" source="./media/sql-api-java-application/image1.png" alt-text="My ToDo List Java application"::: > [!TIP]
-> This application development tutorial assumes that you have prior experience using Java. If you are new to Java or the [prerequisite tools](#Prerequisites), we recommend downloading the complete [todo]https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app) project from GitHub and building it using [the instructions at the end of this article](#GetProject). Once you have it built, you can review the article to gain insight on the code in the context of the project.
->
+> This application development tutorial assumes that you have prior experience using Java. If you are new to Java or the [prerequisite tools](#Prerequisites), we recommend downloading the complete [todo](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app) project from GitHub and building it using [the instructions at the end of this article](#GetProject). Once you have it built, you can review the article to gain insight on the code in the context of the project.
## <a id="Prerequisites"></a>Prerequisites for this Java web application tutorial
All the samples in this tutorial are included in the [todo](https://github.com/A
## Next steps > [!div class="nextstepaction"]
-> [Build a node.js application with Azure Cosmos DB](sql-api-nodejs-application.md)
+> [Build a node.js application with Azure Cosmos DB](sql-api-nodejs-application.md)
cost-management-billing Cost Mgt Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/cost-mgt-best-practices.md
Title: Optimize your cloud investment with Azure Cost Management
description: This article helps get the most value out of your cloud investments, reduce your costs, and evaluate where your money is being spent. Previously updated : 05/27/2020 Last updated : 04/02/2021
To learn more about the various options, visit [How to buy Azure](https://azure.
#### [Free](https://azure.microsoft.com/free/) - 12 months of popular free services-- 200 USD credit in your billing currency to explore services for 30 days
+- USD200 credit in your billing currency to explore services for 30 days
- 25+ services are always free #### [Pay as you go](https://azure.microsoft.com/offers/ms-azr-0003p)
cost-management-billing Avoid Charges Free Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/avoid-charges-free-account.md
# Avoid charges with your Azure free account
-Eligible new users get 200 USD Azure credit in your billing currency for the first 30 days and a limited quantity of free services for 12 months with your [Azure free account](https://azure.microsoft.com/free/). To learn about limits of free services, see the [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq/). As long as you have unexpired credit or you use only free services within the limits, you're not charged.
+Eligible new users get USD200 Azure credit in your billing currency for the first 30 days and a limited quantity of free services for 12 months with your [Azure free account](https://azure.microsoft.com/free/). To learn about limits of free services, see the [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq/). As long as you have unexpired credit or you use only free services within the limits, you're not charged.
Let's look at some of the reasons you can incur charges on your Azure free account.
cost-management-billing Create Free Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/create-free-services.md
# Create services included with Azure free account
-During the first 30 days after you've created an Azure free account, you have 200 USD credit in your billing currency to use on any service, except for third-party Marketplace purchases. You can experiment with different tiers and types of Azure services using the free credit to try out Azure. If you use services or Azure resources that aren't free during that time, charges are deducted against your credit.
+During the first 30 days after you've created an Azure free account, you have USD200 credit in your billing currency to use on any service, except for third-party Marketplace purchases. You can experiment with different tiers and types of Azure services using the free credit to try out Azure. If you use services or Azure resources that aren't free during that time, charges are deducted against your credit.
If you don't use all of your credit by the end of the first 30 days, it's lost. After the first 30 days and up to 12 months after sign-up, you can only use a limited quantity of *some services*; not all Azure services are free. If you upgrade before 30 days and have remaining credit, you can use the rest of your credit with a pay-as-you-go subscription for the remaining days. For example, if you sign up for the free account on November 1 and upgrade on November 5, you have until November 30 to use your credit in the new pay-as-you-go subscription.
cost-management-billing Ea Portal Vm Reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-vm-reservations.md
In scenarios where Azure EA customers have used all their Azure Prepayment, rese
### Reserved instance expiration
-You'll receive email notifications 30 days before reservation and at expiration. Once the reservation expires, deployed VMs will continue to run and be billed at a pay-as-you-go rate. For more information, see [Reserved Virtual Machine Instances offering.](https://azure.microsoft.com/pricing/reserved-vm-instances/)
+You'll receive two email notifications: the first 30 days before the reservation expires, and another at expiration. Once the reservation expires, deployed VMs will continue to run and be billed at a pay-as-you-go rate. For more information, see [Reserved Virtual Machine Instances offering](https://azure.microsoft.com/pricing/reserved-vm-instances/).
## Next steps - For more information about Azure reservations, see [What are Azure Reservations?](../reservations/save-compute-costs-reservations.md). - To learn more about enterprise reservation costs and usage, see [Get Enterprise Agreement reservation costs and usage](../reservations/understand-reserved-instance-usage-ea.md).-- For information about pricing, see [Linux Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) or [Windows Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/windows/).
+- For information about pricing, see [Linux Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) or [Windows Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/windows/).
cost-management-billing Mca Check Azure Credits Balance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mca-check-azure-credits-balance.md
The API response returns all transactions that affected the credit balance for y
In a billing account for a Microsoft customer agreement, you use billing profiles to manage your invoices and payment methods. A monthly invoice is generated for each billing profile and you use the payment methods to pay the invoice.
-you assign credits that you acquire to a billing profile. When an invoice is generated for the billing profile, credits are automatically applied to the total charges to calculate the amount that you need to pay. You pay the remaining amount with your payment methods like check/ wire transfer or credit card.
+You assign credits that you acquire to a billing profile. When an invoice is generated for the billing profile, credits are automatically applied to the total charges to calculate the amount that you need to pay. You pay the remaining amount with your payment methods, such as check/wire transfer or credit card.
## Products that aren't covered by Azure credits
If you need help, [contact support](https://portal.azure.com/?#blade/Microsoft_A
## Next steps - [Understand billing account for Microsoft Customer Agreement](../understand/mca-overview.md)-- [Understand terms on your Microsoft Customer Agreement invoice](../understand/mca-understand-your-invoice.md)
+- [Understand terms on your Microsoft Customer Agreement invoice](../understand/mca-understand-your-invoice.md)
cost-management-billing Subscription Disabled https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/subscription-disabled.md
Your Azure subscription can get disabled because your credit has expired, you re
## Your credit is expired
-When you sign up for an Azure free account, you get a Free Trial subscription, which provides you 200 USD Azure credit in your billing currency for 30 days and 12 months of free services. At the end of 30 days, Azure disables your subscription. Your subscription is disabled to protect you from accidentally incurring charges for usage beyond the credit and free services included with your subscription. To continue using Azure services, you must [upgrade your subscription](upgrade-azure-subscription.md). After you upgrade, your subscription still has access to free services for 12 months. You only get charged for usage beyond the free service quantity limits.
+When you sign up for an Azure free account, you get a Free Trial subscription, which provides you USD200 Azure credit in your billing currency for 30 days and 12 months of free services. At the end of 30 days, Azure disables your subscription. Your subscription is disabled to protect you from accidentally incurring charges for usage beyond the credit and free services included with your subscription. To continue using Azure services, you must [upgrade your subscription](upgrade-azure-subscription.md). After you upgrade, your subscription still has access to free services for 12 months. You only get charged for usage beyond the free service quantity limits.
## You reached your spending limit
cost-management-billing Upgrade Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/upgrade-azure-subscription.md
You can upgrade your [Azure free account](https://azure.microsoft.com/free/) to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) in the Azure portal.
-If you have an [Azure for Students Starter account](https://azure.microsoft.com/offers/ms-azr-0144p/) and are eligible for an [Azure free account](https://azure.microsoft.com/free/), you can upgrade to it to a [Azure free account](https://azure.microsoft.com/free/). You'll get 200 USD Azure credit in your billing currency and 12 months of free services on upgrade. If you don't qualify for a free account, you can upgrade to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) with a [support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+If you have an [Azure for Students Starter account](https://azure.microsoft.com/offers/ms-azr-0144p/) and are eligible for an [Azure free account](https://azure.microsoft.com/free/), you can upgrade it to an [Azure free account](https://azure.microsoft.com/free/). You'll get USD200 Azure credit in your billing currency and 12 months of free services on upgrade. If you don't qualify for a free account, you can upgrade to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) with a [support request](https://go.microsoft.com/fwlink/?linkid=2083458).
If you have an [Azure for Students](https://azure.microsoft.com/offers/ms-azr-0170p/) account, you can upgrade to [pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) with a [support request](https://go.microsoft.com/fwlink/?linkid=2083458).
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-github.md
Title: Connect to GitHub description: Use GitHub to specify your Common Data Model entity references-+ Last updated 06/03/2020-+
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse-open-hub.md
Previously updated : 02/02/2020 Last updated : 04/02/2021 # Copy data from SAP Business Warehouse via Open Hub using Azure Data Factory
To copy data from SAP BW Open Hub, the following properties are supported in the
| type | The **type** property of the copy activity source must be set to **SapOpenHubSource**. | Yes |
| excludeLastRequest | Whether to exclude the records of the last request. | No (default is **true**) |
| baseRequestId | The ID of request for delta loading. Once it is set, only data with requestId **larger than** the value of this property will be retrieved. | No |
+| customRfcReadTableFunctionModule | A custom RFC function module that can be used to read data from an SAP table. <br/> You can use a custom RFC function module to define how the data is retrieved from your SAP system and returned to Data Factory. The custom function module must have an interface implemented (import, export, tables) that's similar to `/SAPDS/RFC_READ_TABLE2`, which is the default interface used by Data Factory. | No |
>[!TIP] >If your Open Hub table only contains the data generated by single request ID, for example, you always do full load and overwrite the existing data in the table, or you only run the DTP once for test, remember to uncheck the "excludeLastRequest" option in order to copy the data out.
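For reference, the source properties listed above might appear together in a copy activity source definition like the following sketch (the function module name is a placeholder for your own custom RFC module):

```json
"source": {
    "type": "SapOpenHubSource",
    "excludeLastRequest": true,
    "baseRequestId": 231,
    "customRfcReadTableFunctionModule": "Z_CUSTOM_RFC_READ_TABLE"
}
```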
data-factory Enable Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/enable-customer-managed-key.md
To change key used for Data Factory encryption, you have to manually update the
By design, once the customer-managed key feature is enabled, you can't remove the extra security step. We will always expect a customer provided key to encrypt factory and data.
+## Customer managed key and continuous integration and continuous deployment
+
+By default, CMK configuration is not included in the factory Azure Resource Manager (ARM) template. To include the customer managed key encryption settings in ARM template for continuous integration (CI/CD):
+
+1. Ensure the factory is in Git mode.
+1. Navigate to the customer managed key section in the management portal.
+1. Check the _Include in ARM template_ option.
+
+ :::image type="content" source="media/enable-customer-managed-key/07-include-in-template.png" alt-text="Screenshot of including customer managed key setting in ARM template.":::
+
+The following settings will be added in the ARM template. These properties can be parameterized in continuous integration and delivery pipelines by editing the [Azure Resource Manager parameters configuration](continuous-integration-deployment.md#use-custom-parameters-with-the-resource-manager-template).
+
+ :::image type="content" source="media/enable-customer-managed-key/08-template-with-customer-managed-key.png" alt-text="Screenshot of including customer managed key setting in Azure Resource Manager template.":::
+
+> [!NOTE]
+> Adding the encryption setting to the ARM templates adds a factory-level setting that will override other factory-level settings, such as Git configurations, in other environments. If you have these settings enabled in an elevated environment such as UAT or PROD, see [Global Parameters in CI/CD](author-global-parameters.md#cicd).
+ ## Next steps
-Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios.
+Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios.
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delta.md
Title: Delta format in Azure Data Factory description: Transform and move data from a delta lake using the delta format-+ Last updated 03/26/2020-+ # Delta format in Azure Data Factory
The below table lists the properties supported by a delta sink. You can edit the
| Vacuum | Specify retention threshold in hours for older versions of table. A value of 0 or less defaults to 30 days | yes | Integer | vacuum | | Update method | Specify which update operations are allowed on the delta lake. For methods that aren't insert, a preceding alter row transformation is required to mark rows. | yes | `true` or `false` | deletable <br> insertable <br> updateable <br> merge | | Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true |
-| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
+| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
### Delta sink script example
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
This section shows you how to create a storage event trigger within the Azure Da
In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively. > [!NOTE]
- > If you are creating your pipeline and trigger in [Azure Synapse Analytics](/synapse-analytics), you must use `@trigger().outputs.body.fileName` and `@trigger().outputs.body.folderPath` as parameters. Those two properties capture blob information. Use those properties instead of using `@triggerBody().fileName` and `@triggerBody().folderPath`.
+ > If you are creating your pipeline and trigger in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md), you must use `@trigger().outputs.body.fileName` and `@trigger().outputs.body.folderPath` as parameters. Those two properties capture blob information. Use those properties instead of using `@triggerBody().fileName` and `@triggerBody().folderPath`.
1. Click **Finish** once you are done.
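Tying the mapping described above together, the trigger's pipeline parameters might be wired up like this (a sketch of the pattern only; the surrounding trigger JSON is elided and property placement may differ from a verbatim export):

```json
"parameters": {
    "sourceFolder": "@triggerBody().folderPath",
    "sourceFile": "@triggerBody().fileName"
}
```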
data-factory Solution Templates Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/solution-templates-introduction.md
Title: Overview of templates
description: Learn how to use a pre-defined template to get started quickly with Azure Data Factory. --++ Last updated 01/04/2019
Data Factory uses Azure Resource Manager templates for saving data factory pipel
- Copy templates: - [Bulk copy from Database](solution-template-bulk-copy-with-control-table.md)
-
+
- [Copy new files by LastModifiedDate](solution-template-copy-new-files-lastmodifieddate.md) - [Copy multiple file containers between file-based stores](solution-template-copy-files-multiple-containers.md)
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-machine-learning-service.md
Title: Execute Azure Machine Learning pipelines
description: Learn how to run your Azure Machine Learning pipelines in your Azure Data Factory pipelines. --++ Last updated 07/16/2020
The below video features a six-minute introduction and demonstration of this fea
Property | Description | Allowed values | Required
-- | -- | -- | --
name | Name of the activity in the pipeline | String | Yes
-type | Type of activity is ΓÇÿAzureMLExecutePipelineΓÇÖ | String | Yes
+type | Type of activity is 'AzureMLExecutePipeline' | String | Yes
linkedServiceName | Linked Service to Azure Machine Learning | Linked service reference | Yes
mlPipelineId | ID of the published Azure Machine Learning pipeline | String (or expression with resultType of string) | Yes
experimentName | Run history experiment name of the Machine Learning pipeline run | String (or expression with resultType of string) | No
event-grid Add Identity Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/add-identity-roles.md
Last updated 03/25/2021
-# Add an identity to Azure roles on Azure Event Grid destinations
+# Grant a managed identity access to an Event Grid destination
This section describes how to add the identity for your system topic, custom topic, or domain to an Azure role. ## Prerequisites
az role assignment create --role "$role" --assignee "$topic_pid" --scope "$sbust
``` ## Next steps
-Now that you have assigned a system-assigned identity to your system topic, custom topic, or domain, and added the identity to appropriate roles on destinations, see [Devlier events using identity](managed-service-identity.md) on delivering events to destinations using the identity.
+Now that you have assigned a system-assigned identity to your system topic, custom topic, or domain, and added the identity to appropriate roles on destinations, see [Deliver events using the managed identity](managed-service-identity.md) on delivering events to destinations using the identity.
event-grid Delivery And Retry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-and-retry.md
All other codes not in the above set (200-204) are considered failures and will
| 503 Service Unavailable | Retry after 30 seconds or more | | All others | Retry after 10 seconds or more |
-## Delivery with custom headers
+## Custom delivery properties
Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set up to 10 headers when creating an event subscription. Each header value shouldn't be greater than 4,096 (4K) bytes. You can set custom headers on the events that are delivered to the following destinations: - Webhooks
Event subscriptions allow you to set up HTTP headers that are included in delive
- Azure Event Hubs - Relay Hybrid Connections
-For more information, see [Delivery with custom headers](delivery-properties.md).
+For more information, see [Custom delivery properties](delivery-properties.md).
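Because the limits above are fixed (at most 10 headers, each value at most 4,096 bytes), a client can validate a header set before creating the event subscription. A small illustrative check, not part of any Event Grid SDK:

```python
MAX_HEADERS = 10
MAX_VALUE_BYTES = 4096  # documented 4K limit per header value

def validate_delivery_headers(headers: dict) -> None:
    """Raise ValueError if headers exceed Event Grid's documented limits."""
    if len(headers) > MAX_HEADERS:
        raise ValueError(f"At most {MAX_HEADERS} custom headers are allowed.")
    for name, value in headers.items():
        if len(value.encode("utf-8")) > MAX_VALUE_BYTES:
            raise ValueError(f"Header '{name}' exceeds {MAX_VALUE_BYTES} bytes.")

validate_delivery_headers({"x-custom-source": "orders-service"})  # passes
```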
## Next steps
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-properties.md
Last updated 03/24/2021
-# Delivery with custom headers
+# Custom delivery properties
Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set up to 10 headers when creating an event subscription. Each header value shouldn't be greater than 4,096 (4K) bytes. You can set custom headers on the events that are delivered to the following destinations:
event-grid Enable Identity Custom Topics Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-identity-custom-topics-domains.md
The command for updating an existing domain is similar (`az eventgrid domain upd
## Next steps
-Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Add identity to Azure roles on destinations](add-identity-roles.md).
+Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Grant a managed identity access to an Event Grid destination](add-identity-roles.md).
event-grid Enable Identity System Topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-identity-system-topics.md
You can enable system-managed identity only for the regional Azure resources. Yo
## Next steps
-Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Add identity to Azure roles on destinations](add-identity-roles.md).
+Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Grant a managed identity access to an Event Grid destination](add-identity-roles.md).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, GTT, IX Reach, Equinix, Megaport, SES, Sohonet, Telehouse - KDDI | | **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | 10G, 100G | CoreSite, Equinix, Megaport, Neutrona Networks, NTT, Zayo | | **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | 10G, 100G | Equinix |
-| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | |
+| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | Interxion |
| **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect | | **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | 10G, 100G | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Telstra Corporation, TPG Telecom | | **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | 10G, 100G | Claro, C3ntro, Equinix, Megaport, Neutrona Networks |
If you are remote and don't have fiber connectivity or you want to explore other
| **Amsterdam** | Equinix, Interxion, Level 3 Communications | BICS, CloudXpress, Eurofiber, Fastweb S.p.A, Gulf Bridge International, Kalaam Telecom Bahrain B.S.C, MainOne, Nianet, POST Telecom Luxembourg, Proximus, RETN, TDC Erhverv, Telecom Italia Sparkle, Telekom Deutschland GmbH, Telia | | **Atlanta** | Equinix| Crown Castle | **Cape Town** | Teraco | MTN |
+| **Chennai** | Tata Communications | Tata Teleservices |
| **Chicago** | Equinix| Crown Castle, Spectrum Enterprise, Windstream | | **Dallas** | Equinix, Megaport | Axtel, C3ntro Telecom, Cox Business, Crown Castle, Data Foundry, Spectrum Enterprise, Transtelco | | **Frankfurt** | Interxion | BICS, Cinia, Equinix, Nianet, QSC AG, Telekom Deutschland GmbH |
If you are remote and don't have fiber connectivity or you want to explore other
| **Los Angeles** | Equinix |Crown Castle, Spectrum Enterprise, Transtelco | | **Madrid** | Level3 | Zertia | | **Montreal** | Cologix| Airgate Technologies, Inc. Aptum Technologies, Rogers, Zirro |
+| **Mumbai** | Tata Communications | Tata Teleservices |
| **New York** |Equinix, Megaport | Altice Business, Crown Castle, Spectrum Enterprise, Webair | | **Paris** | Equinix | Proximus | | **Quebec City** | Megaport | Fibrenoire |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported |Chicago, Dallas, Silicon Valley, Washington DC |
| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported |Osaka, Tokyo |
| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported |Cape Town, Johannesburg, London |
-| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Copenhagen, Dublin, Frankfurt, London, Marseille, Paris, Zurich |
+| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Copenhagen, Dublin, Frankfurt, London, Madrid, Marseille, Paris, Zurich |
| **[IRIDEOS](https://irideos.it/)** |Supported |Supported |Milan |
| **Iron Mountain** | Supported |Supported |Washington DC |
| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**|Supported |Supported | Amsterdam, London2, Silicon Valley, Toronto, Washington DC |
If you are remote and don't have fiber connectivity or you want to explore other
| **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**|Equinix | Amsterdam, Dublin, London, Paris |
| **[QSC AG](https://www.qsc.de/de/produkte-loesungen/cloud-services-und-it-outsourcing/pure-enterprise-cloud/multi-cloud-management/azure-expressroute/)** |Interxion | Frankfurt |
| **[RETN](https://retn.net/services/cloud-connect/)** | Equinix | Amsterdam |
+| **[Tata Teleservices](https://www.tatateleservices.com/business-services/data-services/secure-cloud-connect)** | Tata Communications | Chennai, Mumbai |
| **Rogers** | Cologix, Equinix | Montreal, Toronto |
| **[Spectrum Enterprise](https://enterprise.spectrum.com/services/cloud/cloud-connect.html)** | Equinix | Chicago, Dallas, Los Angeles, New York, Silicon Valley |
| **[Tamares Telecom](http://www.tamarestelecom.com/our-services/#Connectivity)** | Equinix | London |
firewall-manager Private Link Inspection Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/private-link-inspection-secure-virtual-hub.md
+
+ Title: Secure traffic destined to private endpoints in Azure Virtual WAN
+description: Learn how to use network rules and application rules to secure traffic destined to private endpoints in Azure Virtual WAN
++++ Last updated : 04/02/2021+++
+# Secure traffic destined to private endpoints in Azure Virtual WAN
+
+[Azure Private Endpoint](../private-link/private-endpoint-overview.md) is the fundamental building block for [Azure Private Link](../private-link/private-link-overview.md). Private endpoints enable Azure resources deployed in a virtual network to communicate privately with private link resources.
+
+Private endpoints allow resources access to the private link service deployed in a virtual network. Access to the private endpoint through virtual network peering and on-premises network connections extends the connectivity.
+
+You may need to filter traffic from clients either on premises or in Azure destined to services exposed via private endpoints in a Virtual WAN connected virtual network. This article walks you through this task using [secured virtual hub](../firewall-manager/secured-virtual-hub.md) with [Azure Firewall](../firewall/overview.md) as the security provider.
+
+Azure Firewall filters traffic using any of the following methods:
+
+* [FQDN in network rules](../firewall/fqdn-filtering-network-rules.md) for TCP and UDP protocols
+* [FQDN in application rules](../firewall/features.md#application-fqdn-filtering-rules) for HTTP, HTTPS, and MSSQL
+* Source and destination IP addresses, port, and protocol using [network rules](../firewall/features.md#network-traffic-filtering-rules)
+
+Use application rules over network rules to inspect traffic destined to private endpoints.
+
+A secured virtual hub is managed by Microsoft and can't be linked to a [Private DNS Zone](../dns/private-dns-privatednszone.md), even though such a link is normally what's required to resolve a [private link resource](../private-link/private-endpoint-overview.md#private-link-resource) FQDN to its corresponding private endpoint IP address.
+
+SQL FQDN filtering is supported in [proxy-mode](../azure-sql/database/connectivity-architecture.md#connection-policy) only (port 1433). *Proxy* mode can result in more latency compared to *redirect*. If you want to continue using redirect mode, which is the default for clients connecting within Azure, you can filter access using FQDN in firewall network rules.
+
+## Filter traffic using FQDN in network and application rules
+
+The following steps enable Azure Firewall to filter traffic using FQDN in network and application rules:
+
+1. Deploy a [DNS forwarder](../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder) virtual machine in a virtual network connected to the secured virtual hub and linked to the Private DNS Zones hosting the A record types for the private endpoints.
+
+2. Configure [custom DNS settings](../firewall/dns-settings.md#configure-custom-dns-serversazure-portal) to point to the DNS forwarder virtual machine IP address and enable DNS proxy in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub.
+
+3. Configure [custom DNS servers](../virtual-network/manage-virtual-network.md#change-dns-servers) for the virtual networks connected to the secured virtual hub to point to the private IP address associated with the Azure Firewall deployed in the secured virtual hub.
+
+4. Configure on-premises DNS servers to forward DNS queries for the private endpoints' public DNS zones to the private IP address associated with the Azure Firewall deployed in the secured virtual hub.
+
+5. Configure an [application rule](../firewall/tutorial-firewall-deploy-portal.md#configure-an-application-rule) or [network rule](../firewall/tutorial-firewall-deploy-portal.md#configure-a-network-rule) as necessary in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub with *Destination Type* FQDN and the private link resource public FQDN as *Destination*.
+
+6. Navigate to *Secured virtual hubs* in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub and select the secured virtual hub where you want to configure filtering for traffic destined to private endpoints.
+
+7. Navigate to **Security configuration** and select **Send via Azure Firewall** under **Private traffic**.
+
+8. Select **Private traffic prefixes** to edit the CIDR prefixes that will be inspected via Azure Firewall in the secured virtual hub, and add one /32 prefix for each private endpoint as follows:
+
+ > [!IMPORTANT]
+ > If these /32 prefixes are not configured, traffic destined to private endpoints will bypass Azure Firewall.
+
+ :::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-manager-security-configuration.png" alt-text="Firewall Manager Security Configuration" border="true":::
+
+These steps only work for on-premises clients, or when the clients and private endpoints are deployed in different virtual networks connected to the same secured virtual hub. If the clients and private endpoints are deployed in the same virtual network, a UDR with /32 routes for the private endpoints must be created. Configure these routes with **Next hop type** set to **Virtual appliance** and **Next hop address** set to the private IP address of the Azure Firewall deployed in the secured virtual hub. **Propagate gateway routes** must be set to **Yes**.
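The per-endpoint /32 requirement above (both the **Private traffic prefixes** entries and the UDR variant for same-network clients) can be sketched programmatically. The following Python snippet is illustrative only: the endpoint IP addresses and the firewall private IP are hypothetical values, and the dictionaries merely mirror the shape of a UDR route entry.

```python
import ipaddress

def build_private_traffic_prefixes(endpoint_ips):
    """Return one /32 CIDR prefix per private endpoint IP address."""
    return [f"{ipaddress.ip_address(ip)}/32" for ip in endpoint_ips]

def build_udr_routes(endpoint_ips, firewall_private_ip):
    """Return UDR-style route entries forcing endpoint-bound traffic through the firewall."""
    return [
        {
            "addressPrefix": f"{ipaddress.ip_address(ip)}/32",
            "nextHopType": "VirtualAppliance",
            "nextHopIpAddress": firewall_private_ip,
        }
        for ip in endpoint_ips
    ]

# Hypothetical private endpoint IPs and firewall private IP.
endpoints = ["10.1.2.4", "10.1.2.5"]
print(build_private_traffic_prefixes(endpoints))
print(build_udr_routes(endpoints, "10.0.0.132"))
```

If any /32 prefix is missing from the list, traffic to that endpoint bypasses the firewall, which is exactly the failure mode called out in the important note above.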
+
+The following diagram illustrates the DNS and data traffic flows for the different clients to connect to a private endpoint deployed in Azure virtual WAN:
++
+## Troubleshooting
+
+The main problems that you might have when you attempt to filter traffic destined to private endpoints via secured virtual hub are:
+
+- Clients are unable to connect to private endpoints.
+
+- Azure Firewall is bypassed. This symptom can be validated by the absence of network or application rules log entries in Azure Firewall.
+
+In most cases, these problems are caused by one of the following issues:
+
+- Incorrect DNS name resolution
+
+- Incorrect routing configuration
+
+### Incorrect DNS name resolution
+
+1. Verify the virtual network DNS servers are set to *Custom* and the IP address is the private IP address of Azure Firewall in secured virtual hub.
+
+ Azure CLI:
+
+ ```azurecli-interactive
+ az network vnet show --name <VNET Name> --resource-group <Resource Group Name> --query "dhcpOptions.dnsServers"
+ ```
+2. Verify clients in the same virtual network as the DNS forwarder virtual machine can resolve the private endpoint public FQDN to its corresponding private IP address by directly querying the virtual machine configured as DNS forwarder.
+
+ Linux:
+
+ ```bash
+ dig @<DNS forwarder VM IP address> <Private endpoint public FQDN>
+ ```
+3. Inspect *AzureFirewallDNSProxy* Azure Firewall log entries and validate it can receive and resolve DNS queries from the clients.
+
+ ```kusto
+ AzureDiagnostics
+ | where Category contains "DNS"
+ | where msg_s contains "database.windows.net"
+ ```
+4. Verify *DNS proxy* has been enabled and a *Custom* DNS server pointing to the DNS forwarder virtual machine IP address has been configured in the firewall policy associated with the Azure Firewall in the secured virtual hub.
+
+ Azure CLI:
+
+ ```azurecli-interactive
+ az network firewall policy show --name <Firewall Policy> --resource-group <Resource Group Name> --query dnsSettings
+ ```
+
+### Incorrect routing configuration
+
+1. Verify *Security configuration* in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub. Make sure under the **PRIVATE TRAFFIC** column it shows as **Secured by Azure Firewall** for all the virtual network and branches connections you want to filter traffic for.
+
+ :::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-policy-private-traffic-configuration.png" alt-text="Private Traffic Secured by Azure Firewall" border="true":::
+
+2. Verify **Security configuration** in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub. Make sure there's a /32 entry for each private endpoint private IP address you want to filter traffic for under **Private traffic prefixes**.
+
+ :::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-manager-security-configuration.png" alt-text="Firewall Manager Security Configuration - Private Traffic Prefixes" border="true":::
+
+3. In the secured virtual hub under virtual WAN, inspect effective routes for the route tables associated with the virtual networks and branches connections you want to filter traffic for. Make sure there are /32 entries for each private endpoint private IP address you want to filter traffic for.
+
+ :::image type="content" source="./media/private-link-inspection-secure-virtual-hub/secured-virtual-hub-effective-routes.png" alt-text="Secured Virtual Hub Effective Routes" border="true":::
+
+4. Inspect the effective routes on the NICs attached to the virtual machines deployed in the virtual networks you want to filter traffic for. Make sure there are /32 entries for each private endpoint private IP address you want to filter traffic for.
+
+ Azure CLI:
+
+ ```azurecli-interactive
+ az network nic show-effective-route-table --name <Network Interface Name> --resource-group <Resource Group Name> -o table
+ ```
+5. Inspect the routing tables of your on-premises routing devices. Make sure you're learning the address spaces of the virtual networks where the private endpoints are deployed.
+
+ Azure virtual WAN doesn't advertise the prefixes configured under **Private traffic prefixes** in the firewall policy **Security configuration** to on-premises networks. It's expected that the /32 entries won't show in the routing tables of your on-premises routing devices.
+
+6. Inspect **AzureFirewallApplicationRule** and **AzureFirewallNetworkRule** Azure Firewall logs. Make sure traffic destined to the private endpoints is being logged.
+
+ **AzureFirewallNetworkRule** log entries don't include FQDN information. Filter by IP address and port when inspecting network rules.
+
+ When filtering traffic destined to [Azure Files](../storage/files/storage-files-introduction.md) private endpoints, **AzureFirewallNetworkRule** log entries will only be generated when a client first mounts or connects to the file share. Azure Firewall won't generate logs for [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations for files in the file share. This is because CRUD operations are carried over the persistent TCP channel opened when the client first connects or mounts to the file share.
+
+ Application rule log query example:
+
+ ```kusto
+ AzureDiagnostics
+ | where msg_s contains "database.windows.net"
+ | where Category contains "ApplicationRule"
+ ```
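Because **AzureFirewallNetworkRule** entries carry no FQDN information, matching has to be done on destination IP address and port, as noted above. The following Python sketch shows that matching over a hypothetical, simplified record shape — real entries are parsed out of the `msg_s` field of `AzureDiagnostics`, so the field names here are assumptions for illustration.

```python
# Hypothetical, simplified log records standing in for parsed firewall log entries.
logs = [
    {"rule": "NetworkRule", "dst_ip": "10.1.2.4", "dst_port": 1433, "action": "Allow"},
    {"rule": "NetworkRule", "dst_ip": "10.1.2.9", "dst_port": 443, "action": "Deny"},
    {"rule": "ApplicationRule", "fqdn": "myserver.database.windows.net", "action": "Allow"},
]

def network_rule_hits(entries, dst_ip, dst_port):
    """Network rule logs have no FQDN, so filter on destination IP and port."""
    return [
        e for e in entries
        if e["rule"] == "NetworkRule"
        and e["dst_ip"] == dst_ip
        and e["dst_port"] == dst_port
    ]

# Find traffic to a private endpoint at 10.1.2.4 on the SQL proxy port.
print(network_rule_hits(logs, "10.1.2.4", 1433))
```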
+## Next steps
+
+- [Use Azure Firewall to inspect traffic destined to a private endpoint](../private-link/inspect-traffic-with-azure-firewall.md)
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/features.md
Previously updated : 03/10/2021 Last updated : 04/02/2021
Azure Firewall Availability Zones are available in regions that support Availabi
> [!NOTE]
> Availability Zones can only be configured during deployment. You can't configure an existing firewall to include Availability Zones.
-For more information about Availability Zones, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md)
+For more information about Availability Zones, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md).
## Unrestricted cloud scalability
All outbound virtual network traffic IP addresses are translated to the Azure Fi
If your organization uses a public IP address range for private networks, Azure Firewall will SNAT the traffic to one of the firewall private IP addresses in AzureFirewallSubnet. You can configure Azure Firewall to **not** SNAT your public IP address range. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
+You can monitor SNAT port utilization in Azure Firewall metrics. Learn more and see our recommendation on SNAT port utilization in our [firewall logs and metrics documentation](logs-and-metrics.md#metrics).
+
+## Inbound DNAT support
+
+Inbound Internet network traffic to your firewall public IP address is translated (Destination Network Address Translation) and filtered to the private IP addresses on your virtual networks.
firewall Logs And Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/logs-and-metrics.md
Previously updated : 02/16/2021 Last updated : 04/02/2021
The following metrics are available for Azure Firewall:
When you add more public IP addresses to your firewall, more SNAT ports are available, reducing the SNAT ports utilization. Additionally, when the firewall scales out for different reasons (for example, CPU or throughput) additional SNAT ports also become available. So effectively, a given percentage of SNAT ports utilization may go down without you adding any public IP addresses, just because the service scaled out. You can directly control the number of public IP addresses available to increase the ports available on your firewall. But, you can't directly control firewall scaling.
+ If your firewall is running into SNAT port exhaustion, you should add at least five public IP addresses. This increases the number of SNAT ports available. For more information, see [Azure Firewall features](features.md#multiple-public-ip-addresses).
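As a rough model of why adding public IP addresses relieves SNAT exhaustion, the Python sketch below multiplies ports per IP by IP count and instance count. The 2,496 ports-per-IP-per-instance figure and the instance counts are assumptions for illustration; the actual numbers depend on the service and how far the firewall has scaled out.

```python
def snat_capacity(public_ip_count, backend_instances, ports_per_ip_per_instance=2496):
    """Total SNAT ports available: capacity grows with both public IPs and scale-out."""
    return public_ip_count * backend_instances * ports_per_ip_per_instance

def utilization(active_connections, public_ip_count, backend_instances):
    """SNAT port utilization as a percentage of total capacity."""
    return 100.0 * active_connections / snat_capacity(public_ip_count, backend_instances)

# 4,000 outbound flows with one public IP and two instances,
# then the same load after adding four more public IPs (five total):
before = utilization(4_000, public_ip_count=1, backend_instances=2)
after = utilization(4_000, public_ip_count=5, backend_instances=2)
print(f"{before:.0f}% -> {after:.0f}%")  # prints "80% -> 16%"
```

The same arithmetic explains the observation above that utilization can drop without adding IPs: scale-out increases `backend_instances`, which also grows the denominator.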
+ ## Next steps
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-bicep.md
+
+ Title: "Quickstart: New policy assignment with Bicep (Preview) file"
+description: In this quickstart, you use a Bicep (Preview) file to create a policy assignment to identify non-compliant resources.
Last updated : 04/01/2021+++
+# Quickstart: Create a policy assignment to identify non-compliant resources by using a Bicep file
+
+The first step in understanding compliance in Azure is to identify the status of your resources.
+This quickstart steps you through the process of using a
+[Bicep (Preview)](https://github.com/Azure/bicep) file compiled to an Azure Resource
+Manager template (ARM template) to create a policy assignment to identify virtual machines that
+aren't using managed disks. At the end of this process, you'll successfully identify virtual
+machines that aren't using managed disks. They're _non-compliant_ with the policy assignment.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the
+**Deploy to Azure** button. The template opens in the Azure portal.
++
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
+ account before you begin.
+- Bicep version `0.3` or higher installed. If you don't yet have Bicep CLI or need to update, see
+ [Install Bicep (Preview)](../../azure-resource-manager/templates/bicep-install.md).
+
+## Review the Bicep file
+
+In this quickstart, you create a policy assignment and assign a built-in policy definition called
+_Audit VMs that do not use managed disks_ (`06a78e20-9358-41c9-923c-fb736d382a4d`). For a partial
+list of available built-in policies, see [Azure Policy samples](./samples/index.md).
+
+Create the following Bicep file as `assignment.bicep`:
+
+```bicep
+param policyAssignmentName string = 'audit-vm-manageddisks'
+param policyDefinitionID string = '/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d'
+
+resource assignment 'Microsoft.Authorization/policyAssignments@2019-09-01' = {
+ name: policyAssignmentName
+ properties: {
+ scope: subscriptionResourceId('Microsoft.Resources/resourceGroups', resourceGroup().name)
+ policyDefinitionId: policyDefinitionID
+ }
+}
+
+output assignmentId string = assignment.id
+```
+
+The resource defined in the file is:
+
+- [Microsoft.Authorization/policyAssignments](/azure/templates/microsoft.authorization/policyassignments)
+
+## Deploy the template
+
+> [!NOTE]
+> Azure Policy service is free. For more information, see
+> [Overview of Azure Policy](./overview.md).
+
+After the Bicep CLI is installed and the file is created, you can deploy the Bicep file with:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzResourceGroupDeployment `
+ -Name PolicyDeployment `
+ -ResourceGroupName PolicyGroup `
+ -TemplateFile assignment.bicep
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az deployment group create \
+ --name PolicyDeployment \
+ --resource-group PolicyGroup \
+ --template-file assignment.bicep
+```
+++
+Some additional resources:
+
+- To find more sample templates, see
+ [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Authorization&pageNumber=1&sort=Popular).
+- To see the template reference, go to
+ [Azure template reference](/azure/templates/microsoft.authorization/allversions).
+- To learn how to develop ARM templates, see
+ [Azure Resource Manager documentation](../../azure-resource-manager/management/overview.md).
+- To learn subscription-level deployment, see
+ [Create resource groups and resources at the subscription level](../../azure-resource-manager/templates/deploy-to-subscription.md).
+
+## Validate the deployment
+
+Select **Compliance** on the left side of the page. Then locate the _Audit VMs that do not use
+managed disks_ policy assignment you created.
++
+If there are any existing resources that aren't compliant with this new assignment, they appear
+under **Non-compliant resources**.
+
+For more information, see
+[How compliance works](./how-to/get-compliance-data.md#how-compliance-works).
+
+## Clean up resources
+
+To remove the assignment created, follow these steps:
+
+1. Select **Compliance** (or **Assignments**) on the left side of the Azure Policy page and locate
+ the _Audit VMs that do not use managed disks_ policy assignment you created.
+
+1. Right-click the _Audit VMs that do not use managed disks_ policy assignment and select **Delete
+ assignment**.
+
+ :::image type="content" source="./media/assign-policy-template/delete-assignment.png" alt-text="Screenshot of using the context menu to delete an assignment from the Compliance page." border="false":::
+
+1. Delete the `assignment.bicep` file.
+
+## Next steps
+
+In this quickstart, you assigned a built-in policy definition to a scope and evaluated its
+compliance report. The policy definition validates that all the resources in the scope are compliant
+and identifies which ones aren't.
+
+To learn more about assigning policies to validate that new resources are compliant, continue to the
+tutorial for:
+
+> [!div class="nextstepaction"]
+> [Creating and managing policies](./tutorials/create-and-manage.md)
governance First Query Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-java.md
+
+ Title: "Quickstart: Your first Java query"
+description: In this quickstart, you follow the steps to enable the Resource Graph Maven packages for Java and run your first query.
Last updated : 03/30/2021+++
+# Quickstart: Run your first Resource Graph query using Java
+
+The first step to using Azure Resource Graph is to check that the required Maven packages for Java
+are installed. This quickstart walks you through the process of adding the Maven packages to your
+Java installation.
+
+At the end of this process, you'll have added the Maven packages to your Java installation and run
+your first Resource Graph query.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a
+ [free](https://azure.microsoft.com/free/) account before you begin.
+
+- Check that the latest Azure CLI is installed (at least **2.21.0**). If it isn't yet installed, see
+ [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+ > [!NOTE]
+ > Azure CLI is required to enable Azure SDK for Java to use the **CLI-based authentication** in
+ > the following examples. For information about other options, see
+ > [Azure Identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity).
+
+- The [Java Developer Kit](/azure/developer/java/fundamentals/java-jdk-long-term-support), version
+ 8.
+
+- [Apache Maven](https://maven.apache.org/), version 3.6 or above.
+
+## Create the Resource Graph project
+
+To enable Java to query Azure Resource Graph, create and configure a new application with Maven and
+install the required Maven packages.
+
+1. Initialize a new Java application named "argQuery" with a
+ [Maven archetype](https://maven.apache.org/guides/introduction/introduction-to-archetypes.html):
+
+ ```cmd
+ mvn -B archetype:generate -DarchetypeGroupId="org.apache.maven.archetypes" -DgroupId="com.Fabrikam" -DartifactId="argQuery"
+ ```
+
+1. Change directories into the new project folder `argQuery` and open `pom.xml` in your favorite
+ editor. Add the following `<dependency>` nodes under the existing `<dependencies>` node:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.2.4</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure.resourcemanager</groupId>
+ <artifactId>azure-resourcemanager-resourcegraph</artifactId>
+ <version>1.0.0-beta.1</version>
+ </dependency>
+ ```
+
+1. In the `pom.xml` file, add the following `<properties>` node under the base `<project>` node to
+ update the source and target versions:
+
+ ```xml
+ <properties>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+ ```
+
+1. In the `pom.xml` file, add the following `<build>` node under the base `<project>` node to
+ configure the goal and main class for the project to run.
+
+ ```xml
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.codehaus.mojo</groupId>
+ <artifactId>exec-maven-plugin</artifactId>
+ <version>1.2.1</version>
+ <executions>
+ <execution>
+ <goals>
+ <goal>java</goal>
+ </goals>
+ </execution>
+ </executions>
+ <configuration>
+ <mainClass>com.Fabrikam.App</mainClass>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ ```
+
+1. Replace the default `App.java` in `\argQuery\src\main\java\com\Fabrikam` with the following code
+ and save the updated file:
+
+ ```java
+ package com.Fabrikam;
+
+ import java.util.Arrays;
+ import java.util.List;
+ import com.azure.core.management.AzureEnvironment;
+ import com.azure.core.management.profile.AzureProfile;
+ import com.azure.identity.DefaultAzureCredentialBuilder;
+ import com.azure.resourcemanager.resourcegraph.ResourceGraphManager;
+ import com.azure.resourcemanager.resourcegraph.models.QueryRequest;
+ import com.azure.resourcemanager.resourcegraph.models.QueryRequestOptions;
+ import com.azure.resourcemanager.resourcegraph.models.QueryResponse;
+ import com.azure.resourcemanager.resourcegraph.models.ResultFormat;
+
+ public class App
+ {
+ public static void main( String[] args )
+ {
+ List<String> listSubscriptionIds = Arrays.asList(args[0]);
+ String strQuery = args[1];
+
+ ResourceGraphManager manager = ResourceGraphManager.authenticate(new DefaultAzureCredentialBuilder().build(), new AzureProfile(AzureEnvironment.AZURE));
+
+ QueryRequest queryRequest = new QueryRequest()
+ .withSubscriptions(listSubscriptionIds)
+ .withQuery(strQuery);
+
+ QueryResponse response = manager.resourceProviders().resources(queryRequest);
+
+ System.out.println("Records: " + response.totalRecords());
+ System.out.println("Data:\n" + response.data());
+ }
+ }
+ ```
+
+1. Build the `argQuery` console application:
+
+ ```bash
+ mvn package
+ ```
+
+## Run your first Resource Graph query
+
+With the Java console application built, it's time to try out a simple Resource Graph query. The
+query returns the first five Azure resources with the **Name** and **Resource Type** of each
+resource.
+
+Each call to `argQuery` uses variables that you need to replace with your own
+values:
+
+- `{subscriptionId}` - Replace with your subscription ID
+- `{query}` - Replace with your Azure Resource Graph query
+
+1. Use the Azure CLI to authenticate with `az login`.
+
+1. Change directories to the `argQuery` project folder you created with the previous
+ `mvn -B archetype:generate` command.
+
+1. Run your first Azure Resource Graph query using Maven to compile the console application and pass
+   the arguments. The `exec.args` property separates arguments on spaces. To pass the query as a
+   single argument, wrap it in single quotes (`'`).
+
+ ```bash
+ mvn compile exec:java -Dexec.args="{subscriptionId} 'Resources | project name, type | limit 5'"
+ ```
+
+ > [!NOTE]
+ > As this query example doesn't provide a sort modifier such as `order by`, running this query
+ > multiple times is likely to yield a different set of resources per request.
+
+1. Change the query to `order by` the **Name** property:
+
+ ```bash
+ mvn compile exec:java -Dexec.args="{subscriptionId} 'Resources | project name, type | limit 5 | order by name asc'"
+ ```
+
+ > [!NOTE]
+ > Just as with the first query, running this query multiple times is likely to yield a different
+ > set of resources per request. The order of the query commands is important. In this example,
+ > the `order by` comes after the `limit`. This command order first limits the query results and
+ > then orders them.
+
+1. Change the query to first `order by` the **Name** property and then `limit` to the top five
+   results:
+
+ ```bash
+ mvn compile exec:java -Dexec.args="{subscriptionId} 'Resources | project name, type | order by name asc | limit 5'"
+ ```
+
+When the final query is run several times, assuming that nothing in your environment is changing,
+the results returned are consistent and ordered by the **Name** property, but still limited to the
+top five results.
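The effect of command order described in the notes above can be modeled in a few lines of Python: the resource names are made up, `resources[:5]` stands in for an arbitrary unordered `limit 5`, and `sorted` stands in for `order by name asc`.

```python
# Hypothetical resource names returned by a query with no inherent order.
resources = ["vm-web", "storage-logs", "kv-prod", "vm-sql", "nic-web", "vnet-hub", "disk-os"]

# 'limit 5 | order by name asc': take an arbitrary first five, then sort those five.
limit_then_order = sorted(resources[:5])

# 'order by name asc | limit 5': sort the whole set first, so the top five are deterministic.
order_then_limit = sorted(resources)[:5]

print(limit_then_order)
print(order_then_limit)
```

The two pipelines generally return different rows; only the second is stable across repeated runs, matching the behavior of the final query in the steps above.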
+
+## Clean up resources
+
+If you wish to remove the Java console application and installed packages, you can do so by deleting
+the `argQuery` project folder.
+
+## Next steps
+
+In this quickstart, you've created a Java console application with the required Resource Graph
+packages and run your first query. To learn more about the Resource Graph language, continue to the
+query language details page.
+
+> [!div class="nextstepaction"]
+> [Get more information about the query language](./concepts/query-language.md)
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/how-to/get-resource-changes.md
Title: Get resource changes description: Understand how to find when a resource was changed, get a list of the properties that changed, and evaluate the diffs. Previously updated : 01/27/2021 Last updated : 03/31/2021 # Get resource changes
Each detected change event for the **resourceId** has the following properties:
- **changeType** - Describes the type of change detected for the entire change record between the **beforeSnapshot** and **afterSnapshot**. Values are: _Create_, _Update_, and _Delete_. The **propertyChanges** property array is only included when **changeType** is _Update_.
+
+ > [!IMPORTANT]
+ > _Create_ is only available on resources that previously existed and were deleted within the last
+ > 14 days.
+
- **propertyChanges** - This array of properties details all of the resource properties that were updated between the **beforeSnapshot** and the **afterSnapshot**:
  - **propertyName** - The name of the resource property that was altered.
hdinsight Apache Spark Troubleshoot Job Slowness Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-troubleshoot-job-slowness-container.md
In Spark 2.1, while we do not need to update the cache after every write, Spark
In Spark 2.2, when writing data with append mode, this performance problem should be fixed.
+In Spark 2.3, the same behavior as Spark 2.2 is expected.
+
+## Resolution
+
+When you create a partitioned data set, it is important to use a partitioning scheme that will limit the number of files that Spark has to list to update the `FileStatusCache`.
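As a rough illustration of why the partitioning scheme matters, the Python sketch below simulates a hypothetical date-partitioned layout. A filter that maps onto a partition directory prunes the listing to one partition, instead of forcing an enumeration of every file to refresh the `FileStatusCache`. The paths and counts are made up for illustration.

```python
# Hypothetical partitioned layout: 3 date partitions x 4 files each.
files = [
    f"data/date={d}/part-{i:05d}.parquet"
    for d in ("2021-04-01", "2021-04-02", "2021-04-03")
    for i in range(4)
]

def files_to_list(all_files, partition_filter=None):
    """Files that would have to be enumerated; a partition filter prunes the listing."""
    if partition_filter is None:
        return all_files  # no pruning: every file must be listed
    return [f for f in all_files if partition_filter in f]

print(len(files_to_list(files)))                     # 12
print(len(files_to_list(files, "date=2021-04-02")))  # 4
```

A scheme that creates many small partitions (or none at all) pushes the listing cost back toward the unpruned case, which is the slowness this article describes.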
For every Nth micro batch where N % 100 == 0 (100 is just an example), move exis
## Next steps
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/customer-managed-key.md
In Azure, this is typically accomplished using an encryption key in the customer
- [Register the Azure Cosmos DB resource provider for your Azure subscription](../../cosmos-db/how-to-setup-cmk.md#register-resource-provider)
- [Configure your Azure Key Vault instance](../../cosmos-db/how-to-setup-cmk.md#configure-your-azure-key-vault-instance)
-- [Add an access policy to your Azure Key Vault instance](../../cosmos-db/how-to-setup-cmk.md#add-an-access-policy-to-your-azure-key-vault-instance)
+- [Add an access policy to your Azure Key Vault instance](../../cosmos-db/how-to-setup-cmk.md#add-access-policy)
- [Generate a key in Azure Key Vault](../../cosmos-db/how-to-setup-cmk.md#generate-a-key-in-azure-key-vault)

## Using Azure portal
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
description: How to use the new data export to export your IoT data to Azure and
Previously updated : 01/27/2021 Last updated : 03/24/2021
This article describes how to use the new data export feature in Azure IoT Centr
For example, you can:

-- Continuously export telemetry data and property changes in JSON format in near-real time.
+- Continuously export telemetry, property changes, device lifecycle, and device template lifecycle data in JSON format in near-real time.
- Filter the data streams to export data that matches custom conditions.
- Enrich the data streams with custom values and property values from the device.
- Send the data to destinations such as Azure Event Hubs, Azure Service Bus, Azure Blob Storage, and webhook endpoints.
Now that you have a destination to export your data to, set up data export in yo
| :- | :- | :-- |
| Telemetry | Export telemetry messages from devices in near-real time. Each exported message contains the full contents of the original device message, normalized. | [Telemetry message format](#telemetry-format) |
| Property changes | Export changes to device and cloud properties in near-real time. For read-only device properties, changes to the reported values are exported. For read-write properties, both reported and desired values are exported. | [Property change message format](#property-changes-format) |
+ | Device lifecycle | Export device registered and deleted events. | [Device lifecycle changes message format](#device-lifecycle-changes-format) |
+ | Device template lifecycle | Export published device template changes including created, updated, and deleted. | [Device template lifecycle changes message format](#device-template-lifecycle-changes-format) |
-<a name="DataExportFilters"></a>
1. Optionally, add filters to reduce the amount of data exported. There are different types of filters available for each data export type:
- To filter telemetry, you can:
-
- - **Filter** the exported stream to only contain telemetry from devices that match the device name, device ID, and device template filter condition.
- - **Filter** over capabilities: If you choose a telemetry item in the **Name** dropdown, the exported stream only contains telemetry that meets the filter condition. If you choose a device or cloud property item in the **Name** dropdown, the exported stream only contains telemetry from devices with properties matching the filter condition.
- - **Message property filter**: Devices that use the device SDKs can send *message properties* or *application properties* on each telemetry message. The properties are a bag of key-value pairs that tag the message with custom identifiers. To create a message property filter, enter the message property key you're looking for, and specify a condition. Only telemetry messages with properties that match the specified filter condition are exported. The following string comparison operators are supported: equals, does not equal, contains, does not contain, exists, does not exist. [Learn more about application properties from IoT Hub docs](../../iot-hub/iot-hub-devguide-messages-construct.md).
-
- To filter property changes, use a **Capability filter**. Choose a property item in the dropdown. The exported stream only contains changes to the selected property that meets the filter condition.
-
-<a name="DataExportEnrichmnents"></a>
+ <a name="DataExportFilters"></a>
+
+ | Type of data | Available filters|
+ |--||
+ |Telemetry|<ul><li>Filter by device name, device ID, and device template</li><li>Filter stream to only contain telemetry that meets the filter conditions</li><li>Filter stream to only contain telemetry from devices with properties matching the filter conditions</li><li>Filter stream to only contain telemetry messages that have *message properties* meeting the filter condition. *Message properties* (also known as *application properties*) are key-value pairs that devices using the device SDKs can optionally send on each telemetry message. To create a message property filter, enter the message property key you're looking for, and specify a condition. Only telemetry messages with properties that match the specified filter condition are exported. [Learn more about application properties from IoT Hub docs](../../iot-hub/iot-hub-devguide-messages-construct.md)</li></ul>|
+ |Property changes|<ul><li>Filter by device name, device ID, and device template</li><li>Filter stream to only contain property changes that meet the filter conditions</li></ul>|
+ |Device lifecycle|<ul><li>Filter by device name, device ID, and device template</li><li>Filter stream to only contain changes from devices with properties matching the filter conditions</li></ul>|
+ |Device template lifecycle|<ul><li>Filter by device template</li></ul>|
+
1. Optionally, enrich exported messages with additional key-value pair metadata. The following enrichments are available for the telemetry and property changes data export types:
+<a name="DataExportEnrichmnents"></a>
- **Custom string**: Adds a custom static string to each message. Enter any key, and enter any string value.
- **Property**: Adds the current device reported property or cloud property value to each message. Enter any key, and choose a device or cloud property. If the exported message is from a device that doesn't have the specified property, the exported message doesn't get the enrichment.
Each exported message contains a normalized form of the full message the device
- `deviceId`: The ID of the device that sent the telemetry message.
- `schema`: The name and version of the payload schema.
- `templateId`: The ID of the device template associated with the device.
+- `enqueuedTime`: The time at which this message was received by IoT Central.
- `enrichments`: Any enrichments set up on the export.
- `messageProperties`: Additional properties that the device sent with the message. These properties are sometimes referred to as *application properties*. [Learn more from IoT Hub docs](../../iot-hub/iot-hub-devguide-messages-construct.md).
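As an illustration only (the record below is hypothetical, shaped like the fields listed above, with made-up values), an export consumer could parse a normalized telemetry record with the standard library:

```python
import json

# Hypothetical exported telemetry record, shaped like the normalized
# fields described above (all values are made up for illustration).
record = json.loads("""
{
  "applicationId": "00000000-0000-0000-0000-000000000000",
  "messageSource": "telemetry",
  "deviceId": "sample-device",
  "schema": "default@v1",
  "templateId": "urn:example:template",
  "enqueuedTime": "2021-01-01T22:26:55.455Z",
  "telemetry": {"temperature": 21.5},
  "enrichments": {"userSpecifiedKey": "sampleValue"},
  "messageProperties": {"priority": "high"}
}
""")

# Read the time IoT Central received the message and any export enrichments.
enqueued_time = record["enqueuedTime"]
enrichment = record.get("enrichments", {}).get("userSpecifiedKey")
```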
Each message or record represents one change to a device or cloud property. For
- `messageType`: Either `cloudPropertyChange`, `devicePropertyDesiredChange`, or `devicePropertyReportedChange`.
- `deviceId`: The ID of the device that sent the telemetry message.
- `schema`: The name and version of the payload schema.
+- `enqueuedTime`: The time at which this change was detected by IoT Central.
- `templateId`: The ID of the device template associated with the device.
- `enrichments`: Any enrichments set up on the export.
The following example shows an exported property change message received in Azur
} ```
+## Device lifecycle changes format
+
+Each message or record represents one change to a single device. Information in the exported message includes:
+
+- `applicationId`: The ID of the IoT Central application.
+- `messageSource`: The source for the message - `deviceLifecycle`.
+- `messageType`: Either `registered` or `deleted`.
+- `deviceId`: The ID of the device that was changed.
+- `schema`: The name and version of the payload schema.
+- `templateId`: The ID of the device template associated with the device.
+- `enqueuedTime`: The time at which this change occurred in IoT Central.
+- `enrichments`: Any enrichments set up on the export.
+
+For Event Hubs and Service Bus, IoT Central exports new message data to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically.
+
+For Blob storage, messages are batched and exported once per minute.
+
+The following example shows an exported device lifecycle message received in Azure Blob Storage.
+
+```json
+{
+ "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
+ "messageSource": "deviceLifecycle",
+ "messageType": "registered",
+ "deviceId": "1vzb5ghlsg1",
+ "schema": "default@v1",
+ "templateId": "urn:qugj6vbw5:___qbj_27r",
+ "enqueuedTime": "2021-01-01T22:26:55.455Z",
+ "enrichments": {
+ "userSpecifiedKey": "sampleValue"
+ }
+}
+```
+## Device template lifecycle changes format
+
+Each message or record represents one change to a single published device template. Information in the exported message includes:
+
+- `applicationId`: The ID of the IoT Central application.
+- `messageSource`: The source for the message - `deviceTemplateLifecycle`.
+- `messageType`: Either `created`, `updated`, or `deleted`.
+- `schema`: The name and version of the payload schema.
+- `templateId`: The ID of the device template associated with the device.
+- `enqueuedTime`: The time at which this change occurred in IoT Central.
+- `enrichments`: Any enrichments set up on the export.
+
+For Event Hubs and Service Bus, IoT Central exports new message data to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically.
+
+For Blob storage, messages are batched and exported once per minute.
+
+The following example shows an exported device template lifecycle message received in Azure Blob Storage.
+
+```json
+{
+ "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
+ "messageSource": "deviceTemplateLifecycle",
+ "messageType": "created",
+ "schema": "default@v1",
+ "templateId": "urn:qugj6vbw5:___qbj_27r",
+ "enqueuedTime": "2021-01-01T22:26:55.455Z",
+ "enrichments": {
+ "userSpecifiedKey": "sampleValue"
+ }
+}
+```
+## Comparison of legacy data export and data export

The following table shows the differences between the [legacy data export](howto-export-data-legacy.md) and the new data export features:

| Capabilities | Legacy data export | New data export |
| :- | :- | :-- |
-| Available data types | Telemetry, Devices, Device templates | Telemetry, Property changes |
+| Available data types | Telemetry, Devices, Device templates | Telemetry, Property changes, Device lifecycle changes, Device template lifecycle changes |
| Filtering | None | Depends on the data type exported. For telemetry, filtering by telemetry, message properties, property values |
| Enrichments | None | Enrich with a custom string or a property value on the device |
| Destinations | Azure Event Hubs, Azure Service Bus queues and topics, Azure Blob Storage | Same as for legacy data export plus webhooks |
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot-common-errors.md
Specify the DNS server for your environment in the container engine settings, wh
The above example sets the DNS server to a publicly accessible DNS service. If the edge device can't access this IP from its environment, replace it with a DNS server address that is accessible.
+<!-- 1.1 -->
Place `daemon.json` in the right location for your platform:

| Platform | Location |
Restart the container engine for the updates to take effect.
| Linux | `sudo systemctl restart docker` |
| Windows (Admin PowerShell) | `Restart-Service iotedge-moby -Force` |
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+Place `daemon.json` in the `/etc/docker` directory on your device.
+
+If the location already contains a `daemon.json` file, add the **dns** key to it and save the file.
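For example, a minimal `daemon.json` with the **dns** key might look like the following (the address shown is illustrative; use a DNS server reachable from your environment):

```json
{
  "dns": ["1.1.1.1"]
}
```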
+
+Restart the container engine for the updates to take effect.
+
+```bash
+sudo systemctl restart docker
+```
+
+<!-- end 1.2 -->
+**Option 2: Set DNS server in IoT Edge deployment per module**

You can set the DNS server for each module's *createOptions* in the IoT Edge deployment. For example:
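A sketch of what that might look like for a module in the deployment manifest (the address is illustrative, and in some manifests `createOptions` must be supplied as a stringified JSON value):

```json
"createOptions": {
  "HostConfig": {
    "Dns": ["1.1.1.1"]
  }
}
```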
When you see this error, you can resolve it by configuring the DNS name of your
:::moniker-end <!-- end 1.2 -->
+<!-- 1.1 -->
+ ## Can't get the IoT Edge daemon logs on Windows **Observed behavior:**
Windows Registry Editor Version 5.00
"TypesSupported"=dword:00000007 ```
+<!-- end 1.1 -->
+ ## Stability issues on smaller devices **Observed behavior:**
IoT Edge devices behind a gateway get their module images from the parent IoT Ed
Make sure the parent IoT Edge device can receive incoming requests from the child IoT Edge device. Open network traffic on ports 443 and 6617 for requests coming from the child device. :::moniker-end
+<!-- end 1.2 -->
## Next steps
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot.md
description: Use this article to learn standard diagnostic skills for Azure IoT
Previously updated : 11/12/2020 Last updated : 04/01/2021
Your first step when troubleshooting IoT Edge should be to use the `check` comma
You can run the `check` command as follows, or include the `--help` flag to see a complete list of options:
+<!-- 1.1 -->
On Linux: ```bash
On Windows:
iotedge check ```
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+```bash
+sudo iotedge check
+```
+
+<!-- end 1.2 -->
The troubleshooting tool runs many checks that are sorted into these three categories:

* *Configuration checks* examine details that could prevent IoT Edge devices from connecting to the cloud, including issues with the config file and the container engine.
When you need to gather logs from an IoT Edge device, the most convenient way is
Run the `support-bundle` command with the `--since` flag to specify how long from the past you want to get logs. For example, `6h` will get logs since the last six hours, `6d` since the last six days, `6m` since the last six minutes, and so on. Include the `--help` flag to see a complete list of options.
+<!-- 1.1 -->
+ On Linux: ```bash
On Windows:
iotedge support-bundle --since 6h ```
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+```bash
+sudo iotedge support-bundle --since 6h
+```
+
+<!-- end 1.2 -->
+ You can also use a [direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics) call to your device to upload the output of the support-bundle command to Azure Blob Storage. > [!WARNING]
This command will output all the edgeAgent [reported properties](./module-edgeag
The [IoT Edge security manager](iot-edge-security-manager.md) is responsible for operations like initializing the IoT Edge system at startup and provisioning devices. If IoT Edge isn't starting, the security manager logs may provide useful information.
-On Linux:
- <!-- 1.1 --> :::moniker range="iotedge-2018-06"
+On Linux:
* View the status of the IoT Edge security
On Linux:
sudo systemctl daemon-reload
sudo systemctl restart iotedge
```
-<!--end 1.1 -->
-
-<!-- 1.2 -->
-
-* View the status of the IoT Edge system
-
- ```bash
- sudo iotedge system status
- ```
-
-* View the logs of the IoT Edge system
-
- ```bash
- sudo iotedge system logs -- -f
- ```
-
-* Enable debug-level logs to view more detailed logs of the IoT Edge system
-
- 1. Enable debug-level logs.
-
- ```bash
- sudo iotedge system set-log-level debug
- sudo iotedge system restart
- ```
-
- 1. Switch back to the default info-level logs after debugging.
-
- ```bash
- sudo iotedge system set-log-level info
- sudo iotedge system restart
- ```
-
-<!-- end 1.2 -->
On Windows:
On Windows:
Restart-Service iotedge ```
+<!--end 1.1 -->
+
+<!-- 1.2 -->
+
+* View the status of the IoT Edge system
+
+ ```bash
+ sudo iotedge system status
+ ```
+
+* View the logs of the IoT Edge system
+
+ ```bash
+ sudo iotedge system logs -- -f
+ ```
+
+* Enable debug-level logs to view more detailed logs of the IoT Edge system
+
+ 1. Enable debug-level logs.
+
+ ```bash
+ sudo iotedge system set-log-level debug
+ sudo iotedge system restart
+ ```
+
+ 1. Switch back to the default info-level logs after debugging.
+
+ ```bash
+ sudo iotedge system set-log-level info
+ sudo iotedge system restart
+ ```
+
+<!-- end 1.2 -->
+## Check container logs for issues

Once the IoT Edge security daemon is running, look at the logs of the containers to detect issues. Start with your deployed containers, then look at the containers that make up the IoT Edge runtime: edgeAgent and edgeHub. The IoT Edge agent logs typically provide info on the lifecycle of each container. The IoT Edge hub logs provide info on messaging and routing.
iot-edge Tutorial C Module Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-c-module-windows.md
Last updated 05/28/2019
-
+monikerRange: "=iotedge-2018-06"
# Tutorial: Develop C IoT Edge modules using Windows containers
This article shows you how to use Visual Studio to develop C code and deploy it to a Windows device that's running Azure IoT Edge. >[!NOTE]
->IoT Edge 1.1 LTS is the last release channel that will support Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
+>IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
You can use Azure IoT Edge modules to deploy code that implements your business logic directly in your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data.
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-c-module.md
Use the following table to understand your options for developing and deploying
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:

* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
+* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Csharp Module Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-csharp-module-windows.md
Last updated 08/03/2020
-
+monikerRange: "=iotedge-2018-06"
# Tutorial: Develop C# IoT Edge modules using Windows containers
This article shows you how to use Visual Studio to develop C# code and deploy it to a Windows device that's running Azure IoT Edge. >[!NOTE]
->IoT Edge 1.1 LTS is the last release channel that will support Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
+>IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
You can use Azure IoT Edge modules to deploy code that implements your business logic directly in your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data.
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-csharp-module.md
Use the following table to understand your options for developing and deploying
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment, [Develop an IoT Edge module using Linux containers](tutorial-develop-for-linux.md). After completing that tutorial, you already should have the following prerequisites:

* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
+* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-custom-vision.md
In this tutorial, you learn how to:
Before beginning this tutorial, you should have gone through the previous tutorial to set up your environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:

* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
+* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-function.md
The Azure Function that you create in this tutorial filters the temperature data
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:

* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
+* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Deploy Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-machine-learning.md
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-Use Azure Notebooks to develop a machine learning module and deploy it to a Linux device running Azure IoT Edge.
+Use Azure Notebooks to develop a machine learning module and deploy it to a device running Azure IoT Edge with Linux containers.
You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through deploying an Azure Machine Learning module that predicts when a device fails based on simulated machine temperature data. For more information about Azure Machine Learning on IoT Edge, see [Azure Machine Learning documentation](../machine-learning/how-to-deploy-and-where.md). >[!NOTE]
iot-edge Tutorial Develop For Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-develop-for-windows.md
+monikerRange: "=iotedge-2018-06"
# Tutorial: Develop IoT Edge modules using Windows containers
Use Visual Studio to develop and deploy code to Windows devices running IoT Edge. >[!NOTE]
->IoT Edge 1.1 LTS is the last release channel that will support Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
+>IoT Edge 1.1 LTS is the last release channel that supports Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
This tutorial walks through what it takes to develop and deploy your own code to an IoT Edge device. This tutorial is a useful prerequisite for the other tutorials, which go into more detail about specific programming languages or Azure services.
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-java-module.md
Use the following table to understand your options for developing and deploying
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing either of those tutorials, you should have the following prerequisites in place:

* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
+* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Node Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-node-module.md
Last updated 07/30/2020

# Tutorial: Develop and deploy a Node.js IoT Edge module using Linux containers
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-store-data-sql-server.md
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-Deploy a SQL Server module to store data on a Linux device running Azure IoT Edge.
+Deploy a SQL Server module to store data on a device running Azure IoT Edge with Linux containers.
Use Azure IoT Edge and SQL Server to store and query data at the edge. Azure IoT Edge has basic storage capabilities to cache messages if a device goes offline, and then forward them when the connection is reestablished. However, you may want more advanced storage capabilities, like being able to query data locally. Your IoT Edge devices can use local databases to perform more complex computing without having to maintain a connection to IoT Hub.
In this tutorial, you learn how to:
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:

* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* An AMD64 device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
+* An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* ARM devices, like Raspberry Pis, cannot run SQL Server. If you want to use SQL on an ARM device, you can sign up to try [Azure SQL Edge](https://azure.microsoft.com/services/sql-edge/) in preview.
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-tls-support.md
- Last updated 01/14/2020
+ Last updated 03/31/2021
TLS 1.0 and 1.1 are considered legacy and are planned for deprecation. For more
During a TLS handshake, IoT Hub presents RSA-keyed server certificates to connecting clients. Its root is the Baltimore Cybertrust Root CA. Recently, we rolled out a change to our TLS server certificate so that it is now issued by new intermediate certificate authorities (ICA). For more information, see [IoT Hub TLS certificate update](https://azure.microsoft.com/updates/iot-hub-tls-certificate-update/).
+### 4KB size limit on renewal
+
+During renewal of IoT Hub server-side certificates, the IoT Hub service checks that the `Server Hello` does not exceed 4 KB in size. A client should allocate at least 4 KB for its incoming TLS maximum content length buffer, so that existing devices configured with a 4-KB limit continue to work as before after certificate renewals. For constrained devices, IoT Hub supports [TLS maximum fragment length negotiation in preview](#tls-maximum-fragment-length-negotiation-preview).
+ ### Elliptic Curve Cryptography (ECC) server TLS certificate (preview) IoT Hub ECC server TLS certificate is available for public preview. While offering similar security to RSA certificates, ECC certificate validation (with ECC-only cipher suites) uses up to 40% less compute, memory, and bandwidth. These savings are important for IoT devices because of their smaller profiles and memory, and to support use cases in network bandwidth limited environments.
load-balancer Tutorial Cross Region Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-cross-region-portal.md
Create the backend address pool **myBackendPool-CR** to include the regional sta
:::image type="content" source="./media/tutorial-cross-region-portal/add-to-backendpool.png" alt-text="Add regional load balancers to backendpool" border="true":::
-## Create a health probe
-
-In this section, you'll create a health probe to create the load-balancing rule:
-
-* Named **myHealthProbe**.
-* Protocol **TCP**.
-* Interval of **5** seconds.
-* Unhealthy threshold of **two** failures.
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer-CR** from the resources list.
-
-2. Under **Settings**, select **Health probes**.
-
-3. Use these values to configure the health probe:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHealthProbe**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Interval | Enter **5**. |
- | Unhealthy threshold | Enter **2**. |
-
-4. Select **OK**.
-
- > [!NOTE]
- > Cross region load balancer has a built-in health probe. This probe is a placeholder for the load balancing rule creation to function. For more information, see **[Limitations of cross-region load balancer](cross-region-overview.md#limitations)**.
- ## Create a load balancer rule

In this section, you'll create a load balancer rule:
logic-apps Connect Virtual Network Vnet Isolated Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/connect-virtual-network-vnet-isolated-environment.md
ms.suite: integration
Previously updated : 12/18/2020
Last updated : 03/30/2021

# Connect to Azure virtual networks from Azure Logic Apps by using an integration service environment (ISE)
When you use an ISE with an Azure virtual network, a common setup problem is hav
To make sure that your ISE is accessible and that the logic apps in that ISE can communicate across each subnet in your virtual network, [open the ports described in this table for each subnet](#network-ports-for-ise). If any required ports are unavailable, your ISE won't work correctly.
-* If you have multiple ISE instances that need access to other endpoints that have IP restrictions, deploy an [Azure Firewall](../firewall/overview.md) or a [network virtual appliance](../virtual-network/virtual-networks-overview.md#filter-network-traffic) into your virtual network and route outbound traffic through that firewall or network virtual appliance. You can then [set up a single, outbound, public, static, and predictable IP address](connect-virtual-network-vnet-set-up-single-ip-address.md) that all the ISE instances in your virtual network can use to communicate with destination systems. That way, you don't have to set up additional firewall openings at those destination systems for each ISE.
+* If you have multiple ISE instances that need access to other endpoints that have IP restrictions, deploy an [Azure Firewall](../firewall/overview.md) or a [network virtual appliance](../virtual-network/virtual-networks-overview.md#filter-network-traffic) into your virtual network and route outbound traffic through that firewall or network virtual appliance. You can then [set up a single, outbound, public, static, and predictable IP address](connect-virtual-network-vnet-set-up-single-ip-address.md) that all the ISE instances in your virtual network can use to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE.
> [!NOTE]
> You can use this approach for a single ISE when your scenario requires limiting the
- > number of IP addresses that need access. Consider whether the additional costs for
+ > number of IP addresses that need access. Consider whether the extra costs for
> the firewall or virtual network appliance make sense for your scenario. Learn more about
> [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/).
To make sure that your ISE is accessible and that the logic apps in that ISE can
When you set up [NSG security rules](../virtual-network/network-security-groups-overview.md#security-rules), you need to use *both* the **TCP** and **UDP** protocols, or you can select **Any** instead so you don't have to create separate rules for each protocol. NSG security rules describe the ports that you must open for the IP addresses that need access to those ports. Make sure that any firewalls, routers, or other items that exist between these endpoints also keep those ports accessible to those IP addresses.
-* If you set up forced tunneling through your firewall to redirect Internet-bound traffic, review the [additional forced tunneling requirements](#forced-tunneling).
+* If you set up forced tunneling through your firewall to redirect Internet-bound traffic, review the [forced tunneling requirements](#forced-tunneling).
<a name="network-ports-for-ise"></a>
This table describes the ports that your ISE requires to be accessible and the p
|---|---|---|---|---|---|
| Intersubnet communication within virtual network | Address space for the virtual network with ISE subnets | * | Address space for the virtual network with ISE subnets | * | Required for traffic to flow *between* the subnets in your virtual network. <p><p>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
| Both: <p>Communication to your logic app <p><p>Runs history for logic app | Internal ISE: <br>**VirtualNetwork** <p><p>External ISE: **Internet** or see **Notes** | * | **VirtualNetwork** | 443 | Rather than use the **Internet** service tag, you can specify the source IP address for these items: <p><p>- The computer or service that calls any request triggers or webhooks in your logic app <p>- The computer or service from where you want to access logic app runs history <p><p>**Important**: Closing or blocking this port prevents calls to logic apps that have request triggers or webhooks. You're also prevented from accessing inputs and outputs for each step in runs history. However, you're not prevented from accessing logic app runs history. |
-| Logic Apps designer - dynamic properties | **LogicAppsManagement** | * | **VirtualNetwork** | 454 | Requests come from the Logic Apps access endpoint's [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#inbound) for that region. |
-| Connector deployment | **AzureConnectors** | * | **VirtualNetwork** | 454 | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. |
-| Network health check | **LogicApps** | * | **VirtualNetwork** | 454 | Requests come from the Logic Apps access endpoint's [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#outbound) for that region. |
+| Logic Apps designer - dynamic properties | **LogicAppsManagement** | * | **VirtualNetwork** | 454 | Requests come from the Logic Apps access endpoint's [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#inbound) for that region. <p><p>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Logic Apps [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. |
+| Network health check | **LogicApps** | * | **VirtualNetwork** | 454 | Requests come from the Logic Apps access endpoint's [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#outbound) for that region. <p><p>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Logic Apps [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
+| Connector deployment | **AzureConnectors** | * | **VirtualNetwork** | 454 | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <p><p>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
| App Service Management dependency | **AppServiceManagement** | * | **VirtualNetwork** | 454, 455 ||
| Communication from Azure Traffic Manager | **AzureTrafficManager** | * | **VirtualNetwork** | Internal ISE: 454 <p><p>External ISE: 443 ||
| Both: <p>Connector policy deployment <p>API Management - management endpoint | **APIManagement** | * | **VirtualNetwork** | 3443 | For connector policy deployment, port access is required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. |
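The inbound entries in this table can be expressed as NSG security rules. A hedged sketch in the ARM `securityRules` shape (the service tags and ports mirror the table; rule names and priorities are arbitrary placeholders, not values Azure prescribes):

```python
# Sketch: inbound NSG rules mirroring the ISE port table above.
# Tag/port pairs come from the table; names and priorities are placeholders.
ISE_INBOUND = [
    ("LogicAppsManagement", "454"),
    ("LogicApps", "454"),
    ("AzureConnectors", "454"),
    ("AppServiceManagement", "454-455"),
    ("APIManagement", "3443"),
]

def nsg_rules(pairs, start_priority=300):
    return [
        {
            "name": f"Allow-{tag}-{ports}",
            "properties": {
                "priority": start_priority + 10 * i,
                "direction": "Inbound",
                "access": "Allow",
                "protocol": "*",  # "Any": covers both TCP and UDP in one rule
                "sourceAddressPrefix": tag,
                "sourcePortRange": "*",
                "destinationAddressPrefix": "VirtualNetwork",
                "destinationPortRange": ports,
            },
        }
        for i, (tag, ports) in enumerate(pairs)
    ]

rules = nsg_rules(ISE_INBOUND)
print(rules[0]["name"])  # Allow-LogicAppsManagement-454
```

Using `protocol: "*"` reflects the earlier guidance about selecting **Any** instead of creating separate TCP and UDP rules.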
In addition, you need to add outbound rules for [App Service Environment (ASE)](
#### Forced tunneling requirements
-If you set up or use [forced tunneling](../firewall/forced-tunneling.md) through your firewall, you have to permit additional external dependencies for your ISE. Forced tunneling lets you redirect Internet-bound traffic to a designated next hop, such as your virtual private network (VPN) or to a virtual appliance, rather than to the Internet so that you can inspect and audit outbound network traffic.
+If you set up or use [forced tunneling](../firewall/forced-tunneling.md) through your firewall, you have to permit extra external dependencies for your ISE. Forced tunneling lets you redirect Internet-bound traffic to a designated next hop, such as your virtual private network (VPN) or to a virtual appliance, rather than to the Internet so that you can inspect and audit outbound network traffic.
If you don't permit access for these dependencies, your ISE deployment fails and your deployed ISE stops working.
-* User defined routes
+* User-defined routes
To prevent asymmetric routing, you must define a route for every IP address listed below with **Internet** as the next hop.
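One route per dependency address can be generated mechanically. A hedged sketch in the ARM route-table shape, using documentation placeholder prefixes (TEST-NET addresses), not the real ISE dependency list from this article:

```python
# Sketch: one user-defined route per required dependency address,
# with "Internet" as the next hop. Route names and address prefixes
# here are illustrative placeholders only.
def udr_routes(prefixes):
    return [
        {
            "name": f"ise-dependency-{i}",
            "properties": {"addressPrefix": p, "nextHopType": "Internet"},
        }
        for i, p in enumerate(prefixes)
    ]

routes = udr_routes(["203.0.113.5/32", "198.51.100.7/32"])
print(routes[0]["properties"]["nextHopType"])  # Internet
```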
If you don't permit access for these dependencies, your ISE deployment fails and
| **Integration service environment name** | Yes | <*environment-name*> | Your ISE name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), and periods (`.`). | | **Location** | Yes | <*Azure-datacenter-region*> | The Azure datacenter region where to deploy your environment | | **SKU** | Yes | **Premium** or **Developer (No SLA)** | The ISE SKU to create and use. For differences between these SKUs, see [ISE SKUs](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level). <p><p>**Important**: This option is available only at ISE creation and can't be changed later. |
- | **Additional capacity** | Premium: <br>Yes <p><p>Developer: <br>Not applicable | Premium: <br>0 to 10 <p><p>Developer: <br>Not applicable | The number of additional processing units to use for this ISE resource. To add capacity after creation, see [Add ISE capacity](../logic-apps/ise-manage-integration-service-environment.md#add-capacity). |
+ | **Additional capacity** | Premium: <br>Yes <p><p>Developer: <br>Not applicable | Premium: <br>0 to 10 <p><p>Developer: <br>Not applicable | The number of extra processing units to use for this ISE resource. To add capacity after creation, see [Add ISE capacity](../logic-apps/ise-manage-integration-service-environment.md#add-capacity). |
| **Access endpoint** | Yes | **Internal** or **External** | The type of access endpoints to use for your ISE. These endpoints determine whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. <p><p>For example, if you want to use the following webhook-based triggers, make sure that you select **External**: <p><p>- Azure DevOps <br>- Azure Event Grid <br>- Common Data Service <br>- Office 365 <br>- SAP (ISE version) <p><p>Your selection also affects the way that you can view and access inputs and outputs in your logic app runs history. For