Updates from: 01/18/2021 04:04:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-amazon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-amazon.md
@@ -10,7 +10,7 @@ ms.service: active-directory
ms.workload: identity ms.topic: how-to ms.custom: project-no-code
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.author: mimart ms.subservice: B2C zone_pivot_groups: b2c-policy-type
@@ -32,7 +32,7 @@ zone_pivot_groups: b2c-policy-type
## Create an app in the Amazon developer console
-To use an Amazon account as a federated identity provider in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your [Amazon Developer Services and Technologies](https://developer.amazon.com). If you don't already have an Amazon account, you can sign up at [https://www.amazon.com/](https://www.amazon.com/).
+To enable sign-in for users with an Amazon account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Amazon Developer Services and Technologies](https://developer.amazon.com). For more information, see [Register for Login with Amazon](https://developer.amazon.com/docs/login-with-amazon/register-web.html). If you don't already have an Amazon account, you can sign up at [https://www.amazon.com/](https://www.amazon.com/).
> [!NOTE] > Use the following URLs in **step 8** below, replacing `your-tenant-name` with the name of your tenant. When entering your tenant name, use all lowercase letters, even if the tenant is defined with uppercase letters in Azure AD B2C.
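The tenant-name rule in the note above can be sketched with a short helper. (`b2c_redirect_url` is a hypothetical name used for illustration only, not part of any SDK; the URL pattern itself matches the `oauth2/authresp` callback endpoint shown in these articles.)

```python
def b2c_redirect_url(tenant_name: str) -> str:
    """Build the Azure AD B2C OAuth redirect (callback) URL for an identity provider."""
    # The tenant name must be all lowercase, even if the tenant
    # was registered with uppercase letters in Azure AD B2C.
    tenant = tenant_name.lower()
    return f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/oauth2/authresp"

# "Contoso" is a placeholder tenant name.
print(b2c_redirect_url("Contoso"))
# https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp
```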
@@ -177,7 +177,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add Amazon identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Amazon identity provider.
+1. Click the user flow for which you want to add the Amazon identity provider.
1. Under the **Social identity providers**, select **Amazon**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
@@ -199,4 +199,4 @@ Update the relying party (RP) file that initiates the user journey that you crea
1. Save your changes, upload the file, and then select the new policy in the list. 1. Make sure that Azure AD B2C application that you created is selected in the **Select application** field, and then test it by clicking **Run now**.
-::: zone-end
\ No newline at end of file
+::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-b2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/14/2021
+ms.date: 01/15/2021
ms.author: mimart ms.subservice: B2C ms.custom: fasttrack-edit, project-no-code
@@ -39,7 +39,7 @@ This article describes how to set up a federation with another Azure AD B2C tena
## Create an Azure AD B2C application
-To use an Azure AD B2C account as an [identity provider](openid-connect.md) in your Azure AD B2C tenant (for example, Contoso), in the other Azure AD B2C (for example, Fabrikam):
+To enable sign-in for users with an account from another Azure AD B2C tenant (for example, Fabrikam), in your Azure AD B2C tenant (for example, Contoso):
1. Create a [user flow](tutorial-create-user-flows.md), or a [custom policy](custom-policy-get-started.md). 1. Then create an application in your Azure AD B2C tenant, as described in this section.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-multi-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/04/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -36,7 +36,8 @@ This article shows you how to enable sign-in for users using the multi-tenant en
## Register an application
-To enable sign-in for users from a specific Azure AD organization, you need to register an application within the organizational Azure AD tenant.
+To enable sign-in for users with an Azure AD account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app).
+ 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Make sure you're using the directory that contains your organizational Azure AD tenant (for example, contoso.com). Select the **Directory + subscription filter** in the top menu, and then choose the directory that contains your tenant.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.author: mimart ms.subservice: B2C ms.custom: fasttrack-edit, project-no-code
@@ -34,7 +34,7 @@ This article shows you how to enable sign-in for users from a specific Azure AD
## Register an Azure AD app
-To enable sign-in for users from a specific Azure AD organization, you need to register an application within the organizational Azure AD tenant.
+To enable sign-in for users with an Azure AD account from a specific Azure AD organization in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app).
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Make sure you're using the directory that contains your organizational Azure AD tenant (for example, contoso.com). Select the **Directory + subscription filter** in the top menu, and then choose the directory that contains your Azure AD tenant.
@@ -234,7 +234,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add Azure AD identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Azure AD identity provider.
+1. Click the user flow for which you want to add the Azure AD identity provider.
1. Under the **Social identity providers**, select **Contoso Azure AD**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-facebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-facebook.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -32,7 +32,7 @@ zone_pivot_groups: b2c-policy-type
## Create a Facebook application
-To use a Facebook account as an [identity provider](authorization-code-flow.md) in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your tenant that represents it. If you don't already have a Facebook account, you can sign up at [https://www.facebook.com/](https://www.facebook.com/).
+To enable sign-in for users with a Facebook account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Facebook App Dashboard](https://developers.facebook.com/). For more information, see [App Development](https://developers.facebook.com/docs/development). If you don't already have a Facebook account, you can sign up at [https://www.facebook.com/](https://www.facebook.com/).
1. Sign in to [Facebook for developers](https://developers.facebook.com/) with your Facebook account credentials. 1. If you have not already done so, you need to register as a Facebook developer. To do this, select **Get Started** on the upper-right corner of the page, accept Facebook's policies, and complete the registration steps.
@@ -89,7 +89,7 @@ To use a Facebook account as an [identity provider](authorization-code-flow.md)
## Add Facebook identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Facebook identity provider.
+1. Click the user flow for which you want to add the Facebook identity provider.
1. Under the **Social identity providers**, select **Facebook**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-github.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -34,9 +34,9 @@ zone_pivot_groups: b2c-policy-type
## Create a GitHub OAuth application
-To use a GitHub account as an [identity provider](authorization-code-flow.md) in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your tenant that represents it. If you don't already have a GitHub account, you can sign up at [https://www.github.com/](https://www.github.com/).
+To enable sign-in for users with a GitHub account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [GitHub Developer](https://github.com/settings/developers) portal. For more information, see [Creating an OAuth App](https://docs.github.com/en/free-pro-team@latest/developers/apps/creating-an-oauth-app). If you don't already have a GitHub account, you can sign up at [https://www.github.com/](https://www.github.com/).
-1. Sign in to the [GitHub Developer](https://github.com/settings/developers) website with your GitHub credentials.
+1. Sign in to the [GitHub Developer](https://github.com/settings/developers) portal with your GitHub credentials.
1. Select **OAuth Apps** and then select **New OAuth App**. 1. Enter an **Application name** and your **Homepage URL**. 1. Enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **Authorization callback URL**. Replace `your-tenant-name` with the name of your Azure AD B2C tenant. Use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
@@ -214,7 +214,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add GitHub identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the GitHub identity provider.
+1. Click the user flow for which you want to add the GitHub identity provider.
1. Under the **Social identity providers**, select **GitHub**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-google https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-google.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -33,7 +33,7 @@ zone_pivot_groups: b2c-policy-type
## Create a Google application
-To use a Google account as an [identity provider](authorization-code-flow.md) in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your Google Developers Console. If you don't already have a Google account you can sign up at [https://accounts.google.com/SignUp](https://accounts.google.com/SignUp).
+To enable sign-in for users with a Google account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Google Developers Console](https://console.developers.google.com/). For more information, see [Setting up OAuth 2.0](https://support.google.com/googleapi/answer/6158849). If you don't already have a Google account, you can sign up at [https://accounts.google.com/SignUp](https://accounts.google.com/SignUp).
1. Sign in to the [Google Developers Console](https://console.developers.google.com/) with your Google account credentials. 1. In the upper-left corner of the page, select the project list, and then select **New Project**.
@@ -185,7 +185,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add Google identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Google identity provider.
+1. Click the user flow for which you want to add the Google identity provider.
1. Under the **Social identity providers**, select **Google**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-id-me https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-id-me.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/17/2020
+ms.date: 01/15/2021
ms.author: mimart ms.subservice: B2C zone_pivot_groups: b2c-policy-type
@@ -35,7 +35,7 @@ zone_pivot_groups: b2c-policy-type
## Create an ID.me application
-To use a ID.me account as an identity provider in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [ID.me Developer Resources for API & SDK](https://developers.id.me/). If you don't already have an ID.me developer account, you can sign up at [https://developers.id.me/registration/new](https://developers.id.me/registration/new).
+To enable sign-in for users with an ID.me account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [ID.me Developer Resources for API & SDK](https://developers.id.me/). For more information, see [OAuth Integration Guide](https://developers.id.me/documentation/oauth/overview/kyc). If you don't already have an ID.me developer account, you can sign up at [https://developers.id.me/registration/new](https://developers.id.me/registration/new).
1. Sign in to the [ID.me Developer Resources for API & SDK](https://developers.id.me/) with your ID.me account credentials. 1. Select **View My Applications**, and select **Continue**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-linkedin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-linkedin.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/17/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -32,7 +32,7 @@ zone_pivot_groups: b2c-policy-type
## Create a LinkedIn application
-To use a LinkedIn account as an [identity provider](authorization-code-flow.md) in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your tenant that represents it. If you don't already have a LinkedIn account, you can sign up at [https://www.linkedin.com/](https://www.linkedin.com/).
+To enable sign-in for users with a LinkedIn account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [LinkedIn Developers website](https://www.developer.linkedin.com/). For more information, see [Authorization Code Flow](https://docs.microsoft.com/linkedin/shared/authentication/authorization-code-flow). If you don't already have a LinkedIn account, you can sign up at [https://www.linkedin.com/](https://www.linkedin.com/).
1. Sign in to the [LinkedIn Developers website](https://www.developer.linkedin.com/) with your LinkedIn account credentials. 1. Select **My Apps**, and then click **Create app**.
@@ -228,7 +228,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add LinkedIn identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the LinkedIn identity provider.
+1. Click the user flow for which you want to add the LinkedIn identity provider.
1. Under the **Social identity providers**, select **LinkedIn**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-microsoft-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -32,7 +32,7 @@ zone_pivot_groups: b2c-policy-type
## Create a Microsoft account application
-To use a Microsoft account as an [identity provider](openid-connect.md) in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the Azure AD tenant. The Azure AD tenant is not the same as your Azure AD B2C tenant. If you don't already have a Microsoft account, you can get one at [https://www.live.com/](https://www.live.com/).
+To enable sign-in for users with a Microsoft account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Azure portal](https://portal.azure.com). For more information, see [Register an application with the Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app). If you don't already have a Microsoft account, you can get one at [https://www.live.com/](https://www.live.com/).
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD tenant.
@@ -206,7 +206,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add Microsoft identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Microsoft identity provider.
+1. Click the user flow for which you want to add the Microsoft identity provider.
1. Under the **Social identity providers**, select **Microsoft Account**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-qq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-qq.md
@@ -8,7 +8,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -27,7 +27,7 @@ zone_pivot_groups: b2c-policy-type
## Create a QQ application
-To use a QQ account as an identity provider in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your tenant that represents it. If you don't already have a QQ account, you can sign up at [https://ssl.zc.qq.com/en/index.html?type=1&ptlang=1033](https://ssl.zc.qq.com/en/index.html?type=1&ptlang=1033).
+To enable sign-in for users with a QQ account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [QQ developer portal](http://open.qq.com). If you don't already have a QQ account, you can sign up at [https://ssl.zc.qq.com](https://ssl.zc.qq.com/en/index.html?type=1&ptlang=1033).
### Register for the QQ developer program
@@ -185,7 +185,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add QQ identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the QQ identity provider.
+1. Click the user flow for which you want to add the QQ identity provider.
1. Under the **Social identity providers**, select **QQ**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-salesforce.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/05/2021
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -33,7 +33,7 @@ zone_pivot_groups: b2c-policy-type
## Create a Salesforce application
-To use a Salesforce account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your Salesforce **App Manager**. For more information, see [Configure Basic Connected App Settings](https://help.salesforce.com/articleView?id=connected_app_create_basics.htm), and [Enable OAuth Settings for API Integration](https://help.salesforce.com/articleView?id=connected_app_create_api_integration.htm)
+To enable sign-in for users with a Salesforce account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your Salesforce [App Manager](https://login.salesforce.com/). For more information, see [Configure Basic Connected App Settings](https://help.salesforce.com/articleView?id=connected_app_create_basics.htm) and [Enable OAuth Settings for API Integration](https://help.salesforce.com/articleView?id=connected_app_create_api_integration.htm).
1. [Sign in to Salesforce](https://login.salesforce.com/). 1. From the menu, select **Setup**.
@@ -206,7 +206,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add Salesforce identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Salesforce identity provider.
+1. Click the user flow for which you want to add the Salesforce identity provider.
1. Under the **Social identity providers**, select **Salesforce**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-wechat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-wechat.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -29,7 +29,7 @@ zone_pivot_groups: b2c-policy-type
## Create a WeChat application
-To use a WeChat account as an identity provider in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your tenant that represents it. If you don't already have a WeChat account, you can get information at [https://kf.qq.com/faq/161220Brem2Q161220uUjERB.html](https://kf.qq.com/faq/161220Brem2Q161220uUjERB.html).
+To enable sign-in for users with a WeChat account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [WeChat management center](https://open.weixin.qq.com/). If you don't already have a WeChat account, you can get information at [https://kf.qq.com](https://kf.qq.com/faq/161220Brem2Q161220uUjERB.html).
### Register a WeChat application
@@ -179,7 +179,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add WeChat identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the WeChat identity provider.
+1. Click the user flow for which you want to add the WeChat identity provider.
1. Under the **Social identity providers**, select **WeChat**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-weibo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-weibo.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -28,7 +28,7 @@ zone_pivot_groups: b2c-policy-type
## Create a Weibo application
-To use a Weibo account as an identity provider in Azure Active Directory B2C (Azure AD B2C), you need to create an application in your tenant that represents it. If you don't already have a Weibo account, you can sign up at [https://weibo.com/signup/signup.php?lang=en-us](https://weibo.com/signup/signup.php?lang=en-us).
+To enable sign-in for users with a Weibo account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the [Weibo developer portal](https://open.weibo.com/). If you don't already have a Weibo account, you can sign up at [https://weibo.com](https://weibo.com/signup/signup.php?lang=en-us).
1. Sign in to the [Weibo developer portal](https://open.weibo.com/) with your Weibo account credentials. 1. After signing in, select your display name in the top-right corner.
@@ -259,7 +259,7 @@ Now that you have a button in place, you need to link it to an action. The actio
## Add Weibo identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Weibo identity provider.
+1. Click the user flow for which you want to add the Weibo identity provider.
1. Under the **Social identity providers**, select **Weibo**. 1. Select **Save**. 1. To test your policy, select **Run user flow**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-beta-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-beta-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 04/24/2019
+ms.date: 12/18/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with Zscaler Beta
@@ -21,9 +21,6 @@ When you integrate Zscaler Beta with Azure AD, you can:
* Allow your users to be automatically signed in to Zscaler Beta with their Azure AD accounts. This access control is called single sign-on (SSO). * Manage your accounts in one central location by using the Azure portal.
-For more information about software as a service (SaaS) app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
- ## Prerequisites To configure Azure AD integration with Zscaler Beta, you need the following items:
@@ -35,68 +32,47 @@ To configure Azure AD integration with Zscaler Beta, you need the following item
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Zscaler Beta supports SP-initiated SSO.
-* Zscaler Beta supports just-in-time user provisioning.
-
-## Add Zscaler Beta from the Azure Marketplace
-
-To configure the integration of Zscaler Beta into Azure AD, add Zscaler Beta from the Azure Marketplace to your list of managed SaaS apps.
-
-To add Zscaler Beta from the Azure Marketplace, follow these steps.
-
-1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.
-
- ![Azure Active Directory button](common/select-azuread.png)
-
-2. Go to **Enterprise applications**, and then select **All applications**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add a new application, select **New application** at the top of the dialog box.
-
- ![New application button](common/add-new-app.png)
-
-4. In the search box, enter **Zscaler Beta**. Select **Zscaler Beta** from the result panel, and then select **Add**.
-
- ![Zscaler Beta in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Zscaler Beta based on the test user Britta Simon.
-For single sign-on to work, establish a link relationship between an Azure AD user and the related user in Zscaler Beta.
+* Zscaler Beta supports **SP** initiated SSO.
+* Zscaler Beta supports **Just In Time** user provisioning.
-To configure and test Azure AD single sign-on with Zscaler Beta, complete the following building blocks:
+## Adding Zscaler Beta from the gallery
-- [Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on) to enable your users to use this feature.
-- [Configure Zscaler Beta single sign-on](#configure-zscaler-beta-single-sign-on) to configure the single sign-on settings on the application side.
-- [Create an Azure AD test user](#create-an-azure-ad-test-user) to test Azure AD single sign-on with Britta Simon.
-- [Assign the Azure AD test user](#assign-the-azure-ad-test-user) to enable Britta Simon to use Azure AD single sign-on.
-- [Create a Zscaler Beta test user](#create-a-zscaler-beta-test-user) to have a counterpart of Britta Simon in Zscaler Beta that's linked to the Azure AD representation of the user.
-- [Test single sign-on](#test-single-sign-on) to verify whether the configuration works.
+To configure the integration of Zscaler Beta into Azure AD, you need to add Zscaler Beta from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zscaler Beta** in the search box.
+1. Select **Zscaler Beta** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Zscaler Beta
-To configure Azure AD single sign-on with Zscaler Beta, follow these steps.
+Configure and test Azure AD SSO with Zscaler Beta using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zscaler Beta.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zscaler Beta** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Zscaler Beta, perform the following steps:
- ![Configure Single sign-on link](common/select-sso.png)
-2. In the **Select a single sign-on method** dialog box, select the **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zscaler Beta SSO](#configure-zscaler-beta-sso)** - to configure the Single Sign-On settings on the application side.
+ 1. **[Create Zscaler Beta test user](#create-zscaler-beta-test-user)** - to have a counterpart of B.Simon in Zscaler Beta that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, select **Edit** to open the **Basic SAML Configuration** dialog box.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Zscaler Beta** application integration page, find the **Manage** section and select **Single sign-on**.
+1. On the **Select a Single sign-on method** page, select **SAML**.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. In the **Basic SAML Configuration** section, follow this step:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Zscaler Beta domain and URLs single sign-on information](common/sp-intiated.png)
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
- - In the **Sign on URL** box, enter the URL used by your users to sign in to your Zscaler Beta application.
+ In the **Sign on URL** box, enter the URL used by your users to sign in to your Zscaler Beta application.
> [!NOTE]
> The value isn't real. Update the value with the actual Sign on URL value. To get the value, contact the [Zscaler Beta client support team](https://www.zscaler.com/company/contact).
@@ -113,10 +89,6 @@ To configure Azure AD single sign-on with Zscaler Beta, follow these steps.
a. Select **Add new claim** to open the **Manage user claims** dialog box.
- ![User claims dialog box](common/new-save-attribute.png)
-
- ![Manage user claims dialog box](common/new-attribute-details.png)
b. In the **Name** box, enter the attribute name shown for that row.

c. Leave the **Namespace** box blank.
@@ -129,8 +101,8 @@ To configure Azure AD single sign-on with Zscaler Beta, follow these steps.
g. Select **Save**.
- > [!NOTE]
- > To learn how to configure roles in Azure AD, see [Configure the role claim](../develop/active-directory-enterprise-app-role-management.md).
+ > [!NOTE]
+ > To learn how to configure roles in Azure AD, see [how to configure app roles](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
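As an illustration of what the claim configured above produces, the following sketch builds the kind of SAML `AttributeStatement` fragment that ends up in the assertion. The claim name `memberOf` and the role value `Zscaler-Admin` are assumed placeholders for illustration, not values mandated by Zscaler or Azure AD.

```python
import xml.etree.ElementTree as ET

# Namespace used by SAML 2.0 assertions.
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

# Build the AttributeStatement fragment a role claim produces.
# "memberOf" and "Zscaler-Admin" are illustrative placeholders.
attr_statement = ET.Element(f"{{{SAML}}}AttributeStatement")
attribute = ET.SubElement(attr_statement, f"{{{SAML}}}Attribute", Name="memberOf")
value = ET.SubElement(attribute, f"{{{SAML}}}AttributeValue")
value.text = "Zscaler-Admin"

print(ET.tostring(attr_statement).decode())
```

Inspecting a captured SAML response (for example with browser developer tools) for a fragment of this shape is a quick way to confirm the claim is actually being emitted.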
7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select **Download** to download the **Certificate (Base64)**. Save it on your computer.
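Before uploading the downloaded **Certificate (Base64)** file elsewhere, a quick local check can catch a truncated or corrupted download. This is a minimal stdlib sketch of a structural sanity check, not certificate validation:

```python
import base64
import binascii

def pem_cert_looks_valid(pem: str) -> bool:
    """Rough sanity check of a Certificate (Base64) file: the PEM markers are
    present and the payload decodes to DER (an ASN.1 SEQUENCE, whose first
    byte is 0x30). This does not verify the certificate's contents."""
    header, footer = "-----BEGIN CERTIFICATE-----", "-----END CERTIFICATE-----"
    if header not in pem or footer not in pem:
        return False
    body = pem.split(header, 1)[1].split(footer, 1)[0]
    try:
        der = base64.b64decode("".join(body.split()), validate=True)
    except (binascii.Error, ValueError):
        return False
    return len(der) > 0 and der[0] == 0x30  # DER SEQUENCE tag

# Demo with a fabricated DER-like payload (not a real certificate).
fake_der = b"\x30\x10" + b"\x00" * 16
fake_pem = (
    "-----BEGIN CERTIFICATE-----\n"
    + base64.b64encode(fake_der).decode()
    + "\n-----END CERTIFICATE-----\n"
)
print(pem_cert_looks_valid(fake_pem))  # True
```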
@@ -140,11 +112,31 @@ To configure Azure AD single sign-on with Zscaler Beta, follow these steps.
![Copy configuration URLs](common/copy-configuration-urls.png)
- - Login URL
- - Azure AD Identifier
- - Logout URL
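The values copied in this step follow a predictable tenant-specific shape. The sketch below assumes the commonly observed Azure AD SAML endpoint patterns; always prefer the exact values copied from the portal, since they are authoritative for your tenant.

```python
def azure_ad_saml_urls(tenant_id: str) -> dict:
    """Typical shapes of the values shown in the app's SAML setup section.
    These are the commonly observed defaults, not guaranteed for every
    tenant - copy the real values from the Azure portal."""
    return {
        "Login URL": f"https://login.microsoftonline.com/{tenant_id}/saml2",
        "Azure AD Identifier": f"https://sts.windows.net/{tenant_id}/",
        "Logout URL": f"https://login.microsoftonline.com/{tenant_id}/saml2",
    }

# Placeholder tenant ID for illustration.
urls = azure_ad_saml_urls("11111111-2222-3333-4444-555555555555")
print(urls["Login URL"])
```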
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
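The portal steps above correspond to creating a Microsoft Graph `user` object (`POST /users`). This is a sketch of the request body only; the domain and password are placeholders, not values from this tutorial.

```python
import json

# Shape of the Graph user object the portal steps create.
# "contoso.com" and the password value are illustrative placeholders.
new_user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "B.Simon",
    "userPrincipalName": "B.Simon@contoso.com",
    "passwordProfile": {
        "password": "<strong-generated-password>",
        "forceChangePasswordNextSignIn": True,
    },
}

print(json.dumps(new_user, indent=2))
```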
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zscaler Beta.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zscaler Beta**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you have set up roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
+1. In the **Add Assignment** dialog, click the **Assign** button.
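Behind the portal steps above, an `appRoleAssignment` object is created through Microsoft Graph (`POST /users/{user-id}/appRoleAssignments`). A sketch of its shape follows; all three GUIDs are placeholders for illustration.

```python
import json

# principalId - object ID of the user (B.Simon)          [placeholder]
# resourceId  - object ID of the app's service principal  [placeholder]
# appRoleId   - ID of the selected app role; an all-zeros
#               GUID is the conventional "default access" role
assignment = {
    "principalId": "aaaaaaaa-0000-0000-0000-000000000001",
    "resourceId": "bbbbbbbb-0000-0000-0000-000000000002",
    "appRoleId": "00000000-0000-0000-0000-000000000000",
}

print(json.dumps(assignment))
```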
-### Configure Zscaler Beta single sign-on
+## Configure Zscaler Beta SSO
1. To automate the configuration within Zscaler Beta, install **My Apps Secure Sign-in browser extension** by selecting **Install the extension**.
@@ -223,77 +215,24 @@ To configure the proxy settings in Internet Explorer, follow these steps.
6. Select **OK** to close the **Internet Options** dialog box.
-### Create an Azure AD test user
-
-Create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory** > **Users** > **All users**.
-
- ![Users and All users links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user button](common/new-user.png)
-
-3. In the **User** dialog box, follow these steps:
-
- ![User dialog box](common/user-properties.png)
-
- a. In the **Name** box, enter **BrittaSimon**.
-
- b. In the **User name** box, enter `brittasimon@yourcompanydomain.extension`. An example is BrittaSimon@contoso.com.
-
- c. Select the **Show password** check box. Write down the value that displays in the **Password** box.
-
- d. Select **Create**.
-
-### Assign the Azure AD test user
-
-Enable Britta Simon to use Azure single sign-on by granting access to Zscaler Beta.
-
-1. In the Azure portal, select **Enterprise applications** > **All applications** > **Zscaler Beta**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, enter and select **Zscaler Beta**.
-
- ![Zscaler Beta link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![Users and groups link](common/users-groups-blade.png)
-
-4. Select **Add user**. In the **Add Assignment** dialog box, select **Users and groups**.
-
- ![Add user button](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog box, select the user like **Britta Simon** from the list. Then choose **Select** at the bottom of the screen.
-
- ![Users and groups dialog box](./media/zscaler-beta-tutorial/tutorial_zscalerbeta_users.png)
-
-6. In the **Select Role** dialog box, select the appropriate user role in the list. Then choose **Select** at the bottom of the screen.
-
- ![Select Role dialog box](./media/zscaler-beta-tutorial/tutorial_zscalerbeta_roles.png)
-
-7. In the **Add Assignment** dialog box, select **Assign**.
-
- ![Add Assignment dialog box](./media/zscaler-beta-tutorial/tutorial_zscalerbeta_assign.png)
-
-### Create a Zscaler Beta test user
+### Create Zscaler Beta test user
In this section, the user Britta Simon is created in Zscaler Beta. Zscaler Beta supports **just-in-time user provisioning**, which is enabled by default. There's nothing for you to do in this section. If a user doesn't already exist in Zscaler Beta, a new one is created after authentication.

>[!Note]
>To create a user manually, contact the [Zscaler Beta support team](https://www.zscaler.com/company/contact).
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Zscaler Beta Sign-on URL, where you can initiate the login flow.
+
+* Go to the Zscaler Beta Sign-on URL directly and initiate the login flow from there.
-Test your Azure AD single sign-on configuration by using the Access Panel.
+* You can use Microsoft My Apps. When you click the Zscaler Beta tile in My Apps, you will be redirected to the Zscaler Beta Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
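For reference, the SP-initiated flow these options trigger begins with a SAML AuthnRequest sent over the HTTP-Redirect binding: the request XML is raw-DEFLATE compressed, Base64 encoded, and URL-encoded as the `SAMLRequest` parameter. The XML below is a minimal illustration, not a complete AuthnRequest.

```python
import base64
import urllib.parse
import zlib

def saml_redirect_url(idp_login_url: str, authn_request_xml: str) -> str:
    """Build an SP-initiated HTTP-Redirect binding URL: raw DEFLATE,
    then Base64, then URL-encode the result as SAMLRequest."""
    # zlib.compress adds a 2-byte header and 4-byte checksum; strip them
    # to get the raw DEFLATE stream the binding requires.
    compressed = zlib.compress(authn_request_xml.encode(), 9)[2:-4]
    saml_request = base64.b64encode(compressed).decode()
    return idp_login_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

# Minimal illustrative request; real AuthnRequests carry an ID, timestamps,
# and the issuer registered for the application.
xml = '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"/>'
url = saml_redirect_url("https://login.microsoftonline.com/<tenant-id>/saml2", xml)
print(url.split("?", 1)[0])
```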
-When you select the Zscaler Beta tile in the Access Panel, you should be automatically signed in to the Zscaler Beta for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-## Additional resources
+## Next steps
-- [List of tutorials on how to integrate SaaS apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
\ No newline at end of file
+Once you configure Zscaler Beta you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-internet-access-administrator-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-internet-access-administrator-tutorial.md
@@ -9,20 +9,16 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 01/17/2019
+ms.date: 12/18/2020
ms.author: jeedes
---

# Tutorial: Azure Active Directory integration with Zscaler Internet Access Administrator
-In this tutorial, you learn how to integrate Zscaler Internet Access Administrator with Azure Active Directory (Azure AD).
-Integrating Zscaler Internet Access Administrator with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Zscaler Internet Access Administrator with Azure Active Directory (Azure AD). When you integrate Zscaler Internet Access Administrator with Azure AD, you can:
-* You can control in Azure AD who has access to Zscaler Internet Access Administrator.
-* You can enable your users to be automatically signed-in to Zscaler Internet Access Administrator (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Zscaler Internet Access Administrator.
+* Enable your users to be automatically signed-in to Zscaler Internet Access Administrator with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
@@ -44,59 +40,37 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of Zscaler Internet Access Administrator into Azure AD, you need to add Zscaler Internet Access Administrator from the gallery to your list of managed SaaS apps.
-**To add Zscaler Internet Access Administrator from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Zscaler Internet Access Administrator**, select **Zscaler Internet Access Administrator** from result panel then click **Add** button to add the application.
-
- ![Zscaler Internet Access Administrator in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Zscaler Internet Access Administrator based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Zscaler Internet Access Administrator needs to be established.
-
-To configure and test Azure AD single sign-on with Zscaler Internet Access Administrator, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zscaler Internet Access Administrator** in the search box.
+1. Select **Zscaler Internet Access Administrator** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Zscaler Internet Access Administrator Single Sign-On](#configure-zscaler-internet-access-administrator-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Zscaler Internet Access Administrator test user](#create-zscaler-internet-access-administrator-test-user)** - to have a counterpart of Britta Simon in Zscaler Internet Access Administrator that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for Zscaler Internet Access Administrator
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with Zscaler Internet Access Administrator using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zscaler Internet Access Administrator.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with Zscaler Internet Access Administrator, perform the following steps:
-To configure Azure AD single sign-on with Zscaler Internet Access Administrator, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure Zscaler Internet Access Administrator SSO](#configure-zscaler-internet-access-administrator-sso)** - to configure the Single Sign-On settings on the application side.
+ 1. **[Create Zscaler Internet Access Administrator test user](#create-zscaler-internet-access-administrator-test-user)** - to have a counterpart of B.Simon in Zscaler Internet Access Administrator that is linked to the Azure AD representation of the user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zscaler Internet Access Administrator** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **Zscaler Internet Access Administrator** application integration page, find the **Manage** section and select **Single sign-on**.
+1. On the **Select a Single sign-on method** page, select **SAML**.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **Basic SAML Configuration** dialog.
-
- ![Zscaler Internet Access Administrator Domain and URLs single sign-on information](common/idp-intiated.png)
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
a. In the **Identifier** text box, type a URL as per your requirement:
@@ -132,10 +106,6 @@ To configure Azure AD single sign-on with Zscaler Internet Access Administrator,
a. Click **Add new claim** to open the **Manage user claims** dialog.
- ![Screenshot shows User claims with the option to Add new claim.](./common/new-save-attribute.png)
-
- ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](./common/new-attribute-details.png)
b. From the **Source attribute** list, select the attribute value.

c. Click **Ok**.
@@ -143,7 +113,7 @@ To configure Azure AD single sign-on with Zscaler Internet Access Administrator,
d. Click **Save**.

> [!NOTE]
- > Please click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to configure Role in Azure AD
+ > To learn how to configure roles in Azure AD, see [how to configure app roles](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
@@ -153,13 +123,32 @@ To configure Azure AD single sign-on with Zscaler Internet Access Administrator,
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- b. Azure Ad Identifier
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zscaler Internet Access Administrator.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zscaler Internet Access Administrator**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you have set up roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- c. Logout URL
-### Configure Zscaler Internet Access Administrator Single Sign-On
+## Configure Zscaler Internet Access Administrator SSO
1. In a different web browser window, log in to your Zscaler Internet Access Admin UI.
@@ -181,57 +170,6 @@ To configure Azure AD single sign-on with Zscaler Internet Access Administrator,
b. Click **Activate**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zscaler Internet Access Administrator.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zscaler Internet Access Administrator**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, type and select **Zscaler Internet Access Administrator**.
-
- ![The Zscaler Internet Access Administrator link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
-
### Create Zscaler Internet Access Administrator test user

The objective of this section is to create a user called Britta Simon in Zscaler Internet Access Administrator. Zscaler Internet Access does not support Just-In-Time provisioning for Administrator SSO. You are required to manually create an Administrator account.
@@ -239,16 +177,14 @@ For steps on how to create an Administrator account, refer to Zscaler documentat
https://help.zscaler.com/zia/adding-admins
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Zscaler Internet Access Administrator tile in the Access Panel, you should be automatically signed in to the Zscaler Internet Access Admin UI for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the Zscaler Internet Access Administrator for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Zscaler Internet Access Administrator tile in My Apps, you should be automatically signed in to the Zscaler Internet Access Administrator for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Zscaler Internet Access Administrator you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 08/13/2019
+ms.date: 12/18/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Zscaler with Azure Active Direct
* Enable your users to be automatically signed-in to Zscaler with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -41,18 +39,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Zscaler into Azure AD, you need to add Zscaler from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Zscaler** in the search box.
1. Select **Zscaler** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Zscaler
+## Configure and test Azure AD SSO for Zscaler
Configure and test Azure AD SSO with Zscaler using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zscaler.
-To configure and test Azure AD SSO with Zscaler, complete the following building blocks:
+To configure and test Azure AD SSO with Zscaler, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -65,9 +63,9 @@ To configure and test Azure AD SSO with Zscaler, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zscaler** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Zscaler** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -102,7 +100,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
f. Click **Save**.

> [!NOTE]
- > Please click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to configure Role in Azure AD
+ > To learn how to configure roles in Azure AD, see [how to configure app roles](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
@@ -126,35 +124,15 @@ In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zscaler.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zscaler**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Zscaler**.
-
- ![The Zscaler link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog, select the user like **Britta Simon** from the list, then click the **Select** button at the bottom of the screen.
-
- ![Screenshot shows the Users and groups dialog box where you can select a user.](./media/zscaler-tutorial/tutorial_zscaler_users.png)
-
-6. From the **Select Role** dialog choose the appropriate user role in the list, then click the **Select** button at the bottom of the screen.
-
- ![Screenshot shows the Select Role dialog box where you can choose a user role.](./media/zscaler-tutorial/tutorial_zscaler_roles.png)
-
-7. In the **Add Assignment** dialog select the **Assign** button.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zscaler.
- ![Screenshot shows the Add Assignment dialog box where you can select Assign.](./media/zscaler-tutorial/tutorial_zscaler_assign.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zscaler**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you have set up roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
+1. In the **Add Assignment** dialog, click the **Assign** button.
## Configure Zscaler SSO
@@ -245,16 +223,15 @@ In this section, a user called Britta Simon is created in Zscaler. Zscaler suppo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Zscaler tile in the Access Panel, you should be automatically signed in to the Zscaler for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect you to the Zscaler Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to Zscaler Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Zscaler tile in My Apps, you'll be redirected to the Zscaler Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Zscaler with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Zscaler, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-dashboard.md new file mode 100644
@@ -0,0 +1,56 @@
+---
+title: Error investigation using the dashboard
+description: This document contains information about error investigation using the dashboard
+ms.subservice: alerts
+ms.topic: conceptual
+author: nolavime
+ms.author: nolavime
+ms.date: 01/15/2021
+
+---
+
+# Error investigation using the dashboard
+
+This page contains information about the ITSM connector dashboard. The dashboard helps you investigate the status of your ITSM connector.
+
+## How to view the dashboard
+
+To view the errors in the dashboard, follow these steps:
+
+1. In **All resources**, look for **ServiceDesk(*your workspace name*)**:
+
+ ![Screenshot that shows recent resources in the Azure portal.](media/itsmc-definition/create-new-connection-from-resource.png)
+
+2. Under **Workspace Data Sources** in the left pane, select **ITSM Connections**:
+
+ ![Screenshot that shows the ITSM Connections menu item.](media/itsmc-overview/add-new-itsm-connection.png)
+
+3. Under **Summary** in the left box **IT Service Management Connector**, select **View Summary**:
+
+ ![Screenshot that shows view summary.](media/itsmc-resync-servicenow/dashboard-view-summary.png)
+
+4. Under **Summary** in the left box **IT Service Management Connector**, click on the graph:
+
+ ![Screenshot that shows graph click.](media/itsmc-resync-servicenow/dashboard-graph-click.png)
+
+5. Use this dashboard to review the status of your connector and any errors in it.
+ ![Screenshot that shows connector status.](media/itsmc-resync-servicenow/connector-dashboard.png)
+
+## Dashboard Elements
+
+The dashboard contains information about the alerts that were sent to the ITSM tool through this connector.
+The dashboard is split into four parts:
+
+1. Work Item Created: The graph and the table below it contain the count of work items per type. If you click the graph or the table, you can see more details about the work items.
+ ![Screenshot that shows work item created.](media/itsmc-resync-servicenow/itsm-dashboard-workitems.png)
+2. Impacted computers: The tables contain details about the configuration items for which work items were created.
+   By clicking rows in the tables, you can get further details on the configuration items.
+   The table contains a limited number of rows. If you would like to see the full list, click **See all**.
+ ![Screenshot that shows impacted computers.](media/itsmc-resync-servicenow/itsm-dashboard-impacted-comp.png)
+3. Connector status: The graph and the table below it contain messages about the status of the connector. By clicking the graph or rows in the table, you can get further details on the connector status messages.
+   The table contains a limited number of rows. If you would like to see the full list, click **See all**.
+ ![Screenshot that shows connector status.](media/itsmc-resync-servicenow/itsm-dashboard-connector-status.png)
+4. Alert rules: The tables contain information about the number of alert rules that were detected.
+   By clicking rows in the tables, you can get further details on the rules that were detected.
+   The table contains a limited number of rows. If you would like to see the full list, click **See all**.
+ ![Screenshot that shows alert rules.](media/itsmc-resync-servicenow/itsm-dashboard-alert-rules.png)
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-resync-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-resync-servicenow.md
@@ -26,47 +26,7 @@ You can visualize the incident and change request data by using the ITSMC dashbo
The dashboard also provides information about connector status, which you can use as a starting point to analyze problems with the connections.
-### Error Investigation using the dashboard
-
-In order to view the errors in the dashboard, you should follow the next steps:
-
-1. In **All resources**, look for **ServiceDesk(*your workspace name*)**:
-
- ![Screenshot that shows recent resources in the Azure portal.](media/itsmc-definition/create-new-connection-from-resource.png)
-
-2. Under **Workspace Data Sources** in the left pane, select **ITSM Connections**:
-
- ![Screenshot that shows the ITSM Connections menu item.](media/itsmc-overview/add-new-itsm-connection.png)
-
-3. Under **Summary** in the left box **IT Service Management Connector**, select **View Summary**:
-
- ![Screenshot that shows view summary.](media/itsmc-resync-servicenow/dashboard-view-summary.png)
-
-4. Under **Summary** in the left box **IT Service Management Connector**, click on the graph:
-
- ![Screenshot that shows graph click.](media/itsmc-resync-servicenow/dashboard-graph-click.png)
-
-5. Using this dashboard you will be able to review the status and the errors in your connector.
- ![Screenshot that shows connector status.](media/itsmc-resync-servicenow/connector-dashboard.png)
-
-### Dashboard Elements
-
-The dashboard contains information on the alerts that were sent into the ITSM tool using this connector.
-The dashboard is split into 4 parts:
-
-1. Work Item Created: The graph and the table below contain the count of the work item per type. If you click on the graph or on the table you can see more details about the work items.
- ![Screenshot that shows work item created.](media/itsmc-resync-servicenow/itsm-dashboard-workitems.png)
-2. Impacted computers: The tables contain details about the configuration items that created configuration items.
- By clicking on rows in the tables you can get further details on the configuration items.
- The table contain limited number of rows if you would like to see all the list you can click on "See all".
- ![Screenshot that shows impacted computers.](media/itsmc-resync-servicenow/itsm-dashboard-impacted-comp.png)
-3. Connector status: The graph and the table below contain messages about the status of the connector. By clicking on the graph on rows in the table you can get further details on the messages of the connector status.
- The table contain limited number of rows if you would like to see all the list you can click on "See all".
- ![Screenshot that shows connector status.](media/itsmc-resync-servicenow/itsm-dashboard-connector-status.png)
-4. Alert rules: The tables contain the information on the number of alert rules that were detected.
- By clicking on rows in the tables you can get further details on the rules that were detected.
- The table contain limited number of rows if you would like to see all the list you can click on "See all".
- ![Screenshot that shows alert rules.](media/itsmc-resync-servicenow/itsm-dashboard-alert-rules.png)
+For more information about dashboard investigation, see [Error investigation using the dashboard](./itsmc-dashboard.md).
### Service map
cdn https://docs.microsoft.com/en-us/azure/cdn/cdn-create-endpoint-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-create-endpoint-how-to.md
@@ -93,7 +93,7 @@ Log in to the [Azure portal](https://portal.azure.com) with your Azure account.
Because it takes time for the registration to propagate, the endpoint isn't immediately available for use: - For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes. - For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute.
- - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes within 90 minutes.
+ - For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes within 30 minutes.
If you attempt to use the CDN domain name before the endpoint configuration has propagated to the point-of-presence (POP) servers, you might receive an HTTP 404 response status. If it's been several hours since you created your endpoint and you're still receiving a 404 response status, see [Troubleshooting Azure CDN endpoints that return a 404 status code](cdn-troubleshoot-endpoint.md).
cdn https://docs.microsoft.com/en-us/azure/cdn/cdn-restrict-access-by-country https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-restrict-access-by-country.md
@@ -1,33 +1,29 @@
---
-title: Restrict Azure CDN content by country/region | Microsoft Docs
+title: Restrict Azure CDN content by country/region
description: Learn how to restrict access by country/region to your Azure CDN content by using the geo-filtering feature. services: cdn documentationcenter: '' author: asudbring
-manager: danielgi
-editor: ''
-
-ms.assetid: 12c17cc5-28ee-4b0b-ba22-2266be2e786a
ms.service: azure-cdn
-ms.workload: tbd
-ms.tgt_pltfrm: na
-ms.devlang: na
ms.topic: how-to
-ms.date: 06/19/2018
+ms.date: 01/16/2021
ms.author: allensu --- # Restrict Azure CDN content by country/region ## Overview
-When a user requests your content, by default, the content is served regardless of the location of the user making the request. However, in some cases, you may want to restrict access to your content by country/region. With the *geo-filtering* feature, you can create rules on specific paths on your CDN endpoint to allow or block content in selected countries/regions.
+When a user requests your content, the content is served to users in all locations. You may want to restrict access to your content by country/region.
+
+With the *geo-filtering* feature, you can create rules on specific paths on your CDN endpoint. You can set the rules to allow or block content in selected countries/regions.
> [!IMPORTANT] > **Azure CDN Standard from Microsoft** profiles do not support path-based geo-filtering. > ## Standard profiles
-The procedures in this section are for **Azure CDN Standard from Akamai** and **Azure CDN Standard from Verizon** profiles only.
+
+These instructions are for **Azure CDN Standard from Akamai** and **Azure CDN Standard from Verizon** profiles.
For **Azure CDN Premium from Verizon** profiles, you must use the **Manage** portal to activate geo-filtering. For more information, see [Azure CDN Premium from Verizon profiles](#azure-cdn-premium-from-verizon-profiles).
@@ -38,7 +34,7 @@ To access the geo-filtering feature, select your CDN endpoint within the portal,
From the **PATH** box, specify the relative path to the location to which users will be allowed or denied access.
-You can apply geo-filtering for all your files with a forward slash (/) or select specific folders by specifying directory paths (for example, */pictures/*). You can also apply geo-filtering to a single file (for example */pictures/city.png*). Multiple rules are allowed; after you enter a rule, a blank row appears for you to enter the next rule.
+You can apply geo-filtering for all your files with a forward slash (/) or select specific folders by specifying directory paths (for example, */pictures/*). You can also apply geo-filtering to a single file (for example */pictures/city.png*). Multiple rules are allowed. After you enter a rule, a blank row appears for you to enter the next rule.
For example, all of the following directory path filters are valid: */*
@@ -59,6 +55,7 @@ For example, a geo-filtering rule for blocking the path */Photos/Strasbourg/* fi
*http:\//\<endpoint>.azureedge.net/Photos/Strasbourg/Cathedral/1000.jpg* ### Define the countries/regions+ From the **COUNTRY CODES** list, select the countries/regions that you want to block or allow for the path. After you have finished selecting the countries/regions, select **Save** to activate the new geo-filtering rule.
@@ -66,45 +63,47 @@ After you have finished selecting the countries/regions, select **Save** to acti
![Screenshot shows COUNTRY CODES to use to block or allow countries or regions.](./media/cdn-filtering/cdn-geo-filtering-rules.png) ### Clean up resources+ To delete a rule, select it from the list on the **Geo-filtering** page, then choose **Delete**. ## Azure CDN Premium from Verizon profiles
-For **For Azure CDN Premium from Verizon** profiles, the user interface for creating a geo-filtering rule is different:
+
+For **Azure CDN Premium from Verizon** profiles, the user interface for creating a geo-filtering rule is different:
1. From the top menu in your Azure CDN profile, select **Manage**. 2. From the Verizon portal, select **HTTP Large**, then select **Country Filtering**.
- ![Screenshot shows how select Country Filtering in Azure C D N.](./media/cdn-filtering/cdn-geo-filtering-premium.png)
-
+ :::image type="content" source="./media/cdn-filtering/cdn-geo-filtering-premium.png" alt-text="Screenshot shows how to select country filtering in Azure CDN" border="true":::
+
3. Select **Add Country Filter**.
- The **Step One:** page appears.
+4. In **Step One**, enter the directory path. Select **Block** or **Add**, then select **Next**.
-4. Enter the directory path, select **Block** or **Add**, then select **Next**.
-
- The **Step Two:** page appears.
-
-5. Select one or more countries/regions from the list, then select **Finish** to activate the rule.
+ > [!IMPORTANT]
+ > The endpoint name must be in the path. Example: **/myendpoint8675/myfolder**. Replace **myendpoint8675** with the name of your endpoint.
+ >
+
+5. In **Step Two**, select one or more countries/regions from the list. Select **Finish** to activate the rule.
The new rule appears in the table on the **Country Filtering** page.-
- ![Screenshot shows where the rule appears in Country Filtering.](./media/cdn-filtering/cdn-geo-filtering-premium-rules.png)
-
+
+ :::image type="content" source="./media/cdn-filtering/cdn-geo-filtering-premium-rules.png" alt-text="Screenshot shows where the rule appears in country filtering." border="true":::
+
### Clean up resources In the country/region filtering rules table, select the delete icon next to a rule to delete it or the edit icon to modify it. ## Considerations
-* Changes to your geo-filtering configuration do not take effect immediately:
+* Changes to your geo-filtering configuration don't take effect immediately:
* For **Azure CDN Standard from Microsoft** profiles, propagation usually completes in 10 minutes. * For **Azure CDN Standard from Akamai** profiles, propagation usually completes within one minute. * For **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles, propagation usually completes in 10 minutes.
-* This feature does not support wildcard characters (for example, *).
+* This feature doesn't support wildcard characters (for example, *).
* The geo-filtering configuration associated with the relative path is applied recursively to that path.
-* Only one rule can be applied to the same relative path. That is, you cannot create multiple country/region filters that point to the same relative path. However, because country/region filters are recursive, a folder can have multiple country/region filters. In other words, a subfolder of a previously configured folder can be assigned a different country/region filter.
+* Only one rule can be applied to the same relative path. That is, you can't create multiple country/region filters that point to the same relative path. However, because country/region filters are recursive, a folder can have multiple country/region filters. In other words, a subfolder of a previously configured folder can be assigned a different country/region filter.
* The geo-filtering feature uses country codes to define the countries/regions from which a request is allowed or blocked for a secured directory. Although Akamai and Verizon profiles support most of the same country codes, there are a few differences. For more information, see [Azure CDN country codes](/previous-versions/azure/mt761717(v=azure.100)).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/spx-setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/spx-setup.md
@@ -54,11 +54,11 @@ Follow these steps to install the Speech CLI in a Docker container:
1. <a href="https://www.docker.com/get-started" target="_blank">Install Docker Desktop<span class="docon docon-navigate-external x-hidden-focus"></span></a> for your platform if it isn't already installed. 2. In a new command prompt or terminal, type this command:
- ```shell
+ ```console
docker pull msftspeech/spx ``` 3. Type this command. You should see help information for Speech CLI:
- ```shell
+ ```console
docker run -it --rm msftspeech/spx help ```
@@ -90,27 +90,27 @@ you must mount a directory in the container to your filesystem where the Speech
On Windows, your commands will start like this:
-```shell
+```console
docker run -it -v c:\spx-data:/data --rm msftspeech/spx ``` On Linux or macOS, your commands will look like the sample below. Replace `ABSOLUTE_PATH` with the absolute path for your mounted directory. This path was returned by the `pwd` command in the previous section. If you run this command before setting your key and region, you will get an error telling you to set your key and region:
-```shell
+```console
sudo docker run -it -v ABSOLUTE_PATH:/data --rm msftspeech/spx ``` To use the `spx` command installed in a container, always enter the full command shown above, followed by the parameters of your request. For example, on Windows, this command sets your key:
-```shell
+```console
docker run -it -v c:\spx-data:/data --rm msftspeech/spx config @key --set SUBSCRIPTION-KEY ``` For more extended interaction with the command line tool, you can start a container with an interactive bash shell by adding an entrypoint parameter. On Windows, enter this command to start a container that exposes an interactive command line interface where you can enter multiple `spx` commands:
-```shell
+```console
docker run -it --entrypoint=/bin/bash -v c:\spx-data:/data --rm msftspeech/spx ```
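+Once the container's interactive bash shell is running, you can enter `spx` commands directly, without repeating the `docker run` prefix each time. A minimal sketch (the subscription key, region, and the `/data/hello.wav` file are placeholders; substitute your own values):
+
+```console
+spx config @key --set SUBSCRIPTION-KEY
+spx config @region --set REGION
+spx recognize --file /data/hello.wav
+```
+
+Type `exit` to leave the container's shell when you're done.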
@@ -158,7 +158,7 @@ To start using the Speech CLI, you need to enter your Speech subscription key an
Get these credentials by following steps in [Try the Speech service for free](../overview.md#try-the-speech-service-for-free). Once you have your subscription key and region identifier (ex. `eastus`, `westus`), run the following commands.
-```shell
+```console
spx config @key --set SUBSCRIPTION-KEY spx config @region --set REGION ```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/spx-basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-basics.md
@@ -1,22 +1,22 @@
---
-title: "Speech CLI basics"
+title: "Speech CLI quickstart - Speech service"
titleSuffix: Azure Cognitive Services
-description: Learn how to use the Speech CLI command tool to work with the Speech Service with no code and minimal setup.
+description: Get started with the Azure Speech CLI. You can interact with Speech services like speech to text, text to speech, and speech translation without writing code.
services: cognitive-services author: trevorbye manager: nitinme ms.service: cognitive-services ms.subservice: speech-service ms.topic: quickstart
-ms.date: 04/04/2020
+ms.date: 01/13/2021
ms.author: trbye ---
-# Learn the basics of the Speech CLI
+# Get started with the Azure Speech CLI
-In this article, you learn the basic usage patterns of the Speech CLI, a command line tool to use the Speech service without writing code. You can quickly test out the main features of the Speech service, without creating development environments or writing any code, to see if your use-cases can be adequately met. The Speech CLI is production ready and can be used to automate simple workflows in the Speech service, using `.bat` or shell scripts.
+In this article, you'll learn how to use the Speech CLI, a command-line interface, to access Speech services like speech to text, text to speech, and speech translation without writing code. The Speech CLI is production ready and can be used to automate simple workflows in the Speech service, using `.bat` or shell scripts.
-This article assumes that you have working knowledge of the command prompt, terminal or PowerShell.
+This article assumes that you have working knowledge of the command prompt, terminal, or PowerShell.
[!INCLUDE [](includes/spx-setup.md)]
@@ -24,191 +24,114 @@ This article assumes that you have working knowledge of the command prompt, term
This section shows a few basic SPX commands that are often useful for first-time testing and experimentation. Start by viewing the help built in to the tool by running the following command.
-```shell
+```console
spx ```
-Notice **see:** help topics listed right of command parameters. You can enter these commands to get detailed help about sub-commands.
- You can search help topics by keyword. For example, enter the following command to see a list of Speech CLI usage examples:
-```shell
+```console
spx help find --topics "examples" ``` Enter the following command to see options for the recognize command:
-```shell
+```console
spx help recognize ```
-Now, let's use the Speech CLI to perform speech recognition using your system's default microphone.
+Additional help commands are listed in the right column. You can enter these commands to get detailed help about subcommands.
+
+## Speech to text (speech recognition)
+
+Let's use the Speech CLI to convert speech to text (speech recognition) using your system's default microphone. After entering the command, SPX will begin listening for audio on the current active input device, and stop when you press **ENTER**. The recorded speech is then recognized and converted to text in the console output.
->[!WARNING]
-> If you are using a Docker container, this command will not work.
+>[!IMPORTANT]
+> If you are using a Docker container, `--microphone` will not work.
Run this command:
-```shell
+```console
spx recognize --microphone ```
-With the Speech CLI you can also recognize speech from an audio file.
+With the Speech CLI, you can also recognize speech from an audio file.
-```shell
+```console
spx recognize --file /path/to/file.wav ```+ > [!TIP] > If you're recognizing speech from an audio file in a Docker container, make sure that the audio file is located in the directory that you mounted in the previous step.
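+As a sketch of that tip (assuming the Windows mount from the setup section above, and a hypothetical audio file `hello.wav`), copy the file into the mounted directory on the host, then reference it by its in-container path:
+
+```console
+copy c:\audio\hello.wav c:\spx-data\hello.wav
+docker run -it -v c:\spx-data:/data --rm msftspeech/spx recognize --file /data/hello.wav
+```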
-After entering the command, SPX will begin listening for audio on the current active input device, and stop after you press `ENTER`. The recorded speech is then recognized and converted to text in the console output. Text-to-speech synthesis is also easy to do using the Speech CLI.
-
-Running the following command will take the entered text as input, and output the synthesized speech to the current active output device.
+Don't forget, if you get stuck or want to learn more about the Speech CLI's recognition options, just type:
-```shell
-spx synthesize --text "Testing synthesis using the Speech CLI" --speakers
-```
-
-In addition to speech recognition and synthesis, you can also do speech translation with the Speech CLI. Similar to the speech recognition command above, run the following command to capture audio from your default microphone, and perform translation to text in the target language.
-
-```shell
-spx translate --microphone --source en-US --target ru-RU --output file C:\some\file\path\russian_translation.txt
+```console
+spx help recognize
```
-In this command, you specify both the source (language to translate **from**), and the target (language to translate **to**) languages. Using the `--microphone` argument will listen to audio on the current active input device, and stop after you press `ENTER`. The output is a text translation to the target language, written to a text file.
-
-> [!NOTE]
-> See the [language and locale article](language-support.md) for a list of all supported languages with their corresponding locale codes.
-
-### Configuration files in the datastore
-
-Speech CLI's behavior can rely on settings in configuration files, which you can refer to within Speech CLI calls using a \@ symbol.
-Speech CLI saves a new setting in a new `./spx/data` subdirectory it creates in the current working directory.
-When seeking a configuration value, Speech CLI looks in your current working directory, then in the datastore at `./spx/data`, and then in other datastores, including a final read-only datastore in the `spx` binary.
-Previously, you used the datastore to save your `@key` and `@region` values, so you did not need to specify them with each command line call.
-You can also use configuration files to store your own configuration settings, or even use them to pass URLs or other dynamic content generated at runtime.
-
-This section shows use of a configuration file in the local datastore to store and fetch command settings using `spx config`, and store output from Speech CLI using the `--output` option.
-
-The following example clears the `@my.defaults` configuration file,
-adds key-value pairs for **key** and **region** in the file, and uses the configuration
-in a call to `spx recognize`.
-
-```shell
-spx config @my.defaults --clear
-spx config @my.defaults --add key 000072626F6E20697320636F6F6C0000
-spx config @my.defaults --add region westus
-
-spx config @my.defaults
-
-spx recognize --nodefaults @my.defaults --file hello.wav
-```
+## Text to speech (speech synthesis)
-You can also write dynamic content to a configuration file. For example, the following command creates a custom speech model and stores the URL
-of the new model in a configuration file. The next command waits until the model at that URL is ready for use before returning.
+Running the following command will take text as input, and output the synthesized speech to the current active output device (for example, your computer speakers).
-```shell
-spx csr model create --name "Example 4" --datasets @my.datasets.txt --output url @my.model.txt
-spx csr model status --model @my.model.txt --wait
+```console
+spx synthesize --text "Testing synthesis using the Speech CLI" --speakers
```
-The following example writes two URLs to the `@my.datasets.txt` configuration file.
-In this scenario, `--output` can include an optional **add** keyword to create a configuration file or append to the existing one.
--
-```shell
-spx csr dataset create --name "LM" --kind Language --content https://crbn.us/data.txt --output url @my.datasets.txt
-spx csr dataset create --name "AM" --kind Acoustic --content https://crbn.us/audio.zip --output add url @my.datasets.txt
+You can also save the synthesized output to a file. In this example, we'll create a file named `my-sample.wav` in the directory where the command is run.
-spx config @my.datasets.txt
+```console
+spx synthesize --text "We hope that you enjoy using the Speech CLI." --audio output my-sample.wav
```
-For more details about datastore files, including use of default configuration files (`@spx.default`, `@default.config`, and `@*.default.config` for command-specific default settings), enter this command:
+These examples presume that you're testing in English. However, we support speech synthesis in many languages. You can pull down a full list of voices with this command, or by visiting the [language support page](./language-support.md).
-```shell
-spx help advanced setup
+```console
+spx synthesize --voices
```
-## Batch operations
-
-The commands in the previous section are great for quickly seeing how the Speech service works. However, when assessing whether or not your use-cases can be met, you likely need to perform batch operations against a range of input you already have, to see how the service handles a variety of scenarios. This section shows how to:
-
-* Run batch speech recognition on a directory of audio files
-* Iterate through a `.tsv` file and run batch text-to-speech synthesis
+Here's how you use one of the voices you've discovered.
-## Batch speech recognition
-
-If you have a directory of audio files, it's easy with the Speech CLI to quickly run batch-speech recognition. Simply run the following command, pointing to your directory with the `--files` command. In this example, you append `\*.wav` to the directory to recognize all `.wav` files present in the dir. Additionally, specify the `--threads` argument to run the recognition on 10 parallel threads.
-
-> [!NOTE]
-> The `--threads` argument can be also used in the next section for `spx synthesize` commands, and the available threads will depend on the CPU and its current load percentage.
-
-```shell
-spx recognize --files C:\your_wav_file_dir\*.wav --output file C:\output_dir\speech_output.tsv --threads 10
+```console
+spx synthesize --text "Bienvenue chez moi." --voice fr-CA-Caroline --speakers
```
-The recognized speech output is written to `speech_output.tsv` using the `--output file` argument. The following is an example of the output file structure.
+Don't forget, if you get stuck or want to learn more about the Speech CLI's synthesis options, just type:
-```output
-audio.input.id recognizer.session.started.sessionid recognizer.recognized.result.text
-sample_1 07baa2f8d9fd4fbcb9faea451ce05475 A sample wave file.
-sample_2 8f9b378f6d0b42f99522f1173492f013 Sample text synthesized.
+```console
+spx help synthesize
```
-## Synthesize speech to a file
+## Speech to text translation
-Run the following command to change the output from your speaker to a `.wav` file.
+With the Speech CLI, you can also do speech to text translation. Run this command to capture audio from your default microphone, and output the translation as text. Keep in mind that you need to supply the `source` and `target` language with the `translate` command.
-```bash
-spx synthesize --text "The speech synthesizer greets you!" --audio output greetings.wav
+```console
+spx translate --microphone --source en-US --target ru-RU
```
-The Speech CLI will produce natural language in English into the `greetings.wav` audio file.
-In Windows, you can play the audio file by entering `start greetings.wav`.
-
+When translating into multiple languages, separate language codes with `;`.
-## Batch text-to-speech synthesis
-
-The easiest way to run batch text-to-speech is to create a new `.tsv` (tab-separated-value) file, and leverage the `--foreach` command in the Speech CLI. Consider the following file `text_synthesis.tsv`:
-
-<!-- The following example contains tabs. Don't accidentally convert these into spaces. -->
-
-```input
-audio.output text
-C:\batch_wav_output\wav_1.wav Sample text to synthesize.
-C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
-C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
+```console
+spx translate --microphone --source en-US --target ru-RU;fr-FR;es-ES
```
- Next, you run a command to point to `text_synthesis.tsv`, perform synthesis on each `text` field, and write the result to the corresponding `audio.output` path as a `.wav` file.
+If you want to save the output of your translation, use the `--output` flag. In this example, you'll also read from a file.
-```shell
-spx synthesize --foreach in @C:\your\path\to\text_synthesis.tsv
+```console
+spx translate --file /some/file/path/input.wav --source en-US --target ru-RU --output file /some/file/path/russian_translation.txt
```
-This command is the equivalent of running `spx synthesize --text Sample text to synthesize --audio output C:\batch_wav_output\wav_1.wav` **for each** record in the `.tsv` file. A couple things to note:
-
-* The column headers, `audio.output` and `text`, correspond to the command line arguments `--audio output` and `--text`, respectively. Multi-part command line arguments like `--audio output` should be formatted in the file with no spaces, no leading dashes, and periods separating strings, e.g. `audio.output`. Any other existing command line arguments can be added to the file as additional columns using this pattern.
-* When the file is formatted in this way, no additional arguments are required to be passed to `--foreach`.
-* Ensure to separate each value in the `.tsv` with a **tab**.
-
-However, if you have a `.tsv` file like the following example, with column headers that **do not match** command line arguments:
-
-<!-- The following example contains tabs. Don't accidentally convert these into spaces. -->
-
-```input
-wav_path str_text
-C:\batch_wav_output\wav_1.wav Sample text to synthesize.
-C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
-C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
-```
+> [!NOTE]
+> See the [language and locale article](language-support.md) for a list of all supported languages with their corresponding locale codes.
-You can override these field names to the correct arguments using the following syntax in the `--foreach` call. This is the same call as above.
+Don't forget, if you get stuck or want to learn more about the Speech CLI's translation options, just type:
-```shell
-spx synthesize --foreach audio.output;text in @C:\your\path\to\text_synthesis.tsv
+```console
+spx help translate
``` ## Next steps
-* Complete the [speech recognition](get-started-speech-to-text.md?pivots=programmer-tool-spx) or [speech synthesis](get-started-text-to-speech.md?pivots=programmer-tool-spx) quickstarts using Speech CLI.
+* [Speech CLI configuration options](./spx-data-store-configuration.md)
+* [Batch operations with the Speech CLI](./spx-batch-operations.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/spx-batch-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-batch-operations.md new file mode 100644
@@ -0,0 +1,87 @@
+---
+title: "Speech CLI batch operations - Speech service"
+titleSuffix: Azure Cognitive Services
+description: Learn how to do batch speech to text (speech recognition) and batch text to speech (speech synthesis) with the Speech CLI.
+services: cognitive-services
+author: erhopf
+manager: nitinme
+ms.service: cognitive-services
+ms.subservice: speech-service
+ms.topic: quickstart
+ms.date: 01/13/2021
+ms.author: erhopf
+---
+
+# Speech CLI batch operations
+
+Batch operations are common tasks when using Azure Speech services. In this article, you'll learn how to do batch speech to text (speech recognition) and batch text to speech (speech synthesis) with the Speech CLI. Specifically, you'll learn how to:
+
+* Run batch speech recognition on a directory of audio files
+* Run batch speech synthesis by iterating over a `.tsv` file
+
+## Batch speech to text (speech recognition)
+
+The Speech service is often used to recognize speech from audio files. In this example, you'll learn how to iterate over a directory using the Speech CLI to capture the recognition output for each `.wav` file. The `--files` flag points to the directory where the audio files are stored, and the wildcard `*.wav` tells the Speech CLI to run recognition on every file with the extension `.wav`. The output for each recognized file is written as a tab-separated value in `speech_output.tsv`.
+
+> [!NOTE]
+> The `--threads` argument can also be used in the next section for `spx synthesize` commands, and the available threads will depend on the CPU and its current load percentage.
+
+```console
+spx recognize --files C:\your_wav_file_dir\*.wav --output file C:\output_dir\speech_output.tsv --threads 10
+```
+
+The following is an example of the output file structure.
+
+```output
+audio.input.id recognizer.session.started.sessionid recognizer.recognized.result.text
+sample_1 07baa2f8d9fd4fbcb9faea451ce05475 A sample wave file.
+sample_2 8f9b378f6d0b42f99522f1173492f013 Sample text synthesized.
+```
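+
+If you want to post-process this recognition output programmatically, the structure above is plain tab-separated text. Here's a minimal Python sketch, assuming the file exists and uses the headers shown above:
+
```python
import csv

def read_speech_output(path):
    # Each row of the .tsv maps a column header (for example,
    # "recognizer.recognized.result.text") to that file's value.
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))

# Example: rows = read_speech_output("speech_output.tsv")
```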
+
+## Batch text to speech (speech synthesis)
+
+The easiest way to run batch text-to-speech is to create a new `.tsv` (tab-separated-value) file, and use the `--foreach` command in the Speech CLI. You can create a `.tsv` file using your favorite text editor. For this example, let's call it `text_synthesis.tsv`:
+
+>[!IMPORTANT]
+> When copying the contents of this text file, make sure that your file has a **tab**, not spaces, between the file location and the text. Sometimes, when copying the contents from this example, tabs are converted to spaces, causing the `spx` command to fail when run.
+
+```Input
+audio.output text
+C:\batch_wav_output\wav_1.wav Sample text to synthesize.
+C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
+C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
+```
+
+Next, you run a command to point to `text_synthesis.tsv`, perform synthesis on each `text` field, and write the result to the corresponding `audio.output` path as a `.wav` file.
+
+```console
+spx synthesize --foreach in @C:\your\path\to\text_synthesis.tsv
+```
+
+This command is the equivalent of running `spx synthesize --text "Sample text to synthesize" --audio output C:\batch_wav_output\wav_1.wav` **for each** record in the `.tsv` file.
+
+A couple things to note:
+
+* The column headers, `audio.output` and `text`, correspond to the command-line arguments `--audio output` and `--text`, respectively. Multi-part command-line arguments like `--audio output` should be formatted in the file with no spaces, no leading dashes, and periods separating strings, for example, `audio.output`. Any other existing command-line arguments can be added to the file as additional columns using this pattern.
+* When the file is formatted in this way, no additional arguments are required to be passed to `--foreach`.
+* Be sure to separate each value in the `.tsv` with a **tab**.
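+
+Because stray spaces are the most common cause of failure here, you may prefer to generate the `.tsv` from a script instead of hand-editing it. Here's a hypothetical Python sketch; the paths and text are the placeholders from the example above:
+
```python
import csv

# Column headers mirror spx arguments: "--audio output" becomes
# "audio.output" (no leading dashes, periods instead of spaces).
rows = [
    (r"C:\batch_wav_output\wav_1.wav", "Sample text to synthesize."),
    (r"C:\batch_wav_output\wav_2.wav", "Using the Speech CLI to run batch-synthesis."),
]

# csv.writer with delimiter="\t" guarantees real tab characters.
with open("text_synthesis.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["audio.output", "text"])
    writer.writerows(rows)
```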
+
+However, if you have a `.tsv` file like the following example, with column headers that **do not match** command-line arguments:
+
+```Input
+wav_path str_text
+C:\batch_wav_output\wav_1.wav Sample text to synthesize.
+C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
+C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
+```
+
+You can map these field names to the correct arguments by using the following syntax in the `--foreach` call. This is the same call as above.
+
+```console
+spx synthesize --foreach audio.output;text in @C:\your\path\to\text_synthesis.tsv
+```
+
+## Next steps
+
+* [Speech CLI overview](./spx-overview.md)
+* [Speech CLI quickstart](./spx-basics.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/spx-data-store-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-data-store-configuration.md new file mode 100644
@@ -0,0 +1,62 @@
+---
+title: "Speech CLI configuration options - Speech service"
+titleSuffix: Azure Cognitive Services
+description: Learn how to create and manage configuration files for use with the Azure Speech CLI.
+services: cognitive-services
+author: erhopf
+manager: nitinme
+ms.service: cognitive-services
+ms.subservice: speech-service
+ms.topic: quickstart
+ms.date: 01/13/2021
+ms.author: erhopf
+---
+
+# Speech CLI configuration options
+
+The Speech CLI's behavior can rely on settings in configuration files, which you can refer to using a `@` symbol. The Speech CLI saves new settings in a `./spx/data` subdirectory that is created in the current working directory. When looking for a configuration value, the Speech CLI searches your current working directory, then the datastore at `./spx/data`, and then other datastores, including a final read-only datastore in the `spx` binary.
+
+In the Speech CLI quickstart, you used the datastore to save your `@key` and `@region` values, so you did not need to specify them with each `spx` command. Keep in mind that you can use configuration files to store your own configuration settings, or even use them to pass URLs or other dynamic content generated at runtime.
+
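+The documented search order amounts to a first-match lookup across an ordered list of locations. The following Python sketch only illustrates that precedence; it is not how `spx` is implemented:
+
```python
from pathlib import Path

# The first location that contains the named file wins. A final
# read-only datastore (inside the spx binary) would come last.
def find_config(name, search_paths):
    for base in search_paths:
        candidate = Path(base) / name
        if candidate.is_file():
            return str(candidate)
    return None
```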
+## Create and manage configuration files in the datastore
+
+This section shows how to use a configuration file in the local datastore to store and fetch command settings using `spx config`, and how to store output from the Speech CLI using the `--output` option.
+
+The following example clears the `@my.defaults` configuration file, adds key-value pairs for **key** and **region** in the file, and uses the configuration in a call to `spx recognize`.
+
+```console
+spx config @my.defaults --clear
+spx config @my.defaults --add key 000072626F6E20697320636F6F6C0000
+spx config @my.defaults --add region westus
+
+spx config @my.defaults
+
+spx recognize --nodefaults @my.defaults --file hello.wav
+```
+
+You can also write dynamic content to a configuration file. For example, the following command creates a custom speech model and stores the URL of the new model in a configuration file. The next command waits until the model at that URL is ready for use before returning.
+
+```console
+spx csr model create --name "Example 4" --datasets @my.datasets.txt --output url @my.model.txt
+spx csr model status --model @my.model.txt --wait
+```
+
+The following example writes two URLs to the `@my.datasets.txt` configuration file. In this scenario, `--output` can include an optional **add** keyword to create a configuration file or append to the existing one.
+
+```console
+spx csr dataset create --name "LM" --kind Language --content https://crbn.us/data.txt --output url @my.datasets.txt
+spx csr dataset create --name "AM" --kind Acoustic --content https://crbn.us/audio.zip --output add url @my.datasets.txt
+
+spx config @my.datasets.txt
+```
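+
+The difference between `--output` and `--output add` can be pictured as overwrite-versus-append file semantics. This Python sketch is only an analogy (the URLs are placeholders), not the `spx` implementation:
+
```python
def write_output(path, value, add=False):
    # "--output url @file" overwrites the file;
    # "--output add url @file" appends to it instead.
    with open(path, "a" if add else "w", encoding="utf-8") as f:
        f.write(value + "\n")

# First call creates the file, second appends a line to it.
# write_output("my.datasets.txt", "https://example.com/dataset-1")
# write_output("my.datasets.txt", "https://example.com/dataset-2", add=True)
```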
+
+For more details about datastore files, including use of default configuration files (`@spx.default`, `@default.config`, and `@*.default.config` for command-specific default settings), enter this command:
+
+```console
+spx help advanced setup
+```
+
+## Next steps
+
+* [Batch operations with the Speech CLI](./spx-batch-operations.md)
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/spx-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-overview.md
@@ -1,30 +1,30 @@
--- title: The Azure Speech CLI titleSuffix: Azure Cognitive Services
-description: The Speech CLI is a command line tool for using the Speech service without writing any code. The Speech CLI requires minimal set up, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met.
+description: The Speech CLI is a command-line tool for using the Speech service without writing any code. The Speech CLI requires minimal setup, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met.
services: cognitive-services author: trevorbye manager: nitinme ms.service: cognitive-services ms.subservice: speech-service ms.topic: conceptual
-ms.date: 04/14/2020
+ms.date: 01/13/2021
ms.author: trbye ms.custom: devx-track-azurecli --- # What is the Speech CLI?
-The Speech CLI is a command line tool for using the Speech service without writing any code. The Speech CLI requires minimal setup, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met. Within minutes, you can run simple test workflows like batch speech-recognition from a directory of files, or text-to-speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready and can be scaled up to run larger processes using automated `.bat` or shell scripts.
+The Speech CLI is a command-line tool for using the Speech service without writing any code. The Speech CLI requires minimal setup, and it's easy to immediately start experimenting with key features of the Speech service to see if your use-cases can be met. Within minutes, you can run simple test workflows like batch speech-recognition from a directory of files, or text-to-speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready and can be scaled up to run larger processes using automated `.bat` or shell scripts.
-The majority of the primary features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. Consider the following guidance to decide when to use the Speech CLI or the Speech SDK.
+Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. Consider the following guidance to decide when to use the Speech CLI or the Speech SDK.
Use the Speech CLI when: * You want to experiment with Speech service features with minimal setup and no code * You have relatively simple requirements for a production application using the Speech service Use the Speech SDK when:
-* You want to integrate Speech service functionality within a specific language or platform (e.g. C#, Python, C++)
+* You want to integrate Speech service functionality within a specific language or platform (for example, C#, Python, C++)
* You have complex requirements that may require advanced service requests, or developing custom behavior including response streaming ## Core features
@@ -39,9 +39,10 @@ Use the Speech SDK when:
## Get started
-To get started with the Speech CLI, see the [basics article](spx-basics.md). This article shows you how to run some basic commands, and also shows slightly more advanced commands for running batch operations for speech-to-text and text-to-speech. After reading the basics article, you should have enough of an understanding of the syntax to start writing some custom commands, or automating simple Speech service operations.
+To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands, and also shows slightly more advanced commands for running batch operations for speech-to-text and text-to-speech. After reading the basics article, you should have enough of an understanding of the syntax to start writing some custom commands, or automating simple Speech service operations.
## Next steps -- [Speech CLI basics](spx-basics.md)-- If your use-case is more complex, [get the Speech SDK](speech-sdk.md)
+- Get started with the [Speech CLI quickstart](spx-basics.md)
+- [Configure your data store](./spx-data-store-configuration.md)
+- Learn how to [run batch operations with the Speech CLI](./spx-batch-operations.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-troubleshoot.md
@@ -32,7 +32,7 @@ The following article describes common errors and solutions for deployments usin
| 67 | CannotCreateIndex | The request to create an index cannot be completed. | Up to 500 single field indexes can be created in a container. Up to eight fields can be included in a compound index (compound indexes are supported in version 3.6+). | | 115 | CommandNotSupported | The request attempted is not supported. | Additional details should be provided in the error. If this functionality is important for your deployments, please let us know by creating a support ticket in the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). | | 11000 | DuplicateKey | The shard key (Azure Cosmos DB partition key) of the document you're inserting already exists in the collection or a unique index field constraint has been violated. | Use the update() function to update an existing document. If the unique index field constraint has been violated, insert or update the document with a field value that does not exist in the shard/partition yet. |
-| 16500 | TooManyRequests | The total number of request units consumed is more than the provisioned request-unit rate for the collection and has been throttled. | Consider scaling the throughput assigned to a container or a set of containers from the Azure portal or you can retry the operation. If you enable SSR (server-side retry), Azure Cosmos DB automatically retries the requests that fail due to this error. |
+| 16500 | TooManyRequests | The total number of request units consumed is more than the provisioned request-unit rate for the collection and has been throttled. | Consider scaling the throughput assigned to a container or a set of containers from the Azure portal or you can retry the operation. If you [enable SSR](prevent-rate-limiting-errors.md) (server-side retry), Azure Cosmos DB automatically retries the requests that fail due to this error. |
| 16501 | ExceededMemoryLimit | As a multi-tenant service, the operation has gone over the client's memory allotment. | Reduce the scope of the operation through more restrictive query criteria or contact support from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). Example: `db.getCollection('users').aggregate([{$match: {name: "Andy"}}, {$sort: {age: -1}}]))` | | 40324 | Unrecognized pipeline stage name. | The stage name in your aggregation pipeline request was not recognized. | Ensure that all aggregation pipeline names are valid in your request. | | - | MongoDB wire version issues | The older versions of MongoDB drivers are unable to detect the Azure Cosmos account's name in the connection strings. | Append *appName=@**accountName**@* at the end of your Cosmos DB's API for MongoDB connection string, where ***accountName*** is your Cosmos DB account name. |
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/prevent-rate-limiting-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/prevent-rate-limiting-errors.md new file mode 100644
@@ -0,0 +1,39 @@
+---
+title: Prevent rate-limiting errors for Azure Cosmos DB API for MongoDB operations.
+description: Learn how to prevent your Azure Cosmos DB API for MongoDB operations from hitting rate-limiting errors with the SSR (server-side retry) feature.
+author: gahl-levy
+ms.service: cosmos-db
+ms.subservice: cosmosdb-mongo
+ms.topic: how-to
+ms.date: 01/13/2021
+ms.author: gahllevy
+---
+
+# Prevent rate-limiting errors for Azure Cosmos DB API for MongoDB operations
+[!INCLUDE[appliesto-mongodb-api](includes/appliesto-mongodb-api.md)]
+
+Azure Cosmos DB API for MongoDB operations may fail with rate-limiting (16500/429) errors if they exceed a collection's throughput limit (RUs).
+
+You can enable the Server Side Retry (SSR) feature and let the server retry these operations automatically. The requests are retried after a short delay for all collections in your account. This feature is a convenient alternative to handling rate-limiting errors in the client application.
+
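+For context, without SSR a client application typically has to implement this retry logic itself. The following Python sketch shows generic client-side backoff on a rate-limiting error; the exception class and function names are hypothetical placeholders, not a real MongoDB driver API:
+
```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a driver error carrying code 16500/429."""

def with_retries(operation, max_attempts=5, base_delay=0.01):
    # Exponential backoff with jitter; SSR performs an equivalent
    # retry on the server side so the client doesn't have to.
    for attempt in range(max_attempts):
        try:
            return operation()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```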
+## Use the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos DB API for MongoDB account.
+
+1. Go to the **Features** pane underneath the **Settings** section.
+
+1. Select **Server Side Retry**.
+
+1. Click **Enable** to enable this feature for all collections in your account.
+
+:::image type="content" source="./media/prevent-rate-limiting-errors/portal-features-server-side-retry.png" alt-text="Screenshot of the server side retry feature for Azure Cosmos DB API for MongoDB":::
+
+## Next steps
+
+To learn more about troubleshooting common errors, see this article:
+
+* [Troubleshoot common issues in Azure Cosmos DB's API for MongoDB](mongodb-troubleshoot.md)
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/pay-by-invoice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/pay-by-invoice.md
@@ -1,19 +1,21 @@
--- title: Pay for Azure subscriptions by invoice
-description: Learn how to pay for Azure subscriptions by invoice. See frequently asked questions and view additional resources.
+description: Learn how to pay for Azure subscriptions by invoice. See frequently asked questions for more information.
author: bandersmsft ms.reviewer: judupont tags: billing ms.service: cost-management-billing ms.subservice: billing ms.topic: how-to
-ms.date: 11/16/2020
+ms.date: 01/13/2021
ms.author: banders ms.custom: contperf-fy21q2 --- # Pay for your Azure subscription by invoice
+This article applies to customers who have a Microsoft Customer Agreement (MCA) and who signed up for Azure through the Azure website. [Check your access to a Microsoft Customer Agreement](#check-access-to-a-microsoft-customer-agreement). If you signed up for Azure through a Microsoft representative, your default payment method will already be set to *check or wire transfer*.
+ If you switch to pay by invoice, that means you pay your bill within 30 days of the invoice date by check/wire transfer. To become eligible to pay for your Azure subscription by invoice, submit a request to Azure support. Once your request is approved, you can switch to invoice pay (check/wire transfer) in the Azure portal. > [!IMPORTANT]
@@ -23,51 +25,39 @@ If you switch to pay by invoice, that means you pay your bill within 30 days of
## Request to pay by invoice
-1. Go to the Azure portal to submit a support request. Search for and select **Help + support**.
-
+1. Sign in to the Azure portal to submit a support request. Search for and select **Help + support**.
![Search for Help and support, Microsoft Azure portal](./media/pay-by-invoice/search-for-help-and-support.png)-
-2. Select **New support request**.
-
+1. Select **New support request**.
![New support request link, Help and support screen, Microsoft Azure portal](./media/pay-by-invoice/help-and-support.png)-
-2. Select **Billing** as the **Issue type**. The *issue type* is the support request category. Select the subscription for which you want to pay by invoice, select a support plan, and then select **Next**.
-
-3. Select **Payment** as the **Problem Type**. The *problem type* is the support request subcategory.
-
-4. Select **Switch to Pay by Invoice** as the **Problem subtype**.
-
-5. Enter the following information in the **Details** box, and then select **Next**.
-
- New or existing customer:<br>
- If existing, current payment method:<br>
- Order ID (requesting for invoice option):<br>
- Account Admins Live ID (or Org ID) (should be company domain):<br>
- Commerce Account ID:<br>
- Company Name (as registered under VAT or Government Website):<br>
- Company Address (as registered under VAT or Government Website):<br>
- Company Website:<br>
- Country:<br>
- TAX ID/ VAT ID:<br>
- Company Established on (Year):<br>
- Any prior business with Microsoft:<br>
- Contact Name:<br>
- Contact Phone:<br>
- Contact Email:<br>
- Justification on why you prefer Invoice option over credit card:<br>
-
- For cores increase, provide the following additional information:<br>
-
- (Old quota) Existing Cores:<br>
- (New quota) Requested cores:<br>
- Specific region & series of Subscription:<br>
-
+1. Select **Billing** as the **Issue type**. The *issue type* is the support request category. Select the subscription for which you want to pay by invoice, select a support plan, and then select **Next**.
+1. Select **Payment** as the **Problem Type**. The *problem type* is the support request subcategory.
+1. Select **Switch to Pay by Invoice** as the **Problem subtype**.
+1. Enter the following information in the **Details** box, and then select **Next**.
+ - New or existing customer:
+ - If existing, current payment method:
+ - Order ID (requesting for invoice option):
+ - Account Admins Live ID (or Org ID) (should be company domain):
+ - Commerce Account ID:
+ - Company Name (as registered under VAT or Government Website):
+ - Company Address (as registered under VAT or Government Website):
+ - Company Website:
+ - Country:
+ - TAX ID/ VAT ID:
+ - Company Established on (Year):
+ - Any prior business with Microsoft:
+ - Contact Name:
+ - Contact Phone:
+ - Contact Email:
+ - Justification about why you want the invoice option instead of a credit card:
+ - For cores increase, provide the following additional information:
+ - (Old quota) Existing Cores:
+ - (New quota) Requested cores:
+ - Specific region & series of Subscription:
- The **Company name** and **Company address** should match the information that you provided for the Azure account. To view or update the information, see [Change your Azure account profile information](change-azure-account-profile.md). - Add your billing contact information in the Azure portal before the credit limit can be approved. The contact details should be related to the company's Accounts Payable or Finance department.
+1. Verify your contact information and preferred contact method, and then select **Create**.
-6. Verify your contact information and preferred contact method, and then select **Create**.
-
-If we need to run a credit check because of the amount of credit that you need, we'll send you a credit check application. We might ask you to provide your company's audited financial statements. If no financial information is provided or if the information isn't strong enough to support the amount of credit limit required, we might ask for a security deposit or a standby letter of credit in order to approve your credit check request.
+If we need to run a credit check because of the amount of credit that you need, we'll send you a credit check application. We might ask you to provide your company's audited financial statements. If no financial information is provided or if the information isn't strong enough to support the amount of credit limit required, we might ask for a security deposit or a standby letter of credit to approve your credit check request.
## Switch to invoice pay (check/wire transfer)
@@ -77,16 +67,13 @@ If you have a Microsoft Online Services Program account, you can switch your Azu
### Switch Azure subscription to check/wire transfer
-Follow the steps below to switch your Azure subscription to invoice pay (check/wire transfer). *Once you switch to invoice pay (check/wire transfer), you can't switch back to credit card*.
-
-1. Go to the Azure portal to sign in as the Account Administrator. Search for and select **Cost Management + Billing**.
+Follow the steps below to switch your Azure subscription to invoice pay (check/wire transfer). *Once you switch to invoice pay (check/wire transfer), you can't switch back to a credit card*.
+1. Go to the Azure portal to sign in as the Account Administrator. Search for and select **Cost Management + Billing**.
![Screenshot shows search for Cost Management and Billing in the Azure portal.](./media/pay-by-invoice/search.png)- 1. Select the subscription you'd like to switch to invoice payment. 1. Select **Payment methods**.
-1. In the command bar, select the **Pay by invoice** button.
-
+1. In the command bar, select the **Pay by invoice** button.
![Pay by invoice button, Payment methods, Microsoft Azure portal](./media/pay-by-invoice/pay-by-invoice.png) ### Switch billing profile to check/wire transfer
@@ -94,17 +81,12 @@ Follow the steps below to switch your Azure subscription to invoice pay (check/w
Follow the steps below to switch a billing profile to check/wire transfer. Only the person who signed up for Azure can change the default payment method of a billing profile. 1. Go to the Azure portal to view your billing information. Search for and select **Cost Management + Billing**.
-1. In the menu, choose **Billing profiles**.
-
+1. In the menu, choose **Billing profiles**.
![Billing profiles menu item, Cost Management and Billing, Microsoft Azure portal](./media/pay-by-invoice/billing-profile.png)- 1. Select a billing profile.
-1. In the **Billing profile** menu, select **Payment methods**.
-
+1. In the **Billing profile** menu, select **Payment methods**.
![Payment methods menu item, Billing profiles, Cost Management, Microsoft Azure portal](./media/pay-by-invoice/billing-profile-payment-methods.png)-
-1. Select the banner that says you're eligible to pay by check/wire transfer.
-
+1. Select the banner that says you're eligible to pay by check/wire transfer.
![Banner to switch to check/wire, Payment methods, Microsoft Azure portal](./media/pay-by-invoice/customer-led-switch-to-invoice.png) ## Check access to a Microsoft Customer Agreement
@@ -121,4 +103,4 @@ Occasionally Microsoft needs legal documentation if the information you provided
## Next steps
-* If needed, update your billing contact information at the [Azure Account Center](https://account.azure.com/Profile).
+* If needed, update your billing contact information at the [Azure Account Center](https://account.azure.com/Profile).
\ No newline at end of file
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/pay-bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/pay-bill.md
@@ -8,7 +8,7 @@ tags: billing, past due, pay now, bill, invoice, pay
ms.service: cost-management-billing ms.subservice: billing ms.topic: how-to
-ms.date: 12/17/2020
+ms.date: 01/13/2021
ms.author: banders ---
@@ -20,6 +20,8 @@ This article applies to customers with a Microsoft Customer Agreement (MCA).
There are two ways to pay for your bill for Azure. You can pay with the default payment method of your billing profile or you can make a one-time payment called **Pay now**.
+If you signed up for Azure through a Microsoft representative, then your default payment method will always be set to *check or wire transfer*.
+ If you have Azure credits, they automatically apply to your invoice each billing period. ## Pay by default payment method
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/test-through-simulations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/test-through-simulations.md
@@ -38,9 +38,9 @@ We have partnered with [BreakingPoint Cloud](https://www.ixiacom.com/products/br
|--------- |--------- | |Target IP address | Enter one of your public IP address you want to test. | |Port Number | Enter _443_. |
- |DDoS Profile | Possible values include **DNS Flood**, **NTPv2 Flood**, **SSDP Flood**, **TCP SYN Flood**, **UDP 64B Flood**, **UDP 128B Flood**, **UDP 256B Flood**, **UDP 512B Flood**, **UDP 1024B Flood**, **UDP 1514B Flood**, **UDP Fragmentation** **UDP Memcached**.|
- |Test Size | Possible values include **100K pps, 50 Mbps and 4 source IPs**, **200K pps, 100 Mbps and 8 source IPs**, **400K pps, 200Mbps and 16 source IPs**, **800K pps, 400 Mbps and 32 source IPs**. |
- |Test Duration | Possible values include **10 Minutes**, **15 Minutes**, **20 Minutes**, **25 Minutes**, **30 Minutes**.|
+ |DDoS Profile | Possible values include `DNS Flood`, `NTPv2 Flood`, `SSDP Flood`, `TCP SYN Flood`, `UDP 64B Flood`, `UDP 128B Flood`, `UDP 256B Flood`, `UDP 512B Flood`, `UDP 1024B Flood`, `UDP 1514B Flood`, `UDP Fragmentation`, `UDP Memcached`.|
+ |Test Size | Possible values include `100K pps, 50 Mbps and 4 source IPs`, `200K pps, 100 Mbps and 8 source IPs`, `400K pps, 200Mbps and 16 source IPs`, `800K pps, 400 Mbps and 32 source IPs`. |
+ |Test Duration | Possible values include `10 Minutes`, `15 Minutes`, `20 Minutes`, `25 Minutes`, `30 Minutes`.|
It should now appear like this:
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-activate-and-set-up-your-on-premises-management-console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-on-premises-management-console.md
@@ -4,7 +4,7 @@ description: Management console activation and setup ensures that sensors are re
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 12/24/2020
+ms.date: 1/12/2021
ms.topic: how-to ms.service: azure ---
@@ -46,7 +46,7 @@ After initial activation, the number of monitored devices might exceed the numbe
## Set up a certificate
-Following installation of the management console, a local self-signed certificate is generated and used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate. We recommend that you work with a trusted CA-signed certificate and not use the locally generated self-signed certificate.
+Following installation of the management console, a local self-signed certificate is generated and used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate.
Two levels of security are available:
@@ -56,7 +56,9 @@ Two levels of security are available:
The console supports the following types of certificates: - Private and Enterprise Key Infrastructure (private PKI)+ - Public Key Infrastructure (public PKI)+ - Locally generated on the appliance (locally self-signed) > [!IMPORTANT]
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-activate-and-set-up-your-sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-sensor.md
@@ -4,7 +4,7 @@ description: This article describes how to sign in and activate a sensor console
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 12/26/2020
+ms.date: 1/12/2021
ms.topic: how-to ms.service: azure ---
@@ -60,10 +60,12 @@ Two levels of security are available:
The console supports the following certificate types: - Private and Enterprise Key Infrastructure (private PKI)+ - Public Key Infrastructure (public PKI)+ - Locally generated on the appliance (locally self-signed)
- > [IMPORTANT]
+ > [!IMPORTANT]
> We recommend that you don't use the default self-signed certificate. The certificate is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks. ### Sign in and activate the sensor
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-individual-sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-individual-sensors.md
@@ -4,7 +4,7 @@ description: Learn how to manage individual sensors, including managing activati
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 01/10/2021
+ms.date: 1/12/2021
ms.topic: how-to ms.service: azure ---
@@ -85,7 +85,7 @@ You'll receive an error message if the activation file could not be uploaded. Th
## Manage certificates
-Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate. For more information about first time setup, see [Sign in and activate a sensor](how-to-activate-and-set-up-your-sensor.md).
+Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate. For more information about first-time setup, see [Sign in and activate a sensor](how-to-activate-and-set-up-your-sensor.md).
This article provides information on updating certificates, working with certificate CLI commands, and supported certificates and certificate parameters.
@@ -93,11 +93,34 @@ This article provides information on updating certificates, working with certifi
Azure Defender for IoT uses SSL/TLS certificates to:
-1. Meet specific certificate and encryption requirements requested by your organization by uploading the CA-signed certificate.
+- Meet specific certificate and encryption requirements requested by your organization by uploading the CA-signed certificate.
-1. Allow validation between the management console and connected sensors, and between a management console and a High Availability management console. Validations is evaluated against a Certificate Revocation List, and the certificate expiration date. **If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console. This option is enabled by default after installation.**
+- Allow validation between the management console and connected sensors, and between a management console and a High Availability management console. Validation is evaluated against a Certificate Revocation List and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console*. This option is enabled by default after installation.
- Third party Forwarding rules, for example alert information sent to SYSLOG, Splunk or ServiceNow; or communication with Active Directory are not validated.
+ Third-party forwarding rules (for example, alert information sent to SYSLOG, Splunk, or ServiceNow) and communications with Active Directory are not validated.
+
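The expiration part of the validation described above can be sketched with Python's standard `ssl` helpers. This is a minimal illustration only, not the appliance's actual validation logic, and the date strings are hypothetical:

```python
import ssl
import time

def within_validity_window(not_before, not_after, now=None):
    """Return True if `now` falls inside the certificate's validity window.

    The date strings use the notBefore/notAfter format found in certificates,
    for example 'Jan  5 09:34:43 2018 GMT', as accepted by
    ssl.cert_time_to_seconds.
    """
    now = time.time() if now is None else now
    return ssl.cert_time_to_seconds(not_before) <= now <= ssl.cert_time_to_seconds(not_after)

# Hypothetical window covering 2020 only, checked at a moment inside it:
print(within_validity_window("Jan  1 00:00:00 2020 GMT",
                             "Dec 31 23:59:59 2020 GMT",
                             now=ssl.cert_time_to_seconds("Jun 15 12:00:00 2020 GMT")))
```

A full validator would also consult the Certificate Revocation List, which this sketch omits.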
+#### SSL certificates
+
+The Defender for IoT sensor and on-premises management console use SSL and TLS certificates for the following functions:
+
+ - Secure communications between users, and the web console of the appliance.
+
+ - Secure communications to the REST API on the sensor and on-premises management console.
+
+ - Secure communications between the sensors and an on-premises management console.
+
+Once installed, the appliance generates a local self-signed certificate to allow preliminary access to the web console. Enterprise SSL and TLS certificates can be installed by using the [`cyberx-xsense-certificate-import`](#cli-commands) command-line tool.
+
+ > [!NOTE]
+ > For integrations and forwarding rules where the appliance is the client and initiator of the session, specific certificates are used and are not related to the system certificates.
+ >
+ >In these cases, the certificates are typically received from the server, or use asymmetric encryption where a specific certificate will be provided to set up the integration.
+
+Appliances may use unique certificate files. If you need to replace a certificate that you have uploaded:
+
+- From version 10.0, the certificate can be replaced from the System Settings menu.
+
+- For versions previous to 10.0, the SSL certificate can be replaced using the command line tool.
### Update certificates
@@ -106,15 +129,19 @@ Sensor Administrator users can update certificates.
To update a certificate: 1. Select **System Settings**.+ 1. Select **SSL/TLS Certificates.** 1. Delete or edit the certificate and add a new one.+ - Add a certificate name.
+
- Upload a CRT file and key file and enter a passphrase.
- - Upload a PEM file if required.
+ - Upload a PEM file if necessary.
To change the validation setting: 1. Enable or Disable the **Enable Certificate Validation** toggle.+ 1. Select **Save**. If the option is enabled and validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console.
@@ -123,88 +150,167 @@ If the option is enabled and validation fails, communication between the managem
The following certificates are supported: -- Private / Enterprise Key Infrastructure (Private PKI)
+- Private and Enterprise Key Infrastructure (Private PKI)
+ - Public Key Infrastructure (Public PKI) -- Locally generated on the appliance (locally self-signed). **Using self-signed certificates is not recommended.** This connection is *insecure* and should be used for test environments only. The owner of the certificate cannot be validated, and the security of your system cannot be maintained. Self-signed certificates should never be used for production networks. +
+- Locally generated on the appliance (locally self-signed).
+
+> [!IMPORTANT]
+> We don't recommend using a self-signed certificate. This type of connection is not secure and should be used for test environments only. Because the owner of the certificate can't be validated and the security of your system can't be maintained, self-signed certificates should never be used for production networks.
+
+### Supported SSL Certificates
The following parameters are supported.
-Certificate CRT
+
+**Certificate CRT**
- The primary certificate file for your domain name+ - Signature Algorithm = SHA256RSA - Signature Hash Algorithm = SHA256 - Valid from = Valid past date - Valid To = Valid future date-- Public Key = RSA 2048bits (Minimum) or 4096bits
+- Public Key = RSA 2048 bits (Minimum) or 4096 bits
- CRL Distribution Point = URL to .crl file-- Subject CN = URL, can be a wildcard certificate e.g. example.contoso.com or *.contoso.com**-- Subject (C)ountry = defined, e.g. US-- Subject (OU) Org Unit = defined, e.g. Contoso Labs-- Subject (O)rganization = defined, e.g. Contoso Inc.
+- Subject CN = URL, can be a wildcard certificate; for example, Sensor.contoso.<span>com, or *.contoso.<span>com
+- Subject (C)ountry = defined, for example, US
+- Subject (OU) Org Unit = defined, for example, Contoso Labs
+- Subject (O)rganization = defined, for example, Contoso Inc.
+
+**Key File**
-Key File
+- The key file generated when you created CSR.
-- The key file generated when you created CSR-- RSA 2048bits (Minimum) or 4096bits
+- RSA 2048 bits (Minimum) or 4096 bits.
-Certificate Chain
+ > [!Note]
+ > Using a key length of 4096 bits:
+ > - The SSL handshake at the start of each connection will be slower.
+ > - There's an increase in CPU usage during handshakes.
+
+**Certificate Chain**
- The intermediate certificate file (if any) that was supplied by your CA+ - The CA certificate that issued the server's certificate should be first in the file, followed by any others up to the root. - Can include Bag attributes.
-Passphrase
+**Passphrase**
+
+- One key supported.
-- 1 key supported-- Setup when importing the certificate
+- Set up when you're importing the certificate.
-Certificates with other parameters may work but cannot be supported by Microsoft.
+Certificates with other parameters might work, but Microsoft doesn't support them.
#### Encryption key artifacts
-**.pem – Certificate Container File**
+**.pem – certificate container file**
-The name is from Privacy Enhanced Mail (PEM), an historic method for secure email but the container format it used lives on, and is a base64 translation of the x509 ASN.1 keys. 
+Privacy Enhanced Mail (PEM) files were the general file type used to secure email. Nowadays, PEM files are used with certificates and use x509 ASN.1 keys.
-Defined in RFCs 1421 to 1424: a container format that may include just the public certificate (such as with Apache installs, and CA certificate files /etc/ssl/certs), or may include an entire certificate chain including public key, private key, and root certificates.
+The container file is defined in RFCs 1421 to 1424. It's a container format that may include just the public certificate (as with Apache installs and CA certificate files in /etc/ssl/certs), or an entire certificate chain including public key, private key, and root certificates.
-It may also encode a CSR as the PKCS10 format can be translated into PEM.
+It may also encode a CSR as the PKCS10 format, which can be translated into PEM.
-**.cert .cer .crt – Certificate Container File**
+**.cert .cer .crt – certificate container file**
-A .pem (or rarely .der) formatted file with a different extension. It is recognized by Windows Explorer as a certificate. The .pem file is not recognized by Windows Explorer.
+A `.pem`, or `.der` formatted file with a different extension. The file is recognized by Windows Explorer as a certificate. The `.pem` file is not recognized by Windows Explorer.
**.key – Private Key File**
-A KEY file is the same "format" as a PEM file, but it has a different extension.
-##### Use CLI commands to deploy certificates
+A key file is in the same format as a PEM file, but it has a different extension.
+
+#### Additional commonly available key artifacts
+
+**.csr – certificate signing request**.
-Use the *cyberx-xsense-certificate-import* CLI command to import certificates. To use this tool, certificate files need to be uploaded to the device (using tools such as winscp or wget).
+This file is used for submission to certificate authorities. The actual format is PKCS10, which is defined in RFC 2986 and may include some or all of the key details of the requested certificate, such as subject, organization, and state. The CA signs the public key in the request and returns a certificate.
+
+The returned certificate is the public certificate, which includes the public key but not the private key.
+
+**.pkcs12 .pfx .p12 – password container**.
+
+Originally defined by RSA in the Public-Key Cryptography Standards (PKCS), the 12 variant was enhanced by Microsoft and later submitted as RFC 7292.
+
+This container format requires a password and contains both public and private certificate pairs. Unlike `.pem` files, this container is fully encrypted.
+
+You can use OpenSSL to turn this into a `.pem` file with both public and private keys: `openssl pkcs12 -in file-to-convert.p12 -out converted-file.pem -nodes` 
+
+**.der – binary encoded PEM**.
+
+DER is the way to encode ASN.1 syntax in binary; a `.pem` file is just a Base64-encoded `.der` file.
+
+OpenSSL can convert these files to a `.pem`: `openssl x509 -inform der -in to-convert.der -out converted.pem`.
+
+Windows will recognize these files as certificate files. By default, Windows will export certificates as `.der` formatted files with a different extension.
+
+**.crl - certificate revocation list**.
+Certificate authorities produce these files as a way to de-authorize certificates before their expiration.
+
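The PEM/DER relationship described above can be demonstrated with Python's standard `ssl` module, which converts between the two forms without validating the contents. The bytes below are a stand-in, not a real certificate:

```python
import ssl

# Placeholder DER bytes; a real certificate would be the binary ASN.1 DER blob.
der_bytes = b"stand-in bytes, not a real certificate"

# PEM is just the Base64 text form of the same DER bytes, wrapped in header
# and footer lines.
pem_text = ssl.DER_cert_to_PEM_cert(der_bytes)
print(pem_text.splitlines()[0])  # the PEM header line

# Converting back recovers the original binary exactly.
round_trip = ssl.PEM_cert_to_DER_cert(pem_text)
assert round_trip == der_bytes
```

The same round trip is what the `openssl x509 -inform der` and `-outform der` commands perform on real certificate files.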
+##### CLI commands
+
+Use the `cyberx-xsense-certificate-import` CLI command to import certificates. To use this tool, you need to upload certificate files to the device by using tools such as WinSCP or Wget.
The command supports the following input flags: --h Show the command line help syntax
+- `-h`: Shows the command-line help syntax.
+
+- `--crt`: Path to a certificate file (.crt extension).
+- `--key`: \*.key file. Key length should be a minimum of 2,048 bits.
+- `--chain`: Path to a certificate chain file (optional).
+- `--pass`: Passphrase used to encrypt the certificate (optional).
+- `--passphrase-set`: Default = `False`, unused. Set to `True` to use the passphrase supplied with the previous certificate (optional).
+When you're using the CLI command:
-When using the CLI command:
+- Verify that the certificate files are readable on the appliance.
-- Verify the certificate files are readable on the appliance.
+- Verify that the domain name and IP in the certificate match the configuration that the IT department has planned.
-- Verify that the domain name and IP in the certificate match the configuration planned by the IT department.
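As a sketch, the flags listed above could be assembled programmatically before invoking the tool on the appliance (for example, via `subprocess.run`). The paths and helper name here are hypothetical, for illustration only:

```python
def build_import_cmd(crt_path, key_path, chain_path=None, passphrase=None):
    """Assemble the argument list for the certificate import tool.

    The tool name and flags mirror the ones documented above; the path and
    passphrase values are caller-supplied.
    """
    cmd = ["cyberx-xsense-certificate-import", "--crt", crt_path, "--key", key_path]
    if chain_path:
        cmd += ["--chain", chain_path]   # optional certificate chain file
    if passphrase:
        cmd += ["--pass", passphrase]    # optional passphrase for the key
    return cmd

# Hypothetical file locations after uploading with WinSCP or Wget:
print(build_import_cmd("/tmp/sensor.crt", "/tmp/sensor.key", chain_path="/tmp/chain.pem"))
```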
+### Use OpenSSL to manage certificates
+Manage your certificates with the following commands:
+
+| Description | CLI Command |
+|--|--|
+| Generate a new private key and Certificate Signing Request | `openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key` |
+| Generate a self-signed certificate | `openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt` |
+| Generate a certificate signing request (CSR) for an existing private key | `openssl req -out CSR.csr -key privateKey.key -new` |
+| Generate a certificate signing request based on an existing certificate | `openssl x509 -x509toreq -in certificate.crt -out CSR.csr -signkey privateKey.key` |
+| Remove a passphrase from a private key | `openssl rsa -in privateKey.pem -out newPrivateKey.pem` |
+
+If you need to check the information within a certificate, CSR, or private key, use these commands:
+
+| Description | CLI Command |
+|--|--|
+| Check a Certificate Signing Request (CSR) | `openssl req -text -noout -verify -in CSR.csr` |
+| Check a private key | `openssl rsa -in privateKey.key -check` |
+| Check a certificate | `openssl x509 -in certificate.crt -text -noout` |
+
+If you receive an error that the private key doesn't match the certificate, or that a certificate that you installed to a site is not trusted, use these commands to fix the error:
+
+| Description | CLI Command |
+|--|--|
+| Check an MD5 hash of the public key to ensure that it matches with what is in a CSR or private key | 1. `openssl x509 -noout -modulus -in certificate.crt \| openssl md5` <br /> 2. `openssl rsa -noout -modulus -in privateKey.key \| openssl md5` <br /> 3. `openssl req -noout -modulus -in CSR.csr \| openssl md5` |
+
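The modulus comparison in the table above can be mimicked in Python with `hashlib`. This is an illustration only: the modulus strings are hypothetical, and the real `openssl md5` pipeline hashes the command's raw output (including its trailing newline), so only digests computed the same way are comparable:

```python
import hashlib

def md5_hex(text):
    # Digest of a modulus string, analogous to piping `openssl ... -modulus`
    # output through `openssl md5`.
    return hashlib.md5(text.encode("ascii")).hexdigest()

def moduli_match(*moduli):
    # True when every modulus hashes to the same digest, meaning the
    # certificate, key, and CSR all share one public key.
    return len({md5_hex(m) for m in moduli}) == 1

# Hypothetical modulus lines as `openssl -modulus` would print them:
cert_mod = "Modulus=B1A2C3"
key_mod  = "Modulus=B1A2C3"
csr_mod  = "Modulus=FFFFFF"
print(moduli_match(cert_mod, key_mod))           # certificate and key agree
print(moduli_match(cert_mod, key_mod, csr_mod))  # CSR does not
```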
+To convert certificates and keys to different formats to make them compatible with specific types of servers or software, use these commands:
+
+| Description | CLI Command |
+|--|--|
+| Convert a DER file (.crt .cer .der) to PEM | `openssl x509 -inform der -in certificate.cer -out certificate.pem` |
+| Convert a PEM file to DER | `openssl x509 -outform der -in certificate.pem -out certificate.der` |
+| Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to PEM | `openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes` <br />You can add `-nocerts` to only output the private key, or add `-nokeys` to only output the certificates. |
+| Convert a PEM certificate file and a private key to PKCS#12 (.pfx .p12) | `openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt` |
## Connect a sensor to the management console
-This section describes how to ensure connection between the sensor and the on-premises management console. You need to do this if you're working in an air-gapped network and want to send asset and alert information to the management console from the sensor. This connection also allows the management console to push system settings to the sensor and perform other management tasks on the sensor.
+This section describes how to ensure connection between the sensor and the on-premises management console. Do this if you're working in an air-gapped network and want to send asset and alert information to the management console from the sensor. This connection also allows the management console to push system settings to the sensor and perform other management tasks on the sensor.
To connect:
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-the-on-premises-management-console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-the-on-premises-management-console.md
@@ -4,7 +4,7 @@ description: Learn about on-premises management console options like backup and
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 12/12/2020
+ms.date: 1/12/2021
ms.topic: article ms.service: azure ---
@@ -44,10 +44,27 @@ Azure Defender for IoT uses SSL and TLS certificates to:
- Meet specific certificate and encryption requirements requested by your organization by uploading the CA-signed certificate. -- Allow validation between the management console and connected sensors, and between a management console and a high-availability management console. Validation is evaluated against a certificate revocation list and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error appears in the console.* This option is enabled by default after installation.
+- Allow validation between the management console and connected sensors, and between a management console and a high-availability management console. Validation is evaluated against a certificate revocation list and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error appears in the console*. This option is enabled by default after installation.
Third-party forwarding rules aren't validated. Examples are alert information sent to SYSLOG, Splunk, or ServiceNow; and communication with Active Directory.
+#### SSL certificates
+
+The Defender for IoT sensor and on-premises management console use SSL and TLS certificates for the following functions:
+
+ - Secure communications between users, and the web console of the appliance.
+
+ - Secure communications to the REST API on the sensor and on-premises management console.
+
+ - Secure communications between the sensors and an on-premises management console.
+
+Once installed, the appliance generates a local self-signed certificate to allow preliminary access to the web console. Enterprise SSL and TLS certificates can be installed by using the [`cyberx-xsense-certificate-import`](#cli-commands) command-line tool.
+
+ > [!NOTE]
+ > For integrations and forwarding rules where the appliance is the client and initiator of the session, specific certificates are used and are not related to the system certificates.
+ >
+ >In these cases, the certificates are typically received from the server, or use asymmetric encryption where a specific certificate will be provided to set up the integration.
+ ### Update certificates Administrator users of the on-premises management console can update certificates.
@@ -55,16 +72,19 @@ Administrator users of the on-premises management console can update certificate
To update a certificate: 1. Select **System Settings**.+ 1. Select **SSL/TLS Certificates**. 1. Delete or edit the certificate and add a new one. - Add a certificate name.
+
- Upload a CRT file and key file, and enter a passphrase. - Upload a PEM file if necessary. To change the validation setting: 1. Turn on or turn off the **Enable Certificate Validation** toggle.+ 1. Select **Save**. If the option is enabled and validation fails, communication between the management console and the sensor is halted and a validation error appears in the console.
@@ -73,25 +93,30 @@ If the option is enabled and validation fails, communication between the managem
The following certificates are supported: -- Private and Enterprise Key Infrastructure (Private PKI)
+- Private and Enterprise Key Infrastructure (Private PKI)
+
- Public Key Infrastructure (Public PKI) + - Locally generated on the appliance (locally self-signed) > [!IMPORTANT]
- > We don't recommend that you use self-signed certificates. This connection is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Self-signed certificates should never be used for production networks.
+ > We don't recommend using a self-signed certificate. This type of connection is not secure and should be used for test environments only. Because the owner of the certificate can't be validated and the security of your system can't be maintained, self-signed certificates should never be used for production networks.
+
+### Supported SSL Certificates
The following parameters are supported: **Certificate CRT** - The primary certificate file for your domain name+ - Signature Algorithm = SHA256RSA - Signature Hash Algorithm = SHA256 - Valid from = Valid past date - Valid To = Valid future date - Public Key = RSA 2048 bits (Minimum) or 4096 bits - CRL Distribution Point = URL to .crl file-- Subject CN = URL, can be a wildcard certificate; for example, www.contoso.com or \*.contoso.com
+- Subject CN = URL, can be a wildcard certificate; for example, Sensor.contoso.<span>com, or *.contoso.<span>com
- Subject (C)ountry = defined, for example, US - Subject (OU) Org Unit = defined; for example, Contoso Labs - Subject (O)rganization = defined; for example, Contoso Inc
@@ -99,17 +124,25 @@ The following parameters are supported:
**Key file** - The key file generated when you created the CSR+ - RSA 2048 bits (minimum) or 4096 bits
+ > [!Note]
+ > Using a key length of 4096 bits:
+ > - The SSL handshake at the start of each connection will be slower.
+ > - There's an increase in CPU usage during handshakes.
+ **Certificate chain** - The intermediate certificate file (if any) that was supplied by your CA.+ - The CA certificate that issued the server's certificate should be first in the file, followed by any others up to the root. - The chain can include bag attributes. **Passphrase** - One key is supported.+ - Set up when you're importing the certificate. Certificates with other parameters might work, but Microsoft doesn't support them.
@@ -118,23 +151,51 @@ Certificates with other parameters might work, but Microsoft doesn't support the
**.pem: certificate container file**
-The name is from Privacy Enhanced Mail (PEM), a historic method for secure email. The container format is a Base64 translation of the x509 ASN.1 keys. 
+Privacy Enhanced Mail (PEM) files were the general file type used to secure email. Nowadays, PEM files are used with certificates and use x509 ASN.1 keys.
-This file is defined in RFCs 1421 to 1424: a container format that might include just the public certificate (such as with Apache installations, CA certificate files, and ETC SSL certificates). Or it might include an entire certificate chain, including a public key, a private key, and root certificates.
+The container file is defined in RFCs 1421 to 1424. It's a container format that may include just the public certificate (as with Apache installs and CA certificate files in /etc/ssl/certs), or an entire certificate chain including public key, private key, and root certificates.
-It might also encode a CSR, because the PKCS10 format can be translated into PEM.
+It may also encode a CSR as the PKCS10 format, which can be translated into PEM.
**.cert .cer .crt: certificate container file**
-This is a .pem (or rarely, .der) formatted file with a different extension. Windows File Explorer recognizes it as a certificate. File Explorer doesn't recognize the .pem file.
+A `.pem`, or `.der` formatted file with a different extension. The file is recognized by Windows Explorer as a certificate. The `.pem` file is not recognized by Windows Explorer.
**.key: private key file**
-A key file is the same format as a PEM file, but it has a different extension.
+A key file is in the same format as a PEM file, but it has a different extension.
+
+#### Additional commonly available key artifacts
+
+**.csr – certificate signing request**.
+
+This file is used for submission to certificate authorities. The actual format is PKCS10, which is defined in RFC 2986 and may include some or all of the key details of the requested certificate, such as subject, organization, and state. The CA signs the public key in the request and returns a certificate.
+
+The returned certificate is the public certificate, which includes the public key but not the private key.
+
+**.pkcs12 .pfx .p12 – password container**.
+
+Originally defined by RSA in the Public-Key Cryptography Standards (PKCS), the 12 variant was enhanced by Microsoft and later submitted as RFC 7292.
+
+This container format requires a password and contains both public and private certificate pairs. Unlike `.pem` files, this container is fully encrypted.
+
+You can use OpenSSL to turn the file into a `.pem` file with both public and private keys: `openssl pkcs12 -in file-to-convert.p12 -out converted-file.pem -nodes` 
+
+**.der – binary encoded PEM**.
+
+DER is the way to encode ASN.1 syntax in binary; a `.pem` file is just a Base64-encoded `.der` file.
+
+OpenSSL can convert these files to a `.pem`: `openssl x509 -inform der -in to-convert.der -out converted.pem`.
+
+Windows will recognize these files as certificate files. By default, Windows will export certificates as `.der` formatted files with a different extension.
+
+**.crl - certificate revocation list**.
+
+Certificate authorities produce these as a way to de-authorize certificates before their expiration.
#### CLI commands
-Use the `cyberx-xsense-certificate-import` CLI command to import certificates. To use this tool, you need to upload certificate files to the device (by using tools such as winscp or wget).
+Use the `cyberx-xsense-certificate-import` CLI command to import certificates. To use this tool, you need to upload certificate files to the device by using tools such as WinSCP or Wget.
The command supports the following input flags:
@@ -156,6 +217,41 @@ When you're using the CLI command:
- Verify that the domain name and IP in the certificate match the configuration that the IT department has planned.
+### Use OpenSSL to manage certificates
+
+Manage your certificates with the following commands:
+
+| Description | CLI Command |
+|--|--|
+| Generate a new private key and Certificate Signing Request | `openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key` |
+| Generate a self-signed certificate | `openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt` |
+| Generate a certificate signing request (CSR) for an existing private key | `openssl req -out CSR.csr -key privateKey.key -new` |
+| Generate a certificate signing request based on an existing certificate | `openssl x509 -x509toreq -in certificate.crt -out CSR.csr -signkey privateKey.key` |
+| Remove a passphrase from a private key | `openssl rsa -in privateKey.pem -out newPrivateKey.pem` |
+
+If you need to check the information within a certificate, CSR, or private key, use these commands:
+
+| Description | CLI Command |
+|--|--|
+| Check a Certificate Signing Request (CSR) | `openssl req -text -noout -verify -in CSR.csr` |
+| Check a private key | `openssl rsa -in privateKey.key -check` |
+| Check a certificate | `openssl x509 -in certificate.crt -text -noout` |
+
+If you receive an error that the private key doesn't match the certificate, or that a certificate that you installed to a site is not trusted, use these commands to fix the error:
+
+| Description | CLI Command |
+|--|--|
+| Check an MD5 hash of the public key to ensure that it matches with what is in a CSR or private key | 1. `openssl x509 -noout -modulus -in certificate.crt \| openssl md5` <br /> 2. `openssl rsa -noout -modulus -in privateKey.key \| openssl md5` <br /> 3. `openssl req -noout -modulus -in CSR.csr \| openssl md5` |
+
+To convert certificates and keys to different formats to make them compatible with specific types of servers or software, use these commands:
+
+| Description | CLI Command |
+|--|--|
+| Convert a DER file (.crt .cer .der) to PEM | `openssl x509 -inform der -in certificate.cer -out certificate.pem` |
+| Convert a PEM file to DER | `openssl x509 -outform der -in certificate.pem -out certificate.der` |
+| Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to PEM | `openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes` <br />You can add `-nocerts` to only output the private key, or add `-nokeys` to only output the certificates. |
+| Convert a PEM certificate file and a private key to PKCS#12 (.pfx .p12) | `openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt` |
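As a quick sanity check of the DER/PEM conversions in the table, you can round-trip a certificate from PEM to DER and back and confirm nothing changed. This sketch assumes `openssl` is available; the `rt.key`, `rt.pem`, `rt.der`, and `rt2.pem` file names are illustrative only:

```shell
# Generate a disposable self-signed certificate in PEM format.
openssl req -x509 -sha256 -nodes -days 1 -newkey rsa:2048 \
  -subj "/CN=example.test" -keyout rt.key -out rt.pem 2>/dev/null

# PEM -> DER, then DER -> PEM, using the conversion commands from the table.
openssl x509 -outform der -in rt.pem -out rt.der
openssl x509 -inform der -in rt.der -out rt2.pem

# cmp exits nonzero if the round-tripped certificate differs from the original.
cmp -s rt.pem rt2.pem
```

The same pattern applies when migrating a certificate between servers that expect different encodings.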
+ ## Define backup and restore settings The on-premises management console system backup is performed automatically, daily. The data is saved on a different disk. The default location is `/var/cyberx/backups`.
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-mongodb-cosmos-db-online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mongodb-cosmos-db-online.md
@@ -48,7 +48,7 @@ This article describes an online migration from MongoDB to Azure Cosmos DB's API
To complete this tutorial, you need to: * [Complete the pre-migration](../cosmos-db/mongodb-pre-migration.md) steps such as estimating throughput, choosing a partition key, and the indexing policy.
-* [Create an Azure Cosmos DB's API for MongoDB account](https://ms.portal.azure.com/#create/Microsoft.DocumentDB).
+* [Create an Azure Cosmos DB's API for MongoDB account](https://ms.portal.azure.com/#create/Microsoft.DocumentDB) and ensure [SSR (server side retry)](../cosmos-db/prevent-rate-limiting-errors.md) is enabled.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). > [!NOTE]
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
@@ -76,7 +76,7 @@ The systems listed in the following table are considered compatible with Azure I
| [CentOS 7.5](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1804) | ![CentOS + AMD64](./media/tutorial-c-module/green-check.png) | ![CentOS + ARM32v7](./media/tutorial-c-module/green-check.png) | ![CentOS + ARM64](./media/tutorial-c-module/green-check.png) | | [Debian 8](https://www.debian.org/releases/jessie/) | ![Debian 8 + AMD64](./media/tutorial-c-module/green-check.png) | ![Debian 8 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Debian 8 + ARM64](./media/tutorial-c-module/green-check.png) | | [Debian 9](https://www.debian.org/releases/stretch/) | ![Debian 9 + AMD64](./media/tutorial-c-module/green-check.png) | ![Debian 9 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Debian 9 + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Debian 10](https://www.debian.org/releases/buster/) <sup>1</sup> | ![Debian 10 + AMD64](./media/tutorial-c-module/green-check.png) | ![Debian 10 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Debian 10 + ARM64](./media/tutorial-c-module/green-check.png) |
+| [Debian 10](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/tutorial-c-module/green-check.png) | ![Debian 10 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Debian 10 + ARM64](./media/tutorial-c-module/green-check.png) |
| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/tutorial-c-module/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/tutorial-c-module/green-check.png) | | [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/tutorial-c-module/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/tutorial-c-module/green-check.png) | | [RHEL 7.5](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.5_release_notes/index) | ![RHEL 7.5 + AMD64](./media/tutorial-c-module/green-check.png) | ![RHEL 7.5 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![RHEL 7.5 + ARM64](./media/tutorial-c-module/green-check.png) |
@@ -84,16 +84,10 @@ The systems listed in the following table are considered compatible with Azure I
| [Ubuntu 18.04](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | ![Ubuntu 18.04 + AMD64](./media/tutorial-c-module/green-check.png) | ![Ubuntu 18.04 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Ubuntu 18.04 + ARM64](./media/tutorial-c-module/green-check.png) | | [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/tutorial-c-module/green-check.png) | | | | [Yocto](https://www.yoctoproject.org/) | ![Yocto + AMD64](./media/tutorial-c-module/green-check.png) | ![Yocto + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Yocto + ARM64](./media/tutorial-c-module/green-check.png) |
-| Raspberry Pi OS Buster <sup>1</sup> | | ![Raspberry Pi OS Buster + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/tutorial-c-module/green-check.png) |
-| [Ubuntu 20.04 <sup>2</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | ![Ubuntu 20.04 + AMD64](./media/tutorial-c-module/green-check.png) | ![Ubuntu 20.04 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Ubuntu 20.04 + ARM64](./media/tutorial-c-module/green-check.png) |
+| Raspberry Pi OS Buster | | ![Raspberry Pi OS Buster + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/tutorial-c-module/green-check.png) |
+| [Ubuntu 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | ![Ubuntu 20.04 + AMD64](./media/tutorial-c-module/green-check.png) | ![Ubuntu 20.04 + ARM32v7](./media/tutorial-c-module/green-check.png) | ![Ubuntu 20.04 + ARM64](./media/tutorial-c-module/green-check.png) |
-<sup>1</sup> Debian 10 systems, including Raspberry Pi OS Buster, use a version of OpenSSL that IoT Edge doesn't support. Use the following command to install an earlier version before installing IoT Edge:
-
-```bash
-sudo apt-get install libssl1.1
-```
-
-<sup>2</sup> The Debian 9 packages from the [Azure IoT Edge releases repo](https://github.com/Azure/azure-iotedge/releases) should work out of the box with Ubuntu 20.04.
+<sup>1</sup> The Debian 9 packages from the [Azure IoT Edge releases repo](https://github.com/Azure/azure-iotedge/releases) should work out of the box with Ubuntu 20.04.
## Releases
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-data.md
@@ -39,7 +39,7 @@ When you're ready to use the data in your cloud-based storage solution, we recom
1. Consume it directly in Azure Machine Learning solutions like, automated machine learning (automated ML) experiment runs, machine learning pipelines, or the [Azure Machine Learning designer](concept-designer.md).
-4. Create [dataset monitors](#data-drift) for your model output dataset to detect for data drift.
+4. Create [dataset monitors](#drift) for your model output dataset to detect for data drift.
5. If data drift is detected, update your input dataset and retrain your model accordingly.
@@ -47,7 +47,8 @@ The following diagram provides a visual demonstration of this recommended workfl
![Diagram shows the Azure Storage Service which flows into a datastore, which flows into a dataset. The dataset flows into model training, which flows into data drift, which flows back to dataset.](./media/concept-data/data-concept-diagram.svg)
-## Datastores
+<a name="datastores"></a>
+## Connect to storage with datastores
Azure Machine Learning datastores securely keep the connection information to your Azure storage, so you don't have to code it in your scripts. [Register and create a datastore](how-to-access-data.md) to easily connect to your storage account, and access the data in your underlying Azure storage service.
@@ -62,7 +63,8 @@ Supported cloud-based storage services in Azure that can be registered as datast
+ Databricks File System + Azure Database for MySQL
-## Datasets
+<a name="datasets"></a>
+## Reference data in storage with datasets
Azure Machine Learning datasets aren't copies of your data. By creating a dataset, you create a reference to the data in its storage service, along with a copy of its metadata.
@@ -102,7 +104,7 @@ With datasets, you can accomplish a number of machine learning tasks through sea
<a name="label"></a>
-## Data labeling
+## Label data with data labeling projects
Labeling large amounts of data has often been a headache in machine learning projects. Those with a computer vision component, such as image classification or object detection, generally require thousands of images and corresponding labels.
@@ -112,7 +114,7 @@ Create a [data labeling project](how-to-create-labeling-projects.md), and output
<a name="drift"></a>
-## Data drift
+## Monitor model performance with data drift
In the context of machine learning, data drift is the change in model input data that leads to model performance degradation. It is one of the top reasons model accuracy degrades over time, thus monitoring data drift helps detect model performance issues.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-register-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
@@ -1,5 +1,5 @@
---
-title: Create Azure Machine Learning datasets to access data
+title: Create Azure Machine Learning datasets
titleSuffix: Azure Machine Learning description: Learn how to create Azure Machine Learning datasets to access your data for machine learning experiment runs. services: machine-learning
@@ -19,8 +19,6 @@ ms.date: 07/31/2020
# Create Azure Machine Learning datasets -- In this article, you learn how to create Azure Machine Learning datasets to access data for your local or remote experiments with the Azure Machine Learning Python SDK. To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article. By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Also datasets are lazily evaluated, which aids in workflow performance speeds. You can create datasets from datastores, public URLs, and [Azure Open Datasets](../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md).
@@ -127,6 +125,7 @@ To reuse and share datasets across experiment in your workspace, [register your
> Upload files from a local directory and create a FileDataset in a single method with the public preview method, [upload_directory()](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory?preserve-view=true&view=azure-ml-py#upload-directory-src-dir--target--pattern-none--overwrite-false--show-progress-true-). This method is an [experimental](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py#stable-vs-experimental) preview feature, and may change at any time. > > This method uploads data to your underlying storage, and as a result incur storage costs. + ### Create a TabularDataset Use the [`from_delimited_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory) method on the `TabularDatasetFactory` class to read files in .csv or .tsv format, and to create an unregistered TabularDataset. If you're reading from multiple files, results will be aggregated into one tabular representation.
@@ -177,7 +176,6 @@ titanic_ds.take(3).to_pandas_dataframe()
To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets). - ## Explore data After you create and [register](#register-datasets) your dataset, you can load it into your notebook for data exploration prior to model training. If you don't need to do any data exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](how-to-train-with-datasets.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-with-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-datasets.md
@@ -1,7 +1,7 @@
---
-title: Train with azureml-datasets
+title: Train with machine learning datasets
titleSuffix: Azure Machine Learning
-description: Learn how to make your data available to your local or remote compute for ML model training with Azure Machine Learning datasets.
+description: Learn how to make your data available to your local or remote compute for model training with Azure Machine Learning datasets.
services: machine-learning ms.service: machine-learning ms.subservice: core
@@ -17,8 +17,7 @@ ms.custom: how-to, devx-track-python, data4ml
---
-# Train with datasets in Azure Machine Learning
-
+# Train models with Azure Machine Learning datasets
In this article, you learn how to work with [Azure Machine Learning datasets](/python/api/azureml-core/azureml.core.dataset%28class%29?preserve-view=true&view=azure-ml-py) to train machine learning models. You can use datasets in your local or remote compute target without worrying about connection strings or data paths.
@@ -39,7 +38,7 @@ To create and train with datasets, you need:
> [!Note] > Some Dataset classes have dependencies on the [azureml-dataprep](/python/api/azureml-dataprep/?preserve-view=true&view=azure-ml-py) package. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux, Ubuntu, Fedora, and CentOS.
-## Use datasets directly in training scripts
+## Consume datasets in machine learning training scripts
If you have structured data not yet registered as a dataset, create a TabularDataset and use it directly in your training script for your local or remote experiment.
@@ -88,6 +87,7 @@ df = dataset.to_pandas_dataframe()
``` ### Configure the training run+ A [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrun?preserve-view=true&view=azure-ml-py) object is used to configure and submit the training run. This code creates a ScriptRunConfig object, `src`, that specifies
@@ -139,6 +139,7 @@ mnist_ds = Dataset.File.from_files(path = web_paths)
``` ### Configure the training run+ We recommend passing the dataset as an argument when mounting via the `arguments` parameter of the `ScriptRunConfig` constructor. By doing so, you will get the data path (mounting point) in your training script via arguments. This way, you will be able use the same training script for local debugging and remote training on any cloud platform. The following example creates a ScriptRunConfig that passes in the FileDataset via `arguments`. After you submit the run, data files referred by the dataset `mnist_ds` will be mounted to the compute target.
@@ -158,7 +159,7 @@ run = experiment.submit(src)
run.wait_for_completion(show_output=True) ```
-### Retrieve the data in your training script
+### Retrieve data in your training script
The following code shows how to retrieve the data in your script.
@@ -220,10 +221,9 @@ print(os.listdir(mounted_path))
print (mounted_path) ```
+## Get datasets in machine learning scripts
-## Directly access datasets in your script
-
-Registered datasets are accessible both locally and remotely on compute clusters like the Azure Machine Learning compute. To access your registered dataset across experiments, use the following code to access your workspace and registered dataset by name. By default, the [`get_by_name()`](/python/api/azureml-core/azureml.core.dataset.dataset?preserve-view=true&view=azure-ml-py#&preserve-view=trueget-by-name-workspace--name--version--latest--) method on the `Dataset` class returns the latest version of the dataset that's registered with the workspace.
+Registered datasets are accessible both locally and remotely on compute clusters like the Azure Machine Learning compute. To access your registered dataset across experiments, use the following code to access your workspace and get the dataset that was used in your previously submitted run. By default, the [`get_by_name()`](/python/api/azureml-core/azureml.core.dataset.dataset?preserve-view=true&view=azure-ml-py#&preserve-view=trueget-by-name-workspace--name--version--latest--) method on the `Dataset` class returns the latest version of the dataset that's registered with the workspace.
```Python %%writefile $script_folder/train.py
@@ -242,7 +242,7 @@ titanic_ds = Dataset.get_by_name(workspace=workspace, name=dataset_name)
df = titanic_ds.to_pandas_dataframe() ```
-## Accessing source code during training
+## Access source code during training
Azure Blob storage has higher throughput speeds than an Azure file share and will scale to large numbers of jobs started in parallel. For this reason, we recommend configuring your runs to use Blob storage for transferring source code files.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-version-track-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-version-track-datasets.md
@@ -1,7 +1,7 @@
--- title: Dataset versioning titleSuffix: Azure Machine Learning
-description: Learn how to best version your datasets and how versioning works with machine learning pipelines.
+description: Learn how to version machine learning datasets and how versioning works with machine learning pipelines.
services: machine-learning ms.service: machine-learning ms.subservice: core
@@ -15,7 +15,7 @@ ms.custom: how-to, devx-track-python, data4ml
# Customer intent: As a data scientist, I want to version and track datasets so I can use and share them across multiple machine learning experiments. ---
-# Version and track datasets in experiments
+# Version and track Azure Machine Learning datasets
In this article, you'll learn how to version and track Azure Machine Learning datasets for reproducibility. Dataset versioning is a way to bookmark the state of your data so that you can apply a specific version of the dataset for future experiments.
mariadb https://docs.microsoft.com/en-us/azure/mariadb/concepts-certificate-rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concepts-certificate-rotation.md
@@ -5,7 +5,7 @@ author: mksuni
ms.author: sumuth ms.service: mariadb ms.topic: conceptual
-ms.date: 01/15/2021
+ms.date: 01/18/2021
--- # Understanding the changes in the Root CA change for Azure Database for MariaDB
@@ -15,6 +15,9 @@ Azure Database for MariaDB will be changing the root certificate for the client
>[!NOTE] > Based on the feedback from customers we have extended the root certificate deprecation for our existing Baltimore Root CA from October 26th, 2020 till February 15, 2021. We hope this extension provide sufficient lead time for our users to implement the client changes if they are impacted.
+> [!NOTE]
+> This article contains references to the term _slave_, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ ## What update is going to happen? In some cases, applications use a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently customers can only use the predefined certificate to connect to an Azure Database for MariaDB server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors to be non-compliant.
@@ -64,7 +67,7 @@ To avoid your application's availability being interrupted due to certificates
- For .NET (MariaDB Connector/NET, MariaDBConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
- ![Azure Database for MariaDB .net cert](media/overview/netconnecter-cert.png)
+ [![Azure Database for MariaDB .net cert](media/overview/netconnecter-cert.png)](media/overview/netconnecter-cert.png#lightbox)
- For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
@@ -75,10 +78,10 @@ To avoid your application's availability being interrupted due to certificates
(Root CA1: BaltimoreCyberTrustRoot.crt.pem) -----END CERTIFICATE----- -----BEGIN CERTIFICATE-----
- (Root CA2: DigiCertGlobalRootG2.crt.pem)
+ (Root CA2: DigiCertGlobalRootG2.crt.pem)
-----END CERTIFICATE----- ```
-
+ - Replace the original root CA pem file with the combined root CA file and restart your application/client. - In future, after the new certificate deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
@@ -145,11 +148,7 @@ Since this update is a client-side change, if the client used to read data from
### 12. If I'm using Data-in replication, do I need to perform any action?
-> [!NOTE]
-> This article contains references to the term _slave_, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
-* If the data-replication is from a virtual machine (on-prem or Azure virtual machine) to Azure Database for MySQL, you need to check if SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following setting.
+- If the data-replication is from a virtual machine (on-prem or Azure virtual machine) to Azure Database for MySQL, you need to check if SSL is being used to create the replica. Run **SHOW SLAVE STATUS** and check the following setting.
```azurecli-interactive Master_SSL_Allowed : Yes
@@ -172,10 +171,10 @@ If you're using [Data-in replication](concepts-data-in-replication.md) to connec
Master_SSL_Cipher : Master_SSL_Key : ~\azure_mysqlclient_key.pem ```+ If you do see the certificate is provided for the CA_file, SSL_Cert and SSL_Key, you will need to update the file by adding the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). -- If the data-replication is between two Azure Database for MySQL, then you'll need to reset the replica by executing
-**CALL mysql.az_replication_change_master** and provide the new dual root certificate as last parameter [master_ssl_ca](howto-data-in-replication.md#link-the-source-and-replica-servers-to-start-data-in-replication).
+- If the data-replication is between two Azure Database for MySQL, then you'll need to reset the replica by executing **CALL mysql.az_replication_change_master** and provide the new dual root certificate as last parameter [master_ssl_ca](howto-data-in-replication.md#link-the-source-and-replica-servers-to-start-data-in-replication).
### 13. Do we have server-side query to verify if SSL is being used?
mariadb https://docs.microsoft.com/en-us/azure/mariadb/concepts-read-replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concepts-read-replicas.md
@@ -5,7 +5,7 @@ author: savjani
ms.author: pariks ms.service: mariadb ms.topic: conceptual
-ms.date: 01/15/2021
+ms.date: 01/18/2021
ms.custom: references_regions ---
@@ -22,13 +22,13 @@ To learn more about GTID replication, see the [MariaDB replication documentation
## When to use a read replica
-The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the master.
+The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the primary.
A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
-Because replicas are read-only, they don't directly reduce write-capacity burdens on the master. This feature isn't targeted at write-intensive workloads.
+Because replicas are read-only, they don't directly reduce write-capacity burdens on the primary. This feature isn't targeted at write-intensive workloads.
-The read replica feature uses asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the master. Use this feature for workloads that can accommodate this delay.
+The read replica feature uses asynchronous replication. The feature isn't meant for synchronous replication scenarios. There will be a measurable delay between the source and the replica. The data on the replica eventually becomes consistent with the data on the primary. Use this feature for workloads that can accommodate this delay.
## Cross-region replication
@@ -39,11 +39,13 @@ You can have a source server in any [Azure Database for MariaDB region](https://
[![Read replica regions](media/concepts-read-replica/read-replica-regions.png)](media/concepts-read-replica/read-replica-regions.png#lightbox) ### Universal replica regions+ You can create a read replica in any of the following regions, regardless of where your source server is located. The supported universal replica regions include: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East Asia, East US, East US 2, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, UK South, UK West, West Europe, West US, West US 2, West Central US. ### Paired regions+ In addition to the universal replica regions, you can create a read replica in the Azure paired region of your source server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../best-practices-availability-paired-regions.md). If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
@@ -51,7 +53,7 @@ If you are using cross-region replicas for disaster recovery planning, we recomm
However, there are limitations to consider: * Regional availability: Azure Database for MariaDB is available in France Central, UAE North, and Germany Central. However, their paired regions are not available.
-
+ * Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India, Brazil South, and US Gov Virginia. This means that a source server in West India can create a replica in South India. However, a source server in South India cannot create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region is not West India.
@@ -105,7 +107,7 @@ Learn how to [stop replication to a replica](howto-read-replicas-portal.md).
## Failover
-There is no automated failover between source and replica servers.
+There is no automated failover between source and replica servers.
Since replication is asynchronous, there is lag between the source and the replica. The amount of lag can be influenced by a number of factors like how heavy the workload running on the source server is and the latency between data centers. In most cases, replica lag ranges between a few seconds to a couple minutes. You can track your actual replication lag using the metric *Replica Lag*, which is available for each replica. This metric shows the time since the last replayed transaction. We recommend that you identify what your average lag is by observing your replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you can take action.
@@ -114,13 +116,13 @@ Since replication is asynchronous, there is lag between the source and the repli
After you have decided you want to failover to a replica,
-1. Stop replication to the replica<br/>
+1. Stop replication to the replica.
- This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the master. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
+ This step is necessary to make the replica server able to accept writes. As part of this process, the replica server will be delinked from the primary. After you initiate stop replication, the backend process typically takes about 2 minutes to complete. See the [stop replication](#stop-replication) section of this article to understand the implications of this action.
-2. Point your application to the (former) replica
+2. Point your application to the (former) replica.
- Each server has a unique connection string. Update your application to point to the (former) replica instead of the master.
+ Each server has a unique connection string. Update your application to point to the (former) replica instead of the primary.
After your application is successfully processing reads and writes, you have completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 above.
@@ -143,10 +145,10 @@ A read replica is created as a new Azure Database for MariaDB server. An existin
### Replica configuration
-A replica is created by using the same server configuration as the master. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, backup retention period, and MariaDB engine version. The pricing tier can also be changed independently, except to or from the Basic tier.
+A replica is created by using the same server configuration as the primary. After a replica is created, several settings can be changed independently from the source server: compute generation, vCores, storage, backup retention period, and MariaDB engine version. The pricing tier can also be changed independently, except to or from the Basic tier.
> [!IMPORTANT]
-> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the master.
+> Before a source server configuration is updated to new values, update the replica configuration to equal or greater values. This action ensures the replica can keep up with any changes made to the primary.
Firewall rules and parameter settings are inherited from the source server to the replica when the replica is created. Afterwards, the replica's rules are independent.
@@ -167,20 +169,21 @@ Users on the source server are replicated to the read replicas. You can only con
To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. The following server parameters are locked on both the source and replica servers:-- [`innodb_file_per_table`](https://mariadb.com/kb/en/library/innodb-system-variables/#innodb_file_per_table) -- [`log_bin_trust_function_creators`](https://mariadb.com/kb/en/library/replication-and-binary-log-system-variables/#log_bin_trust_function_creators)+
+* [`innodb_file_per_table`](https://mariadb.com/kb/en/library/innodb-system-variables/#innodb_file_per_table)
+* [`log_bin_trust_function_creators`](https://mariadb.com/kb/en/library/replication-and-binary-log-system-variables/#log_bin_trust_function_creators)
The [`event_scheduler`](https://mariadb.com/kb/en/library/server-system-variables/#event_scheduler) parameter is locked on the replica servers.
-To update one of the above parameters on the source server, please delete replica servers, update the parameter value on the master, and recreate replicas.
+To update one of the above parameters on the source server, please delete replica servers, update the parameter value on the primary, and recreate replicas.
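As a sketch of the pre-flight check this implies (file names and values below are hypothetical), you could save `SHOW VARIABLES` output from both servers and compare the locked parameters before deleting and recreating replicas:

```shell
# Hypothetical files holding `SHOW VARIABLES` output previously dumped from
# the source and a replica server.
cat > /tmp/source_vars.txt <<'EOF'
innodb_file_per_table ON
log_bin_trust_function_creators OFF
EOF
cat > /tmp/replica_vars.txt <<'EOF'
innodb_file_per_table ON
log_bin_trust_function_creators OFF
EOF
# diff exits 0 only when both servers report identical values
if diff -q /tmp/source_vars.txt /tmp/replica_vars.txt > /dev/null; then
  echo "locked parameters match"
fi
```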
### Other
-- Creating a replica of a replica is not supported.
-- In-memory tables may cause replicas to become out of sync. This is a limitation of the MariaDB replication technology.
-- Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.
+* Creating a replica of a replica is not supported.
+* In-memory tables may cause replicas to become out of sync. This is a limitation of the MariaDB replication technology.
+* Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.
## Next steps
-- Learn how to [create and manage read replicas using the Azure portal](howto-read-replicas-portal.md)
-- Learn how to [create and manage read replicas using the Azure CLI and REST API](howto-read-replicas-cli.md)
+* Learn how to [create and manage read replicas using the Azure portal](howto-read-replicas-portal.md)
+* Learn how to [create and manage read replicas using the Azure CLI and REST API](howto-read-replicas-cli.md)
mariadb https://docs.microsoft.com/en-us/azure/mariadb/howto-create-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/howto-create-users.md
@@ -5,19 +5,18 @@ author: savjani
ms.author: pariks
ms.service: mariadb
ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/18/2021
---

# Create users in Azure Database for MariaDB

This article describes how you can create users in Azure Database for MariaDB.
+When you first created your Azure Database for MariaDB, you provided a server admin login user name and password. For more information, you can follow the [Quickstart](quickstart-create-mariadb-server-database-using-azure-portal.md). You can locate your server admin login user name from the Azure portal.
+
> [!NOTE]
> This article contains references to the term _slave_, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
-When you first created your Azure Database for MariaDB, you provided a server admin login user name and password. For more information, you can follow the [Quickstart](quickstart-create-mariadb-server-database-using-azure-portal.md). You can locate your server admin login user name from the Azure portal.
-
The server admin user gets certain privileges for your server as listed:

SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
@@ -58,7 +57,7 @@ After the Azure Database for MariaDB server is created, you can use the first se
1. Get the connection information and admin user name. To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
-2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, HeidiSQL, or others.
+2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, HeidiSQL, or others.
   If you are unsure of how to connect, see [Use MySQL Workbench to connect and query data](./connect-workbench.md)

3. Edit and run the following SQL code. Replace the placeholder value `db_user` with your intended new user name, and placeholder value `testdb` with your own database name.
mariadb https://docs.microsoft.com/en-us/azure/mariadb/howto-data-in-replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/howto-data-in-replication.md
@@ -5,7 +5,7 @@ author: savjani
ms.author: pariks
ms.service: mariadb
ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/18/2021
---

# Configure Data-in Replication in Azure Database for MariaDB
@@ -19,6 +19,9 @@ Review the [limitations and requirements](concepts-data-in-replication.md#limita
> [!NOTE]
> If your source server is version 10.2 or newer, we recommend that you set up Data-in Replication by using [Global Transaction ID](https://mariadb.com/kb/en/library/gtid/).
+> [!NOTE]
+> This article contains references to the term _slave_, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
## Create a MariaDB server to use as a replica

1. Create a new Azure Database for MariaDB server (for example, replica.mariadb.database.azure.com). The server is the replica server in Data-in Replication.
@@ -36,10 +39,6 @@ Review the [limitations and requirements](concepts-data-in-replication.md#limita
Update firewall rules using the [Azure portal](howto-manage-firewall-portal.md) or [Azure CLI](howto-manage-firewall-cli.md).
-> [!NOTE]
-> This article contains references to the term _slave_, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
->
-
## Configure the source server

The following steps prepare and configure the MariaDB server hosted on-premises, in a VM, or in a cloud database service for Data-in Replication. The MariaDB server is the source in Data-in Replication.
@@ -90,7 +89,7 @@ The following steps prepare and configure the MariaDB server hosted on-premises,
3. Turn on binary logging.
- To see if binary logging is enabled on the master, enter the following command:
+ To see if binary logging is enabled on the primary, enter the following command:
   ```sql
   SHOW VARIABLES LIKE 'log_bin';
marketplace https://docs.microsoft.com/en-us/azure/marketplace/azure-vm-create-certification-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-certification-faq.md
@@ -6,7 +6,7 @@ ms.subservice: partnercenter-marketplace-publisher
ms.topic: troubleshooting
author: iqshahmicrosoft
ms.author: iqshah
-ms.date: 10/19/2020
+ms.date: 01/15/2021
---

# Troubleshoot virtual machine certification
@@ -18,7 +18,6 @@ This article explains common error messages during VM image publishing, along wi
> [!NOTE]
> If you have questions about this article or suggestions for improvement, contact [Partner Center support](https://aka.ms/marketplacepublishersupport).
-
## VM extension failure

Check to see whether your image supports VM extensions.
@@ -55,13 +54,13 @@ Provisioning issues can include the following failure scenarios:
|1|Invalid virtual hard disk (VHD)|If the specified cookie value in the VHD footer is incorrect, the VHD will be considered invalid.|Re-create the image and submit the request.|
|2|Invalid blob type|VM provisioning failed because the used block is a blob type instead of a page type.|Re-create the image and submit the request.|
|3|Provisioning timeout or not properly generalized|There's an issue with VM generalization.|Re-create the image with generalization and submit the request.|
+|
> [!NOTE] > For more information about VM generalization, see: > - [Linux documentation](azure-vm-create-using-approved-base.md#generalize-the-image) > - [Windows documentation](../virtual-machines/windows/capture-image-resource.md#generalize-the-windows-vm-using-sysprep) - ## VHD specifications ### Conectix cookie and other VHD specifications
@@ -88,7 +87,7 @@ Checksum|4
Unique ID|16
Saved State|1
Reserved|427
-
+|
### VHD specifications
@@ -134,6 +133,7 @@ The following table lists the Linux test cases that the toolkit will run. Test v
|8|Client Alive Interval|Set ClientAliveInterval to 180. Depending on the application's needs, it can be set from 30 to 235. If you're enabling SSH for your end users, this value must be set as explained.|
|9|OS architecture|Only 64-bit operating systems are supported.|
|10|Auto Update|Identifies whether Linux Agent Auto Update is enabled.|
+|
### Common test-case errors
@@ -145,7 +145,7 @@ Refer to the following table for the common errors you might see when running te
| 2 | Bash history test case | An error occurs if the size of the Bash history in your submitted image is more than 1 kilobyte (KB). The size is restricted to 1 KB to ensure that your Bash history file doesn't contain any potentially sensitive information. | Resolve by mounting the VHD to another working VM and make changes to reduce the size to 1 KB or less. For example, delete the `.bash` history files. |
| 3 | Required kernel parameter test case | You'll receive this error when the value for `console` isn't set to `ttyS0`. Check by running the following command: <br /> `cat /proc/cmdline` | Set the value for `console` to `ttyS0`, and resubmit the request. |
| 4 | ClientAlive interval test case | If the toolkit gives you a failed result for this test case, there's an inappropriate value for `ClientAliveInterval`. | Set the value for `ClientAliveInterval` to less than or equal to 235, and then resubmit the request. |
-
+|
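The ClientAliveInterval requirement above can be checked mechanically. A minimal sketch, assuming a file in the usual `sshd_config` format (the path below is a stand-in, not the real `/etc/ssh/sshd_config`):

```shell
# Stand-in sshd_config for illustration; point CONF at the real file in practice.
CONF=/tmp/sshd_config_sample
printf 'ClientAliveInterval 180\n' > "$CONF"
# Extract the configured value and verify it is within the certified range.
interval=$(awk '$1 == "ClientAliveInterval" {print $2}' "$CONF")
if [ "$interval" -le 235 ]; then
  echo "ClientAliveInterval OK ($interval)"
else
  echo "ClientAliveInterval too high ($interval)"
fi
```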
### Windows test cases
@@ -170,8 +170,9 @@ The following table lists the Windows test cases that the toolkit will run, alon
|15|SNMP Services|The Simple Network Management Protocol (SNMP) Services feature isn't yet supported. The application shouldn't be dependent on this feature.|
|16|Windows Internet Name Service|Windows Internet Name Service. This server feature isn't yet supported. The application shouldn't be dependent on this feature.|
|17|Wireless LAN Service|Wireless LAN Service. This server feature isn't yet supported. The application shouldn't be dependent on this feature.|
+|
-If you come across any failures with the preceding test cases, refer to the **Description** column in the table for the solution. For more information, contact the Support team.
+If you come across any failures with the preceding test cases, refer to the **Description** column in the table for the solution. For more information, contact the Support team.
## Data disk size verification
@@ -187,6 +188,7 @@ Refer to the following rules for limitations on OS disk size. When you submit an
|---|---|
|Linux|1 GB to 1023 GB|
|Windows|30 GB to 250 GB|
+|
Because VMs allow access to the underlying operating system, ensure that the VHD size is sufficiently large for the VHD. Disks aren't expandable without downtime. Use a disk size from 30 GB to 50 GB.
@@ -194,6 +196,7 @@ Because VMs allow access to the underlying operating system, ensure that the VHD
|---|---|---|
|>500 tebibytes (TiB)|n/a|Contact the Support team for an exception approval.|
|250-500 TiB|>200 gibibytes (GiB) difference from blob size|Contact the Support team for an exception approval.|
+|
> [!NOTE]
> Larger disk sizes incur higher costs and will result in a delay during the setup and replication process. Because of this delay and cost, the Support team might seek justification for the exception approval.
@@ -204,7 +207,7 @@ To prevent a potential attack related to the WannaCry virus, ensure that all Win
You can verify the image file version from `C:\windows\system32\drivers\srv.sys` or `srv2.sys`.
-The following table shows the minimum patched version of Windows Server:
+The following table shows the minimum patched version of Windows Server:
|OS|Version|
|---|---|
@@ -213,6 +216,7 @@ The following table shows the minimum patched version of Windows Server:
|Windows Server 2012 R2|6.3.9600.18604|
|Windows Server 2016|10.0.14393.953|
|Windows Server 2019|NA|
+|
> [!NOTE]
> Windows Server 2019 doesn't have any mandatory version requirements.
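A quick way to compare an `srv.sys` file version against the table's minimum is a version-aware sort; this is a sketch only, and the `actual` value below is hypothetical (read the real one from the image):

```shell
# Minimum patched version for Windows Server 2016 from the table above.
minimum=10.0.14393.953
# Hypothetical version read from C:\windows\system32\drivers\srv.sys.
actual=10.0.14393.1066
# With a version-aware sort, the smaller version sorts first.
lowest=$(printf '%s\n%s\n' "$minimum" "$actual" | sort -V | head -n 1)
if [ "$lowest" = "$minimum" ]; then
  echo "image meets the minimum patched version"
else
  echo "image needs the WannaCry patch"
fi
```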
@@ -225,8 +229,8 @@ Update the kernel with an approved version, and resubmit the request. You can fi
If your image isn't installed with one of the following kernel versions, update it with the correct patches. Request the necessary approval from the Support team after the image is updated with these required patches:
-- CVE-2019-11477
-- CVE-2019-11478
+- CVE-2019-11477
+- CVE-2019-11478
- CVE-2019-11479

|OS family|Version|Kernel|
@@ -273,6 +277,7 @@ If your image isn't installed with one of the following kernel versions, update
||stretch (security)|4.9.168-1+deb9u3|
||Debian GNU/Linux 10 (buster)|Debian 6.3.0-18+deb9u1|
||buster, sid (stretch backports)|4.19.37-5|
+|
## Image size should be in multiples of megabytes
@@ -298,7 +303,7 @@ To submit your request with SSH disabled image for certification process:
3. Resubmit your certification request.

## Download failure
-
+
Refer to the following table for any issues that arise when you download the VM image with a shared access signature (SAS) URL.

|Scenario|Error|Reason|Solution|
@@ -309,12 +314,13 @@ Refer to the following table for any issues that arise when you download the VM
|4|Invalid signature|The associated SAS URL for the VHD is incorrect.|Get the correct SAS URL.|
|6|HTTP conditional header|The SAS URL is invalid.|Get the correct SAS URL.|
|7|Invalid VHD name|Check to see whether any special characters, such as a percent sign `%` or quotation marks `"`, exist in the VHD name.|Rename the VHD file by removing the special characters.|
+|
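The invalid-VHD-name scenario can be screened for before upload; a minimal sketch (the helper name is illustrative, not part of any Azure tooling):

```shell
# Flag VHD names containing the special characters (% or ") that cause
# SAS URL download failures.
check_vhd_name() {
  case "$1" in
    *[%\"]*) echo "invalid: $1" ;;
    *)       echo "ok: $1" ;;
  esac
}
check_vhd_name 'myimage.vhd'
check_vhd_name 'my%image.vhd'
```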
-## First 1-MB partition (2,048 sectors, each sector of 512 bytes)
+## First 1 MB (2048 sectors, each sector of 512 bytes) partition
-If you're [building your own image](azure-vm-create-using-own-image.md), ensure that the first 2,048 sectors (1 MB) of the OS disk is empty. Otherwise, your publishing will fail. This requirement is applicable only to the OS disk (not to data disks). If you're building your image [from an approved base](azure-vm-create-using-approved-base.md), you can skip this requirement.
+If you are [building your own image](azure-vm-create-using-own-image.md), ensure the first 2048 sectors (1 MB) of the OS disk is empty. Otherwise, your publishing will fail. This requirement is applicable to the OS disk only (not data disks). If you are building your image [from an approved base](azure-vm-create-using-approved-base.md), you can skip this requirement.
-### Create a 1-MB partition (2,048 sectors, each sector of 512 bytes) on an empty VHD (Linux-only steps)
+### Create a 1 MB (2048 sectors, each sector of 512 bytes) partition on an empty VHD
These steps apply to Linux only.
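The arithmetic behind the requirement: 2048 sectors of 512 bytes each is exactly 1 MB, which is also fdisk's default first usable sector, so accepting the default start sector leaves the required region empty. A quick check of the numbers:

```shell
# 2048 sectors x 512 bytes/sector = 1,048,576 bytes = 1 MB reserved
# at the start of the OS disk.
SECTOR_BYTES=512
SECTORS=2048
OFFSET=$((SECTOR_BYTES * SECTORS))
echo "reserved offset: $OFFSET bytes"
```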
@@ -369,17 +375,17 @@ These steps apply to Linux only.
![Putty client command line screenshot showing the commands and output for erased data.](./media/create-vm/vm-certification-issues-solutions-22.png)
- 1. Type `w` to confirm the creation of partition.
+ 1. Type `w` to confirm the creation of partition.
![Putty client command line screenshot showing the commands for creating a partition.](./media/create-vm/vm-certification-issues-solutions-23.png)
- 1. You can verify the partition table by running the command `n fdisk /dev/sdb` and typing `p`. You'll see that partition is created with 2048 offset value.
+ 1. You can verify the partition table by running the command `n fdisk /dev/sdb` and typing `p`. You'll see that partition is created with 2048 offset value.
   ![Putty client command line screenshot showing the commands for creating the 2048 offset.](./media/create-vm/vm-certification-issues-solutions-24.png)

1. Detach the VHD from VM and delete the VM.
-### Create a first 1-MB partition (2,048 sectors, each sector of 512 bytes) by moving existing data on VHD
+### Create a 1 MB (2048 sectors, each sector of 512 bytes) partition by moving existing data on VHD
These steps apply to Linux only.
@@ -447,11 +453,11 @@ When an image is created, it might be mapped to or assigned the wrong OS label.
If all images that are taken from Azure Marketplace are to be reused, the operating system VHD must be generalized.
-* For **Linux**, the following process generalizes a Linux VM and redeploys it as a separate VM.
+- For **Linux**, the following process generalizes a Linux VM and redeploys it as a separate VM.
In the SSH window, enter the following command: `sudo waagent -deprovision+user`.
-* For **Windows**, you generalize Windows images by using `sysreptool`.
+- For **Windows**, you generalize Windows images by using `sysreptool`.
For more information about the `sysreptool` tool, see [System preparation (Sysprep) overview](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview).
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-aws https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-aws.md
@@ -36,7 +36,7 @@ Before you start this tutorial, check you have these prerequisites in place.
**Requirement** | **Details**
--- | ---
-**Appliance** | You need an EC2 VM on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed. Running the appliance on a machine with Windows Server 2019 isn't supported.<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.
+**Appliance** | You need an EC2 VM on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed.<br/> _Running the appliance on a machine with Windows Server 2019 isn't supported_.<br/><br/> - 16 GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.
**Windows instances** | Allow inbound connections on WinRM port 5985 (HTTP), so that the appliance can pull configuration and performance metadata.
**Linux instances** | Allow inbound connections on port 22 (TCP).<br/><br/> The instances should use `bash` as the default shell, otherwise discovery will fail.
@@ -44,7 +44,7 @@ Before you start this tutorial, check you have these prerequisites in place.
To create an Azure Migrate project and register the Azure Migrate appliance, you need an account with:

- Contributor or Owner permissions on an Azure subscription.
-- Permissions to register Azure Active Directory apps.
+- Permissions to register Azure Active Directory (AAD) apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows:
@@ -63,18 +63,20 @@ If you just created a free Azure account, you're the owner of your subscription.
![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-aws/assign-role.png)
-7. In the portal, search for users, and under **Services**, select **Users**.
-8. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
+1. To register the appliance, your Azure account needs **permissions to register AAD apps.**
+1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-aws/register-apps.png)
+1. If the **App registrations** setting is set to **No**, request that the tenant/global admin assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow registration of AAD apps. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare AWS instances

Set up an account that the appliance can use to access AWS instances.
-
-- For Windows servers, set up a local user account on all the Windows servers that you want to include in the discovery. Add the user account to the following groups: - Remote Management Users - Performance Monitor Users - Performance Log users.
- - For Linux servers, you need a root account on the Linux servers that you want to discover.
+- For **Windows servers**, set up a local user account on all the Windows servers that you want to include in the discovery. Add the user account to the following groups: - Remote Management Users - Performance Monitor Users - Performance Log users.
+ - For **Linux servers**, you need a root account on the Linux servers that you want to discover. Refer to the instructions in the [support matrix](migrate-support-matrix-physical.md#physical-server-requirements) for an alternative.
- Azure Migrate uses password authentication when discovering AWS instances. AWS instances don't support password authentication by default. Before you can discover instances, you need to enable password authentication.
    - For Windows machines, allow WinRM port 5985 (HTTP). This allows remote WMI calls.
    - For Linux machines:
@@ -101,12 +103,13 @@ Set up a new Azure Migrate project.
   ![Boxes for project name and region](./media/tutorial-discover-aws/new-project.png)

7. Select **Create**.
-8. Wait a few minutes for the Azure Migrate project to deploy.
-
-The **Azure Migrate: Server Assessment** tool is added by default to the new project.
+8. Wait a few minutes for the Azure Migrate project to deploy. The **Azure Migrate: Server Assessment** tool is added by default to the new project.
![Page showing Server Assessment tool added by default](./media/tutorial-discover-aws/added-tool.png)
+> [!NOTE]
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project)
+
## Set up the appliance

The Azure Migrate appliance is a lightweight appliance, used by Azure Migrate Server Assessment to do the following:
@@ -116,17 +119,14 @@ The Azure Migrate appliance is a lightweight appliance, used by Azure Migrate Se
[Learn more](migrate-appliance.md) about the Azure Migrate appliance.
-
-## Appliance deployment steps
-
To set up the appliance you:
-- Provide an appliance name and generate an Azure Migrate project key in the portal.
-- Download a zipped file with Azure Migrate installer script from the Azure portal.
-- Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
-- Execute the PowerShell script to launch the appliance web application.
-- Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
+1. Provide an appliance name and generate an Azure Migrate project key in the portal.
+1. Download a zipped file with Azure Migrate installer script from the Azure portal.
+1. Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
+1. Execute the PowerShell script to launch the appliance web application.
+1. Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
-### Generate the Azure Migrate project key
+### 1. Generate the Azure Migrate project key
1. In **Migration Goals** > **Servers** > **Azure Migrate: Server Assessment**, select **Discover**.
2. In **Discover machines** > **Are your machines virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
@@ -135,11 +135,10 @@ To set up the appliance you:
1. After the successful creation of the Azure resources, an **Azure Migrate project key** is generated.
1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
-### Download the installer script
+### 2. Download the installer script
In **2: Download Azure Migrate appliance**, click on **Download**.
-
### Verify security

Check that the zipped file is secure, before you deploy it.
@@ -163,7 +162,7 @@ Check that the zipped file is secure, before you deploy it.
Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | ca67e8dbe21d113ca93bfe94c1003ab7faba50472cb03972d642be8a466f78ce
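To verify the download, compare its SHA-256 hash against the value published in the table; this sketch uses a placeholder file in place of the real zip, so the comparison is expected to report a mismatch until you substitute the actual download and the hash for your version:

```shell
# Published hash from the table above (substitute the one for your download).
EXPECTED=ca67e8dbe21d113ca93bfe94c1003ab7faba50472cb03972d642be8a466f78ce
# Stand-in for the real downloaded installer zip.
printf 'placeholder' > /tmp/AzureMigrateInstaller.zip
ACTUAL=$(sha256sum /tmp/AzureMigrateInstaller.zip | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch - do not deploy"
fi
```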
-### Run the Azure Migrate installer script
+### 3. Run the Azure Migrate installer script
The installer script does the following:

- Installs agents and a web application for physical server discovery and assessment.
@@ -192,13 +191,11 @@ Run the script as follows:
If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
-
-
### Verify appliance access to Azure

Make sure that the appliance VM can connect to Azure URLs for [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds.
-### Configure the appliance
+### 4. Configure the appliance
Set up the appliance for the first time.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-gcp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-gcp.md
@@ -36,7 +36,7 @@ Before you start this tutorial, check you have these prerequisites in place.
**Requirement** | **Details**
--- | ---
-**Appliance** | You need a GCP VM instance on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed. Running the appliance on a machine with Windows Server 2019 isn't supported.<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.
+**Appliance** | You need a GCP VM instance on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed.<br/> _Running the appliance on a machine with Windows Server 2019 isn't supported_.<br/><br/> - 16 GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.
**Windows VM instances** | Allow inbound connections on WinRM port 5985 (HTTP), so that the appliance can pull configuration and performance metadata.
**Linux VM instances** | Allow inbound connections on port 22 (TCP).
@@ -44,7 +44,7 @@ Before you start this tutorial, check you have these prerequisites in place.
To create an Azure Migrate project and register the Azure Migrate appliance, you need an account with:

- Contributor or Owner permissions on an Azure subscription.
-- Permissions to register Azure Active Directory apps.
+- Permissions to register Azure Active Directory (AAD) apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows:
@@ -63,22 +63,24 @@ If you just created a free Azure account, you're the owner of your subscription.
![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-gcp/assign-role.png)
-7. In the portal, search for users, and under **Services**, select **Users**.
-8. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
+1. To register the appliance, your Azure account needs **permissions to register AAD apps.**
+1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-gcp/register-apps.png)
+1. If the **App registrations** setting is set to **No**, request that the tenant/global admin assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow registration of AAD apps. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare GCP instances

Set up an account that the appliance can use to access GCP VM instances.
-
-- For Windows servers
+- For **Windows servers**:
- Set up a local user account on non-domain joined machines, and a domain account on domain-joined machines that you want to include in the discovery. Add the user account to the following groups: - Remote Management Users - Performance Monitor Users - Performance Log users.
-- For Linux servers:
+- For **Linux servers**:
- You need a root account on the Linux servers that you want to discover. If you are not able to provide a root account, refer to the instructions in the [support matrix](migrate-support-matrix-physical.md#physical-server-requirements) for an alternative.
- Azure Migrate uses password authentication when discovering GCP VM instances. GCP VM instances don't support password authentication by default. Before you can discover instances, you need to enable password authentication.
1. Sign into each Linux machine.
@@ -104,12 +106,13 @@ Set up a new Azure Migrate project.
   ![Boxes for project name and region](./media/tutorial-discover-gcp/new-project.png)

7. Select **Create**.
-8. Wait a few minutes for the Azure Migrate project to deploy.
-
-The **Azure Migrate: Server Assessment** tool is added by default to the new project.
+8. Wait a few minutes for the Azure Migrate project to deploy. The **Azure Migrate: Server Assessment** tool is added by default to the new project.
![Page showing Server Assessment tool added by default](./media/tutorial-discover-gcp/added-tool.png)
+> [!NOTE]
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project)
+
## Set up the appliance

The Azure Migrate appliance is a lightweight appliance, used by Azure Migrate Server Assessment to do the following:
@@ -119,17 +122,14 @@ The Azure Migrate appliance is a lightweight appliance, used by Azure Migrate Se
[Learn more](migrate-appliance.md) about the Azure Migrate appliance.
-
-## Appliance deployment steps
-
To set up the appliance you:
-- Provide an appliance name and generate an Azure Migrate project key in the portal.
-- Download a zipped file with Azure Migrate installer script from the Azure portal.
-- Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
-- Execute the PowerShell script to launch the appliance web application.
-- Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
+1. Provide an appliance name and generate an Azure Migrate project key in the portal.
+1. Download a zipped file with Azure Migrate installer script from the Azure portal.
+1. Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
+1. Execute the PowerShell script to launch the appliance web application.
+1. Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
-### Generate the Azure Migrate project key
+### 1. Generate the Azure Migrate project key
1. In **Migration Goals** > **Servers** > **Azure Migrate: Server Assessment**, select **Discover**.
2. In **Discover machines** > **Are your machines virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
@@ -138,11 +138,10 @@ To set up the appliance you:
5. After the successful creation of the Azure resources, an **Azure Migrate project key** is generated.
6. Copy the key as you will need it to complete the registration of the appliance during its configuration.
-### Download the installer script
+### 2. Download the installer script
In **2: Download Azure Migrate appliance**, click on **Download**.
-
### Verify security

Check that the zipped file is secure before you deploy it.
@@ -166,7 +165,7 @@ Check that the zipped file is secure, before you deploy it.
Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | ae132ebc574caf231bf41886891040ffa7abbe150c8b50436818b69e58622276
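The hash comparison above can be scripted rather than eyeballed. A minimal sketch, assuming Python is available on the machine doing the download; the empty demo file and its well-known SHA-256 stand in for the real zip and the published value from the table:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with an empty file, whose SHA-256 is well known; in practice,
# hash the downloaded zip and compare with the published value above.
open("sample.zip", "wb").close()
expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
print("hash OK" if sha256_of("sample.zip") == expected else "hash MISMATCH")
```

Chunked reading keeps memory flat even for multi-GB downloads such as the VHD packages.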
-### Run the Azure Migrate installer script
+### 3. Run the Azure Migrate installer script
The installer script does the following:
- Installs agents and a web application for GCP server discovery and assessment.
@@ -195,13 +194,11 @@ Run the script as follows:
If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
-
-
### Verify appliance access to Azure

Make sure that the appliance VM can connect to Azure URLs for [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds.
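The URL connectivity check can be automated. A minimal sketch, assuming Python is available on the appliance; the throwaway local server stands in for the real Azure endpoints, which you'd substitute from the linked URL lists:

```python
import http.server
import threading
import urllib.error
import urllib.request

def url_reachable(url: str, timeout: float = 5.0) -> bool:
    """True if an HTTP(S) request to url gets any response at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # an HTTP error status still means the endpoint answered
    except OSError:
        return False  # DNS failure, timeout, connection refused, ...

# Demo against a throwaway local server; in practice, loop over the
# Azure URLs listed in migrate-appliance.md from the appliance itself.
srv = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
print(url_reachable(f"http://127.0.0.1:{srv.server_port}/"))  # True
srv.shutdown()
```

Treating any HTTP response (even 4xx) as "reachable" is deliberate: the goal here is network/proxy connectivity, not endpoint behavior.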
-### Configure the appliance
+### 4. Configure the appliance
Set up the appliance for the first time.
@@ -234,7 +231,6 @@ Set up the appliance for the first time.
4. If the Azure user account used for logging in has the right [permissions](#prepare-an-azure-user-account) on the Azure resources created during key generation, the appliance registration will be initiated.
5. After the appliance is successfully registered, you can see the registration details by clicking on **View details**.
-
## Start continuous discovery

Now, connect from the appliance to the GCP servers to be discovered, and start the discovery.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-hyper-v https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-hyper-v.md
@@ -38,16 +38,14 @@ Before you start this tutorial, check you have these prerequisites in place.
**Requirement** | **Details**
--- | ---
**Hyper-V host** | Hyper-V hosts on which VMs are located can be standalone, or in a cluster.<br/><br/> The host must be running Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2.<br/><br/> Verify inbound connections are allowed on WinRM port 5985 (HTTP), so that the appliance can connect to pull VM metadata and performance data, using a Common Information Model (CIM) session.
-**Appliance deployment** | Hyper-V host needs resources to allocate a VM for the appliance:<br/><br/> - Windows Server 2016<br/><br/> -16 GB of RAM<br/><br/> - Eight vCPUs<br/><br/> - Around 80 GB of disk storage.<br/><br/> - An external virtual switch.<br/><br/> - Internet access on for the VM, directly or via a proxy.
+**Appliance deployment** | Hyper-V host needs resources to allocate a VM for the appliance:<br/><br/> - 16 GB of RAM, 8 vCPUs, and around 80 GB of disk storage.<br/><br/> - An external virtual switch, and internet access on the appliance VM, directly or via a proxy.
**VMs** | VMs can be running any Windows or Linux operating system.
-Before you start, you can [review the data](migrate-appliance.md#collected-data---hyper-v) that the appliance collects during discovery.
-
## Prepare an Azure user account

To create an Azure Migrate project and register the Azure Migrate appliance, you need an account with:
- Contributor or Owner permissions on an Azure subscription.
-- Permissions to register Azure Active Directory apps.
+- Permissions to register Azure Active Directory (AAD) apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows:
@@ -67,20 +65,20 @@ If you just created a free Azure account, you're the owner of your subscription.
![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-hyper-v/assign-role.png)
-7. In the portal, search for users, and under **Services**, select **Users**.
-8. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
+1. To register the appliance, your Azure account needs **permissions to register AAD apps**.
+1. In the Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-hyper-v/register-apps.png)
-9. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD App(s). [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+9. If the **App registrations** setting is set to **No**, ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD apps. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare Hyper-V hosts

Set up an account with Administrator access on the Hyper-V hosts. The appliance uses this account for discovery.
- Option 1: Prepare an account with Administrator access to the Hyper-V host machine.
-- Option 2: Prepare a Local Admin account, or Domain Admin account, and add the account to these groups: Remote Management Users, Hyper-V Administrators, and Performance Monitor Users.
-
+- Option 2: If you don't want to assign Administrator permissions, create a local or domain user account, and add the user account to these groups: Remote Management Users, Hyper-V Administrators, and Performance Monitor Users.
## Set up a project
@@ -95,26 +93,28 @@ Set up a new Azure Migrate project.
![Boxes for project name and region](./media/tutorial-discover-hyper-v/new-project.png)

7. Select **Create**.
-8. Wait a few minutes for the Azure Migrate project to deploy.
-
-The **Azure Migrate: Server Assessment** tool is added by default to the new project.
+8. Wait a few minutes for the Azure Migrate project to deploy. The **Azure Migrate: Server Assessment** tool is added by default to the new project.
![Page showing Server Assessment tool added by default](./media/tutorial-discover-hyper-v/added-tool.png)
+> [!NOTE]
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more VMs. [Learn more](create-manage-projects.md#find-a-project)
## Set up the appliance
-This tutorial sets up the appliance on a Hyper-V VM, as follows:
+Azure Migrate: Server Assessment uses a lightweight Azure Migrate appliance. The appliance performs VM discovery and sends VM configuration and performance metadata to Azure Migrate. The appliance can be set up by deploying a VHD file that can be downloaded from the Azure Migrate project.
-- Provide an appliance name and generate an Azure Migrate project key in the portal.
-- Download a compressed Hyper-V VHD from the Azure portal.
-- Create the appliance, and check that it can connect to Azure Migrate Server Assessment.
-- Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.

> [!NOTE]
-> If for some reason you can't set up the appliance using a template, you can set it up using a PowerShell script. [Learn more](deploy-appliance-script.md#set-up-the-appliance-for-hyper-v).
+> If for some reason you can't set up the appliance using the template, you can set it up using a PowerShell script on an existing Windows Server 2016 server. [Learn more](deploy-appliance-script.md#set-up-the-appliance-for-hyper-v).
+This tutorial sets up the appliance on a Hyper-V VM, as follows:
+
+1. Provide an appliance name and generate an Azure Migrate project key in the portal.
+1. Download a compressed Hyper-V VHD from the Azure portal.
+1. Create the appliance, and check that it can connect to Azure Migrate Server Assessment.
+1. Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
-### Generate the Azure Migrate project key
+### 1. Generate the Azure Migrate project key
1. In **Migration Goals** > **Servers** > **Azure Migrate: Server Assessment**, select **Discover**.
2. In **Discover machines** > **Are your machines virtualized?**, select **Yes, with Hyper-V**.
@@ -123,10 +123,9 @@ This tutorial sets up the appliance on a Hyper-V VM, as follows:
1. After the successful creation of the Azure resources, an **Azure Migrate project key** is generated.
1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
-### Download the VHD
-
-In **2: Download Azure Migrate appliance**, select the .VHD file and click on **Download**.
+### 2. Download the VHD
+In **2: Download Azure Migrate appliance**, select the .VHD file and click on **Download**.
### Verify security
@@ -152,7 +151,7 @@ Check that the zipped file is secure, before you deploy it.
--- | --- | ---
Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140424) | cfed44bb52c9ab3024a628dc7a5d0df8c624f156ec1ecc3507116bae330b257f
-### Create the appliance VM
+### 3. Create the appliance VM
Import the downloaded file, and create the VM.
@@ -173,7 +172,7 @@ Import the downloaded file, and create the VM.
Make sure that the appliance VM can connect to Azure URLs for [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds.
-### Configure the appliance
+### 4. Configure the appliance
Set up the appliance for the first time.
@@ -211,8 +210,6 @@ Set up the appliance for the first time.
4. If the Azure user account used for logging in has the right permissions on the Azure resources created during key generation, the appliance registration will be initiated.
1. After the appliance is successfully registered, you can see the registration details by clicking on **View details**.
-
-
### Delegate credentials for SMB VHDs

If you're running VHDs on SMB shares, you must enable delegation of credentials from the appliance to the Hyper-V hosts. To do this from the appliance:
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-physical.md
@@ -36,7 +36,7 @@ Before you start this tutorial, check you have these prerequisites in place.
**Requirement** | **Details**
--- | ---
-**Appliance** | You need a machine on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed. _(Currently the deployment of appliance is only supported on Windows Server 2016.)_<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.
+**Appliance** | You need a machine on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed.<br/> _(Currently the deployment of appliance is only supported on Windows Server 2016.)_<br/><br/> - 16 GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.
**Windows servers** | Allow inbound connections on WinRM port 5985 (HTTP), so that the appliance can pull configuration and performance metadata.
**Linux servers** | Allow inbound connections on port 22 (TCP).
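Before deploying the appliance, a plain TCP connect test can confirm these ports are reachable from the appliance's network. A minimal sketch, assuming Python; the locally opened listener stands in for a real server on port 5985 (WinRM) or 22 (SSH):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener opened locally; in practice, run this from the
# appliance machine against each server, with port 5985 or 22.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
_, port = srv.getsockname()
print(port_open("127.0.0.1", port))   # True: the local listener accepts
srv.close()
```

A successful connect only proves the port is open; WinRM/SSH authentication is checked later, when you supply credentials on the appliance.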
@@ -44,7 +44,7 @@ Before you start this tutorial, check you have these prerequisites in place.
To create an Azure Migrate project and register the Azure Migrate appliance, you need an account with:
- Contributor or Owner permissions on an Azure subscription.
-- Permissions to register Azure Active Directory apps.
+- Permissions to register Azure Active Directory (AAD) apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows:
@@ -63,19 +63,20 @@ If you just created a free Azure account, you're the owner of your subscription.
![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-physical/assign-role.png)
-7. In the portal, search for users, and under **Services**, select **Users**.
-8. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
+1. To register the appliance, your Azure account needs **permissions to register AAD apps**.
+1. In the Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-physical/register-apps.png)
-9. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD App(s). [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+9. If the **App registrations** setting is set to **No**, ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD apps. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare physical servers

Set up an account that the appliance can use to access the physical servers.
-- For Windows servers, use a domain account for domain-joined machines, and a local account for machines that are not domain-joined. The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
-- For Linux servers, you need a root account on the Linux servers that you want to discover. Alternately, you can set a non-root account with the required capabilities using the following commands:
+- For **Windows servers**, use a domain account for domain-joined machines, and a local account for machines that are not domain-joined. The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+- For **Linux servers**, you need a root account on the Linux servers that you want to discover. Alternately, you can set a non-root account with the required capabilities using the following commands:
**Command** | **Purpose**
--- | --- |
@@ -98,23 +99,25 @@ Set up a new Azure Migrate project.
![Boxes for project name and region](./media/tutorial-discover-physical/new-project.png)

7. Select **Create**.
-8. Wait a few minutes for the Azure Migrate project to deploy.
-
-The **Azure Migrate: Server Assessment** tool is added by default to the new project.
+8. Wait a few minutes for the Azure Migrate project to deploy. The **Azure Migrate: Server Assessment** tool is added by default to the new project.
![Page showing Server Assessment tool added by default](./media/tutorial-discover-physical/added-tool.png)
+> [!NOTE]
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project)
## Set up the appliance
+The Azure Migrate appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate. The appliance can be set up by executing a PowerShell script that can be downloaded from the Azure Migrate project.
+
To set up the appliance you:
-- Provide an appliance name and generate an Azure Migrate project key in the portal.
-- Download a zipped file with Azure Migrate installer script from the Azure portal.
-- Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
-- Execute the PowerShell script to launch the appliance web application.
-- Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
+1. Provide an appliance name and generate an Azure Migrate project key in the portal.
+2. Download a zipped file with Azure Migrate installer script from the Azure portal.
+3. Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
+4. Execute the PowerShell script to launch the appliance web application.
+5. Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
-### Generate the Azure Migrate project key
+### 1. Generate the Azure Migrate project key
1. In **Migration Goals** > **Servers** > **Azure Migrate: Server Assessment**, select **Discover**.
2. In **Discover machines** > **Are your machines virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
@@ -123,11 +126,10 @@ To set up the appliance you:
1. After the successful creation of the Azure resources, an **Azure Migrate project key** is generated.
1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
-### Download the installer script
+### 2. Download the installer script
In **2: Download Azure Migrate appliance**, click on **Download**.
-
### Verify security

Check that the zipped file is secure before you deploy it.
@@ -151,7 +153,7 @@ Check that the zipped file is secure, before you deploy it.
Physical (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | ae132ebc574caf231bf41886891040ffa7abbe150c8b50436818b69e58622276
-### Run the Azure Migrate installer script
+### 3. Run the Azure Migrate installer script
The installer script does the following:
- Installs agents and a web application for physical server discovery and assessment.
@@ -180,13 +182,11 @@ Run the script as follows:
If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
-
-
### Verify appliance access to Azure

Make sure that the appliance VM can connect to Azure URLs for [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds.
-### Configure the appliance
+### 4. Configure the appliance
Set up the appliance for the first time.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-vmware.md
@@ -12,9 +12,9 @@ ms.custom: mvc
# Tutorial: Discover VMware VMs with Server Assessment
-As part of your migration journey to Azure, you discover your on-premises inventory and workloads.
+As part of your migration journey to Azure, you discover your on-premises inventory and workloads.
-This tutorial shows you how to discover on-premises VMware virtual machines (VMs) with the Azure Migrate: Server Assessment tool, using a lightweight Azure Migrate appliance. You deploy the appliance as a VMware VM, to continuously discover VM and performance metadata, apps running on VMs, and VM dependencies.
+This tutorial shows you how to discover on-premises VMware virtual machines (VMs) with the Azure Migrate: Server Assessment tool, using a lightweight Azure Migrate appliance. You deploy the appliance as a VMware VM, to continuously discover VMs and their performance metadata, applications running on VMs, and VM dependencies.
In this tutorial, you learn how to:
@@ -38,16 +38,17 @@ Before you start this tutorial, check you have these prerequisites in place.
**Requirement** | **Details** --- | ---
-**vCenter Server/ESXi host** | You need a vCenter Server running version 5.5, 6.0, 6.5 or 6.7.<br/><br/> VMs must be hosted on an ESXi host running version 5.5 or later.<br/><br/> On the vCenter Server, allow inbound connections on TCP port 443, so that the appliance can collect assessment data.<br/><br/> The appliance connects to vCenter on port 443 by default. If the vCenter server listens on a different port, you can modify the port when you connect from the appliance to the server to start discovery.<br/><br/> On the EXSi server that hosts the VMs, make sure that inbound access is allowed on TCP port 443, for app discovery.
-**Appliance** | vCenter Server needs resources to allocate a VM for the Azure Migrate appliance:<br/><br/> - Windows Server 2016<br/><br/> - 32 GB of RAM, eight vCPUs, and around 80 GB of disk storage.<br/><br/> - An external virtual switch, and internet access on for the VM, directly or via a proxy.
-**VMs** | To use this tutorial, Windows VMs must be running Windows Server 2016, 2012 R2, 2012, or 2008 R2.<br/><br/> Linux VMs must be running Red Hat Enterprise Linux 7/6/5, Ubuntu Linux 14.04/16.04, Debian 7/8, Oracle Linux 6/7, or CentOS 5/6/7.<br/><br/> VMs need VMware tools (a version later than 10.2.0) installed and running.<br/><br/> On Windows VMs, Windows PowerShell 2.0 or later should be installed.
+**vCenter Server/ESXi host** | You need a vCenter Server running version 5.5, 6.0, 6.5 or 6.7.<br/><br/> VMs must be hosted on an ESXi host running version 5.5 or later.<br/><br/> On the vCenter Server, allow inbound connections on TCP port 443, so that the appliance can collect the configuration and performance metadata.<br/><br/> The appliance connects to vCenter on port 443 by default. If the vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details on the appliance configuration manager.<br/><br/> On the ESXi server that hosts the VMs, make sure that inbound access is allowed on TCP port 443 to discover the applications installed on the VMs and VM dependencies.
+**Appliance** | vCenter Server needs resources to allocate a VM for the Azure Migrate appliance:<br/><br/> - 32 GB of RAM, 8 vCPUs, and around 80 GB of disk storage.<br/><br/> - An external virtual switch, and internet access on the appliance VM, directly or via a proxy.
+**VMs** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata as well as discovery of applications installed on VMs. <br/><br/> Check [here](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) for the OS versions supported for agentless dependency analysis.<br/><br/> To discover installed applications and VM dependencies, VMware Tools (later than 10.2.0) must be installed and running on VMs and Windows VMs must have PowerShell version 2.0 or later installed.
## Prepare an Azure user account

To create an Azure Migrate project and register the Azure Migrate appliance, you need an account with:
-- Contributor or Owner permissions on an Azure subscription.
-- Permissions to register Azure Active Directory apps.
+- Contributor or Owner permissions on the Azure subscription
+- Permissions to register Azure Active Directory (AAD) apps
+- Owner or Contributor plus User Access Administrator permissions on the Azure subscription to create a Key Vault, used during agentless VMware migration
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows:
@@ -66,16 +67,19 @@ If you just created a free Azure account, you're the owner of your subscription.
![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-vmware/assign-role.png)
-7. In the portal, search for users, and under **Services**, select **Users**.
-8. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
+1. To register the appliance, your Azure account needs **permissions to register AAD apps**.
+1. In the Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-vmware/register-apps.png)
-9. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD App(s). [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+9. If the **App registrations** setting is set to **No**, ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD apps. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare VMware
-On the vCenter Server, create an account that the appliance can use to access the vCenter Server, and check that required ports are open. You also need an account that the appliance can use to access VMs.
+On vCenter Server, check that your account has permissions to create a VM by using an OVA file. You need this permission to deploy the Azure Migrate appliance as a VMware VM.
+
+Server Assessment needs a vCenter Server read-only account for discovery and assessment of VMware VMs. If you also want to discover installed applications and VM dependencies, the account needs privileges enabled for **Virtual Machines > Guest Operations**.
### Create an account to access vCenter
@@ -86,20 +90,20 @@ In vSphere Web Client, set up an account as follows:
3. In **Users**, add a new user.
4. In **New User**, type in the account details. Then click **OK**.
5. In **Global Permissions**, select the user account, and assign the **Read-only** role to the account. Then click **OK**.
-6. In **Roles** > select the **Read-only** role, and in **Privileges**, select **Guest Operations**. These privileges are needed to discover apps running on VMs, and to analyze VM dependencies.
+6. If you also want to discover installed applications and VM dependencies, go to **Roles** > select the **Read-only** role, and in **Privileges**, select **Guest Operations**. You can propagate the privileges to all objects under the vCenter Server by selecting the **Propagate to children** checkbox.
![Checkbox to allow guest operations on the read-only role](./media/tutorial-discover-vmware/guest-operations.png)

### Create an account to access VMs
-The appliance accesses VMs to discover apps, and analyze VM dependencies. The appliance doesn't install any agents on VMs.
+You need a user account with the necessary privileges on the VMs to discover installed applications and VM dependencies. You can provide the user account on the appliance configuration manager. The appliance does not install any agents on the VMs.
-1. Create a Local Admin account that the appliance can use to discover apps and dependencies on Windows VMs.
-2. For Linux machines, create a user account with Root privileges, or alternately, a user account with these permissions on /bin/netstat and /bin/ls files: CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE.
+1. For Windows VMs, create an account (local or domain) with administrative permissions on the VMs.
+2. For Linux VMs, create an account with Root privileges. Alternately, you can create an account with these permissions on /bin/netstat and /bin/ls files: CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE.
> [!NOTE]
-> Azure Migrate supports one credential for app-discovery on all Windows servers, and one credential for app-discovery on all Linux machines.
+> Currently, Azure Migrate supports one user account for Windows VMs and one user account for Linux VMs, which you provide on the appliance for discovery of installed applications and VM dependencies.
## Set up a project
@@ -115,34 +119,30 @@ Set up a new Azure Migrate project.
![Boxes for project name and region](./media/tutorial-discover-vmware/new-project.png)

7. Select **Create**.
-8. Wait a few minutes for the Azure Migrate project to deploy.
-
-The **Azure Migrate: Server Assessment** tool is added by default to the new project.
+8. Wait a few minutes for the Azure Migrate project to deploy. The **Azure Migrate: Server Assessment** tool is added by default to the new project.
![Page showing Server Assessment tool added by default](./media/tutorial-discover-vmware/added-tool.png)
+> [!NOTE]
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more VMs. [Learn more](create-manage-projects.md#find-a-project)
## Set up the appliance
-To set up the appliance using an OVA template you:
-- Provide an appliance name and generate an Azure Migrate project key in the portal
-- Download an OVA template file, and import it to vCenter Server.
-- Create the appliance, and check that it can connect to Azure Migrate Server Assessment.
-- Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
+Azure Migrate: Server Assessment uses a lightweight Azure Migrate appliance. The appliance performs VM discovery and sends VM configuration and performance metadata to Azure Migrate. The appliance can be set up by deploying an OVA template that can be downloaded from the Azure Migrate project.
> [!NOTE]
-> If for some reason you can't set up the appliance using the template, you can set it up using a PowerShell script. [Learn more](deploy-appliance-script.md#set-up-the-appliance-for-vmware).
+> If for some reason you can't set up the appliance using the template, you can set it up using a PowerShell script on an existing Windows Server 2016 server. [Learn more](deploy-appliance-script.md#set-up-the-appliance-for-vmware).
### Deploy with OVA

To set up the appliance using an OVA template you:
-- Provide an appliance name and generate an Azure Migrate project key in the portal
-- Download an OVA template file, and import it to vCenter Server.
-- Create the appliance, and check that it can connect to Azure Migrate Server Assessment.
-- Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
+1. Provide an appliance name and generate an Azure Migrate project key in the portal
+1. Download an OVA template file, and import it to vCenter Server. Verify the OVA is secure.
+1. Create the appliance, and check that it can connect to Azure Migrate Server Assessment.
+1. Configure the appliance for the first time, and register it with the Azure Migrate project using the Azure Migrate project key.
-### Generate the Azure Migrate project key
+### 1. Generate the Azure Migrate project key
1. In **Migration Goals** > **Servers** > **Azure Migrate: Server Assessment**, select **Discover**.
2. In **Discover machines** > **Are your machines virtualized?**, select **Yes, with VMware vSphere hypervisor**.
@@ -151,10 +151,9 @@ To set up the appliance using an OVA template you:
1. After the successful creation of the Azure resources, an **Azure Migrate project key** is generated.
1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
-### Download the OVA template
-
-In **2: Download Azure Migrate appliance**, select the .OVA file and click on **Download**.
+### 2. Download the OVA template
+In **2: Download Azure Migrate appliance**, select the .OVA file and click on **Download**.
### Verify security
@@ -181,10 +180,7 @@ Check that the OVA file is secure, before you deploy it:
--- | --- | ---
VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | 2daaa2a59302bf911e8ef195f8add7d7c8352de77a9af0b860e2a627979085ca
-
-
-
-### Create the appliance VM
+### 3. Create the appliance VM
Import the downloaded file, and create a VM.
@@ -204,7 +200,7 @@ will be hosted.
Make sure that the appliance VM can connect to Azure URLs for [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds.
-### Configure the appliance
+### 4. Configure the appliance
Set up the appliance for the first time.
@@ -260,15 +256,16 @@ The appliance needs to connect to vCenter Server to discover the configuration a
1. You can **revalidate** the connectivity to vCenter Server any time before starting the discovery.
1. In **Step 3: Provide VM credentials to discover installed applications and to perform agentless dependency mapping**, click **Add credentials**, and specify the operating system for which the credentials are provided, a friendly name for the credentials, and the **Username** and **Password**. Then click on **Save**.
- - You optionally add credentials here if you've created an account to use for the [application discovery feature](how-to-discover-applications.md), or the [agentless dependency analysis feature](how-to-create-group-machine-dependencies-agentless.md).
+ - You can optionally add credentials here if you've created an account to use for [application discovery](how-to-discover-applications.md), or [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
- If you do not want to use these features, you can click on the slider to skip the step. You can change this selection any time later.
- - Review the credentials needed for [application discovery](migrate-support-matrix-vmware.md#application-discovery-requirements), or for [agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).
+ - Review the permissions needed on the account for [application discovery](migrate-support-matrix-vmware.md#application-discovery-requirements), or for [agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).
5. Click on **Start discovery** to kick off VM discovery. After the discovery has been successfully initiated, you can check the discovery status against the vCenter Server IP address/FQDN in the table. Discovery works as follows:
- It takes around 15 minutes for discovered VM metadata to appear in the portal.
- Discovery of installed applications, roles, and features takes some time. The duration depends on the number of VMs being discovered. For 500 VMs, it takes approximately one hour for the application inventory to appear in the Azure Migrate portal.
+- After the discovery of VMs is completed, you can enable agentless dependency analysis on the desired VMs from the portal.
## Next steps
mysql https://docs.microsoft.com/en-us/azure/mysql/concepts-certificate-rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-certificate-rotation.md
@@ -5,7 +5,7 @@ author: mksuni
ms.author: sumuth
ms.service: mysql
ms.topic: conceptual
-ms.date: 01/13/2021
+ms.date: 01/18/2021
---

# Understanding the changes in the Root CA change for Azure Database for MySQL
@@ -69,20 +69,20 @@ To avoid your application's availability being interrupted due to certificates
* For .NET (MySQL Connector/NET, MySQLConnector) users, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
- ![Azure Database for MySQL .net cert](media/overview/netconnecter-cert.png)
+ :::image type="content" source="media/overview/netconnecter-cert.png" alt-text="Azure Database for MySQL .net cert diagram":::
* For .NET users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
- * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format:</b>
+ * For other (MySQL Client/MySQL Workbench/C/C++/Go/Python/Ruby/PHP/NodeJS/Perl/Swift) users, you can merge two CA certificate files into the following format:
- ```
- -----BEGIN CERTIFICATE-----
- (Root CA1: BaltimoreCyberTrustRoot.crt.pem)
- -----END CERTIFICATE-----
- -----BEGIN CERTIFICATE-----
- (Root CA2: DigiCertGlobalRootG2.crt.pem)
- -----END CERTIFICATE-----
- ```
+ ```
+ -----BEGIN CERTIFICATE-----
+ (Root CA1: BaltimoreCyberTrustRoot.crt.pem)
+ -----END CERTIFICATE-----
+ -----BEGIN CERTIFICATE-----
+ (Root CA2: DigiCertGlobalRootG2.crt.pem)
+ -----END CERTIFICATE-----
+ ```
* Replace the original root CA pem file with the combined root CA file and restart your application/client.
* In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.
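The merge step described above can be sketched in Python. The certificate file names follow the doc's examples; the output path is an assumption:

```python
def merge_ca_bundle(pem_paths, out_path):
    """Concatenate root CA PEM files, in order, into a single bundle file."""
    with open(out_path, "w") as bundle:
        for path in pem_paths:
            with open(path) as pem:
                # Strip surrounding whitespace so each certificate block
                # starts on its own line in the combined file.
                bundle.write(pem.read().strip() + "\n")

# Example (paths are illustrative):
# merge_ca_bundle(
#     ["BaltimoreCyberTrustRoot.crt.pem", "DigiCertGlobalRootG2.crt.pem"],
#     "combined-root-ca.pem",
# )
```

The order matters only for readability; clients accept either certificate in the bundle during the transition.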
mysql https://docs.microsoft.com/en-us/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/quickstart-create-mysql-server-database-using-azure-portal.md
@@ -40,7 +40,7 @@ An Azure subscription is required. If you don't have an Azure subscription, crea
Server name | **mydemoserver** | Enter a unique name. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters.
Data source |**None** | Select **None** to create a new server from scratch. Select **Backup** only if you're restoring from a geo-backup of an existing server.
Location |Your desired location | Select a location from the list.
- Version | The latest major version| Use the latest major version. See [all supported versions](../postgresql/concepts-supported-versions.md).
+ Version | The latest major version| Use the latest major version. See [all supported versions](../mysql/concepts-supported-versions.md).
Compute + storage | Use the defaults| The default pricing tier is **General Purpose** with **4 vCores** and **100 GB** storage. Backup retention is set to **7 days**, with the **Geographically Redundant** backup option.<br/>Review the [pricing](https://azure.microsoft.com/pricing/details/mysql/) page, and update the defaults if you need to.
Admin username | **mydemoadmin** | Enter your server admin user name. You can't use **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public** for the admin user name.
Password | A password | A new password for the server admin user. The password must be 8 to 128 characters long and contain a combination of uppercase or lowercase letters, numbers, and non-alphanumeric characters (!, $, #, %, and so on).
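The naming and password rules in the table can be pre-checked client-side before submitting the form. A rough sketch under the rules as stated above (the portal remains the authoritative validator, and the exact complexity policy may differ):

```python
import re

# Reserved admin names listed in the table above.
RESERVED_ADMIN_NAMES = {"azure_superuser", "admin", "administrator",
                        "root", "guest", "public"}

def valid_server_name(name: str) -> bool:
    # 3 to 63 characters; lowercase letters, numbers, and hyphens only.
    return re.fullmatch(r"[a-z0-9-]{3,63}", name) is not None

def valid_admin_username(name: str) -> bool:
    return name.lower() not in RESERVED_ADMIN_NAMES

def valid_password(password: str) -> bool:
    # 8 to 128 characters with letters, numbers, and non-alphanumeric characters.
    if not (8 <= len(password) <= 128):
        return False
    return (any(c.isalpha() for c in password)
            and any(c.isdigit() for c in password)
            and any(not c.isalnum() for c in password))
```

For example, `valid_server_name("mydemoserver")` passes while a name containing uppercase letters or underscores does not.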
@@ -151,4 +151,4 @@ To delete the server, you can select **Delete** on the **Overview** page for you
> [!div class="nextstepaction"]
> [Build PHP app on Linux with MySQL](../app-service/tutorial-php-mysql-app.md?pivots=platform-linux%3fpivots%3dplatform-linux)<br/><br/>
-[Can't find what you're looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
\ No newline at end of file
+[Can't find what you're looking for? Let us know.](https://aka.ms/mysql-doc-feedback)
purview https://docs.microsoft.com/en-us/azure/purview/register-scan-power-bi-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-power-bi-tenant.md
@@ -50,7 +50,7 @@ To set up authentication, create a security group and add the catalog's managed
> [!Important]
> You need to be a Power BI Admin to see the tenant settings page.
-1. Select **Developer settings** > **Allow service principals to use read-only Power BI admin APIs (Preview)**.
+1. Select **Admin API settings** > **Allow service principals to use read-only Power BI admin APIs (Preview)**.
1. Select **Specific security groups**.

   :::image type="content" source="./media/setup-power-bi-scan-PowerShell/allow-service-principals-power-bi-admin.png" alt-text="Image showing how to allow service principals to get read-only Power BI admin API permissions":::
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/built-in-roles.md
@@ -7,7 +7,7 @@ ms.topic: reference
ms.workload: identity
author: rolyon
ms.author: rolyon
-ms.date: 12/16/2020
+ms.date: 01/15/2021
ms.custom: generated
---
@@ -114,6 +114,9 @@ The following table provides a brief description and the unique ID of each built
> | [HDInsight Domain Services Contributor](#hdinsight-domain-services-contributor) | Can Read, Create, Modify and Delete Domain Services related operations needed for HDInsight Enterprise Security Package | 8d8d5a11-05d3-4bda-a417-a08778121c7c |
> | [Log Analytics Contributor](#log-analytics-contributor) | Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; creating and configuring Automation accounts; adding solutions; and configuring Azure diagnostics on all Azure resources. | 92aaf0da-9dab-42b6-94a3-d43ce8d16293 |
> | [Log Analytics Reader](#log-analytics-reader) | Log Analytics Reader can view and search all monitoring data as well as view monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources. | 73c42c96-874c-492b-b04d-ab87d138a893 |
+> | [Purview Data Curator](#purview-data-curator) | The Microsoft.Purview data curator can create, read, modify and delete catalog data objects and establish relationships between objects. This role is in preview and subject to change. | 8a3c2885-9b38-4fd2-9d99-91af537c1347 |
+> | [Purview Data Reader](#purview-data-reader) | The Microsoft.Purview data reader can read catalog data objects. This role is in preview and subject to change. | ff100721-1b9d-43d8-af52-42b69c1272db |
+> | [Purview Data Source Administrator](#purview-data-source-administrator) | The Microsoft.Purview data source administrator can manage data sources and data scans. This role is in preview and subject to change. | 200bba9e-f0c8-430f-892b-6f0794863803 |
> | [Schema Registry Contributor (Preview)](#schema-registry-contributor-preview) | Read, write, and delete Schema Registry groups and schemas. | 5dffeca3-4936-4216-b2bc-10343a5abb25 |
> | [Schema Registry Reader (Preview)](#schema-registry-reader-preview) | Read and list Schema Registry groups and schemas. | 2c56ea50-c6b3-40a6-83c0-9d98858bc7d2 |
> | **Blockchain** | | |
@@ -4883,6 +4886,133 @@ Log Analytics Reader can view and search all monitoring data as well as and view
}
```
+### Purview Data Curator
+
+The Microsoft.Purview data curator can create, read, modify and delete catalog data objects and establish relationships between objects. This role is in preview and subject to change.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Purview provider. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/data/read | Read data objects. |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/data/write | Create, update and delete data objects. |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "The Microsoft.Purview data curator can create, read, modify and delete catalog data objects and establish relationships between objects. This role is in preview and subject to change.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/8a3c2885-9b38-4fd2-9d99-91af537c1347",
+ "name": "8a3c2885-9b38-4fd2-9d99-91af537c1347",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Purview/accounts/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.Purview/accounts/data/read",
+ "Microsoft.Purview/accounts/data/write"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Purview Data Curator",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
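A role definition like the Purview Data Curator JSON above can be inspected programmatically. A small sketch using exact-match lookups only (real Azure RBAC evaluation also handles wildcards and deny assignments):

```python
# Trimmed copy of the Purview Data Curator definition shown above.
curator = {
    "roleName": "Purview Data Curator",
    "permissions": [
        {
            "actions": ["Microsoft.Purview/accounts/read"],
            "notActions": [],
            "dataActions": [
                "Microsoft.Purview/accounts/data/read",
                "Microsoft.Purview/accounts/data/write",
            ],
            "notDataActions": [],
        }
    ],
}

def data_action_allowed(role, operation):
    """Exact-match check of a data operation against a role's permission blocks."""
    for perm in role["permissions"]:
        if operation in perm.get("notDataActions", []):
            return False  # explicit exclusions win
        if operation in perm.get("dataActions", []):
            return True
    return False
```

Under this sketch the curator can write catalog data objects but cannot manage scans, which is exactly the split between the Data Curator and Data Source Administrator roles.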
+
+### Purview Data Reader
+
+The Microsoft.Purview data reader can read catalog data objects. This role is in preview and subject to change.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Purview provider. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/data/read | Read data objects. |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "The Microsoft.Purview data reader can read catalog data objects. This role is in preview and subject to change.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/ff100721-1b9d-43d8-af52-42b69c1272db",
+ "name": "ff100721-1b9d-43d8-af52-42b69c1272db",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Purview/accounts/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.Purview/accounts/data/read"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Purview Data Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Purview Data Source Administrator
+
+The Microsoft.Purview data source administrator can manage data sources and data scans. This role is in preview and subject to change.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Purview provider. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/scan/read | Read data sources and scans. |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/scan/write | Create, update and delete data sources and manage scans. |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "The Microsoft.Purview data source administrator can manage data sources and data scans. This role is in preview and subject to change.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/200bba9e-f0c8-430f-892b-6f0794863803",
+ "name": "200bba9e-f0c8-430f-892b-6f0794863803",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Purview/accounts/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.Purview/accounts/scan/read",
+ "Microsoft.Purview/accounts/scan/write"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Purview Data Source Administrator",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
### Schema Registry Contributor (Preview)

Read, write, and delete Schema Registry groups and schemas.
@@ -7010,7 +7140,9 @@ Read metadata of keys and perform wrap/unwrap operations. Only works for key vau
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | --- | --- |
-> | *none* | |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/eventSubscriptions/write | Create or update an eventSubscription |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/eventSubscriptions/read | Read an eventSubscription |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/eventSubscriptions/delete | Delete an eventSubscription |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
@@ -7030,7 +7162,11 @@ Read metadata of keys and perform wrap/unwrap operations. Only works for key vau
  "name": "e147488a-f6f5-4113-8e2d-b22465e65bf6",
  "permissions": [
    {
- "actions": [],
+ "actions": [
+ "Microsoft.EventGrid/eventSubscriptions/write",
+ "Microsoft.EventGrid/eventSubscriptions/read",
+ "Microsoft.EventGrid/eventSubscriptions/delete"
+ ],
      "notActions": [],
      "dataActions": [
        "Microsoft.KeyVault/vaults/keys/read",
@@ -7433,6 +7569,9 @@ View permissions for Security Center. Can view recommendations, alerts, a securi
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/*/read | Read security components and policies |
> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/*/read | |
+> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotDefenderSettings/packageDownloads/action | Gets downloadable IoT Defender packages information |
+> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotDefenderSettings/downloadManagerActivation/action | Download manager activation file with subscription quota data |
+> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotSensors/downloadResetPassword/action | Downloads reset password file for IoT Sensors |
> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. |
> | **NotActions** | |
> | *none* | |
@@ -7459,6 +7598,9 @@ View permissions for Security Center. Can view recommendations, alerts, a securi
        "Microsoft.Resources/subscriptions/resourceGroups/read",
        "Microsoft.Security/*/read",
        "Microsoft.Support/*/read",
+ "Microsoft.Security/iotDefenderSettings/packageDownloads/action",
+ "Microsoft.Security/iotDefenderSettings/downloadManagerActivation/action",
+ "Microsoft.Security/iotSensors/downloadResetPassword/action",
        "Microsoft.Management/managementGroups/read"
      ],
      "notActions": [],
@@ -8606,8 +8748,8 @@ Role definition to authorize any user/service to create connectedClusters resour
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | Microsoft.Kubernetes/connectedClusters/Write | |
-> | Microsoft.Kubernetes/connectedClusters/read | |
+> | [Microsoft.Kubernetes](resource-provider-operations.md#microsoftkubernetes)/connectedClusters/Write | Writes connectedClusters |
+> | [Microsoft.Kubernetes](resource-provider-operations.md#microsoftkubernetes)/connectedClusters/read | Read connectedClusters |
> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
> | **NotActions** | |
> | *none* | |
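Several role tables above use wildcard actions such as `Microsoft.Support/*` and `Microsoft.Security/*/read`. A rough sketch of how such patterns can be matched against concrete operations, assuming the wildcard matches any characters including `/`:

```python
from fnmatch import fnmatchcase

def action_matches(pattern: str, operation: str) -> bool:
    # fnmatchcase's '*' matches any run of characters, '/' included,
    # which mirrors the broad-wildcard behavior assumed here.
    return fnmatchcase(operation, pattern)
```

For example, `Microsoft.Security/*/read` matches any read operation under the Security provider, while write operations under the same paths do not match.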
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
@@ -7,7 +7,7 @@ ms.topic: reference
ms.workload: identity
author: rolyon
ms.author: rolyon
-ms.date: 12/16/2020
+ms.date: 01/15/2021
ms.custom: generated
---
@@ -75,6 +75,7 @@ Click the resource provider name in the following table to see the list of opera
| [Microsoft.HDInsight](#microsofthdinsight) |
| [Microsoft.Kusto](#microsoftkusto) |
| [Microsoft.PowerBIDedicated](#microsoftpowerbidedicated) |
+| [Microsoft.Purview](#microsoftpurview) |
| [Microsoft.StreamAnalytics](#microsoftstreamanalytics) |
| **Blockchain** |
| [Microsoft.Blockchain](#microsoftblockchain) |
@@ -139,6 +140,7 @@ Click the resource provider name in the following table to see the list of opera
| [Microsoft.Features](#microsoftfeatures) |
| [Microsoft.GuestConfiguration](#microsoftguestconfiguration) |
| [Microsoft.HybridCompute](#microsofthybridcompute) |
+| [Microsoft.Kubernetes](#microsoftkubernetes) |
| [Microsoft.ManagedServices](#microsoftmanagedservices) |
| [Microsoft.Management](#microsoftmanagement) |
| [Microsoft.PolicyInsights](#microsoftpolicyinsights) |
@@ -397,6 +399,12 @@ Azure service: [Virtual Machines](../virtual-machines/index.yml), [Virtual Machi
> | Microsoft.Compute/availabilitySets/write | Creates a new availability set or updates an existing one |
> | Microsoft.Compute/availabilitySets/delete | Deletes the availability set |
> | Microsoft.Compute/availabilitySets/vmSizes/read | List available sizes for creating or updating a virtual machine in the availability set |
+> | Microsoft.Compute/capacityReservationGroups/read | Get the properties of a capacity reservation group |
+> | Microsoft.Compute/capacityReservationGroups/write | Creates a new capacity reservation group or updates an existing capacity reservation group |
+> | Microsoft.Compute/capacityReservationGroups/delete | Deletes the capacity reservation group |
+> | Microsoft.Compute/capacityReservationGroups/capacityReservations/read | Get the properties of a capacity reservation |
+> | Microsoft.Compute/capacityReservationGroups/capacityReservations/write | Creates a new capacity reservation or updates an existing capacity reservation |
+> | Microsoft.Compute/capacityReservationGroups/capacityReservations/delete | Deletes the capacity reservation |
> | Microsoft.Compute/cloudServices/read | Get the properties of a CloudService. |
> | Microsoft.Compute/cloudServices/write | Creates a new CloudService or updates an existing one. |
> | Microsoft.Compute/cloudServices/delete | Deletes the CloudService. |
@@ -1499,7 +1507,6 @@ Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/AuthorizeReplication/action | Authorize the source volume replication |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/ResyncReplication/action | Resync the replication on the destination volume |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/DeleteReplication/action | Delete the replication on the destination volume |
-> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/InternalAction/action | Internal Operations For Resource. |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/read | Reads a backup resource. |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/write | Writes a backup resource. |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/delete | Deletes a backup resource. |
@@ -1553,6 +1560,7 @@ Azure service: [Storage](../storage/index.yml)
> | Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action | Returns a user delegation key for the blob service |
> | Microsoft.Storage/storageAccounts/blobServices/write | Returns the result of put blob service properties |
> | Microsoft.Storage/storageAccounts/blobServices/read | Returns blob service properties or statistics |
+> | Microsoft.Storage/storageAccounts/blobServices/containers/write | |
> | Microsoft.Storage/storageAccounts/blobServices/containers/write | Returns the result of patch blob container |
> | Microsoft.Storage/storageAccounts/blobServices/containers/delete | Returns the result of deleting a container |
> | Microsoft.Storage/storageAccounts/blobServices/containers/read | Returns a container |
@@ -1566,6 +1574,8 @@ Azure service: [Storage](../storage/index.yml)
> | Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies/write | Put blob container immutability policy |
> | Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies/lock/action | Lock blob container immutability policy |
> | Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies/read | Get blob container immutability policy |
+> | Microsoft.Storage/storageAccounts/consumerDataSharePolicies/read | |
+> | Microsoft.Storage/storageAccounts/consumerDataSharePolicies/write | |
> | Microsoft.Storage/storageAccounts/dataSharePolicies/delete | |
> | Microsoft.Storage/storageAccounts/dataSharePolicies/read | |
> | Microsoft.Storage/storageAccounts/dataSharePolicies/read | |
@@ -2248,6 +2258,10 @@ Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/hostingenvironments/multirolepools/usages/read | Get Hosting Environments MultiRole Pools Usages. |
> | microsoft.web/hostingenvironments/operations/read | Get Hosting Environments Operations. |
> | microsoft.web/hostingenvironments/outboundnetworkdependenciesendpoints/read | Get the network endpoints of all outbound dependencies. |
+> | Microsoft.Web/hostingEnvironments/privateEndpointConnections/Write | Approve or Reject a private endpoint connection. |
+> | Microsoft.Web/hostingEnvironments/privateEndpointConnections/Read | Get a private endpoint connection or the list of private endpoint connections. |
+> | Microsoft.Web/hostingEnvironments/privateEndpointConnections/Delete | Delete a private endpoint connection. |
+> | Microsoft.Web/hostingEnvironments/privateLinkResources/Read | Get Private Link Resources. |
> | microsoft.web/hostingenvironments/serverfarms/read | Get Hosting Environments App Service Plans. |
> | microsoft.web/hostingenvironments/sites/read | Get Hosting Environments Web Apps. |
> | microsoft.web/hostingenvironments/usages/read | Get Hosting Environments Usages. |
@@ -2260,6 +2274,11 @@ Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/ishostingenvironmentnameavailable/read | Get if Hosting Environment Name is available. |
> | microsoft.web/ishostnameavailable/read | Check if Hostname is Available. |
> | microsoft.web/isusernameavailable/read | Check if Username is available. |
+> | Microsoft.Web/kubeEnvironments/read | Get the properties of a Kubernetes Environment |
+> | Microsoft.Web/kubeEnvironments/write | Create a Kubernetes Environment or update an existing one |
+> | Microsoft.Web/kubeEnvironments/delete | Delete a Kubernetes Environment |
+> | Microsoft.Web/kubeEnvironments/join/action | Joins a Kubernetes Environment |
+> | Microsoft.Web/kubeEnvironments/operations/read | Get the operations for a Kubernetes Environment |
> | Microsoft.Web/listSitesAssignedToHostName/Read | Get names of sites assigned to hostname. |
> | microsoft.web/locations/extractapidefinitionfromwsdl/action | Extract Api Definition from WSDL for Locations. |
> | microsoft.web/locations/listwsdlinterfaces/action | List WSDL Interfaces for Locations. |
@@ -2646,6 +2665,10 @@ Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/staticSites/customdomains/Read | List the custom domains for a Static Site |
> | Microsoft.Web/staticSites/customdomains/validate/Action | Validate a custom domain can be added to a Static Site |
> | Microsoft.Web/staticSites/functions/Read | List the functions for a Static Site |
+> | Microsoft.Web/staticSites/privateEndpointConnections/Write | Approve or Reject Private Endpoint Connection for a Static Site |
+> | Microsoft.Web/staticSites/privateEndpointConnections/Read | Get a private endpoint connection or the list of private endpoint connections for a static site |
+> | Microsoft.Web/staticSites/privateEndpointConnections/Delete | Delete a Private Endpoint Connection for a Static Site |
+> | Microsoft.Web/staticSites/privateLinkResources/Read | Get Private Link Resources |
## Containers
@@ -3266,6 +3289,8 @@ Azure service: [Data Factory](../data-factory/index.yml)
> | Microsoft.DataFactory/factories/integrationruntimes/linkedIntegrationRuntime/action | Create Linked Integration Runtime Reference on the Specified Shared Integration Runtime. |
> | Microsoft.DataFactory/factories/integrationruntimes/getObjectMetadata/action | Get SSIS Integration Runtime metadata for the specified Integration Runtime. |
> | Microsoft.DataFactory/factories/integrationruntimes/refreshObjectMetadata/action | Refresh SSIS Integration Runtime metadata for the specified Integration Runtime. |
+> | Microsoft.DataFactory/factories/integrationruntimes/enableInteractiveQuery/action | Enable interactive authoring session. |
+> | Microsoft.DataFactory/factories/integrationruntimes/disableInteractiveQuery/action | Disable interactive authoring session. |
> | Microsoft.DataFactory/factories/integrationruntimes/getstatus/read | Reads Integration Runtime Status. |
> | Microsoft.DataFactory/factories/integrationruntimes/monitoringdata/read | Gets the Monitoring Data for any Integration Runtime. |
> | Microsoft.DataFactory/factories/integrationruntimes/nodes/read | Reads the Node for the specified Integration Runtime. |
@@ -3312,6 +3337,8 @@ Azure service: [Data Factory](../data-factory/index.yml)
> | Microsoft.DataFactory/factories/triggers/subscribetoevents/action | Subscribe to Events. |
> | Microsoft.DataFactory/factories/triggers/geteventsubscriptionstatus/action | Event Subscription Status. |
> | Microsoft.DataFactory/factories/triggers/unsubscribefromevents/action | Unsubscribe from Events. |
+> | Microsoft.DataFactory/factories/triggers/querysubscriptionevents/action | Query subscription events. |
+> | Microsoft.DataFactory/factories/triggers/deletequeuedsubscriptionevents/action | Delete queued subscription events. |
> | Microsoft.DataFactory/factories/triggers/start/action | Starts any Trigger. |
> | Microsoft.DataFactory/factories/triggers/stop/action | Stops any Trigger. |
> | Microsoft.DataFactory/factories/triggers/triggerruns/read | Reads the Trigger Runs. |
@@ -4658,6 +4685,38 @@ Azure service: [Power BI Embedded](/azure/power-bi-embedded/)
> | Microsoft.PowerBIDedicated/operations/read | Retrieves the information of operations |
> | Microsoft.PowerBIDedicated/skus/read | Retrieves the information of Skus |
+### Microsoft.Purview
+
+Azure service: [Azure Purview](../purview/index.yml)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | --- | --- |
+> | Microsoft.Purview/register/action | Register the subscription for Microsoft Purview provider. |
+> | Microsoft.Purview/unregister/action | Unregister the subscription for Microsoft Purview provider. |
+> | Microsoft.Purview/setDefaultAccount/action | Sets the default account for the scope. |
+> | Microsoft.Purview/accounts/read | Read account resource for Microsoft Purview provider. |
+> | Microsoft.Purview/accounts/write | Write account resource for Microsoft Purview provider. |
+> | Microsoft.Purview/accounts/delete | Delete account resource for Microsoft Purview provider. |
+> | Microsoft.Purview/accounts/move/action | Move account resource for Microsoft Purview provider. |
+> | Microsoft.Purview/accounts/PrivateEndpointConnectionsApproval/action | Approve Private Endpoint Connection. |
+> | Microsoft.Purview/accounts/privateEndpointConnectionProxies/read | Read Account Private Endpoint Connection Proxy. |
+> | Microsoft.Purview/accounts/privateEndpointConnectionProxies/write | Write Account Private Endpoint Connection Proxy. |
+> | Microsoft.Purview/accounts/privateEndpointConnectionProxies/delete | Delete Account Private Endpoint Connection Proxy. |
+> | Microsoft.Purview/accounts/privateEndpointConnectionProxies/validate/action | Validate Account Private Endpoint Connection Proxy. |
+> | Microsoft.Purview/accounts/privateEndpointConnectionProxies/operationResults/read | Monitor Private Endpoint Connection Proxy async operations. |
+> | Microsoft.Purview/accounts/privateEndpointConnections/read | Read Private Endpoint Connection. |
+> | Microsoft.Purview/accounts/privateEndpointConnections/write | Create or update Private Endpoint Connection. |
+> | Microsoft.Purview/accounts/privateEndpointConnections/delete | Delete Private Endpoint Connection. |
+> | Microsoft.Purview/getDefaultAccount/read | Gets the default account for the scope. |
+> | Microsoft.Purview/locations/operationResults/read | Monitor async operations. |
+> | Microsoft.Purview/operations/read | Reads all available operations for Microsoft Purview provider. |
+> | **DataAction** | **Description** |
+> | Microsoft.Purview/accounts/data/read | Read data objects. |
+> | Microsoft.Purview/accounts/data/write | Create, update and delete data objects. |
+> | Microsoft.Purview/accounts/scan/read | Read data sources and scans. |
+> | Microsoft.Purview/accounts/scan/write | Create, update and delete data sources and manage scans. |
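The table above separates control-plane **Actions** from data-plane **DataActions**, and both sections map directly onto the corresponding fields of a custom Azure role definition. The following is a minimal sketch of such a role built from the Purview operations listed above; the role name and the all-zeros subscription ID are placeholders, and the particular selection of operations is illustrative, not an official role.

```python
def purview_reader_role(subscription_id: str) -> dict:
    """Build a hypothetical read-only custom role definition for Microsoft
    Purview, using operation names from the table above."""
    return {
        "Name": "Purview Data Reader (custom)",  # hypothetical role name
        "IsCustom": True,
        "Description": "Read Purview accounts and catalog data objects.",
        # Control-plane operations go under "Actions".
        "Actions": [
            "Microsoft.Purview/accounts/read",
            "Microsoft.Purview/getDefaultAccount/read",
            "Microsoft.Purview/operations/read",
        ],
        # Data-plane operations go under "DataActions".
        "DataActions": [
            "Microsoft.Purview/accounts/data/read",
            "Microsoft.Purview/accounts/scan/read",
        ],
        "AssignableScopes": [f"/subscriptions/{subscription_id}"],
    }

role = purview_reader_role("00000000-0000-0000-0000-000000000000")
print(role["DataActions"])
```

A definition like this could then be submitted with the Azure CLI or an SDK; the key point is that `accounts/data/read` and `accounts/scan/read` only take effect when placed in `DataActions`, not `Actions`.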
+
### Microsoft.StreamAnalytics

Azure service: [Stream Analytics](../stream-analytics/index.yml)
@@ -4823,8 +4882,8 @@ Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/ComputerVision/detect/action | This operation Performs object detection on the specified image. |
> | Microsoft.CognitiveServices/accounts/ComputerVision/models/read | This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API supports following domain-specific models: celebrity recognizer, landmark recognizer. |
> | Microsoft.CognitiveServices/accounts/ComputerVision/models/analyze/action | This operation recognizes content within an image by applying a domain-specific model.<br> The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request.<br> Currently, the API provides following domain-specific models: celebrities, landmarks. |
-> | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyze/action | Use this interface to perform a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents.<br>It can handle hand-written, printed or mixed documents.<br>When you use the Read interface, the response contains a header called 'Operation-Location'.<br>The 'Operation-Location' header contains the URL that you must use for your Get Read Result operation to access OCR results. |
-> | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyzeresults/read | Use this interface to retrieve the status and OCR result of a Read operation. The URL containing the 'operationId' is returned in the Read operation 'Operation-Location' response header. |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyze/action | Use this interface to perform a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents.<br>It can handle hand-written, printed or mixed documents.<br>When you use the Read interface, the response contains a header called 'Operation-Location'.<br>The 'Operation-Location' header contains the URL that you must use for your Get Read Result operation to access OCR results.* |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyzeresults/read | Use this interface to retrieve the status and OCR result of a Read operation. The URL containing the 'operationId' is returned in the Read operation 'Operation-Location' response header.* |
> | Microsoft.CognitiveServices/accounts/ComputerVision/read/core/asyncbatchanalyze/action | Use this interface to get the result of a Batch Read File operation, employing the state-of-the-art Optical Character |
> | Microsoft.CognitiveServices/accounts/ComputerVision/read/operations/read | This interface is used for getting OCR results of Read operation. The URL to this interface should be retrieved from <b>"Operation-Location"</b> field returned from Batch Read File interface. |
> | Microsoft.CognitiveServices/accounts/ComputerVision/textoperations/read | This interface is used for getting recognize text operation result. The URL to this interface should be retrieved from <b>"Operation-Location"</b> field returned from Recognize Text interface. |
@@ -5094,6 +5153,126 @@ Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/ImmersiveReader/getcontentmodelforreader/action | Creates an Immersive Reader session |
> | Microsoft.CognitiveServices/accounts/InkRecognizer/recognize/action | Given a set of stroke data analyzes the content and generates a list of recognized entities including recognized text. |
> | Microsoft.CognitiveServices/accounts/LUIS/predict/action | Gets the published endpoint prediction for the given query. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/write | Creates a new LUIS app. Updates the name or description of the application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/azureaccounts/action | Assigns an Azure account to the application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/delete | Deletes an application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/read | Gets the application info. Lists all of the user applications. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/import/action | Imports an application to LUIS, the application's JSON should be included in the request body. Returns new app ID. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/publish/action | Publishes a specific version of the application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/querylogsasync/action | Start a download request for the query logs of the past month for the application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/azureaccounts/read | Gets the LUIS Azure accounts assigned to the application for the user using their Azure Resource Manager token. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/azureaccounts/delete | Gets the LUIS Azure accounts for the user using their Azure Resource Manager token. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/cultures/read | Gets the supported LUIS application cultures. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/customprebuiltdomains/write | Adds a prebuilt domain along with its models as a new application. Returns new app ID. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/customprebuiltdomains/read | Gets all the available custom prebuilt domains for a specific culture. Gets all the available custom prebuilt domains for all cultures. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/domains/read | Gets the available application domains. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/endpoints/read | Returns the available endpoint deployment regions and URLs. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/publishsettings/read | Get the publish settings for the application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/publishsettings/write | Updates the application publish settings. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/querylogs/read | Gets the query logs of the past month for the application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/querylogsasync/read | Get the status of the download request for query logs. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/settings/read | Get the application settings |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/settings/write | Updates the application settings |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/usagescenarios/read | Gets the application available usage scenarios. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/train/action | Sends a training request for a version of a specified LUIS application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/clone/action | Creates a new application version equivalent to the current snapshot of the selected application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/delete | Deletes an application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/read | Gets the application version info. Gets the info for the list of application versions. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/import/action | Imports a new version into a LUIS application, the version's JSON should be included in the request body. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/write | Updates the name or description of the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/closedlists/write | Adds a list entity to the LUIS app. Adds a batch of sublists to an existing closedlist.* Updates the closed list model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/closedlists/delete | Deletes a closed list entity from the application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/closedlists/read | Gets information of a closed list model. Gets information about the closedlist models. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/closedlists/roles/write | Adds a role for a closed list entity model. Updates a role for a closed list entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/closedlists/roles/delete | Deletes the role for a closed list entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/closedlists/roles/read | Gets the role for a closed list entity model. Gets the roles for a closed list entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/closedlists/sublists/write | Adds a list to an existing closed list. Updates one of the closed list's sublists. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/closedlists/sublists/delete | Deletes a sublist of a specified list entity. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/compositeentities/write | Adds a composite entity extractor to the application. Updates the composite entity extractor. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/compositeentities/delete | Deletes a composite entity extractor from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/compositeentities/read | Gets information about the composite entity model. Gets information about the composite entity models of the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/compositeentities/children/write | Adds a single child in an existing composite entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/compositeentities/children/delete | Deletes a composite entity extractor child from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/compositeentities/roles/write | Adds a role for a composite entity model. Updates a role for a composite entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/compositeentities/roles/delete | Deletes the role for a composite entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/compositeentities/roles/read | Gets the role for a composite entity model. Gets the roles for a composite entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltdomains/write | Adds a customizable prebuilt domain along with all of its models to this application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltdomains/delete | Deletes a prebuilt domain's models from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltentities/write | Adds a custom prebuilt domain entity model to the application version. Use [delete entity](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c1f) with the entity id to remove this entity. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltentities/read | Gets all custom prebuilt domain entities info for this application version |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltentities/roles/write | Adds a role for a custom prebuilt domain entity model. Updates a role for a custom prebuilt domain entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltentities/roles/delete | Deletes the role for a custom prebuilt entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltentities/roles/read | Gets the role for a custom prebuilt domain entity model. Gets the roles for a custom prebuilt domain entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltintents/write | Adds a custom prebuilt domain intent model to the application. Use [delete intent](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c1c) with the intent id to remove this intent. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltintents/read | Gets custom prebuilt intents info for this application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/customprebuiltmodels/read | Gets all custom prebuilt domain models info for this application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/entities/write | Adds a simple entity extractor to the application version. Updates the name of an entity extractor. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/entities/delete | Deletes a simple entity extractor from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/entities/read | Gets info about the simple entity model. Gets info about the simple entity models in the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/entities/roles/write | Adds a role for a simple entity model. Updates a role of a simple entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/entities/roles/delete | Deletes the role for a simple entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/entities/roles/read | Gets the role for a simple entity model. Gets the roles for a simple entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/entities/suggest/read | Suggests examples that would improve the accuracy of the entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/example/write | Adds a labeled example to the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/examples/write | Adds a batch of non-duplicate labeled examples to the specified application. Batch can't include hierarchical child entities. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/examples/delete | Deletes the label with the specified ID. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/examples/read | Returns a subset of endpoint examples to be reviewed. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/export/read | Exports a LUIS application version to JSON format. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/features/read | Gets all application version features. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/write | Adds a hierarchical entity extractor to the application version. Updates the name and children of a hierarchical entity extractor model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/delete | Deletes a hierarchical entity extractor from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/read | Gets info about the hierarchical entity model. Gets information about the hierarchical entity models in the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/children/write | Creates a single child in an existing hierarchical entity model. Renames a single child in an existing hierarchical entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/children/delete | Deletes a hierarchical entity extractor child from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/children/read | Gets info about the hierarchical entity child model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/roles/write | Adds a role for a hierarchical entity model. Updates a role for a hierarchical entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/roles/delete | Deletes the role for a hierarchical entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/hierarchicalentities/roles/read | Gets the role for a hierarchical entity model. Gets the roles for a hierarchical entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/intents/write | Adds an intent classifier to the application version. Updates the name of an intent classifier. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/intents/delete | Deletes an intent classifier from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/intents/read | Gets info about the intent model. Gets info about the intent models in the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/intents/patternrules/read | Gets the patterns for a specific intent. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/intents/suggest/read | Suggests examples that would improve the accuracy of the intent model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/listprebuilts/read | Gets all the available prebuilt entities for the application based on the application's culture. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/models/read | Gets info about the application version models. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/write | Adds a Pattern.any entity extractor to the application version. Updates the Pattern.any entity extractor. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/delete | Deletes a Pattern.any entity extractor from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/read | Gets info about the Pattern.any entity model. Gets info about the Pattern.any entity models in the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/explicitlist/write | Adds an item to a Pattern.any explicit list. Updates the explicit list item for a Pattern.any entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/explicitlist/delete | Deletes an item from a Pattern.any explicit list. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/explicitlist/read | Gets the explicit list of a Pattern.any entity model. Gets the explicit list item for a Pattern.Any entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/roles/write | Adds a role for a Pattern.any entity model. Updates a role for a Pattern.any entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/roles/delete | Deletes the role for a Pattern.any entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternanyentities/roles/read | Gets the role for a Pattern.any entity model. Gets the roles for a Pattern.any entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternrule/write | Adds a pattern to the specified application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternrules/write | Adds a list of patterns to the application version. Updates a pattern in the application version. Updates a list of patterns in the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternrules/delete | Deletes a list of patterns from the application version. Deletes a pattern from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patternrules/read | Gets the patterns in the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patterns/write | **THIS API IS DEPRECATED.** |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patterns/delete | **THIS API IS DEPRECATED.** |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/patterns/read | **THIS API IS DEPRECATED.** |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/phraselists/write | Creates a new phraselist feature. Updates the phrases, the state and the name of the phraselist feature. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/phraselists/delete | Deletes a phraselist feature from an application. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/phraselists/read | Gets phraselist feature info. Gets all phraselist features for the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/prebuilts/write | Adds a list of prebuilt entity extractors to the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/prebuilts/delete | Deletes a prebuilt entity extractor from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/prebuilts/read | Gets info about the prebuilt entity model. Gets info about the prebuilt entity models in the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/prebuilts/roles/write | Adds a role for a prebuilt entity model. Updates a role for a prebuilt entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/prebuilts/roles/delete | Deletes the role for a prebuilt entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/prebuilts/roles/read | Gets the role for a prebuilt entity model. Gets the roles for a prebuilt entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/regexentities/write | Adds a regular expression entity extractor to the application version. Updates the regular expression entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/regexentities/delete | Deletes a regular expression entity model from the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/regexentities/read | Gets info about a regular expression entity model. Gets info about the regular expression entity models in the application version. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/regexentities/roles/write | Adds a role for a regular expression entity model. Updates a role for a regular expression entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/regexentities/roles/delete | Deletes the role for a regular expression entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/regexentities/roles/read | Gets the roles for a regular expression entity model. Gets the role for a regular expression entity model. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/settings/read | Gets the application version settings. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/settings/write | Updates the application version settings. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/suggest/delete | Deletes an endpoint utterance. This utterance is in the "Review endpoint utterances" list. |
+> | Microsoft.CognitiveServices/accounts/LUIS/apps/versions/train/read | Gets the training status of all models (intents and entities) for the specified application version. You must call the train API to train the LUIS app before you call this API to get training status. |
+> | Microsoft.CognitiveServices/accounts/LUIS/azureaccounts/read | Gets the LUIS Azure accounts for the user using their Azure Resource Manager token. |
+> | Microsoft.CognitiveServices/accounts/LUIS/package/slot/gzip/read | Packages published LUIS application as GZip |
+> | Microsoft.CognitiveServices/accounts/LUIS/package/versions/gzip/read | Packages trained LUIS application as GZip |
> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/alert/anomaly/configurations/write | Create or update anomaly alerting configuration |
> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/alert/anomaly/configurations/delete | Delete anomaly alerting configuration |
> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/alert/anomaly/configurations/read | Query a single anomaly alerting configuration |
@@ -5301,6 +5480,9 @@ Azure service: [Machine Learning Service](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/workspaces/labeling/projects/write | Creates or updates labeling project in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/labeling/projects/delete | Deletes labeling project in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read | Gets labeling project summary in Machine Learning Services Workspace(s) |
+> | Microsoft.MachineLearningServices/workspaces/linkedServices/read | Gets all linked services for a Machine Learning Services Workspace |
+> | Microsoft.MachineLearningServices/workspaces/linkedServices/write | Create or Update Machine Learning Services Workspace Linked Service(s) |
+> | Microsoft.MachineLearningServices/workspaces/linkedServices/delete | Delete Machine Learning Services Workspace Linked Service(s) |
> | Microsoft.MachineLearningServices/workspaces/metadata/artifacts/read | Gets artifacts in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/metadata/artifacts/write | Creates or updates artifacts in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/metadata/artifacts/delete | Deletes artifacts in Machine Learning Services Workspace(s) |
@@ -5566,6 +5748,9 @@ Azure service: [Time Series Insights](../time-series-insights/index.yml)
> | Microsoft.TimeSeriesInsights/environments/eventsources/read | Get the properties of an event source. |
> | Microsoft.TimeSeriesInsights/environments/eventsources/write | Creates a new event source for an environment, or updates an existing event source. |
> | Microsoft.TimeSeriesInsights/environments/eventsources/delete | Deletes the event source. |
+> | Microsoft.TimeSeriesInsights/environments/privateendpointConnections/read | Get the properties of a private endpoint connection. |
+> | Microsoft.TimeSeriesInsights/environments/privateendpointConnections/write | Creates a new private endpoint connection for an environment, or updates an existing connection. |
+> | Microsoft.TimeSeriesInsights/environments/privateendpointConnections/delete | Deletes the private endpoint connection. |
> | Microsoft.TimeSeriesInsights/environments/referencedatasets/read | Get the properties of a reference data set. |
> | Microsoft.TimeSeriesInsights/environments/referencedatasets/write | Creates a new reference data set for an environment, or updates an existing reference data set. |
> | Microsoft.TimeSeriesInsights/environments/referencedatasets/delete | Deletes the reference data set. |
@@ -5878,9 +6063,6 @@ Azure service: core
> | Microsoft.AppConfiguration/configurationStores/eventGridFilters/read | Gets the properties of the specified configuration store event grid filter or lists all the configuration store event grid filters under the specified configuration store. |
> | Microsoft.AppConfiguration/configurationStores/eventGridFilters/write | Create or update a configuration store event grid filter with the specified parameters. |
> | Microsoft.AppConfiguration/configurationStores/eventGridFilters/delete | Deletes a configuration store event grid filter. |
-> | Microsoft.AppConfiguration/configurationStores/keyValues/read | Reads a key-value from the configuration store. |
-> | Microsoft.AppConfiguration/configurationStores/keyValues/write | Creates or updates a key-value in the configuration store. |
-> | Microsoft.AppConfiguration/configurationStores/keyValues/delete | Deletes an existing key-value from the configuration store. |
> | Microsoft.AppConfiguration/configurationStores/privateEndpointConnectionProxies/validate/action | Validate a private endpoint connection proxy under the specified configuration store. |
> | Microsoft.AppConfiguration/configurationStores/privateEndpointConnectionProxies/read | Get a private endpoint connection proxy under the specified configuration store. |
> | Microsoft.AppConfiguration/configurationStores/privateEndpointConnectionProxies/write | Create or update a private endpoint connection proxy under the specified configuration store. |
@@ -5895,6 +6077,10 @@ Azure service: core
> | Microsoft.AppConfiguration/configurationStores/providers/Microsoft.Insights/metricDefinitions/read | Retrieve all metric definitions for Microsoft App Configuration. |
> | Microsoft.AppConfiguration/locations/operationsStatus/read | Get the status of an operation. |
> | Microsoft.AppConfiguration/operations/read | Lists all of the operations supported by Microsoft App Configuration. |
+> | **DataAction** | **Description** |
+> | Microsoft.AppConfiguration/configurationStores/keyValues/read | Reads a key-value from the configuration store. |
+> | Microsoft.AppConfiguration/configurationStores/keyValues/write | Creates or updates a key-value in the configuration store. |
+> | Microsoft.AppConfiguration/configurationStores/keyValues/delete | Deletes an existing key-value from the configuration store. |
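This change moves the key-value operations out of the control-plane **Actions** list and under a **DataAction** header, which matters for custom role definitions: these operations only grant access when placed in the `DataActions` field of a role. The sketch below illustrates this partitioning rule; the helper function and its `"/keyValues/"` test are simplified for illustration only, not an official classification API.

```python
KEY_VALUE_OPS = [
    "Microsoft.AppConfiguration/configurationStores/keyValues/read",
    "Microsoft.AppConfiguration/configurationStores/keyValues/write",
    "Microsoft.AppConfiguration/configurationStores/keyValues/delete",
]

def split_operations(ops: list[str]) -> dict:
    """Partition operations into control-plane Actions and data-plane
    DataActions, treating the keyValues operations as data-plane per the
    table above (a simplified, hypothetical rule for illustration)."""
    data_actions = [op for op in ops if "/keyValues/" in op]
    actions = [op for op in ops if "/keyValues/" not in op]
    return {"Actions": actions, "DataActions": data_actions}

role_sections = split_operations(
    KEY_VALUE_OPS + ["Microsoft.AppConfiguration/configurationStores/read"]
)
print(role_sections["DataActions"])
```

In practice this means an existing custom role that listed `keyValues/read` under `Actions` would need updating to keep granting data-plane access after this change.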
### Microsoft.AzureStack
@@ -6693,6 +6879,7 @@ Azure service: [Security Center](../security-center/index.yml)
> | Microsoft.Security/iotDefenderSettings/write | Creates or updates IoT Defender Settings |
> | Microsoft.Security/iotDefenderSettings/delete | Deletes IoT Defender Settings |
> | Microsoft.Security/iotDefenderSettings/PackageDownloads/action | Gets downloadable IoT Defender packages information |
+> | Microsoft.Security/iotDefenderSettings/DownloadManagerActivation/action | Download manager activation file with subscription quota data |
> | Microsoft.Security/iotSecuritySolutions/write | Creates or updates IoT security solutions |
> | Microsoft.Security/iotSecuritySolutions/delete | Deletes IoT security solutions |
> | Microsoft.Security/iotSecuritySolutions/read | Gets IoT security solutions |
@@ -6713,6 +6900,8 @@ Azure service: [Security Center](../security-center/index.yml)
> | Microsoft.Security/iotSensors/write | Creates or updates IoT Sensors |
> | Microsoft.Security/iotSensors/delete | Deletes IoT Sensors |
> | Microsoft.Security/iotSensors/DownloadActivation/action | Downloads activation file for IoT Sensors |
+> | Microsoft.Security/iotSensors/TriggerTiPackageUpdate/action | Triggers threat intelligence package update |
+> | Microsoft.Security/iotSensors/DownloadResetPassword/action | Downloads reset password file for IoT Sensors |
> | Microsoft.Security/iotSite/read | Gets IoT site |
> | Microsoft.Security/iotSite/write | Creates or updates IoT site |
> | Microsoft.Security/iotSite/delete | Deletes IoT site |
@@ -7071,6 +7260,7 @@ Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Action | Description |
> | --- | --- |
> | Microsoft.Migrate/register/action | Registers Subscription with Microsoft.Migrate resource provider |
+> | Microsoft.Migrate/unregister/action | Unregisters Subscription with Microsoft.Migrate resource provider |
> | Microsoft.Migrate/assessmentprojects/read | Gets the properties of assessment project |
> | Microsoft.Migrate/assessmentprojects/write | Creates a new assessment project or updates an existing assessment project |
> | Microsoft.Migrate/assessmentprojects/delete | Deletes the assessment project |
@@ -7153,6 +7343,7 @@ Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.Migrate/moveCollections/moveResources/read | Gets all the move resources or a move resource from the move collection |
> | Microsoft.Migrate/moveCollections/moveResources/write | Creates or updates a move resource |
> | Microsoft.Migrate/moveCollections/moveResources/delete | Deletes a move resource from the move collection |
+> | Microsoft.Migrate/moveCollections/operations/read | Gets the status of the operation |
> | Microsoft.Migrate/moveCollections/unresolvedDependencies/read | Gets a list of unresolved dependencies in the move collection |
> | Microsoft.Migrate/Operations/read | Lists operations available on Microsoft.Migrate resource provider |
> | Microsoft.Migrate/projects/read | Gets the properties of a project |
@@ -9035,14 +9226,332 @@ Azure service: [Azure Arc](../azure-arc/index.yml)
> | Microsoft.HybridCompute/unregister/action | Unregisters the subscription for Microsoft.HybridCompute Resource Provider |
> | Microsoft.HybridCompute/locations/operationresults/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider |
> | Microsoft.HybridCompute/locations/operationstatus/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider |
+> | Microsoft.HybridCompute/locations/updateCenterOperationResults/read | Reads the status of an update center operation on machines |
> | Microsoft.HybridCompute/machines/read | Reads any Azure Arc machines |
> | Microsoft.HybridCompute/machines/write | Writes an Azure Arc machine |
> | Microsoft.HybridCompute/machines/delete | Deletes an Azure Arc machine |
+> | Microsoft.HybridCompute/machines/assessPatches/action | Assesses any Azure Arc machines to get missing software patches |
+> | Microsoft.HybridCompute/machines/installPatches/action | Installs patches on any Azure Arc machines |
> | Microsoft.HybridCompute/machines/extensions/read | Reads any Azure Arc extensions |
> | Microsoft.HybridCompute/machines/extensions/write | Installs or updates an Azure Arc extension |
> | Microsoft.HybridCompute/machines/extensions/delete | Deletes an Azure Arc extension |
+> | Microsoft.HybridCompute/machines/patchAssessmentResults/read | Reads any Azure Arc patchAssessmentResults |
+> | Microsoft.HybridCompute/machines/patchAssessmentResults/softwarePatches/read | Reads any Azure Arc patchAssessmentResults/softwarePatches |
+> | Microsoft.HybridCompute/machines/patchInstallationResults/read | Reads any Azure Arc patchInstallationResults |
+> | Microsoft.HybridCompute/machines/patchInstallationResults/softwarePatches/read | Reads any Azure Arc patchInstallationResults/softwarePatches |
> | Microsoft.HybridCompute/operations/read | Read all Operations for Azure Arc for Servers |
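The patch management operations above can be combined into an Azure custom role. The following is a minimal sketch, assuming a hypothetical role name and a placeholder subscription scope; in practice a JSON body of this shape is supplied to `az role definition create`.

```python
# Sketch of a custom role definition granting the Azure Arc patch
# assessment/installation actions listed above. The role name and the
# subscription ID in AssignableScopes are hypothetical placeholders.
patch_operator_role = {
    "Name": "Arc Patch Operator (example)",  # hypothetical name
    "IsCustom": True,
    "Description": "Assess and install patches on Azure Arc machines.",
    "Actions": [
        "Microsoft.HybridCompute/machines/read",
        "Microsoft.HybridCompute/machines/assessPatches/action",
        "Microsoft.HybridCompute/machines/installPatches/action",
        "Microsoft.HybridCompute/machines/patchAssessmentResults/read",
        "Microsoft.HybridCompute/machines/patchAssessmentResults/softwarePatches/read",
        "Microsoft.HybridCompute/machines/patchInstallationResults/read",
        "Microsoft.HybridCompute/machines/patchInstallationResults/softwarePatches/read",
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/subscriptions/00000000-0000-0000-0000-000000000000"  # placeholder
    ],
}

# Every operation string follows the
# {provider}/{resourceType}/.../{read|write|delete|action} shape.
for op in patch_operator_role["Actions"]:
    assert op.startswith("Microsoft.HybridCompute/")
```

Because these are all management-plane operations, they go under `Actions` rather than `DataActions`.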
+
+### Microsoft.Kubernetes
+
+Azure service: [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | --- | --- |
+> | Microsoft.Kubernetes/connectedClusters/Read | Read connectedClusters |
+> | Microsoft.Kubernetes/connectedClusters/Write | Writes connectedClusters |
+> | Microsoft.Kubernetes/connectedClusters/Delete | Deletes connectedClusters |
+> | Microsoft.Kubernetes/connectedClusters/listClusterUserCredentials/action | List clusterUser credential |
+> | Microsoft.Kubernetes/RegisteredSubscriptions/read | Reads registered subscriptions |
+> | **DataAction** | **Description** |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/initializerconfigurations/read | Reads initializerconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/initializerconfigurations/write | Writes initializerconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/initializerconfigurations/delete | Deletes initializerconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/mutatingwebhookconfigurations/read | Reads mutatingwebhookconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/mutatingwebhookconfigurations/write | Writes mutatingwebhookconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/mutatingwebhookconfigurations/delete | Deletes mutatingwebhookconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/validatingwebhookconfigurations/read | Reads validatingwebhookconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/validatingwebhookconfigurations/write | Writes validatingwebhookconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/admissionregistration.k8s.io/validatingwebhookconfigurations/delete | Deletes validatingwebhookconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/api/read | Reads api |
+> | Microsoft.Kubernetes/connectedClusters/api/v1/read | Reads api/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apiextensions.k8s.io/customresourcedefinitions/read | Reads customresourcedefinitions |
+> | Microsoft.Kubernetes/connectedClusters/apiextensions.k8s.io/customresourcedefinitions/write | Writes customresourcedefinitions |
+> | Microsoft.Kubernetes/connectedClusters/apiextensions.k8s.io/customresourcedefinitions/delete | Deletes customresourcedefinitions |
+> | Microsoft.Kubernetes/connectedClusters/apiregistration.k8s.io/apiservices/read | Reads apiservices |
+> | Microsoft.Kubernetes/connectedClusters/apiregistration.k8s.io/apiservices/write | Writes apiservices |
+> | Microsoft.Kubernetes/connectedClusters/apiregistration.k8s.io/apiservices/delete | Deletes apiservices |
+> | Microsoft.Kubernetes/connectedClusters/apis/read | Reads apis |
+> | Microsoft.Kubernetes/connectedClusters/apis/admissionregistration.k8s.io/read | Reads admissionregistration.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/admissionregistration.k8s.io/v1/read | Reads admissionregistration.k8s.io/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/admissionregistration.k8s.io/v1beta1/read | Reads admissionregistration.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/apiextensions.k8s.io/read | Reads apiextensions.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/apiextensions.k8s.io/v1/read | Reads apiextensions.k8s.io/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/apiextensions.k8s.io/v1beta1/read | Reads apiextensions.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/apiregistration.k8s.io/read | Reads apiregistration.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/apiregistration.k8s.io/v1/read | Reads apiregistration.k8s.io/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/apiregistration.k8s.io/v1beta1/read | Reads apiregistration.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/apps/read | Reads apps |
+> | Microsoft.Kubernetes/connectedClusters/apis/apps/v1beta1/read | Reads apps/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/apps/v1beta2/read | Reads v1beta2 |
+> | Microsoft.Kubernetes/connectedClusters/apis/authentication.k8s.io/read | Reads authentication.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/authentication.k8s.io/v1/read | Reads authentication.k8s.io/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/authentication.k8s.io/v1beta1/read | Reads authentication.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/authorization.k8s.io/read | Reads authorization.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/authorization.k8s.io/v1/read | Reads authorization.k8s.io/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/authorization.k8s.io/v1beta1/read | Reads authorization.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/autoscaling/read | Reads autoscaling |
+> | Microsoft.Kubernetes/connectedClusters/apis/autoscaling/v1/read | Reads autoscaling/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/autoscaling/v2beta1/read | Reads autoscaling/v2beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/autoscaling/v2beta2/read | Reads autoscaling/v2beta2 |
+> | Microsoft.Kubernetes/connectedClusters/apis/batch/read | Reads batch |
+> | Microsoft.Kubernetes/connectedClusters/apis/batch/v1/read | Reads batch/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/batch/v1beta1/read | Reads batch/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/certificates.k8s.io/read | Reads certificates.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/certificates.k8s.io/v1beta1/read | Reads certificates.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/coordination.k8s.io/read | Reads coordination.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/coordination.k8s.io/v1/read | Reads coordination/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/coordination.k8s.io/v1beta1/read | Reads coordination.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/events.k8s.io/read | Reads events.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/events.k8s.io/v1beta1/read | Reads events.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/extensions/read | Reads extensions |
+> | Microsoft.Kubernetes/connectedClusters/apis/extensions/v1beta1/read | Reads extensions/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/metrics.k8s.io/read | Reads metrics.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/metrics.k8s.io/v1beta1/read | Reads metrics.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/networking.k8s.io/read | Reads networking.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/networking.k8s.io/v1/read | Reads networking/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/networking.k8s.io/v1beta1/read | Reads networking.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/node.k8s.io/read | Reads node.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/node.k8s.io/v1beta1/read | Reads node.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/policy/read | Reads policy |
+> | Microsoft.Kubernetes/connectedClusters/apis/policy/v1beta1/read | Reads policy/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/rbac.authorization.k8s.io/read | Reads rbac.authorization.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/rbac.authorization.k8s.io/v1/read | Reads rbac.authorization/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/rbac.authorization.k8s.io/v1beta1/read | Reads rbac.authorization.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/scheduling.k8s.io/read | Reads scheduling.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/scheduling.k8s.io/v1/read | Reads scheduling/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/scheduling.k8s.io/v1beta1/read | Reads scheduling.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/storage.k8s.io/read | Reads storage.k8s.io |
+> | Microsoft.Kubernetes/connectedClusters/apis/storage.k8s.io/v1/read | Reads storage/v1 |
+> | Microsoft.Kubernetes/connectedClusters/apis/storage.k8s.io/v1beta1/read | Reads storage.k8s.io/v1beta1 |
+> | Microsoft.Kubernetes/connectedClusters/apps/controllerrevisions/read | Reads controllerrevisions |
+> | Microsoft.Kubernetes/connectedClusters/apps/controllerrevisions/write | Writes controllerrevisions |
+> | Microsoft.Kubernetes/connectedClusters/apps/controllerrevisions/delete | Deletes controllerrevisions |
+> | Microsoft.Kubernetes/connectedClusters/apps/daemonsets/read | Reads daemonsets |
+> | Microsoft.Kubernetes/connectedClusters/apps/daemonsets/write | Writes daemonsets |
+> | Microsoft.Kubernetes/connectedClusters/apps/daemonsets/delete | Deletes daemonsets |
+> | Microsoft.Kubernetes/connectedClusters/apps/deployments/read | Reads deployments |
+> | Microsoft.Kubernetes/connectedClusters/apps/deployments/write | Writes deployments |
+> | Microsoft.Kubernetes/connectedClusters/apps/deployments/delete | Deletes deployments |
+> | Microsoft.Kubernetes/connectedClusters/apps/replicasets/read | Reads replicasets |
+> | Microsoft.Kubernetes/connectedClusters/apps/replicasets/write | Writes replicasets |
+> | Microsoft.Kubernetes/connectedClusters/apps/replicasets/delete | Deletes replicasets |
+> | Microsoft.Kubernetes/connectedClusters/apps/statefulsets/read | Reads statefulsets |
+> | Microsoft.Kubernetes/connectedClusters/apps/statefulsets/write | Writes statefulsets |
+> | Microsoft.Kubernetes/connectedClusters/apps/statefulsets/delete | Deletes statefulsets |
+> | Microsoft.Kubernetes/connectedClusters/authentication.k8s.io/tokenreviews/write | Writes tokenreviews |
+> | Microsoft.Kubernetes/connectedClusters/authentication.k8s.io/userextras/impersonate/action | Impersonate userextras |
+> | Microsoft.Kubernetes/connectedClusters/authorization.k8s.io/localsubjectaccessreviews/write | Writes localsubjectaccessreviews |
+> | Microsoft.Kubernetes/connectedClusters/authorization.k8s.io/selfsubjectaccessreviews/write | Writes selfsubjectaccessreviews |
+> | Microsoft.Kubernetes/connectedClusters/authorization.k8s.io/selfsubjectrulesreviews/write | Writes selfsubjectrulesreviews |
+> | Microsoft.Kubernetes/connectedClusters/authorization.k8s.io/subjectaccessreviews/write | Writes subjectaccessreviews |
+> | Microsoft.Kubernetes/connectedClusters/autoscaling/horizontalpodautoscalers/read | Reads horizontalpodautoscalers |
+> | Microsoft.Kubernetes/connectedClusters/autoscaling/horizontalpodautoscalers/write | Writes horizontalpodautoscalers |
+> | Microsoft.Kubernetes/connectedClusters/autoscaling/horizontalpodautoscalers/delete | Deletes horizontalpodautoscalers |
+> | Microsoft.Kubernetes/connectedClusters/batch/cronjobs/read | Reads cronjobs |
+> | Microsoft.Kubernetes/connectedClusters/batch/cronjobs/write | Writes cronjobs |
+> | Microsoft.Kubernetes/connectedClusters/batch/cronjobs/delete | Deletes cronjobs |
+> | Microsoft.Kubernetes/connectedClusters/batch/jobs/read | Reads jobs |
+> | Microsoft.Kubernetes/connectedClusters/batch/jobs/write | Writes jobs |
+> | Microsoft.Kubernetes/connectedClusters/batch/jobs/delete | Deletes jobs |
+> | Microsoft.Kubernetes/connectedClusters/bindings/write | Writes bindings |
+> | Microsoft.Kubernetes/connectedClusters/certificates.k8s.io/certificatesigningrequests/read | Reads certificatesigningrequests |
+> | Microsoft.Kubernetes/connectedClusters/certificates.k8s.io/certificatesigningrequests/write | Writes certificatesigningrequests |
+> | Microsoft.Kubernetes/connectedClusters/certificates.k8s.io/certificatesigningrequests/delete | Deletes certificatesigningrequests |
+> | Microsoft.Kubernetes/connectedClusters/clusterconfig.azure.com/azureclusteridentityrequests/read | Reads azureclusteridentityrequests |
+> | Microsoft.Kubernetes/connectedClusters/clusterconfig.azure.com/azureclusteridentityrequests/write | Writes azureclusteridentityrequests |
+> | Microsoft.Kubernetes/connectedClusters/clusterconfig.azure.com/azureclusteridentityrequests/delete | Deletes azureclusteridentityrequests |
+> | Microsoft.Kubernetes/connectedClusters/componentstatuses/read | Reads componentstatuses |
+> | Microsoft.Kubernetes/connectedClusters/componentstatuses/write | Writes componentstatuses |
+> | Microsoft.Kubernetes/connectedClusters/componentstatuses/delete | Deletes componentstatuses |
+> | Microsoft.Kubernetes/connectedClusters/configmaps/read | Reads configmaps |
+> | Microsoft.Kubernetes/connectedClusters/configmaps/write | Writes configmaps |
+> | Microsoft.Kubernetes/connectedClusters/configmaps/delete | Deletes configmaps |
+> | Microsoft.Kubernetes/connectedClusters/coordination.k8s.io/leases/read | Reads leases |
+> | Microsoft.Kubernetes/connectedClusters/coordination.k8s.io/leases/write | Writes leases |
+> | Microsoft.Kubernetes/connectedClusters/coordination.k8s.io/leases/delete | Deletes leases |
+> | Microsoft.Kubernetes/connectedClusters/endpoints/read | Reads endpoints |
+> | Microsoft.Kubernetes/connectedClusters/endpoints/write | Writes endpoints |
+> | Microsoft.Kubernetes/connectedClusters/endpoints/delete | Deletes endpoints |
+> | Microsoft.Kubernetes/connectedClusters/events/read | Reads events |
+> | Microsoft.Kubernetes/connectedClusters/events/write | Writes events |
+> | Microsoft.Kubernetes/connectedClusters/events/delete | Deletes events |
+> | Microsoft.Kubernetes/connectedClusters/events.k8s.io/events/read | Reads events |
+> | Microsoft.Kubernetes/connectedClusters/events.k8s.io/events/write | Writes events |
+> | Microsoft.Kubernetes/connectedClusters/events.k8s.io/events/delete | Deletes events |
+> | Microsoft.Kubernetes/connectedClusters/extensions/daemonsets/read | Reads daemonsets |
+> | Microsoft.Kubernetes/connectedClusters/extensions/daemonsets/write | Writes daemonsets |
+> | Microsoft.Kubernetes/connectedClusters/extensions/daemonsets/delete | Deletes daemonsets |
+> | Microsoft.Kubernetes/connectedClusters/extensions/deployments/read | Reads deployments |
+> | Microsoft.Kubernetes/connectedClusters/extensions/deployments/write | Writes deployments |
+> | Microsoft.Kubernetes/connectedClusters/extensions/deployments/delete | Deletes deployments |
+> | Microsoft.Kubernetes/connectedClusters/extensions/ingresses/read | Reads ingresses |
+> | Microsoft.Kubernetes/connectedClusters/extensions/ingresses/write | Writes ingresses |
+> | Microsoft.Kubernetes/connectedClusters/extensions/ingresses/delete | Deletes ingresses |
+> | Microsoft.Kubernetes/connectedClusters/extensions/networkpolicies/read | Reads networkpolicies |
+> | Microsoft.Kubernetes/connectedClusters/extensions/networkpolicies/write | Writes networkpolicies |
+> | Microsoft.Kubernetes/connectedClusters/extensions/networkpolicies/delete | Deletes networkpolicies |
+> | Microsoft.Kubernetes/connectedClusters/extensions/podsecuritypolicies/read | Reads podsecuritypolicies |
+> | Microsoft.Kubernetes/connectedClusters/extensions/podsecuritypolicies/write | Writes podsecuritypolicies |
+> | Microsoft.Kubernetes/connectedClusters/extensions/podsecuritypolicies/delete | Deletes podsecuritypolicies |
+> | Microsoft.Kubernetes/connectedClusters/extensions/replicasets/read | Reads replicasets |
+> | Microsoft.Kubernetes/connectedClusters/extensions/replicasets/write | Writes replicasets |
+> | Microsoft.Kubernetes/connectedClusters/extensions/replicasets/delete | Deletes replicasets |
+> | Microsoft.Kubernetes/connectedClusters/groups/impersonate/action | Impersonate groups |
+> | Microsoft.Kubernetes/connectedClusters/healthz/read | Reads healthz |
+> | Microsoft.Kubernetes/connectedClusters/healthz/autoregister-completion/read | Reads autoregister-completion |
+> | Microsoft.Kubernetes/connectedClusters/healthz/etcd/read | Reads etcd |
+> | Microsoft.Kubernetes/connectedClusters/healthz/log/read | Reads log |
+> | Microsoft.Kubernetes/connectedClusters/healthz/ping/read | Reads ping |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/apiservice-openapi-controller/read | Reads apiservice-openapi-controller |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/apiservice-registration-controller/read | Reads apiservice-registration-controller |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/apiservice-status-available-controller/read | Reads apiservice-status-available-controller |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/bootstrap-controller/read | Reads bootstrap-controller |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/ca-registration/read | Reads ca-registration |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/crd-informer-synced/read | Reads crd-informer-synced |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/generic-apiserver-start-informers/read | Reads generic-apiserver-start-informers |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/kube-apiserver-autoregistration/read | Reads kube-apiserver-autoregistration |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/rbac/bootstrap-roles/read | Reads bootstrap-roles |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/scheduling/bootstrap-system-priority-classes/read | Reads bootstrap-system-priority-classes |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/start-apiextensions-controllers/read | Reads start-apiextensions-controllers |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/start-apiextensions-informers/read | Reads start-apiextensions-informers |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/start-kube-aggregator-informers/read | Reads start-kube-aggregator-informers |
+> | Microsoft.Kubernetes/connectedClusters/healthz/poststarthook/start-kube-apiserver-admission-initializer/read | Reads start-kube-apiserver-admission-initializer |
+> | Microsoft.Kubernetes/connectedClusters/limitranges/read | Reads limitranges |
+> | Microsoft.Kubernetes/connectedClusters/limitranges/write | Writes limitranges |
+> | Microsoft.Kubernetes/connectedClusters/limitranges/delete | Deletes limitranges |
+> | Microsoft.Kubernetes/connectedClusters/livez/read | Reads livez |
+> | Microsoft.Kubernetes/connectedClusters/livez/autoregister-completion/read | Reads autoregister-completion |
+> | Microsoft.Kubernetes/connectedClusters/livez/etcd/read | Reads etcd |
+> | Microsoft.Kubernetes/connectedClusters/livez/log/read | Reads log |
+> | Microsoft.Kubernetes/connectedClusters/livez/ping/read | Reads ping |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/apiservice-openapi-controller/read | Reads apiservice-openapi-controller |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/apiservice-registration-controller/read | Reads apiservice-registration-controller |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/apiservice-status-available-controller/read | Reads apiservice-status-available-controller |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/bootstrap-controller/read | Reads bootstrap-controller |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/ca-registration/read | Reads ca-registration |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/crd-informer-synced/read | Reads crd-informer-synced |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/generic-apiserver-start-informers/read | Reads generic-apiserver-start-informers |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/kube-apiserver-autoregistration/read | Reads kube-apiserver-autoregistration |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/rbac/bootstrap-roles/read | Reads bootstrap-roles |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/scheduling/bootstrap-system-priority-classes/read | Reads bootstrap-system-priority-classes |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/start-apiextensions-controllers/read | Reads start-apiextensions-controllers |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/start-apiextensions-informers/read | Reads start-apiextensions-informers |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/start-kube-aggregator-informers/read | Reads start-kube-aggregator-informers |
+> | Microsoft.Kubernetes/connectedClusters/livez/poststarthook/start-kube-apiserver-admission-initializer/read | Reads start-kube-apiserver-admission-initializer |
+> | Microsoft.Kubernetes/connectedClusters/logs/read | Reads logs |
+> | Microsoft.Kubernetes/connectedClusters/metrics/read | Reads metrics |
+> | Microsoft.Kubernetes/connectedClusters/metrics.k8s.io/nodes/read | Reads nodes |
+> | Microsoft.Kubernetes/connectedClusters/metrics.k8s.io/pods/read | Reads pods |
+> | Microsoft.Kubernetes/connectedClusters/namespaces/read | Reads namespaces |
+> | Microsoft.Kubernetes/connectedClusters/namespaces/write | Writes namespaces |
+> | Microsoft.Kubernetes/connectedClusters/namespaces/delete | Deletes namespaces |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingresses/read | Reads ingresses |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingresses/write | Writes ingresses |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingresses/delete | Deletes ingresses |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/networkpolicies/read | Reads networkpolicies |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/networkpolicies/write | Writes networkpolicies |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/networkpolicies/delete | Deletes networkpolicies |
+> | Microsoft.Kubernetes/connectedClusters/node.k8s.io/runtimeclasses/read | Reads runtimeclasses |
+> | Microsoft.Kubernetes/connectedClusters/node.k8s.io/runtimeclasses/write | Writes runtimeclasses |
+> | Microsoft.Kubernetes/connectedClusters/node.k8s.io/runtimeclasses/delete | Deletes runtimeclasses |
+> | Microsoft.Kubernetes/connectedClusters/nodes/read | Reads nodes |
+> | Microsoft.Kubernetes/connectedClusters/nodes/write | Writes nodes |
+> | Microsoft.Kubernetes/connectedClusters/nodes/delete | Deletes nodes |
+> | Microsoft.Kubernetes/connectedClusters/openapi/v2/read | Reads v2 |
+> | Microsoft.Kubernetes/connectedClusters/persistentvolumeclaims/read | Reads persistentvolumeclaims |
+> | Microsoft.Kubernetes/connectedClusters/persistentvolumeclaims/write | Writes persistentvolumeclaims |
+> | Microsoft.Kubernetes/connectedClusters/persistentvolumeclaims/delete | Deletes persistentvolumeclaims |
+> | Microsoft.Kubernetes/connectedClusters/persistentvolumes/read | Reads persistentvolumes |
+> | Microsoft.Kubernetes/connectedClusters/persistentvolumes/write | Writes persistentvolumes |
+> | Microsoft.Kubernetes/connectedClusters/persistentvolumes/delete | Deletes persistentvolumes |
+> | Microsoft.Kubernetes/connectedClusters/pods/read | Reads pods |
+> | Microsoft.Kubernetes/connectedClusters/pods/write | Writes pods |
+> | Microsoft.Kubernetes/connectedClusters/pods/delete | Deletes pods |
+> | Microsoft.Kubernetes/connectedClusters/podtemplates/read | Reads podtemplates |
+> | Microsoft.Kubernetes/connectedClusters/podtemplates/write | Writes podtemplates |
+> | Microsoft.Kubernetes/connectedClusters/podtemplates/delete | Deletes podtemplates |
+> | Microsoft.Kubernetes/connectedClusters/policy/poddisruptionbudgets/read | Reads poddisruptionbudgets |
+> | Microsoft.Kubernetes/connectedClusters/policy/poddisruptionbudgets/write | Writes poddisruptionbudgets |
+> | Microsoft.Kubernetes/connectedClusters/policy/poddisruptionbudgets/delete | Deletes poddisruptionbudgets |
+> | Microsoft.Kubernetes/connectedClusters/policy/podsecuritypolicies/read | Reads podsecuritypolicies |
+> | Microsoft.Kubernetes/connectedClusters/policy/podsecuritypolicies/write | Writes podsecuritypolicies |
+> | Microsoft.Kubernetes/connectedClusters/policy/podsecuritypolicies/delete | Deletes podsecuritypolicies |
+> | Microsoft.Kubernetes/connectedClusters/policy/podsecuritypolicies/use/action | Use action on podsecuritypolicies |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/clusterrolebindings/read | Reads clusterrolebindings |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/clusterrolebindings/write | Writes clusterrolebindings |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/clusterrolebindings/delete | Deletes clusterrolebindings |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/clusterroles/read | Reads clusterroles |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/clusterroles/write | Writes clusterroles |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/clusterroles/delete | Deletes clusterroles |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/clusterroles/bind/action | Binds clusterroles |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/clusterroles/escalate/action | Escalates |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/rolebindings/read | Reads rolebindings |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/rolebindings/write | Writes rolebindings |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/rolebindings/delete | Deletes rolebindings |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/roles/read | Reads roles |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/roles/write | Writes roles |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/roles/delete | Deletes roles |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/roles/bind/action | Binds roles |
+> | Microsoft.Kubernetes/connectedClusters/rbac.authorization.k8s.io/roles/escalate/action | Escalates roles |
+> | Microsoft.Kubernetes/connectedClusters/readyz/read | Reads readyz |
+> | Microsoft.Kubernetes/connectedClusters/readyz/autoregister-completion/read | Reads autoregister-completion |
+> | Microsoft.Kubernetes/connectedClusters/readyz/etcd/read | Reads etcd |
+> | Microsoft.Kubernetes/connectedClusters/readyz/log/read | Reads log |
+> | Microsoft.Kubernetes/connectedClusters/readyz/ping/read | Reads ping |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/apiservice-openapi-controller/read | Reads apiservice-openapi-controller |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/apiservice-registration-controller/read | Reads apiservice-registration-controller |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/apiservice-status-available-controller/read | Reads apiservice-status-available-controller |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/bootstrap-controller/read | Reads bootstrap-controller |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/ca-registration/read | Reads ca-registration |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/crd-informer-synced/read | Reads crd-informer-synced |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/generic-apiserver-start-informers/read | Reads generic-apiserver-start-informers |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/kube-apiserver-autoregistration/read | Reads kube-apiserver-autoregistration |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/rbac/bootstrap-roles/read | Reads bootstrap-roles |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/scheduling/bootstrap-system-priority-classes/read | Reads bootstrap-system-priority-classes |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/start-apiextensions-controllers/read | Reads start-apiextensions-controllers |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/start-apiextensions-informers/read | Reads start-apiextensions-informers |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/start-kube-aggregator-informers/read | Reads start-kube-aggregator-informers |
+> | Microsoft.Kubernetes/connectedClusters/readyz/poststarthook/start-kube-apiserver-admission-initializer/read | Reads start-kube-apiserver-admission-initializer |
+> | Microsoft.Kubernetes/connectedClusters/readyz/shutdown/read | Reads shutdown |
+> | Microsoft.Kubernetes/connectedClusters/replicationcontrollers/read | Reads replicationcontrollers |
+> | Microsoft.Kubernetes/connectedClusters/replicationcontrollers/write | Writes replicationcontrollers |
+> | Microsoft.Kubernetes/connectedClusters/replicationcontrollers/delete | Deletes replicationcontrollers |
+> | Microsoft.Kubernetes/connectedClusters/resetMetrics/read | Reads resetMetrics |
+> | Microsoft.Kubernetes/connectedClusters/resourcequotas/read | Reads resourcequotas |
+> | Microsoft.Kubernetes/connectedClusters/resourcequotas/write | Writes resourcequotas |
+> | Microsoft.Kubernetes/connectedClusters/resourcequotas/delete | Deletes resourcequotas |
+> | Microsoft.Kubernetes/connectedClusters/scheduling.k8s.io/priorityclasses/read | Reads priorityclasses |
+> | Microsoft.Kubernetes/connectedClusters/scheduling.k8s.io/priorityclasses/write | Writes priorityclasses |
+> | Microsoft.Kubernetes/connectedClusters/scheduling.k8s.io/priorityclasses/delete | Deletes priorityclasses |
+> | Microsoft.Kubernetes/connectedClusters/secrets/read | Reads secrets |
+> | Microsoft.Kubernetes/connectedClusters/secrets/write | Writes secrets |
+> | Microsoft.Kubernetes/connectedClusters/secrets/delete | Deletes secrets |
+> | Microsoft.Kubernetes/connectedClusters/serviceaccounts/read | Reads serviceaccounts |
+> | Microsoft.Kubernetes/connectedClusters/serviceaccounts/write | Writes serviceaccounts |
+> | Microsoft.Kubernetes/connectedClusters/serviceaccounts/delete | Deletes serviceaccounts |
+> | Microsoft.Kubernetes/connectedClusters/serviceaccounts/impersonate/action | Impersonate serviceaccounts |
+> | Microsoft.Kubernetes/connectedClusters/services/read | Reads services |
+> | Microsoft.Kubernetes/connectedClusters/services/write | Writes services |
+> | Microsoft.Kubernetes/connectedClusters/services/delete | Deletes services |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csidrivers/read | Reads csidrivers |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csidrivers/write | Writes csidrivers |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csidrivers/delete | Deletes csidrivers |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csinodes/read | Reads csinodes |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csinodes/write | Writes csinodes |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csinodes/delete | Deletes csinodes |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/storageclasses/read | Reads storageclasses |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/storageclasses/write | Writes storageclasses |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/storageclasses/delete | Deletes storageclasses |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/volumeattachments/read | Reads volumeattachments |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/volumeattachments/write | Writes volumeattachments |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/volumeattachments/delete | Deletes volumeattachments |
+> | Microsoft.Kubernetes/connectedClusters/swagger-api/read | Reads swagger-api |
+> | Microsoft.Kubernetes/connectedClusters/swagger-ui/read | Reads swagger-ui |
+> | Microsoft.Kubernetes/connectedClusters/ui/read | Reads ui |
+> | Microsoft.Kubernetes/connectedClusters/users/impersonate/action | Impersonate users |
+> | Microsoft.Kubernetes/connectedClusters/version/read | Reads version |
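The operation strings in the table above follow the standard ARM pattern `{provider}/{resourceType}/{action}` and are matched against `Actions` wildcards in role definitions. The following is a rough sketch of that matching, not the actual Azure RBAC engine; the function name and use of `fnmatch` are illustrative assumptions:

```python
import fnmatch

def action_allowed(operation: str, role_actions: list[str]) -> bool:
    """Check whether an operation string matches any wildcard pattern in a
    role definition's Actions list. Comparison is case-insensitive, since
    ARM operation names are not case-sensitive. Sketch only -- not the
    real Azure RBAC evaluation logic (e.g. NotActions is ignored here)."""
    op = operation.lower()
    # fnmatch's "*" spans "/" as well, mirroring how ARM wildcards
    # match across path segments.
    return any(fnmatch.fnmatchcase(op, pattern.lower()) for pattern in role_actions)

print(action_allowed(
    "Microsoft.Kubernetes/connectedClusters/secrets/read",
    ["Microsoft.Kubernetes/connectedClusters/*/read"]))   # True
print(action_allowed(
    "Microsoft.Kubernetes/connectedClusters/secrets/delete",
    ["Microsoft.Kubernetes/connectedClusters/*/read"]))   # False
```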
+
### Microsoft.ManagedServices

Azure service: [Azure Lighthouse](../lighthouse/index.yml)
@@ -9148,76 +9657,76 @@ Azure service: [Site Recovery](../site-recovery/index.yml)
> | Action | Description |
> | --- | --- |
> | Microsoft.RecoveryServices/register/action | Registers subscription for given Resource Provider |
-> | Microsoft.RecoveryServices/Locations/backupPreValidateProtection/action | |
-> | Microsoft.RecoveryServices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
-> | Microsoft.RecoveryServices/Locations/backupValidateFeatures/action | Validate Features |
+> | microsoft.recoveryservices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
+> | microsoft.recoveryservices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
+> | microsoft.recoveryservices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
+> | microsoft.recoveryservices/Locations/backupPreValidateProtection/action | |
+> | microsoft.recoveryservices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | microsoft.recoveryservices/Locations/backupValidateFeatures/action | Validate Features |
> | Microsoft.RecoveryServices/locations/allocateStamp/action | AllocateStamp is internal operation used by service |
> | Microsoft.RecoveryServices/locations/checkNameAvailability/action | Check Resource Name Availability is an API to check if resource name is available |
> | Microsoft.RecoveryServices/locations/allocatedStamp/read | GetAllocatedStamp is internal operation used by service |
-> | Microsoft.RecoveryServices/Locations/backupProtectedItem/write | Create a backup Protected Item |
-> | Microsoft.RecoveryServices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | microsoft.recoveryservices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
+> | microsoft.recoveryservices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
+> | microsoft.recoveryservices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
+> | microsoft.recoveryservices/Locations/backupProtectedItem/write | Create a backup Protected Item |
+> | microsoft.recoveryservices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
> | Microsoft.RecoveryServices/locations/operationStatus/read | Gets Operation Status for a given Operation |
> | Microsoft.RecoveryServices/operations/read | Operation returns the list of Operations for a Resource Provider |
-> | Microsoft.RecoveryServices/Vaults/backupCrossRegionRestore/action | Cross Region Restore Recovery Points for Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupJobsExport/action | Export Jobs |
-> | Microsoft.RecoveryServices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
+> | microsoft.recoveryservices/Vaults/backupJobsExport/action | Export Jobs |
+> | microsoft.recoveryservices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/write | Create Vault operation creates an Azure resource of type 'vault' |
> | Microsoft.RecoveryServices/Vaults/read | The Get Vault operation gets an object representing the Azure resource of type 'vault' |
> | Microsoft.RecoveryServices/Vaults/delete | The Delete Vault operation deletes the specified Azure resource of type 'vault' |
-> | Microsoft.RecoveryServices/Vaults/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
-> | Microsoft.RecoveryServices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
-> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
-> | Microsoft.RecoveryServices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/cancel/action | Cancel the Job |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/read | Returns all Job Objects |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/delete | Delete a Protection Policy |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/read | Returns all Protection Policies |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/write | Creates Protection Policy |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
-> | Microsoft.RecoveryServices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
-> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services . |
+> | microsoft.recoveryservices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
+> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
+> | microsoft.recoveryservices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
+> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
+> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
+> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
+> | microsoft.recoveryservices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
+> | microsoft.recoveryservices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
+> | microsoft.recoveryservices/Vaults/backupJobs/cancel/action | Cancel the Job |
+> | microsoft.recoveryservices/Vaults/backupJobs/read | Returns all Job Objects |
+> | microsoft.recoveryservices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
+> | microsoft.recoveryservices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
+> | microsoft.recoveryservices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupPolicies/delete | Delete a Protection Policy |
+> | microsoft.recoveryservices/Vaults/backupPolicies/read | Returns all Protection Policies |
+> | microsoft.recoveryservices/Vaults/backupPolicies/write | Creates Protection Policy |
+> | microsoft.recoveryservices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
+> | microsoft.recoveryservices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
+> | microsoft.recoveryservices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
+> | microsoft.recoveryservices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | microsoft.recoveryservices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
+> | microsoft.recoveryservices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
+> | microsoft.recoveryservices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services Vault. |
> | Microsoft.RecoveryServices/Vaults/certificates/write | The Update Resource Certificate operation updates the resource/vault credential certificate. |
> | Microsoft.RecoveryServices/Vaults/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
> | Microsoft.RecoveryServices/Vaults/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
@@ -9226,14 +9735,14 @@ Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/monitoringAlerts/write | Resolves the alert. |
> | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/read | Gets the Recovery services vault notification configuration. |
> | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/write | Configures e-mail notifications to Recovery services vault. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
+> | microsoft.recoveryservices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
> | Microsoft.RecoveryServices/Vaults/registeredIdentities/write | The Register Service Container operation can be used to register a container with Recovery Service. |
> | Microsoft.RecoveryServices/Vaults/registeredIdentities/read | The Get Containers operation can be used to get the containers registered for a resource. |
> | Microsoft.RecoveryServices/Vaults/registeredIdentities/delete | The UnRegister Container operation can be used to unregister a container. |
@@ -9289,7 +9798,7 @@ Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/addDisks/action | Add disks |
> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/removeDisks/action | Remove disks |
> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/ResolveHealthErrors/action | |
-> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/failoverCancel/action | |
+> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/failoverCancel/action | Failover Cancel |
> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/operationresults/read | Track the results of an asynchronous operation on the resource Protected Items |
> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/recoveryPoints/read | Read any Replication Recovery Points |
> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/targetComputeSizes/read | Read any Target Compute Sizes |
@@ -9340,6 +9849,7 @@ Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationRecoveryPlans/testFailoverCleanup/action | Test Failover Cleanup Recovery Plan |
> | Microsoft.RecoveryServices/vaults/replicationRecoveryPlans/failoverCommit/action | Failover Commit Recovery Plan |
> | Microsoft.RecoveryServices/vaults/replicationRecoveryPlans/reProtect/action | ReProtect Recovery Plan |
+> | Microsoft.RecoveryServices/vaults/replicationRecoveryPlans/failoverCancel/action | Cancel Failover Recovery Plan |
> | Microsoft.RecoveryServices/vaults/replicationRecoveryPlans/operationresults/read | Track the results of an asynchronous operation on the resource Recovery Plans |
> | Microsoft.RecoveryServices/vaults/replicationRecoveryServicesProviders/read | Read any Recovery Services Providers |
> | Microsoft.RecoveryServices/vaults/replicationStorageClassificationMappings/read | Read any Storage Classification Mappings |
@@ -9352,7 +9862,7 @@ Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/read | Read any |
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/write | Create or Update any |
> | Microsoft.RecoveryServices/vaults/replicationvCenters/read | Read any vCenters |
-> | Microsoft.RecoveryServices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
+> | microsoft.recoveryservices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
> | Microsoft.RecoveryServices/vaults/usages/read | Read any Vault Usages |
> | Microsoft.RecoveryServices/Vaults/vaultTokens/read | The Vault Token operation can be used to get Vault Token for vault level backend operations. |
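Many of the `+`/`-` pairs in the hunk above only change the casing of the provider namespace (`Microsoft.RecoveryServices` to `microsoft.recoveryservices`). A changelog like this one is essentially a set difference between two published operation lists; a minimal sketch of computing one, comparing case-insensitively so pure casing churn is not reported as a real change (function name is illustrative, not a real API):

```python
def diff_operations(old: list[str], new: list[str]) -> tuple[set[str], set[str]]:
    """Return (added, removed) operation names between two lists.
    Comparison is case-insensitive, since ARM operation names are not
    case-sensitive; a casing-only rename is neither added nor removed."""
    old_set = {op.lower() for op in old}
    new_set = {op.lower() for op in new}
    added = {op for op in new if op.lower() not in old_set}
    removed = {op for op in old if op.lower() not in new_set}
    return added, removed

old = ["Microsoft.RecoveryServices/Vaults/backupCrrJobs/action",
       "Microsoft.RecoveryServices/Vaults/usages/read"]
new = ["microsoft.recoveryservices/Locations/backupCrrJobs/action",
       "microsoft.recoveryservices/Vaults/usages/read"]
added, removed = diff_operations(old, new)
# The usages/read casing change is ignored; only the Vaults -> Locations
# move of backupCrrJobs/action is reported.
```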
@@ -9532,6 +10042,16 @@ Azure service: [Azure Digital Twins](../digital-twins/index.yml)
> | Microsoft.DigitalTwins/digitalTwinsInstances/logDefinitions/read | Gets the log settings for the resource's Azure Monitor |
> | Microsoft.DigitalTwins/digitalTwinsInstances/metricDefinitions/read | Gets the metric settings for the resource's Azure Monitor |
> | Microsoft.DigitalTwins/digitalTwinsInstances/operationsResults/read | Read any Operation Result |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnectionProxies/validate/action | Validate PrivateEndpointConnectionProxies resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnectionProxies/read | Read PrivateEndpointConnectionProxies resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnectionProxies/write | Write PrivateEndpointConnectionProxies resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnectionProxies/delete | Delete PrivateEndpointConnectionProxies resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnectionProxies/operationResults/read | Get the result of an async operation on a private endpoint connection proxy |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnections/read | Read PrivateEndpointConnection resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnections/write | Write PrivateEndpointConnection resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnections/delete | Delete PrivateEndpointConnection resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnections/operationResults/read | Get the result of an async operation on a private endpoint connection |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/privateLinkResources/read | Reads PrivateLinkResources for Digital Twins |
> | Microsoft.DigitalTwins/locations/checkNameAvailability/action | Check Name Availability of a resource in the Digital Twins Resource Provider |
> | Microsoft.DigitalTwins/locations/operationsResults/read | Read any Operation Result |
> | Microsoft.DigitalTwins/operations/read | Read all Operations |
search https://docs.microsoft.com/en-us/azure/search/search-sku-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-sku-manage-costs.md
@@ -19,9 +19,14 @@ In this article, learn about the pricing model, billable events, and tips for ma
The scalability architecture in Azure Cognitive Search is based on flexible combinations of replicas and partitions so that you can vary capacity depending on whether you need more query or indexing power, and pay only for what you need.
-The amount resources used by your search service, multiplied by the billing rate established by the service tier, determines the cost of running the service. Costs and capacity are tightly bound. When estimating costs, understanding the capacity required to run your indexing and query workloads gives you the best idea as to what projected costs will be.
+The amount of resources used by your search service, multiplied by the billing rate established by the service tier, determines the cost of running the service. Costs and capacity are tightly bound. When estimating costs, understanding the capacity required to run your indexing and query workloads gives you the best idea as to what projected costs will be.
-For billing purposes, Cognitive Search has the concept of a *search unit* (SU). An SU is the product of the *replicas* and *partitions* used by a service: **(R x P = SU)**. The number of SUs multiplied by the billing rate **(SU * rate = monthly spend)** is the primary determinant of search-related costs.
+For billing purposes, there are two simple formulas to be aware of:
+
+| Formula | Description |
+|---------|-------------|
+| **R x P = SU** | Number of replicas used, multiplied by the number of partitions used, equals the quantity of *search units* (SU) used by a service. An SU is a unit of resource, and it can be either a partition or a replica. |
+| **SU * billing rate = monthly spend** | The number of SUs multiplied by the billing rate of the tier at which you provisioned the service is the primary determinant of your overall monthly bill. Some features or workloads have dependencies on other Azure services, which can increase the cost of your solution at the subscription level. The billable events section below identifies features that can add to your bill. |
Every service starts with one SU (one replica multiplied by one partition) as the minimum. The maximum for any service is 36 SUs. This maximum can be reached in multiple ways: 6 partitions x 6 replicas, or 3 partitions x 12 replicas, for example. It's common to use less than total capacity (for example, a 3-replica, 3-partition service billed as 9 SUs). See the [Partition and replica combinations](search-capacity-planning.md#chart) chart for valid combinations.
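The two formulas from the table above can be sketched numerically. This is a toy illustration only: the rate passed in is a hypothetical placeholder, not a real Azure Cognitive Search price (real per-tier rates are on the Azure pricing pages):

```python
def search_units(replicas: int, partitions: int) -> int:
    """R x P = SU. Every service uses at least 1 SU; the maximum is 36."""
    su = replicas * partitions
    if not 1 <= su <= 36:
        raise ValueError("SU count must be between 1 and 36")
    return su

def monthly_spend(su: int, monthly_rate_per_su: float) -> float:
    """SU * billing rate = monthly spend. The rate is a placeholder
    argument here, not an actual tier price."""
    return su * monthly_rate_per_su

su = search_units(replicas=3, partitions=3)    # the 9-SU example above
cost = monthly_spend(su, monthly_rate_per_su=80.0)   # hypothetical rate
```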
search https://docs.microsoft.com/en-us/azure/search/search-sku-tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-sku-tier.md
@@ -67,7 +67,7 @@ Tier pricing includes details about per-partition storage that ranges from 2 GB
## Billing rates
-Tiers have different billing rates, with higher rates for tiers that run on more expensive hardware or provide more expensive features. The billing rate is what you see in the [Azure pricing pages](https://azure.microsoft.com/pricing/details/search/) for each service tier of Azure Cognitive Search.
+Tiers have different billing rates, with higher rates for tiers that run on more expensive hardware or provide more expensive features. The per-tier billing rate can be found in the [Azure pricing pages](https://azure.microsoft.com/pricing/details/search/) for Azure Cognitive Search.
Once you create a service, the billing rate becomes both a *fixed cost* of running the service around the clock, and an *incremental cost* if you choose to add more capacity.
security-center https://docs.microsoft.com/en-us/azure/security-center/alerts-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
@@ -7,7 +7,7 @@ author: memildin
manager: rkarlin ms.service: security-center ms.devlang: na
-ms.topic: overview
+ms.topic: reference
ms.tgt_pltfrm: na ms.workload: na ms.date: 01/11/2021
security-center https://docs.microsoft.com/en-us/azure/security-center/defender-for-sql-usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-sql-usage.md
@@ -74,7 +74,7 @@ Both of these are described below.
1. Optionally, configure email notification for security alerts.
- You can set a list of recipients to receive an email notification when Security Center alerts are generated. The email contains a direct sk to the alert in Azure Security Center with all the relevant details. For more information, see [Set up email notifications for security alerts](security-center-provide-security-contact-details.md).
+ You can set a list of recipients to receive an email notification when Security Center alerts are generated. The email contains a direct link to the alert in Azure Security Center with all the relevant details. For more information, see [Set up email notifications for security alerts](security-center-provide-security-contact-details.md).
security-center https://docs.microsoft.com/en-us/azure/security-center/release-notes-archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes-archive.md
@@ -7,7 +7,7 @@ author: memildin
manager: rkarlin ms.service: security-center ms.devlang: na
-ms.topic: conceptual
+ms.topic: reference
ms.tgt_pltfrm: na ms.workload: na ms.date: 01/07/2020
security-center https://docs.microsoft.com/en-us/azure/security-center/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
@@ -1,16 +1,16 @@
--- title: Release notes for Azure Security Center
-description: A description of what's new and changed in Azure Security Center.
+description: A description of what's new and changed in Azure Security Center
services: security-center documentationcenter: na author: memildin manager: rkarlin ms.service: security-center ms.devlang: na
-ms.topic: overview
+ms.topic: reference
ms.tgt_pltfrm: na ms.workload: na
-ms.date: 01/07/2021
+ms.date: 01/17/2021
ms.author: memildin ---
@@ -29,6 +29,24 @@ To learn about *planned* changes that are coming soon to Security Center, see [I
## January 2021
+Updates in December include:
+
+- [CSV export of filtered list of recommendations](#csv-export-of-filtered-list-of-recommendations)
+- [Vulnerability assessment for on-premise and multi-cloud machines is generally available](#vulnerability-assessment-for-on-premise-and-multi-cloud-machines-is-generally-available)
+
+### CSV export of filtered list of recommendations
+
+In November 2020, we added filters to the recommendations page ([Recommendations list now includes filters](#recommendations-list-now-includes-filters)). In December, we expanded those filters ([Recommendations page has new filters for environment, severity, and available responses](#recommendations-page-has-new-filters-for-environment-severity-and-available-responses)).
+
+With this announcement, we're changing the behavior of the **Download to CSV** button so that the CSV export only includes the recommendations currently displayed in the filtered list.
+
+For example, in the image below you can see that the list has been filtered to two recommendations. The CSV file that is generated includes the status details for every resource affected by those two recommendations.
+
+:::image type="content" source="media/security-center-managing-and-responding-alerts/export-to-csv-with-filters.png" alt-text="Exporting filtered recommendations to a CSV file":::
+
+Learn more in [Security recommendations in Azure Security Center](security-center-recommendations.md).
+ ### Vulnerability assessment for on-premise and multi-cloud machines is generally available In October, we announced a preview for scanning Azure Arc enabled servers with [Azure Defender for servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys).
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-azure-ddos-protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-ddos-protection.md
@@ -33,7 +33,7 @@ Distributed denial of service (DDoS) attacks attempt to exhaust an application's
1. Select **Azure DDoS Protection** from the data connectors gallery, and then select **Open Connector Page** on the preview pane.
-1. Enable **Diagnostic logs** on all the firewalls whose logs you wish to connect:
+1. Enable **Diagnostic logs** on all the public IP addresses whose logs you wish to connect:
1. Select the **Open Diagnostics settings >** link, and choose a **Public IP Address** resource from the list.
@@ -58,4 +58,4 @@ Distributed denial of service (DDoS) attacks attempt to exhaust an application's
In this document, you learned how to connect Azure DDoS Protection logs to Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).-- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).\ No newline at end of file
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-besecure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-besecure.md
@@ -14,7 +14,7 @@ ms.devlang: na
ms.topic: how-to ms.tgt_pltfrm: na ms.workload: na
-ms.date: 10/25/2020
+ms.date: 01/12/2021
ms.author: yelevin ---
@@ -22,9 +22,9 @@ ms.author: yelevin
# Connect your Beyond Security beSECURE to Azure Sentinel > [!IMPORTANT]
-> The Beyond Security beSECURE data connector in Azure Sentinel is currently in public preview. This feature is provided without a service level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> The Beyond Security beSECURE connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Beyond Security beSECURE connector allows you to easily connect all your beSECURE security solution logs with your Azure Sentinel, to view dashboards, create custom alerts, and improve investigation. Integration between beSECURE and Azure Sentinel makes use of REST API.
+The Beyond Security beSECURE connector allows you to easily connect all your beSECURE security solution logs with your Azure Sentinel, to view dashboards, create custom alerts, and improve investigation. Integration between beSECURE and Azure Sentinel makes use of REST API.
> [!NOTE] > Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
@@ -33,7 +33,9 @@ Beyond Security beSECURE connector allows you to easily connect all your beSECUR
beSECURE can integrate with and export logs directly to Azure Sentinel.
-1. In the Azure Sentinel portal, click **Data connectors** and select **Beyond Security beSECURE (Preview)** and then **Open connector page**.
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select **Beyond Security beSECURE (Preview)** and then **Open connector page**.
1. Follow the steps below to configure your beSECURE solution to send out scan results, scan status and audit trail logs to Azure Sentinel.
@@ -46,8 +48,11 @@ beSECURE can integrate with and export logs directly to Azure Sentinel.
1. Enable Azure Sentinel
- **Provide beSECURE with Azure Sentinel settings.**
- - Copy the *Workspace ID* and *Primary Key* values from the Azure Sentinel connector page, paste them in the beSECURE configuration, and click **Modify**.
+ **Provide beSECURE with Azure Sentinel settings:**
+
+ Copy the *Workspace ID* and *Primary Key* values from the Azure Sentinel connector page, paste them in the beSECURE configuration, and click **Modify**.
+
+ :::image type="content" source="media/connectors/workspace-id-primary-key.png" alt-text="{Workspace ID and primary key}":::
## Find your data
@@ -56,13 +61,13 @@ After a successful connection is established, the data appears in **Logs**, unde
- `beSECURE_ScanEvents_CL` - `beSECURE_Audit_CL`
-To query the beSECURE logs in Log Analytics, enter one of the above table names at the top of the query window.
+To query the beSECURE logs in analytics rules, hunting queries, investigations, or anywhere else in Azure Sentinel, enter one of the above table names at the top of the query window.
## Validate connectivity It may take up to 20 minutes until your logs start to appear in Log Analytics. ## Next steps In this document, you learned how to connect beSECURE to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:-- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md). - [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-better-mtd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-better-mtd.md new file mode 100644
@@ -0,0 +1,65 @@
+---
+title: Connect BETTER Mobile Threat Defense (MTD) to Azure Sentinel | Microsoft Docs
+description: Learn how to use the BETTER Mobile Threat Defense (MTD) data connector to pull MTD logs into Azure Sentinel. View MTD data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.assetid: 0001cad6-699c-4ca9-b66c-80c194e439a5
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/12/2021
+ms.author: yelevin
+
+---
+
+# Connect your BETTER Mobile Threat Defense (MTD) to Azure Sentinel
+
+> [!IMPORTANT]
+> The BETTER Mobile Threat Defense (MTD) connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The BETTER Mobile Threat Defense (MTD) connector allows you to easily connect all your BETTER MTD security solution logs with your Azure Sentinel, to view dashboards, create custom alerts, and improve investigation. Integration between BETTER Mobile Threat Defense and Azure Sentinel makes use of REST API.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Configure and connect BETTER Mobile Threat Defense
+
+BETTER MTD can integrate and export logs directly to Azure Sentinel.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select **BETTER Mobile Threat Defense (MTD) (Preview)** and then **Open connector page**.
+
+1. Follow the steps on the connector page and on [this page from the BETTER MTD Documentation](https://mtd-docs.bmobi.net/integrations/azure-sentinel/setup-integration#mtd-integration-configuration) to finalize the integration on BETTER MTD Console.
+
+ When requested to enter the **Workspace ID** and **Primary Key** values, copy them from the Azure Sentinel connector page and paste them into the BETTER MTD configuration.
+
+ :::image type="content" source="media/connectors/workspace-id-primary-key.png" alt-text="{Workspace ID and primary key}":::
+
+## Find your data
+
+After a successful connection is established, the data appears in **Logs**, under the **CustomLogs** section, in one or more of the following tables:
+- `BetterMTDDeviceLog_CL`
+- `BetterMTDIncidentLog_CL`
+- `BetterMTDAppLog_CL`
+- `BetterMTDNetflowLog_CL`
+
+To query the BETTER MTD logs in analytics rules, hunting queries, or anywhere else in Azure Sentinel, enter one of the above table names at the top of the query window.
+
+## Validate connectivity
+
+It may take up to 20 minutes until your logs start to appear in Log Analytics.
+
+## Next steps
+
+In this document, you learned how to connect BETTER Mobile Threat Defense (MTD) to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cef-solution-config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-solution-config.md
@@ -27,15 +27,19 @@ If your security solution already has an existing connector, use the connector-s
- [AI Vectra Detect](connect-ai-vectra-detect.md) - [Check Point](connect-checkpoint.md)-- [Cisco](connect-cisco.md)
+- [Cisco ASA](connect-cisco.md)
+- [Citrix WAF](connect-citrix-waf.md)
+- [CyberArk Enterprise Password Vault](connect-cyberark.md)
- [ExtraHop Reveal(x)](connect-extrahop.md)-- [F5 ASM](connect-f5.md)
+- [F5 ASM](connect-f5.md)
- [Forcepoint products](connect-forcepoint-casb-ngfw.md) - [Fortinet](connect-fortinet.md) - [Illusive Networks AMS](connect-illusive-attack-management-system.md) - [One Identity Safeguard](connect-one-identity.md) - [Palo Alto Networks](connect-paloalto.md) - [Trend Micro Deep Security](connect-trend-micro.md)
+- [Trend Micro TippingPoint](connect-trend-micro-tippingpoint.md)
+- [WireX Network Forensics Platform](connect-wirex-systems.md)
- [Zscaler](connect-zscaler.md) ## Configure any other solution
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-data-sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-data-sources.md
@@ -65,6 +65,8 @@ The following data connection methods are supported by Azure Sentinel:
- [Alcide kAudit](connect-alcide-kaudit.md) - [Barracuda WAF](connect-barracuda.md) - [Barracuda CloudGen Firewall](connect-barracuda-cloudgen-firewall.md)
+ - [BETTER Mobile Threat Defense](connect-better-mtd.md)
+ - [Beyond Security beSECURE](connect-besecure.md)
- [Citrix Analytics (Security)](connect-citrix-analytics.md) - [F5 BIG-IP](connect-f5-big-ip.md) - [Forcepoint DLP](connect-forcepoint-dlp.md)
@@ -89,6 +91,8 @@ The following data connection methods are supported by Azure Sentinel:
- [AI Vectra Detect](connect-ai-vectra-detect.md) - [Check Point](connect-checkpoint.md) - [Cisco ASA](connect-cisco.md)
+ - [Citrix WAF](connect-citrix-waf.md)
+ - [CyberArk Enterprise Password Vault](connect-cyberark.md)
- [ExtraHop Reveal(x)](connect-extrahop.md) - [F5 ASM](connect-f5.md) - [Forcepoint products](connect-forcepoint-casb-ngfw.md)
@@ -97,6 +101,8 @@ The following data connection methods are supported by Azure Sentinel:
- [One Identity Safeguard](connect-one-identity.md) - [Palo Alto Networks](connect-paloalto.md) - [Trend Micro Deep Security](connect-trend-micro.md)
+ - [Trend Micro TippingPoint](connect-trend-micro-tippingpoint.md)
+ - [WireX Network Forensics Platform](connect-wirex-systems.md)
- [Zscaler](connect-zscaler.md) - [Other CEF-based appliances](connect-common-event-format.md) - **Firewalls, proxies, and endpoints - Syslog:**
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-trend-micro-tippingpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-trend-micro-tippingpoint.md new file mode 100644
@@ -0,0 +1,77 @@
+---
+title: Connect Trend Micro TippingPoint to Azure Sentinel | Microsoft Docs
+description: Learn how to use the Trend Micro TippingPoint data connector to pull TippingPoint SMS logs into Azure Sentinel. View TippingPoint data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.assetid: 0001cad6-699c-4ca9-b66c-80c194e439a5
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/12/2021
+ms.author: yelevin
+
+---
+# Connect your Trend Micro TippingPoint solution to Azure Sentinel
+
+> [!IMPORTANT]
+> The Trend Micro TippingPoint connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to connect your Trend Micro TippingPoint Threat Protection System solution to Azure Sentinel. The Trend Micro TippingPoint data connector allows you to easily connect your TippingPoint Security Management System (SMS) logs with Azure Sentinel, so that you can view the data in workbooks, use it to create custom alerts, and incorporate it to improve investigation.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Prerequisites
+
+- You must have read and write permissions on your Azure Sentinel workspace.
+
+- You must have read permissions to shared keys for the workspace.
+
+## Send Trend Micro TippingPoint logs to Azure Sentinel
+
+To get its logs into Azure Sentinel, configure your TippingPoint TPS solution to send Syslog messages in CEF format to a Linux-based log forwarding server (running rsyslog or syslog-ng). This server will have the Log Analytics agent installed on it, and the agent forwards the logs to your Azure Sentinel workspace.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select **Trend Micro TippingPoint (Preview)**, and then **Open connector page**.
+
+1. Follow the instructions in the **Instructions** tab, under **Configuration**:
+
+ 1. **1. Linux Syslog agent configuration** - Do this step if you don't already have a log forwarder running, or if you need another one. See [STEP 1: Deploy the log forwarder](connect-cef-agent.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+
+ 1. **2. Forward Trend Micro TippingPoint SMS logs to Syslog agent** - This configuration should include the following elements:
+      - Log destination – the hostname and/or IP address of your log forwarding server
+      - Protocol and port – **TCP 514** (if recommended otherwise, be sure to make the parallel change in the syslog daemon on your log forwarding server)
+      - Log format – **ArcSight CEF Format v4.2**
+      - Log types – all available
+
+ 1. **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+
+ It may take up to 20 minutes until your logs start to appear in Log Analytics.
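For orientation, the log forwarder deployed in step 1 typically ends up with an rsyslog rule along these lines, receiving CEF over TCP 514 and relaying it to the local Log Analytics agent. This is a hedged sketch only: the deployment script on the connector page generates the real configuration, and the file path and local agent port (25226) shown here are assumptions that may differ in your environment:

```conf
# Assumed path: /etc/rsyslog.d/security-config-omsagent.conf (illustrative)
# Accept incoming syslog over TCP 514 from the TippingPoint SMS
module(load="imtcp")
input(type="imtcp" port="514")
# Relay CEF-formatted messages to the local Log Analytics agent
if $rawmsg contains "CEF:" then @@127.0.0.1:25226
```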
+
+## Find your data
+
+After a successful connection is established, the data appears in **Logs**, under the **Azure Sentinel** section, in the *CommonSecurityLog* table.
+
+To query TrendMicro TippingPoint data in Log Analytics, copy the following into the query window, applying other filters as you choose:
+
+```kusto
+CommonSecurityLog
+| where DeviceVendor == "TrendMicroTippingPoint"
+```
+
+See the **Next steps** tab in the connector page for more query samples.
+
+## Next steps
+
+In this document, you learned how to connect Trend Micro TippingPoint to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
\ No newline at end of file
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-wirex-systems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-wirex-systems.md new file mode 100644
@@ -0,0 +1,76 @@
+---
+title: Connect WireX Network Forensics Platform (NFP) to Azure Sentinel | Microsoft Docs
+description: Learn how to use the WireX Systems NFP data connector to pull WireX NFP logs into Azure Sentinel. View WireX NFP data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.assetid: 0001cad6-699c-4ca9-b66c-80c194e439a5
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/12/2021
+ms.author: yelevin
+
+---
+# Connect your WireX Network Forensics Platform (NFP) appliance to Azure Sentinel
+
+> [!IMPORTANT]
+> The WireX Systems NFP connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to connect your WireX Systems Network Forensics Platform (NFP) appliance to Azure Sentinel. The WireX NFP data connector allows you to easily connect your NFP logs with Azure Sentinel, so that you can view the data in workbooks, use it to create custom alerts, and incorporate it to improve investigation.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Prerequisites
+
+- You must have read and write permissions on your Azure Sentinel workspace.
+
+- You must have read permissions to shared keys for the workspace.
+
+## Send WireX NFP logs to Azure Sentinel
+
+To get its logs into Azure Sentinel, configure your WireX Systems NFP appliance to send Syslog messages in CEF format to a Linux-based log forwarding server (running rsyslog or syslog-ng). This server will have the Log Analytics agent installed on it, and the agent forwards the logs to your Azure Sentinel workspace.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select **WireX Network Forensics Platform (Preview)**, and then **Open connector page**.
+
+1. Follow the instructions in the **Instructions** tab, under **Configuration**:
+
+ 1. **1. Linux Syslog agent configuration** - Do this step if you don't already have a log forwarder running, or if you need another one. See [STEP 1: Deploy the log forwarder](connect-cef-agent.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+
+ 1. **2. Forward Common Event Format (CEF) logs to Syslog agent** - Contact [WireX support](https://wirexsystems.com/contact-us/) for the proper configuration of your WireX NFP solution. This configuration should include the following elements:
+      - Log destination – the hostname and/or IP address of your log forwarding server
+      - Protocol and port – TCP 514 (if recommended otherwise, be sure to make the parallel change in the syslog daemon on your log forwarding server)
+      - Log format – CEF
+      - Log types – all recommended by WireX
+
+ 1. **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+
+ It may take up to 20 minutes until your logs start to appear in Log Analytics.
+
+## Find your data
+
+After a successful connection is established, the data appears in **Logs**, under the **Azure Sentinel** section, in the *CommonSecurityLog* table.
+
+To query WireX NFP data in Log Analytics, copy the following into the query window, applying other filters as you choose:
+
+```kusto
+CommonSecurityLog
+| where DeviceVendor == "WireX"
+```
+
+See the **Next steps** tab in the connector page for more query samples.
+
+## Next steps
+In this document, you learned how to connect WireX Systems NFP to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
\ No newline at end of file
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-management-libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-management-libraries.md
@@ -10,14 +10,20 @@ ms.custom: devx-track-csharp
# Dynamically provision Service Bus namespaces and entities The Azure Service Bus management libraries can dynamically provision Service Bus namespaces and entities. This enables complex deployments and messaging scenarios, and makes it possible to programmatically determine what entities to provision. These libraries are currently available for .NET.
-## Supported functionality
+## Overview
+There are three management libraries available for you to create and manage Service Bus entities:
-* Namespace creation, update, deletion
-* Queue creation, update, deletion
-* Topic creation, update, deletion
-* Subscription creation, update, deletion
+- [Azure.Messaging.ServiceBus.Administration](#azuremessagingservicebusadministration)
+- [Microsoft.Azure.ServiceBus.Management](#microsoftazureservicebusmanagement)
+- [Microsoft.Azure.Management.ServiceBus](#microsoftazuremanagementservicebus)
-## Azure.Messaging.ServiceBus.Administration (Recommended)
+All of these packages support create, get, list, update, and delete operations on **queues, topics, and subscriptions**. But only [Microsoft.Azure.Management.ServiceBus](#microsoftazuremanagementservicebus) supports create, update, list, get, and delete operations on **namespaces**, listing and regenerating SAS keys, and more.
+
+The Microsoft.Azure.Management.ServiceBus library works only with Azure Active Directory (Azure AD) authentication, and it doesn't support using a connection string. The other two libraries (Azure.Messaging.ServiceBus and Microsoft.Azure.ServiceBus) support connection string authentication and are easier to use. Of these, Azure.Messaging.ServiceBus is the latest, and it's the one we recommend.
+
+The following sections provide more details on these libraries.
+
+## Azure.Messaging.ServiceBus.Administration
You can use the [ServiceBusAdministrationClient](/dotnet/api/azure.messaging.servicebus.administration.servicebusadministrationclient) class in the [Azure.Messaging.ServiceBus.Administration](/dotnet/api/azure.messaging.servicebus.administration) namespace to manage namespaces, queues, topics, and subscriptions. Here's the sample code. For a complete example, see [CRUD example](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/servicebus/Azure.Messaging.ServiceBus/tests/Samples/Sample07_CrudOperations.cs). ```csharp
@@ -84,7 +90,7 @@ namespace adminClientTrack2
You can use the [ManagementClient](/dotnet/api/microsoft.azure.servicebus.management.managementclient) class in the [Microsoft.Azure.ServiceBus.Management](/dotnet/api/microsoft.azure.servicebus.management) namespace to manage namespaces, queues, topics, and subscriptions. Here's the sample code: > [!NOTE]
-> We recommend that you use the `ServiceBusAdministrationClient` class from the `Azure.Messaging.ServiceBus.Administration` library, which is the latest SDK. For details, see the [first section](#azuremessagingservicebusadministration-recommended).
+> We recommend that you use the `ServiceBusAdministrationClient` class from the `Azure.Messaging.ServiceBus.Administration` library, which is the latest SDK. For details, see the [first section](#azuremessagingservicebusadministration).
```csharp using System;
@@ -151,7 +157,7 @@ To get started using this library, you must authenticate with the Azure Active D
* [Use the Azure portal to create Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md) * [Use Azure PowerShell to create a service principal to access resources](../active-directory/develop/howto-authenticate-service-principal-powershell.md)
-* [Use Azure CLI to create a service principal to access resources](/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest)
+* [Use Azure CLI to create a service principal to access resources](/cli/azure/create-an-azure-service-principal-azure-cli)
These tutorials provide you with an `AppId` (Client ID), `TenantId`, and `ClientSecret` (authentication key), all of which are used for authentication by the management libraries. You must have at-least [**Azure Service Bus Data Owner**](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner) or [**Contributor**](../role-based-access-control/built-in-roles.md#contributor) permissions for the resource group on which you wish to run.
service-bus-messaging https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-performance-improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-performance-improvements.md
@@ -2,7 +2,7 @@
title: Best practices for improving performance using Azure Service Bus description: Describes how to use Service Bus to optimize performance when exchanging brokered messages. ms.topic: article
-ms.date: 11/11/2020
+ms.date: 01/15/2021
ms.custom: devx-track-csharp ---
@@ -19,22 +19,27 @@ Service Bus enables clients to send and receive messages via one of three protoc
2. Service Bus Messaging Protocol (SBMP) 3. Hypertext Transfer Protocol (HTTP)
-AMQP is the most efficient, because it maintains the connection to Service Bus. It also implements batching and prefetching. Unless explicitly mentioned, all content in this article assumes the use of AMQP or SBMP.
+AMQP is the most efficient, because it maintains the connection to Service Bus. It also implements [batching](#batching-store-access) and [prefetching](#prefetching). Unless explicitly mentioned, all content in this article assumes the use of AMQP or SBMP.
> [!IMPORTANT] > The SBMP is only available for .NET Framework. AMQP is the default for .NET Standard. ## Choosing the appropriate Service Bus .NET SDK
-There are two supported Azure Service Bus .NET SDKs. Their APIs are similar, and it can be confusing which one to choose. Refer to the following table to help guide your decision. We suggest using the Microsoft.Azure.ServiceBus SDK as It's more modern, performant, and is cross-platform compatible. Additionally, it supports AMQP over WebSockets and is part of the Azure .NET SDK collection of open-source projects.
+There are three supported Azure Service Bus .NET SDKs. Their APIs are similar, and it can be confusing which one to choose. Refer to the following table to help guide your decision. The Azure.Messaging.ServiceBus SDK is the latest, and we recommend using it over the other SDKs. Both the Azure.Messaging.ServiceBus and Microsoft.Azure.ServiceBus SDKs are modern, performant, and cross-platform compatible. Additionally, they support AMQP over WebSockets and are part of the Azure .NET SDK collection of open-source projects.
| NuGet Package | Primary Namespace(s) | Minimum Platform(s) | Protocol(s) | |---------------|----------------------|---------------------|-------------|
-| <a href="https://www.nuget.org/packages/Microsoft.Azure.ServiceBus" target="_blank">Microsoft.Azure.ServiceBus <span class="docon docon-navigate-external x-hidden-focus"></span></a> | `Microsoft.Azure.ServiceBus`<br>`Microsoft.Azure.ServiceBus.Management` | .NET Core 2.0<br>.NET Framework 4.6.1<br>Mono 5.4<br>Xamarin.iOS 10.14<br>Xamarin.Mac 3.8<br>Xamarin.Android 8.0<br>Universal Windows Platform 10.0.16299 | AMQP<br>HTTP |
-| <a href="https://www.nuget.org/packages/WindowsAzure.ServiceBus" target="_blank">WindowsAzure.ServiceBus <span class="docon docon-navigate-external x-hidden-focus"></span></a> | `Microsoft.ServiceBus`<br>`Microsoft.ServiceBus.Messaging` | .NET Framework 4.6.1 | AMQP<br>SBMP<br>HTTP |
+| [Azure.Messaging.ServiceBus](https://www.nuget.org/packages/Azure.Messaging.ServiceBus) | `Azure.Messaging.ServiceBus`<br>`Azure.Messaging.ServiceBus.Administration` | .NET Core 2.0<br>.NET Framework 4.6.1<br>Mono 5.4<br>Xamarin.iOS 10.14<br>Xamarin.Mac 3.8<br>Xamarin.Android 8.0<br>Universal Windows Platform 10.0.16299 | AMQP<br>HTTP |
+| [Microsoft.Azure.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus) | `Microsoft.Azure.ServiceBus`<br>`Microsoft.Azure.ServiceBus.Management` | .NET Core 2.0<br>.NET Framework 4.6.1<br>Mono 5.4<br>Xamarin.iOS 10.14<br>Xamarin.Mac 3.8<br>Xamarin.Android 8.0<br>Universal Windows Platform 10.0.16299 | AMQP<br>HTTP |
+| [WindowsAzure.ServiceBus](https://www.nuget.org/packages/WindowsAzure.ServiceBus) | `Microsoft.ServiceBus`<br>`Microsoft.ServiceBus.Messaging` | .NET Framework 4.6.1 | AMQP<br>SBMP<br>HTTP |
For more information on minimum .NET Standard platform support, see [.NET implementation support](/dotnet/standard/net-standard#net-implementation-support). ## Reusing factories and clients
+# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
+The Service Bus objects that interact with the service, such as [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient), [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender), [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver), and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor), should be registered for dependency injection as singletons (or instantiated once and shared). ServiceBusClient can be registered for dependency injection with the [ServiceBusClientBuilderExtensions](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/servicebus/Azure.Messaging.ServiceBus/src/Compatibility/ServiceBusClientBuilderExtensions.cs).
+
+We recommend that you don't close or dispose these objects after sending or receiving each message. Closing or disposing the entity-specific objects (ServiceBusSender/Receiver/Processor) results in tearing down the link to the Service Bus service. Disposing the ServiceBusClient results in tearing down the connection to the Service Bus service. Establishing a connection is an expensive operation that you can avoid by reusing the same ServiceBusClient and creating the necessary entity-specific objects from the same ServiceBusClient instance. You can safely use these client objects for concurrent asynchronous operations and from multiple threads.
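For example, a minimal sketch of this pattern (the connection string and queue name are placeholders):

```csharp
using Azure.Messaging.ServiceBus;

// Create one ServiceBusClient for the lifetime of the application;
// it owns the underlying AMQP connection.
await using var client = new ServiceBusClient("<connection-string>");

// Entity-specific objects created from the same client reuse that connection.
ServiceBusSender sender = client.CreateSender("<queue-name>");
ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>");

// Reuse the sender and receiver across many operations and threads
// instead of creating and disposing them per message.
```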
# [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk)
@@ -51,6 +56,27 @@ Operations such as send, receive, delete, and so on, take some time. This time i
The client schedules concurrent operations by performing **asynchronous** operations. The next request is started before the previous request is completed. The following code snippet is an example of an asynchronous send operation:
+# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
+```csharp
+var messageOne = new ServiceBusMessage(body);
+var messageTwo = new ServiceBusMessage(body);
+
+var sendFirstMessageTask =
+ sender.SendMessageAsync(messageOne).ContinueWith(_ =>
+ {
+ Console.WriteLine("Sent message #1");
+ });
+var sendSecondMessageTask =
+ sender.SendMessageAsync(messageTwo).ContinueWith(_ =>
+ {
+ Console.WriteLine("Sent message #2");
+ });
+
+await Task.WhenAll(sendFirstMessageTask, sendSecondMessageTask);
+Console.WriteLine("All messages sent");
+
+```
+ # [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk) ```csharp
@@ -97,6 +123,35 @@ Console.WriteLine("All messages sent");
The following code is an example of an asynchronous receive operation.
+# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
+
+```csharp
+var client = new ServiceBusClient(connectionString);
+var options = new ServiceBusProcessorOptions
+{
+
+ AutoCompleteMessages = false,
+ MaxConcurrentCalls = 20
+};
+await using ServiceBusProcessor processor = client.CreateProcessor(queueName,options);
+processor.ProcessMessageAsync += MessageHandler;
+processor.ProcessErrorAsync += ErrorHandler;
+
+static Task ErrorHandler(ProcessErrorEventArgs args)
+{
+ Console.WriteLine(args.Exception);
+ return Task.CompletedTask;
+};
+
+static async Task MessageHandler(ProcessMessageEventArgs args)
+{
+Console.WriteLine("Handle message");
+ await args.CompleteMessageAsync(args.Message);
+}
+
+await processor.StartProcessingAsync();
+```
+ # [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk) See the GitHub repository for full <a href="https://github.com/Azure/azure-service-bus/blob/master/samples/DotNet/Microsoft.Azure.ServiceBus/SendersReceiversWithQueues" target="_blank">source code examples <span class="docon docon-navigate-external x-hidden-focus"></span></a>:
@@ -163,9 +218,12 @@ Service Bus doesn't support transactions for receive-and-delete operations. Also
Client-side batching enables a queue or topic client to delay the sending of a message for a certain period of time. If the client sends additional messages during this time period, it transmits the messages in a single batch. Client-side batching also causes a queue or subscription client to batch multiple **Complete** requests into a single request. Batching is only available for asynchronous **Send** and **Complete** operations. Synchronous operations are immediately sent to the Service Bus service. Batching doesn't occur for peek or receive operations, nor does batching occur across clients.
+# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
+Batching functionality for the .NET Standard SDK doesn't yet expose a property to manipulate.
+ # [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk)
-Batching functionality for the .NET Standard SDK, doesn't yet expose a property to manipulate.
+Batching functionality for the .NET Standard SDK doesn't yet expose a property to manipulate.
# [WindowsAzure.ServiceBus SDK](#tab/net-framework-sdk)
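Under the WindowsAzure.ServiceBus SDK, client-side batching is configured through the messaging factory settings. A sketch under stated assumptions (`namespaceUri` is an existing namespace address, authentication setup is omitted, and the 50 ms interval is only illustrative):

```csharp
// Sketch only: token provider/authentication setup is omitted.
// The factory holds outgoing messages for up to 50 milliseconds
// so that messages sent in that window are transmitted in one batch.
var factorySettings = new MessagingFactorySettings
{
    NetMessagingTransportSettings =
    {
        BatchFlushInterval = TimeSpan.FromMilliseconds(50)
    }
};
MessagingFactory factory = MessagingFactory.Create(namespaceUri, factorySettings);
```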
@@ -213,6 +271,19 @@ Additional store operations that occur during this interval are added to the bat
When creating a new queue, topic, or subscription, batched store access is enabled by default. +
+# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
+To disable batched store access, you'll need an instance of a `ServiceBusAdministrationClient`. Create a `CreateQueueOptions` from a queue description that sets the `EnableBatchedOperations` property to `false`.
+
+```csharp
+var options = new CreateQueueOptions(path)
+{
+ EnableBatchedOperations = false
+};
+var queue = await administrationClient.CreateQueueAsync(options);
+```
++ # [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk) To disable batched store access, you'll need an instance of a `ManagementClient`. Create a queue from a queue description that sets the `EnableBatchedOperations` property to `false`.
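The Microsoft.Azure.ServiceBus equivalent might look like the following sketch (assuming an existing `ManagementClient` instance named `managementClient` and a queue path in `path`):

```csharp
// Describe the queue with batched operations disabled, then create it.
var queueDescription = new QueueDescription(path)
{
    EnableBatchedOperations = false
};
var queue = await managementClient.CreateQueueAsync(queueDescription);
```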
@@ -265,6 +336,12 @@ The time-to-live (TTL) property of a message is checked by the server at the tim
Prefetching doesn't affect the number of billable messaging operations, and is available only for the Service Bus client protocol. The HTTP protocol doesn't support prefetching. Prefetching is available for both synchronous and asynchronous receive operations.
+# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
+For more information, see the following `PrefetchCount` properties:
+
+- [ServiceBusReceiver.PrefetchCount](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount)
+- [ServiceBusProcessor.PrefetchCount](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount)
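For example, prefetch can be set through the options objects when the receiver or processor is created (the count of 10 is illustrative, and `client` is an existing `ServiceBusClient`):

```csharp
// PrefetchCount is specified at creation time via the options types.
ServiceBusReceiver receiver = client.CreateReceiver(
    "<queue-name>", new ServiceBusReceiverOptions { PrefetchCount = 10 });

ServiceBusProcessor processor = client.CreateProcessor(
    "<queue-name>", new ServiceBusProcessorOptions { PrefetchCount = 10 });
```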
+ # [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk) For more information, see the following `PrefetchCount` properties:
@@ -282,10 +359,6 @@ For more information, see the following `PrefetchCount` properties:
--- ## Prefetching and ReceiveBatch-
-> [!NOTE]
-> This section only applies to the WindowsAzure.ServiceBus SDK, as the Microsoft.Azure.ServiceBus SDK doesn't expose batch functions.
- While the concepts of prefetching multiple messages together have similar semantics to processing messages in a batch (`ReceiveBatch`), there are some minor differences that must be kept in mind when using these approaches together. Prefetch is a configuration (or mode) on the client (`QueueClient` and `SubscriptionClient`) and `ReceiveBatch` is an operation (that has request-response semantics).
@@ -304,7 +377,7 @@ If a single queue or topic can't handle the expected load, use multiple messaging entities ## Development and testing features > [!NOTE]
## Development and testing features > [!NOTE]
-> This section only applies to the WindowsAzure.ServiceBus SDK, as the Microsoft.Azure.ServiceBus SDK doesn't expose this functionality.
+> This section only applies to the WindowsAzure.ServiceBus SDK, as Microsoft.Azure.ServiceBus and Azure.Messaging.ServiceBus don't expose this functionality.
Service Bus has one feature, used specifically for development, which **should never be used in production configurations**: [`TopicDescription.EnableFilteringMessagesBeforePublishing`][TopicDescription.EnableFiltering].
@@ -367,9 +440,9 @@ To maximize throughput, follow these guidelines:
* Leave batched store access enabled. This access reduces the overall load of the entity. It also reduces the overall rate at which messages can be written into the queue or topic. * Set the prefetch count to a small value (for example, PrefetchCount = 10). This count prevents receivers from being idle while other receivers have large numbers of messages cached.
-### Topic with a small number of subscriptions
+### Topic with a few subscriptions
-Goal: Maximize the throughput of a topic with a small number of subscriptions. A message is received by many subscriptions, which means the combined receive rate over all subscriptions is larger than the send rate. The number of senders is small. The number of receivers per subscription is small.
+Goal: Maximize the throughput of a topic with a few subscriptions. A message is received by many subscriptions, which means the combined receive rate over all subscriptions is larger than the send rate. The number of senders is small. The number of receivers per subscription is small.
To maximize throughput, follow these guidelines:
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-customize-networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-customize-networking.md
@@ -39,7 +39,7 @@ You can provide the following key resource configurations for the failover VM wh
![Customize the failover networking configurations](media/azure-to-azure-customize-networking/edit-networking-properties.png)
-4. Select a test failover virtual network. You can choose to leave it blank and select one at the time of test failover.
+4. Select a test failover virtual network.
5. Select **Edit** near the NIC you want to configure. In the next blade that opens, select the corresponding pre-created resources in the test failover and failover location. ![Edit the NIC configuration](media/azure-to-azure-customize-networking/nic-drilldown.png)
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
@@ -39,13 +39,13 @@ You can replicate and recover VMs between any two regions within the same geogra
**Geographic cluster** | **Azure regions** -- | -- America | Canada East, Canada Central, South Central US, West Central US, East US, East US 2, West US, West US 2, Central US, North Central US
-Europe | UK West, UK South, North Europe, West Europe, South Africa West, South Africa North, Norway East, Norway West, France Central, Switzerland North
+Europe | UK West, UK South, North Europe, West Europe, South Africa West, South Africa North, Norway East, France Central, Switzerland North
Asia | South India, Central India, West India, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, Korea South Australia | Australia East, Australia Southeast, Australia Central, Australia Central 2 Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas, US DOD East, US DOD Central Germany | Germany Central, Germany Northeast China | China East, China North, China North2, China East2
-Restricted Regions reserved for in-country disaster recovery |Germany North reserved for Germany West Central, Switzerland West reserved for Switzerland North, France South reserved for France Central, UAE Central restricted for UAE North customers
+Restricted Regions reserved for in-country disaster recovery |Germany North reserved for Germany West Central, Switzerland West reserved for Switzerland North, France South reserved for France Central, UAE Central restricted for UAE North customers, Norway West for Norway East customers
>[!NOTE] >
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-troubleshoot.md
@@ -4,7 +4,7 @@ description: Troubleshoot common issues in a deployment on Azure File Sync, whic
author: jeffpatt24 ms.service: storage ms.topic: troubleshooting
-ms.date: 1/13/2021
+ms.date: 1/15/2021
ms.author: jeffpatt ms.subservice: files ---
@@ -913,6 +913,22 @@ This error occurs because Azure File Sync does not support HTTP redirection (3xx
This error occurs when a data ingestion operation exceeds the timeout. This error can be ignored if sync is making progress (AppliedItemCount is greater than 0). See [How do I monitor the progress of a current sync session?](#how-do-i-monitor-the-progress-of-a-current-sync-session).
+<a id="-2134375814"></a>**Sync failed because the server endpoint path cannot be found on the server.**
+
+| | |
+|-|-|
+| **HRESULT** | 0x80c8027a |
+| **HRESULT (decimal)** | -2134375814 |
+| **Error string** | ECS_E_SYNC_ROOT_DIRECTORY_NOT_FOUND |
+| **Remediation required** | Yes |
+
+This error occurs if the directory used as the server endpoint path was renamed or deleted. If the directory was renamed, rename the directory back to the original name and restart the Storage Sync Agent service (FileSyncSvc).
+
+If the directory was deleted, perform the following steps to remove the existing server endpoint and create a new server endpoint using a new path:
+
+1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](./storage-sync-files-server-endpoint.md#remove-a-server-endpoint).
+2. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](./storage-sync-files-server-endpoint.md#add-a-server-endpoint).
+ ### Common troubleshooting steps <a id="troubleshoot-storage-account"></a>**Verify the storage account exists.** # [Portal](#tab/azure-portal)
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/agent-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/agent-overview.md new file mode 100644
@@ -0,0 +1,42 @@
+---
+title: Get started with the Windows Virtual Desktop Agent
+description: An overview of the Windows Virtual Desktop Agent and update processes.
+author: Sefriend
+ms.topic: conceptual
+ms.date: 12/16/2020
+ms.author: sefriend
+manager: clarkn
+---
+# Get started with the Windows Virtual Desktop Agent
+
+In the Windows Virtual Desktop Service framework, there are three main components: the Remote Desktop client, the service, and the virtual machines. These virtual machines live in the customer subscription where the Windows Virtual Desktop agent and agent bootloader are installed. The agent acts as the intermediate communicator between the service and the virtual machines, enabling connectivity. Therefore, if you're experiencing any issues with the agent installation, update, or configuration, your virtual machines won't be able to connect to the service. The agent bootloader is the executable that loads the agent.
+
+This article will give you a brief overview of the agent installation and update processes.
+
+>[!NOTE]
+>This documentation is not for the FSLogix agent or the Remote Desktop Client agent.
++
+## Initial installation process
+
+The Windows Virtual Desktop agent is initially installed in one of two ways. If you provision virtual machines (VMs) in the Azure portal and Azure Marketplace, the agent and agent bootloader are automatically installed. If you provision VMs using PowerShell, you must manually download the agent and agent bootloader .msi files when [creating a Windows Virtual Desktop host pool with PowerShell](create-host-pools-powershell.md#register-the-virtual-machines-to-the-windows-virtual-desktop-host-pool). When the agent is installed, the Windows Virtual Desktop side-by-side stack and Geneva Monitoring agent are also installed simultaneously. The side-by-side stack component is required for users to securely establish reverse server-to-client connections. The Geneva Monitoring agent monitors the health of the agent. All three of these components are essential for end-to-end user connectivity to function properly.
+
+>[!IMPORTANT]
+>To successfully install the Windows Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent, you must unblock all the URLs listed in the [Required URL list](safe-url-list.md#virtual-machines). Unblocking these URLs is required to use the Windows Virtual Desktop service.
+
+## Agent update process
+
+The Windows Virtual Desktop service automatically updates the agent whenever an update becomes available. Agent updates can include new functionality or fix previous issues. Once the initial version of the Windows Virtual Desktop agent is installed, the agent regularly queries the Windows Virtual Desktop service to determine if there's a newer version of the agent and its components available. If there's a new version, the agent bootloader automatically downloads the latest version of the agent, the side-by-side stack, and Geneva Monitoring agent.
+
+>[!NOTE]
+>- When the Geneva Monitoring agent updates to the latest version, the old GenevaTask task is located and disabled before creating a new task for the new monitoring agent. The earlier version of the monitoring agent isn't deleted in case the most recent version of the monitoring agent has a problem that requires reverting to the earlier version to fix. If the latest version has a problem, the old monitoring agent will be re-enabled to continue delivering monitoring data. All versions of the monitor that are earlier than the last one you installed before the update will be deleted from your VM.
+>- Your VM keeps three versions of the side-by-side stack at a time. This allows for quick recovery if something goes wrong with the update. The earliest version of the stack is removed from the VM whenever the stack updates.
+
+This update installation normally lasts 2-3 minutes on a new VM and shouldn't cause your VM to lose connection or shut down. This update process applies to both Windows Virtual Desktop (classic) and the latest version of Windows Virtual Desktop with Azure Resource Manager.
+
+## Next steps
+
+Now that you have a better understanding of the Windows Virtual Desktop agent, here are some resources that might help you:
+
+- Check out the [Windows Virtual Desktop Agent updates](whats-new.md) section to see information about what the new agent update entails each month.
+- If you're experiencing agent or connectivity-related issues, check out the [Windows Virtual Desktop Agent issues troubleshooting guide](troubleshoot-agent.md).
\ No newline at end of file
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/troubleshoot-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-agent.md new file mode 100644
@@ -0,0 +1,353 @@
+---
+title: Troubleshoot Windows Virtual Desktop Agent Issues - Azure
+description: How to resolve common agent and connectivity issues.
+author: Sefriend
+ms.topic: troubleshooting
+ms.date: 12/16/2020
+ms.author: sefriend
+manager: clarkn
+---
+# Troubleshoot common Windows Virtual Desktop Agent issues
+
+The Windows Virtual Desktop Agent can cause connection issues because of multiple factors:
+ - An error on the broker that makes the agent stop the service.
+ - Problems with updates.
+ - Problems during agent installation, which disrupt the connection to the session host.
+
+This article will guide you through solutions to these common scenarios and how to address connection issues.
+
+## Error: The RDAgentBootLoader and/or Remote Desktop Agent Loader has stopped running
+
+If you're seeing any of the following issues, this means that the boot loader, which loads the agent, was unable to install the agent properly and the agent service isn't running:
+- **RDAgentBootLoader** is either stopped or not running.
+- There is no status for **Remote Desktop Agent Loader**.
+
+To resolve this issue, start the RDAgent boot loader:
+
+1. In the Services window, right-click **Remote Desktop Agent Loader**.
+2. Select **Start**. If this option is greyed out for you, you don't have administrator permissions and will need to get them to start the service.
+3. Wait 10 seconds, then right-click **Remote Desktop Agent Loader**.
+4. Select **Refresh**.
+5. If the service stops after you started and refreshed it, you may have a registration failure. For more information, see [INVALID_REGISTRATION_TOKEN](#error-invalid_registration_token).
+
+## Error: INVALID_REGISTRATION_TOKEN
+
+Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says **INVALID_REGISTRATION_TOKEN** in the description, the registration token that you have isn't recognized as valid.
+
+To resolve this issue, create a valid registration token:
+
+1. To create a new registration token, follow the steps in the [Generate a new registration key for the VM](#step-3-generate-a-new-registration-key-for-the-vm) section.
+2. Open the Registry Editor.
+3. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **RDInfraAgent**.
+4. Select **IsRegistered**.
+5. In the **Value data:** entry box, type **0** and select **Ok**.
+6. Select **RegistrationToken**.
+7. In the **Value data:** entry box, paste the registration token from step 1.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of IsRegistered 0](media/isregistered-token.png)
+
+8. Open a command prompt as an administrator.
+9. Enter **net stop RDAgentBootLoader**.
+10. Enter **net start RDAgentBootLoader**.
+11. Open the Registry Editor.
+12. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **RDInfraAgent**.
+13. Verify that **IsRegistered** is set to 1 and there is nothing in the data column for **RegistrationToken**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of IsRegistered 1](media/isregistered-registry.png)
+
+## Error: Agent cannot connect to broker with INVALID_FORM or NOT_FOUND. URL
+
+Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says **INVALID_FORM** or **NOT_FOUND. URL** in the description, something went wrong with the communication between the agent and the broker. The agent cannot connect to the broker and is unable to reach a particular URL. This may be because of your firewall or DNS settings.
+
+To resolve this issue, check that you can reach BrokerURI and BrokerURIGlobal:
+1. Open the Registry Editor.
+2. Go to **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **Microsoft** > **RDInfraAgent**.
+3. Make note of the values for **BrokerURI** and **BrokerURIGlobal**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of broker uri and broker uri global](media/broker-uri.png)
+
+
+4. Open a browser and go to *\<BrokerURI\>api/health*.
+ - Make sure you use the value from step 3 in the **BrokerURI**. In this section's example, it would be <https://rdbroker-g-us-r0.wvd.microsoft.com/api/health>.
+5. Open another tab in the browser and go to *\<BrokerURIGlobal\>api/health*.
+ - Make sure you use the value from step 3 in the **BrokerURIGlobal** link. In this section's example, it would be <https://rdbroker.wvd.microsoft.com/api/health>.
+6. If the network isn't blocking broker connection, both pages will load successfully and will show a message that says **"RD Broker is Healthy"** as shown in the following screenshots.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of successfully loaded broker uri access](media/broker-uri-web.png)
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of successfully loaded broker global uri access](media/broker-global.png)
+
+
+7. If the network is blocking broker connection, the pages will not load, as shown in the following screenshot.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of unsuccessful loaded broker access](media/unsuccessful-broker-uri.png)
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of unsuccessful loaded broker global access](media/unsuccessful-broker-global.png)
+
+8. If the network is blocking these URLs, you will need to unblock the required URLs. For more information, see [Required URL List](safe-url-list.md).
+9. If this does not resolve your issue, make sure that you do not have any group policies with ciphers that block the agent-to-broker connection. Windows Virtual Desktop uses the same TLS 1.2 ciphers as [Azure Front Door](../frontdoor/front-door-faq.MD#what-are-the-current-cipher-suites-supported-by-azure-front-door). For more information, see [Connection Security](network-connectivity.md#connection-security).
+
+## Error: 3703 or 3019
+
+Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3703 that says **RD Gateway Url: is not accessible**, or any event with ID 3019 in the description, the agent is unable to reach the gateway URLs or the web socket transport URLs. To successfully connect to your session host and allow network traffic to these endpoints to bypass restrictions, you must unblock the URLs from the [Required URL List](safe-url-list.md). Also, make sure your firewall or proxy settings don't block these URLs. Unblocking these URLs is required to use Windows Virtual Desktop.
+
+To resolve this issue, verify that your firewall and/or DNS settings are not blocking these URLs:
+1. [Use Azure Firewall to protect Windows Virtual Desktop deployments](../firewall/protect-windows-virtual-desktop.md).
+2. Configure your [Azure Firewall DNS settings](../firewall/dns-settings.md).
+
+## Error: InstallMsiException
+
+Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says **InstallMsiException** in the description, the installer is already running for another application while you're trying to install the agent, or a policy is blocking the msiexec.exe program from running.
+
+To resolve this issue, disable the following policy:
+ - Turn off Windows Installer
+ - Category Path: Computer Configuration\Administrative Templates\Windows Components\Windows Installer
+
+>[!NOTE]
+>This isn't a comprehensive list of policies, just the ones we're currently aware of.
+
+To disable a policy:
+1. Open a command prompt as an administrator.
+2. Enter and run **rsop.msc**.
+3. In the **Resultant Set of Policy** window that pops up, go to the category path.
+4. Select the policy.
+5. Select **Disabled**.
+6. Select **Apply**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Windows Installer policy in Resultant Set of Policy](media/gpo-policy.png)
+
+## Error: Win32Exception
+
+Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says **Win32Exception** in the description, a policy is blocking cmd.exe from launching. Blocking this program prevents you from running the console window, which is what you need to use to restart the service whenever the agent updates.
+
+To resolve this issue, disable the following policy:
+ - Prevent access to the command prompt
+ - Category Path: User Configuration\Administrative Templates\System
+
+>[!NOTE]
+>This isn't a comprehensive list of policies, just the ones we're currently aware of.
+
+To disable a policy:
+1. Open a command prompt as an administrator.
+2. Enter and run **rsop.msc**.
+3. In the **Resultant Set of Policy** window that pops up, go to the category path.
+4. Select the policy.
+5. Select **Disabled**.
+6. Select **Apply**.
+
+## Error: Stack listener isn't working on Windows 10 2004 VM
+
+Run **qwinsta** in your command prompt and make note of the version number that appears next to **rdp-sxs**. If the **rdp-tcp** and **rdp-sxs** components don't say **Listen** next to them, or don't show up at all after you run **qwinsta**, there's a stack issue. Stack updates get installed along with agent updates, and when this installation goes awry, the Windows Virtual Desktop Listener won't work.
+
+To resolve this issue:
+1. Open the Registry Editor.
+2. Go to **HKEY_LOCAL_MACHINE** > **SYSTEM** > **CurrentControlSet** > **Control** > **Terminal Server** > **WinStations**.
+3. Under **WinStations** you may see several folders for different stack versions, select the folder that matches the version information you saw when running **qwinsta** in your Command Prompt.
+4. Find **fReverseConnectMode** and make sure its data value is **1**. Also make sure that **fEnableWinStation** is set to **1**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of fReverseConnectMode](media/fenable-2.png)
+
+5. If **fReverseConnectMode** isn't set to **1**, select **fReverseConnectMode** and enter **1** in its value field.
+6. If **fEnableWinStation** isn't set to **1**, select **fEnableWinStation** and enter **1** into its value field.
+7. Restart your VM.
+
+>[!NOTE]
+>To change the **fReverseConnectMode** or **fEnableWinStation** mode for multiple VMs at a time, you can do one of the following two things:
+>
+>- Export the registry key from the machine that you already have working and import it into all other machines that need this change.
+>- Create a general policy object (GPO) that sets the registry key value for the machines that need the change.
+
+8. Go to **HKEY_LOCAL_MACHINE** > **SYSTEM** > **CurrentControlSet** > **Control** > **Terminal Server** > **ClusterSettings**.
+9. Under **ClusterSettings**, find **SessionDirectoryListener** and make sure its data value is **rdp-sxs...**.
+10. If **SessionDirectoryListener** isn't set to **rdp-sxs...**, you'll need to follow the steps in the [Uninstall the agent and boot loader](#step-1-uninstall-all-agent-boot-loader-and-stack-component-programs) section to first uninstall the agent, boot loader, and stack components, and then [Reinstall the agent and boot loader](#step-4-reinstall-the-agent-and-boot-loader). This will reinstall the side-by-side stack.
+
+## Error: Users keep getting disconnected from session hosts
+
+Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 0 that says **CheckSessionHostDomainIsReachableAsync** in the description, or users keep getting disconnected from their session hosts, your server isn't picking up a heartbeat from the Windows Virtual Desktop service.
+
+To resolve this issue, change the heartbeat threshold:
+1. Open your command prompt as an administrator.
+2. Enter the **qwinsta** command and run it.
+3. There should be two stack components displayed: **rdp-tcp** and **rdp-sxs**.
+ - Depending on the version of the OS you're using, **rdp-sxs** may be followed by a build number. If it is, make sure to write this number down for later.
+4. Open the Registry Editor.
+5. Go to **HKEY_LOCAL_MACHINE** > **SYSTEM** > **CurrentControlSet** > **Control** > **Terminal Server** > **WinStations**.
+6. Under **WinStations** you may see several folders for different stack versions. Select the folder that matches the version number from step 3.
+7. Create the following registry DWORDs by right-clicking in the Registry Editor, then selecting **New** > **DWORD (32-bit) Value**. When you create each DWORD, enter these names and values:
+ - HeartbeatInterval: 10000
+ - HeartbeatWarnCount: 30
+ - HeartbeatDropCount: 60
+8. Restart your VM.
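+
+The three DWORD values from step 7 can also be created in PowerShell. A minimal sketch, assuming the versioned **rdp-sxs** key name from step 3 (placeholder below):
+
+```powershell
+$key = "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs<version>"
+
+# Create the heartbeat threshold values as 32-bit DWORDs
+New-ItemProperty -Path $key -Name HeartbeatInterval -PropertyType DWord -Value 10000
+New-ItemProperty -Path $key -Name HeartbeatWarnCount -PropertyType DWord -Value 30
+New-ItemProperty -Path $key -Name HeartbeatDropCount -PropertyType DWord -Value 60
+
+Restart-Computer
+```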
+
+## Error: DownloadMsiException
+
+Go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3277 that says **DownloadMsiException** in the description, there isn't enough space on the disk for the RDAgent.
+
+To resolve this issue, make space on your disk by:
+ - Deleting files that are no longer in use
+ - Increasing the storage capacity of your VM
+
+## Error: VMs are stuck in Unavailable or Upgrading state
+
+Open a PowerShell window as an administrator and run the following cmdlet:
+
+```powershell
+Get-AzWvdSessionHost -TenantName <tenantname> -HostPoolName <hostpoolname> | Select-Object *
+```
+
+If the status listed for the session host or hosts in your host pool always says **Unavailable** or **Upgrading**, the agent or stack installation may have failed.
+
+To resolve this issue, reinstall the side-by-side stack:
+1. Open a command prompt as an administrator.
+2. Enter **net stop RDAgentBootLoader**.
+3. Go to **Control Panel** > **Programs** > **Programs and Features**.
+4. Uninstall the latest version of the **Remote Desktop Services SxS Network Stack** or the version listed in **HKEY_LOCAL_MACHINE** > **SYSTEM** > **CurrentControlSet** > **Control** > **Terminal Server** > **WinStations** under **ReverseConnectListener**.
+5. Open a console window as an administrator and go to **Program Files** > **Microsoft RDInfra**.
+6. Select the **SxSStack** component or run the **msiexec /i SxsStack-<version>.msi** command to install the MSI.
+7. Restart your VM.
+8. Go back to the command prompt and run the **qwinsta** command.
+9. Verify that the stack component installed in step 6 says **Listen** next to it.
+ - If so, enter **net start RDAgentBootLoader** in the command prompt and restart your VM.
+ - If not, you will need to [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component.
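+
+The command-line portions of the steps above, collected as a sketch (the MSI file name is a placeholder; use the actual versioned file in your **Microsoft RDInfra** folder):
+
+```powershell
+# Stop the boot loader before touching the stack
+net stop RDAgentBootLoader
+
+# Reinstall the side-by-side stack MSI (placeholder file name)
+Set-Location "C:\Program Files\Microsoft RDInfra"
+msiexec /i "SxsStack-<version>.msi"
+
+# After restarting the VM, confirm the listener and restart the boot loader
+qwinsta
+net start RDAgentBootLoader
+```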
+
+## Error: Connection not found: RDAgent does not have an active connection to the broker
+
+Your VMs may be at their connection limit, so the VM can't accept new connections.
+
+To resolve this issue:
+ - Decrease the max session limit. This ensures that resources are more evenly distributed across session hosts and will prevent resource depletion.
+ - Increase the resource capacity of the VMs.
+
+## Error: Operating a Pro VM or other unsupported OS
+
+The side-by-side stack is only supported on Windows Enterprise and Windows Server SKUs, which means that operating systems like Windows 10 Pro aren't supported. If you don't have an Enterprise or Server SKU, the stack will be installed on your VM but won't be activated, so it won't show up when you run **qwinsta** in your command line.
+
+To resolve this issue, create a VM that runs Windows Enterprise or Windows Server:
+1. Go to [Virtual machine details](create-host-pools-azure-marketplace.md#virtual-machine-details) and follow steps 1-12 to set up one of the following recommended images:
+ - Windows 10 Enterprise multi-session, version 1909
+ - Windows 10 Enterprise multi-session, version 1909 + Microsoft 365 Apps
+ - Windows Server 2019 Datacenter
+ - Windows 10 Enterprise multi-session, version 2004
+ - Windows 10 Enterprise multi-session, version 2004 + Microsoft 365 Apps
+2. Select **Review and Create**.
+
+## Error: NAME_ALREADY_REGISTERED
+
+The name of your VM has already been registered and is probably a duplicate.
+
+To resolve this issue:
+1. Follow the steps in the [Remove the session host from the host pool](#step-2-remove-the-session-host-from-the-host-pool) section.
+2. [Create another VM](expand-existing-host-pool.md#add-virtual-machines-with-the-azure-portal). Make sure to choose a unique name for this VM.
+3. Go to the [Azure portal](https://portal.azure.com) and open the **Overview** page for the host pool your VM was in.
+4. Open the **Session Hosts** tab and check to make sure all session hosts are in that host pool.
+5. Wait for 5-10 minutes for the session host status to say **Available**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of available session host](media/hostpool-portal.png)
+
+## Your issue isn't listed here or wasn't resolved
+
+If you can't find your issue in this article or the instructions didn't help you, we recommend you uninstall, reinstall, and re-register the Windows Virtual Desktop Agent. The instructions in this section show you how to re-register your VM with the Windows Virtual Desktop service by uninstalling all agent, boot loader, and stack components, removing the session host from the host pool, generating a new registration key for the VM, and reinstalling the agent and boot loader. If one or more of the following scenarios apply to you, follow these instructions:
+- Your VM is stuck in **Upgrading** or **Unavailable**
+- Your stack listener isn't working and you're running on Windows 10 version 1809, 1903, or 1909
+- You're receiving an **EXPIRED_REGISTRATION_TOKEN** error
+- You're not seeing your VMs show up in the session hosts list
+- You don't see the **Remote Desktop Agent Loader** in the Services window
+- You don't see the **RdAgentBootLoader** component in the Task Manager
+- The instructions in this article didn't resolve your issue
+
+### Step 1: Uninstall all agent, boot loader, and stack component programs
+
+Before reinstalling the agent, boot loader, and stack, you must uninstall any existing component programs from your VM. To uninstall all agent, boot loader, and stack component programs:
+1. Sign in to your VM as an administrator.
+2. Go to **Control Panel** > **Programs** > **Programs and Features**.
+3. Remove the following programs:
+ - Remote Desktop Agent Boot Loader
+ - Remote Desktop Services Infrastructure Agent
+ - Remote Desktop Services Infrastructure Geneva Agent
+ - Remote Desktop Services SxS Network Stack
+
+>[!NOTE]
+>You may see multiple instances of these programs. Make sure to remove all of them.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of uninstalling programs](media/uninstall-program.png)
+
+### Step 2: Remove the session host from the host pool
+
+When you remove the session host from the host pool, the session host is no longer registered to that host pool. This acts as a reset for the session host registration. To remove the session host from the host pool:
+1. Go to the **Overview** page for the host pool that your VM is in, in the [Azure portal](https://portal.azure.com).
+2. Go to the **Session Hosts** tab to see the list of all session hosts in that host pool.
+3. Look at the list of session hosts and select the VM that you want to remove.
+4. Select **Remove**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of removing VM from host pool](media/remove-sh.png)
+
+### Step 3: Generate a new registration key for the VM
+
+You must generate a new registration key that is used to re-register your VM to the host pool and to the service. To generate a new registration key for the VM:
+1. Open the [Azure portal](https://portal.azure.com) and go to the **Overview** page for the host pool of the VM you want to edit.
+2. Select **Registration key**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of registration key in portal](media/reg-key.png)
+
+3. Open the **Registration key** tab and select **Generate new key**.
+4. Enter the expiration date and then select **Ok**.
+
+>[!NOTE]
+>The expiration date can be no less than an hour and no more than 27 days from its generation time and date. We highly recommend you set the expiration date to the 27-day maximum.
+
+5. Copy the newly generated key to your clipboard. You'll need this key later.
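+
+If you prefer PowerShell, the Az.DesktopVirtualization module can generate a registration key as well. A brief sketch; the resource group and host pool names are placeholders:
+
+```powershell
+# Generate a registration key that expires in 27 days (the maximum)
+New-AzWvdRegistrationInfo -ResourceGroupName "yourResourceGroup" `
+    -HostPoolName "yourHostPool" `
+    -ExpirationTime (Get-Date).ToUniversalTime().AddDays(27).ToString('yyyy-MM-ddTHH:mm:ss.fffffffZ')
+
+# Retrieve the token later if you need it again
+(Get-AzWvdRegistrationInfo -ResourceGroupName "yourResourceGroup" -HostPoolName "yourHostPool").Token
+```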
+
+### Step 4: Reinstall the agent and boot loader
+
+By reinstalling the latest version of the agent and boot loader, the side-by-side stack and Geneva monitoring agent are automatically installed as well. To reinstall the agent and boot loader:
+1. Sign in to your VM as an administrator and follow the instructions in [Register virtual machines](create-host-pools-powershell.md#register-the-virtual-machines-to-the-windows-virtual-desktop-host-pool) to download the **Windows Virtual Desktop Agent** and the **Windows Virtual Desktop Agent Bootloader**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of agent and bootloader download page](media/download-agent.png)
+
+2. Right-click the agent and boot loader installers you just downloaded.
+3. Select **Properties**.
+4. Select **Unblock**.
+5. Select **Ok**.
+6. Run the agent installer.
+7. When the installer asks you for the registration token, paste the registration key from your clipboard.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of pasted registration token](media/pasted-agent-token.png)
+
+8. Run the boot loader installer.
+9. Restart your VM.
+10. Go to the [Azure portal](https://portal.azure.com) and open the **Overview** page for the host pool your VM belongs to.
+11. Go to the **Session Hosts** tab to see the list of all session hosts in that host pool.
+12. You should now see the session host registered in the host pool with the status **Available**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of available session host](media/hostpool-portal.png)
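+
+If you script the reinstall instead of running the installers interactively, the agent MSI accepts the registration token as an installer property. A hedged sketch, assuming the documented `REGISTRATIONTOKEN` property and using placeholder file names for the versions you downloaded:
+
+```powershell
+# Unattended agent install, passing the registration key from step 7
+msiexec /i "Microsoft.RDInfra.RDAgent.Installer-x64-<version>.msi" /quiet REGISTRATIONTOKEN="<your-registration-key>"
+
+# Then install the boot loader and restart
+msiexec /i "Microsoft.RDInfra.RDAgentBootLoader.Installer-x64.msi" /quiet
+Restart-Computer
+```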
+
+## Next steps
+
+If the issue continues, create a support case and include detailed information about the problem you're having and any actions you've taken to try to resolve it. The following list includes other resources you can use to troubleshoot issues in your Windows Virtual Desktop deployment.
+
+- For an overview on troubleshooting Windows Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).
+- To troubleshoot issues while creating a host pool in a Windows Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).
+- To troubleshoot issues while configuring a virtual machine (VM) in Windows Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues with Windows Virtual Desktop client connections, see [Windows Virtual Desktop service connections](troubleshoot-service-connection.md).
+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md).
+- To troubleshoot issues when using PowerShell with Windows Virtual Desktop, see [Windows Virtual Desktop PowerShell](troubleshoot-powershell.md).
+- To learn more about the service, see [Windows Virtual Desktop environment](environment-setup.md).
+- To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md).
+- To learn about auditing actions, see [Audit operations with Resource Manager](../azure-resource-manager/management/view-activity-logs.md).
+- To learn about actions to determine the errors during deployment, see [View deployment operations](../azure-resource-manager/templates/deployment-history.md).
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/troubleshoot-client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-client.md
@@ -97,5 +97,6 @@ If you can't find the app ID 9cdead84-a844-4324-93f2-b2e6bb768d07 in the list, y
- For an overview on troubleshooting Windows Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md). - To troubleshoot issues while creating a Windows Virtual Desktop environment and host pool in a Windows Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md). - To troubleshoot issues while configuring a virtual machine (VM) in Windows Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues related to the Windows Virtual Desktop agent or session connectivity, see [Troubleshoot common Windows Virtual Desktop Agent issues](troubleshoot-agent.md).
- To troubleshoot issues when using PowerShell with Windows Virtual Desktop, see [Windows Virtual Desktop PowerShell](troubleshoot-powershell.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md).
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/troubleshoot-service-connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-service-connection.md
@@ -52,5 +52,6 @@ This could also happen if a CSP Provider created the subscription and then trans
- For an overview on troubleshooting Windows Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md). - To troubleshoot issues while creating a Windows Virtual Desktop environment and host pool in a Windows Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md). - To troubleshoot issues while configuring a virtual machine (VM) in Windows Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues related to the Windows Virtual Desktop agent or session connectivity, see [Troubleshoot common Windows Virtual Desktop Agent issues](troubleshoot-agent.md).
- To troubleshoot issues when using PowerShell with Windows Virtual Desktop, see [Windows Virtual Desktop PowerShell](troubleshoot-powershell.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md).
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/troubleshoot-set-up-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-set-up-issues.md
@@ -263,10 +263,11 @@ the VM.\\\"
- For an overview on troubleshooting Windows Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md). - To troubleshoot issues while configuring a virtual machine (VM) in Windows Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues related to the Windows Virtual Desktop agent or session connectivity, see [Troubleshoot common Windows Virtual Desktop Agent issues](troubleshoot-agent.md).
- To troubleshoot issues with Windows Virtual Desktop client connections, see [Windows Virtual Desktop service connections](troubleshoot-service-connection.md). - To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md) - To troubleshoot issues when using PowerShell with Windows Virtual Desktop, see [Windows Virtual Desktop PowerShell](troubleshoot-powershell.md). - To learn more about the service, see [Windows Virtual Desktop environment](environment-setup.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md). - To learn about auditing actions, see [Audit operations with Resource Manager](../azure-resource-manager/management/view-activity-logs.md).-- To learn about actions to determine the errors during deployment, see [View deployment operations](../azure-resource-manager/templates/deployment-history.md).\ No newline at end of file
+- To learn about actions to determine the errors during deployment, see [View deployment operations](../azure-resource-manager/templates/deployment-history.md).
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/troubleshoot-set-up-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-set-up-overview.md
@@ -47,6 +47,7 @@ Use the following table to identify and resolve issues you may encounter when se
- To troubleshoot issues while creating a host pool in a Windows Virtual Desktop environment, see [host pool creation](troubleshoot-set-up-issues.md). - To troubleshoot issues while configuring a virtual machine (VM) in Windows Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues related to the Windows Virtual Desktop agent or session connectivity, see [Troubleshoot common Windows Virtual Desktop Agent issues](troubleshoot-agent.md).
- To troubleshoot issues with Windows Virtual Desktop client connections, see [Windows Virtual Desktop service connections](troubleshoot-service-connection.md). - To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md) - To troubleshoot issues when using PowerShell with Windows Virtual Desktop, see [Windows Virtual Desktop PowerShell](troubleshoot-powershell.md).
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/troubleshoot-vm-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-vm-configuration.md
@@ -342,6 +342,7 @@ To learn more about this policy, see [Allow log on through Remote Desktop Servic
- For an overview on troubleshooting Windows Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md). - To troubleshoot issues while creating a host pool in a Windows Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md). - To troubleshoot issues while configuring a virtual machine (VM) in Windows Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues related to the Windows Virtual Desktop agent or session connectivity, see [Troubleshoot common Windows Virtual Desktop Agent issues](troubleshoot-agent.md).
- To troubleshoot issues with Windows Virtual Desktop client connections, see [Windows Virtual Desktop service connections](troubleshoot-service-connection.md). - To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md) - To troubleshoot issues when using PowerShell with Windows Virtual Desktop, see [Windows Virtual Desktop PowerShell](troubleshoot-powershell.md).
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/disks-benchmarks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-benchmarks.md new file mode 100644
@@ -0,0 +1,25 @@
+---
+title: Benchmarking your application on Azure Disk Storage
+description: Learn about the process of benchmarking your application on Azure.
+author: roygara
+ms.author: rogarana
+ms.date: 01/11/2019
+ms.topic: how-to
+ms.service: virtual-machines
+ms.subservice: disks
+---
+# Benchmarking a disk
+
+Benchmarking is the process of simulating different workloads on your application and measuring the application performance for each workload. By following the steps described in [Designing for high performance](premium-storage-performance.md), you have gathered your application's performance requirements. By running benchmarking tools on the VMs hosting the application, you can determine the performance levels that your application can achieve with Premium Storage. This article provides examples of benchmarking a Standard DS14 VM provisioned with Azure Premium Storage disks.
+
+We have used the common benchmarking tools Iometer and FIO, for Windows and Linux respectively. These tools spawn multiple threads to simulate a production-like workload and measure system performance. With these tools you can also configure parameters, like block size and queue depth, that you normally can't change for an application. This gives you more flexibility to drive the maximum performance on a high-scale VM provisioned with premium disks for different types of application workloads. To learn more about each benchmarking tool, visit [Iometer](http://www.iometer.org/) and [FIO](http://freecode.com/projects/fio).
+
+To follow the examples below, create a Standard DS14 VM and attach 11 Premium Storage disks to the VM. Of the 11 disks, configure 10 disks with host caching as "None" and stripe them into a volume called NoCacheWrites. Configure host caching as "ReadOnly" on the remaining disk and create a volume called CacheReads with this disk. Using this setup, you are able to see the maximum Read and Write performance from a Standard DS14 VM. For detailed steps about creating a DS14 VM with premium SSDs, go to [Designing for high performance](premium-storage-performance.md).
+
+[!INCLUDE [virtual-machines-disks-benchmarking](../../includes/virtual-machines-managed-disks-benchmarking.md)]
+
+## Next steps
+
+Proceed to our article on [designing for high performance](premium-storage-performance.md).
+
+In that article, you create a checklist similar to your existing application for the prototype. Using Benchmarking tools you can simulate the workloads and measure performance on the prototype application. By doing so, you can determine which disk offering can match or surpass your application performance requirements. Then you can implement the same guidelines for your production application.
\ No newline at end of file
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/disks-incremental-snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-incremental-snapshots.md new file mode 100644
@@ -0,0 +1,108 @@
+---
+title: Create an incremental snapshot
+description: Learn about incremental snapshots for managed disks, including how to create them using the Azure portal, Azure PowerShell module, and Azure Resource Manager.
+author: roygara
+ms.service: virtual-machines
+ms.topic: how-to
+ms.date: 01/15/2021
+ms.author: rogarana
+ms.subservice: disks
+---
+
+# Create an incremental snapshot for managed disks
+
+[!INCLUDE [virtual-machines-disks-incremental-snapshots-description](../../includes/virtual-machines-disks-incremental-snapshots-description.md)]
+
+## Restrictions
+
+[!INCLUDE [virtual-machines-disks-incremental-snapshots-restrictions](../../includes/virtual-machines-disks-incremental-snapshots-restrictions.md)]
+
+# [PowerShell](#tab/azure-powershell)
+
+You can use Azure PowerShell to create an incremental snapshot. You'll need the latest version of Azure PowerShell. The following command will either install it or update your existing installation to the latest version:
+
+```PowerShell
+Install-Module -Name Az -AllowClobber -Scope CurrentUser
+```
+
+Once that's installed, sign in to your PowerShell session with `Connect-AzAccount`.
+
+To create an incremental snapshot with Azure PowerShell, set the configuration with [New-AzSnapShotConfig](/powershell/module/az.compute/new-azsnapshotconfig?view=azps-2.7.0) with the `-Incremental` parameter and then pass that as a variable to [New-AzSnapshot](/powershell/module/az.compute/new-azsnapshot?view=azps-2.7.0) through the `-Snapshot` parameter.
+
+```PowerShell
+$diskName = "yourDiskNameHere"
+$resourceGroupName = "yourResourceGroupNameHere"
+$snapshotName = "yourDesiredSnapshotNameHere"
+
+# Get the disk that you need to backup by creating an incremental snapshot
+$yourDisk = Get-AzDisk -DiskName $diskName -ResourceGroupName $resourceGroupName
+
+# Create an incremental snapshot by setting the SourceUri property with the value of the Id property of the disk
+$snapshotConfig=New-AzSnapshotConfig -SourceUri $yourDisk.Id -Location $yourDisk.Location -CreateOption Copy -Incremental
+New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName -Snapshot $snapshotConfig
+```
+
+You can identify incremental snapshots from the same disk with the `SourceResourceId` and the `SourceUniqueId` properties of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. `SourceUniqueId` is the value inherited from the `UniqueId` property of the disk. If you were to delete a disk and then create a new disk with the same name, the value of the `UniqueId` property changes.
+
+You can use `SourceResourceId` and `SourceUniqueId` to create a list of all snapshots associated with a particular disk. With the `$resourceGroupName` and `$yourDisk` variables from the previous example still set, you can use the following example to list your existing incremental snapshots:
+
+```PowerShell
+$snapshots = Get-AzSnapshot -ResourceGroupName $resourceGroupName
+
+$incrementalSnapshots = New-Object System.Collections.ArrayList
+foreach ($snapshot in $snapshots)
+{
+
+ if($snapshot.Incremental -and $snapshot.CreationData.SourceResourceId -eq $yourDisk.Id -and $snapshot.CreationData.SourceUniqueId -eq $yourDisk.UniqueId){
+
+ $incrementalSnapshots.Add($snapshot)
+ }
+}
+
+$incrementalSnapshots
+```
+
+# [Portal](#tab/azure-portal)
+[!INCLUDE [virtual-machines-disks-incremental-snapshots-portal](../../includes/virtual-machines-disks-incremental-snapshots-portal.md)]
+
+# [Resource Manager Template](#tab/azure-resource-manager)
+
+You can also use Azure Resource Manager templates to create an incremental snapshot. You'll need to make sure the apiVersion is set to **2019-03-01** and that the incremental property is also set to true. The following snippet is an example of how to create an incremental snapshot with Resource Manager templates:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "diskName": {
+ "type": "string",
+ "defaultValue": "contosodisk1"
+ },
+ "diskResourceId": {
+ "defaultValue": "<your_managed_disk_resource_ID>",
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Compute/snapshots",
+ "name": "[concat( parameters('diskName'),'_snapshot1')]",
+ "location": "[resourceGroup().location]",
+ "apiVersion": "2019-03-01",
+ "properties": {
+ "creationData": {
+ "createOption": "Copy",
+ "sourceResourceId": "[parameters('diskResourceId')]"
+ },
+ "incremental": true
+ }
+ }
+ ]
+}
+```
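+
+You can deploy a template like the one above with the Az PowerShell module. A brief usage sketch; the resource group name, template file name, and disk resource ID are placeholders:
+
+```powershell
+# Deploy the template, passing its parameters as dynamic cmdlet parameters
+New-AzResourceGroupDeployment -ResourceGroupName "yourResourceGroup" `
+    -TemplateFile ".\incremental-snapshot.json" `
+    -diskName "contosodisk1" `
+    -diskResourceId "/subscriptions/<subID>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/contosodisk1"
+```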
+---
+
+## Next steps
+
+If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see [Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots](https://github.com/Azure-Samples/managed-disks-dotnet-backup-with-incremental-snapshots).
\ No newline at end of file
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/disks-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-performance.md new file mode 100644
@@ -0,0 +1,21 @@
+---
+title: Virtual machine and disk performance
+description: Learn more about how virtual machines and their attached disks work in combination for performance.
+author: albecker1
+ms.author: albecker
+ms.date: 10/12/2020
+ms.topic: conceptual
+ms.service: virtual-machines
+ms.subservice: disks
+---
+# Virtual machine and disk performance
+[!INCLUDE [VM and Disk Performance](../../includes/virtual-machine-disk-performance.md)]
+
+## Virtual machine uncached vs cached limits
+Virtual machines that are enabled for both premium storage and premium storage caching have two different storage bandwidth limits. Let's look at the Standard_D8s_v3 virtual machine as an example. Here is the documentation on the [Dsv3-series](dv3-dsv3-series.md) and the Standard_D8s_v3:
+
+[!INCLUDE [VM and Disk Performance](../../includes/virtual-machine-disk-performance-2.md)]
+
+Let's run a benchmarking test on this virtual machine and disk combination that creates IO activity. To learn how to benchmark storage IO on Azure, see [Benchmark your application on Azure Disk Storage](disks-benchmarks.md). From the benchmarking tool, you can see that the VM and disk combination can achieve 22,800 IOPS:
+
+[!INCLUDE [VM and Disk Performance](../../includes/virtual-machine-disk-performance-3.md)]
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disks-benchmarks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disks-benchmarks.md deleted file mode 100644
@@ -1,25 +0,0 @@
-title: Benchmarking your application on Azure Disk Storage
-description: Review these examples of benchmarking a Standard DS14 VM provisioned with Azure Premium Storage disks.
-author: roygara
-ms.author: rogarana
-ms.date: 01/11/2019
-ms.topic: how-to
-ms.service: virtual-machines-linux
-ms.subservice: disks
-# Benchmark your application on Azure Disk Storage
-
-Benchmarking is the process of simulating different workloads on your application and measuring the application performance for each workload. Using the steps described in the [designing for high performance article](../premium-storage-performance.md). By running benchmarking tools on the VMs hosting the application, you can determine the performance levels that your application can achieve with Premium Storage. In this article, we provide you examples of benchmarking a Standard DS14 VM provisioned with Azure Premium Storage disks.
-
-We have used common benchmarking tools Iometer and FIO, for Windows and Linux respectively. These tools spawn multiple threads simulating a production like workload, and measure the system performance. Using the tools you can also configure parameters like block size and queue depth, which you normally cannot change for an application. This gives you more flexibility to drive the maximum performance on a high scale VM provisioned with premium disks for different types of application workloads. To learn more about each benchmarking tool visit [Iometer](http://www.iometer.org/) and [FIO](http://freecode.com/projects/fio).
-
-To follow the examples below, create a Standard DS14 VM and attach 11 Premium Storage disks to the VM. Of the 11 disks, configure 10 disks with host caching as "None" and stripe them into a volume called NoCacheWrites. Configure host caching as "ReadOnly" on the remaining disk and create a volume called CacheReads with this disk. Using this setup, you are able to see the maximum Read and Write performance from a Standard DS14 VM. For detailed steps about creating a DS14 VM with premium disks, go to [Designing for high performance](../premium-storage-performance.md).
-
-[!INCLUDE [virtual-machines-disks-benchmarking](../../../includes/virtual-machines-managed-disks-benchmarking.md)]
-
-## Next steps
-
-Proceed to our article on [designing for high performance](../premium-storage-performance.md).
-
-In that article, you use a checklist to create a prototype similar to your existing application. Using benchmarking tools, you can simulate workloads and measure performance on the prototype application. By doing so, you can determine which disk offering can match or surpass your application performance requirements. Then you can apply the same guidelines to your production application.
\ No newline at end of file
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disks-incremental-snapshots-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disks-incremental-snapshots-portal.md deleted file mode 100644
@@ -1,13 +0,0 @@
-title: Create an incremental snapshot - Azure portal
-description: Learn about incremental snapshots for managed disks, including how to create them using Linux.
-author: roygara
-ms.service: virtual-machines-linux
-ms.topic: how-to
-ms.date: 04/02/2020
-ms.author: rogarana
-ms.subservice: disks
-
-# Creating an incremental snapshot for managed disks in the Azure portal
-[!INCLUDE [virtual-machines-disks-incremental-snapshots-portal](../../../includes/virtual-machines-disks-incremental-snapshots-portal.md)]
\ No newline at end of file
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disks-incremental-snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disks-incremental-snapshots.md deleted file mode 100644
@@ -1,13 +0,0 @@
-title: Incremental snapshots for managed disks
-description: Learn about incremental snapshots for managed disks, including how to create them using CLI and Azure Resource Manager.
-author: roygara
-ms.service: virtual-machines-linux
-ms.topic: how-to
-ms.date: 03/13/2020
-ms.author: rogarana
-ms.subservice: disks
-
-# Create an incremental snapshot for managed disks - CLI
-[!INCLUDE [virtual-machines-disks-incremental-snapshot-cli](../../../includes/virtual-machines-disks-incremental-snapshot-cli.md)]
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-performance-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disk-performance-windows.md deleted file mode 100644
@@ -1,21 +0,0 @@
-title: Virtual machine and disk performance - Windows
-description: Learn more about how virtual machines and their attached disks work in combination for performance on Windows.
-author: albecker1
-ms.author: albecker
-ms.date: 10/12/2020
-ms.topic: conceptual
-ms.service: virtual-machines
-ms.subservice: disks
-# Virtual machine and disk performance (Windows)
-[!INCLUDE [VM and Disk Performance](../../../includes/virtual-machine-disk-performance.md)]
-
-## Virtual machine uncached vs cached limits
- Virtual machines that are both premium storage enabled and premium storage caching enabled have two different storage bandwidth limits. Let's continue by looking at the Standard_D8s_v3 virtual machine as an example. Here is the documentation on the [Dsv3-series](../dv3-dsv3-series.md) and on it the Standard_D8s_v3:
-
-[!INCLUDE [VM and Disk Performance](../../../includes/virtual-machine-disk-performance-2.md)]
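As a rough mental model of how the two limits combine (a sketch with assumed placeholder numbers, not figures taken from the Dsv3-series documentation), total achievable IOPS is the sum of the cached and uncached paths, with each path capped both by the VM's limit for that path and by what the disks behind it can serve:

```python
# Hypothetical sketch: combining a VM's cached and uncached storage limits.
# The numbers passed in below are placeholder assumptions; look up the real
# per-size limits in the VM series documentation.

def max_vm_iops(cached_limit: int, uncached_limit: int,
                cached_disk_iops: int, uncached_disk_iops: int) -> int:
    """Each path is capped by the lower of the VM limit and the disk capability;
    the VM's total is the sum of the two paths."""
    return (min(cached_limit, cached_disk_iops)
            + min(uncached_limit, uncached_disk_iops))

# Cached path is disk-bound (10,000 < 16,000); uncached path is VM-bound
# (12,800 < 20,000), so the total is 10,000 + 12,800 = 22,800.
print(max_vm_iops(16_000, 12_800, 10_000, 20_000))  # 22800
```

The takeaway is that raising disk capability past the VM's per-path limit buys nothing; the benchmark results below reflect whichever cap binds first.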
-
-Let's run a benchmarking test on this VM and disk combination that creates IO activity. To learn how to benchmark storage IO on Azure, see [Benchmark your disks](disks-benchmarks.md). From the benchmarking tool, you can see that the VM and disk combination can achieve 22,800 IOPS:
-
-[!INCLUDE [VM and Disk Performance](../../../includes/virtual-machine-disk-performance-3.md)]
\ No newline at end of file
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-incremental-snapshots-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disks-incremental-snapshots-portal.md deleted file mode 100644
@@ -1,13 +0,0 @@
-title: Create an incremental snapshot - Azure portal
-description: Learn about incremental snapshots for managed disks, including how to create them using the Azure portal.
-author: roygara
-ms.service: virtual-machines
-ms.topic: how-to
-ms.date: 04/02/2020
-ms.author: rogarana
-ms.subservice: disks
-
-# Creating an incremental snapshot for managed disks
-[!INCLUDE [virtual-machines-disks-incremental-snapshots-portal](../../../includes/virtual-machines-disks-incremental-snapshots-portal.md)]
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-incremental-snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disks-incremental-snapshots.md deleted file mode 100644
@@ -1,13 +0,0 @@
-title: Incremental snapshots for managed disks
-description: Learn about incremental snapshots for managed disks, including how to create them using PowerShell and Azure Resource Manager.
-author: roygara
-ms.service: virtual-machines
-ms.topic: how-to
-ms.date: 03/13/2020
-ms.author: rogarana
-ms.subservice: disks
-
-# Create an incremental snapshot for managed disks - PowerShell
-[!INCLUDE [virtual-machines-disks-incremental-snapshot-powershell](../../../includes/virtual-machines-disks-incremental-snapshot-powershell.md)]
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/hybrid-use-benefit-licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/hybrid-use-benefit-licensing.md
@@ -37,11 +37,10 @@ There are few ways to use Windows virtual machines with the Azure Hybrid Benefit
All Windows Server OS based images are supported for Azure Hybrid Benefit for Windows Server. You can use Azure platform support images or upload your own custom Windows Server images. ### Portal
-To create a VM with Azure Hybrid Benefit for Windows Server, use the toggle under the "Save money" section.
+To create a VM with Azure Hybrid Benefit for Windows Server, scroll to the bottom of the **Basics** tab during the creation process and under **Licensing** check the box to use an existing Windows Server license.
### PowerShell - ```powershell New-AzVm ` -ResourceGroupName "myResourceGroup" `