Updates from: 02/11/2021 04:10:15
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/workday-integration-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/workday-integration-reference.md
@@ -8,7 +8,7 @@
Previously updated : 01/18/2021 Last updated : 02/09/2021
@@ -448,6 +448,21 @@ To get this data set as part of the *Get_Workers* response, use the following XP
`wd:Worker/wd:Worker_Data/wd:Account_Provisioning_Data/wd:Provisioning_Group_Assignment_Data[wd:Status='Assigned']/wd:Provisioning_Group/text()`
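As a sketch, the documented XPath can be exercised against a trimmed *Get_Workers* fragment. The namespace URI and sample group names below are assumptions for illustration; Python's `xml.etree.ElementTree` supports the `[wd:Status='Assigned']` child-text predicate used here (a full XPath engine such as lxml would accept the documented expression verbatim):

```python
import xml.etree.ElementTree as ET

WD = "urn:com.workday/bsvc"  # Workday WWS namespace, assumed for this sketch
xml = f"""
<wd:Worker xmlns:wd="{WD}">
  <wd:Worker_Data>
    <wd:Account_Provisioning_Data>
      <wd:Provisioning_Group_Assignment_Data>
        <wd:Status>Assigned</wd:Status>
        <wd:Provisioning_Group>Office 365</wd:Provisioning_Group>
      </wd:Provisioning_Group_Assignment_Data>
      <wd:Provisioning_Group_Assignment_Data>
        <wd:Status>Unassigned</wd:Status>
        <wd:Provisioning_Group>Salesforce</wd:Provisioning_Group>
      </wd:Provisioning_Group_Assignment_Data>
    </wd:Account_Provisioning_Data>
  </wd:Worker_Data>
</wd:Worker>
"""
root = ET.fromstring(xml)  # root is the wd:Worker element itself
ns = {"wd": WD}
# [wd:Status='Assigned'] keeps only assignment nodes whose Status child
# text equals "Assigned", mirroring the documented XPath predicate.
groups = [
    el.text
    for el in root.findall(
        "wd:Worker_Data/wd:Account_Provisioning_Data/"
        "wd:Provisioning_Group_Assignment_Data[wd:Status='Assigned']/"
        "wd:Provisioning_Group",
        ns,
    )
]
print(groups)  # ['Office 365']
```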
+## Handling different HR scenarios
+
+### Retrieving international job assignments and secondary job details
+
+By default, the Workday connector retrieves attributes associated with the worker's primary job. The connector also supports retrieving *Additional Job Data* associated with international job assignments or secondary jobs.
+
+Use the steps below to retrieve attributes associated with international job assignments:
+
+1. Ensure the Workday connection URL uses Workday Web Service API version 30.0 or above. Accordingly, set the [correct XPATH values](workday-attribute-reference.md#xpath-values-for-workday-web-services-wws-api-v30) in your Workday provisioning app.
+1. Use the selector `@wd:Primary_Job=0` on the `Worker_Job_Data` node to retrieve the correct attribute.
+ * **Example 1:** To get `SecondaryBusinessTitle` use the XPATH `wd:Worker/wd:Worker_Data/wd:Employment_Data/wd:Worker_Job_Data[@wd:Primary_Job=0]/wd:Position_Data/wd:Business_Title/text()`
+ * **Example 2:** To get `SecondaryBusinessLocation` use the XPATH `wd:Worker/wd:Worker_Data/wd:Employment_Data/wd:Worker_Job_Data[@wd:Primary_Job=0]/wd:Position_Data/wd:Business_Site_Summary_Data/wd:Location_Reference/@wd:Descriptor`
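The `@wd:Primary_Job=0` selector above can be sketched in Python. The namespace URI and sample titles are assumptions for illustration; because `xml.etree.ElementTree` has only a limited XPath subset, the attribute filter is applied in Python (lxml would accept the documented attribute predicate directly):

```python
import xml.etree.ElementTree as ET

WD = "urn:com.workday/bsvc"  # Workday WWS namespace, assumed for this sketch
xml = f"""
<wd:Worker xmlns:wd="{WD}">
  <wd:Worker_Data>
    <wd:Employment_Data>
      <wd:Worker_Job_Data wd:Primary_Job="1">
        <wd:Position_Data>
          <wd:Business_Title>Primary Title</wd:Business_Title>
        </wd:Position_Data>
      </wd:Worker_Job_Data>
      <wd:Worker_Job_Data wd:Primary_Job="0">
        <wd:Position_Data>
          <wd:Business_Title>Secondary Title</wd:Business_Title>
        </wd:Position_Data>
      </wd:Worker_Job_Data>
    </wd:Employment_Data>
  </wd:Worker_Data>
</wd:Worker>
"""
root = ET.fromstring(xml)
ns = {"wd": WD}
# Keep only non-primary jobs: the namespaced wd:Primary_Job attribute is
# stored in Clark notation ({uri}Primary_Job) on each Worker_Job_Data node.
secondary_titles = [
    job.findtext("wd:Position_Data/wd:Business_Title", namespaces=ns)
    for job in root.findall(
        "wd:Worker_Data/wd:Employment_Data/wd:Worker_Job_Data", ns
    )
    if job.get(f"{{{WD}}}Primary_Job") == "0"
]
print(secondary_titles)  # ['Secondary Title']
```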
+
+
+ ## Next steps * [Learn how to configure Workday to Active Directory provisioning](../saas-apps/workday-inbound-tutorial.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
@@ -6,7 +6,7 @@
Previously updated : 08/07/2020 Last updated : 02/10/2021
@@ -31,7 +31,7 @@ For customers with access to [Identity Protection](../identity-protection/overvi
## User risk
-For customers with access to [Identity Protection](../identity-protection/overview-identity-protection.md), user risk can be evaluated as part of a Conditional Access policy. User risk represents the probability that a given a given identity or account is compromised. More information about user risk can be found in the articles, [What is risk](../identity-protection/concept-identity-protection-risks.md#user-risk) and [How To: Configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
+For customers with access to [Identity Protection](../identity-protection/overview-identity-protection.md), user risk can be evaluated as part of a Conditional Access policy. User risk represents the probability that a given identity or account is compromised. More information about user risk can be found in the articles, [What is risk](../identity-protection/concept-identity-protection-risks.md#user-risk) and [How To: Configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
## Device platforms
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-enterprise-app-role-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-enterprise-app-role-management.md
@@ -30,7 +30,7 @@ Use this feature if your application expects custom roles in the SAML response r
## Create roles for an application
-1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>, in the left pane, select the **Azure Active Directory** icon.
+1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, in the left pane, select the **Azure Active Directory** icon.
![Azure Active Directory icon][1]
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-optional-claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-optional-claims.md
@@ -134,7 +134,7 @@ This OptionalClaims object causes the ID token returned to the client to include
You can configure optional claims for your application through the UI or application manifest.
-1. Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations**. 1. Select the application you want to configure optional claims for in the list.
@@ -242,7 +242,7 @@ This section covers the configuration options under optional claims for changing
**Configuring groups optional claims through the UI:**
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. After you've authenticated, choose your Azure AD tenant by selecting it from the top-right corner of the page. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations**.
@@ -255,7 +255,7 @@ This section covers the configuration options under optional claims for changing
**Configuring groups optional claims through the application manifest:**
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. After you've authenticated, choose your Azure AD tenant by selecting it from the top-right corner of the page. 1. Search for and select **Azure Active Directory**. 1. Select the application you want to configure optional claims for in the list.
@@ -384,7 +384,7 @@ In the example below, you will use the **Token configuration** UI and **Manifest
**UI configuration:**
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. After you've authenticated, choose your Azure AD tenant by selecting it from the top-right corner of the page. 1. Search for and select **Azure Active Directory**.
@@ -407,7 +407,7 @@ In the example below, you will use the **Token configuration** UI and **Manifest
**Manifest configuration:**
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. After you've authenticated, choose your Azure AD tenant by selecting it from the top-right corner of the page. 1. Search for and select **Azure Active Directory**. 1. Find the application you want to configure optional claims for in the list and select it.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-saml-claims-customization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-saml-claims-customization.md
@@ -76,10 +76,8 @@ For more info, see [Table 3: Valid ID values per source](active-directory-claims
You can also assign any constant (static) value to any claims which you define in Azure AD. Please follow the below steps to assign a constant value:
-1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>, on the **User Attributes & Claims** section, click on the **Edit** icon to edit the claims.
-
+1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, on the **User Attributes & Claims** section, click on the **Edit** icon to edit the claims.
1. Click on the required claim which you want to modify. 1. Enter the constant value without quotes in the **Source attribute** as per your organization and click **Save**. ![Org Attributes & Claims section in the Azure portal](./media/active-directory-saml-claims-customization/organization-attribute.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md
@@ -45,7 +45,7 @@ The number of roles you add counts toward application manifest limits enforced b
To create an app role by using the Azure portal's user interface:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. Select the **Directory + subscription** filter in top menu, and then choose the Azure Active Directory tenant that contains the app registration to which you want to add an app role. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations**, and then select the application you want to define app roles in.
@@ -70,7 +70,7 @@ To create an app role by using the Azure portal's user interface:
To add roles by editing the manifest directly:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. Select the **Directory + subscription** filter in top menu, and then choose the Azure Active Directory tenant that contains the app registration to which you want to add an app role. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations**, and then select the application you want to define app roles in.
@@ -132,7 +132,7 @@ Once you've added app roles in your application, you can assign users and groups
To assign users and groups to roles by using the Azure portal:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. In **Azure Active Directory**, select **Enterprise applications** in the left-hand navigation menu. 1. Select **All applications** to view a list of all your applications. If your application doesn't appear in the list, use the filters at the top of the **All applications** list to restrict the list, or scroll down the list to locate your application. 1. Select the application in which you want to assign users or security group to roles.
@@ -154,7 +154,7 @@ When you assign app roles to an application, you create *application permissions
To assign app roles to an application by using the Azure portal:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. In **Azure Active Directory**, select **App registrations** in the left-hand navigation menu. 1. Select **All applications** to view a list of all your applications. If your application doesn't appear in the list, use the filters at the top of the **All applications** list to restrict the list, or scroll down the list to locate your application. 1. Select the application to which you want to assign an app role.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-add-terms-of-service-privacy-statement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-add-terms-of-service-privacy-statement.md
@@ -54,11 +54,11 @@ When the terms of service and privacy statement are ready, you can add links to
### <a name="azure-portal"></a>Using the Azure portal Follow these steps in the Azure portal.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>, select the correct AzureAD tenant(not B2C).
-2. Navigate to the **App Registrations** section and select your app.
-3. Open the **Branding** pane.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and select the correct Azure AD tenant (not B2C).
+2. Navigate to the **App registrations** section and select your app.
+3. Under **Manage**, select **Branding**.
4. Fill out the **Terms of Service URL** and **Privacy Statement URL** fields.
-5. Save your changes.
+5. Select **Save**.
![App properties contains terms of service and privacy statement URLs](./media/howto-add-terms-of-service-privacy-statement/azure-portal-terms-service-privacy-statement-urls.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-configure-publisher-domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-configure-publisher-domain.md
@@ -44,18 +44,12 @@ If your app was registered before May 21, 2019, your application's consent promp
To set your app's publisher domain, follow these steps.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a> using either a work or school account, or a personal Microsoft account.
-
-1. If your account is present in more than one Azure AD tenant:
- 1. Select your profile from the menu on the top-right corner of the page, and then **Switch directory**.
- 1. Change your session to the Azure AD tenant where you want to create your application.
-
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which the app is registered.
1. Navigate to [Azure Active Directory > App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) to find and select the app that you want to configure. Once you've selected the app, you'll see the app's **Overview** page.
-1. From the app's **Overview** page, select the **Branding** section.
-
+1. Under **Manage**, select **Branding**.
1. Find the **Publisher domain** field and select one of the following options: - Select **Configure a domain** if you haven't configured a domain already.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-create-service-principal-portal.md
@@ -71,7 +71,7 @@ To check your subscription permissions:
Let's jump straight into creating the identity. If you run into a problem, check the [required permissions](#permissions-required-for-registering-an-app) to make sure your account can create the identity.
-1. Sign in to your Azure Account through the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to your Azure Account through the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. Select **Azure Active Directory**. 1. Select **App registrations**. 1. Select **New registration**.
@@ -177,7 +177,7 @@ If you choose not to use a certificate, you can create a new application secret.
## Configure access policies on resources Keep in mind, you might need to configure additional permissions on resources that your application needs to access. For example, you must also [update a key vault's access policies](../../key-vault/general/secure-your-key-vault.md#data-plane-and-access-policies) to give your application access to keys, secrets, or certificates.
-1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>, navigate to your key vault and select **Access policies**.
+1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, navigate to your key vault and select **Access policies**.
1. Select **Add access policy**, then select the key, secret, and certificate permissions you want to grant your application. Select the service principal you created previously. 1. Select **Add** to add the access policy, then **Save** to commit your changes. ![Add access policy](./media/howto-create-service-principal-portal/add-access-policy.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-restrict-your-app-to-a-set-of-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
@@ -43,7 +43,7 @@ There are two ways to create an application with enabled user assignment. One re
### Enterprise applications (requires the Global Administrator role)
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a> as a **Global Administrator**.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> as a **Global Administrator**.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **Enterprise Applications** > **All applications**.
@@ -55,7 +55,7 @@ There are two ways to create an application with enabled user assignment. One re
### App registration
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/identity-videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/identity-videos.md
@@ -27,21 +27,21 @@ ___
:::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=tkQJSHFsduY" target="_blank">The basics of modern authentication - Microsoft identity platform <span class="docon docon-navigate-external x-hidden-focus"></span></a> (12:28)
+ <a href="https://www.youtube.com/watch?v=tkQJSHFsduY" target="_blank">The basics of modern authentication - Microsoft identity platform</a> (12:28)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=tkQJSHFsduY" target="_blank"> <img src="./media/identity-videos/id-for-devs-07.jpg" alt="Video thumbnail for a video about the basics of modern authentication on the Microsoft identity platform."> </a>
+ <a href="https://www.youtube.com/watch?v=tkQJSHFsduY" target="_blank"> <img src="./media/identity-videos/id-for-devs-07.jpg" alt="Video thumbnail for a video about the basics of modern authentication on the Microsoft identity platform."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=7_vxnHiUA1M" target="_blank">Modern authentication: how we got here – Microsoft identity platform <span class="docon docon-navigate-external x-hidden-focus"></span></a> (15:47)
+ <a href="https://www.youtube.com/watch?v=7_vxnHiUA1M" target="_blank">Modern authentication: how we got here – Microsoft identity platform</a> (15:47)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=7_vxnHiUA1M" target="_blank"> <img src="./media/identity-videos/id-for-devs-08.jpg" alt="Video thumbnail for a video about modern authentication and the Microsoft identity platform." class="mx-imgBorder"> </a>
+ <a href="https://www.youtube.com/watch?v=7_vxnHiUA1M" target="_blank"> <img src="./media/identity-videos/id-for-devs-08.jpg" alt="Video thumbnail for a video about modern authentication and the Microsoft identity platform." class="mx-imgBorder"></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=JpeMeTjQJ04" target="_blank">Overview: Implementing single sign-on in mobile applications - Microsoft Identity Platform <span class="docon docon-navigate-external x-hidden-focus"></span></a> (20:30)
+ <a href="https://www.youtube.com/watch?v=JpeMeTjQJ04" target="_blank">Overview: Implementing single sign-on in mobile applications - Microsoft Identity Platform</a> (20:30)
:::column-end::: :::column::: <a href="https://www.youtube.com/watch?v=JpeMeTjQJ04" target="_blank"> <img src="./media/identity-videos/mobile-single-sign-on.jpg" alt="Video thumbnail for a video about implementing mobile single sign on using the Microsoft identity platform."></a> (20:30)
@@ -68,38 +68,38 @@ ___
:::row::: :::column:::
- 1 - <a href="https://www.youtube.com/watch?v=zjezqZPPOfc&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=1" target="_blank">Overview of the Microsoft identity platform for developers <span class="docon docon-navigate-external x-hidden-focus"></span></a> (33:55)
+ 1 - <a href="https://www.youtube.com/watch?v=zjezqZPPOfc&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=1" target="_blank">Overview of the Microsoft identity platform for developers</a> (33:55)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=zjezqZPPOfc" target="_blank"> <img src="./media/identity-videos/id-for-devs-01.jpg" alt="Video thumbnail for a video overview of the Microsoft identity platform for developers."> </a>
+ <a href="https://www.youtube.com/watch?v=zjezqZPPOfc" target="_blank"> <img src="./media/identity-videos/id-for-devs-01.jpg" alt="Video thumbnail for a video overview of the Microsoft identity platform for developers."></a>
:::column-end::: :::column:::
- 2 - <a href="https://www.youtube.com/watch?v=Mtpx_lpfRLs&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=2" target="_blank">How to authenticate users of your apps with the Microsoft identity platform <span class="docon docon-navigate-external x-hidden-focus"></span></a> (29:09)
+ 2 - <a href="https://www.youtube.com/watch?v=Mtpx_lpfRLs&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=2" target="_blank">How to authenticate users of your apps with the Microsoft identity platform </a> (29:09)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=Mtpx_lpfRLs" target="_blank"> <img src="./media/identity-videos/id-for-devs-02.jpg" alt="Video thumbnail for a video about how to authenticate users of your apps with the Microsoft identity platform."> </a>
+ <a href="https://www.youtube.com/watch?v=Mtpx_lpfRLs" target="_blank"> <img src="./media/identity-videos/id-for-devs-02.jpg" alt="Video thumbnail for a video about how to authenticate users of your apps with the Microsoft identity platform."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- 3 - <a href="https://www.youtube.com/watch?v=toAWRNqqDL4&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=3" target="_blank">Microsoft identity platform's permissions and consent framework <span class="docon docon-navigate-external x-hidden-focus"></span></a> (45:08)
+ 3 - <a href="https://www.youtube.com/watch?v=toAWRNqqDL4&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=3" target="_blank">Microsoft identity platform's permissions and consent framework</a> (45:08)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=toAWRNqqDL4" target="_blank"> <img src="./media/identity-videos/id-for-devs-03.jpg" alt="Video thumbnail for a video about Microsoft identity platform's permissions and consent framework."> </a>
+ <a href="https://www.youtube.com/watch?v=toAWRNqqDL4" target="_blank"> <img src="./media/identity-videos/id-for-devs-03.jpg" alt="Video thumbnail for a video about Microsoft identity platform's permissions and consent framework."></a>
:::column-end::: :::column:::
- 4 - <a href="https://www.youtube.com/watch?v=IIQ7QW4bYqA&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=4" target="_blank">How to protect APIs using the Microsoft identity platform <span class="docon docon-navigate-external x-hidden-focus"></span></a> (33:17)
+ 4 - <a href="https://www.youtube.com/watch?v=IIQ7QW4bYqA&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=4" target="_blank">How to protect APIs using the Microsoft identity platform</a> (33:17)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=IIQ7QW4bYqA" target="_blank"> <img src="./media/identity-videos/id-for-devs-04.jpg" alt="Video thumbnail for a video about how to protect APIs using the Microsoft identity platform."> </a>
+ <a href="https://www.youtube.com/watch?v=IIQ7QW4bYqA" target="_blank"> <img src="./media/identity-videos/id-for-devs-04.jpg" alt="Video thumbnail for a video about how to protect APIs using the Microsoft identity platform."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- 5 - <a href="https://www.youtube.com/watch?v=-BK2iBDrmNo&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=5" target="_blank">Application roles and security groups on the Microsoft identity platform <span class="docon docon-navigate-external x-hidden-focus"></span></a> (15:52)
+ 5 - <a href="https://www.youtube.com/watch?v=-BK2iBDrmNo&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX&index=5" target="_blank">Application roles and security groups on the Microsoft identity platform</a> (15:52)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=-BK2iBDrmNo" target="_blank"> <img src="./media/identity-videos/id-for-devs-05.jpg" alt="Video thumbnail for a video about application roles and security groups on the Microsoft identity platform."> </a>
+ <a href="https://www.youtube.com/watch?v=-BK2iBDrmNo" target="_blank"> <img src="./media/identity-videos/id-for-devs-05.jpg" alt="Video thumbnail for a video about application roles and security groups on the Microsoft identity platform."></a>
:::column-end::: :::column::: :::column-end:::
@@ -131,44 +131,44 @@ ___
:::row::: :::column:::
- 1 - <a href="https://www.youtube.com/watch?v=fbSVgC8nGz4&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=1" target="_blank">Basics: The concepts of modern authentication <span class="docon docon-navigate-external x-hidden-focus"></span></a> (4:33)
+ 1 - <a href="https://www.youtube.com/watch?v=fbSVgC8nGz4&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=1" target="_blank">Basics: The concepts of modern authentication</a> (4:33)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=fbSVgC8nGz4" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-01.jpg" alt="Video thumbnail for a video about the concept of modern authentication."> </a>
+ <a href="https://www.youtube.com/watch?v=fbSVgC8nGz4" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-01.jpg" alt="Video thumbnail for a video about the concept of modern authentication."></a>
:::column-end::: :::column:::
- 2 - <a href="https://www.youtube.com/watch?v=tCNcG1lcCHY&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=2" target="_blank">Modern authentication for web applications <span class="docon docon-navigate-external x-hidden-focus"></span></a> (6:02)
+ 2 - <a href="https://www.youtube.com/watch?v=tCNcG1lcCHY&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=2" target="_blank">Modern authentication for web applications</a> (6:02)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=tCNcG1lcCHY" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-02.jpg" alt="Video thumbnail for a video about modern authentication for web applications."> </a>
+ <a href="https://www.youtube.com/watch?v=tCNcG1lcCHY" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-02.jpg" alt="Video thumbnail for a video about modern authentication for web applications."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- 3 - <a href="https://www.youtube.com/watch?v=51B-jSOBF8U&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=3" target="_blank">Web single sign-on <span class="docon docon-navigate-external x-hidden-focus"></span></a> (4:13)
+ 3 - <a href="https://www.youtube.com/watch?v=51B-jSOBF8U&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=3" target="_blank">Web single sign-on</a> (4:13)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=51B-jSOBF8U" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-03.jpg" alt="Video thumbnail for a video about web single sign-on."> </a>
+ <a href="https://www.youtube.com/watch?v=51B-jSOBF8U" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-03.jpg" alt="Video thumbnail for a video about web single sign-on."></a>
:::column-end::: :::column:::
- 4 - <a href="https://www.youtube.com/watch?v=CjarTgjKcX8&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=4" target="_blank">Federated web authentication <span class="docon docon-navigate-external x-hidden-focus"></span></a> (6:19)
+ 4 - <a href="https://www.youtube.com/watch?v=CjarTgjKcX8&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=4" target="_blank">Federated web authentication</a> (6:19)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=CjarTgjKcX8" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-04.jpg" alt="Video thumbnail for a video about federated web authentication."> </a>
+ <a href="https://www.youtube.com/watch?v=CjarTgjKcX8" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-04.jpg" alt="Video thumbnail for a video about federated web authentication."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- 5 - <a href="https://www.youtube.com/watch?v=OGMDnuDrAcQ&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=5" target="_blank">Native client applications - Part 1 <span class="docon docon-navigate-external x-hidden-focus"></span></a> (8:12)
+ 5 - <a href="https://www.youtube.com/watch?v=OGMDnuDrAcQ&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=5" target="_blank">Native client applications - Part 1 </a> (8:12)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=OGMDnuDrAcQ" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-05.jpg" alt="Video thumbnail for part 1 of a video about native client applications."> </a>
+ <a href="https://www.youtube.com/watch?v=OGMDnuDrAcQ" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-05.jpg" alt="Video thumbnail for part 1 of a video about native client applications."></a>
:::column-end::: :::column:::
- 6 - <a href="https://www.youtube.com/watch?v=2RE6IhXfmHY&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=6" target="_blank">Native client applications - Part 2 <span class="docon docon-navigate-external x-hidden-focus"></span></a> (5:33)
+ 6 - <a href="https://www.youtube.com/watch?v=2RE6IhXfmHY&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=6" target="_blank">Native client applications - Part 2</a> (5:33)
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=2RE6IhXfmHY" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-06.jpg" alt="Video thumbnail for part 2 of a video about native client applications."> </a>
+ <a href="https://www.youtube.com/watch?v=2RE6IhXfmHY" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-06.jpg" alt="Video thumbnail for part 2 of a video about native client applications."></a>
:::column-end::: :::row-end:::
@@ -196,10 +196,10 @@ ___
:::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=bNlcFuIo3r8" target="_blank">Microsoft identity platform overview <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=bNlcFuIo3r8" target="_blank">Microsoft identity platform overview</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=bNlcFuIo3r8" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for an overview video about Microsoft identity platform."> </a>
+ <a href="https://www.youtube.com/watch?v=bNlcFuIo3r8" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for an overview video about Microsoft identity platform."></a>
:::column-end::: :::column::: :::column-end:::
@@ -208,27 +208,27 @@ ___
:::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=apbbx2n4tnU" target="_blank">Microsoft Graph and the Microsoft Authentication Library (MSAL) <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=apbbx2n4tnU" target="_blank">Microsoft Graph and the Microsoft Authentication Library (MSAL)</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=apbbx2n4tnU" target="_blank"> <img src="./media/identity-videos/graph-and-msal.jpg" alt="Video thumbnail for a video about Microsoft Graph and the Microsoft Authentication Library (MSAL)."> </a>
+ <a href="https://www.youtube.com/watch?v=apbbx2n4tnU" target="_blank"> <img src="./media/identity-videos/graph-and-msal.jpg" alt="Video thumbnail for a video about Microsoft Graph and the Microsoft Authentication Library (MSAL)."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=yLVEBU9Z96Q" target="_blank">What is the MSAL family of libraries? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=yLVEBU9Z96Q" target="_blank">What is the MSAL family of libraries?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=yLVEBU9Z96Q" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video about the MSAL family of libraries."> </a>
+ <a href="https://www.youtube.com/watch?v=yLVEBU9Z96Q" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video about the MSAL family of libraries."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=eiPHOoLmGJs" target="_blank">Scopes explained <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=eiPHOoLmGJs" target="_blank">Scopes explained</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=eiPHOoLmGJs" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that explains scopes."> </a>
+ <a href="https://www.youtube.com/watch?v=eiPHOoLmGJs" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that explains scopes."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=Zd_Uubnu0U0" target="_blank">What are brokers <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=Zd_Uubnu0U0" target="_blank">What are brokers</a>
:::column-end::: :::column::: <a href="https://www.youtube.com/watch?v=Zd_Uubnu0U0" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video about brokers."> </a>
@@ -236,94 +236,94 @@ ___
:::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=znSN_3JAuoU" target="_blank">What redirect URIs do <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=znSN_3JAuoU" target="_blank">What redirect URIs do</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=znSN_3JAuoU" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describers what redirect URIs do."> </a>
+ <a href="https://www.youtube.com/watch?v=znSN_3JAuoU" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes what redirect URIs do."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=mDhT4Zv1fZU" target="_blank">Tenants explained <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=mDhT4Zv1fZU" target="_blank">Tenants explained</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=mDhT4Zv1fZU" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that explains tenants."> </a>
+ <a href="https://www.youtube.com/watch?v=mDhT4Zv1fZU" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that explains tenants."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=zDEC7A5ZS2Q" target="_blank">Role of Azure AD <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=zDEC7A5ZS2Q" target="_blank">Role of Azure AD</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=zDEC7A5ZS2Q" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes the role of Azure AD."> </a>
+ <a href="https://www.youtube.com/watch?v=zDEC7A5ZS2Q" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes the role of Azure AD."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=HEpq_YSmuWw" target="_blank">Role of Azure AD app objects <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=HEpq_YSmuWw" target="_blank">Role of Azure AD app objects</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=HEpq_YSmuWw" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes the role of Azure AD app objects."> </a>
+ <a href="https://www.youtube.com/watch?v=HEpq_YSmuWw" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes the role of Azure AD app objects."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=E2OUluQQKSk" target="_blank">Organizational and personal Microsoft account differences <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=E2OUluQQKSk" target="_blank">Organizational and personal Microsoft account differences</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=E2OUluQQKSk" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video about organizational and personal Microsoft account differences."> </a>
+ <a href="https://www.youtube.com/watch?v=E2OUluQQKSk" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video about organizational and personal Microsoft account differences."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=ZJirt7eTVw8" target="_blank">SPA and web app differences <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=ZJirt7eTVw8" target="_blank">SPA and web app differences</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=ZJirt7eTVw8" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video about SPA and web app differences."> </a>
+ <a href="https://www.youtube.com/watch?v=ZJirt7eTVw8" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video about SPA and web app differences."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=6R3W9T01gdE" target="_blank">What are Application Permissions vs Delegated Permissions? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=6R3W9T01gdE" target="_blank">What are Application Permissions vs Delegated Permissions?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=6R3W9T01gdE" target="_blank"> <img src="./media/identity-videos/aad-basics-12.jpg" alt="Video thumbnail for a video about the differences between application permissions and delegated permissions."> </a>
+ <a href="https://www.youtube.com/watch?v=6R3W9T01gdE" target="_blank"> <img src="./media/identity-videos/aad-basics-12.jpg" alt="Video thumbnail for a video about the differences between application permissions and delegated permissions."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=Gm6sALdXtpg" target="_blank">What is Microsoft identity platform OpenID Connect certified? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=Gm6sALdXtpg" target="_blank">What is Microsoft identity platform OpenID Connect certified?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=Gm6sALdXtpg" target="_blank"> <img src="./media/identity-videos/one-dev-question-hs.jpg" alt="Video thumbnail for a video about Microsoft identity platform OpenID Connect certified."> </a>
+ <a href="https://www.youtube.com/watch?v=Gm6sALdXtpg" target="_blank"> <img src="./media/identity-videos/one-dev-question-hs.jpg" alt="Video thumbnail for a video about Microsoft identity platform OpenID Connect certified."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=NrydwrckYaw" target="_blank">What are the different Azure Active Directory app types and how do they compare? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=NrydwrckYaw" target="_blank">What are the different Azure Active Directory app types and how do they compare?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=NrydwrckYaw" target="_blank"> <img src="./media/identity-videos/aad-basics-13.jpg" alt="Video thumbnail for a video that compares Azure Active Directory app types."> </a>
+ <a href="https://www.youtube.com/watch?v=NrydwrckYaw" target="_blank"> <img src="./media/identity-videos/aad-basics-13.jpg" alt="Video thumbnail for a video that compares Azure Active Directory app types."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=cZKgTqF4o88" target="_blank">If you use MSAL, what essential protocol concepts should you know? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=cZKgTqF4o88" target="_blank">If you use MSAL, what essential protocol concepts should you know?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=cZKgTqF4o88" target="_blank"> <img src="./media/identity-videos/one-dev-question-hs.jpg" alt="Video thumbnail for a video about protocol concepts you should know if you use MSAL."> </a>
+ <a href="https://www.youtube.com/watch?v=cZKgTqF4o88" target="_blank"> <img src="./media/identity-videos/one-dev-question-hs.jpg" alt="Video thumbnail for a video about protocol concepts you should know if you use MSAL."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=41vmzPdbfXM" target="_blank">What is the difference between ID tokens, access tokens, refresh tokens, and session tokens? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=41vmzPdbfXM" target="_blank">What is the difference between ID tokens, access tokens, refresh tokens, and session tokens?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=41vmzPdbfXM" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-08.jpg" alt="Video thumbnail for a video that explains the difference between ID tokens, access tokens, refresh tokens, and session tokens."> </a>
+ <a href="https://www.youtube.com/watch?v=41vmzPdbfXM" target="_blank"> <img src="./media/identity-videos/aad-auth-fund-08.jpg" alt="Video thumbnail for a video that explains the difference between ID tokens, access tokens, refresh tokens, and session tokens."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=jEEwN7XAtUo" target="_blank">What is the relationship between an authorization request and tokens? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=jEEwN7XAtUo" target="_blank">What is the relationship between an authorization request and tokens?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=jEEwN7XAtUo" target="_blank"> <img src="./media/identity-videos/one-dev-question-hs.jpg" alt="Video thumbnail for a video that describes the relationship between an authorization request and tokens."> </a>
+ <a href="https://www.youtube.com/watch?v=jEEwN7XAtUo" target="_blank"> <img src="./media/identity-videos/one-dev-question-hs.jpg" alt="Video thumbnail for a video that describes the relationship between an authorization request and tokens."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=4pwuRYcZbz4" target="_blank">What aspects of using protocols does the MSAL libraries make easier? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=4pwuRYcZbz4" target="_blank">What aspects of using protocols do the MSAL libraries make easier?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=4pwuRYcZbz4" target="_blank"> <img src="./media/identity-videos/id-for-devs-06.jpg" alt="Video thumbnail for video that describes what aspects of using protocols does the MSAL libraries make easier."> </a>
+ <a href="https://www.youtube.com/watch?v=4pwuRYcZbz4" target="_blank"> <img src="./media/identity-videos/id-for-devs-06.jpg" alt="Video thumbnail for a video that describes which aspects of using protocols the MSAL libraries make easier."></a>
:::column-end::: :::column::: :::column-end:::
@@ -338,29 +338,29 @@ ___
:::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=qpdC45tZYDg" target="_blank">Why migrate from ADAL to MSAL <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=qpdC45tZYDg" target="_blank">Why migrate from ADAL to MSAL</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=qpdC45tZYDg" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that explains why to migrate from ADAL to MSAL."> </a>
+ <a href="https://www.youtube.com/watch?v=qpdC45tZYDg" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that explains why to migrate from ADAL to MSAL."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=xgL_z9yCnrE" target="_blank">Migrating your ADAL codebase to MSAL <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=xgL_z9yCnrE" target="_blank">Migrating your ADAL codebase to MSAL</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=xgL_z9yCnrE" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes migrating your ADAL codebase to MSAL."> </a>
+ <a href="https://www.youtube.com/watch?v=xgL_z9yCnrE" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes migrating your ADAL codebase to MSAL."></a>
:::column-end::: :::row-end::: :::row::: :::column:::
- <a href="https://www.youtube.com/watch?v=q-TDszj2O-4" target="_blank">Advantages of MSAL over ADAL <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=q-TDszj2O-4" target="_blank">Advantages of MSAL over ADAL</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=q-TDszj2O-4" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes the advantages of MSAL over ADAL."> </a>
+ <a href="https://www.youtube.com/watch?v=q-TDszj2O-4" target="_blank"> <img src="./media/identity-videos/one-dev-question-jm.jpg" alt="Video thumbnail for a video that describes the advantages of MSAL over ADAL."></a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=aBMUxC4evhU" target="_blank">What are the differences between v1 and v2 authentication? <span class="docon docon-navigate-external x-hidden-focus"></span></a>
+ <a href="https://www.youtube.com/watch?v=aBMUxC4evhU" target="_blank">What are the differences between v1 and v2 authentication?</a>
:::column-end::: :::column:::
- <a href="https://www.youtube.com/watch?v=aBMUxC4evhU" target="_blank"> <img src="./media/identity-videos/one-dev-question-hs.jpg" alt="Video thumbnail"> </a>
+ <a href="https://www.youtube.com/watch?v=aBMUxC4evhU" target="_blank"> <img src="./media/identity-videos/one-dev-question-hs.jpg" alt="Video thumbnail"></a>
:::column-end::: :::row-end:::
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/microsoft-identity-web https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/microsoft-identity-web.md
@@ -67,7 +67,7 @@ dotnet new blazorserver2 --auth SingleOrg --calls-graph --client-id "00000000-00
#### GitHub
-Microsoft Identity Web is an open-source project hosted on GitHub: <a href="https://github.com/AzureAD/microsoft-identity-web" target="_blank">AzureAD/microsoft-identity-web<span class="docon docon-navigate-external x-hidden-focus"></span></a>
+Microsoft Identity Web is an open-source project hosted on GitHub: <a href="https://github.com/AzureAD/microsoft-identity-web" target="_blank">AzureAD/microsoft-identity-web</a>
The [repository wiki](https://github.com/AzureAD/microsoft-identity-web/wiki) contains additional documentation, and if you need help or discover a bug, you can [file an issue](https://github.com/AzureAD/microsoft-identity-web/issues).
@@ -96,8 +96,8 @@ To see Microsoft Identity Web in action, try our Blazor Server tutorial:
The Microsoft Identity Web wiki on GitHub contains extensive reference documentation for various aspects of the library. For example, certificate usage, incremental consent, and conditional access reference can be found here:
-- <a href="https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates" target="_blank">Using certificates with Microsoft.Identity.Web<span class="docon docon-navigate-external x-hidden-focus"></span></a> (GitHub)
-- <a href="https://github.com/AzureAD/microsoft-identity-web/wiki/Managing-incremental-consent-and-conditional-access" target="_blank">Incremental consent and conditional access<span class="docon docon-navigate-external x-hidden-focus"></span></a> (GitHub)
+- <a href="https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates" target="_blank">Using certificates with Microsoft.Identity.Web</a> (GitHub)
+- <a href="https://github.com/AzureAD/microsoft-identity-web/wiki/Managing-incremental-consent-and-conditional-access" target="_blank">Incremental consent and conditional access</a> (GitHub)
<!-- LINKS --> <!-- [miw-certs]: microsoft-identity-web-certificates.md -->
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/migrate-spa-implicit-to-auth-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/migrate-spa-implicit-to-auth-code.md
@@ -36,7 +36,7 @@ If you'd like to continue using your existing app registration for your applicat
Follow these steps for app registrations that are currently configured with **Web** platform redirect URIs:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a> and select your **Azure Active Directory** tenant.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and select your **Azure Active Directory** tenant.
1. In **App registrations**, select your application, and then **Authentication**.
1. In the **Web** platform tile under **Redirect URIs**, select the warning banner indicating that you should migrate your URIs.
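The redirect-URI migration above moves a single-page app off the implicit grant and onto the authorization code flow with PKCE. As a minimal sketch (standard RFC 7636 behavior, not code from this article), the code verifier/challenge pair that replaces the implicit grant's direct token response can be generated like this:

```python
import base64
import hashlib
import secrets

# PKCE sketch (RFC 7636): a SPA on the auth code flow sends a one-time
# code_challenge in the authorize request, then proves possession of the
# matching code_verifier when redeeming the authorization code.
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
challenge = (
    base64.urlsafe_b64encode(hashlib.sha256(verifier.encode()).digest())
    .rstrip(b"=")
    .decode()
)
print(challenge)  # sent as code_challenge with code_challenge_method=S256
```

MSAL.js 2.x performs this exchange internally; the sketch only illustrates what changes relative to the implicit flow.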
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-android-single-sign-on https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-android-single-sign-on.md
@@ -115,7 +115,7 @@ keytool -exportcert -alias androiddebugkey -keystore %HOMEPATH%\.android\debug.k
Once you've generated a signature hash with *keytool*, use the Azure portal to generate the redirect URI:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a> and select your Android app in **App registrations**.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and select your Android app in **App registrations**.
1. Select **Authentication** > **Add a platform** > **Android**.
1. In the **Configure your Android app** pane that opens, enter the **Signature hash** that you generated earlier and a **Package name**.
1. Select the **Configure** button.
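The signature hash and package name from the steps above combine into the app's redirect URI. A hedged sketch of that assembly (the `msauth` scheme format is from the MSAL Android documentation; the certificate bytes and package name here are placeholders, since in practice *keytool* computes the hash from your real keystore):

```python
import base64
import hashlib
from urllib.parse import quote

# keytool's "-exportcert | SHA-1 | base64" pipeline, approximated in Python.
# cert_der is a placeholder; keytool reads the real signing certificate
# from the debug/release keystore instead.
cert_der = b"placeholder-certificate-bytes"
signature_hash = base64.b64encode(hashlib.sha1(cert_der).digest()).decode()

# MSAL Android redirect URIs take the form
# msauth://<package-name>/<url-encoded-signature-hash>.
package_name = "com.example.app"  # placeholder package name
redirect_uri = f"msauth://{package_name}/{quote(signature_hash, safe='')}"
print(redirect_uri)
```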
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-national-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-national-cloud.md
@@ -67,7 +67,7 @@ To enable your MSAL.js application for sovereign clouds:
### Step 1: Register your application
-1. Sign in to the <a href="https://portal.azure.us/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.us/" target="_blank">Azure portal</a>.
To find Azure portal endpoints for other national clouds, see [App registration endpoints](authentication-national-cloud.md#app-registration-endpoints).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-use-brokers-with-xamarin-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md
@@ -180,7 +180,7 @@ Add the redirect URI to the app's registration in the [Azure portal](https://por
**To generate the redirect URI:**
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. Select **Azure Active Directory** > **App registrations** > your registered app.
1. Select **Authentication** > **Add a platform** > **iOS / macOS**.
1. Enter your bundle ID, and then select **Configure**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-configure-app-access-web-apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md
@@ -39,7 +39,7 @@ This diagram shows how the two app registrations relate to one another. In this
Once you've registered both your client app and web API and you've exposed the API by creating scopes, you can configure the client's permissions to the API by following these steps:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
1. Select **Azure Active Directory** > **App registrations**, and then select your client application (*not* your web API).
1. Select **API permissions** > **Add a permission** > **My APIs**.
@@ -68,7 +68,7 @@ In addition to accessing your own web API on behalf of the signed-in user, your
Configure delegated permission to Microsoft Graph to enable your client application to perform operations on behalf of the logged-in user, for example reading their email or modifying their profile. By default, users of your client app are asked when they sign in to consent to the delegated permissions you've configured for it.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
1. Select **Azure Active Directory** > **App registrations**, and then select your client application.
1. Select **API permissions** > **Add a permission** > **Microsoft Graph**.
@@ -93,7 +93,7 @@ Configure application permissions for an application that needs to authenticate
In the following steps, you grant permission to Microsoft Graph's *Files.Read.All* permission as an example.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
1. Select **Azure Active Directory** > **App registrations**, and then select your client application.
1. Select **API permissions** > **Add a permission** > **Microsoft Graph** > **Application permissions**.
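Application permissions like *Files.Read.All* above are redeemed at runtime through the client credentials grant, where the token request names the resource's special `.default` scope rather than individual delegated scopes. A hedged sketch of the request body (placeholder client ID and secret; the request is only assembled, not sent):

```python
from urllib.parse import urlencode

# Application permissions are granted on the app registration; the app then
# requests all of them at once via the ".default" scope, with no signed-in
# user involved.
token_body = urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "client_secret": "<secret>",  # placeholder; keep real secrets in a vault
    "grant_type": "client_credentials",
    "scope": "https://graph.microsoft.com/.default",
})
# POST this body to
# https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token
print(token_body)
```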
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-configure-app-expose-web-apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md
@@ -42,7 +42,7 @@ The code in a client application requests permission to perform operations defin
First, follow these steps to create an example scope named `Employees.Read.All`:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/quickstart-configure-app-expose-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
1. Select **Azure Active Directory** > **App registrations**, and then select your API's app registration.
1. Select **Expose an API** > **Add a scope**.
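A scope such as `Employees.Read.All` exposed through the steps above is requested by client applications using its full URI, `api://<api-client-id>/<scope-name>` (assuming the default Application ID URI). A sketch of the resulting authorize request, with placeholder GUIDs and redirect URI:

```python
from urllib.parse import urlencode

# Clients request a custom web API scope by its full identifier URI.
scope = "api://11111111-1111-1111-1111-111111111111/Employees.Read.All"

params = urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "response_type": "code",
    "redirect_uri": "http://localhost:5000/auth",  # placeholder
    "scope": scope,
})
authorize_url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?" + params
)
print(authorize_url)
```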
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-create-new-tenant.md
@@ -47,7 +47,7 @@ Many developers already have tenants through services or subscriptions that are
To check the tenant:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>. Use the account you'll use to manage your application.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. Use the account you'll use to manage your application.
1. Check the upper-right corner. If you have a tenant, you'll automatically be signed in. You see the tenant name directly under your account name.
   * Hover over your account name to see your name, email address, directory or tenant ID (a GUID), and domain.
   * If your account is associated with multiple tenants, you can select your account name to open a menu where you can switch between tenants. Each tenant has its own tenant ID.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-modify-supported-accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-modify-supported-accounts.md
@@ -31,7 +31,7 @@ In the following sections, you learn how to modify your app's registration in th
To specify a different setting for the account types supported by an existing app registration:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
1. Search for and select **Azure Active Directory**.
1. Under **Manage**, select **App registrations**, then select your application.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-remove-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-remove-app.md
@@ -36,9 +36,10 @@ Applications that you or your organization have registered are represented by bo
To delete an application, be listed as an owner of the application or have admin privileges.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a> using either a work or school account or a personal Microsoft account.
-1. If your account gives you access to more than one tenant, select your account in the top right corner, and set your portal session to the desired Azure AD tenant.
-1. In the left-hand navigation pane, select the **Azure Active Directory** service, then select **App registrations**. Find and select the application that you want to configure. Once you've selected the app, you'll see the application's **Overview** page.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which the app is registered.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** and select the application that you want to configure. Once you've selected the app, you'll see the application's **Overview** page.
1. From the **Overview** page, select **Delete**. 1. Select **Yes** to confirm that you want to delete the app.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-angular.md
@@ -33,9 +33,7 @@ In this quickstart, you download and run a code sample that demonstrates how an
> > ### Option 1 (express): Register and automatically configure the app, and then download the code sample >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
-> 1. If your account has access to more than one tenant, select the account at the upper right, and then set your portal session to the Azure Active Directory (Azure AD) tenant that you want to use.
-> 1. Open the new [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs) pane in the Azure portal.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application, and then select **Register**. > 1. Go to the quickstart pane and view the Angular quickstart. Follow the instructions to download and automatically configure your new application. >
@@ -43,8 +41,8 @@ In this quickstart, you download and run a code sample that demonstrates how an
> > #### Step 1: Register the application >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
-> 1. If your account has access to more than one tenant, select your account at the upper right, and set your portal session to the Azure AD tenant that you want to use.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
> 1. Follow the instructions to [register a single-page application](./scenario-spa-app-registration.md) in the Azure portal. > 1. Add a new platform on the **Authentication** pane of your app registration and register the redirect URI: `http://localhost:4200/`. > 1. This quickstart uses the [implicit grant flow](v2-oauth2-implicit-grant-flow.md). In the **Implicit grant and hybrid flows** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app signs in users and calls an API.
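The Angular quickstart entry above registers the redirect URI `http://localhost:4200/` and enables both **ID tokens** and **Access tokens** for the implicit grant. A minimal sketch of the client-side auth settings those choices imply (the `clientId` value and the `SpaAuthConfig` shape are illustrative placeholders, not the exact MSAL Angular API):

```typescript
// Hypothetical SPA auth settings mirroring the quickstart's registration:
// redirect URI http://localhost:4200/ plus implicit-grant ID and access tokens.
interface SpaAuthConfig {
  clientId: string;        // Application (client) ID from the app registration
  authority: string;       // Azure AD authority to sign in against
  redirectUri: string;     // Must match the URI registered on the Authentication pane
  responseTypes: string[]; // Implicit grant: both token kinds enabled
}

const authConfig: SpaAuthConfig = {
  clientId: "00000000-0000-0000-0000-000000000000", // placeholder
  authority: "https://login.microsoftonline.com/common",
  redirectUri: "http://localhost:4200/",
  responseTypes: ["id_token", "token"], // ID tokens + access tokens
};

console.log(authConfig.redirectUri);
```

The key point is that both response types are needed because the sample signs users in (ID token) and calls an API (access token).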
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-core-web-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
@@ -32,7 +32,7 @@ In this quickstart, you download an ASP.NET Core web API code sample and review
> > First, register the web API in your Azure AD tenant and add a scope by following these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
@@ -35,7 +35,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetCoreWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetCoreWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application for you in one click. >
@@ -44,7 +44,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> #### Step 1: Register your application > To register your application and add the app's registration information to your solution manually, follow these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-core-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
@@ -35,7 +35,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://aka.ms/aspnetcore2-1-aad-quickstart-v2/" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://aka.ms/aspnetcore2-1-aad-quickstart-v2/" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application for you in one click. >
@@ -44,7 +44,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> #### Step 1: Register your application > To register your application and add the app's registration information to your solution manually, follow these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
@@ -35,7 +35,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application for you in one click. >
@@ -44,7 +44,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> #### Step 1: Register your application > To register your application and add the app's registration information to your solution manually, follow these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-dotnet-native-aspnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
@@ -45,13 +45,12 @@ In this section, you register your web API in **App registrations** in the Azure
To register your apps manually, choose the Azure Active Directory (Azure AD) tenant where you want to create your apps.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a> with either a work or school account or a personal Microsoft account.
-1. If your account is present in more than one Azure AD tenant, select your profile at the upper right, and then select **Switch directory**.
-1. Change your portal session to the Azure AD tenant you want to use.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant that you want to use.
### Register the TodoListService app
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-ios.md
@@ -43,7 +43,7 @@ The quickstart applies to both iOS and macOS apps. Some steps are needed only fo
> ### Option 1: Register and auto configure your app and then download the code sample > #### Step 1: Register your application > To register your app,
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/IosQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/IosQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application with just one click. >
@@ -52,7 +52,7 @@ The quickstart applies to both iOS and macOS apps. Some steps are needed only fo
> #### Step 1: Register your application > To register your application and add the app's registration information to your solution manually, follow these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-java-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-webapp.md
@@ -34,7 +34,7 @@ To run this sample, you need:
> > ### Option 1: Register and automatically configure your app, and then download the code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application, and then select **Register**. > 1. Follow the instructions in the portal's quickstart experience to download the automatically configured application code. >
@@ -44,7 +44,7 @@ To run this sample, you need:
> > To register your application and manually add the app's registration information to it, follow these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register the application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-javascript-auth-code-angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
@@ -36,7 +36,7 @@ This quickstart uses MSAL Angular v2 with the authorization code flow. For a sim
> > ### Option 1 (Express): Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application. > 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**.
@@ -46,7 +46,7 @@ This quickstart uses MSAL Angular v2 with the authorization code flow. For a sim
> > #### Step 1: Register your application >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-javascript-auth-code-react https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
@@ -37,7 +37,7 @@ This quickstart uses MSAL React with the authorization code flow. For a similar
> > ### Option 1 (Express): Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application. > 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**.
@@ -47,7 +47,7 @@ This quickstart uses MSAL React with the authorization code flow. For a similar
> > #### Step 1: Register your application >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-javascript-auth-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
@@ -36,7 +36,7 @@ This quickstart uses MSAL.js 2.0 with the authorization code flow. For a similar
> > ### Option 1 (Express): Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a>.
> 1. Enter a name for your application. > 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**.
@@ -46,7 +46,7 @@ This quickstart uses MSAL.js 2.0 with the authorization code flow. For a similar
> > #### Step 1: Register your application >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
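The MSAL.js 2.0 entry above is built on the authorization code flow with PKCE, which the library drives for you. As a rough sketch of what that flow involves under the hood, the following builds an authorization request URL against the Microsoft identity platform v2.0 authorize endpoint (the `client_id`, redirect URI, and scopes are placeholders for illustration):

```typescript
// Sketch of an authorization-code-with-PKCE request, the protocol MSAL.js 2.0
// implements internally. Parameter names follow the v2.0 authorize endpoint.
import { createHash, randomBytes } from "crypto";

// Base64url-encode without padding, as PKCE requires.
function base64Url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// The verifier stays client-side; only its SHA-256 challenge goes in the request.
const codeVerifier = base64Url(randomBytes(32));
const codeChallenge = base64Url(createHash("sha256").update(codeVerifier).digest());

const params = new URLSearchParams({
  client_id: "00000000-0000-0000-0000-000000000000", // placeholder
  response_type: "code",
  redirect_uri: "http://localhost:3000/", // placeholder redirect URI
  scope: "openid profile User.Read",      // placeholder scopes
  code_challenge: codeChallenge,
  code_challenge_method: "S256",
});

const authorizeUrl =
  `https://login.microsoftonline.com/common/oauth2/v2.0/authorize?${params}`;
console.log(authorizeUrl);
```

After the user signs in, the returned `code` is redeemed at the token endpoint together with the original `codeVerifier`, which is what makes the flow safe for browser apps without a client secret.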
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript.md
@@ -35,7 +35,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1 (Express): Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application. > 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**.
@@ -45,7 +45,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > #### Step 1: Register your application >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-netcore-daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
@@ -36,7 +36,7 @@ This quickstart requires [.NET Core 3.1](https://www.microsoft.com/net/download/
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application with just one click. >
@@ -46,7 +46,7 @@ This quickstart requires [.NET Core 3.1](https://www.microsoft.com/net/download/
> #### Step 1: Register your application > To register your application and add the app's registration information to your solution manually, follow these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-nodejs-webapp-msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
@@ -38,7 +38,7 @@ This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node
> > #### Step 1: Register your application >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Under **Manage**, select **App registrations** > **New registration**. > 1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-nodejs-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
@@ -27,7 +27,7 @@ In this quickstart, you download and run a code sample that demonstrates how to
## Register your application
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-python-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-webapp.md
@@ -36,7 +36,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application. >
@@ -46,7 +46,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > To register your application and add the app's registration information to your solution manually, follow these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Under **Manage**, select **App registrations** > **New registration**. > 1. Enter a **Name** for your application, for example `python-webapp` . Users of your app might see this name, and you can change it later.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-uwp.md
@@ -36,7 +36,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/UwpQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/UwpQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application for you in one click. >
@@ -44,7 +44,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> [!div renderon="docs"] > #### Step 1: Register your application > To register your application and add the app's registration information to your solution, follow these steps:
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-windows-desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
@@ -33,7 +33,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1: Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application and select **Register**. > 1. Follow the instructions to download and automatically configure your new application with just one click. >
@@ -42,7 +42,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> #### Step 1: Register your application > To register your application and add the app's registration information to your solution manually, follow these steps: >
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. > 1. Search for and select **Azure Active Directory**. > 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-app-manifest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-app-manifest.md
@@ -28,7 +28,7 @@ You can configure an app's attributes through the Azure portal or programmatical
To configure the application manifest:
-1. Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>. Search for and select the **Azure Active Directory** service.
+1. Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. Search for and select the **Azure Active Directory** service.
1. Select **App registrations**. 1. Select the app you want to configure. 1. From the app's **Overview** page, select the **Manifest** section. A web-based manifest editor opens, allowing you to edit the manifest within the portal. Optionally, you can select **Download** to edit the manifest locally, and then use **Upload** to reapply it to your application.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/registration-config-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/registration-config-how-to.md
@@ -20,7 +20,7 @@
You can find the authentication endpoints for your application in the [Azure portal](https://portal.azure.com).
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. Select **Azure Active Directory**. 1. Under **Manage**, select **App registrations**, and then select **Endpoints** in the top menu.
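The Endpoints pane that the steps above open surfaces URLs following a fixed pattern. As a rough sketch (not part of the linked article; the portal remains the authoritative source, and `common` is just a placeholder tenant), the v2.0 endpoints for a tenant can be derived like this:

```python
# Sketch: derive the Microsoft identity platform v2.0 endpoints for a tenant.
# Illustrative only -- copy real values from the portal's Endpoints pane.

def v2_endpoints(tenant: str) -> dict:
    base = f"https://login.microsoftonline.com/{tenant}"
    return {
        "authorization": f"{base}/oauth2/v2.0/authorize",
        "token": f"{base}/oauth2/v2.0/token",
        "discovery": f"{base}/v2.0/.well-known/openid-configuration",
    }

print(v2_endpoints("common")["token"])
# https://login.microsoftonline.com/common/oauth2/v2.0/token
```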
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/registration-config-specific-application-property-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/registration-config-specific-application-property-how-to.md
@@ -21,7 +21,7 @@ This article gives you a brief description of all the available fields in the ap
## Register a new application -- To register a new application, navigate to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+- To register a new application, navigate to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
- From the left navigation pane, click **Azure Active Directory.**
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-app-registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-app-registration.md
@@ -46,7 +46,7 @@ The redirect URIs to use in a desktop application depend on the flow you want to
- If your app uses only Integrated Windows Authentication or a username and a password, you don't need to register a redirect URI for your application. These flows do a round trip to the Microsoft identity platform v2.0 endpoint. Your application won't be called back on any specific URI. - To distinguish [device code flow](scenario-desktop-acquire-token.md#device-code-flow), [Integrated Windows Authentication](scenario-desktop-acquire-token.md#integrated-windows-authentication), and a [username and a password](scenario-desktop-acquire-token.md#username-and-password) from a confidential client application using a client credential flow used in [daemon applications](scenario-daemon-overview.md), none of which requires a redirect URI, configure it as a public client application. To achieve this configuration:
- 1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>, select your app in **App registrations**, and then select **Authentication**.
+ 1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, select your app in **App registrations**, and then select **Authentication**.
1. In **Advanced settings** > **Allow public client flows** > **Enable the following mobile and desktop flows:**, select **Yes**. :::image type="content" source="media/scenarios/default-client-type.png" alt-text="Enable public client setting on Authentication pane in Azure portal":::
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-mobile-app-registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-mobile-app-registration.md
@@ -80,7 +80,7 @@ If your app uses only username-password authentication, you don't need to regist
However, identify your application as a public client application. To do so:
-1. Still in the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>, select your app in **App registrations**, and then select **Authentication**.
+1. Still in the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, select your app in **App registrations**, and then select **Authentication**.
1. In **Advanced settings** > **Allow public client flows** > **Enable the following mobile and desktop flows:**, select **Yes**. :::image type="content" source="media/scenarios/default-client-type.png" alt-text="Enable public client setting on Authentication pane in Azure portal":::
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-spa-app-registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-app-registration.md
@@ -23,7 +23,7 @@ To register a single-page application (SPA) in the Microsoft identity platform,
For both MSAL.js 1.0- and 2.0-based applications, start by completing the following steps to create the initial app registration.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-web-app-sign-user-app-registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-sign-user-app-registration.md
@@ -39,7 +39,7 @@ You can use these links to bootstrap the creation of your web application:
> The portal to use is different depending on whether your application runs in the Microsoft Azure public cloud or in a national or sovereign cloud. For more information, see [National clouds](./authentication-national-cloud.md#app-registration-endpoints).
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-android.md
@@ -70,7 +70,7 @@ If you do not already have an Android application, follow these steps to set up
### Register your application
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-asp-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-asp-webapp.md
@@ -357,7 +357,7 @@ To register your application and add your application registration information t
To quickly register your application, follow these steps:
-1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
1. Enter a name for your application and select **Register**. 1. Follow the instructions to download and automatically configure your new application in a single click.
@@ -371,7 +371,7 @@ To register your application and add the app's registration information to your
1. Right-click the project in Visual Studio, select **Properties**, and then select the **Web** tab. In the **Servers** section, change the **Project Url** setting to the **SSL URL**. 1. Copy the SSL URL. You'll add this URL to the list of Redirect URIs in the Registration portal's list of Redirect URIs in the next step.<br/><br/>![Project properties](media/active-directory-develop-guidedsetup-aspnetwebapp-configure/vsprojectproperties.png)<br />
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-aspnet-daemon-web-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-aspnet-daemon-web-app.md
@@ -90,7 +90,7 @@ If you don't want to use the automation, use the steps in the following sections
### Choose the Azure AD tenant
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
@@ -200,7 +200,7 @@ This project has web app and web API projects. To deploy them to Azure websites,
### Create and publish dotnet-web-daemon-v2 to an Azure website
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. In the upper-left corner, select **Create a resource**. 1. Select **Web** > **Web App**, and then give your website a name. For example, name it **dotnet-web-daemon-v2-contoso.azurewebsites.net**. 1. Select the information for **Subscription**, **Resource group**, and **App service plan and location**. **OS** is **Windows**, and **Publish** is **Code**.
@@ -221,7 +221,7 @@ Visual Studio will publish the project and automatically open a browser to the p
### Update the Azure AD tenant application registration for dotnet-web-daemon-v2
-1. Go back to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Go back to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. In the left pane, select the **Azure Active Directory** service, and then select **App registrations**. 1. Select the **dotnet-web-daemon-v2** application. 1. On the **Authentication** page for your application, update the **Front-channel logout URL** fields with the address of your service. For example, use `https://dotnet-web-daemon-v2-contoso.azurewebsites.net/Account/EndSession`.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-ios.md
@@ -67,7 +67,7 @@ If you'd like to download a completed version of the app you build in this tutor
## Register your application
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-javascript-spa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-javascript-spa.md
@@ -261,7 +261,7 @@ You now have a simple server to serve your SPA. The intended folder structure at
Before proceeding further with authentication, register your application on **Azure Active Directory**.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
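Once a registration like the one above exists, the app redirects users to the authorization endpoint. As an illustrative sketch under stated assumptions (the client ID and redirect URI below are placeholders, not values from this tutorial), the request URL is assembled like this:

```python
# Sketch: assemble an OAuth 2.0 authorization-code request URL for a
# registered app. client_id and redirect_uri are placeholder values;
# the real ones come from the app registration.
from urllib.parse import urlencode

def build_auth_url(tenant: str, client_id: str, redirect_uri: str,
                   scopes: list[str]) -> str:
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return (f"https://login.microsoftonline.com/{tenant}"
            f"/oauth2/v2.0/authorize?{urlencode(params)}")

url = build_auth_url("common", "00000000-0000-0000-0000-000000000000",
                     "http://localhost:3000", ["openid", "profile", "User.Read"])
print(url)
```

In practice a library such as MSAL builds and sends this request for you; the sketch only shows which registration values end up where.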
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-windows-desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-windows-desktop.md
@@ -93,14 +93,14 @@ You can register your application in either of two ways.
### Option 1: Express mode You can quickly register your application by doing the following:
-1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
1. Enter a name for your application and select **Register**. 1. Follow the instructions to download and automatically configure your new application with just one click. ### Option 2: Advanced mode To register your application and add your application registration information to your solution, do the following:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-windows-uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-windows-uwp.md
@@ -341,7 +341,7 @@ private async Task DisplayMessageAsync(string message)
Now, register your application:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application. 1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**.
@@ -352,8 +352,8 @@ Now, register your application:
Configure authentication for your application:
-1. Back in the <a href="https://portal.azure.com/" target="_blank">Azure portal<span class="docon docon-navigate-external x-hidden-focus"></span></a>, under **Manage**, select **Authentication** > **Add a platform**, and then select **Mobile and desktop applications**.
-1. In the **Redirect URIs** section, check **https://login.microsoftonline.com/common/oauth2/nativeclient**.
+1. Back in the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, under **Manage**, select **Authentication** > **Add a platform**, and then select **Mobile and desktop applications**.
+1. In the **Redirect URIs** section, enter `https://login.microsoftonline.com/common/oauth2/nativeclient`.
1. Select **Configure**. Configure API permissions for your application:
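The redirect URI entered in the step above must match what the app sends at sign-in; validation amounts, roughly, to an exact comparison against the registered list. A minimal sketch of that check (the registered list here is hypothetical):

```python
# Sketch: redirect URI validation behaves, roughly, like an exact string
# match against the registered list, which is why the nativeclient URI
# must be entered verbatim. The registered list below is hypothetical.
NATIVE_CLIENT_URI = "https://login.microsoftonline.com/common/oauth2/nativeclient"

def redirect_allowed(requested: str, registered: list[str]) -> bool:
    return requested in registered  # no prefix or wildcard logic

print(redirect_allowed(NATIVE_CLIENT_URI, [NATIVE_CLIENT_URI]))  # True
```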
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
@@ -177,7 +177,7 @@ In general, the permissions should be statically defined for a given application
To configure the list of statically requested permissions for an application:
-1. Go to your application in the <a href="https://go.microsoft.com/fwlink/?linkid=2083908" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
+1. Go to your application in the <a href="https://go.microsoft.com/fwlink/?linkid=2083908" target="_blank">Azure portal - App registrations</a> quickstart experience.
1. Select an application, or [create an app](quickstart-register-app.md) if you haven't already. 1. On the application's **Overview** page, under **Manage**, select **API Permissions** > **Add a permission**. 1. Select **Microsoft Graph** from the list of available APIs. Then add the permissions that your app requires.
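After permissions are statically configured this way, a tenant administrator can grant them for all users via the admin consent endpoint. As a hedged sketch of that URL pattern (the tenant and client ID are placeholders):

```python
# Sketch: the admin consent URL pattern for granting an app's statically
# configured permissions tenant-wide. Tenant and client_id are placeholders.
from urllib.parse import urlencode

def admin_consent_url(tenant: str, client_id: str) -> str:
    return (f"https://login.microsoftonline.com/{tenant}"
            f"/adminconsent?{urlencode({'client_id': client_id})}")

print(admin_consent_url("contoso.onmicrosoft.com",
                        "00000000-0000-0000-0000-000000000000"))
```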
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/licensing-service-plan-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
@@ -32,7 +32,7 @@ When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information is accurate as of September 22, 2020.
+>This information is accurate as of February 2021.
| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | | | | | | |
@@ -41,7 +41,10 @@ When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| AZURE ACTIVE DIRECTORY BASIC | AAD_BASIC | 2b9c8e7c-319c-43a2-a2a0-48c5c6161de7 | AAD_BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) | MICROSOFT AZURE ACTIVE DIRECTORY BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) | | AZURE ACTIVE DIRECTORY PREMIUM P1 | AAD_PREMIUM | 078d2b04-f1bd-4111-bbd4-b4b1b354cef4 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9) | | AZURE ACTIVE DIRECTORY PREMIUM P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998) |
-| AZURE INFORMATION PROTECTION PLAN 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)|
+| AZURE INFORMATION PROTECTION PLAN 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
+| COMMON AREA PHONE | MCOCAP | 295a8eb0-f78d-45c7-8b5b-1eed5ed02dff | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
+| COMMUNICATIONS CREDITS | MCOPSTNC | 47794cd0-f0e5-45c5-9033-2eb6b5fc84e0 | MCOPSTNC (505e180f-f7e0-4b65-91d4-00d670bbd18c) | COMMUNICATIONS CREDITS (505e180f-f7e0-4b65-91d4-00d670bbd18c) |
| DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN ENTERPRISE EDITION | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>FLOW FOR DYNAMICS 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>PROJECT ONLINE SERVICE (fe71d6c3-a2ea-4499-9778-da042bf08063) | | DYNAMICS 365 FOR CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_CUSTOMER_SERVICE | 749742bf-0d37-4158-a120-33567104deeb | DYN365_ENTERPRISE_CUSTOMER_SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR CUSTOMER SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR FINANCIALS BUSINESS EDITION | DYN365_FINANCIALS_BUSINESS_SKU | cc13a803-544e-4464-b4e4-6d6169a138fa | DYN365_FINANCIALS_BUSINESS (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR FINANCIALS (920656a2-7dd8-4c83-97b6-a356414dbd36) |
@@ -69,10 +72,11 @@ When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT 365 APPS FOR BUSINESS | SMB_BUSINESS | b214fe43-f5a3-4703-beeb-fa97188220fc | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | MICROSOFT 365 APPS FOR ENTERPRISE | OFFICESUBSCRIPTION | c2273bd0-dff7-4215-9ef5-2c7bcfb06425 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | MICROSOFT 365 BUSINESS BASIC | O365_BUSINESS_ESSENTIALS | 3b555118-da6a-4418-894f-7df1e2096870 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
-| MICROSOFT 365 BUSINESS BASIC | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) |
+| MICROSOFT 365 BUSINESS BASIC | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) |
| MICROSOFT 365 BUSINESS STANDARD | O365_BUSINESS_PREMIUM | f245ecc8-75af-4f8e-b61f-27d8114de5f3 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| MICROSOFT 365 BUSINESS STANDARD | SMB_BUSINESS_PREMIUM | ac5cef5d-921b-4f97-9ef3-c99076e5470f | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) |
| MICROSOFT 365 BUSINESS PREMIUM | SPB | cbdc14ab-d96c-4c30-b9f4-6ada7cdc1d46 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINBIZ (8e229017-d77b-43d5-9305-903395523b99)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINDOWS 10 BUSINESS (8e229017-d77b-43d5-9305-903395523b99)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| MICROSOFT 365 DOMESTIC CALLING PLAN (120 Minutes) | MCOPSTN_5 | 11dee6af-eca8-419f-8061-6864517c1875 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | MICROSOFT 365 DOMESTIC CALLING PLAN (120 min) (54a152dc-90de-4996-93d2-bc47e670fc06) |
| MICROSOFT 365 E3 | SPE_E3 | 05e9a617-0261-4cee-bb44-138d3ef5d965 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MICROSOFT FORMS (PLAN E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 E5 | SPE_E5 | 06ebc4ee-1bb5-47dd-8120-11324bc54e06 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender Advanced Threat Protection (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Advanced Threat Protection (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Office 365 Advanced Threat Protection (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 E3_USGOV_DOD | SPE_E3_USGOV_DOD | d61d61cc-f992-433f-a577-5bd016037eeb | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_DOD (fd500458-c24c-478e-856c-a6067a8376cd)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for DOD (AR) (fd500458-c24c-478e-856c-a6067a8376cd)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
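Each "Service plans included" cell above packs its entries as `NAME (guid)<br/>NAME (guid)...`. If you scrape this table programmatically, a small parser can split such a cell into (plan name, GUID) pairs. This is an illustrative sketch, not part of the article; the function name `parse_service_plans` is hypothetical.

```python
import re

# GUIDs in the table are lowercase hex in 8-4-4-4-12 form.
GUID = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"

def parse_service_plans(cell: str) -> list[tuple[str, str]]:
    """Split a <br/>-delimited table cell into (plan name, plan GUID) tuples."""
    pairs = []
    for entry in cell.split("<br/>"):
        # Lazy name match plus the end anchor lets plan names that themselves
        # contain parentheses, e.g. "EXCHANGE ONLINE (PLAN 1)", parse correctly.
        m = re.match(rf"\s*(.*?)\s*\(({GUID})\)\s*$", entry)
        if m:
            pairs.append((m.group(1), m.group(2)))
    return pairs

# Sample cell built from two entries that appear in the table above.
cell = ("MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>"
        "TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)")
print(parse_service_plans(cell))
```

The missing-space variants in the raw cells (for example `MICROSOFT PLANNER(b737dad2-...)`) still parse, because the regex allows zero whitespace before the opening parenthesis.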
@@ -93,6 +97,7 @@ When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT 365 PHONE SYSTEM FOR TELSTRA | MCOEV_TELSTRA | ffaf2d68-1c95-4eb3-9ddd-59b81fba0f61 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 PHONE SYSTEM_USGOV_DOD | MCOEV_USGOV_DOD | b0e7de67-e503-4934-b729-53d595ba5cd1 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 PHONE SYSTEM_USGOV_GCCHIGH | MCOEV_USGOV_GCCHIGH | 985fcb26-7b94-475b-b512-89356697be71 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
+| MICROSOFT 365 PHONE SYSTEM - VIRTUAL USER | MCOEV_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | MICROSOFT 365 PHONE SYSTEM VIRTUAL USER (f47330e9-c134-43b3-9993-e7f004506889) |
| Microsoft Defender Advanced Threat Protection | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender Advanced Threat Protection (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
| MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| MICROSOFT DYNAMICS CRM ONLINE | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
@@ -111,14 +116,12 @@ When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| OFFICE 365 E4 | ENTERPRISEWITHSCAL | 1392051d-0cb9-4b7a-88d5-621fee5e8711 | BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MCOVOICECONF (27216c54-caf8-4d0d-97e2-517afb5c08f6)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MICROSOFT FORMS (PLAN E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 3) (27216c54-caf8-4d0d-97e2-517afb5c08f6)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| OFFICE 365 E5 | ENTERPRISEPREMIUM | c7df2760-2c81-4ef7-b578-5b5392b571df | ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | OFFICE 365 CLOUD APP SECURITY (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>OFFICE 365 ADVANCED EDISCOVERY (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>MICROSOFT FORMS (PLAN E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>AUDIO CONFERENCING (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>OFFICE 365 ADVANCED THREAT PROTECTION (PLAN 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| OFFICE 365 E5 WITHOUT AUDIO CONFERENCING | ENTERPRISEPREMIUM_NOPSTNCONF | 26d45bd9-adf1-46cd-a9e1-51e9a5524128 | ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | OFFICE 365 CLOUD APP SECURITY (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>OFFICE 365 ADVANCED EDISCOVERY (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>MICROSOFT FORMS (PLAN E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>OFFICE 365 ADVANCED THREAT PROTECTION (PLAN 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
-| OFFICE 365 F1 | DESKLESSPACK | 4b585984-651b-448a-9e53-3b10f069cf7f | BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE KIOSK (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW FOR OFFICE 365 K1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>MICROSOFT FORMS (PLAN K) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 K1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>SHAREPOINT ONLINE KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| OFFICE 365 F3 | DESKLESSPACK | 4b585984-651b-448a-9e53-3b10f069cf7f | BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>INTUNE_365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_FIRSTLINE_1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>COMMON DATA SERVICE - O365 F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>COMMON DATA SERVICE FOR TEAMS_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE ONLINE KIOSK (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW FOR OFFICE 365 K1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>MICROSOFT AZURE RIGHTS MANAGEMENT SERVICE (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>MICROSOFT FORMS (PLAN F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>MICROSOFT 
KAIZALA PRO PLAN 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>POWERAPPS FOR OFFICE 365 K1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>PROJECT FOR OFFICE (PLAN F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (FIRSTLINE) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | OFFICE 365 MIDSIZE BUSINESS | MIDSIZEPACK | 04a7fb0d-32e0-4241-b4f5-3f7618cd1162 | EXCHANGE_S_STANDARD_MIDMARKET (fc52cc4b-ed7d-472d-bbe7-b081c23ecc56)<br/>MCOSTANDARD_MIDMARKET (b2669e95-76ef-4e7e-a367-002f60a39f3e)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTENTERPRISE_MIDMARKET (6b5b6a67-fc72-4a1f-a2b5-beecf05de761)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | EXCHANGE ONLINE PLAN 1(fc52cc4b-ed7d-472d-bbe7-b081c23ecc56)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR MIDSIZE(b2669e95-76ef-4e7e-a367-002f60a39f3e)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTENTERPRISE_MIDMARKET (6b5b6a67-fc72-4a1f-a2b5-beecf05de761)<br/>OFFICE ONLINE 
(e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | OFFICE 365 SMALL BUSINESS | LITEPACK | bd09678e-b83c-4d3f-aaba-3dad4abd128b | EXCHANGE_L_STANDARD (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>MCOLITE (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | EXCHANGE ONLINE (P1)(d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN P1) (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | OFFICE 365 SMALL BUSINESS PREMIUM | LITEPACK_P2 | fc14ec4a-4169-49a4-a51e-2c852931814b | EXCHANGE_L_STANDARD (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>MCOLITE (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>OFFICE_PRO_PLUS_SUBSCRIPTION_SMBIZ (8ca59559-e2ca-470b-b7dd-afd8c0dee963)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | EXCHANGE ONLINE (P1)(d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN P1) (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>OFFICE_PRO_PLUS_SUBSCRIPTION_SMBIZ (8ca59559-e2ca-470b-b7dd-afd8c0dee963)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | ONEDRIVE FOR BUSINESS (PLAN 1) | WACONEDRIVESTANDARD | e6778190-713e-4e4f-9119-8b8238de25df | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | ONEDRIVE FOR BUSINESS (PLAN 2) | 
WACONEDRIVEENTERPRISE | ed01faf2-1d88-4947-ae91-45ca18703a96 | ONEDRIVEENTERPRISE (afcafa6a-d966-4462-918c-ec0b4e0fe642)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | ONEDRIVEENTERPRISE (afcafa6a-d966-4462-918c-ec0b4e0fe642)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
-| POWER APPS PER USER PLAN | POWERAPPS_PER_USER | b30411f5-fea1-4a59-9ad9-3db7c7ead579 | |
| POWER BI (FREE) | POWER_BI_STANDARD | a403ebcc-fae0-4ca2-8c8c-7a907fd6c235 | BI_AZURE_P0 (2049e525-b859-401b-b2a0-e0a31c4b1fe4)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER BI (FREE) (2049e525-b859-401b-b2a0-e0a31c4b1fe4) | | POWER BI FOR OFFICE 365 ADD-ON | POWER_BI_ADDON | 45bc2c81-6072-436a-9b0b-3b12eefbc402 | BI_AZURE_P1 (2125cfd7-2110-4567-83c4-c1cd5275163d)<br/>SQL_IS_SSIM (fc0a60aa-feee-4746-a0e3-aecfe81a38dd) | MICROSOFT POWER BI REPORTING AND ANALYTICS PLAN 1 (2125cfd7-2110-4567-83c4-c1cd5275163d)<br/>MICROSOFT POWER BI INFORMATION SERVICES PLAN 1 (fc0a60aa-feee-4746-a0e3-aecfe81a38dd) | | POWER BI PRO | POWER_BI_PRO | f8a1db68-be16-40ed-86d5-cb42ce701560 | BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
@@ -135,6 +138,7 @@ When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| SKYPE FOR BUSINESS PSTN DOMESTIC AND INTERNATIONAL CALLING | MCOPSTN2 | d3b4fe1f-9992-4930-8acb-ca6ec609365e | MCOPSTN2 (5a10155d-f5c1-411a-a8ec-e99aae125390) | DOMESTIC AND INTERNATIONAL CALLING PLAN (5a10155d-f5c1-411a-a8ec-e99aae125390) | | SKYPE FOR BUSINESS PSTN DOMESTIC CALLING | MCOPSTN1 | 0dab259f-bf13-4952-b7f8-7db8f131b28d | MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) | DOMESTIC CALLING PLAN (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) | | SKYPE FOR BUSINESS PSTN DOMESTIC CALLING (120 Minutes)| MCOPSTN5 | 54a152dc-90de-4996-93d2-bc47e670fc06 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | DOMESTIC CALLING PLAN (54a152dc-90de-4996-93d2-bc47e670fc06) |
+| TELSTRA CALLING FOR O365 | MCOPSTNEAU2 | de3312e1-c7b0-46e6-a7c3-a515ff90bc86 | MCOPSTNEAU (7861360b-dc3b-4eba-a3fc-0d323a035746) | AUSTRALIA CALLING PLAN (7861360b-dc3b-4eba-a3fc-0d323a035746) |
| VISIO ONLINE PLAN 1 | VISIOONLINE_PLAN1 | 4b244418-9658-4451-a2b8-b5e2b364e9bd | ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | VISIO Online Plan 2 | VISIOCLIENT | c5928f49-12ba-48f7-ada3-0d743a3601d5 | ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | WINDOWS 10 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) |
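Rows like the ones above map each SKU's GUID and string ID to its service plans. When auditing assigned licenses programmatically, a small lookup table built from this reference keeps reports human-readable. A minimal sketch, assuming a dictionary seeded with a few rows copied from the table (the entries shown are illustrative, not exhaustive):

```python
# Minimal lookup from skuId GUID to (string ID, product name), built from a
# few rows of the reference table above; extend with the rows your tenant uses.
SKU_LOOKUP = {
    "4b585984-651b-448a-9e53-3b10f069cf7f": ("DESKLESSPACK", "OFFICE 365 F3"),
    "a403ebcc-fae0-4ca2-8c8c-7a907fd6c235": ("POWER_BI_STANDARD", "POWER BI (FREE)"),
    "c5928f49-12ba-48f7-ada3-0d743a3601d5": ("VISIOCLIENT", "VISIO Online Plan 2"),
}

def describe_sku(sku_id: str) -> str:
    """Return 'STRING_ID (product name)' for a known skuId, else the raw GUID."""
    entry = SKU_LOOKUP.get(sku_id.lower())  # GUIDs compare case-insensitively
    return f"{entry[0]} ({entry[1]})" if entry else sku_id

print(describe_sku("C5928F49-12BA-48F7-ADA3-0D743A3601D5"))  # VISIOCLIENT (VISIO Online Plan 2)
```

Keying on the GUID rather than the string ID matters because string IDs occasionally change (for example, both OFFICE 365 F1 and F3 use DESKLESSPACK) while the GUID stays stable.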
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sso-how-it-works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-how-it-works.md
@@ -62,6 +62,10 @@ The sign-in flow on a web browser is as follows:
6. Active Directory locates the computer account and returns a Kerberos ticket to the browser encrypted with the computer account's secret. 7. The browser forwards the Kerberos ticket it acquired from Active Directory to Azure AD. 8. Azure AD decrypts the Kerberos ticket, which includes the identity of the user signed into the corporate device, using the previously shared key.+
+ >[!NOTE]
+ >Azure AD will attempt to match the user's UPN from the Kerberos ticket to an Azure AD user object that has a corresponding value in the userPrincipalName attribute. If this is not successful, Azure AD will fall back to matching the samAccountName from the Kerberos ticket to an Azure AD user object that has a corresponding value in the onPremisesSamAccountName attribute.
+
9. After evaluation, Azure AD either returns a token back to the application or asks the user to perform additional proofs, such as Multi-Factor Authentication. 10. If the user sign-in is successful, the user is able to access the application.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-faq.md
@@ -101,6 +101,15 @@ The default length is 85 seconds. The "long" setting is 180 seconds. The timeout
No, this is currently not supported.
+### What happens if I delete CWAP_AuthSecret (the client secret) in the app registration?
+
+The client secret, also called *CWAP_AuthSecret*, is automatically added to the application object (app registration) when the Azure AD Application Proxy app is created.
+
+The client secret is valid for one year. A new one-year client secret is automatically created before the current valid client secret expires. Three CWAP_AuthSecret client secrets are kept in the application object at all times.
+
+> [!IMPORTANT]
+> Deleting CWAP_AuthSecret breaks pre-authentication for Azure AD Application Proxy. Don't delete CWAP_AuthSecret.
+ ### How do I change the landing page my application loads? From the Application Registrations page, you can change the homepage URL to the desired external URL of the landing page. The specified page will load when the application is launched from My Apps or the Office 365 Portal. For configuration steps, see [Set a custom home page for published apps by using Azure AD Application Proxy](./application-proxy-configure-custom-home-page.md)
@@ -182,11 +191,11 @@ No. Azure AD Application Proxy is designed to work with Azure AD and doesn't f
## WebSocket
-### Does WebSocket support work for applications other than QlikSense?
+### Does WebSocket support work for applications other than QlikSense and Remote Desktop Web Client (HTML5)?
Currently, WebSocket protocol support is still in public preview and it may not work for other applications. Some customers have had mixed success using WebSocket protocol with other applications. If you test such scenarios, we would love to hear your results. Please send us your feedback at aadapfeedback@microsoft.com.
-Features (Eventlogs, PowerShell and Remote Desktop Services) in Windows Admin Center (WAC) or Remote Desktop Web Client (HTML5) do not work through Azure AD Application Proxy presently.
+Features (Eventlogs, PowerShell and Remote Desktop Services) in Windows Admin Center (WAC) do not work through Azure AD Application Proxy presently.
## Link translation
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/migrate-adfs-apps-to-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
@@ -1,6 +1,6 @@
Title: 'Moving application authentication from AD FS to Azure Active Directory'
-description: This article is intended to help organizations understand how to move applications to Azure AD, with a focus on federated SaaS applications.
+ Title: Moving application authentication from AD FS to Azure Active Directory
+description: This article is intended to help organizations understand how to move applications to Azure Active Directory, with a focus on federated SaaS applications.
@@ -8,12 +8,9 @@
Previously updated : 04/01/2020 Last updated : 02/10/2021 - # Moving application authentication from Active Directory Federation Services to Azure Active Directory
@@ -21,7 +18,7 @@
[Azure Active Directory (Azure AD)](../fundamentals/active-directory-whatis.md) offers a universal identity platform that provides your people, partners, and customers a single identity to access applications and collaborate from any platform and device. Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md). Standardizing your application (app) authentication and authorization to Azure AD enables the benefits these capabilities provide. > [!TIP]
-> This article is written for a developer audience. Project managers and administrators planning an application's move to Azure AD should consider reading our [Migrating application authentication to Azure AD](https://aka.ms/migrateapps/whitepaper) white paper (PDF).
+> This article is written for a developer audience. Project managers and administrators planning an application's move to Azure AD should consider reading our [Migrating application authentication to Azure AD](migrate-application-authentication-to-azure-active-directory.md) article.
## Introduction
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
@@ -62,6 +62,14 @@ This role also grants the ability to _consent_ to delegated permissions and appl
Users in this role can create application registrations when the "Users can register applications" setting is set to No. This role also grants permission to consent on one's own behalf when the "Users can consent to apps accessing company data on their behalf" setting is set to No. Users assigned to this role are added as owners when creating new application registrations or enterprise applications.
+### [Attack Payload Author](#attack-payload-author-permissions)
+
+Users in this role can create attack payloads but not actually launch or schedule them. Attack payloads are then available to all administrators in the tenant who can use them to create a simulation.
+
+### [Attack Simulation Administrator](#attack-simulation-administrator-permissions)
+
+Users in this role can create and manage all aspects of attack simulation creation, launch/scheduling of a simulation, and the review of simulation results. Members of this role have this access for all simulations in the tenant.
+ ### [Authentication Administrator](#authentication-administrator-permissions) Users with this role can set or reset non-password credentials for some users and can update passwords for all users. Authentication administrators can require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in. Whether an Authentication Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that an Authentication Administrator can reset passwords for, see [Password reset permissions](#password-reset-permissions).
@@ -77,14 +85,6 @@ The [Privileged Authentication Administrator](#privileged-authentication-adminis
>* Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. >* Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
-### [Attack Payload Author](#attack-payload-author-permissions)
-
-Users in this role can create attack payloads but not actually launch or schedule them. Attack payloads are then available to all administrators in the tenant who can use them to create a simulation.
-
-### [Attack Simulation Administrator](#attack-simulation-administrator-permissions)
-
-Users in this role can create and manage all aspects of attack simulation creation, launch/scheduling of a simulation, and the review of simulation results. Members of this role have this access for all simulations in the tenant.
- ### [Azure AD Joined Device Local Administrator](#azure-ad-joined-device-local-administrator-permissions)/Device Administrators This role is available for assignment only as an additional local administrator in [Device settings](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/DeviceSettings/menuId/). Users with this role become local machine administrators on all Windows 10 devices that are joined to Azure Active Directory. They do not have the ability to manage devices objects in Azure Active Directory.
@@ -560,22 +560,6 @@ Can create application registrations independent of the 'Users can register appl
> | microsoft.directory/oAuth2PermissionGrants/createAsOwner | Create oAuth2PermissionGrants in Azure Active Directory. Creator is added as the first owner, and the created object counts against the creator's 250 created objects quota. | > | microsoft.directory/servicePrincipals/createAsOwner | Create servicePrincipals in Azure Active Directory. Creator is added as the first owner, and the created object counts against the creator's 250 created objects quota. |
-### Authentication Administrator permissions
-
-Allowed to view, set and reset authentication method information for any non-admin user.
-
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | microsoft.directory/users/invalidateAllRefreshTokens | Invalidate all user refresh tokens in Azure Active Directory. |
-> | microsoft.directory/users/strongAuthentication/update | Update strong authentication properties like MFA credential information. |
-> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
-> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-> | microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
-> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
-> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-> | microsoft.directory/users/password/update | Update passwords for all users in the Microsoft 365 organization. See online documentation for more detail. |
- ### Attack Payload Author permissions Can create attack payloads that can be deployed by an administrator later.
@@ -597,6 +581,22 @@ Can create and manage all aspects of attack simulation campaigns.
> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training. | > | microsoft.office365.protectionCenter/attackSimulator/simulation/allProperties/allTasks | Create and manage attack simulation templates in Attack Simulator. |
+### Authentication Administrator permissions
+
+Allowed to view, set and reset authentication method information for any non-admin user.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.directory/users/invalidateAllRefreshTokens | Invalidate all user refresh tokens in Azure Active Directory. |
+> | microsoft.directory/users/strongAuthentication/update | Update strong authentication properties like MFA credential information. |
+> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
+> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
+> | microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
+> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+> | microsoft.directory/users/password/update | Update passwords for all users in the Microsoft 365 organization. See online documentation for more detail. |
+ ### Azure AD Joined Device Local Administrator permissions Users assigned to this role are added to the local administrators group on Azure AD-joined devices.
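A grant list like the Authentication Administrator table above can be checked against a requested action string. The sketch below treats segments named `allEntities`, `allTasks`, and `allProperties` as wildcards — that treatment is an assumption for illustration, not documented Azure AD evaluation logic:

```python
# A few grants copied from the Authentication Administrator table above.
GRANTS = [
    "microsoft.directory/users/password/update",
    "microsoft.azure.serviceHealth/allEntities/allTasks",
    "microsoft.office365.webPortal/allEntities/basic/read",
]

# Assumption: these segment names match any value in the same position.
WILDCARDS = {"allEntities", "allTasks", "allProperties"}

def permits(grant, action):
    """True if `grant` covers `action`, comparing path segments pairwise."""
    g, a = grant.split("/"), action.split("/")
    if len(g) != len(a):
        return False
    return all(gs in WILDCARDS or gs == asg for gs, asg in zip(g, a))

def allowed(action):
    return any(permits(g, action) for g in GRANTS)

print(allowed("microsoft.azure.serviceHealth/incidents/read"))  # True
print(allowed("microsoft.directory/users/password/update"))     # True
print(allowed("microsoft.directory/groups/delete"))             # False
```

Reading the tables this way makes the role boundaries concrete: a literal action grants exactly one operation, while an `allEntities/allTasks` grant covers every operation under that resource namespace.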
@@ -614,7 +614,6 @@ Can manage Azure DevOps organization policy and settings.
> [!NOTE] > This role has additional permissions outside of Azure Active Directory. For more information, see [role description](#azure-devops-administrator) above. - > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
@@ -627,7 +626,6 @@ Can manage all aspects of the Azure Information Protection service.
> [!NOTE] > This role has additional permissions outside of Azure Active Directory. For more information, see [role description](#) above. - > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
@@ -663,7 +661,6 @@ Can perform common billing related tasks like updating payment information.
> [!NOTE] > This role has additional permissions outside of Azure Active Directory. For more information, see role description above. - > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
@@ -738,79 +735,6 @@ Full access to manage devices in Azure AD.
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
-### Global Administrator permissions
-
-Can manage all aspects of Azure AD and Microsoft services that use Azure AD identities.
-
-> [!NOTE]
-> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
--
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | microsoft.aad.cloudAppSecurity/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.aad.cloudAppSecurity. |
-> | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and delete administrativeUnits, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/applications/allProperties/allTasks | Create and delete applications, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/appRoleAssignments/allProperties/allTasks | Create and delete appRoleAssignments, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/auditLogs/allProperties/read | Read all properties (including privileged properties) on auditLogs in Azure Active Directory. |
-> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker key objects and properties (including recovery key) in Azure Active Directory. |
-> | microsoft.directory/contacts/allProperties/allTasks | Create and delete contacts, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/contracts/allProperties/allTasks | Create and delete contracts, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/devices/allProperties/allTasks | Create and delete devices, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directoryRoles, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/directoryRoleTemplates/allProperties/allTasks | Create and delete directoryRoleTemplates, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management. |
-> | microsoft.directory/groups/allProperties/allTasks | Create and delete groups, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update groups with isAssignableToRole property set to true in Azure Active Directory. |
-> | microsoft.directory/groupsAssignableToRoles/create | Create groups with isAssignableToRole property set to true in Azure Active Directory. |
-> | microsoft.directory/groupsAssignableToRoles/delete | Delete groups with isAssignableToRole property set to true in Azure Active Directory. |
-> | microsoft.directory/groupSettings/allProperties/allTasks | Create and delete groupSettings, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/groupSettingTemplates/allProperties/allTasks | Create and delete groupSettingTemplates, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/loginTenantBranding/allProperties/allTasks | Create and delete loginTenantBranding, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete oAuth2PermissionGrants, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/organization/allProperties/allTasks | Create and delete organization, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs. |
-> | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete roleAssignments, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/roleDefinitions/allProperties/allTasks | Create and delete roleDefinitions, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/scopedRoleMemberships/allProperties/allTasks | Create and delete scopedRoleMemberships, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/serviceAction/activateService | Can perform the Activateservice service action in Azure Active Directory |
-> | microsoft.directory/serviceAction/disableDirectoryFeature | Can perform the Disabledirectoryfeature service action in Azure Active Directory |
-> | microsoft.directory/serviceAction/enableDirectoryFeature | Can perform the Enabledirectoryfeature service action in Azure Active Directory |
-> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the Getavailableextentionproperties service action in Azure Active Directory |
-> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete servicePrincipals, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/signInReports/allProperties/read | Read all properties (including privileged properties) on signInReports in Azure Active Directory. |
-> | microsoft.directory/subscribedSkus/allProperties/allTasks | Create and delete subscribedSkus, and read and update all properties in Azure Active Directory. |
-> | microsoft.directory/users/allProperties/allTasks | Create and delete users, and read and update all properties in Azure Active Directory. |
-> | microsoft.directorySync/allEntities/allTasks | Perform all actions in Azure AD Connect. |
-> | microsoft.aad.identityProtection/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.aad.identityProtection. |
-> | microsoft.aad.privilegedIdentityManagement/allEntities/read | Read all resources in microsoft.aad.privilegedIdentityManagement. |
-> | microsoft.azure.advancedThreatProtection/allEntities/read | Read all resources in microsoft.azure.advancedThreatProtection. |
-> | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection. |
-> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
-> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-> | microsoft.commerce.billing/allEntities/allTasks | Manage all aspects of billing. |
-> | microsoft.intune/allEntities/allTasks | Manage all aspects of Intune. |
-> | microsoft.office365.complianceManager/allEntities/allTasks | Manage all aspects of Office 365 Compliance Manager |
-> | microsoft.office365.desktopAnalytics/allEntities/allTasks | Manage all aspects of Desktop Analytics. |
-> | microsoft.office365.exchange/allEntities/allTasks | Manage all aspects of Exchange Online. |
-> | microsoft.office365.lockbox/allEntities/allTasks | Manage all aspects of Office 365 Customer Lockbox |
-> | microsoft.office365.messageCenter/messages/read | Read messages in microsoft.office365.messageCenter. |
-> | microsoft.office365.messageCenter/securityMessages/read | Read securityMessages in microsoft.office365.messageCenter. |
-> | microsoft.office365.protectionCenter/allEntities/allTasks | Manage all aspects of Office 365 Protection Center. |
-> | microsoft.office365.securityComplianceCenter/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.office365.securityComplianceCenter. |
-> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
-> | microsoft.office365.sharepoint/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.office365.sharepoint. |
-> | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online. |
-> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-> | microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
-> | microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
-> | microsoft.powerApps.dynamics365/allEntities/allTasks | Manage all aspects of Dynamics 365. |
-> | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Power BI. |
-> | microsoft.windows.defenderAdvancedThreatProtection/allEntities/read | Read all resources in microsoft.windows.defenderAdvancedThreatProtection. |
-
### Compliance Administrator permissions

Can read and manage compliance configuration and reports in Azure AD and Microsoft 365.
@@ -818,7 +742,6 @@ Can read and manage compliance configuration and reports in Azure AD and Microso
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -837,7 +760,6 @@ Creates and manages compliance content.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -873,7 +795,6 @@ Can manage all aspects of the Dynamics 365 product.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -891,7 +812,6 @@ Can approve Microsoft support requests to access customer organizational data.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -905,7 +825,6 @@ Can manage the Desktop Analytics and Office Customization & Policy services. For
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1051,7 +970,6 @@ Can manage all aspects of the Exchange product.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1098,13 +1016,84 @@ Configure identity providers for use in direct federation.
> | | |
> | microsoft.aad.b2c/identityProviders/allTasks | Read and configure identity providers in Azure Active Directory B2C. |
+### Global Administrator permissions
+
+Can manage all aspects of Azure AD and Microsoft services that use Azure AD identities.
+
+> [!NOTE]
+> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.aad.cloudAppSecurity/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.aad.cloudAppSecurity. |
+> | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and delete administrativeUnits, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/applications/allProperties/allTasks | Create and delete applications, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/appRoleAssignments/allProperties/allTasks | Create and delete appRoleAssignments, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/auditLogs/allProperties/read | Read all properties (including privileged properties) on auditLogs in Azure Active Directory. |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker key objects and properties (including recovery key) in Azure Active Directory. |
+> | microsoft.directory/contacts/allProperties/allTasks | Create and delete contacts, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/contracts/allProperties/allTasks | Create and delete contracts, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/devices/allProperties/allTasks | Create and delete devices, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directoryRoles, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/directoryRoleTemplates/allProperties/allTasks | Create and delete directoryRoleTemplates, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management. |
+> | microsoft.directory/groups/allProperties/allTasks | Create and delete groups, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update groups with isAssignableToRole property set to true in Azure Active Directory. |
+> | microsoft.directory/groupsAssignableToRoles/create | Create groups with isAssignableToRole property set to true in Azure Active Directory. |
+> | microsoft.directory/groupsAssignableToRoles/delete | Delete groups with isAssignableToRole property set to true in Azure Active Directory. |
+> | microsoft.directory/groupSettings/allProperties/allTasks | Create and delete groupSettings, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/groupSettingTemplates/allProperties/allTasks | Create and delete groupSettingTemplates, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/loginTenantBranding/allProperties/allTasks | Create and delete loginTenantBranding, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete oAuth2PermissionGrants, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/organization/allProperties/allTasks | Create and delete organization, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs. |
+> | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete roleAssignments, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/roleDefinitions/allProperties/allTasks | Create and delete roleDefinitions, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/scopedRoleMemberships/allProperties/allTasks | Create and delete scopedRoleMemberships, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/serviceAction/activateService | Can perform the Activateservice service action in Azure Active Directory |
+> | microsoft.directory/serviceAction/disableDirectoryFeature | Can perform the Disabledirectoryfeature service action in Azure Active Directory |
+> | microsoft.directory/serviceAction/enableDirectoryFeature | Can perform the Enabledirectoryfeature service action in Azure Active Directory |
+> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the Getavailableextentionproperties service action in Azure Active Directory |
+> | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete servicePrincipals, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/signInReports/allProperties/read | Read all properties (including privileged properties) on signInReports in Azure Active Directory. |
+> | microsoft.directory/subscribedSkus/allProperties/allTasks | Create and delete subscribedSkus, and read and update all properties in Azure Active Directory. |
+> | microsoft.directory/users/allProperties/allTasks | Create and delete users, and read and update all properties in Azure Active Directory. |
+> | microsoft.directorySync/allEntities/allTasks | Perform all actions in Azure AD Connect. |
+> | microsoft.aad.identityProtection/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.aad.identityProtection. |
+> | microsoft.aad.privilegedIdentityManagement/allEntities/read | Read all resources in microsoft.aad.privilegedIdentityManagement. |
+> | microsoft.azure.advancedThreatProtection/allEntities/read | Read all resources in microsoft.azure.advancedThreatProtection. |
+> | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection. |
+> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
+> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
+> | microsoft.commerce.billing/allEntities/allTasks | Manage all aspects of billing. |
+> | microsoft.intune/allEntities/allTasks | Manage all aspects of Intune. |
+> | microsoft.office365.complianceManager/allEntities/allTasks | Manage all aspects of Office 365 Compliance Manager |
+> | microsoft.office365.desktopAnalytics/allEntities/allTasks | Manage all aspects of Desktop Analytics. |
+> | microsoft.office365.exchange/allEntities/allTasks | Manage all aspects of Exchange Online. |
+> | microsoft.office365.lockbox/allEntities/allTasks | Manage all aspects of Office 365 Customer Lockbox |
+> | microsoft.office365.messageCenter/messages/read | Read messages in microsoft.office365.messageCenter. |
+> | microsoft.office365.messageCenter/securityMessages/read | Read securityMessages in microsoft.office365.messageCenter. |
+> | microsoft.office365.protectionCenter/allEntities/allTasks | Manage all aspects of Office 365 Protection Center. |
+> | microsoft.office365.securityComplianceCenter/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.office365.securityComplianceCenter. |
+> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
+> | microsoft.office365.sharepoint/allEntities/allTasks | Create and delete all resources, and read and update standard properties in microsoft.office365.sharepoint. |
+> | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online. |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+> | microsoft.office365.usageReports/allEntities/read | Read Office 365 usage reports. |
+> | microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
+> | microsoft.powerApps.dynamics365/allEntities/allTasks | Manage all aspects of Dynamics 365. |
+> | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Power BI. |
+> | microsoft.windows.defenderAdvancedThreatProtection/allEntities/read | Read all resources in microsoft.windows.defenderAdvancedThreatProtection. |
+
### Global Reader permissions

Can read everything that a Global Administrator can, but not edit anything.

> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see [role description](#global-reader) above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1301,7 +1290,6 @@ Can manage all aspects of the Intune product.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1343,7 +1331,6 @@ Can manage settings for Microsoft Kaizala.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1371,7 +1358,6 @@ Can manage all aspects of the Skype for Business product.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1391,7 +1377,6 @@ Can read Message Center posts, data privacy messages, groups, domains and subscr
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1405,7 +1390,6 @@ Can read messages and updates for their organization in Message Center only.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1418,7 +1402,6 @@ Can manage commercial purchases for a company, department or team.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1434,7 +1417,6 @@ Can manage network locations and review enterprise network design insights for M
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1447,7 +1429,6 @@ Can manage Office apps' cloud services, including policy and settings management
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1466,7 +1447,6 @@ Do not use - not intended for general use.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1512,7 +1492,6 @@ Do not use - not intended for general use.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1655,7 +1634,6 @@ Can manage role assignments in Azure AD,and all aspects of Privileged Identity M
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1676,7 +1654,6 @@ Can read sign-in and audit reports.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1693,7 +1670,6 @@ Can create and manage all aspects of Microsoft Search settings.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1710,7 +1686,6 @@ Can create and manage the editorial content such as bookmarks, Q and As, locatio
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1724,7 +1699,6 @@ Can read security information and reports,and manage configuration in Azure AD a
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1758,7 +1732,6 @@ Creates and manages security events.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1781,7 +1754,6 @@ Can read security information and reports in Azure AD and Microsoft 365.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1805,7 +1777,6 @@ Can read service health information and manage support tickets.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1822,7 +1793,6 @@ Can manage all aspects of the SharePoint service.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1848,7 +1818,6 @@ Can manage the Microsoft Teams service.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1878,7 +1847,6 @@ Can manage calling and meetings features within the Microsoft Teams service.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1899,7 +1867,6 @@ Can troubleshoot communications issues within Teams using advanced tools.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1915,7 +1882,6 @@ Can troubleshoot communications issues within Teams using basic tools.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -1931,7 +1897,6 @@ Can perform management related tasks on Teams certified devices.
> [!NOTE]
> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
-
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
@@ -2104,7 +2069,6 @@ Authentication Admin | &nbsp; | &nbsp; | :heavy_check_mark: | &nbsp; | :heavy_ch
Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Global Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:\*
Groups Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Guest | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Helpdesk Admin | &nbsp; | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Message Center Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
@@ -2112,7 +2076,6 @@ Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
Privileged Authentication Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
Privileged Role Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Restricted Guest | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
User (no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
User Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/smartsheet-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/smartsheet-provisioning-tutorial.md
@@ -132,22 +132,20 @@ This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to Smartsheet in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Smartsheet for update operations. Select the **Save** button to commit any changes.
- |Attribute|Type|
- |||
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;|
|active|Boolean|
|title|String|
- |userName|String|
|name.givenName|String|
|name.familyName|String|
|phoneNumbers[type eq "work"].value|String|
|phoneNumbers[type eq "mobile"].value|String|
|phoneNumbers[type eq "fax"].value|String|
+ |emails[type eq "work"].value|String|
|externalId|String|
- |roles[primary eq "True"].display|String|
- |roles[primary eq "True"].type|String|
- |roles[primary eq "True"].value|String|
|roles|String|
- urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|
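In the attribute table above, `userName` is the matching attribute and the one flagged as supported for filtering, which means the provisioning service locates existing users with a SCIM `eq` filter on it. A minimal sketch of building such a query URL (the base URL and user value are hypothetical placeholders, not Smartsheet's actual endpoint):

```python
from urllib.parse import quote

def build_scim_filter_url(base_url: str, user_name: str) -> str:
    """Build a SCIM 2.0 /Users query filtering on userName,
    the attribute marked 'Supported for filtering' above."""
    # SCIM filter grammar (RFC 7644, section 3.4.2.2): attribute eq "value"
    scim_filter = f'userName eq "{user_name}"'
    # The filter must be percent-encoded when placed in the query string
    return f"{base_url}/Users?filter={quote(scim_filter)}"

# Hypothetical tenant URL and user, for illustration only
url = build_scim_filter_url("https://scim.example.com/v2", "alice@contoso.com")
print(url)
```

This only illustrates the query shape a SCIM client sends; the Azure AD provisioning service performs the equivalent request internally during its matching step.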
@@ -183,6 +181,7 @@ Once you've configured provisioning, use the following resources to monitor your
## Change log

* 06/16/2020 - Added support for enterprise extension attributes "Cost Center", "Division", "Manager" and "Department" for users.
+* 02/10/2021 - Added support for core attributes "emails[type eq "work"]" for users.
## Additional resources
aks https://docs.microsoft.com/en-us/azure/aks/concepts-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-identity.md
@@ -162,7 +162,7 @@ With the Azure RBAC integration, AKS will use a Kubernetes Authorization webhook
![Azure RBAC for Kubernetes authorization flow](media/concepts-identity/azure-rbac-k8s-authz-flow.png)
-As shown on the above diagram, when using the Azure RBAC integration all requests to the Kubernetes API will follow the same authentication flow as explained on the [Azure Active integration section](#azure-active-directory-integration).
+As shown on the above diagram, when using the Azure RBAC integration all requests to the Kubernetes API will follow the same authentication flow as explained on the [Azure Active Directory integration section](#azure-active-directory-integration).
But after that, instead of solely relying on Kubernetes RBAC for Authorization, the request is actually going to be authorized by Azure, as long as the identity that made the request exists in AAD. If the identity doesn't exist in AAD, for example a Kubernetes service account, then the Azure RBAC won't kick in, and it will be the normal Kubernetes RBAC.
@@ -170,6 +170,8 @@ In this scenario you could give users one of the four built-in roles, or create
This feature will allow you to, for example, not only give users permissions to the AKS resource across subscriptions but set up and give them the role and permissions that they will have inside each of those clusters that controls the access to the Kubernetes API. For example, you can grant the `Azure Kubernetes Service RBAC Viewer` role on the subscription scope and its recipient will be able to list and get all Kubernetes objects from all clusters, but not modify them.
+> [!IMPORTANT]
+> Please note that you need to enable Azure RBAC for Kubernetes authorization before using this feature. For more details and step by step guidance, [see here](manage-azure-rbac.md).
#### Built-in roles
@@ -182,7 +184,6 @@ AKS provides the following four built-in roles. They are similar to the [Kuberne
| Azure Kubernetes Service RBAC Admin | Allows admin access, intended to be granted within a namespace. Allows read/write access to most resources in a namespace (or cluster scope), including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. |
| Azure Kubernetes Service RBAC Cluster Admin | Allows super-user access to perform any action on any resource. It gives full control over every resource in the cluster and in all namespaces. |
-**To learn how to enable Azure RBAC for Kubernetes authorization, [read here](manage-azure-rbac.md).**
## Summary
@@ -231,4 +232,4 @@ For more information on core Kubernetes and AKS concepts, see the following arti
[aks-concepts-storage]: concepts-storage.md
[aks-concepts-network]: concepts-network.md
[operator-best-practices-identity]: operator-best-practices-identity.md
-[upgrade-per-cluster]: ../azure-monitor/insights/container-insights-update-metrics.md#upgrade-per-cluster-using-azure-cli
+[upgrade-per-cluster]: ../azure-monitor/insights/container-insights-update-metrics.md#upgrade-per-cluster-using-azure-cli
aks https://docs.microsoft.com/en-us/azure/aks/ingress-own-tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-own-tls.md
@@ -215,7 +215,7 @@ metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
- nginx.ingress.kubernetes.io/rewrite-target: /$1
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
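The annotation change above (from `/$1` to `/$2`) only makes sense together with an ingress `path` regex that defines two capture groups: with `use-regex: "true"`, `rewrite-target` substitutes numbered groups from the matched path. A hypothetical rule consistent with that rewrite target (the path pattern and service name are illustrative, not from this article):

```yaml
# With rewrite-target: /$2, a request to /hello-world/foo matches
# /hello-world(/|$)(.*): group 1 captures the separator, group 2
# captures "foo", so the upstream service receives /foo.
spec:
  rules:
  - http:
      paths:
      - path: /hello-world(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld   # hypothetical service name
            port:
              number: 80
```

Using `/$1` with this pattern would instead forward only the separator, which is why the capture-group number must match the pattern's structure.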
aks https://docs.microsoft.com/en-us/azure/aks/private-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
@@ -62,7 +62,7 @@ Where `--enable-private-cluster` is a mandatory flag for a private cluster.
> [!NOTE]
> If the Docker bridge address CIDR (172.17.0.1/16) clashes with the subnet CIDR, change the Docker bridge address appropriately.
-## Configure Private DNS Zone
+## Configure Private DNS Zone
The following parameters can be leveraged to configure Private DNS Zone.
@@ -75,7 +75,7 @@ The following parameters can be leveraged to configure Private DNS Zone.
* The AKS Preview version 0.4.71 or later * The api version 2020-11-01 or later
-### Create a private AKS cluster with Private DNS Zone
+### Create a private AKS cluster with Private DNS Zone (Preview)
```azurecli-interactive
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone [none|system|custom private dns zone ResourceId]
```
automation https://docs.microsoft.com/en-us/azure/automation/learn/automation-tutorial-runbook-textual-python-3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/learn/automation-tutorial-runbook-textual-python-3.md
@@ -131,32 +131,32 @@ To do this, the script has to authenticate using the credentials from your Autom
import automationassets

def get_automation_runas_credential(runas_connection):
- from OpenSSL import crypto
- import binascii
- from msrestazure import azure_active_directory
- import adal
+ from OpenSSL import crypto
+ import binascii
+ from msrestazure import azure_active_directory
+ import adal
- # Get the Azure Automation RunAs service principal certificate
- cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
- pks12_cert = crypto.load_pkcs12(cert)
- pem_pkey = crypto.dump_privatekey(crypto.FILETYPE_PEM,pks12_cert.get_privatekey())
+ # Get the Azure Automation RunAs service principal certificate
+ cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
+ pks12_cert = crypto.load_pkcs12(cert)
+ pem_pkey = crypto.dump_privatekey(crypto.FILETYPE_PEM,pks12_cert.get_privatekey())
+
+ # Get run as connection information for the Azure Automation service principal
+ application_id = runas_connection["ApplicationId"]
+ thumbprint = runas_connection["CertificateThumbprint"]
+ tenant_id = runas_connection["TenantId"]
- # Get run as connection information for the Azure Automation service principal
- application_id = runas_connection["ApplicationId"]
- thumbprint = runas_connection["CertificateThumbprint"]
- tenant_id = runas_connection["TenantId"]
-
- # Authenticate with service principal certificate
- resource ="https://management.core.windows.net/"
- authority_url = ("https://login.microsoftonline.com/"+tenant_id)
- context = adal.AuthenticationContext(authority_url)
- return azure_active_directory.AdalAuthentication(
- lambda: context.acquire_token_with_client_certificate(
- resource,
- application_id,
- pem_pkey,
- thumbprint)
- )
+ # Authenticate with service principal certificate
+ resource ="https://management.core.windows.net/"
+ authority_url = ("https://login.microsoftonline.com/"+tenant_id)
+ context = adal.AuthenticationContext(authority_url)
+ return azure_active_directory.AdalAuthentication(
+ lambda: context.acquire_token_with_client_certificate(
+ resource,
+ application_id,
+ pem_pkey,
+ thumbprint)
+ )
# Authenticate to Azure using the Azure Automation RunAs service principal
runas_connection = automationassets.get_automation_connection("AzureRunAsConnection")
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-high-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-high-availability.md
@@ -4,9 +4,8 @@ description: Learn about Azure Cache for Redis high availability features and op
Previously updated : 10/28/2020 Last updated : 02/08/2021 - # High availability for Azure Cache for Redis
@@ -16,10 +15,9 @@ Azure Cache for Redis implements high availability by using multiple VMs, called
| Option | Description | Availability | Standard | Premium | Enterprise |
| - | - | - | :-: | :-: | :-: |
-| [Standard replication](#standard-replication)| Dual-node replicated configuration in a single datacenter or availability zone (AZ), with automatic failover | 99.9% |✔|✔|-|
-| [Enterprise cluster](#enterprise-cluster) | Linked cache instances in two regions, with automatic failover | 99.9% |-|-|✔|
-| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | 99.95% (standard replication), 99.99% (Enterprise cluster) |-|✔|✔|
-| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | 99.9% (for a single region) |-|✔|-|
+| [Standard replication](#standard-replication)| Dual-node replicated configuration in a single datacenter with automatic failover | 99.9% |✔|✔|-|
+| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | 99.95% (Premium tier), 99.99% (Enterprise tiers) |-|Preview|✔|
+| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | 99.99% (Premium tier) |-|✔|-|
## Standard replication
@@ -43,27 +41,18 @@ A primary node can go out of service as part of a planned maintenance activity s
In addition, Azure Cache for Redis allows additional replica nodes in the Premium tier. A [multi-replica cache](cache-how-to-multi-replicas.md) can be configured with up to three replica nodes. Having more replicas generally improves resiliency because of the additional nodes backing up the primary. Even with more replicas, an Azure Cache for Redis instance still can be severely impacted by a datacenter- or AZ-wide outage. You can increase cache availability by using multiple replicas in conjunction with [zone redundancy](#zone-redundancy).
-## Enterprise cluster
-
->[!NOTE]
->This is available as a preview.
->
->
-
-A cache in either Enterprise tier runs on a Redis Enterprise cluster. It requires an odd number of server nodes at all times to form a quorum. By default, it's comprised of three nodes, each hosted on a dedicated VM. An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*. An Enterprise Flash cache has three same-sized data nodes. The Enterprise cluster divides Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never colocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
+## Zone redundancy
-When a data node becomes unavailable or a network split happens, a failover similar to the one described in [Standard replication](#standard-replication) takes place. The Enterprise cluster uses a quorum-based model to determine which surviving nodes will participate in a new quorum. It also promotes replica partitions within these nodes to primaries as needed.
+Azure Cache for Redis supports zone redundant configurations in the Premium and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../availability-zones/az-overview.md) in the same region. It eliminates datacenter or AZ outage as a single point of failure and increases the overall availability of your cache.
-## Zone redundancy
+### Premium tier
>[!NOTE]
>This is available as a preview.
>
>
-Azure Cache for Redis supports zone redundant configurations in the Premium and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../availability-zones/az-overview.md) in the same region. It eliminates datacenter or AZ outage as a single point of failure and increases the overall availability of your cache.
-
-The following diagram illustrates the zone redundant configuration:
+The following diagram illustrates the zone redundant configuration for the Premium tier:
:::image type="content" source="media/cache-high-availability/zone-redundancy.png" alt-text="Zone redundancy setup":::
@@ -71,9 +60,17 @@ Azure Cache for Redis distributes nodes in a zone redundant cache in a round-rob
A zone redundant cache provides automatic failover. When the current primary node is unavailable, one of the replicas will take over. Your application may experience higher cache response time if the new primary node is located in a different AZ. AZs are geographically separated. Switching from one AZ to another alters the physical distance between where your application and cache are hosted. This change impacts round-trip network latencies from your application to the cache. The extra latency is expected to fall within an acceptable range for most applications. We recommend that you test your application to ensure that it can perform well with a zone-redundant cache.
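The round-robin node placement described above can be sketched in Python (illustrative only; `assign_zones` is a hypothetical helper, not part of any Azure SDK):

```python
# Illustrative only: round-robin placement of cache nodes across availability
# zones, mirroring the distribution behavior described above.
def assign_zones(node_count, zones):
    """Return the zone for each node, cycling through the zones in order."""
    return [zones[i % len(zones)] for i in range(node_count)]
```

For example, a four-node cache spread over three zones places the fourth node back in the first zone, so losing any single zone still leaves a majority of nodes running.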
+### Enterprise tiers
+
+A cache in either Enterprise tier runs on a Redis Enterprise cluster. It requires an odd number of server nodes at all times to form a quorum. By default, it's comprised of three nodes, each hosted on a dedicated VM. An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*. An Enterprise Flash cache has three same-sized data nodes. The Enterprise cluster divides Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never colocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
+
+When a data node becomes unavailable or a network split happens, a failover similar to the one described in [Standard replication](#standard-replication) takes place. The Enterprise cluster uses a quorum-based model to determine which surviving nodes will participate in a new quorum. It also promotes replica partitions within these nodes to primaries as needed.
+ ## Geo-replication
-Geo-replication is designed mainly for disaster recovery. It gives you the ability to configure an Azure Cache for Redis instance, in a different Azure region, to back up your primary cache. [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md) gives a detailed explanation on how geo-replication works.
+[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two Azure Cache for Redis instances, typically spanning two Azure regions. One cache is chosen as the primary linked cache, and the other as the secondary linked cache. Only the primary linked cache accepts read and write requests. Data written to the primary cache is replicated to the secondary linked cache. The secondary linked cache can be used to serve read requests. Data transfer between the primary and secondary cache instances is secured by TLS.
+
+Geo-replication is designed mainly for disaster recovery. It gives you the ability to back up your cache data to a different region. By default, your application writes to and reads from the primary region. It can optionally be configured to read from the secondary region. Geo-replication doesn't provide automatic failover due to concerns over added network latency between regions if the rest of your application remains in the primary region. You'll need to manage and initiate the failover by unlinking the secondary cache. This will promote it to be the new primary instance.
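As a purely illustrative sketch (not the Azure SDK; `GeoReplicatedCache` and its dict-backed stores are hypothetical stand-ins for two Redis instances), the read/write routing and manual failover described above might look like:

```python
# Illustrative only: models the traffic pattern of a geo-replicated pair.
# Plain dicts stand in for the two cache instances; in the real service,
# replication is asynchronous and failover is done by unlinking the caches.
class GeoReplicatedCache:
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def set(self, key, value):
        # Only the primary linked cache accepts writes; the change is then
        # replicated to the secondary (modeled synchronously here).
        self.primary[key] = value
        self.secondary[key] = value

    def get(self, key, prefer_secondary=False):
        # Reads default to the primary but may be served by the secondary.
        store = self.secondary if prefer_secondary else self.primary
        return store.get(key)

    def failover(self):
        # "Unlinking" promotes the secondary to be the new primary.
        self.primary, self.secondary = self.secondary, self.primary
```

The key design point mirrored here is that failover is user-initiated: nothing in the sketch (or the service) swaps the roles automatically.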
## Next steps
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-geo-replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
@@ -1,20 +1,22 @@
Title: How to set up geo-replication for Azure Cache for Redis | Microsoft Docs
-description: Learn how to replicate your Azure Cache for Redis instances across geographical regions.
+ Title: Configure geo-replication for a Premium Azure Cache for Redis instance
+description: Learn how to replicate your Azure Cache for Redis Premium instances across Azure regions
- Previously updated : 03/06/2019 Last updated : 02/08/2021 -
-# How to set up geo-replication for Azure Cache for Redis
+# Configure geo-replication for a Premium Azure Cache for Redis instance
+
+In this article, you'll learn how to configure a geo-replicated Azure Cache instance using the Azure portal.
-Geo-replication provides a mechanism for linking two Premium tier Azure Cache for Redis instances. One cache is chosen as the primary linked cache, and the other as the secondary linked cache. The secondary linked cache becomes read-only, and data written to the primary cache is replicated to the secondary linked cache. Data transfer between the primary and secondary cache instances is secured by TLS. Geo-replication can be used to set up a cache that spans two Azure regions. This article provides a guide to configuring geo-replication for your Premium tier Azure Cache for Redis instances.
+Geo-replication links together two Premium Azure Cache for Redis instances and creates a data replication relationship. These cache instances are usually located in different Azure regions, though they aren't required to be. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary. This process continues until the link between the two instances is removed.
> [!NOTE]
-> Geo-replication is designed as a disaster-recovery solution. By default, your application will write to and read from the primary region. It can optionally be configured to read from the secondary region. Geo-replication doesn't provide automatic failover due to concerns over added network latency between regions if the rest of your application remains in the primary region. You'll need to manage and initiate the failover by unlinking the secondary cache. This will promote it to be the new primary instance.
+> Geo-replication is designed as a disaster-recovery solution.
+>
+>
## Geo-replication prerequisites
@@ -107,7 +109,7 @@ After geo-replication is configured, the following restrictions apply to your li
- [Why did the operation fail when I tried to delete my linked cache?](#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - [What region should I use for my secondary linked cache?](#what-region-should-i-use-for-my-secondary-linked-cache) - [How does failing over to the secondary linked cache work?](#how-does-failing-over-to-the-secondary-linked-cache-work)-- [Can i configure Firewall with geo-replication?](#can-i-configure-a-firewall-with-geo-replication)
+- [Can I configure Firewall with geo-replication?](#can-i-configure-a-firewall-with-geo-replication)
### Can I use geo-replication with a Standard or Basic tier cache?
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-monitor.md
@@ -1,14 +1,13 @@
Title: How to monitor Azure Cache for Redis
+ Title: Monitor Azure Cache for Redis
description: Learn how to monitor the health and performance of your Azure Cache for Redis instances Previously updated : 07/13/2017 Last updated : 02/08/2021 -
-# How to monitor Azure Cache for Redis
+# Monitor Azure Cache for Redis
Azure Cache for Redis uses [Azure Monitor](../azure-monitor/index.yml) to provide several options for monitoring your cache instances. You can view metrics, pin metrics charts to the Startboard, customize the date and time range of monitoring charts, add and remove metrics from the charts, and set alerts when certain conditions are met. These tools enable you to monitor the health of your Azure Cache for Redis instances and help you manage your caching applications.
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-premium-clustering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md
@@ -2,15 +2,14 @@
Title: Configure Redis clustering - Premium Azure Cache for Redis description: Learn how to create and manage Redis clustering for your Premium tier Azure Cache for Redis instances + Previously updated : 10/09/2020 Last updated : 02/08/2021
-# How to configure Redis clustering for a Premium Azure Cache for Redis
-Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features, including Premium tier features such as clustering, persistence, and virtual network support. This article describes how to configure clustering in a premium Azure Cache for Redis instance.
+# Configure Redis clustering for a Premium Azure Cache for Redis instance
-## What is Redis Cluster?
Azure Cache for Redis offers Redis cluster as [implemented in Redis](https://redis.io/topics/cluster-tutorial). With Redis Cluster, you get the following benefits:

* The ability to automatically split your dataset among multiple nodes.
@@ -22,7 +21,8 @@ Clustering does not increase the number of connections available for a clustered
In Azure, Redis cluster is offered as a primary/replica model where each shard has a primary/replica pair with replication where the replication is managed by Azure Cache for Redis service.
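To make the "split your dataset among multiple nodes" behavior concrete, here is a sketch of the slot calculation from the open-source Redis cluster specification (illustrative; client libraries do this for you, and this is not an Azure SDK call):

```python
# Sketch of how Redis Cluster decides which shard owns a key: the key is
# hashed with CRC16 (XMODEM variant) modulo 16384 hash slots, and each
# shard owns a range of slots. Mirrors the open-source cluster spec.
def crc16_xmodem(data):
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key):
    # A {hash tag} restricts hashing to the tagged substring, so related
    # keys land in the same slot (and therefore on the same shard).
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384
```

This is why keys sharing a hash tag such as `{user1000}.following` and `{user1000}.followers` can participate in multi-key operations on a clustered cache: they always map to the same slot.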
-## Clustering
+## Set up clustering
+ Clustering is enabled on the **New Azure Cache for Redis** blade during cache creation. 1. To create a premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. In addition to creating caches in the Azure portal, you can also create them using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache).
@@ -96,6 +96,7 @@ Increasing the cluster size increases max throughput and cache size. Increasing
> ## Clustering FAQ+ The following list contains answers to commonly asked questions about Azure Cache for Redis clustering. * [Do I need to make any changes to my client application to use clustering?](#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
@@ -180,6 +181,7 @@ Clustering is only available for premium caches.
If you are using StackExchange.Redis and receive `MOVE` exceptions when using clustering, ensure that you are using [StackExchange.Redis 1.1.603](https://www.nuget.org/packages/StackExchange.Redis/) or later. For instructions on configuring your .NET applications to use StackExchange.Redis, see [Configure the cache clients](cache-dotnet-how-to-use-azure-redis-cache.md#configure-the-cache-clients). ## Next steps+ Learn more about Azure Cache for Redis features. * [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-premium-persistence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
@@ -2,15 +2,14 @@
Title: Configure data persistence - Premium Azure Cache for Redis description: Learn how to configure and manage data persistence for your Premium tier Azure Cache for Redis instances + Previously updated : 10/09/2020 Last updated : 02/08/2021
-# How to configure data persistence for a Premium Azure Cache for Redis
-In this article, you will learn how to configure persistence in a premium Azure Cache for Redis instance through the Azure portal. Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features, including Premium tier features such as clustering, persistence, and virtual network support.
+# Configure data persistence for a Premium Azure Cache for Redis instance
-## What is data persistence?
[Redis persistence](https://redis.io/topics/persistence) allows you to persist data stored in Redis. You can also take snapshots and back up the data, which you can load in case of a hardware failure. This is a huge advantage over Basic or Standard tier where all the data is stored in memory and there can be potential data loss in case of a failure where Cache nodes are down. Azure Cache for Redis offers Redis persistence using the following models:
@@ -26,6 +25,8 @@ Persistence writes Redis data into an Azure Storage account that you own and man
> >
+## Set up data persistence
+ 1. To create a premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. In addition to creating caches in the Azure portal, you can also create them using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache). :::image type="content" source="media/cache-private-link/1-create-resource.png" alt-text="Create resource.":::
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-premium-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
@@ -1,26 +1,22 @@
Title: Configure a virtual network - Premium-tier Azure Cache for Redis instance
-description: Learn how to create and manage virtual network support for your Premium-tier Azure Cache for Redis instances.
+description: Learn how to create and manage virtual network support for your Premium-tier Azure Cache for Redis instance
+ - Previously updated : 10/09/2020 Last updated : 02/08/2021
-# Configure virtual network support for a Premium-tier Azure Cache for Redis instance
+# Configure virtual network support for a Premium Azure Cache for Redis instance
-Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features. Premium-tier features include clustering, persistence, and virtual network support. A virtual network is a private network in the cloud. When an Azure Cache for Redis instance is configured with a virtual network, it isn't publicly addressable and can only be accessed from virtual machines and applications within the virtual network. This article describes how to configure virtual network support for a Premium-tier Azure Cache for Redis instance.
+[Azure Virtual Network](https://azure.microsoft.com/services/virtual-network/) deployment provides enhanced security and isolation along with subnets, access control policies, and other features to further restrict access. When an Azure Cache for Redis instance is configured with a virtual network, it isn't publicly addressable and can only be accessed from virtual machines and applications within the virtual network. This article describes how to configure virtual network support for a Premium-tier Azure Cache for Redis instance.
> [!NOTE] > Azure Cache for Redis supports both classic deployment model and Azure Resource Manager virtual networks. >
-## Why Virtual Network?
-
-[Azure Virtual Network](https://azure.microsoft.com/services/virtual-network/) deployment provides enhanced security and isolation for your Azure Cache for Redis instance, along with subnets, access control policies, and other features to further restrict access.
-
-## Virtual network support
+## Set up virtual network support
Virtual network support is configured on the **New Azure Cache for Redis** pane during cache creation.
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-redis-cli-tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-redis-cli-tool.md
@@ -1,14 +1,14 @@
Title: How to use redis-cli with Azure Cache for Redis
-description: Learn how to use *redis-cli.exe* as a command-line tool for interacting with an Azure Cache for Redis as a client.
+ Title: Use redis-cli with Azure Cache for Redis
+description: Learn how to use *redis-cli.exe* as a command-line tool for interacting with an Azure Cache for Redis as a client
+ Previously updated : 03/22/2018 Last updated : 02/08/2021 -
-# How to use the Redis command-line tool with Azure Cache for Redis
+# Use the Redis command-line tool with Azure Cache for Redis
*redis-cli.exe* is a popular command-line tool for interacting with a Redis server as a client, and it can also be used with Azure Cache for Redis.
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
@@ -1,15 +1,15 @@
Title: How to Scale Azure Cache for Redis
-description: Learn how to scale your Azure Cache for Redis instances using the Azure portal, and tools such as Azure PowerShell, and Azure CLI.
+ Title: Scale an Azure Cache for Redis instance
+description: Learn how to scale your Azure Cache for Redis instances using the Azure portal, and tools such as Azure PowerShell, and Azure CLI
+ - Previously updated : 04/11/2017 Last updated : 02/08/2021
-# How to Scale Azure Cache for Redis
-Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features. After a cache is created, you can scale the size and the pricing tier of the cache if the requirements of your application change. This article shows you how to scale your cache using the Azure portal, and tools such as Azure PowerShell, and Azure CLI.
+# Scale an Azure Cache for Redis instance
+Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features. For a Basic, Standard or Premium cache, you can change its size and tier after it's been created to keep up with your application needs. This article shows you how to scale your cache using the Azure portal, and tools such as Azure PowerShell, and Azure CLI.
## When to scale

You can use the [monitoring](cache-how-to-monitor.md) features of Azure Cache for Redis to monitor the health and performance of your cache and help determine when to scale the cache.
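As a hedged sketch of turning those monitoring signals into a scale decision (the thresholds and the `should_scale_up` helper are hypothetical, not an Azure-defined rule), the check might look like:

```python
# Hypothetical heuristic, not an Azure-defined rule: flag a cache for
# scaling up when memory pressure or server load, as reported by the
# cache's monitoring metrics, crosses a threshold.
def should_scale_up(used_memory_bytes, max_memory_bytes,
                    server_load_pct, memory_threshold=0.80,
                    load_threshold=80.0):
    memory_pressure = used_memory_bytes / max_memory_bytes
    return memory_pressure >= memory_threshold or server_load_pct >= load_threshold
```

In practice you would feed this from the Used Memory and Server Load metrics and tune the thresholds against your own workload before acting on the result.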
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-overview.md
@@ -5,17 +5,17 @@
Previously updated : 05/12/2020 Last updated : 02/08/2021 #As a developer, I want to understand what Azure Cache for Redis is and how it can improve performance in my application.
-# Azure Cache for Redis
+# About Azure Cache for Redis
Azure Cache for Redis provides an in-memory data store based on the [Redis](https://redis.io/) software. Redis improves the performance and scalability of an application that uses backend data stores heavily. It is able to process large volumes of application requests by keeping frequently accessed data in server memory, where it can be written to and read from quickly. Redis brings a critical low-latency and high-throughput data storage solution to modern applications.
-Azure Cache for Redis offers both the Redis open-source and a commercial product from Redis Labs as a managed service. It provides secure and dedicated Redis server instances and full Redis API compatibility. The service is operated by Microsoft, hosted on Azure, and accessible to any application within or outside of Azure.
+Azure Cache for Redis offers both the Redis open-source (OSS Redis) and a commercial product from Redis Labs (Redis Enterprise) as a managed service. It provides secure and dedicated Redis server instances and full Redis API compatibility. The service is operated by Microsoft, hosted on Azure, and accessible to any application within or outside of Azure.
-Azure Cache for Redis can be used as a distributed data or content cache, a session store, a message broker, and more. It can be deployed as a standalone or along side with other Azure database service, such as Azure SQL or Cosmos DB.
Azure Cache for Redis can be used as a distributed data or content cache, a session store, a message broker, and more. It can be deployed standalone or alongside other Azure database services, such as Azure SQL or Cosmos DB.
## Key scenarios

Azure Cache for Redis improves application performance by supporting common application architecture patterns. Some of the most common include the following:
@@ -53,11 +53,11 @@ The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| Data encryption |✔|✔|✔|✔|✔|
| [Network isolation](cache-how-to-premium-vnet.md) |✔|✔|✔|✔|✔|
| [Scaling](cache-how-to-scale.md) |✔|✔|✔|✔|✔|
-| [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|✔|✔|✔|
-| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔|-|-|
-| [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|-|-|
+| [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|Preview|✔|✔|
+| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔|✔|✔|
+| [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|Preview|Preview|
| [OSS cluster](cache-how-to-premium-clustering.md) |-|-|✔|✔|✔|
-| [Modules](https://redis.io/modules) |-|-|-|✔|-|
+| [Modules](https://redis.io/modules) |-|-|-|✔|✔|
| [Import/Export](cache-how-to-import-export-data.md) |-|-|✔|✔|✔|
| [Scheduled updates](cache-administration.md#schedule-updates) |✔|✔|✔|-|-|
@@ -65,29 +65,28 @@ The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
You should consider the following when choosing an Azure Cache for Redis tier:

* **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB – 1.2 TB; the Enterprise tiers 12 GB – 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/) and [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
+* **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](cache-planning-faq.md#azure-cache-for-redis-performance).
+* **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis utilizes additional cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation, which will cause timeouts in your application.
* **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](cache-planning-faq.md#azure-cache-for-redis-performance).
-* **Throughput**: The Premium tier offers the maximum available throughput. If the cache server or client reaches the bandwidth limits, you may receive timeouts on the client side. For more information, see the following table.
-* **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA does not cover protection from data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss.
-* **Data persistence**: The Premium tier allows you to persist the cache data in an Azure Storage account. In other tiers, data are stored only in memory. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss. Azure Cache for Redis offers RDB and AOF (preview) options in Redis persistence. For more information, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).
-* **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).
* **Maximum number of client connections**: The Premium tier offers the maximum number of clients that can connect to Redis, with a higher number of connections for larger sized caches. Clustering does not increase the number of connections available for a clustered cache. For more information, see [Azure Cache for Redis pricing](https://azure.microsoft.com/pricing/details/cache/).
-* **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores.
-* **Single-threaded processing**: Redis, by design, uses only one thread for command processing. Azure Cache for Redis also utilizes additional cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation, which will cause timeouts in your application.
-* **Performance improvements**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](cache-planning-faq.md#azure-cache-for-redis-performance).
+* **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA does not cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss.
+* **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Enterprise tiers have data persistence enabled by default. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).
+* **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).
+* **Extensibility**: Enterprise tiers support [RediSearch](https://docs.redislabs.com/latest/modules/redisearch/), [RedisBloom](https://docs.redislabs.com/latest/modules/redisbloom/) and [RedisTimeSeries](https://docs.redislabs.com/latest/modules/redistimeseries/). These modules add new data types and functionality to Redis.
-You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier is not supported. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation).
+You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier is not supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation).
### Enterprise tier requirements
-The Enterprise tiers rely on Redis Enterprise, a commercial version of Redis from Redis Labs. Customers will obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis will facilitate the license acquisition so that you won't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites:
+The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Labs. Customers will obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis will facilitate the license acquisition so that you won't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites:
* Your Azure subscription has a valid payment instrument. Azure credits or free MSDN subscriptions are not supported.
* You're an Owner or Contributor of the subscription.
* Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).
* If you use a private Marketplace, it must contain the Redis Labs Enterprise offer.

## Next steps
-* [Create an Azure Cache for Redis instance](quickstart-create-redis.md)
-* [Create an Enterprise tier cache](quickstart-create-redis-enterprise.md)
+* [Create an open-source Redis cache](quickstart-create-redis.md)
+* [Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md)
* [Use Azure Cache for Redis in an ASP.NET web app](cache-web-app-howto.md)
* [Use Azure Cache for Redis in .NET Core](cache-dotnet-core-quickstart.md)
* [Use Azure Cache for Redis in .NET Framework](cache-dotnet-how-to-use-azure-redis-cache.md)
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/quickstart-create-redis-enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
@@ -1,15 +1,16 @@
Title: 'Quickstart: Create an Enterprise tier cache'
-description: In this quickstart, learn how to create an instance of Azure Cache for Redis Enterprise tier
+ Title: 'Quickstart: Create a Redis Enterprise cache'
+description: In this quickstart, learn how to create an instance of Azure Cache for Redis in Enterprise tiers
+
Previously updated : 10/28/2020
Last updated : 02/08/2021

#Customer intent: As a developer new to Azure Cache for Redis, I want to create an instance of Azure Cache for Redis Enterprise tier.
-# Quickstart: Create an Enterprise tier cache (preview)
+# Quickstart: Create a Redis Enterprise cache
Azure Cache for Redis' Enterprise tiers provide fully integrated and managed [Redis Enterprise](https://redislabs.com/redis-enterprise/) on Azure. They're currently available as a preview. There are two new tiers in this preview:

* Enterprise, which uses volatile memory (DRAM) on a virtual machine to store data
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/quickstart-create-redis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis.md
@@ -1,15 +1,16 @@
Title: 'Quickstart: Create an Azure Cache for Redis'
-description: In this quickstart, learn how to create an instance of Azure Cache for Redis
+ Title: 'Quickstart: Create an open-source Redis cache'
+description: In this quickstart, learn how to create an instance of Azure Cache for Redis in Basic, Standard or Premium tier
+
Previously updated : 05/12/2020
Last updated : 02/08/2021

#Customer intent: As a developer new to Azure Cache for Redis, I want to create an instance of Azure Cache for Redis Enterprise tier.
-# Quickstart: Create an Azure Cache for Redis instance
+# Quickstart: Create an open-source Redis cache
Azure Cache for Redis provides fully managed [open-source Redis](https://redis.io/) within Azure. You can start with an Azure Cache for Redis instance of any tier (Basic, Standard or Premium) and size, and scale it to meet your application's performance needs. This quickstart demonstrates how to use the Azure portal to create a new Azure Cache for Redis.
@@ -26,4 +27,3 @@ In this quickstart, you learned how to create an instance of Azure Cache for Red
> [!div class="nextstepaction"] > [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)-
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-app-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
@@ -208,7 +208,7 @@ The value for this setting indicates a custom package index URL for Python apps.
To learn more, see [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
-## SCALE\_CONTROLLER\_LOGGING\_ENABLE
+## SCALE\_CONTROLLER\_LOGGING\_ENABLED
_This setting is currently in preview._
@@ -216,7 +216,7 @@ This setting controls logging from the Azure Functions scale controller. For mor
|Key|Sample value|
|-|-|
-|SCALE_CONTROLLER_LOGGING_ENABLE|AppInsights:Verbose|
+|SCALE_CONTROLLER_LOGGING_ENABLED|AppInsights:Verbose|
The value for this key is supplied in the format `<DESTINATION>:<VERBOSITY>`, which is defined as follows:
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-best-practices.md
@@ -9,28 +9,25 @@
-# Optimize the performance and reliability of Azure Functions
+# Best practices for performance and reliability of Azure Functions
This article provides guidance to improve the performance and reliability of your [serverless](https://azure.microsoft.com/solutions/serverless/) function apps.
-## General best practices
- The following are best practices in how you build and architect your serverless solutions using Azure Functions.
-### Avoid long running functions
+## Avoid long running functions
-Large, long-running functions can cause unexpected timeout issues. To learn more about the timeouts for a given hosting plan, see [function app timeout duration](functions-scale.md#timeout).
+Large, long-running functions can cause unexpected timeout issues. To learn more about the timeouts for a given hosting plan, see [function app timeout duration](functions-scale.md#timeout).
-A function can become large because of many Node.js dependencies. Importing dependencies can also cause increased load times that result in unexpected timeouts. Dependencies are loaded both explicitly and implicitly. A single module loaded by your code may load its own additional modules.
+A function can become large because of many Node.js dependencies. Importing dependencies can also cause increased load times that result in unexpected timeouts. Dependencies are loaded both explicitly and implicitly. A single module loaded by your code may load its own additional modules.
Whenever possible, refactor large functions into smaller function sets that work together and return responses fast. For example, a webhook or HTTP trigger function might require an acknowledgment response within a certain time limit; it's common for webhooks to require an immediate response. You can pass the HTTP trigger payload into a queue to be processed by a queue trigger function. This approach lets you defer the actual work and return an immediate response. -
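The deferral pattern described above can be sketched without any Azure dependencies. The snippet below is a hypothetical, framework-free illustration (names and payload are invented); a real function app would use an HTTP trigger with a queue output binding and a separate queue-triggered function, but the shape is the same: acknowledge fast, process later.

```python
import queue

work_queue = queue.Queue()  # stands in for an Azure Storage queue


def handle_webhook(payload):
    """HTTP-trigger stand-in: enqueue the payload and acknowledge immediately."""
    work_queue.put(payload)
    return 202  # Accepted: the caller gets a fast response


def process_queue_message():
    """Queue-trigger stand-in: do the slow work later, off the request path."""
    payload = work_queue.get()
    payload["processed"] = True  # placeholder for the actual work
    return payload


status = handle_webhook({"order_id": 42})
result = process_queue_message()
```

The webhook caller sees only the quick 202 acknowledgment; the expensive work happens in the second function, which can retry independently if it fails.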
-### Cross function communication
+## Cross function communication
[Durable Functions](durable/durable-functions-overview.md) and [Azure Logic Apps](../logic-apps/logic-apps-overview.md) are built to manage state transitions and communication between multiple functions.
-If not using Durable Functions or Logic Apps to integrate with multiple functions, it's best to use storage queues for cross-function communication. The main reason is that storage queues are cheaper and much easier to provision than other storage options.
+If not using Durable Functions or Logic Apps to integrate with multiple functions, it's best to use storage queues for cross-function communication. The main reason is that storage queues are cheaper and much easier to provision than other storage options.
Individual messages in a storage queue are limited in size to 64 KB. If you need to pass larger messages between functions, an Azure Service Bus queue could be used to support message sizes up to 256 KB in the Standard tier, and up to 1 MB in the Premium tier.
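The size limits above (64 KB for storage queues, 256 KB for Service Bus Standard, 1 MB for Service Bus Premium) imply a simple selection rule. The helper below is a hypothetical sketch of that decision, not part of any Azure SDK; the service names are labels only.

```python
# Size limits from the text: storage queue 64 KB; Service Bus Standard 256 KB;
# Service Bus Premium 1 MB. Ordered cheapest first.
LIMITS = [
    ("storage-queue", 64 * 1024),
    ("service-bus-standard", 256 * 1024),
    ("service-bus-premium", 1024 * 1024),
]


def pick_queue(message_bytes):
    """Return the first (cheapest) queue option whose size limit fits the message."""
    for name, limit in LIMITS:
        if message_bytes <= limit:
            return name
    raise ValueError("message too large for any queue; store it as a blob and pass its URL")
```

For example, a 100 KB message does not fit a storage queue but fits a Service Bus Standard queue.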
@@ -38,28 +35,26 @@ Service Bus topics are useful if you require message filtering before processing
Event hubs are useful to support high volume communications.
+## Write functions to be stateless
-### Write functions to be stateless
-
-Functions should be stateless and idempotent if possible. Associate any required state information with your data. For example, an order being processed would likely have an associated `state` member. A function could process an order based on that state while the function itself remains stateless.
+Functions should be stateless and idempotent if possible. Associate any required state information with your data. For example, an order being processed would likely have an associated `state` member. A function could process an order based on that state while the function itself remains stateless.
Idempotent functions are especially recommended with timer triggers. For example, if you have something that absolutely must run once a day, write it so it can run anytime during the day with the same results. The function can exit when there's no work for a particular day. Also, if a previous run failed to complete, the next run should pick up where it left off. -
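The `state` member idea above can be shown with a small hypothetical sketch (the order shape and state names are invented). The function reads the state from the data it is given and holds no state of its own, so rerunning it on unchanged input yields the same result.

```python
def process_order(order):
    """Advance an order based on its own `state` member; the function is stateless."""
    transitions = {"received": "charged", "charged": "shipped"}
    state = order["state"]
    if state in transitions:
        # Return a new dict rather than mutating, so a retry with the
        # original input produces exactly the same output (idempotent).
        return {**order, "state": transitions[state]}
    return order  # terminal state: running again is a no-op


order = {"id": 7, "state": "received"}
order = process_order(order)  # state advances to "charged"
```

Because all state lives in the order record, any instance of the function on any host can process it, which is what makes scale-out safe.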
-### Write defensive functions
+## Write defensive functions
Assume your function could encounter an exception at any time. Design your functions with the ability to continue from a previous fail point during the next execution. Consider a scenario that requires the following actions:

1. Query for 10,000 rows in a database.
2. Create a queue message for each of those rows to process further down the line.
-
+ Depending on how complex your system is, you may have downstream services behaving badly, networking outages, or quota limits being reached. All of these can affect your function at any time. You need to design your functions to be prepared for it. How does your code react if a failure occurs after inserting 5,000 of those items into a queue for processing? Track the items you've completed in a set. Otherwise, you might insert them again next time. This double-insertion can have a serious impact on your workflow, so [make your functions idempotent](functions-idempotent.md). If a queue item was already processed, allow your function to be a no-op.
-Take advantage of defensive measures already provided for components you use in the Azure Functions platform. For example, see **Handling poison queue messages** in the documentation for [Azure Storage Queue triggers and bindings](functions-bindings-storage-queue-trigger.md#poison-messages).
+Take advantage of defensive measures already provided for components you use in the Azure Functions platform. For example, see **Handling poison queue messages** in the documentation for [Azure Storage Queue triggers and bindings](functions-bindings-storage-queue-trigger.md#poison-messages).
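The completed-set technique for the 10,000-row scenario can be sketched as follows. This is a hypothetical illustration: in practice `completed` would be durable storage (for example, a table of processed IDs) and `enqueue` a queue client call, but the resume logic is the same.

```python
def enqueue_rows(row_ids, completed, enqueue):
    """Enqueue each row at most once, skipping rows already recorded as completed.

    `completed` must survive across runs so that a rerun after a mid-batch
    failure does not double-insert the rows the previous run already sent.
    """
    for row_id in row_ids:
        if row_id in completed:
            continue  # already enqueued on a previous (possibly failed) run
        enqueue(row_id)
        completed.add(row_id)  # record success before moving on


sent = []
done = {1, 2, 3}  # rows enqueued before the previous run crashed
enqueue_rows(range(1, 7), done, sent.append)
```

After the rerun only rows 4 through 6 are enqueued; rows 1 through 3 are skipped because the completed set remembers them.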
## Function organization best practices
@@ -82,7 +77,7 @@ Function apps have a `host.json` file, which is used to configure advanced behav
All functions in your local project are deployed together as a set of files to your function app in Azure. You might need to deploy individual functions separately or use features like [deployment slots](./functions-deployment-slots.md) for some functions and not others. In such cases, you should deploy these functions (in separate code projects) to different function apps.
-### Organize functions by privilege
+### Organize functions by privilege
Connection strings and other credentials stored in application settings gives all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/learn/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps.
@@ -96,7 +91,7 @@ Reuse connections to external resources whenever possible. See [how to manage co
### Avoid sharing storage accounts
-When you create a function app, you must associate it with a storage account. The storage account connection is maintained in the [AzureWebJobsStorage application setting](./functions-app-settings.md#azurewebjobsstorage).
+When you create a function app, you must associate it with a storage account. The storage account connection is maintained in the [AzureWebJobsStorage application setting](./functions-app-settings.md#azurewebjobsstorage).
[!INCLUDE [functions-shared-storage](../../includes/functions-shared-storage.md)]
@@ -120,9 +115,9 @@ In C#, always avoid referencing the `Result` property or calling `Wait` method o
### Use multiple worker processes
-By default, any host instance for Functions uses a single worker process. To improve performance, especially with single-threaded runtimes like Python, use the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) to increase the number of worker processes per host (up to 10). Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
+By default, any host instance for Functions uses a single worker process. To improve performance, especially with single-threaded runtimes like Python, use the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) to increase the number of worker processes per host (up to 10). Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
-The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Functions creates when scaling out your application to meet demand.
+The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Functions creates when scaling out your application to meet demand.
### Receive messages in batch whenever possible
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-input.md
@@ -344,7 +344,7 @@ The following table explains the binding configuration properties that you set i
|**direction** | n/a | Must be set to `in`. Exceptions are noted in the [usage](#usage) section. |
|**name** | n/a | The name of the variable that represents the blob in function code.|
|**path** |**BlobPath** | The path to the blob. |
-|**connection** |**Connection**| The name of an app setting that contains the [Storage connection string](../storage/common/storage-configure-connection-string.md) to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage". If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.<br><br>The connection string must be for a general-purpose storage account, not a [blob-only storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts).|
+|**connection** |**Connection**| The name of an app setting that contains the [Storage connection string](../storage/common/storage-configure-connection-string.md) to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage". If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.<br><br>The connection string must be for a general-purpose storage account, not a [blob-only storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts).<br><br>If you are using [version 5.x or higher of the extension](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section which defines the connection. See [Connections](./functions-reference.md#connections).|
|**dataType**| n/a | For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
|n/a | **Access** | Indicates whether you will be reading or writing. |
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-output.md
@@ -395,7 +395,7 @@ The following table explains the binding configuration properties that you set i
|**direction** | n/a | Must be set to `out` for an output binding. Exceptions are noted in the [usage](#usage) section. |
|**name** | n/a | The name of the variable that represents the blob in function code. Set to `$return` to reference the function return value.|
|**path** |**BlobPath** | The path to the blob container. |
-|**connection** |**Connection**| The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.<br><br>The connection string must be for a general-purpose storage account, not a [blob-only storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts).|
+|**connection** |**Connection**| The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.<br><br>The connection string must be for a general-purpose storage account, not a [blob-only storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts).<br><br>If you are using [version 5.x or higher of the extension](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section which defines the connection. See [Connections](./functions-reference.md#connections).|
|n/a | **Access** | Indicates whether you will be reading or writing. |

[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-trigger.md
@@ -318,7 +318,7 @@ The following table explains the binding configuration properties that you set i
|**direction** | n/a | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the [usage](#usage) section. |
|**name** | n/a | The name of the variable that represents the blob in function code. |
|**path** | **BlobPath** |The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns). |
-|**connection** | **Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.<br><br>The connection string must be for a general-purpose storage account, not a [Blob storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts).|
+|**connection** | **Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.<br><br>The connection string must be for a general-purpose storage account, not a [Blob storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts).<br><br>If you are using [version 5.x or higher of the extension](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section which defines the connection. See [Connections](./functions-reference.md#connections).|
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
@@ -458,10 +458,17 @@ If all 5 tries fail, Azure Functions adds a message to a Storage queue named *we
The blob trigger uses a queue internally, so the maximum number of concurrent function invocations is controlled by the [queues configuration in host.json](functions-host-json.md#queues). The default settings limit concurrency to 24 invocations. This limit applies separately to each function that uses a blob trigger.
+> [!NOTE]
+> For apps using the [5.0.0 or higher version of the Storage extension](functions-bindings-storage-blob.md#storage-extension-5x-and-higher), the queues configuration in host.json only applies to queue triggers. The blob trigger concurrency is instead controlled by [blobs configuration in host.json](functions-host-json.md#blobs).
+
[The Consumption plan](event-driven-scaling.md) limits a function app on one virtual machine (VM) to 1.5 GB of memory. Memory is used by each concurrently executing function instance and by the Functions runtime itself. If a blob-triggered function loads the entire blob into memory, the maximum memory used by that function just for blobs is 24 * maximum blob size. For example, a function app with three blob-triggered functions and the default settings would have a maximum per-VM concurrency of 3 * 24 = 72 function invocations. JavaScript and Java functions load the entire blob into memory, and C# functions do that if you bind to `string` or `Byte[]`.
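The worst-case arithmetic above can be made explicit. The helper below is a hypothetical sketch (the function name and the 1 MB blob size are assumptions for illustration); it applies the default per-function concurrency of 24 from the text.

```python
def max_blob_memory(functions, per_function_concurrency=24, max_blob_mb=1.0):
    """Worst-case per-VM blob-trigger figures, assuming every concurrent
    invocation loads one whole blob of max_blob_mb megabytes into memory."""
    invocations = functions * per_function_concurrency
    return invocations, invocations * max_blob_mb


# Three blob-triggered functions with default settings, as in the example above.
invocations, memory_mb = max_blob_memory(functions=3)  # 72 invocations
```

With 1 MB blobs that worst case is 72 MB for blob data alone, which must fit inside the 1.5 GB Consumption plan limit together with the runtime and everything else the app allocates.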
+## host.json properties
+
+The [host.json](functions-host-json.md#blobs) file contains settings that control blob trigger behavior. See the [host.json settings](functions-bindings-storage-blob.md#hostjson-settings) section for details regarding available settings.
+
## Next steps

- [Read blob storage data when a function runs](./functions-bindings-storage-blob-input.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob.md
@@ -30,6 +30,13 @@ Working with the trigger and bindings requires that you reference the appropriat
| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) is recommended to use with Visual Studio Code. | | C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+#### Storage extension 5.x and higher
+
+A new version of the Storage bindings extension is available as a [preview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0-beta.2). This preview introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For .NET applications, it also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs).
+
+> [!NOTE]
+> The preview package is not included in an extension bundle and must be installed manually. For .NET apps, add a reference to the package. For all other app types, see [Update your extensions].
+ [core tools]: ./functions-run-local.md [extension bundle]: ./functions-bindings-register.md#extension-bundles [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
@@ -42,8 +49,30 @@ Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
[!INCLUDE [functions-storage-sdk-version](../../includes/functions-storage-sdk-version.md)]
+## host.json settings
+
+> [!NOTE]
+> This section does not apply when using extension versions prior to 5.0.0. For those versions, there are no global configuration settings for blobs.
+
+This section describes the global configuration settings available for this binding when using [extension version 5.0.0 and higher](#storage-extension-5x-and-higher). The example *host.json* file below contains only the version 2.x+ settings for this binding. For more information about global configuration settings in Functions versions 2.x and beyond, see [host.json reference for Azure Functions](functions-host-json.md).
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "blobs": {
+ "maxDegreeOfParallelism": "4"
+ }
+ }
+}
+```
+
+|Property |Default | Description |
+||||
+|maxDegreeOfParallelism|8 * (the number of available cores)|The integer number of concurrent invocations allowed for each blob-triggered function. The minimum allowed value is 1.|
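The default in the table, 8 * (the number of available cores), can be computed explicitly. The sketch below is a hypothetical mirror of that documented formula (the function name is invented), using the host's core count when none is supplied.

```python
import os


def default_max_degree_of_parallelism(cores=None):
    """Default blob-trigger concurrency limit: 8 * available cores, minimum 1."""
    if cores is None:
        cores = os.cpu_count() or 1
    return max(1, 8 * cores)


print(default_max_degree_of_parallelism(cores=4))  # 32
```

On a four-core host the default therefore allows 32 concurrent blob-triggered invocations per function; set `maxDegreeOfParallelism` in *host.json* to override it.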
+
## Next steps

- [Run a function when blob storage data changes](./functions-bindings-storage-blob-trigger.md)
- [Read blob storage data when a function runs](./functions-bindings-storage-blob-input.md)
-- [Write blob storage data from a function](./functions-bindings-storage-blob-output.md)
+- [Write blob storage data from a function](./functions-bindings-storage-blob-output.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-queue-output.md
@@ -394,7 +394,7 @@ The following table explains the binding configuration properties that you set i
|**direction** | n/a | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
|**name** | n/a | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.|
|**queueName** |**QueueName** | The name of the queue. |
-|**connection** | **Connection** |The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "MyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.|
+|**connection** | **Connection** |The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here.<br><br>For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "MyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.<br><br>If you are using [version 5.x or higher of the extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section which defines the connection. See [Connections](./functions-reference.md#connections).|
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
@@ -402,6 +402,8 @@ The following table explains the binding configuration properties that you set i
# [C#](#tab/csharp)
+### Default
+ Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types: * An object serializable as JSON
@@ -416,8 +418,19 @@ In C# and C# script, write multiple queue messages by using one of the following
* `ICollector<T>` or `IAsyncCollector<T>` * [CloudQueue](/dotnet/api/microsoft.azure.storage.queue.cloudqueue)
+### Additional types
+
+Apps using the [5.0.0 or higher version of the Storage extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) may also use types from the [Azure SDK for .NET](/dotnet/api/overview/azure/storage.queues-readme). This version drops support for the legacy `CloudQueue` and `CloudQueueMessage` types in favor of the following types:
+
+- [QueueMessage](/dotnet/api/azure.storage.queues.models.queuemessage)
+- [QueueClient](/dotnet/api/azure.storage.queues.queueclient) for writing multiple queue messages
+
+For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+ # [C# Script](#tab/csharp-script)
+### Default
+ Write a single queue message by using a method parameter such as `out T paramName`. The `paramName` is the value specified in the `name` property of *function.json*. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types: * An object serializable as JSON
@@ -432,6 +445,15 @@ In C# and C# script, write multiple queue messages by using one of the following
* `ICollector<T>` or `IAsyncCollector<T>` * [CloudQueue](/dotnet/api/microsoft.azure.storage.queue.cloudqueue)
+### Additional types
+
+Apps using the [5.0.0 or higher version of the Storage extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) may also use types from the [Azure SDK for .NET](/dotnet/api/overview/azure/storage.queues-readme). This version drops support for the legacy `CloudQueue` and `CloudQueueMessage` types in favor of the following types:
+
+- [QueueMessage](/dotnet/api/azure.storage.queues.models.queuemessage)
+- [QueueClient](/dotnet/api/azure.storage.queues.queueclient) for writing multiple queue messages
+
+For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+ # [Java](#tab/java) There are two options for outputting a Queue message from a function by using the [QueueOutput](/java/api/com.microsoft.azure.functions.annotation.queueoutput) annotation:
@@ -466,38 +488,6 @@ There are two options for outputting a Queue message from a function:
| Blob, Table, Queue | [Storage Error Codes](/rest/api/storageservices/fileservices/common-rest-api-error-codes) | | Blob, Table, Queue | [Troubleshooting](/rest/api/storageservices/fileservices/troubleshooting-api-operations) |
-<a name="host-json"></a>
-
-## host.json settings
-
-This section describes the global configuration settings available for this binding in versions 2.x and higher. The example host.json file below contains only the version 2.x+ settings for this binding. For more information about global configuration settings in versions 2.x and beyond, see [host.json reference for Azure Functions](functions-host-json.md).
-
-> [!NOTE]
-> For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md).
-
-```json
-{
- "version": "2.0",
- "extensions": {
- "queues": {
- "maxPollingInterval": "00:00:02",
- "visibilityTimeout" : "00:00:30",
- "batchSize": 16,
- "maxDequeueCount": 5,
- "newBatchThreshold": 8
- }
- }
-}
-```
-
-|Property |Default | Description |
-||||
-|maxPollingInterval|00:00:01|The maximum interval between queue polls. Minimum is 00:00:00.100 (100 ms) and increments up to 00:01:00 (1 min). In 1.x the data type is milliseconds, and in 2.x and higher it is a TimeSpan.|
-|visibilityTimeout|00:00:00|The time interval between retries when processing of a message fails. |
-|batchSize|16|The number of queue messages that the Functions runtime retrieves simultaneously and processes in parallel. When the number being processed gets down to the `newBatchThreshold`, the runtime gets another batch and starts processing those messages. So the maximum number of concurrent messages being processed per function is `batchSize` plus `newBatchThreshold`. This limit applies separately to each queue-triggered function. <br><br>If you want to avoid parallel execution for messages received on one queue, you can set `batchSize` to 1. However, this setting eliminates concurrency only so long as your function app runs on a single virtual machine (VM). If the function app scales out to multiple VMs, each VM could run one instance of each queue-triggered function.<br><br>The maximum `batchSize` is 32. |
-|maxDequeueCount|5|The number of times to try processing a message before moving it to the poison queue.|
-|newBatchThreshold|batchSize/2|Whenever the number of messages being processed concurrently gets down to this number, the runtime retrieves another batch.|
- ## Next steps - [Run a function as queue storage data changes (Trigger)](./functions-bindings-storage-queue-trigger.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-queue-trigger.md
@@ -353,7 +353,7 @@ The following table explains the binding configuration properties that you set i
|**direction**| n/a | In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. | |**name** | n/a |The name of the variable that contains the queue item payload in the function code. | |**queueName** | **QueueName**| The name of the queue to poll. |
-|**connection** | **Connection** |The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "MyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.|
+|**connection** | **Connection** |The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here.<br><br>For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "MyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.<br><br>If you are using [version 5.x or higher of the extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section which defines the connection. See [Connections](./functions-reference.md#connections).|
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
@@ -361,6 +361,8 @@ The following table explains the binding configuration properties that you set i
# [C#](#tab/csharp)
+### Default
+ Access the message data by using a method parameter such as `string paramName`. You can bind to any of the following types: * Object - The Functions runtime deserializes a JSON payload into an instance of an arbitrary class defined in your code.
@@ -370,8 +372,18 @@ Access the message data by using a method parameter such as `string paramName`.
If you try to bind to `CloudQueueMessage` and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
+### Additional types
+
+Apps using the [5.0.0 or higher version of the Storage extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) may also use types from the [Azure SDK for .NET](/dotnet/api/overview/azure/storage.queues-readme). This version drops support for the legacy `CloudQueueMessage` type in favor of the following types:
+
+- [QueueMessage](/dotnet/api/azure.storage.queues.models.queuemessage)
+
+For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+ # [C# Script](#tab/csharp-script)
+### Default
+ Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the `name` property of *function.json*. You can bind to any of the following types: * Object - The Functions runtime deserializes a JSON payload into an instance of an arbitrary class defined in your code.
@@ -381,6 +393,14 @@ Access the message data by using a method parameter such as `string paramName`.
If you try to bind to `CloudQueueMessage` and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
+### Additional types
+
+Apps using the [5.0.0 or higher version of the Storage extension](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) may also use types from the [Azure SDK for .NET](/dotnet/api/overview/azure/storage.queues-readme). This version drops support for the legacy `CloudQueueMessage` type in favor of the following types:
+
+- [QueueMessage](/dotnet/api/azure.storage.queues.models.queuemessage)
+
+For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
+ # [Java](#tab/java) The [QueueTrigger](/java/api/com.microsoft.azure.functions.annotation.queuetrigger?view=azure-java-stable&preserve-view=true) annotation gives you access to the queue message that triggered the function.
@@ -444,7 +464,7 @@ The queue trigger automatically prevents a function from processing a queue mess
## host.json properties
-The [host.json](functions-host-json.md#queues) file contains settings that control queue trigger behavior. See the [host.json settings](functions-bindings-storage-queue-output.md#hostjson-settings) section for details regarding available settings.
+The [host.json](functions-host-json.md#queues) file contains settings that control queue trigger behavior. See the [host.json settings](functions-bindings-storage-queue.md#hostjson-settings) section for details regarding available settings.
## Next steps
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-queue.md
@@ -30,6 +30,13 @@ Working with the trigger and bindings requires that you reference the appropriat
| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) is recommended to use with Visual Studio Code. | | C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+#### Storage extension 5.x and higher
+
+A new version of the Storage bindings extension is available as a [preview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0-beta.2). This preview introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For .NET applications, it also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Queues](/dotnet/api/azure.storage.queues).
+
+> [!NOTE]
+> The preview package is not included in an extension bundle and must be installed manually. For .NET apps, add a reference to the package. For all other app types, see [Update your extensions].
+ [core tools]: ./functions-run-local.md [extension bundle]: ./functions-bindings-register.md#extension-bundles [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
@@ -42,7 +49,41 @@ Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](
[!INCLUDE [functions-storage-sdk-version](../../includes/functions-storage-sdk-version.md)]
+<a name="host-json"></a>
+
+## host.json settings
+
+This section describes the global configuration settings available for this binding in versions 2.x and higher. The example *host.json* file below contains only the version 2.x+ settings for this binding. For more information about global configuration settings in versions 2.x and beyond, see [host.json reference for Azure Functions](functions-host-json.md).
+
+> [!NOTE]
+> For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md).
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "queues": {
+ "maxPollingInterval": "00:00:02",
+ "visibilityTimeout" : "00:00:30",
+ "batchSize": 16,
+ "maxDequeueCount": 5,
+ "newBatchThreshold": 8,
+ "messageEncoding": "base64"
+ }
+ }
+}
+```
+
+|Property |Default | Description |
+||||
+|maxPollingInterval|00:00:01|The maximum interval between queue polls. Minimum is 00:00:00.100 (100 ms) and increments up to 00:01:00 (1 min). In Functions 2.x and higher the data type is a `TimeSpan`, while in version 1.x it is in milliseconds.|
+|visibilityTimeout|00:00:00|The time interval between retries when processing of a message fails. |
+|batchSize|16|The number of queue messages that the Functions runtime retrieves simultaneously and processes in parallel. When the number being processed gets down to the `newBatchThreshold`, the runtime gets another batch and starts processing those messages. So the maximum number of concurrent messages being processed per function is `batchSize` plus `newBatchThreshold`. This limit applies separately to each queue-triggered function. <br><br>If you want to avoid parallel execution for messages received on one queue, you can set `batchSize` to 1. However, this setting eliminates concurrency as long as your function app runs only on a single virtual machine (VM). If the function app scales out to multiple VMs, each VM could run one instance of each queue-triggered function.<br><br>The maximum `batchSize` is 32. |
+|maxDequeueCount|5|The number of times to try processing a message before moving it to the poison queue.|
+|newBatchThreshold|batchSize/2|Whenever the number of messages being processed concurrently gets down to this number, the runtime retrieves another batch.|
+|messageEncoding|base64| This setting is only available in [extension version 5.0.0 and higher](#storage-extension-5x-and-higher). It represents the encoding format for messages. Valid values are `base64` and `none`.|
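The interaction between `batchSize` and `newBatchThreshold` described in the table can be sketched as a small calculation. This is an illustration of the documented rules, not code from the Functions runtime; the function name `max_concurrent_messages` is made up for this sketch:

```python
def max_concurrent_messages(batch_size=16, new_batch_threshold=None):
    """Sketch of the concurrency rule above: per queue-triggered function,
    per instance, the runtime processes at most batchSize + newBatchThreshold
    messages at once. newBatchThreshold defaults to batchSize/2."""
    if not 1 <= batch_size <= 32:
        raise ValueError("batchSize must be between 1 and 32")
    if new_batch_threshold is None:
        new_batch_threshold = batch_size // 2  # documented default
    return batch_size + new_batch_threshold

print(max_concurrent_messages())        # defaults: 16 + 8 = 24
print(max_concurrent_messages(1, 0))    # serial processing on one VM: 1
```

Note that setting `batchSize` to 1 only serializes processing on a single instance; a scaled-out app can still run one message per instance concurrently.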
+ ## Next steps - [Run a function as queue storage data changes (Trigger)](./functions-bindings-storage-queue-trigger.md)-- [Write queue storage messages (Output binding)](./functions-bindings-storage-queue-output.md)
+- [Write queue storage messages (Output binding)](./functions-bindings-storage-queue-output.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-get-started.md
@@ -23,6 +23,7 @@ Use the following resources to get started.
| **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio](./functions-create-your-first-function-visual-studio.md)<li>[Visual Studio Code](./create-first-function-vs-code-csharp.md)<li>[Command line](./create-first-function-cli-csharp.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=csharp&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=C%23) | | **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=csharp)<li>[Security](./security-concepts.md)|
| **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [C# language reference](./functions-dotnet-class-library.md)| ::: zone-end
@@ -33,6 +34,7 @@ Use the following resources to get started.
| **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-java.md)<li>[Jav) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=java&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Java) | | **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/learn/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=java)<li>[Security](./security-concepts.md)|
| **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Java language reference](./functions-reference-java.md)| ::: zone-end
@@ -42,6 +44,7 @@ Use the following resources to get started.
| **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-node.md)<li>[Node.js terminal/command prompt](./create-first-function-cli-node.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=javascript%2ctypescript&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=JavaScript%2CTypeScript) | | **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=javascript)<li>[Security](./security-concepts.md)|
| **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [JavaScript](./functions-reference-node.md) or [TypeScript](./functions-reference-node.md#typescript) language reference| ::: zone-end
@@ -51,6 +54,7 @@ Use the following resources to get started.
| **Create your first function** | <li>Using [Visual Studio Code](./create-first-function-vs-code-powershell.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=powershell&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=PowerShell) | | **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=powershell)<li>[Security](./security-concepts.md)|
| **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [PowerShell language reference](./functions-reference-powershell.md)| ::: zone-end
@@ -60,6 +64,7 @@ Use the following resources to get started.
| **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-csharp.md?pivots=programming-language-python)<li>[Terminal/command prompt](./create-first-function-cli-csharp.md?pivots=programming-language-python) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=python&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Python) | | **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=python)<li>[Security](./security-concepts.md)<li>[Improve throughput performance](./python-scale-performance-reference.md)|
| **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Python language reference](./functions-reference-python.md)| ::: zone-end
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-host-json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json.md
@@ -34,6 +34,7 @@ The following sample *host.json* file for version 2.x+ has all possible options
"flushTimeout": "00:00:30" }, "extensions": {
+ "blobs": {},
"cosmosDb": {}, "durableTask": {}, "eventHubs": {},
@@ -211,6 +212,10 @@ For more information on snapshots, see [Debug snapshots on exceptions in .NET ap
| thresholdForSnapshotting | 1 | How many times Application Insights needs to see an exception before it asks for snapshots. | | uploaderProxy | null | Overrides the proxy server used in the Snapshot Uploader process. You may need to use this setting if your application connects to the internet via a proxy server. The Snapshot Collector runs within your application's process and will use the same proxy settings. However, the Snapshot Uploader runs as a separate process and you may need to configure the proxy server manually. If this value is null, then Snapshot Collector will attempt to autodetect the proxy's address by examining System.Net.WebRequest.DefaultWebProxy and passing on the value to the Snapshot Uploader. If this value isn't null, then autodetection isn't used and the proxy server specified here will be used in the Snapshot Uploader. |
+## blobs
+
+Configuration settings can be found in [Storage blob triggers and bindings](functions-bindings-storage-blob.md#hostjson-settings).
+ ## cosmosDb Configuration setting can be found in [Cosmos DB triggers and bindings](functions-bindings-cosmosdb-v2-output.md#host-json).
@@ -373,7 +378,7 @@ Managed dependency is a feature that is currently only supported with PowerShell
## queues
-Configuration settings can be found in [Storage queue triggers and bindings](functions-bindings-storage-queue-output.md#host-json).
+Configuration settings can be found in [Storage queue triggers and bindings](functions-bindings-storage-queue.md#host-json).
## retry
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
@@ -35,11 +35,11 @@ For more information, see [Azure Functions triggers and bindings concepts](funct
The `bindings` property is where you configure both triggers and bindings. Each binding shares a few common settings and some settings which are specific to a particular type of binding. Every binding requires the following settings:
-| Property | Values/Types | Comments |
-| | | |
-| `type` |string |Binding type. For example, `queueTrigger`. |
-| `direction` |'in', 'out' |Indicates whether the binding is for receiving data into the function or sending data from the function. |
-| `name` |string |The name that is used for the bound data in the function. For C#, this is an argument name; for JavaScript, it's the key in a key/value list. |
+| Property | Values | Type | Comments|
+|||||
+| type | Name of binding.<br><br>For example, `queueTrigger`. | string | |
+| direction | `in`, `out` | string | Indicates whether the binding is for receiving data into the function or sending data from the function. |
+| name | Function identifier.<br><br>For example, `myQueue`. | string | The name that is used for the bound data in the function. For C#, this is an argument name; for JavaScript, it's the key in a key/value list. |
## Function app A function app provides an execution context in Azure in which your functions run. As such, it is the unit of deployment and management for your functions. A function app is comprised of one or more individual functions that are managed, deployed, and scaled together. All of the functions in a function app share the same pricing plan, deployment method, and runtime version. Think of a function app as a way to organize and collectively manage your functions. To learn more, see [How to manage a function app](functions-how-to-use-azure-function-app-settings.md).
@@ -87,6 +87,83 @@ Here is a table of all supported bindings.
Having issues with errors coming from the bindings? Review the [Azure Functions Binding Error Codes](functions-bindings-error-pages.md) documentation. +
+## Connections
+
+Your function project references connection information by name from its configuration provider. It does not directly accept the connection details, allowing them to be changed across environments. For example, a trigger definition might include a `connection` property. This might refer to a connection string, but you cannot set the connection string directly in a `function.json`. Instead, you would set `connection` to the name of an environment variable that contains the connection string.
+
+The default configuration provider uses environment variables. These might be set by [Application Settings](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) when running in the Azure Functions service, or from the [local settings file](functions-run-local.md#local-settings-file) when developing locally.
+
+### Connection values
+
+When the connection name resolves to a single exact value, the runtime identifies the value as a _connection string_, which typically includes a secret. The details of a connection string are defined by the service to which you wish to connect.
+
+However, a connection name can also refer to a collection of multiple configuration items. Environment variables can be treated as a collection by using a shared prefix that ends in double underscores `__`. The group can then be referenced by setting the connection name to this prefix.
+
+For example, the `connection` property for an Azure Blob trigger definition might be `Storage1`. As long as there is no single string value configured with `Storage1` as its name, `Storage1__serviceUri` would be used for the `serviceUri` property of the connection. The connection properties are different for each service. Refer to the documentation for the extension that uses the connection.
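The lookup order described above can be sketched in a few lines. This is an illustration of the documented behavior rather than the Functions runtime's actual configuration provider; the helper name `resolve_connection` and the sample environment values are invented for this sketch:

```python
import os

def resolve_connection(name, env=os.environ):
    """Sketch of the documented lookup order: an exact match on the name is
    treated as a connection string; otherwise, keys sharing the `name__`
    prefix are collected into a configuration section."""
    if name in env:
        return {"connectionString": env[name]}
    prefix = name + "__"
    section = {k[len(prefix):]: v for k, v in env.items() if k.startswith(prefix)}
    return section or None

env = {"Storage1__serviceUri": "https://myaccount.queue.core.windows.net"}
print(resolve_connection("Storage1", env))
# → {'serviceUri': 'https://myaccount.queue.core.windows.net'}
```

If a plain `Storage1` environment variable were also present, the exact match would win and be treated as a connection string.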
+
+### Configure an identity-based connection
+
+Some connections in Azure Functions are configured to use an identity instead of a secret. Support depends on the extension using the connection. In some cases, a connection string may still be required in Functions even though the service to which you are connecting supports identity-based connections.
+
+> [!IMPORTANT]
+> Even if a binding extension supports identity-based connections, that configuration may not be supported yet in the Consumption plan. See the support table below.
+
+Identity-based connections are supported by the following trigger and binding extensions:
+
+| Extension name | Extension version | Supports identity-based connections in the Consumption plan |
+|-|-||
+| Azure Blob | [Version 5.0.0-beta1 or later](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher) | No |
+| Azure Queue | [Version 5.0.0-beta1 or later](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) | No |
+
+> [!NOTE]
+> Support for identity-based connections is not yet available for storage connections used by the Functions runtime for core behaviors. This means that the `AzureWebJobsStorage` setting must be a connection string.
+
+#### Connection properties
+
+An identity-based connection for an Azure service accepts the following properties:
+
+| Property | Environment variable | Is Required | Description |
+|-|-|-|-|
+| Service URI | `<CONNECTION_NAME_PREFIX>__serviceUri` | Yes | The data plane URI of the service to which you are connecting. |
+
+Additional options may be supported for a given connection type. Please refer to the documentation for the component making the connection.
+
+When hosted in the Azure Functions service, identity-based connections use a [managed identity](../app-service/overview-managed-identity.md?toc=%2fazure%2fazure-functions%2ftoc.json). The system-assigned identity is used by default. When run in other contexts, such as local development, your developer identity is used instead, although this can be customized using alternative connection parameters.
+
+##### Local development
+
+When running locally, the above configuration tells the runtime to use your local developer identity. The connection will attempt to get a token from the following locations, in order:
+
+- A local cache shared between Microsoft applications
+- The current user context in Visual Studio
+- The current user context in Visual Studio Code
+- The current user context in the Azure CLI
+
+If none of these options are successful, an error will occur.
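The fallback order above can be sketched as a simple chain. The source names and token value below are simulated placeholders, not the actual Azure identity SDK API:

```python
def get_token_with_fallback(sources):
    """Try each local credential source in order; the first success wins.

    Mirrors the documented lookup order (shared token cache, Visual
    Studio, VS Code, Azure CLI). The real chain is implemented by the
    platform, not by user code.
    """
    errors = []
    for name, fetch in sources:
        try:
            return name, fetch()
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    # None of the options succeeded, so an error occurs, as documented.
    raise RuntimeError("no credential source succeeded: " + "; ".join(errors))

def fail(msg):
    """Return a fetcher that simulates a source unable to serve a token."""
    def _fetch():
        raise ValueError(msg)
    return _fetch

# Simulate: the cache and Visual Studio cannot serve a token; Azure CLI can.
sources = [
    ("SharedTokenCache", fail("cache is empty")),
    ("VisualStudio", fail("not signed in")),
    ("AzureCli", lambda: "simulated-token"),
]
print(get_token_with_fallback(sources))
```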
+
+In some cases, you may wish to specify use of a different identity. You can add configuration properties for the connection that point to the alternate identity.
+
+> [!NOTE]
+> The following configuration options are not supported when hosted in the Azure Functions service.
+
+To connect using an Azure Active Directory service principal with a client ID and secret, define the connection with the following properties:
+
+| Property | Environment variable | Is Required | Description |
+|-|-|-|-|
+| Service URI | `<CONNECTION_NAME_PREFIX>__serviceUri` | Yes | The data plane URI of the service to which you are connecting. |
+| Tenant ID | `<CONNECTION_NAME_PREFIX>__tenantId` | Yes | The Azure Active Directory tenant (directory) ID. |
+| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | Yes | The client (application) ID of an app registration in the tenant. |
+| Client secret | `<CONNECTION_NAME_PREFIX>__clientSecret` | Yes | A client secret that was generated for the app registration. |
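As a quick sketch of the table above, the four settings for a connection prefix can be assembled like this. The prefix `Storage1` and every value are placeholders:

```python
def service_principal_settings(prefix, service_uri, tenant_id, client_id, client_secret):
    """Build the app-setting names for a client-secret based connection.

    The property names follow the table above; all values here are
    placeholders, not working credentials.
    """
    return {
        f"{prefix}__serviceUri": service_uri,
        f"{prefix}__tenantId": tenant_id,
        f"{prefix}__clientId": client_id,
        f"{prefix}__clientSecret": client_secret,
    }

settings = service_principal_settings(
    "Storage1",
    "https://myaccount.blob.core.windows.net",
    "00000000-0000-0000-0000-000000000000",
    "11111111-1111-1111-1111-111111111111",
    "<client-secret>",
)
for name in sorted(settings):
    print(name)
```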
+
+#### Grant permission to the identity
+
+Whatever identity is being used must have permissions to perform the intended actions. This is typically done by assigning a role in Azure RBAC or specifying the identity in an access policy, depending on the service to which you are connecting. Refer to the documentation for each service on what permissions are needed and how they can be set.
+
+> [!IMPORTANT]
+> Some permissions might be exposed by the service that are not necessary for all contexts. Where possible, adhere to the **principle of least privilege**, granting the identity only required privileges. For example, if the app just needs to read from a blob, use the [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role, because the [Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role includes excessive permissions for a read operation.
+
## Reporting Issues [!INCLUDE [Reporting Issues](../../includes/functions-reporting-issues.md)]
@@ -97,4 +174,4 @@ For more information, see the following resources:
* [Code and test Azure Functions locally](./functions-develop-local.md) * [Best Practices for Azure Functions](functions-best-practices.md) * [Azure Functions C# developer reference](functions-dotnet-class-library.md)
-* [Azure Functions Node.js developer reference](functions-reference-node.md)
+* [Azure Functions Node.js developer reference](functions-reference-node.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/security-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-concepts.md
@@ -104,6 +104,8 @@ Connection strings and other credentials stored in application settings gives al
[!INCLUDE [app-service-managed-identities](../../includes/app-service-managed-identities.md)]
+Managed identities can be used in place of secrets for connections from some triggers and bindings. See [Identity-based connections](#identity-based-connections).
+ For more information, see [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md?toc=%2fazure%2fazure-functions%2ftoc.json). #### Restrict CORS access
@@ -134,6 +136,14 @@ While application settings are sufficient for many functions, you may want
[Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history. You can use a Key Vault reference in the place of a connection string or key in your application settings. To learn more, see [Use Key Vault references for App Service and Azure Functions](../app-service/app-service-key-vault-references.md?toc=%2fazure%2fazure-functions%2ftoc.json).
+### Identity-based connections
+
+Identities may be used in place of secrets for connecting to some resources. This has the advantage of not requiring the management of a secret, and it provides more fine-grained access control and auditing.
+
+When you are writing code that creates the connection to [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication), you can choose to use an identity instead of a secret or connection string. Details for both connection methods are covered in the documentation for each service.
+
+Some Azure Functions trigger and binding extensions may be configured using an identity-based connection. Today, this includes the [Azure Blob](./functions-bindings-storage-blob.md) and [Azure Queue](./functions-bindings-storage-queue.md) extensions. For information about how to configure these extensions to use an identity, see [How to use identity-based connections in Azure Functions](./functions-reference.md#configure-an-identity-based-connection).
+ ### Set usage quotas Consider setting a usage quota on functions running in a Consumption plan. When you set a daily GB-sec limit on the sum total execution of functions in your function app, execution is stopped when the limit is reached. This could potentially help mitigate against malicious code executing your functions. To learn how to estimate consumption for your functions, see [Estimating Consumption plan costs](functions-consumption-costs.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/create-new-resource.md
@@ -2,7 +2,7 @@
Title: Create a new Azure Application Insights resource | Microsoft Docs description: Manually set up Application Insights monitoring for a new live application. Previously updated : 12/02/2019 Last updated : 02/10/2021
@@ -10,6 +10,9 @@ Last updated 12/02/2019
Azure Application Insights displays data about your application in a Microsoft Azure *resource*. Creating a new resource is therefore part of [setting up Application Insights to monitor a new application][start]. After you have created your new resource, you can get its instrumentation key and use that to configure the Application Insights SDK. The instrumentation key links your telemetry to the resource.
+> [!IMPORTANT]
+> Classic Application Insights has been deprecated. Please follow these [instructions on how to upgrade to workspace-based Application Insights](convert-classic-resource.md).
+ ## Sign in to Microsoft Azure If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/pricing.md
@@ -5,7 +5,7 @@
Previously updated : 5/7/2020 Last updated : 2/7/2021
@@ -282,15 +282,18 @@ To disable the daily volume cap e-mails, under the **Configure** section of your
For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the same as described above and is the default tier. It includes all Enterprise tier features, at no additional cost. The Basic tier bills primarily on the volume of data that's ingested.
-> [!NOTE]
-> These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal.
+These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal.
-The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you are charged for data ingested above the included allowance. If you are using Operations Management Suite, you should choose the Per Node tier.
+The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you are charged for data ingested above the included allowance. If you are using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](../platform/usage-estimated-costs.md).
For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/).
-> [!NOTE]
-> In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](../platform/usage-estimated-costs.md), how to [assess the impact of moving to this model](../platform/usage-estimated-costs.md#understanding-your-azure-monitor-costs) based on your usage patterns, and [how to opt into the new model](../platform/usage-estimated-costs.md#azure-monitor-pricing-model)
+### Understanding billed usage on the legacy Enterprise (Per Node) tier
+
+As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription is reported against just one of the resources**. This makes reconciling your [billed usage](https://docs.microsoft.com/azure/azure-monitor/app/pricing#viewing-application-insights-usage-on-your-azure-bill) with the usage you observe for each Application Insights resource very complicated.
+
+> [!WARNING]
+> Because of the complexity of tracking and understanding usage of Application Insights resources in the legacy Enterprise (Per Node) tier, we strongly recommend using the current Pay-As-You-Go pricing tier.
### Per Node tier and Operations Management Suite subscription entitlements
@@ -344,4 +347,3 @@ You can write a script to set the pricing tier by using Azure Resource Managemen
[start]: ./app-insights-overview.md [pricing]: https://azure.microsoft.com/pricing/details/application-insights/ [pricing]: https://azure.microsoft.com/pricing/details/application-insights/-
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-metric-near-real-time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-metric-near-real-time.md
@@ -5,7 +5,7 @@
Previously updated : 12/15/2020 Last updated : 02/10/2021
@@ -30,7 +30,7 @@ Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.AppConfiguration/configurationStores |Yes | No | [App Configuration](./metrics-supported.md#microsoftappconfigurationconfigurationstores) | |Microsoft.AppPlatform/Spring | Yes | No | [Azure Spring Cloud](./metrics-supported.md#microsoftappplatformspring) | |Microsoft.Automation/automationAccounts | Yes| No | [Automation Accounts](./metrics-supported.md#microsoftautomationautomationaccounts) |
-|Microsoft.AVS/privateClouds | No | No | |
+|Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](./metrics-supported.md#microsoftavsprivateclouds) |
|Microsoft.Batch/batchAccounts | Yes | No | [Batch Accounts](./metrics-supported.md#microsoftbatchbatchaccounts) | |Microsoft.Cache/Redis | Yes | Yes | [Azure Cache for Redis](./metrics-supported.md#microsoftcacheredis) | |Microsoft.ClassicCompute/domainNames/slots/roles | No | No | [Classic Cloud Services](./metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) |
@@ -49,7 +49,7 @@ Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.DataBoxEdge/dataBoxEdgeDevices | Yes | Yes | [Data Box](./metrics-supported.md#microsoftdataboxedgedataboxedgedevices) | |Microsoft.DataFactory/datafactories| Yes| No | [Data Factories V1](./metrics-supported.md#microsoftdatafactorydatafactories) | |Microsoft.DataFactory/factories |Yes | No | [Data Factories V2](./metrics-supported.md#microsoftdatafactoryfactories) |
-|Microsoft.DataShare/accounts | Yes | No | |
+|Microsoft.DataShare/accounts | Yes | No | [Data Shares](./metrics-supported.md#microsoftdatashareaccounts) |
|Microsoft.DBforMariaDB/servers | No | No | [DB for MariaDB](./metrics-supported.md#microsoftdbformariadbservers) | |Microsoft.DBforMySQL/servers | No | No |[DB for MySQL](./metrics-supported.md#microsoftdbformysqlservers)| |Microsoft.DBforPostgreSQL/servers | No | No | [DB for PostgreSQL](./metrics-supported.md#microsoftdbforpostgresqlservers)|
@@ -57,7 +57,7 @@ Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.DBforPostgreSQL/flexibleServers | Yes | No | [DB for PostgreSQL (flexible servers)](./metrics-supported.md#microsoftdbforpostgresqlflexibleservers)| |Microsoft.Devices/IotHubs | Yes | No |[IoT Hub](./metrics-supported.md#microsoftdevicesiothubs) | |Microsoft.Devices/provisioningServices| Yes | No | [Device Provisioning Services](./metrics-supported.md#microsoftdevicesprovisioningservices) |
-|Microsoft.DigitalTwins/digitalTwinsInstances | Yes | No | |
+|Microsoft.DigitalTwins/digitalTwinsInstances | Yes | No | [Digital Twins](./metrics-supported.md#microsoftdigitaltwinsdigitaltwinsinstances) |
|Microsoft.DocumentDB/databaseAccounts | Yes | No | [Cosmos DB](./metrics-supported.md#microsoftdocumentdbdatabaseaccounts) | |Microsoft.EventGrid/domains | Yes | No | [Event Grid Domains](./metrics-supported.md#microsofteventgriddomains) | |Microsoft.EventGrid/systemTopics | Yes | No | [Event Grid System Topics](./metrics-supported.md#microsofteventgridsystemtopics) |
@@ -82,10 +82,10 @@ Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Network/expressRouteCircuits | Yes | No |[ExpressRoute Circuits](./metrics-supported.md#microsoftnetworkexpressroutecircuits) | |Microsoft.Network/expressRoutePorts | Yes | No |[ExpressRoute Direct](./metrics-supported.md#microsoftnetworkexpressrouteports) | |Microsoft.Network/loadBalancers (only for Standard SKUs)| Yes| No | [Load Balancers](./metrics-supported.md#microsoftnetworkloadbalancers) |
-|Microsoft.Network/natGateways| No | No | |
-|Microsoft.Network/privateEndpoints| No | No | |
-|Microsoft.Network/privateLinkServices| No | No |
-|Microsoft.Network/publicipaddresses | No | No |[Public IP Addresses](./metrics-supported.md#microsoftnetworkpublicipaddresses)|
+|Microsoft.Network/natGateways| No | No | [NAT Gateways](./metrics-supported.md#microsoftnetworknatgateways) |
+|Microsoft.Network/privateEndpoints| No | No | [Private Endpoints](./metrics-supported.md#microsoftnetworkprivateendpoints) |
+|Microsoft.Network/privateLinkServices| No | No | [Private Link Services](./metrics-supported.md#microsoftnetworkprivatelinkservices) |
+|Microsoft.Network/publicipaddresses | No | No | [Public IP Addresses](./metrics-supported.md#microsoftnetworkpublicipaddresses)|
|Microsoft.Network/trafficManagerProfiles | Yes | No | [Traffic Manager Profiles](./metrics-supported.md#microsoftnetworktrafficmanagerprofiles) | |Microsoft.OperationalInsights/workspaces| Yes | No | [Log Analytics workspaces](./metrics-supported.md#microsoftoperationalinsightsworkspaces)| |Microsoft.Peering/peerings | Yes | No | [Peerings](./metrics-supported.md#microsoftpeeringpeerings) |
@@ -102,7 +102,7 @@ Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Storage/storageAccounts/fileServices | Yes| No | [Storage Accounts - Files](./metrics-supported.md#microsoftstoragestorageaccountsfileservices) | |Microsoft.Storage/storageAccounts/queueServices | Yes| No | [Storage Accounts - Queues](./metrics-supported.md#microsoftstoragestorageaccountsqueueservices) | |Microsoft.Storage/storageAccounts/tableServices | Yes| No | [Storage Accounts - Tables](./metrics-supported.md#microsoftstoragestorageaccountstableservices) |
-|Microsoft.StorageCache/caches | Yes | No | |
+|Microsoft.StorageCache/caches | Yes | No | [HPC Caches](./metrics-supported.md#microsoftstoragecachecaches) |
|Microsoft.StorageSync/storageSyncServices | Yes | No | [Storage Sync Services](./metrics-supported.md#microsoftstoragesyncstoragesyncservices) | |Microsoft.StreamAnalytics/streamingjobs | Yes | No | [Stream Analytics](./metrics-supported.md#microsoftstreamanalyticsstreamingjobs) | |Microsoft.Synapse/workspaces | Yes | No | [Synapse Analytics](./metrics-supported.md#microsoftsynapseworkspaces) |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-secure-webhook-connections-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-secure-webhook-connections-servicenow.md
@@ -19,7 +19,8 @@ The following sections provide details about how to connect your ServiceNow prod
Ensure that you've met the following prerequisites: * Azure AD is registered.
-* You have the supported version of The ServiceNow Event Management - ITOM (version Orlando or later).
+* You have a supported version of ServiceNow Event Management - ITOM (version New York or later).
+* The [Application](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ac4c9c57dbb1d090561b186c1396191a/1.2.0?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%26q%3Devent%2520management%2520connectors&sl=sh) is installed on your ServiceNow instance.
## Configure the ServiceNow connection
@@ -28,4 +29,4 @@ Ensure that you've met the following prerequisites:
2. Follow the instructions according to the version: * [Paris](https://docs.servicenow.com/bundle/paris-it-operations-management/page/product/event-management/task/azure-events-authentication.html) * [Orlando](https://docs.servicenow.com/bundle/orlando-it-operations-management/page/product/event-management/task/azure-events-authentication.html)
- * [New York](https://docs.servicenow.com/bundle/newyork-it-operations-management/page/product/event-management/task/azure-events-authentication.html)
+ * [New York](https://docs.servicenow.com/bundle/newyork-it-operations-management/page/product/event-management/task/azure-events-authentication.html)
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-create-volumes-smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
@@ -321,5 +321,6 @@ You can set permissions for a file or folder by using the **Security** tab of th
* [Mount or unmount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) * [SMB FAQs](./azure-netapp-files-faqs.md#smb-faqs)
+* [Troubleshoot SMB or dual-protocol volumes](troubleshoot-dual-protocol-volumes.md)
* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md) * [Install a new Active Directory forest using Azure CLI](/windows-server/identity/ad-ds/deploy/virtual-dc/adds-on-azure-vm)
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/create-volumes-dual-protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
@@ -135,4 +135,4 @@ Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
## Next steps * [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md)
-* [Troubleshoot dual-protocol volumes](troubleshoot-dual-protocol-volumes.md)
+* [Troubleshoot SMB or dual-protocol volumes](troubleshoot-dual-protocol-volumes.md)
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/troubleshoot-dual-protocol-volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-dual-protocol-volumes.md
@@ -1,6 +1,6 @@
Title: Troubleshoot dual-protocol volumes for Azure NetApp Files | Microsoft Docs
-description: Describes error messages and resolutions that can help you troubleshoot dual-protocol issues for Azure NetApp Files.
+ Title: Troubleshoot SMB or dual-protocol volumes for Azure NetApp Files | Microsoft Docs
+description: Describes error messages and resolutions that can help you troubleshoot SMB or dual-protocol issues for Azure NetApp Files.
documentationcenter: ''
@@ -13,24 +13,34 @@
na ms.devlang: na Previously updated : 01/22/2021 Last updated : 02/02/2021
-# Troubleshoot dual-protocol volumes
+# Troubleshoot SMB or dual-protocol volumes
This article describes resolutions to error conditions you might have when creating or managing dual-protocol volumes.
-## Error conditions and resolutions
+## Errors for dual-protocol volumes
-| Error conditions | Resolution |
+| Error conditions | Resolutions |
|-|-| | LDAP over TLS is enabled, and dual-protocol volume creation fails with the error `This Active Directory has no Server root CA Certificate`. | If this error occurs when you are creating a dual-protocol volume, make sure that the root CA certificate is uploaded in your NetApp account. |
-| Dual-protocol volume creation fails with the error `Failed to validate LDAP configuration, try again after correcting LDAP configuration`. | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `1.1.1.1`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `1.1.1.1` -> `contoso.com`. |
-| Dual-protocol volume creation fails with the error `Failed to create the Active Directory machine account \\\"TESTAD-C8DD\\\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\\n [ 434] Loaded the preliminary configuration.\\n [ 537] Successfully connected to ip 1.1.1.1, port 88 using TCP\\n**[ 950] FAILURE`. | This error indicates that the AD password is incorrect when Active Directory is joined to the NetApp account. Update the AD connection with the correct password and try again. |
+| Dual-protocol volume creation fails with the error `Failed to validate LDAP configuration, try again after correcting LDAP configuration`. | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.X.X.X`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.X.X.X` -> `AD1.contoso.com`. |
+| Dual-protocol volume creation fails with the error `Failed to create the Active Directory machine account \\\"TESTAD-C8DD\\\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\\n [ 434] Loaded the preliminary configuration.\\n [ 537] Successfully connected to ip 10.X.X.X, port 88 using TCP\\n**[ 950] FAILURE`. | This error indicates that the AD password is incorrect when Active Directory is joined to the NetApp account. Update the AD connection with the correct password and try again. |
| Dual-protocol volume creation fails with the error `Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available`. | This error indicates that DNS is not reachable. The reason might be that the DNS IP is incorrect, or that there is a networking issue. Check the DNS IP entered in the AD connection and make sure that the IP is correct. <br> Also, make sure that the AD and the volume are in the same region and in the same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets.| | A "Permission denied" error occurs when mounting a dual-protocol volume. | A dual-protocol volume supports both the NFS and SMB protocols. When you try to access the mounted volume on the UNIX system, the system attempts to map the UNIX user you use to a Windows user. If no mapping is found, the "Permission denied" error occurs. <br> This situation also applies when you use the 'root' user for the access. <br> To avoid the "Permission denied" issue, make sure that Windows Active Directory includes `pcuser` before you access the mount point. If you add `pcuser` after encountering the "Permission denied" issue, wait 24 hours for the cache entry to clear before trying the access again. |
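As a side note on the PTR guidance in the table above: the reverse-lookup query name that DNS uses for an IPv4 address can be derived with the Python standard library. The address used here is a placeholder:

```python
import ipaddress

def ptr_query_name(ip):
    """Return the reverse-lookup (PTR) query name for an IPv4 address.

    The PTR record stored under this name in the reverse lookup zone
    should map back to the AD host's fully qualified domain name
    (for example, AD1.contoso.com).
    """
    return ipaddress.ip_address(ip).reverse_pointer

# The octets are reversed and the in-addr.arpa suffix is appended.
print(ptr_query_name("10.1.2.3"))
```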
+## Common errors for SMB and dual-protocol volumes
+
+| Error conditions | Resolutions |
+|-|-|
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if ADDS and the volume are being deployed in the same region.</li> <li>Check if ADDS and the volume are using the same VNet. If they are using different VNets, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](azure-netapp-files-create-volumes-smb.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Azure ADDS. Azure ADDS should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine accounts. </li> <li> If you use Azure ADDS, make sure that the user is part of the Azure AD group `Azure AD DC Administrators`. </li></ul> |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-A452\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\n [ 567] Loaded the preliminary configuration.\n [ 671] Successfully connected to ip 10.X.X.X, port 88 using TCP\n**[ 1099] FAILURE: Could not authenticate as\n** 'user@contoso.com': CIFS server account password does\n** not match password stored in Active Directory\n** (KRB5KDC_ERR_PREAUTH_FAILED)\n. "}]}` | Make sure that the password entered for joining the AD connection is correct. |
+| The SMB or dual-protocol volume creation fails with the following error: `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.X.X.X, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Azure ADDS, make sure that the organizational unit path is `OU=AADDC Computers`. |
+ ## Next steps
+* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
* [Create a dual-protocol volume](create-volumes-dual-protocol.md) * [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
@@ -53,7 +53,7 @@ Virtual machines created from Marketplace resources with plans attached can't be
To move virtual machines configured with Azure Backup, you must delete the restore points from the vault.
-If [soft delete](../../../backup/backup-azure-security-feature-cloud.md) is enabled for your virtual machine, you can't move the virtual machine while those restore points are kept. Either [disable soft delete](../../../backup/backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete) or wait 14 days after deleting the restore points.
+If [soft delete](../../../backup/soft-delete-virtual-machines.md) is enabled for your virtual machine, you can't move the virtual machine while those restore points are kept. Either [disable soft delete](../../../backup/backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete) or wait 14 days after deleting the restore points.
### Portal
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-tutorial-linked-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-tutorial-linked-template.md
@@ -1,7 +1,7 @@
Title: Tutorial - Deploy a linked template description: Learn how to deploy a linked template Previously updated : 01/12/2021 Last updated : 02/10/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-tutorial-local-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-tutorial-local-template.md
@@ -1,7 +1,7 @@
Title: Tutorial - Deploy a local Azure Resource Manager template description: Learn how to deploy an Azure Resource Manager template (ARM template) from your local computer Previously updated : 01/12/2021 Last updated : 02/10/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
@@ -2,7 +2,7 @@
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 01/04/2021 Last updated : 02/10/2021 # Resource functions for ARM templates
@@ -198,6 +198,7 @@ The possible uses of list* are shown in the following table.
| Microsoft.ApiManagement/service/identityProviders | [listSecrets](/rest/api/apimanagement/2019-12-01/identityprovider/listsecrets) |
| Microsoft.ApiManagement/service/namedValues | [listValue](/rest/api/apimanagement/2019-12-01/namedvalue/listvalue) |
| Microsoft.ApiManagement/service/openidConnectProviders | [listSecrets](/rest/api/apimanagement/2019-12-01/openidconnectprovider/listsecrets) |
+| Microsoft.ApiManagement/service/subscriptions | [listSecrets](/rest/api/apimanagement/2019-12-01/subscription/listsecrets) |
| Microsoft.AppConfiguration/configurationStores | [ListKeys](/rest/api/appconfiguration/configurationstores/listkeys) |
| Microsoft.AppPlatform/Spring | [listTestKeys](/rest/api/azurespringcloud/services/listtestkeys) |
| Microsoft.Automation/automationAccounts | [listKeys](/rest/api/automation/keys/listbyautomationaccount) |
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/migrate-dtu-to-vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/migrate-dtu-to-vcore.md
@@ -9,7 +9,7 @@
Previously updated : 05/28/2020 Last updated : 02/09/2021 # Migrate Azure SQL Database from the DTU-based model to the vCore-based model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
@@ -46,24 +46,33 @@ Execute this query in the context of the database to be migrated, rather than in
```SQL
WITH dtu_vcore_map AS
(
-SELECT TOP (1) rg.slo_name,
- CASE WHEN rg.slo_name LIKE '%SQLG4%' THEN 'Gen4'
- WHEN rg.slo_name LIKE '%SQLGZ%' THEN 'Gen4'
- WHEN rg.slo_name LIKE '%SQLG5%' THEN 'Gen5'
- WHEN rg.slo_name LIKE '%SQLG6%' THEN 'Gen5'
- END AS dtu_hardware_gen,
- s.scheduler_count * CAST(rg.instance_cap_cpu/100. AS decimal(3,2)) AS dtu_logical_cpus,
- CAST((jo.process_memory_limit_mb / s.scheduler_count) / 1024. AS decimal(4,2)) AS dtu_memory_per_core_gb
+SELECT rg.slo_name,
+ DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS dtu_service_tier,
+ CASE WHEN rg.slo_name LIKE '%SQLG4%' THEN 'Gen4'
+ WHEN rg.slo_name LIKE '%SQLGZ%' THEN 'Gen4'
+ WHEN rg.slo_name LIKE '%SQLG5%' THEN 'Gen5'
+ WHEN rg.slo_name LIKE '%SQLG6%' THEN 'Gen5'
+ WHEN rg.slo_name LIKE '%SQLG7%' THEN 'Gen5'
+ END AS dtu_hardware_gen,
+ s.scheduler_count * CAST(rg.instance_cap_cpu/100. AS decimal(3,2)) AS dtu_logical_cpus,
+ CAST((jo.process_memory_limit_mb / s.scheduler_count) / 1024. AS decimal(4,2)) AS dtu_memory_per_core_gb
FROM sys.dm_user_db_resource_governance AS rg
CROSS JOIN (SELECT COUNT(1) AS scheduler_count FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE') AS s
CROSS JOIN sys.dm_os_job_object AS jo
WHERE dtu_limit > 0 AND DB_NAME() <> 'master'
+ AND
+ rg.database_id = DB_ID()
)
SELECT dtu_logical_cpus, dtu_hardware_gen, dtu_memory_per_core_gb,
+ dtu_service_tier,
+ CASE WHEN dtu_service_tier = 'Basic' THEN 'General Purpose'
+ WHEN dtu_service_tier = 'Standard' THEN 'General Purpose or Hyperscale'
+ WHEN dtu_service_tier = 'Premium' THEN 'Business Critical or Hyperscale'
+ END AS vcore_service_tier,
CASE WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus
     WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus * 0.7
END AS Gen4_vcores,
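The hardware-generation mapping in the query fragment above reduces to simple arithmetic: Gen4 logical CPUs map 1:1 to Gen4 vCores, while Gen5 logical CPUs are scaled by 0.7. A minimal Python sketch of that CASE expression (function name is illustrative, not part of the query):

```python
def gen4_vcores(dtu_logical_cpus, dtu_hardware_gen):
    """Mirror of the CASE expression above: Gen4 maps 1:1, Gen5 scales by 0.7."""
    if dtu_hardware_gen == 'Gen4':
        return dtu_logical_cpus
    if dtu_hardware_gen == 'Gen5':
        return dtu_logical_cpus * 0.7
    return None  # unknown hardware generation
```

For example, a DTU database on Gen5 hardware with 10 logical CPUs maps to roughly 7 Gen4-equivalent vCores.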
@@ -91,7 +100,7 @@ Besides the number of vCores (logical CPUs) and the hardware generation, several
- For the same hardware generation and the same number of vCores, IOPS and transaction log throughput resource limits for vCore databases are often higher than for DTU databases. For IO-bound workloads, it may be possible to lower the number of vCores in the vCore model to achieve the same level of performance. Resource limits for DTU and vCore databases in absolute values are exposed in the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) view. Comparing these values between the DTU database to be migrated and a vCore database using an approximately matching service objective will help you select the vCore service objective more precisely.
- The mapping query also returns the amount of memory per core for the DTU database or elastic pool to be migrated, and for each hardware generation in the vCore model. Ensuring similar or higher total memory after migration to vCore is important for workloads that require a large memory data cache to achieve sufficient performance, or workloads that require large memory grants for query processing. For such workloads, depending on actual performance, it may be necessary to increase the number of vCores to get sufficient total memory.
- The [historical resource utilization](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) of the DTU database should be considered when choosing the vCore service objective. DTU databases with consistently under-utilized CPU resources may need fewer vCores than the number returned by the mapping query. Conversely, DTU databases where consistently high CPU utilization causes inadequate workload performance may require more vCores than returned by the query.
-- If migrating databases with intermittent or unpredictable usage patterns, consider the use of [Serverless](serverless-tier-overview.md) compute tier.
Note that the max number of concurrent workers (requests) in serverless is 75% the limit in provisioned compute for the same number of max vcores configured. Also, the max memory available in serverless is 3 GB times the maximum number of vcores configured; for example, max memory is 120 GB when 40 max vcores are configured.
+- If migrating databases with intermittent or unpredictable usage patterns, consider the use of [Serverless](serverless-tier-overview.md) compute tier. Note that the max number of concurrent workers (requests) in serverless is 75% the limit in provisioned compute for the same number of max vcores configured. Also, the max memory available in serverless is 3 GB times the maximum number of vcores configured; for example, max memory is 120 GB when 40 max vcores are configured.
- In the vCore model, the supported maximum database size may differ depending on hardware generation. For large databases, check supported maximum sizes in the vCore model for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).
- For elastic pools, the [DTU](resource-limits-dtu-elastic-pools.md) and [vCore](resource-limits-vcore-elastic-pools.md) models have differences in the maximum supported number of databases per pool. This should be considered when migrating elastic pools with many databases.
- Some hardware generations may not be available in every region. Check availability under [Hardware Generations](service-tiers-vcore.md#hardware-generations).
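The serverless limits quoted above (max workers at 75% of the provisioned limit for the same max vCores, and max memory of 3 GB per configured max vCore) can be sanity-checked with a short sketch; the function and parameter names are illustrative:

```python
def serverless_limits(max_vcores, provisioned_max_workers):
    """Apply the two serverless sizing rules stated above."""
    max_memory_gb = 3 * max_vcores                # 3 GB per configured max vCore
    max_workers = 0.75 * provisioned_max_workers  # 75% of the provisioned-compute limit
    return max_memory_gb, max_workers
```

For 40 max vCores this reproduces the 120 GB max-memory figure given in the text.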
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/serverless-tier-overview.md
@@ -19,14 +19,14 @@ Serverless is a compute tier for single databases in Azure SQL Database that aut
## Serverless compute tier
-The serverless compute tier for single databases in Azure SQL Database is parameterized by a compute autoscaling range and an autopause delay. The configuration of these parameters shapes the database performance experience and compute cost.
+The serverless compute tier for single databases in Azure SQL Database is parameterized by a compute autoscaling range and an auto-pause delay. The configuration of these parameters shapes the database performance experience and compute cost.
![serverless billing](./media/serverless-tier-overview/serverless-billing.png)

### Performance configuration

- The **minimum vCores** and **maximum vCores** are configurable parameters that define the range of compute capacity available for the database. Memory and IO limits are proportional to the vCore range specified.
-- The **autopause delay** is a configurable parameter that defines the period of time the database must be inactive before it is automatically paused. The database is automatically resumed when the next login or other activity occurs. Alternatively, autopausing can be disabled.
+- The **auto-pause delay** is a configurable parameter that defines the period of time the database must be inactive before it is automatically paused. The database is automatically resumed when the next login or other activity occurs. Alternatively, automatic pausing can be disabled.
### Cost
@@ -42,16 +42,16 @@ For more cost details, see [Billing](serverless-tier-overview.md#billing).
Serverless is price-performance optimized for single databases with intermittent, unpredictable usage patterns that can afford some delay in compute warm-up after idle usage periods. In contrast, the provisioned compute tier is price-performance optimized for single databases or multiple databases in elastic pools with higher average usage that cannot afford any delay in compute warm-up.
-### Scenarios well-suited for serverless compute
+### Scenarios well suited for serverless compute
- Single databases with intermittent, unpredictable usage patterns interspersed with periods of inactivity and lower average compute utilization over time.
- Single databases in the provisioned compute tier that are frequently rescaled and customers who prefer to delegate compute rescaling to the service.
- New single databases without usage history where compute sizing is difficult or not possible to estimate prior to deployment in SQL Database.
-### Scenarios well-suited for provisioned compute
+### Scenarios well suited for provisioned compute
- Single databases with more regular, predictable usage patterns and higher average compute utilization over time.
-- Databases that cannot tolerate performance trade-offs resulting from more frequent memory trimming or delay in autoresuming from a paused state.
+- Databases that cannot tolerate performance trade-offs resulting from more frequent memory trimming or delays in resuming from a paused state.
- Multiple databases with intermittent, unpredictable usage patterns that can be consolidated into elastic pools for better price-performance optimization.

## Comparison with provisioned compute tier
@@ -87,42 +87,42 @@ Unlike provisioned compute databases, memory from the SQL cache is reclaimed fro
- Active cache utilization is considered low when the total size of the most recently used cache entries falls below a threshold for a period of time.
- When cache reclamation is triggered, the target cache size is reduced incrementally to a fraction of its previous size and reclaiming only continues if usage remains low.
- When cache reclamation occurs, the policy for selecting cache entries to evict is the same selection policy as for provisioned compute databases when memory pressure is high.
-- The cache size is never reduced below the min memory limit as defined by min vCores which can be configured.
+- The cache size is never reduced below the min memory limit as defined by min vCores, that can be configured.
In both serverless and provisioned compute databases, cache entries may be evicted if all available memory is used.
-Note that when CPU utilization is low, active cache utilization can remain high depending on the usage pattern and prevent memory reclamation. Also, there can be additional delay after user activity stops before memory reclamation occurs due to periodic background processes responding to prior user activity. For example, delete operations and QDS cleanup tasks generate ghost records that are marked for deletion, but are not physically deleted until the ghost cleanup process runs which can involve reading data pages into cache.
+Note that when CPU utilization is low, active cache utilization can remain high depending on the usage pattern and prevent memory reclamation. Also, there can be additional delays after user activity stops before memory reclamation occurs due to periodic background processes responding to prior user activity. For example, delete operations and QDS cleanup tasks generate ghost records that are marked for deletion, but are not physically deleted until the ghost cleanup process runs that can involve reading data pages into cache.
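The incremental cache-reclamation behavior described above can be sketched as a single shrink step. The shrink fraction below is an assumption for illustration (the document doesn't state the actual fraction), but the min-memory floor comes directly from the text:

```python
def reclaim_target(current_cache_gb, min_memory_gb, shrink_fraction=0.5):
    """One reclamation step: reduce the target cache size to a fraction of its
    previous size, never dropping below the min-memory limit set by min vCores.
    shrink_fraction is a placeholder, not the service's documented value."""
    return max(current_cache_gb * shrink_fraction, min_memory_gb)
```

Repeated calls model the "reduced incrementally ... only continues if usage remains low" behavior, converging to the min-memory floor.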
#### Cache hydration

The SQL cache grows as data is fetched from disk in the same way and with the same speed as for provisioned databases. When the database is busy, the cache is allowed to grow unconstrained up to the max memory limit.
-## Autopausing and autoresuming
+## Auto-pause and auto-resume
-### Autopausing
+### Auto-pause
-Autopausing is triggered if all of the following conditions are true for the duration of the autopause delay:
+Auto-pause is triggered if all of the following conditions are true for the duration of the auto-pause delay:
- Number sessions = 0
- CPU = 0 for user workload running in the user pool
-An option is provided to disable autopausing if desired.
+An option is provided to disable auto-pausing if desired.
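The auto-pause conditions listed above amount to a small predicate: both conditions must hold for the full auto-pause delay, and pausing can be disabled outright. A hedged Python sketch (names are illustrative; the real evaluation happens service-side):

```python
def should_auto_pause(sessions, user_pool_cpu, idle_minutes,
                      auto_pause_delay_minutes, auto_pause_enabled=True):
    """True only when zero sessions and zero user-pool CPU have persisted
    for at least the configured auto-pause delay."""
    if not auto_pause_enabled:
        return False  # database stays online regardless of inactivity
    return (sessions == 0
            and user_pool_cpu == 0
            and idle_minutes >= auto_pause_delay_minutes)
```

Any session or user-workload CPU activity resets the clock, which is why a single login keeps the database online.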
-The following features do not support autopausing, but do support auto-scaling. If any of the following features are used, then autopausing should be disabled and the database will remain online regardless of the duration of database inactivity:
+The following features do not support auto-pausing, but do support auto-scaling. If any of the following features are used, then auto-pausing should be disabled and the database will remain online regardless of the duration of database inactivity:
- Geo-replication (active geo-replication and auto-failover groups).
- Long-term backup retention (LTR).
-- The sync database used in SQL data sync. Unlike sync databases, hub and member databases support autopausing.
+- The sync database used in SQL data sync. Unlike sync databases, hub and member databases support auto-pausing.
- DNS aliasing
- The job database used in Elastic Jobs (preview).
-Autopausing is temporarily prevented during the deployment of some service updates which require the database be online. In such cases, autopausing becomes allowed again once the service update completes.
+Auto-pausing is temporarily prevented during the deployment of some service updates which require the database be online. In such cases, auto-pausing becomes allowed again once the service update completes.
-### Autoresuming
+### Auto-resuming
-Autoresuming is triggered if any of the following conditions are true at any time:
+Auto-resuming is triggered if any of the following conditions are true at any time:
-|Feature|Autoresume trigger|
+|Feature|Auto-resume trigger|
|||
|Authentication and authorization|Login|
|Threat detection|Enabling/disabling threat detection settings at the database or server level.<br>Modifying threat detection settings at the database or server level.|
@@ -133,7 +133,7 @@ Autoresuming is triggered if any of the following conditions are true at any tim
|Vulnerability assessment|Ad hoc scans and periodic scans if enabled|
|Query (performance) data store|Modifying or viewing query store settings|
|Performance recommendations|Viewing or applying performance recommendations|
-|Autotuning|Application and verification of autotuning recommendations such as auto-indexing|
+|Auto-tuning|Application and verification of auto-tuning recommendations such as auto-indexing|
|Database copying|Create database as copy.<br>Export to a BACPAC file.|
|SQL data sync|Synchronization between hub and member databases that run on a configurable schedule or are performed manually|
|Modifying certain database metadata|Adding new database tags.<br>Changing max vCores, min vCores, or autopause delay.|
@@ -141,7 +141,7 @@ Autoresuming is triggered if any of the following conditions are true at any tim
Monitoring, management, or other solutions performing any of the operations listed above will trigger auto-resuming.
-Autoresuming is also triggered during the deployment of some service updates which require the database be online.
+Auto-resuming is also triggered during the deployment of some service updates which require the database be online.
### Connectivity
@@ -149,7 +149,7 @@ If a serverless database is paused, then the first login will resume the databas
### Latency
-The latency to autoresume and autopause a serverless database is generally order of 1 minute to autoresume and 1-10 minutes to autopause.
+The latency to auto-resume and auto-pause a serverless database is generally order of 1 minute to auto-resume and 1-10 minutes to auto-pause.
### Customer managed transparent data encryption (BYOK)
@@ -203,7 +203,7 @@ CREATE DATABASE testdb
( EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_1' ) ;
```
-For details, see [CREATE DATABASE](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current).
+For details, see [CREATE DATABASE](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current&preserve-view=true).
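The service objective name used above encodes the tier, compute model, hardware generation, and max vCores: `GP_S_Gen5_1` is General Purpose, serverless (`S`), Gen5 hardware, 1 max vCore. A hypothetical parser illustrating this naming convention (not an official API; it only handles serverless-style names):

```python
def parse_service_objective(slo):
    """Split a serverless service objective such as 'GP_S_Gen5_1'
    into its components. Purely illustrative."""
    tier, compute, hardware, max_vcores = slo.split('_')
    return {
        'tier': tier,                  # 'GP' = General Purpose
        'serverless': compute == 'S',  # 'S' marks the serverless compute tier
        'hardware': hardware,          # e.g. 'Gen5'
        'max_vcores': int(max_vcores),
    }
```

Provisioned objectives (for example `GP_Gen5_2`) follow a three-part pattern and would need separate handling.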
### Move a database from the provisioned compute tier into the serverless compute tier
@@ -228,14 +228,14 @@ az sql db update -g $resourceGroupName -s $serverName -n $databaseName `
#### Use Transact-SQL (T-SQL)
-When using T-SQL, default values are applied for the min vcores and autopause delay.
+When using T-SQL, default values are applied for the min vcores and auto-pause delay.
```sql
ALTER DATABASE testdb MODIFY ( SERVICE_OBJECTIVE = 'GP_S_Gen5_1') ;
```
-For details, see [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current).
+For details, see [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current&preserve-view=true).
### Move a database from the serverless compute tier into the provisioned compute tier
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/licensing-model-azure-hybrid-benefit-ahb-change https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/licensing-model-azure-hybrid-benefit-ahb-change.md
@@ -114,7 +114,6 @@ Changing the license model is:
- Only supported for the Standard and Enterprise editions of SQL Server. License changes for Express, Web, and Developer are not supported.
- Only supported for virtual machines deployed through the Azure Resource Manager model. Virtual machines deployed through the classic model are not supported.
- Available only for the public or Azure Government clouds.
- - Only supported on virtual machines that have a single network interface (NIC).
> [!Note]
> Only SQL Server core-based licensing with Software Assurance or subscription licenses are eligible for Azure Hybrid Benefit. If you are using Server + CAL licensing for SQL Server and you have Software Assurance, you can use bring-your-own-license to an Azure SQL Server virtual machine image to leverage license mobility for these servers, but you cannot leverage the other features of Azure Hybrid Benefit.
@@ -132,10 +131,6 @@ This error occurs when you try to change the license model on a SQL Server VM th
You'll need to register your subscription with the resource provider, and then [register your SQL Server VM with the SQL IaaS Agent Extension](sql-agent-extension-manually-register-single-vm.md).
-**The virtual machine '\<vmname\>' has more than one NIC associated**
-
-This error occurs on virtual machines that have more than one NIC. Remove one of the NICs before you change the licensing model. Although you can add the NIC back to the VM after you change the license model, operations in the Azure portal such as automatic backup and patching will no longer be supported.
- ## Next steps
backup https://docs.microsoft.com/en-us/azure/backup/encryption-at-rest-with-cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/encryption-at-rest-with-cmk.md
@@ -31,6 +31,7 @@ This article discusses the following:
- The Recovery Services vault can be encrypted only with keys stored in an Azure Key Vault, located in the **same region**. Also, keys must be **RSA 2048 keys** only and should be in **enabled** state.
- Moving CMK encrypted Recovery Services vault across Resource Groups and Subscriptions isn't currently supported.
+- When you move a Recovery Services vault already encrypted with customer-managed keys to a new tenant, you'll need to update the Recovery Services vault to recreate and reconfigure the vault's managed identity and CMK (which should be in the new tenant). If this isn't done, the backup and restore operations will start failing. Also, any role-based access control (RBAC) permissions set up within the subscription will need to be reconfigured.
- This feature can be configured through the Azure portal and PowerShell.
@@ -114,32 +115,6 @@ You now need to permit the Recovery Services vault to access the Azure Key Vault
1. Select **Save** to save changes made to the access policy of the Azure Key Vault.
-**With PowerShell**:
-
-Use the [Set-AzRecoveryServicesVaultProperty](/powershell/module/az.recoveryservices/set-azrecoveryservicesvaultproperty) command to enable encryption using customer-managed keys, and to assign or update the encryption key to be used.
-
-Example:
-
-```azurepowershell
-$keyVault = Get-AzKeyVault -VaultName "testkeyvault" -ResourceGroupName "testrg"
-$key = Get-AzKeyVaultKey -VaultName $keyVault -Name "testkey"
-Set-AzRecoveryServicesVaultProperty -EncryptionKeyId $key.ID -KeyVaultSubscriptionId "xxxx-yyyy-zzzz" -VaultId $vault.ID
--
-$enc=Get-AzRecoveryServicesVaultProperty -VaultId $vault.ID
-$enc.encryptionProperties | fl
-```
-
-Output:
-
-```output
-EncryptionAtRestType : CustomerManaged
-KeyUri : testkey
-SubscriptionId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
-LastUpdateStatus : Succeeded
-InfrastructureEncryptionState : Disabled
-```
-
### Enable soft-delete and purge protection on the Azure Key Vault

You need to **enable soft delete and purge protection** on your Azure Key Vault that stores your encryption key. You can do this from the Azure Key Vault UI as shown below. (Alternatively, these properties can be set while creating the Key Vault). Read more about these Key Vault properties [here](../key-vault/general/soft-delete-overview.md).
@@ -192,7 +167,7 @@ You can also enable soft delete and purge protection through PowerShell using th
Once the above are ensured, continue with selecting the encryption key for your vault.
-To assign the key:
+#### To assign the key in the portal
1. Go to your Recovery Services vault -> **Properties**
@@ -226,6 +201,32 @@ To assign the key:
![Activity log](./media/encryption-at-rest-with-cmk/activity-log.png)
+#### To assign the key with PowerShell
+
+Use the [Set-AzRecoveryServicesVaultProperty](/powershell/module/az.recoveryservices/set-azrecoveryservicesvaultproperty) command to enable encryption using customer-managed keys, and to assign or update the encryption key to be used.
+
+Example:
+
+```azurepowershell
+$keyVault = Get-AzKeyVault -VaultName "testkeyvault" -ResourceGroupName "testrg"
+$key = Get-AzKeyVaultKey -VaultName $keyVault -Name "testkey"
+Set-AzRecoveryServicesVaultProperty -EncryptionKeyId $key.ID -KeyVaultSubscriptionId "xxxx-yyyy-zzzz" -VaultId $vault.ID
++
+$enc=Get-AzRecoveryServicesVaultProperty -VaultId $vault.ID
+$enc.encryptionProperties | fl
+```
+
+Output:
+
+```output
+EncryptionAtRestType : CustomerManaged
+KeyUri : testkey
+SubscriptionId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+LastUpdateStatus : Succeeded
+InfrastructureEncryptionState : Disabled
+```
+
>[!NOTE]
> This process remains the same when you wish to update or change the encryption key. If you wish to update and use a key from another Key Vault (different from the one that's being currently used), make sure that:
>
@@ -334,4 +335,4 @@ Using CMK encryption for Backup doesn't incur any additional costs to you. You m
## Next steps -- [Overview of security features in Azure Backup](security-overview.md)
+- [Overview of security features in Azure Backup](security-overview.md)
backup https://docs.microsoft.com/en-us/azure/backup/private-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/private-endpoints.md
@@ -22,26 +22,29 @@ This article will help you understand the process of creating private endpoints
- Virtual networks with Network Policies aren't supported for Private Endpoints. You'll need to disable Network Policies before continuing.
- You need to re-register the Recovery Services resource provider with the subscription if you registered it before May 1 2020. To re-register the provider, go to your subscription in the Azure portal, navigate to **Resource provider** on the left navigation bar, then select **Microsoft.RecoveryServices** and select **Re-register**.
- [Cross-region restore](backup-create-rs-vault.md#set-cross-region-restore) for SQL and SAP HANA database backups aren't supported if the vault has private endpoints enabled.
+- When you move a Recovery Services vault already using private endpoints to a new tenant, you'll need to update the Recovery Services vault to recreate and reconfigure the vault's managed identity and create new private endpoints as needed (which should be in the new tenant). If this isn't done, the backup and restore operations will start failing. Also, any role-based access control (RBAC) permissions set up within the subscription will need to be reconfigured.
## Recommended and supported scenarios

While private endpoints are enabled for the vault, they're used for backup and restore of SQL and SAP HANA workloads in an Azure VM and MARS agent backup only. You can use the vault for backup of other workloads as well (they won't require private endpoints though). In addition to backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery for Azure VM backup. For more information, see the following table:
-| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent | Use of private endpoints is recommended to allow backup and restore without needing to allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Azure AD IPs or FQDNs. |
+| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent | Use of private endpoints is recommended to allow backup and restore without needing to add to an allow list any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Azure AD IPs or FQDNs. |
| | |
| **Azure VM backup** | VM backup doesn't require you to allow access to any IPs or FQDNs. So it doesn't require private endpoints for backup and restore of disks. <br><br> However, file recovery from a vault containing private endpoints would be restricted to virtual networks that contain a private endpoint for the vault. <br><br> When using ACL'ed unmanaged disks, ensure the storage account containing the disks allows access to **trusted Microsoft services** if it's ACL'ed. |
| **Azure Files backup** | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. |
-## Creating and using Private Endpoints for Backup
+## Get started with creating private endpoints for Backup
-This section talks about the steps involved in creating and using private endpoints for Azure Backup inside your virtual networks.
+The following sections discuss the steps involved in creating and using private endpoints for Azure Backup inside your virtual networks.
>[!IMPORTANT]
> It's highly recommended that you follow steps in the same sequence as mentioned in this document. Failure to do so may lead to the vault being rendered incompatible to use private endpoints and requiring you to restart the process with a new vault.
+## Create a Recovery Services vault
-See [this section](#create-a-recovery-services-vault-using-the-azure-resource-manager-client) to learn how to create a vault using the Azure Resource Manager client. This creates a vault with its managed identity already enabled. Learn more about Recovery Services vaults [here](./backup-azure-recovery-services-vault-overview.md).
+Private endpoints for Backup can be only created for Recovery Services vaults that don't have any items protected to it (or haven't had any items attempted to be protected or registered to it in the past). So we suggest you create a new vault to start with. For more information about creating a new vault, see [Create and configure a Recovery Services vault](backup-create-rs-vault.md).
+
+See [this section](#create-a-recovery-services-vault-using-the-azure-resource-manager-client) to learn how to create a vault using the Azure Resource Manager client. This creates a vault with its managed identity already enabled.
## Enable Managed Identity for your vault
@@ -64,7 +67,7 @@ To create the required private endpoints for Azure Backup, the vault (the Manage
- The Resource Group that contains the target VNet
- The Resource Group where the Private Endpoints are to be created
-- The Resource Group that contains the Private DNS zones, as discussed in detail [here](#creating-private-endpoints-for-backup)
+- The Resource Group that contains the Private DNS zones, as discussed in detail [here](#create-private-endpoints-for-azure-backup)
We recommend that you grant the **Contributor** role for those three resource groups to the vault (managed identity). The following steps describe how to do this for a particular resource group (this needs to be done for each of the three resource groups):
@@ -79,41 +82,39 @@ We recommend that you grant the **Contributor** role for those three resource gr
To manage permissions at a more granular level, see [Create roles and permissions manually](#create-roles-and-permissions-manually).
-## Creating and approving Private Endpoints for Azure Backup
-
-### Creating Private Endpoints for Backup
-
-This section describes the process of creating a private endpoint for your vault.
-
-1. In the search bar, search for and select **Private Link**. This takes you to the **Private Link Center**.
+## Create Private Endpoints for Azure Backup
- ![Search for Private Link](./media/private-endpoints/search-for-private-link.png)
+This section explains how to create a private endpoint for your vault.
-1. On the left navigation bar, select **Private Endpoints**. Once in the **Private Endpoints** pane, select **+Add** to start creating a Private Endpoint for your vault.
+1. Navigate to your vault created above and go to **Private endpoint connections** on the left navigation bar. Select **+Private endpoint** on the top to start creating a new private endpoint for this vault.
- ![Add private endpoint in Private Link Center](./media/private-endpoints/add-private-endpoint.png)
+ ![Create new private endpoint](./media/private-endpoints/new-private-endpoint.png)
1. Once in the **Create Private Endpoint** process, you'll be required to specify details for creating your private endpoint connection.
+
+ 1. **Basics**: Fill in the basic details for your private endpoints. The region should be the same as the vault and the resource being backed up.
- 1. **Basics**: Fill in the basic details for your private endpoints. The region should be the same as the vault and the resource.
+ ![Fill in basic details](./media/private-endpoints/basics-tab.png)
- ![Fill in basic details](./media/private-endpoints/basic-details.png)
+ 1. **Resource**: This tab requires you to select the PaaS resource for which you want to create your connection. Select **Microsoft.RecoveryServices/vaults** from the resource type for your desired subscription. Once done, choose the name of your Recovery Services vault as the **Resource** and **AzureBackup** as the **Target sub-resource**.
- 1. **Resource**: This tab requires you to mention the PaaS resource for which you want to create your connection. Select **Microsoft.RecoveryServices/vaults** from the resource type for your desired subscription. Once done, choose the name of your Recovery Services vault as the **Resource** and **AzureBackup** as the **Target sub-resource**.
+ ![Select the resource for your connection](./media/private-endpoints/resource-tab.png)
- ![Fill in Resource tab](./media/private-endpoints/resource-tab.png)
+ 1. **Configuration**: In configuration, specify the virtual network and subnet where you want the private endpoint to be created. This will be the VNet where the VM is present.
- 1. **Configuration**: In configuration, specify the virtual network and subnet where you want the private endpoint to be created. This will be the Vnet where the VM is present. You can opt to **integrate your private endpoint** with a private DNS zone. Alternately, you can also use your custom DNS server or create a private DNS zone.
+ To connect privately, you need the required DNS records. Based on your network setup, you can choose one of the following:
- ![Fill in Configuration tab](./media/private-endpoints/configuration-tab.png)
+ - Integrate your private endpoint with a private DNS zone: Select **Yes** if you wish to integrate.
+ - Use your custom DNS server: Select **No** if you wish to use your own DNS server.
- Refer to [this section](#dns-changes-for-custom-dns-servers) if you want to use your custom DNS servers instead of integrating with Azure Private DNS Zones.
+ Managing DNS records for both of these options is [described later](#manage-dns-records).
- 1. Optionally, you can add **Tags** for your private endpoint.
+ ![Specify the virtual network and subnet](./media/private-endpoints/configuration-tab.png)
+ 1. Optionally, you can add **Tags** for your private endpoint.
1. Continue to **Review + create** once done entering details. When the validation completes, select **Create** to create the private endpoint.
-## Approving Private Endpoints
+## Approve Private Endpoints
If the user creating the private endpoint is also the owner of the Recovery Services vault, the private endpoint created above will be auto-approved. Otherwise, the owner of the vault must approve the private endpoint before being able to use it. This section discusses manual approval of private endpoints through the Azure portal.
@@ -125,7 +126,90 @@ See [Manual approval of private endpoints using the Azure Resource Manager Clien
![Approve private endpoints](./media/private-endpoints/approve-private-endpoints.png)
-## Using Private Endpoints for Backup
+## Manage DNS records
+
+As described previously, you need the required DNS records in your private DNS zones or servers in order to connect privately. You can either integrate your private endpoint directly with Azure private DNS zones or use your custom DNS servers to achieve this, based on your network preferences. This needs to be done for all three services: Backup, Blobs, and Queues.
+
+### When integrating private endpoints with Azure private DNS zones
+
+If you choose to integrate your private endpoint with private DNS zones, Backup will add the required DNS records. You can view the private DNS zones being used under **DNS configuration** of the private endpoint. If these DNS zones aren't present, they'll be created automatically when creating the private endpoint. However, you must verify that your virtual network (which contains the resources to be backed up) is properly linked with all three private DNS zones, as described below.
+
+![DNS configuration in Azure private DNS zone](./media/private-endpoints/dns-configuration.png)
+
+#### Validate virtual network links in private DNS zones
+
+For **each private DNS zone** listed above (for Backup, Blobs, and Queues), do the following:
+
+1. Navigate to the respective **Virtual network links** option on the left navigation bar.
+1. You should be able to see an entry for the virtual network for which you've created the private endpoint, like the one shown below:
+
+ ![Virtual network for private endpoint](./media/private-endpoints/virtual-network-links.png)
+
+1. If you don't see an entry, add a virtual network link to all those DNS zones that don't have them.
+
+ ![Add virtual network link](./media/private-endpoints/add-virtual-network-link.png)
+
+### When using a custom DNS server or host files
+
+If you're using your custom DNS servers, you'll need to create the required DNS zones and add the DNS records needed by the private endpoints to your DNS servers. For blobs and queues, you can also use conditional forwarders.
+
+#### For the Backup service
+
+1. In your DNS server, create a DNS zone for Backup according to the following naming convention:
+
+ |Zone |Service |
+ |||
+ |`privatelink.<geo>.backup.windowsazure.com` | Backup |
+
+ >[!NOTE]
+ > In the above text, `<geo>` refers to the region code (for example, *eus* for East US and *ne* for North Europe). Refer to the following lists for region codes:
+ >
+ > - [All public clouds](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx)
+ > - [China](https://docs.microsoft.com/azure/china/resources-developer-guide#check-endpoints-in-azure)
+ > - [Germany](https://docs.microsoft.com/azure/germany/germany-developer-guide#endpoint-mapping)
+ > - [US Gov](https://docs.microsoft.com/azure/azure-government/documentation-government-developer-guide)
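If your custom DNS is a Windows DNS server, the zone can be created with the DnsServer module. Here's a sketch, assuming an Active Directory-integrated zone and the hypothetical *eus* region code:

```powershell
# Create the Backup private link zone on a Windows DNS server (AD-integrated)
Add-DnsServerPrimaryZone -Name "privatelink.eus.backup.windowsazure.com" -ReplicationScope "Forest"
```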
+
+1. Next, we need to add the required DNS records. To view the records that need to be added to the Backup DNS zone, navigate to the private endpoint you created above, and go to the **DNS configuration** option under the left navigation bar.
+
+ ![DNS configuration for custom DNS server](./media/private-endpoints/custom-dns-configuration.png)
+
+1. Add one entry for each FQDN and IP displayed as A type records in your DNS zone for Backup. If you're using a host file for name resolution, make corresponding entries in the host file for each IP and FQDN according to the following format:
+
+ `<private ip><space><backup service privatelink FQDN>`
+
+>[!NOTE]
+>As shown in the screenshot above, the FQDNs appear as `xxxxxxxx.<geo>.backup.windowsazure.com` and not `xxxxxxxx.privatelink.<geo>.backup.windowsazure.com`. In such cases, ensure you include (and if required, add) the `.privatelink.` according to the stated format.
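For example, a host file entry following this format might look like the following (the private IP and FQDN prefix here are hypothetical placeholders):

```
# <private ip><space><backup service privatelink FQDN>
10.0.0.5    xxxxxxxx.privatelink.eus.backup.windowsazure.com
```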
+
+#### For Blob and Queue services
+
+For blobs and queues, you can either use conditional forwarders or create DNS zones in your DNS server.
+
+##### If using conditional forwarders
+
+If you're using conditional forwarders, add forwarders for blob and queue FQDNs as follows:
+
+|FQDN |IP |
+|||
+|`privatelink.blob.core.windows.net` | 168.63.129.16 |
+|`privatelink.queue.core.windows.net` | 168.63.129.16 |
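On a Windows DNS server, these forwarders can be added with the DnsServer module, for example:

```powershell
# Forward the blob and queue private link zones to the Azure host IP
Add-DnsServerConditionalForwarderZone -Name "privatelink.blob.core.windows.net" -MasterServers 168.63.129.16
Add-DnsServerConditionalForwarderZone -Name "privatelink.queue.core.windows.net" -MasterServers 168.63.129.16
```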
+
+##### If using private DNS zones
+
+If you're using DNS zones for blobs and queues, you'll need to first create these DNS zones and later add the required A records.
+
+|Zone |Service |
+|||
+|`privatelink.blob.core.windows.net` | Blob |
+|`privatelink.queue.core.windows.net` | Queue |
+
+For now, when using custom DNS servers, only create the zones for blobs and queues. The required DNS records will be added later, in two steps:
+
+1. When you register the first backup instance, that is, when you configure backup for the first time
+1. When you run the first backup
+
+We'll perform these steps in the following sections.
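On a Windows DNS server, the two zones above can be created as in the following sketch; the A records are added later, as described in the following sections:

```powershell
# Create empty zones for blob and queue; A records are added after the
# first backup is configured and run
Add-DnsServerPrimaryZone -Name "privatelink.blob.core.windows.net" -ReplicationScope "Forest"
Add-DnsServerPrimaryZone -Name "privatelink.queue.core.windows.net" -ReplicationScope "Forest"
```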
+
+## Use Private Endpoints for Backup
Once the private endpoints created for the vault in your VNet have been approved, you can start using them for performing your backups and restores.
@@ -133,21 +217,80 @@ Once the private endpoints created for the vault in your VNet have been approved
>Ensure that you've completed all the steps mentioned above in the document successfully before proceeding. To recap, you must have completed the steps in the following checklist:
>
>1. Created a (new) Recovery Services vault
->1. Enabled the vault to use system assigned Managed Identity
->1. Assigned relevant permissions to the Managed Identity of the vault
->1. Created a Private Endpoint for your vault
->1. Approved the Private Endpoint (if not auto approved)
+>2. Enabled the vault to use system assigned Managed Identity
+>3. Assigned relevant permissions to the Managed Identity of the vault
+>4. Created a Private Endpoint for your vault
+>5. Approved the Private Endpoint (if not auto approved)
+>6. Ensured all DNS records are appropriately added (except blob and queue records for custom servers, which will be discussed in the following sections)
+
+### Check VM connectivity
+
+In the VM in the locked down network, ensure the following:
+
+1. The VM should have access to Azure AD.
+2. Execute **nslookup** on the backup URL (`xxxxxxxx.privatelink.<geo>.backup.windowsazure.com`) from your VM to ensure connectivity. This should return the private IP assigned in your virtual network.
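For example (the vault URL prefix and region code below are hypothetical placeholders):

```powershell
# Should resolve to a private IP from your VNet, such as 10.0.0.5
nslookup xxxxxxxx.privatelink.eus.backup.windowsazure.com
```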
+
+### Configure backup
+
+Once you've verified the checklist above and confirmed connectivity, you can continue to configure backup of workloads to the vault. If you're using a custom DNS server, you'll need to add DNS entries for blobs and queues that become available after configuring the first backup.
+
+#### DNS records for blobs and queues (only for custom DNS servers/host files) after the first registration
+
+After you have configured backup for at least one resource on a private endpoint enabled vault, add the required DNS records for blobs and queues as described below.
+
+1. Navigate to your Resource Group, and search for the private endpoint you created.
+1. In addition to the private endpoint you created, you'll see two more private endpoints. Their names start with `<the name of the private endpoint>_ecs` and are suffixed with `_blob` and `_queue` respectively.
-### Backup and restore of workloads in Azure VM (SQL, SAP HANA)
+ ![Private endpoint resources](./media/private-endpoints/private-endpoint-resources.png)
-Once the private endpoint is created and approved, no additional changes are required from the client side to use the private endpoint. All communication and data transfer from your secured network to the vault will be performed through the private endpoint.
-However, if you remove private endpoints for the vault after a server (SQL/SAP HANA) has been registered to it, you'll need to re-register the container with the vault. You don't need to stop protection for them.
+1. Navigate to each of these private endpoints. In the DNS configuration option for each of the two private endpoints, you'll see a record with an FQDN and an IP address. Add both of these to your custom DNS server, in addition to the ones described earlier.
+If you're using a host file, make corresponding entries in the host file for each IP/FQDN according to the following format:
-### Backup and restore through MARS agent
+ `<private ip><space><blob service privatelink FQDN>`<br>
+ `<private ip><space><queue service privatelink FQDN>`
-When using the MARS Agent to back up your on-premises resources, make sure your on-premises network (containing your resources to be backed up) is peered with the Azure VNet that contains a private endpoint for the vault, so you can use it. You can then continue to install the MARS agent and configure backup as detailed here. You must, however, ensure all communication for backup happens through the peered network only.
+ ![Blob DNS configuration](./media/private-endpoints/blob-dns-configuration.png)
-However, if you remove private endpoints for the vault after a MARS agent has been registered to it, you'll need to re-register the container with the vault. You don't need to stop protection for them.
+In addition to the above, there's another entry needed after the first backup, which is discussed [later](#dns-records-for-blobs-only-for-custom-dns-servershost-files-after-the-first-backup).
+
+### Backup and restore of workloads in Azure VM (SQL and SAP HANA)
+
+Once the private endpoint is created and approved, no other changes are required from the client side to use the private endpoint (unless you're using SQL Availability Groups, which we discuss later in this section). All communication and data transfer from your secured network to the vault will be performed through the private endpoint. However, if you remove private endpoints for the vault after a server (SQL or SAP HANA) has been registered to it, you'll need to re-register the container with the vault. You don't need to stop protection for them.
+
+#### DNS records for blobs (only for custom DNS servers/host files) after the first backup
+
+After you run the first backup and you're using a custom DNS server (without conditional forwarding), it's likely that your backup will fail. If that happens:
+
+1. Navigate to your Resource Group, and search for the private endpoint you created.
+1. Aside from the three private endpoints discussed earlier, you'll now see a fourth private endpoint with its name starting with `<the name of the private endpoint>_prot` and suffixed with `_blob`.
+
+ ![Private endpoint with suffix "prot"](./media/private-endpoints/private-endpoint-prot.png)
+
+1. Navigate to this new private endpoint. In the DNS configuration option, you'll see a record with an FQDN and an IP address. Add these to your private DNS server, in addition to the ones described earlier.
+
+ If you're using a host file, make the corresponding entries in the host file for each IP and FQDN according to the following format:
+
+ `<private ip><space><blob service privatelink FQDN>`
+
+>[!NOTE]
+>At this point, you should be able to run **nslookup** from the VM and resolve to private IP addresses when done on the vault's Backup and Storage URLs.
+
+### When using SQL Availability Groups
+
+When using SQL Availability Groups (AG), you'll need to provision conditional forwarding in the custom AG DNS as described below:
+
+1. Sign in to your domain controller.
+1. Under the DNS application, add conditional forwarders for all three DNS zones (Backup, Blobs, and Queues) to the host IP 168.63.129.16 or the custom DNS server IP address, as necessary. The following screenshots show when you're forwarding to the Azure host IP. If you're using your own DNS server, replace with the IP of your DNS server.
+
+ ![Conditional forwarders in DNS Manager](./media/private-endpoints/dns-manager.png)
+
+ ![New conditional forwarder](./media/private-endpoints/new-conditional-forwarder.png)
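Equivalently, the forwarders can be scripted on the domain controller. The following is a sketch, assuming a hypothetical *eus* region code and forwarding to the Azure host IP:

```powershell
# Add conditional forwarders for all three zones (Backup, Blob, Queue).
# Replace 168.63.129.16 with your own DNS server IP if applicable.
$zones = @(
    "privatelink.eus.backup.windowsazure.com",
    "privatelink.blob.core.windows.net",
    "privatelink.queue.core.windows.net"
)
foreach ($zone in $zones) {
    Add-DnsServerConditionalForwarderZone -Name $zone -MasterServers 168.63.129.16
}
```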
+
+### Backup and restore through MARS Agent
+
+When using the MARS Agent to back up your on-premises resources, make sure your on-premises network (containing your resources to be backed up) is peered with the Azure VNet that contains a private endpoint for the vault, so you can use it. You can then continue to install the MARS agent and configure backup as detailed here. However, you must ensure all communication for backup happens through the peered network only.
+
+But if you remove private endpoints for the vault after a MARS agent has been registered to it, you'll need to re-register the container with the vault. You don't need to stop protection for them.
## Additional topics
@@ -336,7 +479,11 @@ $privateEndpointConnection = New-AzPrivateLinkServiceConnection `
-Name $privateEndpointConnectionName `
-PrivateLinkServiceId $vault.ID `
-GroupId "AzureBackup"
-
+
+$vnet = Get-AzVirtualNetwork -Name $vnetName -ResourceGroupName $VMResourceGroupName
+$subnet = $vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq '<subnetName>'}
+
$privateEndpoint = New-AzPrivateEndpoint `
-ResourceGroupName $vmResourceGroupName `
-Name $privateEndpointName `
@@ -381,65 +528,7 @@ $privateEndpoint = New-AzPrivateEndpoint `
}
```
-### DNS changes for custom DNS servers
-
-#### Create DNS zones for custom DNS servers
-
-You need to create three private DNS zones and link them to your virtual network. Keep in mind that, unlike Blob and Queue, the Backup service public URLs don't register in Azure Public DNS for the redirection to the Private Link DNS zones.
-
-| **Zone** | **Service** |
-| | -- |
-| `privatelink.<geo>.backup.windowsazure.com` | Backup |
-| `privatelink.blob.core.windows.net` | Blob |
-| `privatelink.queue.core.windows.net` | Queue |
-
->[!NOTE]
->In the text above, *geo* refers to the region code. For example, *wcus* and *ne* for West Central US and North Europe respectively.
-
-Refer to [this list](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx) for region codes. See the following links for URL naming conventions in national regions:
-- [China](/azure/china/resources-developer-guide#check-endpoints-in-azure)
-- [Germany](../germany/germany-developer-guide.md#endpoint-mapping)
-- [US Gov](../azure-government/documentation-government-developer-guide.md)
-
-#### Adding DNS records for custom DNS servers
-
-This requires you to make entries for each FQDN in your private endpoint into your Private DNS Zone.
-
-It should be noted that we'll be using the private endpoints created for Backup, Blob, and Queue service.
-- The private endpoint for the vault uses the name specified while creating the private endpoint
-- The private endpoints for blob and queue services are prefixed with the name of the same for the vault.
-
-For example, the following picture shows the three private endpoints created for a private endpoint connection with the name *pee2epe*:
-
-![Three private endpoints for a private endpoint connection](./media/private-endpoints/three-private-endpoints.png)
-
-DNS zone for the Backup service (`privatelink.<geo>.backup.windowsazure.com`):
-
-1. Navigate to your private endpoint for Backup in the **Private Link Center**. The overview page lists the FQDN and private IPs for your private endpoint.
-
-1. Add one entry for each FQDN and private IP as an A type record.
-
- ![Add entry for each FQDN and private IP](./media/private-endpoints/add-entry-for-each-fqdn-and-ip.png)
-
-DNS zone for the Blob service (`privatelink.blob.core.windows.net`):
-
-1. Navigate to your private endpoint for Blob in the **Private Link Center**. The overview page lists the FQDN and private IPs for your private endpoint.
-
-1. Add an entry for the FQDN and private IP as an A type record.
-
- ![Add entry for the FQDN and private IP as an A type record for the Blob service](./media/private-endpoints/add-type-a-record-for-blob.png)
-
-DNS zone for the Queue service (`privatelink.queue.core.windows.net`):
-
-1. Navigate to your private endpoint for Queue in the **Private Link Center**. The overview page lists the FQDN and private IPs for your private endpoint.
-
-1. Add an entry for the FQDN and private IP as an A type record.
-
- ![Add entry for the FQDN and private IP as an A type record for the Queue service](./media/private-endpoints/add-type-a-record-for-queue.png)
-
-## Frequently Asked Questions
+## Frequently asked questions
Q. Can I create a private endpoint for an existing Backup vault?<br> A. No, private endpoints can be created for new Backup vaults only. So the vault must not have ever had any items protected to it. In fact, no attempts to protect any items to the vault can be made before creating private endpoints.
backup https://docs.microsoft.com/en-us/azure/backup/tutorial-sap-hana-manage-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-sap-hana-manage-cli.md
@@ -221,11 +221,12 @@ Sample JSON (sappolicy.json) output:
    ],
    "workLoadType": "SAPHanaDatabase"
  },
- "resourceGroup": "azurefiles",
+ "resourceGroup": "saphanaResourceGroup",
"tags": null, "type": "Microsoft.RecoveryServices/vaults/backupPolicies" } ```
+Once the policy is created successfully, the output of the command will display the policy JSON that you passed as a parameter while executing the command.
You can modify the following section of the policy to specify the desired backup frequency and retention for incremental backups.
batch https://docs.microsoft.com/en-us/azure/batch/batch-applications-to-pool-nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-applications-to-pool-nodes.md
@@ -2,41 +2,64 @@
Title: Copy applications and data to pool nodes
description: Learn how to copy applications and data to pool nodes.
Previously updated : 02/17/2020
Last updated : 02/10/2021

# Copy applications and data to pool nodes
-Azure Batch supports several ways for getting data and applications onto compute nodes so that the data and applications are available for use by tasks. Data and applications may be required to run the entire job and so need to be installed on every node. Some may be required only for a specific task, or need to be installed for the job but don't need to be on every node. Batch has tools for each of these scenarios.
+Azure Batch supports several ways for getting data and applications onto compute nodes so that they're available for use by tasks.
-- **Pool start task resource files**: For applications or data that need to be installed on every node in the pool. Use this method along with either an application package or the start task's resource file collection in order to perform an install command.
+The method you choose may depend on the scope of your file or application. Data and applications may be required to run the entire job, and so need to be installed on every node. Some files or applications may be required only for a specific task. Others may need to be installed for the job, but don't need to be on every node. Batch has tools for each of these scenarios.
-Examples:
-- Use the start task command line to move or install applications
+## Determine the scope required of a file
-- Specify a list of specific files or containers in an Azure storage account. For more information see [add#resourcefile in REST documentation](/rest/api/batchservice/pool/add#resourcefile)
+You need to determine the scope of a file: is the file required for a pool, a job, or a task? Files that are scoped to the pool should use pool application packages or a start task. Files scoped to the job should use a job preparation task. A good example of files scoped at the pool or job level are applications. Files scoped to the task should use task resource files.
-- Every job that runs on the pool runs MyApplication.exe that must first be installed with MyApplication.msi. If you use this mechanism, you need to set the start task's **wait for success** property to **true**. For more information, see the [add#starttask in REST documentation](/rest/api/batchservice/pool/add#starttask).
+## Pool start task resource files
-- **Application package references** on the pool: For applications or data that need to be installed on every node in the pool. There is no install command associated with an application package, but you can use a start task to run any install command. If your application doesn't require installation, or consists of a large number of files, you can use this method. Application packages are well suited for large numbers of files because they combine a large number of file references into a small payload. If you try to include more than 100 separate resource files into one task, the Batch service might come up against internal system limitations for a single task. Also, use application packages if you have rigorous versioning requirements where you might have many different versions of the same application and need to choose between them. For more information, read [Deploy applications to compute nodes with Batch application packages](./batch-application-packages.md).
+For applications or data that need to be installed on every node in the pool, use pool start task resource files. Use this method along with either an [application package](batch-application-packages.md) or the start task's resource file collection in order to perform an install command.
-- **Job preparation task resource files**: For applications or data that must be installed for the job to run, but don't need to be installed on the entire pool. For example: if your pool has many different types of jobs, and only one job type needs MyApplication.msi to run, it makes sense to put the installation step into a job preparation task. For more information about job preparation tasks see [Run job preparation and job release tasks on Batch compute nodes](./batch-job-prep-release.md).
+For example, you can use the start task command line to move or install applications. You can also specify a list of files or containers in an Azure storage account. For more information, see [Add#ResourceFile in REST documentation](/rest/api/batchservice/pool/add#resourcefile).
-- **Task resource files**: For when an application or data is relevant only to an individual task. For example: You have five tasks, each processes a different file and then writes the output to blob storage. In this case, the input file should be specified on the **tasks resource files** collection because each task has its own input file.
+If every job that runs on the pool runs an application (.exe) that must first be installed with a .msi file, you'll need to set the start task's **wait for success** property to **true**. For more information, see [Add#StartTask in REST documentation](/rest/api/batchservice/pool/add#starttask).
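For instance, a pool start task that downloads and installs a hypothetical MyApplication.msi from a resource file might be sketched in the pool definition as follows; the storage URL and command line are placeholders:

```json
"startTask": {
  "commandLine": "cmd /c msiexec /i MyApplication.msi /quiet",
  "resourceFiles": [
    {
      "httpUrl": "https://mystorage.blob.core.windows.net/installers/MyApplication.msi",
      "filePath": "MyApplication.msi"
    }
  ],
  "userIdentity": { "autoUser": { "elevationLevel": "admin" } },
  "waitForSuccess": true
}
```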
-## Determine the scope required of a file
+## Application package references
-You need to determine the scope of a file - is the file required for a pool, a job, or a task. Files that are scoped to the pool should use pool application packages, or a start task. Files scoped to the job should use a job preparation task. A good example of files scoped at the pool or job level are applications. Files scoped to the task should use task resource files.
+For applications or data that need to be installed on every node in the pool, consider using [application packages](batch-application-packages.md). There is no install command associated with an application package, but you can use a start task to run any install command. If your application doesn't require installation, or consists of a large number of files, you can use this method.
+
+Application packages are useful when you have a large number of files, because they can combine many file references into a small payload. If you try to include more than 100 separate resource files into one task, the Batch service might come up against internal system limitations for a single task. Application packages are also useful when you have many different versions of the same application and need to choose between them.
+
+## Extensions
+
+[Extensions](create-pool-extensions.md) are small applications that facilitate post-provisioning configuration and setup on Batch compute nodes. When you create a pool, you can select a supported extension to be installed on the compute nodes as they are provisioned. After that, the extension can perform its intended operation.
+
+## Job preparation task resource files
-### Other ways to get data onto Batch compute nodes
+For applications or data that must be installed for the job to run, but don't need to be installed on the entire pool, consider using [job preparation task resource files](./batch-job-prep-release.md).
-There are other ways to get data onto Batch compute nodes that are not officially integrated into the Batch REST API. Because you have control over Azure Batch nodes, and can run custom executables, you are able to pull data from any number of custom sources as long as the Batch node has connectivity to the target and you have the credentials to that source onto the Azure Batch node. A few common examples are:
+For example, if your pool has many different types of jobs, and only one job type needs an .msi file in order to run, it makes sense to put the installation step into a job preparation task.
+
+## Task resource files
+
+Task resource files are appropriate when your application or data is relevant only to an individual task.
+
+For example, you might have five tasks, each processing a different file and then writing the output to blob storage. In this case, the input file should be specified on the task resource files collection, because each task has its own input file.
+
+## Additional ways to get data onto nodes
+
+Because you have control over Azure Batch nodes, and can run custom executables, you can pull data from any number of custom sources. Make sure the Batch node has connectivity to the target and that you have credentials to that source on the node.
+
+A few examples of ways to transfer data to Batch nodes are:
- Downloading data from SQL
- Downloading data from other web services/custom locations
- Mapping a network share
-### Azure storage
+## Azure storage
+
+Keep in mind that blob storage has download scalability targets. Azure storage file share scalability targets are the same as for a single blob. The size of the data being transferred will impact the number of nodes and pools you need.
-Blob storage has download scalability targets. Azure storage file share scalability targets are the same as for a single blob. Size will impact the number of nodes and pools you need.
+## Next steps
+- Learn about using [application packages with Batch](batch-application-packages.md).
+- Learn more about [working with nodes and pools](nodes-and-pools.md).
batch https://docs.microsoft.com/en-us/azure/batch/create-pool-extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/create-pool-extensions.md
@@ -0,0 +1,124 @@
+
+ Title: Use extensions with Batch pools
+description: Extensions are small applications that facilitate post-provisioning configuration and setup on Batch compute nodes.
+ Last updated : 02/10/2021
+# Use extensions with Batch pools
+
+Extensions are small applications that facilitate post-provisioning configuration and setup on Batch compute nodes. You can select any of the extensions that are allowed by Azure Batch and have them installed on the compute nodes as they are provisioned. After that, the extension can perform its intended operation.
+
+You can check the live status of the extensions you use and retrieve the information they return in order to pursue any detection, correction, or diagnostics capabilities.
+
+## Prerequisites
+
+- Pools with extensions must use [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration).
+- The CustomScript extension type is reserved for the Azure Batch service and can't be overridden.
+
+### Supported extensions
+
+The following extensions can currently be installed when creating a Batch pool.
+
+- Azure Key Vault extension for both [Linux](../virtual-machines/extensions/key-vault-linux.md) and [Windows](../virtual-machines/extensions/key-vault-windows.md)
+- Log analytics and Monitoring extension for both [Linux](../virtual-machines/extensions/oms-linux.md) and [Windows](../virtual-machines/extensions/oms-windows.md)
+- Azure Security Pack
+
+You can request support for additional publishers and/or extension types by opening a support request.
+
+## Create a pool with extensions
+
+The example below creates a Batch pool of Linux nodes that uses the Azure Key Vault extension.
+
+REST API URI
+
+```http
+ PUT https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Batch/batchAccounts/<batchaccountName>/pools/<batchpoolName>?api-version=2021-01-01
+```
+
+Request Body
+
+```json
+{
+ "name": "test1",
+ "type": "Microsoft.Batch/batchAccounts/pools",
+ "properties": {
+ "vmSize": "STANDARD_DS2_V2",
+ "taskSchedulingPolicy": {
+ "nodeFillType": "Pack"
+ },
+ "deploymentConfiguration": {
+ "virtualMachineConfiguration": {
+ "imageReference": {
+ "publisher": "canonical",
+ "offer": "ubuntuserver",
+ "sku": "18.04-lts",
+ "version": "latest"
+ },
+ "nodeAgentSkuId": "batch.node.ubuntu 18.04",
+ "extensions": [
+ {
+ "name": "secretext",
+ "type": "KeyVaultForLinux",
+ "publisher": "Microsoft.Azure.KeyVault",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "secretsManagementSettings": {
+ "pollingIntervalInS": "300",
+ "certificateStoreLocation": "/var/lib/waagent/Microsoft.Azure.KeyVault",
+ "requireInitialSync": true,
+ "observedCertificates": [
+ "https://testkvwestus2.vault.azure.net/secrets/authsecreat"
+ ]
+ },
+ "authenticationSettings": {
+ "msiEndpoint": "http://169.254.169.254/metadata/identity",
+ "msiClientId": "885b1a3d-f13c-4030-afcf-9f05044d78dc"
+ }
+ },
+ "protectedSettings":{}
+ }
+ ]
+ }
+ },
+ "scaleSettings": {
+ "fixedScale": {
+ "targetDedicatedNodes": 1,
+ "targetLowPriorityNodes": 0,
+ "resizeTimeout": "PT15M"
+ }
+      }
+    }
+  }
+}
+```
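If you're issuing this request from your own tooling rather than the Azure SDK, the helper below is a minimal sketch that builds the management-plane URI shown above. The function name and all parameters are placeholders of ours, and you still need to supply an Azure AD bearer token in the `Authorization` header separately:

```javascript
// Sketch only: builds the Microsoft.Batch management-plane pool URI.
// All parameters are placeholder values you supply; authentication
// (an Azure AD bearer token) is handled outside this helper.
function batchPoolUri(subscriptionId, resourceGroup, accountName, poolName, apiVersion = "2021-01-01") {
  return `https://management.azure.com/subscriptions/${subscriptionId}` +
    `/resourceGroups/${resourceGroup}/providers/Microsoft.Batch` +
    `/batchAccounts/${accountName}/pools/${poolName}` +
    `?api-version=${apiVersion}`;
}
```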
+
+## Get extension data from a pool
+
+The example below retrieves data from the Azure Key Vault extension.
+
+REST API URI
+
+```http
+ GET https://<accountname>.<region>.batch.azure.com/pools/test3/nodes/tvmps_a3ce79db285d6c124399c5bd3f3cf308d652c89675d9f1f14bfc184476525278_d/extensions/secretext?api-version=2010-01-01
+```
+
+Response Body
+
+```json
+{
+ "odata.metadata":"https://testwestus2batch.westus2.batch.azure.com/$metadata#extensions/@Element","instanceView":{
+ "name":"secretext","statuses":[
+ {
+ "code":"ProvisioningState/succeeded","level":0,"displayStatus":"Provisioning succeeded","message":"Successfully started Key Vault extension service. 2021-02-08T19:49:39Z"
+ }
+ ]
+ },"vmExtension":{
+ "name":"KVExtensions","publisher":"Microsoft.Azure.KeyVault","type":"KeyVaultForLinux","typeHandlerVersion":"1.0","autoUpgradeMinorVersion":true,"settings":"{\r\n \"secretsManagementSettings\": {\r\n \"pollingIntervalInS\": \"300\",\r\n \"certificateStoreLocation\": \"/var/lib/waagent/Microsoft.Azure.KeyVault\",\r\n \"requireInitialSync\": true,\r\n \"observedCertificates\": [\r\n \"https://testkvwestus2.vault.azure.net/secrets/testumi\"\r\n ]\r\n },\r\n \"authenticationSettings\": {\r\n \"msiEndpoint\": \"http://169.254.169.254/metadata/identity\",\r\n \"msiClientId\": \"885b1a3d-f13c-4030-afcf-922f05044d78dc\"\r\n }\r\n}"
+ }
+}
+
+```
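Note that the `settings` field in the response body comes back as a JSON-encoded string embedded in the JSON response, so it has to be parsed a second time. A small sketch of reading the response above; the helper name and return shape are ours, not from any SDK:

```javascript
// Sketch: extract the provisioning status and the nested extension
// settings from a get-extension response shaped like the one above.
function parseExtensionResponse(body) {
  const status = body.instanceView.statuses[0];
  // vmExtension.settings is a JSON string inside the JSON response.
  const settings = JSON.parse(body.vmExtension.settings);
  return {
    statusCode: status.code,
    observedCertificates: settings.secretsManagementSettings.observedCertificates
  };
}
```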
+
+## Next steps
+
+- Learn about various ways to [copy applications and data to pool nodes](batch-applications-to-pool-nodes.md).
+- Learn more about working with [nodes and pools](nodes-and-pools.md).
cdn https://docs.microsoft.com/en-us/azure/cdn/cdn-create-new-endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-create-new-endpoint.md
@@ -77,7 +77,7 @@ In the preceding steps, you created a CDN profile and an endpoint in a resource
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Use CDN to server static content from a web app](cdn-add-to-web-app.md)
+> [Tutorial: Use CDN to serve static content from a web app](cdn-add-to-web-app.md)
> [!div class="nextstepaction"] > [Tutorial: Add a custom domain to your Azure CDN endpoint](cdn-map-content-to-custom-domain.md)
cdn https://docs.microsoft.com/en-us/azure/cdn/cdn-custom-ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-custom-ssl.md
@@ -154,7 +154,9 @@ Grant Azure CDN permission to access the certificates (secrets) in your Azure Ke
5. Select **Add**.
- Azure CDN can now access this key vault and the certificates (secrets) that are stored in this key vault.
+> [!NOTE]
+> Azure CDN can now access this key vault and the certificates (secrets) that are stored in this key vault. Any CDN instance created in this subscription will have access to the certificates in this key vault.
+ ### Select the certificate for Azure CDN to deploy
cdn https://docs.microsoft.com/en-us/azure/cdn/cdn-optimization-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-optimization-overview.md
@@ -4,14 +4,7 @@ description: Learn how Azure Content Delivery Network can optimize delivery base
documentationcenter: '' -- - Last updated 03/25/2019
@@ -59,7 +52,11 @@ Microsoft recommends that you test performance variations between different prov
## Select and configure optimization types
-When you create a CDN endpoint, select an optimization type that best matches the scenario and type of content that you want the endpoint to deliver. **General web delivery** is the default selection. For existing **Azure CDN Standard from Akamai** endpoints only, you can update the optimization option at any time. This change doesn't interrupt delivery from Azure CDN.
+When you create a CDN endpoint, select an optimization type that best matches the scenario and type of content that you want the endpoint to deliver.
+
+**General web delivery** is the default selection. For **Azure CDN Standard from Akamai** endpoints only, you can update the optimization type at any time. This change doesn't interrupt delivery from Azure CDN.
+
+For **Azure CDN Standard from Microsoft**, **Azure CDN Standard from Verizon**, and **Azure CDN Premium from Verizon** endpoints, you can't change the optimization type after the endpoint is created.
1. In an **Azure CDN Standard from Akamai** profile, select an endpoint.
cdn https://docs.microsoft.com/en-us/azure/cdn/cdn-preload-endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-preload-endpoint.md
@@ -64,5 +64,5 @@ This tutorial walks you through pre-loading cached content on all Azure CDN edge
## See also * [Purge an Azure CDN endpoint](cdn-purge-endpoint.md)
-* [Azure CDN REST API reference: Pre-load content on an endpoint](/rest/api/cdn/endpoints/loadcontent)
-* [Azure CDN REST API reference: Purge content from an endpoint](/rest/api/cdn/endpoints/purgecontent)
+* [Azure CDN REST API reference: Pre-load content on an endpoint](/rest/api/cdn/cdn/endpoints/loadcontent)
+* [Azure CDN REST API reference: Purge content from an endpoint](/rest/api/cdn/cdn/endpoints/purgecontent)
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-guestos-msrc-releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
@@ -10,13 +10,68 @@
na Previously updated : 2/5/2021 Last updated : 2/9/2021 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## February 2021 Guest OS
+
+>[!NOTE]
+>The February Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the February Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 21-02 | [4601345] | Latest Cumulative Update(LCU) | 6.28 | Feb 9, 2021 |
+| Rel 21-02 | [4580325] | Flash update | 3.94, 4.87, 5.52, 6.28 | Oct 13, 2020 |
+| Rel 21-02 | [4586768] | IE Cumulative Updates | 2.107, 3.94, 4.87 | Nov 10, 2020 |
+| Rel 21-02 | [4601318] | Latest Cumulative Update(LCU) | 5.52 | Feb 9, 2021 |
+| Rel 21-02 | [4578952] | .NET Framework 3.5 Security and Quality Rollup | 2.107 | Jan 12, 2021 |
+| Rel 21-02 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | 2.107 | Jan 12, 2021 |
+| Rel 21-02 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | 4.87 | Jan 12, 2021 |
+| Rel 21-02 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | 4.87 | Jan 12, 2021 |
+| Rel 21-02 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | 3.94 | Jan 12, 2021 |
+| Rel 21-02 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | 3.94 | Jan 12, 2021 |
+| Rel 21-02 | [4578966] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.28 | Oct 13, 2020 |
+| Rel 21-02 | [4601347] | Monthly Rollup | 2.107 | Feb 9, 2021 |
+| Rel 21-02 | [4601348] | Monthly Rollup | 3.94 | Feb 9, 2021 |
+| Rel 21-02 | [4601384] | Monthly Rollup | 4.87 | Feb 9, 2021 |
+| Rel 21-02 | [4566426] | Servicing Stack update | 3.94 | July 14, 2020 |
+| Rel 21-02 | [4566425] | Servicing Stack update | 4.87 | July 14, 2020 |
+| Rel 21-02 OOB | [4578013] | Standalone Security Update | 4.87 | Aug 19, 2020 |
+| Rel 21-02 | [4601392] | Servicing Stack update | 5.52 | Feb 9, 2021 |
+| Rel 21-02 | [4592510] | Servicing Stack update | 2.107 | Dec 8, 2020 |
+| Rel 21-02 | [4601393] | Servicing Stack update | 6.28 | Feb 9, 2021 |
+| Rel 21-02 | [4494175] | Microcode | 5.52 | Sep 1, 2020 |
+| Rel 21-02 | [4494174] | Microcode | 6.28 | Sep 1, 2020 |
+
+[4601345]: https://support.microsoft.com/kb/4601345
+[4580325]: https://support.microsoft.com/kb/4580325
+[4586768]: https://support.microsoft.com/kb/4586768
+[4601318]: https://support.microsoft.com/kb/4601318
+[4578952]: https://support.microsoft.com/kb/4578952
+[4578955]: https://support.microsoft.com/kb/4578955
+[4578953]: https://support.microsoft.com/kb/4578953
+[4578956]: https://support.microsoft.com/kb/4578956
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[4578966]: https://support.microsoft.com/kb/4578966
+[4601347]: https://support.microsoft.com/kb/4601347
+[4601348]: https://support.microsoft.com/kb/4601348
+[4601384]: https://support.microsoft.com/kb/4601384
+[4566426]: https://support.microsoft.com/kb/4566426
+[4566425]: https://support.microsoft.com/kb/4566425
+[4578013]: https://support.microsoft.com/kb/4578013
+[4601392]: https://support.microsoft.com/kb/4601392
+[4592510]: https://support.microsoft.com/kb/4592510
+[4601393]: https://support.microsoft.com/kb/4601393
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
++ ## January 2021 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/best-practices.md
@@ -142,7 +142,7 @@ QnA Maker allows users to collaborate on a knowledge base. Users need access to
## Active learning
-[Active learning](../How-to/use-active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It is important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in the QnA Maker portal, you can **[filter by suggestions](../How-To/improve-knowledge-base.md#accept-an-active-learning-suggestion-in-the-knowledge-base)** then review and accept or reject those suggestions.
+[Active learning](../How-to/use-active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It is important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in the QnA Maker portal, you can **[filter by suggestions](../How-To/improve-knowledge-base.md)** then review and accept or reject those suggestions.
## Next steps
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-use-conversation-transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
@@ -20,7 +20,7 @@ The Speech SDK's **ConversationTranscriber** API allows you to transcribe meetin
## Limitations * Only available in the following subscription regions: `centralus`, `eastasia`, `eastus`, `westeurope`
-* Requires a 7-mic circular multi-microphone array with a playback reference stream. The microphone array should meet [our specification](./speech-devices-sdk-microphone.md).
+* Requires a 7-mic circular multi-microphone array. The microphone array should meet [our specification](./speech-devices-sdk-microphone.md).
* The [Speech Devices SDK](speech-devices-sdk.md) provides suitable devices and a sample app demonstrating Conversation Transcription. ## Prerequisites
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/text-to-speech-basics/text-to-speech-basics-javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/text-to-speech-basics/text-to-speech-basics-javascript.md
@@ -2,7 +2,7 @@
Previously updated : 04/15/2020 Last updated : 02/10/2021
@@ -20,7 +20,7 @@ If you want to skip straight to sample code, see the [JavaScript quickstart samp
## Prerequisites
-This article assumes that you have an Azure account and Speech service subscription. If you don't have an account and subscription, [try the Speech service for free](../../../overview.md#try-the-speech-service-for-free).
+This article assumes that you have an Azure account and Speech service resource. If you don't have an account and resource, [try the Speech service for free](../../../overview.md#try-the-speech-service-for-free).
## Install the Speech SDK
@@ -45,7 +45,7 @@ Download and extract the <a href="https://aka.ms/csspeech/jsbrowserpackage" targ
# [import](#tab/import) ```javascript
-import * from "microsoft-cognitiveservices-speech-sdk";
+import * as sdk from "microsoft-cognitiveservices-speech-sdk";
``` For more information on `import`, see <a href="https://javascript.info/import-export" target="_blank">export and import <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
@@ -63,23 +63,23 @@ For more information on `require`, see <a href="https://nodejs.org/en/knowledge/
## Create a speech configuration
-To call the Speech service using the Speech SDK, you need to create a [`SpeechConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig). This class includes information about your subscription, like your key and associated region, endpoint, host, or authorization token.
+To call the Speech service using the Speech SDK, you need to create a [`SpeechConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig). This class includes information about your resource, like your key and associated region, endpoint, host, or authorization token.
> [!NOTE] > Regardless of whether you're performing speech recognition, speech synthesis, translation, or intent recognition, you'll always create a configuration. There are a few ways that you can initialize a [`SpeechConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig):
-* With a subscription: pass in a key and the associated region.
+* With a resource: pass in a key and the associated region.
* With an endpoint: pass in a Speech service endpoint. A key or authorization token is optional. * With a host: pass in a host address. A key or authorization token is optional. * With an authorization token: pass in an authorization token and the associated region.
-In this example, you create a [`SpeechConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig) using a subscription key and region. Get these credentials by following steps in [Try the Speech service for free](../../../overview.md#try-the-speech-service-for-free). You also create some basic boilerplate code to use for the rest of this article, which you modify for different customizations.
+In this example, you create a [`SpeechConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig) using a resource key and region. Get these credentials by following steps in [Try the Speech service for free](../../../overview.md#try-the-speech-service-for-free). You also create some basic boilerplate code to use for the rest of this article, which you modify for different customizations.
```javascript function synthesizeSpeech() {
- const speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+ const speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
} synthesizeSpeech();
@@ -93,7 +93,7 @@ To start, create an `AudioConfig` to automatically write the output to a `.wav`
```javascript function synthesizeSpeech() {
- const speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+ const speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
const audioConfig = AudioConfig.fromAudioFileOutput("path/to/file.wav"); } ```
@@ -109,10 +109,11 @@ function synthesizeSpeech() {
synthesizer.speakTextAsync( "A simple test to write to a file.", result => {
+ synthesizer.close();
if (result) {
- console.log(JSON.stringify(result));
+ // return result as stream
+ return fs.createReadStream("path-to-file.wav");
}
- synthesizer.close();
}, error => { console.log(error);
@@ -137,9 +138,9 @@ function synthesizeSpeech() {
"Synthesizing directly to speaker output.", result => { if (result) {
- console.log(JSON.stringify(result));
+ synthesizer.close();
+ return result.audioData;
}
- synthesizer.close();
}, error => { console.log(error);
@@ -161,7 +162,9 @@ It's simple to make this change from the previous example. First, remove the `Au
> [!NOTE] > Passing `undefined` for the `AudioConfig`, rather than omitting it like in the speaker output example above, will not play the audio by default on the current active output device.
-This time, you save the result to a [`SpeechSynthesisResult`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisresult) variable. The `SpeechSynthesisResult.audioData` property returns an `ArrayBuffer` of the output data. You can work with this `ArrayBuffer` manually.
+This time, you save the result to a [`SpeechSynthesisResult`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisresult) variable. The `SpeechSynthesisResult.audioData` property returns an `ArrayBuffer` of the output data, the default browser stream type. For server-side code, convert the `ArrayBuffer` to a buffer stream.
+
+The following code works for client-side code.
```javascript function synthesizeSpeech() {
@@ -171,11 +174,8 @@ function synthesizeSpeech() {
synthesizer.speakTextAsync( "Getting the response as an in-memory stream.", result => {
- // Interact with the audio ArrayBuffer data
- const audioData = result.audioData;
- console.log(`Audio data byte size: ${audioData.byteLength}.`)
- synthesizer.close();
+ return result.audioData;
}, error => { console.log(error);
@@ -184,7 +184,34 @@ function synthesizeSpeech() {
} ```
-From here you can implement any custom behavior using the resulting `ArrayBuffer` object.
+From here you can implement any custom behavior using the resulting `ArrayBuffer` object. The `ArrayBuffer` is a common type to receive in a browser and play audio from directly.
+
+For any server-based code that needs to work with the data as a stream instead of an `ArrayBuffer`, convert the object into a Node.js stream.
+
+```javascript
+// PassThrough comes from the Node.js stream module.
+const { PassThrough } = require("stream");
+
+function synthesizeSpeech() {
+    const speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+ const synthesizer = new sdk.SpeechSynthesizer(speechConfig);
+
+ synthesizer.speakTextAsync(
+ "Getting the response as an in-memory stream.",
+ result => {
+ const { audioData } = result;
+
+ synthesizer.close();
+
+ // convert arrayBuffer to stream
+ // return stream
+ const bufferStream = new PassThrough();
+ bufferStream.end(Buffer.from(audioData));
+ return bufferStream;
+ },
+ error => {
+ console.log(error);
+ synthesizer.close();
+ });
+}
+```
## Customize audio format
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/containers/container-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-faq.md
@@ -1,200 +0,0 @@
- Title: Cognitive Services containers frequently asked questions (FAQ)-
-description: Frequently asked questions and answers.
----- Previously updated : 08/31/2020---
-# Azure Cognitive Services containers frequently asked questions (FAQ)
-
-## General questions
-
-**Q: What is available?**
-
-**A:** Azure Cognitive Services containers allow developers to use the same intelligent APIs that are available in Azure, but with the [benefits](../cognitive-services-container-support.md#features-and-benefits) of containerization. Some containers are available as a gated preview which may require an application to access. Other containers are publicly available as an ungated preview, or are generally available. You can find a full list of containers and their availability in the [Container support in Azure Cognitive Services](../cognitive-services-container-support.md) article. You can also view the containers in the [Docker Hub](https://hub.docker.com/_/microsoft-azure-cognitive-services).
-
-**Q: Is there any difference between the Cognitive Services cloud and the containers?**
-
-**A:** Cognitive Services containers are an alternative to the Cognitive Services cloud. Containers offer the same capabilities as the corresponding cloud services. Customers can deploy the containers on-premises or in Azure. The core AI technology, pricing tiers, API keys, and API signature are the same between the container and the corresponding cloud services. Here are the [features and benefits](../cognitive-services-container-support.md#features-and-benefits) for choosing containers over their cloud service equivalent.
-
-**Q: How do I access and use a gated preview container?**
-
-**A:** Previously, gated preview containers were hosted on the `containerpreview.azurecr.io` repository. Starting September 22nd 2020, these containers are hosted on the Microsoft Container Registry, and downloading them doesn't require you to use the docker login command. You'll be able to run a gated preview container if your Azure resource was created with the approved Azure subscription ID. You won't be able to run the container if your Azure subscription has not been approved after completing the [request form](https://aka.ms/csgate).
--
-**Q: Will containers be available for all Cognitive Services and what are the next set of containers we should expect?**
-
-**A:** We would like to make more Cognitive Services available as container offerings. Contact to your local Microsoft account manager to get updates on new container releases and other Cognitive Services announcements.
-
-**Q: What will the Service-Level Agreement (SLA) be for Cognitive Services containers?**
-
-**A:** Cognitive Services containers do not have an SLA.
-
-Cognitive Services container configurations of resources are controlled by customers, so Microsoft will not offer an SLA for general availability (GA). Customers are free to deploy containers on-premises, thus they define the host environments.
-
-> [!IMPORTANT]
-> To learn more about Cognitive Services Service-Level Agreements, [visit our SLA page](https://azure.microsoft.com/support/legal/sla/cognitive-services/v1_1/).
-
-**Q: Are these containers available in sovereign clouds?**
-
-**A:** Not everyone is familiar with the term "sovereign cloud", so let's begin with definition:
-
-> The "sovereign cloud" consists of the [Azure Government](../../azure-government/documentation-government-welcome.md), [Azure Germany](../../germany/germany-welcome.md), and [Azure China 21Vianet](/azure/china/overview-operations) clouds.
-
-Unfortunately, the Cognitive Services containers are *not* natively supported in the sovereign clouds. The containers can be run in these clouds, but they will be pulled from the public cloud and need to send usage data to the public endpoint.
-
-### Versioning
-
-**Q: How are containers updated to the latest version?**
-
-**A:** Customers can choose when to update the containers they have deployed. Containers will be marked with standard [Docker tags](https://docs.docker.com/engine/reference/commandline/tag/) such as `latest` to indicate the most recent version. We encourage customers to pull the latest version of containers as they are released, checkout [Azure Container Registry webhooks](../../container-registry/container-registry-webhook.md) for details on how to get notified when an image is updated.
-
-**Q: What versions will be supported?**
-
-**A:** The current and last major version of the container will be supported. However, we encourage customers to stay current to get the latest technology.
-
-**Q: How are updates versioned?**
-
-**A:** Major version changes indicate that there is a breaking change to the API signature. We anticipate that this will generally coincide with major version changes to the corresponding Cognitive Service cloud offering. Minor version changes indicate bug fixes, model updates, or new features that do not make a breaking change to the API signature.
-
-## Technical questions
-
-**Q: How should I run the Cognitive Services containers on IoT devices?**
-
-**A:** If you don't have a reliable internet connection, want to save on bandwidth cost, have low-latency requirements, or are dealing with sensitive data that needs to be analyzed on-site, [Azure IoT Edge with the Cognitive Services containers](https://azure.microsoft.com/blog/running-cognitive-services-on-iot-edge/) gives you consistency with the cloud.
-
-**Q: Are these containers compatible with OpenShift?**
-
-**A:** We don't test containers with OpenShift, but generally, Cognitive Services containers should run on any platform that supports Docker images. If you're using OpenShift, we recommend running the containers as `root-user`.
-
-**Q: How do I provide product feedback and feature recommendations?**
-
-**A:** Customers are encouraged to [voice their concerns](https://cognitive.uservoice.com/) publicly, and up-vote others who have done the same where potential issues overlap. The user voice tool can be used for both product feedback and feature recommendations.
-
-**Q: What status messages and errors are returned by Cognitive Services containers?**
-
-**A:** See the following table for a list of status messages and errors.
-
-|Status | Description |
-|||
-| `Valid` | Your API key is valid, no action is needed. |
-| `Invalid` | Your API key is invalid. You must provide a valid API key to run the container. Find your API key and service region in the **Keys and Endpoint** section for your Azure Cognitive Services resource, in the Azure portal. |
-| `Mismatch` | You have provided an API Key or endpoint for a different kind of cognitive services resource. Find your API key and service region in the **Keys and Endpoint** section for your Azure Cognitive Services resource. |
-| `CouldNotConnect` | The container couldn't connect to the billing endpoint. Check the `Retry-After` value and wait for this period to end before making additional requests. |
-| `OutOfQuota` | The API key is out of quota. You can either upgrade your pricing tier, or wait for additional quota to be made available. Find your tier in the **Pricing Tier** section of your Azure Cognitive Service resource, in the Azure portal. |
-| `BillingEndpointBusy` | The billing endpoint is currently busy. Check the `Retry-After` value and wait for this period to end before making additional requests. |
-| `ContainerUseUnauthorized` | The API key provided is not authorized for use with this container. You are likely using a gated container, so make sure your Azure Subscription ID is approved by submitting an [online request](https://aka.ms/csgate). |
-| `Unknown` | The server is currently unable to process billing requests. |
--
-**Q: Who do I contact for support?**
-
-**A:** Customer support channels are the same as the Cognitive Services cloud offering. All Cognitive Services containers include logging features that will help us and the community support customers. For additional support, see the following options.
-
-### Customer support plan
-
-Customers should refer to their [Azure support plan](https://azure.microsoft.com/support/plans/) to see who to contact for support.
-
-### Azure knowledge center
-
-Customer are free to explore the [Azure knowledge center](https://azure.microsoft.com/resources/knowledge-center/) to answer questions and support issues.
-
-### Stack Overflow
-
-> [Stack Overflow](https://en.wikipedia.org/wiki/Stack_Overflow) is a question and answer site for professional and enthusiast programmers.
-
-Explore the following tags for potential questions and answers that align with your needs.
-
-* [Azure Cognitive Services](https://stackoverflow.com/questions/tagged/azure-cognitive-services)
-* [Microsoft Cognitive](https://stackoverflow.com/questions/tagged/microsoft-cognitive)
-
-**Q: How does billing work?**
-
-**A:** Customers are charged based on consumption, similar to the Cognitive Services cloud. The containers need to be configured to send metering data to Azure, and transactions will be billed accordingly. Resources used across the hosted and on-premises services will add to single quota with tiered pricing, counting against both usages. For more detail, refer to pricing page of the corresponding offering.
-
-* [Anomaly Detector][ad-containers-billing]
-* [Computer Vision][cv-containers-billing]
-* [Face][fa-containers-billing]
-* [Form Recognizer][fr-containers-billing]
-* [Language Understanding (LUIS)][lu-containers-billing]
-* [Speech Service API][sp-containers-billing]
-* [Text Analytics][ta-containers-billing]
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data to Microsoft.
-
-**Q: What is the current support warranty for containers?**
-
-**A:** There is no warranty for previews. Microsoft's standard warranty for enterprise software will apply when containers are formally announced as general availability (GA).
-
-**Q: What happens to Cognitive Services containers when internet connectivity is lost?**
-
-**A:** Cognitive Services containers are *not licensed* to run without being connected to Azure for metering. Customers need to enable the containers to communicate with the metering service at all times.
-
-**Q: How long can the container operate without being connected to Azure?**
-
-**A:** Cognitive Services containers are *not licensed* to run without being connected to Azure for metering. Customers need to enable the containers to communicate with the metering service at all times.
-
-**Q: What is current hardware required to run these containers?**
-
-**A:** Cognitive Services containers are x64 based containers that can run any compatible Linux node, VM, and edge device that supports x64 Linux Docker Containers. They all require CPU processors. The minimum and recommended configurations for each container offering are available below:
-
-* [Anomaly Detector][ad-containers-recommendations]
-* [Computer Vision][cv-containers-recommendations]
-* [Face][fa-containers-recommendations]
-* [Form Recognizer][fr-containers-recommendations]
-* [Language Understanding (LUIS)][lu-containers-recommendations]
-* [Speech Service API][sp-containers-recommendations]
-* [Text Analytics][ta-containers-recommendations]
-
-**Q: Are these containers currently supported on Windows?**
-
-**A:** The Cognitive Services containers are Linux containers, however there is some support for Linux containers on Windows. For more information about Linux containers on Windows, see [Docker documentation](https://blog.docker.com/2017/09/preview-linux-containers-on-windows/).
-
-**Q: How do I discover the containers?**
-
-**A:** Cognitive Services containers are available in various locations, such as the Azure portal, Docker hub, and Azure container registries. For the most recent container locations, refer to [container images](container-image-tags.md).
-
-**Q: How does Cognitive Services containers compare to AWS and Google offerings?**
-
-**A:** Microsoft is first cloud provider to move their pre-trained AI models in containers with simple billing per transaction as though customers are using a cloud service. Microsoft believes a hybrid cloud gives customers more choice.
-
-**Q: What compliance certifications do containers have?**
-
-**A:** Cognitive services containers do not have any compliance certifications
-
-**Q: What regions are Cognitive Services containers available in?**
-
-**A:** Containers can be run anywhere in any region however they need a key and to call back to Azure for metering. All supported regions for the Cloud Service are supported for the containers metering call.
--
-[ad-containers]: ../anomaly-Detector/anomaly-detector-container-howto.md
-[cv-containers]: ../computer-vision/computer-vision-how-to-install-containers.md
-[fa-containers]: ../face/face-how-to-install-containers.md
-[fr-containers]: ../form-recognizer/form-recognizer-container-howto.md
-[lu-containers]: ../luis/luis-container-howto.md
-[sp-containers]: ../speech-service/speech-container-howto.md
-[ta-containers]: ../text-analytics/how-tos/text-analytics-how-to-install-containers.md
-
-[ad-containers-billing]: ../anomaly-Detector/anomaly-detector-container-howto.md#billing
-[cv-containers-billing]: ../computer-vision/computer-vision-how-to-install-containers.md#billing
-[fa-containers-billing]: ../face/face-how-to-install-containers.md#billing
-[fr-containers-billing]: ../form-recognizer/form-recognizer-container-howto.md#billing
-[lu-containers-billing]: ../luis/luis-container-howto.md#billing
-[sp-containers-billing]: ../speech-service/speech-container-howto.md#billing
-[ta-containers-billing]: ../text-analytics/how-tos/text-analytics-how-to-install-containers.md#billing
-
-[ad-containers-recommendations]: ../anomaly-Detector/anomaly-detector-container-howto.md#container-requirements-and-recommendations
-[cv-containers-recommendations]: ../computer-vision/computer-vision-how-to-install-containers.md#container-requirements-and-recommendations
-[fa-containers-recommendations]: ../face/face-how-to-install-containers.md#container-requirements-and-recommendations
-[fr-containers-recommendations]: ../form-recognizer/form-recognizer-container-howto.md#container-requirements-and-recommendations
-[lu-containers-recommendations]: ../luis/luis-container-howto.md#container-requirements-and-recommendations
-[sp-containers-recommendations]: ../speech-service/speech-container-howto.md#container-requirements-and-recommendations
-[ta-containers-recommendations]: ../text-analytics/how-tos/text-analytics-how-to-install-containers.md#container-requirements-and-recommendations
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/containers/includes/cognitive-services-faq-note https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/includes/cognitive-services-faq-note.md
@@ -12,4 +12,4 @@
> [!TIP]
-> For more troubleshooting information and guidance, see [Cognitive Services containers frequently asked questions (FAQ)](../container-faq.md).
+> For more troubleshooting information and guidance, see [Cognitive Services containers frequently asked questions (FAQ)](../container-faq.yml).
connectors https://docs.microsoft.com/en-us/azure/connectors/connectors-create-api-servicebus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-servicebus.md
@@ -3,9 +3,9 @@ Title: Exchange messages with Azure Service Bus
description: Create automated tasks and workflows that send and receive messages by using Azure Service Bus in Azure Logic Apps ms.suite: integration-+ Previously updated : 10/22/2020 Last updated : 02/10/2021 tags: connectors
@@ -174,7 +174,7 @@ When you create a logic app, you can select the **Correlated in-order delivery u
## Delays in updates to your logic app taking effect
-If a Service Bus trigger's polling interval is small, such as 10 seconds, updates to your logic app might not take effect for up to 10 minutes. To work around this problem, you can temporarily increase the polling interval to a larger value, such as 30 seconds or 1 minute, before you update your logic app. After you make the update, you can reset the polling interval to the original value.
+If a Service Bus trigger's polling interval is small, such as 10 seconds, updates to your logic app might not take effect for up to 10 minutes. To work around this problem, you can disable the logic app, make the changes, and then enable the logic app again.
<a name="connector-reference"></a>
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cassandra-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-support.md
@@ -233,7 +233,7 @@ Azure Cosmos DB Cassandra API is a managed service platform. It does not require
## Hosted CQL shell (preview)
-You can open a hosted native Cassandra shell (CQLSH v5.0.1) directly from the Data Explorer in the [Azure portal](data-explorer.md) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/). Before enabling the CQL shell, you must [enable the Notebooks](enable-notebooks.md) feature in your account (if not already enabled, you will be prompted when clicking on `Open Cassandra Shell`). Check the highlighted note in [Enable notebooks for Azure Cosmos DB accounts](enable-notebooks.md) for supported Azure Regions.
+You can open a hosted native Cassandra shell (CQLSH v5.0.1) directly from the Data Explorer in the [Azure portal](data-explorer.md) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/). Before enabling the CQL shell, you must [enable the Notebooks](enable-notebooks.md) feature in your account (if not already enabled, you will be prompted when clicking on `Open Cassandra Shell`). See the article [Enable notebooks for Azure Cosmos DB accounts](enable-notebooks.md#supported-regions) for supported Azure regions.
:::image type="content" source="./media/cassandra-support/cqlsh.png" alt-text="Open CQLSH":::
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cosmosdb-jupyter-notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-jupyter-notebooks.md
@@ -56,5 +56,7 @@ Jupyter Notebooks can include several types of components, each organized into d
To get started with built-in Jupyter Notebooks in Azure Cosmos DB, see the following articles: * [Enable notebooks in an Azure Cosmos account](enable-notebooks.md)
+* [Explore notebook samples gallery](https://cosmos.azure.com/gallery.html)
* [Use Python notebook features and commands](use-python-notebook-features-and-commands.md) * [Use C# notebook features and commands](use-csharp-notebook-features-and-commands.md)
+* [Import notebooks from a GitHub repo](import-github-notebooks.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/create-cosmosdb-resources-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cosmosdb-resources-portal.md
@@ -65,7 +65,7 @@ Go to the [Azure portal](https://portal.azure.com/) to create an Azure Cosmos DB
> - Geo-redundancy > - Multi-region Writes
- :::image type="content" source="./media/create-cosmosdb-resources-portal/azure-cosmos-db-create-new-account-detail.png" alt-text="The new account page for Azure Cosmos DB":::
+ :::image type="content" source="./media/create-cosmosdb-resources-portal/azure-cosmos-db-create-new-account-detail-2.png" alt-text="The new account page for Azure Cosmos DB":::
1. Select **Review + create**. You can skip the **Network** and **Tags** sections.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/create-notebook-visualize-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-notebook-visualize-data.md
@@ -17,7 +17,7 @@ This article describes how to use built-in Jupyter notebooks to import sample re
## Prerequisites
-* [Enable notebooks support while creating the Azure Cosmos account](enable-notebooks.md)
+* [Enable notebooks on an Azure Cosmos account](enable-notebooks.md)
## Create the resources and import data
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/enable-notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/enable-notebooks.md
@@ -5,8 +5,9 @@
Previously updated : 09/22/2019 Last updated : 02/09/2021 +
@@ -14,17 +15,17 @@
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)] > [!IMPORTANT]
-> Built-in notebooks for Azure Cosmos DB are currently available in the following Azure regions: Australia East, East US, East US 2, North Europe, South Central US, Southeast Asia, UK South, West Europe and West US 2. To use notebooks, [create a new account with notebooks](#enable-notebooks-in-a-new-cosmos-account) or [enable notebooks on an existing account](#enable-notebooks-in-an-existing-cosmos-account) in one of these regions.
+> Built-in notebooks for Azure Cosmos DB are currently available in [29 regions](#supported-regions). To use notebooks, [create a new Cosmos account](#create-a-new-cosmos-account) or [enable notebooks on an existing account](#enable-notebooks-in-an-existing-cosmos-account) in one of these regions.
Built-in Jupyter notebooks in Azure Cosmos DB enable you to analyze and visualize your data from the Azure portal. This article describes how to enable this feature for your Azure Cosmos DB account.
-## Enable notebooks in a new Cosmos account
-
+## Create a new Cosmos account
+Starting February 10, 2021, new Azure Cosmos accounts created in one of the [supported regions](#supported-regions) will automatically have notebooks enabled. No additional configuration is needed to enable notebooks. Use the following instructions to create a new account:
1. Sign into the [Azure portal](https://portal.azure.com/). 1. Select **Create a resource** > **Databases** > **Azure Cosmos DB**.
-1. On the **Create Azure Cosmos DB Account** page, select **Notebooks**.
+1. Enter the basic settings for the account.
- :::image type="content" source="media/enable-notebooks/create-new-account-with-notebooks.png" alt-text="Select notebooks option in Azure Cosmos DB Create blade":::
+ :::image type="content" source="./media/create-cosmosdb-resources-portal/azure-cosmos-db-create-new-account-detail-2.png" alt-text="The new account page for Azure Cosmos DB":::
1. Select **Review + create**. You can skip the **Network** and **Tags** option. 1. Review the account settings, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete**.
@@ -70,6 +71,45 @@ You can also select **New Notebook** to create a new notebook or upload an exist
:::image type="content" source="media/enable-notebooks/create-or-upload-new-notebook.png" alt-text="Create or upload a new notebook":::
+## Supported regions
+Built-in notebooks for Azure Cosmos DB are currently available in 29 Azure regions. New Azure Cosmos accounts created in these regions will have notebooks automatically enabled. Notebooks are free with your account.
+
+- Australia Central
+- Australia Central 2
+- Australia East
+- Australia Southeast
+- Brazil South
+- Canada Central
+- Canada East
+- Central India
+- Central US
+- East US
+- East US 2
+- France Central
+- France South
+- Germany North
+- Germany West Central
+- Japan West
+- Korea South
+- North Central US
+- North Europe
+- South Central US
+- Southeast Asia
+- Switzerland North
+- UAE Central
+- UK South
+- UK West
+- West Central US
+- West Europe
+- West India
+- West US 2
+ ## Next steps -- Learn about the benefits of [Azure Cosmos DB Jupyter Notebooks](cosmosdb-jupyter-notebooks.md)
+* Learn about the benefits of [Azure Cosmos DB Jupyter Notebooks](cosmosdb-jupyter-notebooks.md)
+* [Explore notebook samples gallery](https://cosmos.azure.com/gallery.html)
+* [Use Python notebook features and commands](use-python-notebook-features-and-commands.md)
+* [Use C# notebook features and commands](use-csharp-notebook-features-and-commands.md)
+* [Import notebooks from a GitHub repo](import-github-notebooks.md)
++
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/tutorial-mongotools-cosmos-db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-mongotools-cosmos-db.md
@@ -0,0 +1,153 @@
+
+ Title: Migrate MongoDB offline to Azure Cosmos DB API for MongoDB, using MongoDB native tools
+description: Learn how MongoDB native tools can be used to migrate small datasets from MongoDB instances to Azure Cosmos DB
+++++ Last updated : 02/10/2021+++
+# Tutorial: Migrate MongoDB to Azure Cosmos DB's API for MongoDB offline using MongoDB native tools
+
+You can use MongoDB native tools to perform an offline (one-time) migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB's API for MongoDB.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+>
+> * Choose the appropriate MongoDB native tool for your use-case
+> * Run the migration.
+> * Monitor the migration.
+> * Verify that migration was successful.
+
+In this tutorial, you migrate a dataset in MongoDB hosted in an Azure virtual machine to Azure Cosmos DB's API for MongoDB by using MongoDB native tools. The MongoDB native tools are a set of binaries that facilitate data manipulation on an existing MongoDB instance. Because Azure Cosmos DB exposes a Mongo API, the MongoDB native tools can insert data into Azure Cosmos DB. The focus of this doc is on migrating data out of a MongoDB instance using *mongoexport/mongoimport* or *mongodump/mongorestore*. Because the native tools connect to MongoDB using connection strings, you can run the tools anywhere; however, we recommend running them within the same network as the MongoDB instance to avoid firewall issues.
+
+The MongoDB native tools can move data only as fast as the host hardware allows; the native tools can be the simplest solution for small datasets where total migration time is not a concern. [MongoDB Spark connector](https://docs.mongodb.com/spark-connector/current/), [Azure Data Migration Service (DMS)](../dms/tutorial-mongodb-cosmos-db.md), or [Azure Data Factory (ADF)](../data-factory/connector-azure-cosmos-db-mongodb-api.md) can be better alternatives if you need a scalable migration pipeline.
+
+If you don't have a MongoDB source set up already, see the article [Install and configure MongoDB on a Windows VM in Azure](../virtual-machines/windows/install-mongodb.md).
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* [Complete the pre-migration](../cosmos-db/mongodb-pre-migration.md) steps such as estimating throughput, choosing a partition key, and the indexing policy.
+* [Create an Azure Cosmos DB API for MongoDB account](https://ms.portal.azure.com/#create/Microsoft.DocumentDB).
+* Log into your MongoDB instance
+ * [Download and install the MongoDB native tools from this link](https://www.mongodb.com/try/download/database-tools).
+ * **Ensure that your MongoDB native tools version matches your existing MongoDB instance.**
+ * If your MongoDB instance has a different version than Azure Cosmos DB Mongo API, then **install both MongoDB native tool versions and use the appropriate tool version for MongoDB and Azure Cosmos DB Mongo API, respectively.**
+ * Add a user with `readWrite` permissions, unless one already exists. Later in this tutorial, provide this username/password to the *mongoexport* and *mongodump* tools.
+
+## Configure Azure Cosmos DB Server Side Retries
+
+Customers migrating from MongoDB to Azure Cosmos DB benefit from resource governance capabilities, which guarantee the ability to fully utilize your provisioned RU/s of throughput. Azure Cosmos DB may throttle a given request in the course of migration if that request exceeds the container's provisioned RU/s; that request then needs to be retried. The round-trip time involved in the network hop between the migration tool and Azure Cosmos DB impacts the overall response time of that request; furthermore, MongoDB native tools may not handle retries. The *Server Side Retry* feature of Azure Cosmos DB allows the service to intercept throttle error codes and retry with a much lower round-trip time, dramatically improving request response times. From the perspective of MongoDB native tools, the need to handle retries is minimized, which will positively impact your experience during migration.
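As an illustration of what Server Side Retry saves you from, here is a minimal client-side retry loop with exponential backoff. The `insert_fn` callable and the error text are hypothetical stand-ins, not the actual Cosmos DB or MongoDB driver API:

```python
import time

def insert_with_retries(insert_fn, doc, max_attempts=5, base_delay=0.05):
    """Retry a write when the service throttles it (HTTP 429 /
    Mongo error 16500). Server Side Retry performs this loop inside
    the service itself, avoiding a client round trip per retry."""
    for attempt in range(max_attempts):
        try:
            return insert_fn(doc)
        except RuntimeError as err:  # stand-in for a throttling error
            if "TooManyRequests" not in str(err) or attempt == max_attempts - 1:
                raise
            # Back off exponentially before retrying the write.
            time.sleep(base_delay * (2 ** attempt))
```

With Server Side Retry enabled, the migration tool never sees most throttle responses, so logic like this becomes largely unnecessary.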
+
+You can find the Server Side Retry capability in the *Features* blade of the Azure Cosmos DB portal:
+
+![Screenshot of MongoDB SSR feature.](../dms/media/tutorial-mongodb-to-cosmosdb/mongo-server-side-retry-feature.png)
+
+If it is *Disabled*, we recommend enabling it as shown below:
+
+![Screenshot of MongoDB SSR enable.](../dms/media/tutorial-mongodb-to-cosmosdb/mongo-server-side-retry-enable.png)
+
+## Choose the proper MongoDB native tool
+
+![Diagram of selecting the best MongoDB native tool.](media/tutorial-mongotools-cosmos-db/mongodb-native-tool-selection-table.png)
+
+* *mongoexport/mongoimport* is the best pair of migration tools for migrating a subset of your MongoDB database.
+ * *mongoexport* exports your existing data to a human-readable JSON or CSV file. *mongoexport* takes an argument specifying the subset of your existing data to export.
+ * *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (Azure Cosmos DB, in this case).
+ * Note that JSON and CSV are not compact formats; you may incur excess network charges as *mongoimport* sends data to Azure Cosmos DB.
+* *mongodump/mongorestore* is the best pair of migration tools for migrating your entire MongoDB database. The compact BSON format will make more efficient use of network resources as the data is inserted into Azure Cosmos DB.
+ * *mongodump* exports your existing data as a BSON file.
+ * *mongorestore* imports your BSON file dump into Azure Cosmos DB.
+* As an aside, if you simply have a small JSON file that you want to import into Azure Cosmos DB Mongo API, the *mongoimport* tool is a quick solution for ingesting the data.
+
+## Collect the Azure Cosmos DB Mongo API credentials
+
+Azure Cosmos DB Mongo API provides compatible access credentials that MongoDB native tools can use. You need these credentials on hand to migrate data into Azure Cosmos DB Mongo API. To find these credentials:
+
+1. Open the Azure portal
+1. Navigate to your Azure Cosmos DB Mongo API account
+1. In the left nav, select the *Connection String* blade; you should see a display similar to the following:
+
+ ![Screenshot of Azure Cosmos DB credentials.](media/tutorial-mongotools-cosmos-db/cosmos-mongo-credentials.png)
+
+ * *HOST* - the Azure Cosmos DB endpoint functions as a MongoDB hostname
+ * *PORT* - when MongoDB native tools connect to Azure Cosmos DB, you must specify this port explicitly
+ * *USERNAME* - the prefix of the Azure Cosmos DB endpoint domain name functions as the MongoDB username
+ * *PASSWORD* - the Azure Cosmos DB master key functions as the MongoDB password
+ * Additionally, note the *SSL* field which is `true` - the MongoDB native tool **must** enable SSL when writing data into Azure Cosmos DB
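As a sketch of how those four fields combine into the connection string the native tools expect (the exact query parameters in your portal's primary connection string may differ; this is an assumption, not the authoritative format):

```python
from urllib.parse import quote_plus

def cosmos_mongo_uri(host, port, username, password):
    """Build a MongoDB connection string from the Connection String
    blade fields. The password (account key) can contain characters
    such as '+' and '=', so it must be percent-encoded; ssl=true is
    mandatory when writing to Azure Cosmos DB."""
    return (f"mongodb://{quote_plus(username)}:{quote_plus(password)}"
            f"@{host}:{port}/?ssl=true")
```

For example, `cosmos_mongo_uri("myaccount.mongo.cosmos.azure.com", 10255, "myaccount", "k+y==")` (hypothetical account name and key) yields a URI with the key safely escaped.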
+
+## Perform the migration
+
+1. Choose which database(s) and collection(s) you would like to migrate. In this example, we are migrating the *query* collection in the *edx* database from MongoDB to Azure Cosmos DB.
+
+The rest of this section will guide you through using the pair of tools you selected in the previous section.
+
+### *mongoexport/mongoimport*
+
+1. To export the data from the source MongoDB instance, open a terminal on the MongoDB instance machine. If it is a Linux machine, type
+
+ `mongoexport --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx.json`
+
+ On Windows, the executable will be `mongoexport.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the properties of your existing MongoDB database instance.
+
+ You may also choose to export only a subset of the MongoDB dataset. One way to do this is by adding a filter argument:
+
+ `mongoexport --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx.json --query '{"field1":"value1"}'`
+
+ Only documents which match the filter `{"field1":"value1"}` will be exported.
+
+ Once you execute the call, you should see that an `edx.json` file is produced:
+
+ ![Screenshot of mongoexport call.](media/tutorial-mongotools-cosmos-db/mongo-export-output.png)
+1. You can use the same terminal to import `edx.json` into Azure Cosmos DB. If you are running `mongoimport` on a Linux machine, type
+
+ `mongoimport --host HOST:PORT -u USERNAME -p PASSWORD --db edx --collection importedQuery --ssl --type json --writeConcern="{w:0}" --file edx.json`
+
+ On Windows, the executable will be `mongoimport.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the Azure Cosmos DB credentials you collected earlier.
+1. **Monitor** the terminal output from *mongoimport*. You should see that it prints lines of text to the terminal containing updates on the migration status:
+
+ ![Screenshot of mongoimport call.](media/tutorial-mongotools-cosmos-db/mongo-import-output.png)
+
+1. Finally, examine Azure Cosmos DB to **validate** that migration was successful. Open the Azure Cosmos DB portal and navigate to Data Explorer. You should see (1) that an *edx* database with an *importedQuery* collection has been created, and (2) if you exported only a subset of data, *importedQuery* should contain *only* docs matching the desired subset of the data. In the example below, only one doc matched the filter `{"field1":"value1"}`:
+
+ ![Screenshot of Cosmos DB data verification.](media/tutorial-mongotools-cosmos-db/mongo-review-cosmos.png)
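The `--query` argument used with *mongoexport* above is an ordinary MongoDB filter document. A minimal sketch of its equality-match semantics (top-level exact matches only; `$`-operators and nested paths are omitted):

```python
def matches(doc, query):
    """True when every field in the filter equals the corresponding
    document field -- the behavior of a simple equality filter."""
    return all(doc.get(field) == value for field, value in query.items())

docs = [
    {"field1": "value1", "x": 1},
    {"field1": "other", "x": 2},
]
# Mirrors: mongoexport ... --query '{"field1":"value1"}'
exported = [d for d in docs if matches(d, {"field1": "value1"})]
```

Only the first document survives the filter, matching what you should see in Data Explorer after the import.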
+
+### *mongodump/mongorestore*
+
+1. To create a BSON data dump of your MongoDB instance, open a terminal on the MongoDB instance machine. If it is a Linux machine, type
+
+ `mongodump --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection query --out edx-dump`
+
+ *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the properties of your existing MongoDB database instance. You should see that an `edx-dump` directory is produced and that the directory structure of `edx-dump` reproduces the resource hierarchy (database and collection structure) of your source MongoDB instance. Each collection is represented by a BSON file:
+
+ ![Screenshot of mongodump call.](media/tutorial-mongotools-cosmos-db/mongo-dump-output.png)
+1. You can use the same terminal to restore the contents of `edx-dump` into Azure Cosmos DB. If you are running `mongorestore` on a Linux machine, type
+
+ `mongorestore --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db edx --collection importedQuery --ssl edx-dump/edx/query.bson`
+
+ On Windows, the executable will be `mongorestore.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the Azure Cosmos DB credentials you collected earlier.
+1. **Monitor** the terminal output from *mongorestore*. You should see that it prints lines to the terminal updating on the migration status:
+
+ ![Screenshot of mongorestore call.](media/tutorial-mongotools-cosmos-db/mongo-restore-output.png)
+
+1. Finally, examine Azure Cosmos DB to **validate** that migration was successful. Open the Azure Cosmos DB portal and navigate to Data Explorer. You should see (1) that an *edx* database with an *importedQuery* collection has been created, and (2) *importedQuery* should contain the *entire* dataset from the source collection:
+
+ ![Screenshot of verifying Cosmos DB mongorestore data.](media/tutorial-mongotools-cosmos-db/mongo-review-cosmos-restore.png)
+
+## Post-migration optimization
+
+After you migrate the data stored in the MongoDB database to Azure Cosmos DB's API for MongoDB, you can connect to Azure Cosmos DB and manage the data. You can also perform other post-migration optimization steps, such as optimizing the indexing policy, updating the default consistency level, or configuring global distribution for your Azure Cosmos DB account. For more information, see the [Post-migration optimization](../cosmos-db/mongodb-post-migration.md) article.
+
+## Additional resources
+
+* [Cosmos DB service information](https://azure.microsoft.com/services/cosmos-db/)
+* [MongoDB database tools documentation](https://docs.mongodb.com/database-tools/)
+
+## Next steps
+
+* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/manage-reserved-vm-instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/manage-reserved-vm-instance.md
@@ -6,7 +6,7 @@
Previously updated : 12/08/2020 Last updated : 02/09/2021 # Manage Reservations for Azure resources
@@ -29,7 +29,7 @@ To view a Reservation Order, go to **Reservations** > select the reservation, an
![Example of reservation order details showing Reservation order ID ](./media/manage-reserved-vm-instance/reservation-order-details.png)
-A reservation inherits permissions from its reservation order.
+A reservation inherits permissions from its reservation order. To exchange or refund a reservation, the user should be added to the reservation order.
## Change the reservation scope
data-factory https://docs.microsoft.com/en-us/azure/data-factory/ci-cd-github-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
@@ -73,14 +73,14 @@ CI/CD release pipeline failing with the following error:
#### Cause
-This is due to an Integration Runtime with the same name in the target factory but with a different type. Integration Runtime needs to be of the same type when deploying.
+This is due to an integration runtime with the same name in the target factory but with a different type. Integration Runtime needs to be of the same type when deploying.
#### Recommendation - Refer to this Best Practices for CI/CD below: https://docs.microsoft.com/azure/data-factory/continuous-integration-deployment#best-practices-for-cicd -- Integration runtimes don't change often and are similar across all stages in your CI/CD, so Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If the name and types & properties are different, make sure to match the source and target IR configuration and then deploy the release pipeline.
+- Integration runtimes don't change often and are similar across all stages in your CI/CD, so Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If the name and types & properties are different, make sure to match the source and target integration runtime configuration and then deploy the release pipeline.
- If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type. ### Document creation or update failed because of invalid reference
@@ -128,7 +128,7 @@ You are unable to move Data Factory from one Resource Group to another, failing
#### Resolution
-You need to delete the SSIS-IR and Shared IRs to allow the move operation. If you do not want to delete the IRs, then the best way is to follow the copy and clone document to do the copy and after it's done, delete the old Data factory.
+You need to delete the SSIS-IR and Shared IRs to allow the move operation. If you do not want to delete the integration runtimes, then the best way is to follow the copy and clone document to do the copy and after it's done, delete the old Data Factory.
### Unable to export and import ARM template
@@ -146,6 +146,34 @@ You have created a customer role as the user and it did not have the necessary p
In order to resolve the issue, you need to add the following permission to your role: *Microsoft.DataFactory/factories/queryFeaturesValue/action*. This permission should be included by default in the "Data Factory Contributor" role.
+### Automatic publishing for CI/CD without clicking Publish button
+
+#### Issue
+
+Manual publishing with a button click in the ADF portal does not enable automated CI/CD.
+
+#### Cause
+
+Until recently, the only way to publish an ADF pipeline for deployment was a button click in the ADF portal. Now, you can automate the process.
+
+#### Resolution
+
+The CI/CD process has been enhanced. The **Automated publish** feature takes the validate and export Azure Resource Manager (ARM) template logic from the ADF UX and makes it consumable through the publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This lets you trigger these actions programmatically instead of going to the ADF UI and clicking a button, giving your CI/CD pipelines a true continuous integration experience. For details, see [ADF CI/CD Publishing Improvements](https://docs.microsoft.com/azure/data-factory/continuous-integration-deployment-improvements).
+
+### Cannot publish because of 4-MB ARM template limit
+
+#### Issue
+
+You cannot deploy because you hit the Azure Resource Manager limit of 4 MB total template size. You need a way to deploy after crossing the limit.
+
+#### Cause
+
+Azure Resource Manager limits template size to 4 MB and each parameter file to 64 KB. The 4-MB limit applies to the final state of the template after it has been expanded with iterative resource definitions and with values for variables and parameters. Your expanded template has crossed this limit.
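A quick sanity check of an exported template file against those limits (the file path is hypothetical; note the 4-MB limit formally applies to the template's final expanded state, so the on-disk size of the unexpanded file is only a lower bound):

```python
import os

ARM_TEMPLATE_LIMIT = 4 * 1024 * 1024   # 4 MB template limit
ARM_PARAMETER_FILE_LIMIT = 64 * 1024   # 64 KB per parameter file

def within_arm_limit(path, limit=ARM_TEMPLATE_LIMIT):
    """Return (size_in_bytes, fits) for a template or parameter file."""
    size = os.path.getsize(path)
    return size, size <= limit
```

If the check fails, splitting the deployment into linked templates is the usual way forward.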
+
+#### Resolution
+
+For small to medium solutions, a single template is easier to understand and maintain: you can see all the resources and values in a single file. For advanced scenarios, linked templates enable you to break the solution into targeted components. Follow the best practices in [Using Linked and Nested Templates](https://docs.microsoft.com/azure/azure-resource-manager/templates/linked-templates?tabs=azure-powershell).
+ ## Next steps For more help with troubleshooting, try the following resources:
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-roles-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-roles-permissions.md
@@ -49,14 +49,13 @@ The **Data Factory Contributor** role, at the resource group level or above, let
Permissions on Azure Repos and GitHub are independent of Data Factory permissions. As a result, a user with repo permissions who is only a member of the Reader role can edit Data Factory child resources and commit changes to the repo, but can't publish these changes.
-> [!IMPORTANT]
-> Resource Manager template deployment with the **Data Factory Contributor** role does not elevate your permissions. For example, if you deploy a template that creates an Azure virtual machine, and you don't have permission to create virtual machines, the deployment fails with an authorization error.
> [!IMPORTANT]
-> The permission **Microsoft.DataFactory/factories/write** is required in both modes within the publish context.
+> Resource Manager template deployment with the **Data Factory Contributor** role does not elevate your permissions. For example, if you deploy a template that creates an Azure virtual machine, and you don't have permission to create virtual machines, the deployment fails with an authorization error.
-- This permission is only required in Live mode when the customer modifies global parameters.-- This permission is always required in Git mode since every time after the customer publishes, because the factory object with the last commit id is updated.
+ In the publish context, the **Microsoft.DataFactory/factories/write** permission applies in the following modes.
+- This permission is required in Live mode only when the customer modifies the global parameters.
+- This permission is always required in Git mode, because after every publish the factory object needs to be updated with the last commit ID.
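For illustration only (the role name and scope are hypothetical, not defined by this article), a custom role granting the **Microsoft.DataFactory/factories/write** permission could be sketched as:

```json
{
  "Name": "Data Factory Publisher (example)",
  "IsCustom": true,
  "Description": "Hypothetical custom role that allows reading and publishing a factory.",
  "Actions": [
    "Microsoft.DataFactory/factories/read",
    "Microsoft.DataFactory/factories/write"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```

Narrowing `AssignableScopes` to a resource group keeps the publish permission from applying subscription-wide.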
### Custom scenarios and custom roles
@@ -91,6 +90,7 @@ Here are a few examples that demonstrate what you can achieve with custom roles:
Assign the built-in **contributor** role on the data factory resource for the user. This role lets the user see the resources in the Azure portal, but the user can't access the **Publish** and **Publish All** buttons. + ## Next steps - Learn more about roles in Azure - [Understand role definitions](../role-based-access-control/role-definitions.md)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
@@ -0,0 +1,34 @@
+
+ Title: Built-in policy definitions
+description: Lists Azure Policy built-in policy definitions for Data Factory. These built-in policy definitions provide common approaches to managing your Azure resources.
+Last updated : 12/3/2020
+# Azure Policy built-in definitions for Data Factory (Preview)
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy definitions for Data Factory. For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Data Factory
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-powershell.md
@@ -36,6 +36,9 @@ This quickstart describes how to use PowerShell to create an Azure Data Factory.
Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+>[!WARNING]
+>If you do not use the latest versions of the Azure PowerShell and Data Factory modules, you may run into deserialization errors while running the commands.
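As a sketch of how to stay current (these are standard PowerShellGet cmdlets; the module names assume the Az modules referenced above), you can check and update the installed modules:

```powershell
# Show the currently installed Az and Az.DataFactory versions
Get-InstalledModule -Name Az, Az.DataFactory

# Update both modules to the latest released versions
Update-Module -Name Az
Update-Module -Name Az.DataFactory
```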
+ #### Log in to PowerShell 1. Launch **PowerShell** on your machine. Keep PowerShell open until the end of this quickstart. If you close and reopen, you need to run these commands again.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/source-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/source-control.md
@@ -25,7 +25,7 @@ By default, the Azure Data Factory user interface experience (UX) authors direct
To provide a better authoring experience, Azure Data Factory allows you to configure a Git repository with either Azure Repos or GitHub. Git is a version control system that allows for easier change tracking and collaboration. This article will outline how to configure and work in a git repository along with highlighting best practices and a troubleshooting guide. > [!NOTE]
-> Azure Data Factory git integration is only available for GitHub Enterprise in the Azure Government Cloud.
+> For Azure Government Cloud, only GitHub Enterprise is available.
To learn more about how Azure Data Factory integrates with Git, view the 15-minute tutorial video below:
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-migrate-fpga-gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-migrate-fpga-gpu.md
@@ -7,7 +7,7 @@
Previously updated : 02/09/2021 Last updated : 02/10/2021 # Migrate workloads from an Azure Stack Edge Pro FPGA to an Azure Stack Edge Pro GPU
@@ -185,8 +185,8 @@ Follow these steps to recover the data from local shares:
Once the IoT Edge modules are prepared, you will need to deploy IoT Edge workloads on your target device. If you face any errors in deploying IoT Edge modules, see: -- [Common issues and resolutions for Azure IoT Edge](../iot-edge/troubleshoot-common-errors.md), and -- [IoT Edge runtime errors][Manage an Azure Stack Edge Pro GPU device via Windows PowerShell](azure-stack-edge-gpu-troubleshoot.md#troubleshoot-iot-edge-errors).
+- [Common issues and resolutions for Azure IoT Edge](../iot-edge/troubleshoot-common-errors.md).
+- [IoT Edge runtime errors](azure-stack-edge-gpu-troubleshoot.md#troubleshoot-iot-edge-errors).
## Verify data
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/concept-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-baseline.md
@@ -31,9 +31,13 @@ Baseline custom checks establish a custom list of checks for each device baselin
## Setting baseline properties 1. In your IoT Hub, locate and select the device you wish to change.+ 1. Click on the device, and then click the **azureiotsecurity** module.+ 1. Click **Module Identity Twin**.+ 1. Upload the **baseline custom checks** file to the device.+ 1. Add baseline properties to the security module and click **Save**. ### Baseline custom check file example
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/references-defender-for-iot-glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-defender-for-iot-glossary.md
@@ -13,6 +13,9 @@
This glossary provides a brief description of important terms and concepts for the Azure Defender for IoT platform. Select the **Learn more** links to go to related terms in the glossary. This will help you more quickly learn and use product tools.
+> [!Note]
+> Any term with `(AL)` in its name is an agent-based device builder term.
+ <a name="glossary-a"></a> ## A
@@ -53,6 +56,7 @@ This glossary provides a brief description of important terms and concepts for t
| **Device inventory - sensor** | The device inventory displays an extensive range of device attributes detected by Defender for IoT. Options are available to:<br /><br />- Filter displayed information.<br /><br />- Export this information to a CSV file.<br /><br />- Import Windows registry details. | **[Group](#g)** <br /><br />**[Device inventory- on-premises management console](#d)** | | **Device inventory - on-premises management console** | Device information from connected sensors can be viewed from the on-premises management console in the device inventory. This gives users of the on-premises management console a comprehensive view of all network information. | **[Device inventory - sensor](#d)<br /><br />[Device inventory - data integrator](#d)** | | **Device inventory - data integrator** | The data integration capabilities of the on-premises management console let you enhance the data in the device inventory with information from other enterprise resources. Example resources are CMDBs, DNS, firewalls, and Web APIs. | **[Device inventory - on-premises management console](#d)** |
+| **Device twins** `(AL)` | Device twins are JSON documents that store device state information including metadata, configurations, and conditions. | [Module Twin](#m) <br /> <br />[Security module twin](#s) |
## E
@@ -85,6 +89,7 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--|
+| **IoT Hub** `(AL)` | Managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. | |
| **Integrations** | Expand Defender for IoT capabilities by sharing device information with partner systems. Organizations can bridge previously siloed security, NAC, incident management, and device management solutions to accelerate system-wide responses and more rapidly mitigate risks. | **[Forwarding rule](#f)** | | **Internal subnet** | Subnet configurations defined by Defender for IoT. In some cases, such as environments that use public ranges as internal ranges, you can instruct Defender for IoT to resolve all subnets as internal subnets. Subnets are displayed in the map and in various Defender for IoT reports. | **[Subnets](#s)** |
@@ -100,6 +105,8 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--|
+| **Micro Agent** `(AL)` | Provides in-depth security capabilities for IoT devices, including security posture and threat detection. | |
+| **Module twin** `(AL)` | Module twins are JSON documents that store module state information including metadata, configurations, and conditions. | [Device twin](#d) <br /> <br />[Security module twin](#s) |
| **Mute Alert Event** | Instruct Defender for IoT to continuously ignore activity with identical devices and comparable traffic. | **[Alert](#glossary-a)<br /><br />[Exclusion rule](#e)<br /><br />[Acknowledge alert event](#glossary-a)<br /><br />[Learn alert event](#l)** | ## N
@@ -135,6 +142,7 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--| | **Security alert** | Alerts that deal with security issues, such as excessive SMB sign in attempts or malware detections. | **[Alert](#glossary-a)<br /><br />[Operational alert](#o)** |
+| **Security module twin** `(AL)` | The security module twin holds all of the information that is relevant to device security, for each specific device in your solution. | [Device twin](#d) <br /> <br />[Module Twin](#m) |
| **Selective probing** | Defender for IoT passively inspects IT and OT traffic and detects relevant information on devices, their attributes, their behavior, and more. In certain cases, some information might not be visible in passive network analyses.<br /><br />When this happens, you can use the safe, granular probing tools in Defender for IoT to discover important information on previously unreachable devices. | - | | **Sensor** | The physical or virtual machine on which the Defender for IoT platform is installed. | **[On-premises management console](#o)** | | **Site** | A location such as a factory or other entity. The site should contain a zone or several zones in which a sensor is installed. | **[Zone](#z)** |
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-messaging-exceptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-messaging-exceptions.md
@@ -2,7 +2,7 @@
Title: Azure Event Hubs - exceptions (legacy) description: This article provides a list of Azure Event Hubs messaging exceptions and suggested actions. Previously updated : 11/02/2020 Last updated : 02/10/2021 # Event Hubs messaging exceptions - .NET (legacy)
@@ -122,14 +122,14 @@ This error can occur for one of two reasons:
If you see values higher than the number of TUs * limits (1 MB per second or 1,000 requests per second for ingress, 2 MB per second for egress), increase the number of TUs by using the **Scale** page (on the left menu) of the Event Hubs namespace to manually scale higher, or use the [Auto-inflate](event-hubs-auto-inflate.md) feature of Event Hubs. Note that Auto-inflate can only increase up to 20 TUs. To raise it to exactly 40 TUs, submit a [support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-### Error code 50001
+### Error code 50008
This error should rarely occur. It happens when the container running code for your namespace is low on CPU – not more than a few seconds before the Event Hubs load balancer begins.
-**Resolution**: Limit on calls to the GetRuntimeInformation method. Azure Event Hubs supports up to 50 calls per second to the GetRuntimeInfo per second. You may receive an exception similar to the following one once the limit is reached:
+**Resolution**: Limit calls to the GetRuntimeInformation method. Azure Event Hubs supports up to 50 calls per second per consumer group to GetRuntimeInformation. You may receive an exception similar to the following one once the limit is reached:
```
-ExceptionId: 00000000000-00000-0000-a48a-9c908fbe84f6-ServerBusyException: The request was terminated because the namespace 75248:aaa-default-eventhub-ns-prodb2b is being throttled. Error code : 50001. Please wait 10 seconds and try again.
+ExceptionId: 00000000000-00000-0000-a48a-9c908fbe84f6-ServerBusyException: The request was terminated because the namespace 75248:aaa-default-eventhub-ns-prodb2b is being throttled. Error code : 50008. Please wait 10 seconds and try again.
```
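One way to stay under that limit is to cache the metadata on the client side. The helper below is not part of any Event Hubs SDK — it is a minimal, hypothetical sketch of time-based caching wrapped around whatever function actually issues the GetRuntimeInformation call:

```python
import time

class RuntimeInfoCache:
    """Cache runtime metadata so repeated reads don't each hit the service."""

    def __init__(self, fetch, ttl_seconds=10.0):
        self._fetch = fetch        # callable that performs the actual service call
        self._ttl = ttl_seconds    # how long a cached result stays fresh
        self._cached = None
        self._fetched_at = None
        self.fetch_count = 0       # number of real service calls made

    def get(self):
        now = time.monotonic()
        if self._fetched_at is None or now - self._fetched_at > self._ttl:
            self._cached = self._fetch()
            self._fetched_at = now
            self.fetch_count += 1
        return self._cached


# Usage: 100 reads inside the TTL window trigger only one service call.
cache = RuntimeInfoCache(lambda: {"partition_count": 4}, ttl_seconds=60.0)
results = [cache.get() for _ in range(100)]
print(cache.fetch_count)  # 1
```

The same pattern applies regardless of SDK language: refresh the partition information on an interval rather than on every read.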
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations-providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
@@ -94,7 +94,7 @@ The following table shows connectivity locations and the service providers for e
| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport | | **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GEANT, InterCloud, Interxion, Megaport, Orange, Telia Carrier, T-Systems | | **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | |
-| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Equinix, Megaport |
+| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Equinix, Megaport, Swisscom |
| **Hong Kong** | [Equinix HK1](https://www.equinix.com/locations/asia-colocation/hong-kong-colocation/hong-kong-data-center/hk1/) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon | | **Hong Kong2** | [MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, Megaport, PCCW Global Limited, SingTel | | **Jakarta** | Telin, Telkom Indonesia | 4 | n/a | 10G | Telin |
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
@@ -153,7 +153,7 @@ The following table shows locations by service provider. If you want to view ava
| **[Sohonet](https://www.sohonet.com/fastlane/)** |Supported |Supported |London2 | | **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** |Supported |Supported |Auckland, Sydney | | **[Sprint](https://business.sprint.com/solutions/cloud-networking/)** |Supported |Supported |Chicago, Silicon Valley, Washington DC |
-| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Zurich |
+| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva, Zurich |
| **[Tata Communications](https://www.tatacommunications.com/lp/izo/azure/azure_https://docsupdatetracker.net/index.html)** |Supported |Supported |Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Sao Paulo, Silicon Valley, Singapore, Washington DC | | **[Telefonica](https://www.business-solutions.telefonica.com/es/enterprise/solutions/efficient-infrastructure/managed-voice-data-connectivity/)** |Supported |Supported |Amsterdam, Sao Paulo | | **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported |London, London2, Singapore2 |
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/guest-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
@@ -215,12 +215,17 @@ The Audit policy definitions available for Guest Configuration include the
[Azure Arc for servers](../../../azure-arc/servers/overview.md) that are in the scope of the policy assignment are automatically included.
+## Troubleshooting guest configuration
+
+For more information about troubleshooting Guest Configuration, see
+[Azure Policy troubleshooting](../troubleshoot/general.md).
+ ### Multiple assignments Guest Configuration policy definitions currently only support assigning the same Guest Assignment once per machine, even if the Policy assignment uses different parameters.
-## Client log files
+### Client log files
The Guest Configuration extension writes log files to the following locations:
@@ -261,6 +266,15 @@ logPath=/var/lib/GuestConfig/gc_agent_logs/gc_agent.log
egrep -B $linesToIncludeBeforeMatch -A $linesToIncludeAfterMatch 'DSCEngine|DSCManagedEngine' $logPath | tail ```
+### Client files
+
+The Guest Configuration client downloads content packages to a machine and extracts the contents.
+To verify what content has been downloaded and stored, view the folder locations given below.
+
+Windows: `c:\programdata\guestconfig\configurations`
+
+Linux: `/var/lib/guestconfig/configurations`
+ ## Guest Configuration samples Guest Configuration built-in policy samples are available in the following locations:
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/customer-managed-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/customer-managed-key.md
@@ -21,7 +21,7 @@ In Azure, this is typically accomplished using an encryption key in the customer
- [Add an access policy to your Azure Key Vault instance](../cosmos-db/how-to-setup-cmk.md#add-an-access-policy-to-your-azure-key-vault-instance) - [Generate a key in Azure Key Vault](../cosmos-db/how-to-setup-cmk.md#generate-a-key-in-azure-key-vault)
-## Specify the Azure Key Vault key
+## Using Azure portal
When creating your Azure API for FHIR account on Azure portal, you can see a "Data Encryption" configuration option under the "Database Settings" on the "Additional Settings" tab. By default, the service-managed key option will be chosen.
@@ -39,9 +39,100 @@ For existing FHIR accounts, you can view the key encryption choice (service- or
In addition, you can create a new version of the specified key, after which your data will get encrypted with the new version without any service interruption. You can also remove access to the key to remove access to the data. When the key is disabled, queries will result in an error. If the key is re-enabled, queries will succeed again. +++
+## Using Azure PowerShell
+
+With your Azure Key Vault key URI, you can configure CMK by running the PowerShell command below:
+
+```powershell
+New-AzHealthcareApisService `
+ -Name "myService" `
+ -Kind "fhir-R4" `
+ -ResourceGroupName "myResourceGroup" `
+ -Location "westus2" `
+ -CosmosKeyVaultKeyUri "https://<my-vault>.vault.azure.net/keys/<my-key>"
+```
+
+## Using Azure CLI
+
+As with the PowerShell method, you can configure CMK by passing your Azure Key Vault key URI in the `key-vault-key-uri` parameter and running the CLI command below:
+
+```azurecli-interactive
+az healthcareapis service create \
+ --resource-group "myResourceGroup" \
+ --resource-name "myResourceName" \
+ --kind "fhir-R4" \
+ --location "westus2" \
+ --cosmos-db-configuration key-vault-key-uri="https://<my-vault>.vault.azure.net/keys/<my-key>"
+
+```
+## Using an Azure Resource Manager template
+
+With your Azure Key Vault key URI, you can configure CMK by passing it under the **keyVaultKeyUri** property in the **properties** object.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "services_myService_name": {
+ "defaultValue": "myService",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.HealthcareApis/services",
+ "apiVersion": "2020-03-30",
+ "name": "[parameters('services_myService_name')]",
+ "location": "westus2",
+ "kind": "fhir-R4",
+ "properties": {
+ "accessPolicies": [],
+ "cosmosDbConfiguration": {
+ "offerThroughput": 400,
+ "keyVaultKeyUri": "https://<my-vault>.vault.azure.net/keys/<my-key>"
+ },
+ "authenticationConfiguration": {
+ "authority": "https://login.microsoftonline.com/72f988bf-86f1-41af-91ab-2d7cd011db47",
+ "audience": "[concat('https://', parameters('services_myService_name'), '.azurehealthcareapis.com')]",
+ "smartProxyEnabled": false
+ },
+ "corsConfiguration": {
+ "origins": [],
+ "headers": [],
+ "methods": [],
+ "maxAge": 0,
+ "allowCredentials": false
+ }
+ }
+ }
+ ]
+}
+```
+
+And you can deploy the template with the following PowerShell script:
+
+```powershell
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$accountLocation = "West US 2"
+$keyVaultKeyUri = "https://<my-vault>.vault.azure.net/keys/<my-key>"
+
+New-AzResourceGroupDeployment `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile "deploy.json" `
+ -accountName $accountName `
+ -location $accountLocation `
+ -keyVaultKeyUri $keyVaultKeyUri
+```
+ ## Next steps
-In this article, you learned how to configure customer-managed keys at rest. Next, you can check out the Azure Cosmos DB FAQ section:
+In this article, you learned how to configure customer-managed keys at rest by using the Azure portal, PowerShell, the Azure CLI, and an Azure Resource Manager template. For additional questions, check out the Azure Cosmos DB FAQ section:
>[!div class="nextstepaction"] >[Cosmos DB: how to setup CMK](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-cmk#frequently-asked-questions)
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/fhir-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir-faq.md
@@ -111,6 +111,9 @@ You can see more details at this [community post](https://chat.fhir.org/#narrow/
$export is part of the FHIR specification: https://hl7.org/fhir/uv/bulkdata/.
+### Is de-identified export available at Patient and Group level as well?
+Anonymized export is currently supported only on a full system export (/$export), and not for Patient export (/Patient/$export). We are working on making it available at the Patient level as well.
+ ## Using Azure API for FHIR ### How do I enable log analytics for Azure API for FHIR?
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/iot-edge-as-gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-as-gateway.md
@@ -29,7 +29,7 @@ All gateway patterns provide the following benefits:
* **Analytics at the edge** – Use AI services locally to process data coming from downstream devices without sending full-fidelity telemetry to the cloud. Find and react to insights locally and only send a subset of data to IoT Hub. * **Downstream device isolation** – The gateway device can shield all downstream devices from exposure to the internet. It can sit in between an operational technology (OT) network that does not have connectivity and an information technology (IT) network that provides access to the web. Similarly, devices that don't have the capability to connect to IoT Hub on their own can connect to a gateway device instead.
-* **Connection multiplexing** - All devices connecting to IoT Hub through an IoT Edge gateway use the same underlying connection.
+* **Connection multiplexing** - All devices connecting to IoT Hub through an IoT Edge gateway can use the same underlying connection. This multiplexing capability requires that the IoT Edge gateway uses AMQP as its upstream protocol.
* **Traffic smoothing** - The IoT Edge device will automatically implement exponential backoff if IoT Hub throttles traffic, while persisting the messages locally. This benefit makes your solution resilient to spikes in traffic. * **Offline support** - The gateway device stores messages and twin updates that cannot be delivered to IoT Hub.
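For example (a sketch of just the relevant fragment of a deployment manifest; the image tag shown is illustrative), the gateway's upstream protocol is controlled through the `UpstreamProtocol` environment variable on the edgeHub module:

```json
"edgeHub": {
  "type": "docker",
  "env": {
    "UpstreamProtocol": {
      "value": "Amqp"
    }
  },
  "settings": {
    "image": "mcr.microsoft.com/azureiotedge-hub:1.1",
    "createOptions": "{}"
  }
}
```

With AMQP upstream, the gateway can multiplex its downstream device connections over the shared link to IoT Hub.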
@@ -37,6 +37,8 @@ All gateway patterns provide the following benefits:
In the transparent gateway pattern, devices that theoretically could connect to IoT Hub can connect to a gateway device instead. The downstream devices have their own IoT Hub identities and connect using either MQTT or AMQP protocols. The gateway simply passes communications between the devices and IoT Hub. Both the devices and the users interacting with them through IoT Hub are unaware that a gateway is mediating their communications. This lack of awareness means the gateway is considered *transparent*.
+For more information about how the IoT Edge hub manages communication between downstream devices and the cloud, see [Understand the Azure IoT Edge runtime and its architecture](iot-edge-runtime.md).
+ <!-- 1.0.10 --> ::: moniker range="iotedge-2018-06"
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/overview-vnet-service-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/overview-vnet-service-endpoints.md
@@ -46,6 +46,7 @@ Here's a list of trusted services that are allowed to access a key vault if the
|Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](../../azure-sql/database/transparent-data-encryption-byok-overview.md?view=sql-server-2017&preserve-view=true&viewFallbackFrom=azuresqldb-current).| |Azure Storage|[Storage Service Encryption using customer-managed keys in Azure Key Vault](../../storage/common/customer-managed-keys-configure-key-vault.md).| |Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.|
+|Azure Synapse Analytics|[Encryption of data using customer-managed keys in Azure Key Vault](../../synapse-analytics/security/workspaces-encryption.md)|
|Azure Databricks|[Fast, easy, and collaborative Apache SparkΓÇôbased analytics service](/azure/databricks/scenarios/what-is-azure-databricks)| |Azure API Management|[Deploy certificates for Custom Domain from Key Vault using MSI](../../api-management/api-management-howto-use-managed-service-identity.md#use-ssl-tls-certificate-from-azure-key-vault)| |Azure Data Factory|[Fetch data store credentials in Key Vault from Data Factory](https://go.microsoft.com/fwlink/?linkid=2109491)|
lighthouse https://docs.microsoft.com/en-us/azure/lighthouse/concepts/managed-services-offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/managed-services-offers.md
@@ -1,7 +1,7 @@
Title: Managed Service offers in Azure Marketplace description: Managed Service offers let you sell resource management offers to customers in Azure Marketplace. Previously updated : 07/28/2020 Last updated : 02/10/2021
@@ -13,7 +13,7 @@ This article describes the **Managed Service** offer type in [Azure Marketplace]
Managed Service offers streamline the process of onboarding customers to Azure Lighthouse. When a customer purchases an offer in Azure Marketplace, they'll be able to specify which subscriptions and/or resource groups should be onboarded.
-After that, users in your organization will be able to work on those resources from within your managing tenant through [Azure delegated resource management](azure-delegated-resource-management.md), according to the access you defined when creating the offer. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with roles that define their level of access. By assigning permissions to an Azure AD group rather than a series of individual user or application accounts, you can add or remove individual users when your access requirements change.
+After that, users in your organization will be able to work on those resources from within your managing tenant through [Azure delegated resource management](azure-delegated-resource-management.md), according to the access you defined when creating the offer. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with [roles](tenants-users-roles.md) that define their level of access.
## Public and private offers
lighthouse https://docs.microsoft.com/en-us/azure/lighthouse/how-to/publish-managed-services-offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/publish-managed-services-offers.md
@@ -1,7 +1,7 @@
Title: Publish a Managed Service offer to Azure Marketplace description: Learn how to publish a Managed Service offer that onboards customers to Azure Lighthouse. Previously updated : 12/17/2020 Last updated : 02/10/2021
@@ -13,7 +13,7 @@ In this article, you'll learn how to publish a public or private Managed Service
You need to have a valid [account in Partner Center](../../marketplace/partner-center-portal/create-account.md) to create and publish offers. If you don't have an account already, the [sign-up process](https://aka.ms/joinmarketplace) will lead you through the steps of creating an account in Partner Center and enrolling in the Commercial Marketplace program.
-Per the [Managed Service offer certification requirements](/legal/marketplace/certification-policies#7004-business-requirements), you must have a [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or be an [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) in order to publish a Managed Service offer. You must also [enter a lead destination that will create a record in your CRM system](../../marketplace/plan-managed-service-offer.md#customer-leads) each time a customer deploys your offer.
+Per the [Managed Service offer certification requirements](/legal/marketplace/certification-policies#700-managed-services), you must have a [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or be an [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) in order to publish a Managed Service offer. You must also [enter a lead destination that will create a record in your CRM system](../../marketplace/plan-managed-service-offer.md#customer-leads) each time a customer deploys your offer.
Your Microsoft Partner Network (MPN) ID will be [automatically associated](../../cost-management-billing/manage/link-partner-id.md) with the offers you publish to track your impact across customer engagements.
@@ -33,9 +33,9 @@ The following table can help determine whether to onboard customers by publishin
## Create your offer
-For detailed instructions about how to create your offer, including all of the information and assets you'll need to provide, see [Create a Managed Service offer](../../marketplace/plan-managed-service-offer.md).
+For detailed instructions about how to create your offer, including all of the information and assets you'll need to provide, see [Create a Managed Service offer](../../marketplace/create-managed-service-offer.md).
-To learn about the general publishing process, see [Azure Marketplace and AppSource Publishing Guide](../../marketplace/overview.md). You should also review the [commercial marketplace certification policies](/legal/marketplace/certification-policies), particularly the [Managed Services](/legal/marketplace/certification-policies#700-managed-services) section.
+To learn about the general publishing process, review the [Commercial Marketplace documentation](../../marketplace/overview.md). You should also review the [commercial marketplace certification policies](/legal/marketplace/certification-policies), particularly the [Managed Services](/legal/marketplace/certification-policies#700-managed-services) section.
Once a customer adds your offer, they will be able to delegate one or more subscriptions or resource groups, which will then be [onboarded to Azure Lighthouse](#the-customer-onboarding-process).
@@ -44,7 +44,7 @@ Once a customer adds your offer, they will be able to delegate one or more subsc
## Publish your offer
-Once you've completed all of the sections, your next step is to publish the offer to Azure Marketplace. Select the **Publish** button to initiate the process of making your offer live. More info about this process, can be found [here](../../marketplace/plan-managed-service-offer.md).
+Once you've completed all of the sections, your next step is to publish the offer to Azure Marketplace. Select the **Publish** button to initiate the process of making your offer live. More info about this process can be found [here](../../marketplace/review-publish-offer.md).
You can [publish an updated version of your offer](../..//marketplace/partner-center-portal/update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously-published offer. When you do so, customers who have already added the offer will see an icon in the [**Service providers**](view-manage-service-providers.md) page in the Azure portal that lets them know an update is available. Each customer will be able to [review the changes](view-manage-service-providers.md#update-service-provider-offers) and decide whether they want to update to the new version.
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/tutorial-cross-region-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-cross-region-powershell.md
@@ -0,0 +1,231 @@
+
+ Title: 'Tutorial: Create a cross-region load balancer using Azure PowerShell'
+
+description: Get started with this tutorial to deploy a cross-region Azure Load Balancer using Azure PowerShell.
++++ Last updated : 02/10/2021
+#Customer intent: As an administrator, I want to deploy a cross-region load balancer for global high availability of my application or service.
++
+# Tutorial: Create a cross-region Azure Load Balancer using Azure PowerShell
+
+A cross-region load balancer ensures a service is available globally across multiple Azure regions. If one region fails, the traffic is routed to the next closest healthy regional load balancer.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a cross-region load balancer.
+> * Create a load balancer rule.
+> * Create a backend pool containing two regional load balancers.
+> * Test the load balancer.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- An Azure subscription.
+- Two **Standard** SKU Azure Load Balancers with backend pools deployed in two different Azure regions.
+ - For information on creating a regional standard load balancer and virtual machines for backend pools, see [Quickstart: Create a public load balancer to load balance VMs using Azure PowerShell](quickstart-load-balancer-standard-public-powershell.md).
+ - Append **-R1** and **-R2** to the names of the load balancers and virtual machines in each region.
+- Azure PowerShell installed locally or Azure Cloud Shell.
++
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+## Create cross-region load balancer
++
+### Create a resource group
+
+An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup).
++
+```azurepowershell-interactive
+$rg = @{
+ Name = 'MyResourceGroupLB-CR'
+ Location = 'westus'
+}
+New-AzResourceGroup @rg
+
+```
+
+### Create cross-region load balancer resources
+
+In this section, you'll create the resources needed for the cross-region load balancer.
+
+A global Standard SKU public IP address is used for the frontend of the cross-region load balancer.
+
+* Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create the public IP address.
+
+* Create a front-end IP configuration with [New-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/new-azloadbalancerfrontendipconfig).
+
+* Create a back-end address pool with [New-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/new-azloadbalancerbackendaddresspoolconfig).
+
+* Create a load balancer rule with [Add-AzLoadBalancerRuleConfig](/powershell/module/az.network/add-azloadbalancerruleconfig).
+
+* Create a cross-region load balancer with [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer).
+
+```azurepowershell-interactive
+## Create global IP address for load balancer ##
+$ip = @{
+ Name = 'myPublicIP-CR'
+ ResourceGroupName = 'MyResourceGroupLB-CR'
+ Location = 'westus'
+ Sku = 'Standard'
+ Tier = 'Global'
+ AllocationMethod = 'Static'
+}
+$publicIP = New-AzPublicIpAddress @ip
+
+## Create frontend configuration ##
+$fe = @{
+ Name = 'myFrontEnd-CR'
+ PublicIpAddress = $publicIP
+}
+$feip = New-AzLoadBalancerFrontendIpConfig @fe
+
+## Create back-end address pool ##
+$be = @{
+ Name = 'myBackEndPool-CR'
+}
+$bepool = New-AzLoadBalancerBackendAddressPoolConfig @be
+
+## Create the load balancer rule ##
+$rul = @{
+ Name = 'myHTTPRule-CR'
+ Protocol = 'tcp'
+ FrontendPort = '80'
+ BackendPort = '80'
+ FrontendIpConfiguration = $feip
+ BackendAddressPool = $bepool
+}
+$rule = New-AzLoadBalancerRuleConfig @rul
+
+## Create cross-region load balancer resource ##
+$lbp = @{
+ ResourceGroupName = 'myResourceGroupLB-CR'
+ Name = 'myLoadBalancer-CR'
+ Location = 'westus'
+ Sku = 'Standard'
+ Tier = 'Global'
+ FrontendIpConfiguration = $feip
+ BackendAddressPool = $bepool
+ LoadBalancingRule = $rule
+}
+$lb = New-AzLoadBalancer @lbp
+```
+
+## Configure backend pool
+
+In this section, you'll add two regional standard load balancers to the backend pool of the cross-region load balancer.
+
+> [!IMPORTANT]
+> To complete these steps, ensure that two regional load balancers with backend pools have been deployed in your subscription. For more information, see **[Quickstart: Create a public load balancer to load balance VMs using Azure PowerShell](quickstart-load-balancer-standard-public-powershell.md)**.
+
+* Use [Get-AzLoadBalancer](/powershell/module/az.network/get-azloadbalancer) and [Get-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/get-azloadbalancerfrontendipconfig) to store the regional load balancer information in variables.
+
+* Use [New-AzLoadBalancerBackendAddressConfig](/powershell/module/az.network/new-azloadbalancerbackendaddressconfig) to create the backend address pool configuration for the load balancer.
+
+* Use [Set-AzLoadBalancerBackendAddressPool](/powershell/module/az.network/set-azloadbalancerbackendaddresspool) to add the regional load balancer frontend to the cross-region backend pool.
+
+```azurepowershell-interactive
+## Place the region one load balancer configuration in a variable ##
+$region1 = @{
+ Name = 'myLoadBalancer-R1'
+ ResourceGroupName = 'CreatePubLBQS-rg-r1'
+}
+$R1 = Get-AzLoadBalancer @region1
+
+## Place the region two load balancer configuration in a variable ##
+$region2 = @{
+ Name = 'myLoadBalancer-R2'
+ ResourceGroupName = 'CreatePubLBQS-rg-r2'
+}
+$R2 = Get-AzLoadBalancer @region2
+
+## Place the region one load balancer front-end configuration in a variable ##
+$region1fe = @{
+ Name = 'MyFrontEnd-R1'
+ LoadBalancer = $R1
+}
+$R1FE = Get-AzLoadBalancerFrontendIpConfig @region1fe
+
+## Place the region two load balancer front-end configuration in a variable ##
+$region2fe = @{
+ Name = 'MyFrontEnd-R2'
+ LoadBalancer = $R2
+}
+$R2FE = Get-AzLoadBalancerFrontendIpConfig @region2fe
+
+## Create the cross-region backend address pool configuration for region 1 ##
+$region1ap = @{
+ Name = 'MyBackendPoolConfig-R1'
+ LoadBalancerFrontendIPConfigurationId = $R1FE.Id
+}
+$beaddressconfigR1 = New-AzLoadBalancerBackendAddressConfig @region1ap
+
+## Create the cross-region backend address pool configuration for region 2 ##
+$region2ap = @{
+ Name = 'MyBackendPoolConfig-R2'
+ LoadBalancerFrontendIPConfigurationId = $R2FE.Id
+}
+$beaddressconfigR2 = New-AzLoadBalancerBackendAddressConfig @region2ap
+
+## Apply the backend address pool configuration for the cross-region load balancer ##
+$bepoolcr = @{
+ ResourceGroupName = 'myResourceGroupLB-CR'
+ LoadBalancerName = 'myLoadBalancer-CR'
+ Name = 'myBackEndPool-CR'
+ LoadBalancerBackendAddress = $beaddressconfigR1,$beaddressconfigR2
+}
+Set-AzLoadBalancerBackendAddressPool @bepoolcr
+
+```
+
+## Test the load balancer
+
+In this section, you'll test the cross-region load balancer. You'll connect to the public IP address in a web browser. You'll stop the virtual machines in one of the regional load balancer backend pools and observe the failover.
+
+1. Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to get the public IP address of the load balancer:
+
+```azurepowershell-interactive
+$ip = @{
+ Name = 'myPublicIP-CR'
+ ResourceGroupName = 'myResourceGroupLB-CR'
+}
+Get-AzPublicIpAddress @ip | Select-Object IpAddress
+
+```
+2. Copy the public IP address, and then paste it into the address bar of your browser. The default IIS web server page is displayed in the browser.
+
+3. Stop the virtual machines in the backend pool of one of the regional load balancers.
+
+4. Refresh the web browser and observe the failover of the connection to the other regional load balancer.
+
+## Clean up resources
+
+When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, load balancer, and the remaining resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name 'myResourceGroupLB-CR'
+```
+
+## Next steps
+
+In this tutorial, you:
+
+* Created a global IP address.
+* Created a cross-region load balancer.
+* Created a load-balancing rule.
+* Added regional load balancers to the backend pool of the cross-region load balancer.
+* Tested the load balancer.
++
+Advance to the next article to learn how to load balance VMs across availability zones:
+> [!div class="nextstepaction"]
+> [Load balance VMs across availability zones](tutorial-load-balancer-standard-public-zone-redundant-portal.md)
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/connect-virtual-network-vnet-isolated-environment-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md
@@ -88,20 +88,23 @@ To access on-premises systems and data sources that don't have ISE connectors, a
## ISE SKUs
-When you create your ISE, you can select the Developer SKU or Premium SKU. Here are the differences between these SKUs:
+When you create your ISE, you can select the Developer SKU or Premium SKU. This SKU option is available only at ISE creation and can't be changed later. Here are the differences between these SKUs:
* **Developer**
- Provides a lower-cost ISE that you can use for experimentation, development, and testing, but not for production or performance testing. The Developer SKU includes built-in triggers and actions, Standard connectors, Enterprise connectors, and a single [Free tier](../logic-apps/logic-apps-limits-and-config.md#artifact-number-limits) integration account for a fixed monthly price. However, this SKU doesn't include any service-level agreement (SLA), options for scaling up capacity, or redundancy during recycling, which means that you might experience delays or downtime.
+ Provides a lower-cost ISE that you can use for exploration, experiments, development, and testing, but not for production or performance testing. The Developer SKU includes built-in triggers and actions, Standard connectors, Enterprise connectors, and a single [Free tier](../logic-apps/logic-apps-limits-and-config.md#artifact-number-limits) integration account for a [fixed monthly price](https://azure.microsoft.com/pricing/details/logic-apps).
-* **Premium**
+ > [!IMPORTANT]
+ > This SKU has no service-level agreement (SLA), scale up capability,
+ > or redundancy during recycling, which means that you might experience delays or downtime. Backend updates might intermittently interrupt service.
- Provides an ISE that you can use for production and includes SLA support, built-in triggers and actions, Standard connectors, Enterprise connectors, a single [Standard tier](../logic-apps/logic-apps-limits-and-config.md#artifact-number-limits) integration account, options for scaling up capacity, and redundancy during recycling for a fixed monthly price.
+ For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). To learn how billing works for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#fixed-pricing).
-> [!IMPORTANT]
-> The SKU option is available only at ISE creation and can't be changed later.
+* **Premium**
+
+ Provides an ISE that you can use for production and performance testing. The Premium SKU includes SLA support, built-in triggers and actions, Standard connectors, Enterprise connectors, a single [Standard tier](../logic-apps/logic-apps-limits-and-config.md#artifact-number-limits) integration account, scale up capability, and redundancy during recycling for a [fixed monthly price](https://azure.microsoft.com/pricing/details/logic-apps).
-For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). To learn how pricing and billing work for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#fixed-pricing).
+ For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). To learn how billing works for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#fixed-pricing).
<a name="endpoint-access"></a>
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
@@ -189,21 +189,20 @@ For more information about your logic app resource definition, see [Overview: Au
### Integration service environment (ISE)
-Here are the throughput limits for the [Premium ISE SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level):
+* [Developer ISE SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level): Provides up to 500 executions per minute, but note these considerations:
-| Name | Limit | Notes |
-||-|-|
-| Base unit execution limit | System-throttled when infrastructure capacity reaches 80% | Provides ~4,000 action executions per minute, which is ~160 million action executions per month | |
-| Scale unit execution limit | System-throttled when infrastructure capacity reaches 80% | Each scale unit can provide ~2,000 additional action executions per minute, which is ~80 million more action executions per month | |
-| Maximum scale units that you can add | 10 | |
-||||
+ * Make sure that you use this SKU only for exploration, experiments, development, or testing - not for production or performance testing. This SKU has no service-level agreement (SLA), scale up capability, or redundancy during recycling, which means that you might experience delays or downtime.
-To go above these limits in normal processing, or run load testing that might go above these limits, [contact the Logic Apps team](mailto://logicappsemail@microsoft.com) for help with your requirements.
+ * Backend updates might intermittently interrupt service.
-> [!NOTE]
-> The [Developer ISE SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level)
-> has no published limits, no capabilities for scaling up, and no service-level agreement (SLA). Use this SKU
-> only for experimenting, development, and testing, not production or performance testing.
+* [Premium ISE SKU](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level): The following table describes this SKU's throughput limits. To exceed these limits in normal processing, or to run load testing that might go above them, [contact the Logic Apps team](mailto:logicappsemail@microsoft.com) for help with your requirements.
+
+ | Name | Limit | Notes |
+ ||-|-|
+ | Base unit execution limit | System-throttled when infrastructure capacity reaches 80% | Provides ~4,000 action executions per minute, which is ~160 million action executions per month |
+ | Scale unit execution limit | System-throttled when infrastructure capacity reaches 80% | Each scale unit can provide ~2,000 additional action executions per minute, which is ~80 million more action executions per month |
+ | Maximum scale units that you can add | 10 | |
+ ||||
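The table's published figures combine into a simple back-of-the-envelope ceiling. The sketch below is a hypothetical helper (not part of any Azure SDK) that computes the approximate per-minute maximum for a given number of scale units:

```python
BASE_PER_MIN = 4_000     # base unit: ~4,000 action executions per minute
SCALE_PER_MIN = 2_000    # each scale unit adds ~2,000 executions per minute
MAX_SCALE_UNITS = 10     # maximum scale units that you can add

def max_executions_per_minute(scale_units: int) -> int:
    """Approximate throughput ceiling, per the published Premium SKU figures."""
    if not 0 <= scale_units <= MAX_SCALE_UNITS:
        raise ValueError(f"scale units must be between 0 and {MAX_SCALE_UNITS}")
    return BASE_PER_MIN + scale_units * SCALE_PER_MIN
```

With all 10 scale units, the approximate ceiling is 24,000 action executions per minute; actual throttling is driven by infrastructure capacity, not these nominal numbers.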
<a name="gateway-limits"></a>
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-pricing.md
@@ -67,8 +67,8 @@ A fixed pricing model applies to logic apps that run in an [*integration service
| ISE SKU | Description |
||-|
-| **Premium** | The base unit has fixed capacity and is [billed at an hourly rate for the Premium SKU](https://azure.microsoft.com/pricing/details/logic-apps). If you need more throughput, you can [add more scale units](../logic-apps/ise-manage-integration-service-environment.md#add-capacity) when you create your ISE or afterwards. Each scale unit is billed at an [hourly rate that's roughly half the base unit rate](https://azure.microsoft.com/pricing/details/logic-apps). <p><p>For limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). |
-| **Developer** | The base unit has fixed capacity and is [billed at an hourly rate for the Developer SKU](https://azure.microsoft.com/pricing/details/logic-apps). This SKU doesn't have scale up capability, a service-level agreement (SLA), or published limits. Use this SKU only for exploration, experiments, development, and testing, not production or performance testing. |
+| **Premium** | The base unit has [fixed capacity](logic-apps-limits-and-config.md#integration-service-environment-ise) and is [billed at an hourly rate for the Premium SKU](https://azure.microsoft.com/pricing/details/logic-apps). If you need more throughput, you can [add more scale units](../logic-apps/ise-manage-integration-service-environment.md#add-capacity) when you create your ISE or afterwards. Each scale unit is billed at an [hourly rate that's roughly half the base unit rate](https://azure.microsoft.com/pricing/details/logic-apps). <p><p>For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). |
+| **Developer** | The base unit has [fixed capacity](logic-apps-limits-and-config.md#integration-service-environment-ise) and is [billed at an hourly rate for the Developer SKU](https://azure.microsoft.com/pricing/details/logic-apps). However, this SKU has no service-level agreement (SLA), scale up capability, or redundancy during recycling, which means that you might experience delays or downtime. Backend updates might intermittently interrupt service. <p><p>**Important**: Make sure that you use this SKU only for exploration, experiments, development, and testing - not for production or performance testing. <p><p>For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). |
|||

### Included at no extra cost
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-securing-a-logic-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-securing-a-logic-app.md
@@ -5,7 +5,7 @@
ms.suite: integration Previously updated : 01/20/2021 Last updated : 02/12/2021 # Secure access and data in Azure Logic Apps
@@ -65,7 +65,7 @@ Each URL contains the `sp`, `sv`, and `sig` query parameter as described in this
| Query parameter | Description | |--|-|
-| `sp` | Specifies permissions for the permitted HTTP methods to use. |
+| `sp` | Specifies permissions for the allowed HTTP methods to use. |
| `sv` | Specifies the SAS version to use for generating the signature. |
| `sig` | Specifies the signature to use for authenticating access to the trigger. This signature is generated by using the SHA256 algorithm with a secret access key on all the URL paths and properties. Never exposed or published, this key is kept encrypted and stored with the logic app. Your logic app authorizes only those triggers that contain a valid signature created with the secret key. |
|||
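One common way to realize "SHA256 with a secret access key" is a keyed HMAC over the signed content, base64url-encoded for use in a URL. The sketch below illustrates only that general pattern; the Logic Apps service's actual canonicalization, key handling, and encoding are internal and not documented here:

```python
import base64
import hashlib
import hmac

def sign(secret_key: bytes, content: str) -> str:
    """HMAC-SHA256 the content with a secret key, then base64url-encode.
    Generic illustration only; not the service's real algorithm."""
    digest = hmac.new(secret_key, content.encode("utf-8"), hashlib.sha256).digest()
    # Strip base64 padding so the value is URL-safe as a query parameter
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Because the key never leaves the signer, a recipient holding the same key can recompute the signature over the received content and compare, which is what makes a forged `sig` value detectable.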
@@ -118,11 +118,11 @@ In the body, include the `KeyType` property as either `Primary` or `Secondary`.
### Enable Azure Active Directory Open Authentication (Azure AD OAuth)
-For inbound calls to an endpoint that's created by a request-based trigger, you can enable [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml) by defining or adding an authorization policy for your logic app. This way, inbound calls use OAuth [access tokens](../active-directory/develop/access-tokens.md) for authorization.
+For inbound calls to an endpoint that's created by a request-based trigger, you can enable [Azure AD OAuth](../active-directory/develop/index.yml) by defining or adding an authorization policy for your logic app. This way, inbound calls use OAuth [access tokens](../active-directory/develop/access-tokens.md) for authorization.
When your logic app receives an inbound request that includes an OAuth access token, the Azure Logic Apps service compares the token's claims against the claims specified by each authorization policy. If a match exists between the token's claims and all the claims in at least one policy, authorization succeeds for the inbound request. The token can have more claims than the number specified by the authorization policy.
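The matching rule described above (authorization succeeds when every claim of at least one policy is matched by the token, and the token may carry extra claims) can be modeled in a few lines. This is an illustrative sketch, not the Logic Apps service's implementation:

```python
def policy_authorizes(token_claims: dict, policies: list) -> bool:
    """Return True if all claims of at least one policy match the token.
    The token may carry more claims than a policy specifies.
    Illustrative model only; not the actual service code."""
    return any(
        all(token_claims.get(name) == value for name, value in policy.items())
        for policy in policies
    )
```

For example, a token with claims `{iss, aud, extra}` satisfies a policy requiring only `{iss, aud}`, but a token missing or mismatching any required claim fails every policy and the request is rejected.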
-Before you enable Azure AD OAuth, review these considerations:
+#### Considerations before you enable Azure AD OAuth
* An inbound call to the request endpoint can use only one authorization scheme, either Azure AD OAuth or [Shared Access Signature (SAS)](#sas). Although using one scheme doesn't disable the other scheme, using both schemes at the same time causes an error because the Logic Apps service doesn't know which scheme to choose.
@@ -175,11 +175,15 @@ Before you enable Azure AD OAuth, review these considerations:
} ```
+#### Enable Azure AD OAuth for your logic app
+
+Follow these steps for either the Azure portal or your Azure Resource Manager template:
+ <a name="define-authorization-policy-portal"></a>
-#### Define authorization policy in Azure portal
+#### [Portal](#tab/azure-portal)
-To enable Azure AD OAuth for your logic app in the Azure portal, follow these steps to add one or more authorization policies to your logic app:
+In the [Azure portal](https://portal.azure.com), add one or more authorization policies to your logic app:
1. In the [Azure portal](https://portal.microsoft.com), find and open your logic app in the Logic App Designer.
@@ -211,9 +215,9 @@ To enable Azure AD OAuth for your logic app in the Azure portal, follow these st
<a name="define-authorization-policy-template"></a>
-#### Define authorization policy in Azure Resource Manager template
+#### [Resource Manager Template](#tab/azure-resource-manager)
-To enable Azure AD OAuth in the ARM template for deploying your logic app, follow these steps and the syntax below:
+In your ARM template, define an authorization policy by following the steps and syntax below:
1. In the `properties` section for your [logic app's resource definition](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#logic-app-resource-definition), add an `accessControl` object, if none exists, that contains a `triggers` object.
@@ -266,6 +270,8 @@ Here's the syntax to follow:
], ``` ++ <a name="include-auth-header"></a> #### Include 'Authorization' header in request trigger outputs
@@ -305,11 +311,13 @@ Along with Shared Access Signature (SAS), you might want to specifically limit t
Regardless of any IP addresses that you specify, you can still run a logic app that has a request-based trigger by using the [Logic Apps REST API: Workflow Triggers - Run](/rest/api/logic/workflowtriggers/run) request or by using API Management. However, this scenario still requires [authentication](../active-directory/develop/authentication-vs-authorization.md) against the Azure REST API. All events appear in the Azure Audit Log. Make sure that you set access control policies accordingly.
+To restrict the inbound IP addresses for your logic app, follow these steps for either the Azure portal or your Azure Resource Manager template:
+ <a name="restrict-inbound-ip-portal"></a>
-#### Restrict inbound IP ranges in Azure portal
+#### [Portal](#tab/azure-portal)
-When you use the portal to restrict inbound IP addresses for your logic app, these restrictions affect both triggers *and* actions, despite the description in the portal under **Allowed inbound IP addresses**. To set up restrictions on triggers separately from actions, use the [`accessControl` object in your logic app's Azure Resource Manager template](#restrict-inbound-ip-template) or the [Logic Apps REST API: Workflow - Create Or Update operation](/rest/api/logic/workflows/createorupdate).
+In the [Azure portal](https://portal.azure.com), this filter affects both triggers *and* actions, contrary to the description in the portal under **Allowed inbound IP addresses**. To set up this filter separately for triggers and for actions, use the `accessControl` object in an Azure Resource Manager template for your logic app or the [Logic Apps REST API: Workflow - Create Or Update operation](/rest/api/logic/workflows/createorupdate).
1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
@@ -318,28 +326,28 @@ When you use the portal to restrict inbound IP addresses for your logic app, the
1. In the **Access control configuration** section, under **Allowed inbound IP addresses**, choose the path for your scenario: * To make your logic app callable only as a nested logic app by using the built-in [Azure Logic Apps action](../logic-apps/logic-apps-http-endpoint.md), select **Only other Logic Apps**, which works *only* when you use the **Azure Logic Apps** action to call the nested logic app.
-
+ This option writes an empty array to your logic app resource and requires that only calls from parent logic apps that use the built-in **Azure Logic Apps** action can trigger the nested logic app. * To make your logic app callable only as a nested app by using the HTTP action, select **Specific IP ranges**, *not* **Only other Logic Apps**. When the **IP ranges for triggers** box appears, enter the parent logic app's [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#outbound). A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*.
-
+ > [!NOTE] > If you use the **Only other Logic Apps** option and the HTTP action to call your nested logic app, > the call is blocked, and you get a "401 Unauthorized" error.
-
+ * For scenarios where you want to restrict inbound calls from other IPs, when the **IP ranges for triggers** box appears, specify the IP address ranges that the trigger accepts. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*. 1. Optionally, under **Restrict calls to get input and output messages from run history to the provided IP addresses**, you can specify the IP address ranges for inbound calls that can access input and output messages in run history. <a name="restrict-inbound-ip-template"></a>
-#### Restrict inbound IP ranges in Azure Resource Manager template
+#### [Resource Manager Template](#tab/azure-resource-manager)
-If you [automate deployment for logic apps by using Resource Manager templates](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), you can specify the permitted inbound IP address ranges in your logic app's resource definition by using the `accessControl` section. In this section, use the `triggers`, `actions`, and the optional `contents` sections as appropriate by including the `allowedCallerIpAddresses` section with the `addressRange` property and set the property value to the permitted IP range in *x.x.x.x/x* or *x.x.x.x-x.x.x.x* format.
+In your ARM template, specify the allowed inbound IP address ranges in your logic app's resource definition by using the `accessControl` section. In this section, use the `triggers`, `actions`, and the optional `contents` sections as appropriate by including the `allowedCallerIpAddresses` section with the `addressRange` property and set the property value to the allowed IP range in *x.x.x.x/x* or *x.x.x.x-x.x.x.x* format.
* If your nested logic app uses the **Only other Logic Apps** option, which permits inbound calls only from other logic apps that use the Azure Logic Apps action, set the `addressRange` property to an empty array (**[]**).
-* If your nested logic app uses the **Specific IP ranges** option for other inbound calls, such as other logic apps that use the HTTP action, set the `addressRange` property to the permitted IP range.
+* If your nested logic app uses the **Specific IP ranges** option for other inbound calls, such as other logic apps that use the HTTP action, set the `addressRange` property to the allowed IP range.
This example shows a resource definition for a nested logic app that permits inbound calls only from logic apps that use the built-in Azure Logic Apps action:
@@ -435,6 +443,8 @@ This example shows a resource definition for a nested logic app that permits inb
} ``` ++ <a name="secure-operations"></a> ## Access to logic app operations
@@ -469,11 +479,15 @@ To control access to the inputs and outputs in your logic app's run history, you
### Restrict access by IP address range
-You can limit access to the inputs and outputs in your logic app's run history so that only requests from specific IP address ranges can view that data. For example, to block anyone from accessing inputs and outputs, specify an IP address range such as `0.0.0.0-0.0.0.0`. Only a person with administrator permissions can remove this restriction, which provides the possibility for "just-in-time" access to your logic app's data. You can specify the IP ranges to restrict either by using the Azure portal or in an Azure Resource Manager template that you use for logic app deployment.
+You can limit access to the inputs and outputs in your logic app's run history so that only requests from specific IP address ranges can view that data.
-#### Restrict IP ranges in Azure portal
+For example, to block anyone from accessing inputs and outputs, specify an IP address range such as `0.0.0.0-0.0.0.0`. Only a person with administrator permissions can remove this restriction, which provides the possibility for "just-in-time" access to your logic app's data.
-1. In the Azure portal, open your logic app in the Logic App Designer.
+To specify the allowed IP ranges, follow these steps for either the Azure portal or your Azure Resource Manager template:
+
+#### [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
1. On your logic app's menu, under **Settings**, select **Workflow settings**.
@@ -483,9 +497,9 @@ You can limit access to the inputs and outputs in your logic app's run history s
A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*
-#### Restrict IP ranges in Azure Resource Manager template
+#### [Resource Manager Template](#tab/azure-resource-manager)
-If you [automate deployment for logic apps by using Resource Manager templates](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), you can specify the IP ranges by using the `accessControl` section with the `contents` section in your logic app's resource definition, for example:
+In your ARM template, specify the IP ranges by using the `accessControl` section with the `contents` section in your logic app's resource definition, for example:
``` json {
@@ -524,11 +538,41 @@ If you [automate deployment for logic apps by using Resource Manager templates](
} ``` ++ <a name="obfuscate"></a> ### Secure data in run history by using obfuscation
-Many triggers and actions have settings to secure inputs, outputs, or both from a logic app's run history. Before using these settings to help you secure this data, [review these considerations](#obfuscation-considerations).
+Many triggers and actions have settings to secure inputs, outputs, or both from a logic app's run history. Before using these settings to help you secure this data, review these considerations:
+
+* When you obscure the inputs or outputs on a trigger or action, Logic Apps doesn't send the secured data to Azure Log Analytics. Also, you can't add [tracked properties](../logic-apps/monitor-logic-apps-log-analytics.md#extend-data) to that trigger or action for monitoring.
+
+* The [Logic Apps API for handling workflow history](/rest/api/logic/) doesn't return secured outputs.
+
+* To secure outputs from an action that obscures inputs or explicitly obscures outputs, manually turn on **Secure Outputs** in that action.
+
+* Make sure that you turn on **Secure Inputs** or **Secure Outputs** in downstream actions where you expect the run history to obscure that data.
+
+ **Secure Outputs setting**
+
+ When you manually turn on **Secure Outputs** in a trigger or action, Logic Apps hides these outputs in the run history. If a downstream action explicitly uses these secured outputs as inputs, Logic Apps hides this action's inputs in the run history, but *doesn't enable* the action's **Secure Inputs** setting.
+
+ ![Secured outputs as inputs and downstream impact on most actions](./media/logic-apps-securing-a-logic-app/secure-outputs-as-inputs-flow.png)
+
+ The Compose, Parse JSON, and Response actions have only the **Secure Inputs** setting. When turned on, the setting also hides these actions' outputs. If these actions explicitly use the upstream secured outputs as inputs, Logic Apps hides these actions' inputs and outputs, but *doesn't enable* these actions' **Secure Inputs** setting. If a downstream action explicitly uses the hidden outputs from the Compose, Parse JSON, or Response actions as inputs, Logic Apps *doesn't hide this downstream action's inputs or outputs*.
+
+ ![Secured outputs as inputs with downstream impact on specific actions](./media/logic-apps-securing-a-logic-app/secure-outputs-as-inputs-flow-special.png)
+
+ **Secure Inputs setting**
+
+ When you manually turn on **Secure Inputs** in a trigger or action, Logic Apps hides these inputs in the run history. If a downstream action explicitly uses the visible outputs from that trigger or action as inputs, Logic Apps hides this downstream action's inputs in the run history, but *doesn't enable* **Secure Inputs** in this action and doesn't hide this action's outputs.
+
+ ![Secured inputs and downstream impact on most actions](./media/logic-apps-securing-a-logic-app/secure-inputs-impact-on-downstream.png)
+
+ If the Compose, Parse JSON, and Response actions explicitly use the visible outputs from the trigger or action that has the secured inputs, Logic Apps hides these actions' inputs and outputs, but *doesn't enable* these actions' **Secure Inputs** setting. If a downstream action explicitly uses the hidden outputs from the Compose, Parse JSON, or Response actions as inputs, Logic Apps *doesn't hide this downstream action's inputs or outputs*.
+
+ ![Secured inputs and downstream impact on specific actions](./media/logic-apps-securing-a-logic-app/secure-inputs-flow-special.png)
#### Secure inputs and outputs in the designer
@@ -571,8 +615,6 @@ In the underlying trigger or action definition, add or update the `runtimeConfig
* `"inputs"`: Secures inputs in run history. * `"outputs"`: Secures outputs in run history.
-Here are some [considerations to review](#obfuscation-considerations) when you use these settings to help you secure this data.
- ```json "<trigger-or-action-name>": { "type": "<trigger-or-action-type>",
@@ -591,38 +633,6 @@ Here are some [considerations to review](#obfuscation-considerations) when you u
} ```
-<a name="obfuscation-considerations"></a>
-
-#### Considerations when securing inputs and outputs
-
-* When you obscure the inputs or outputs on a trigger or action, Logic Apps doesn't send the secured data to Azure Log Analytics. Also, you can't add [tracked properties](../logic-apps/monitor-logic-apps-log-analytics.md#extend-data) to that trigger or action for monitoring.
-
-* The [Logic Apps API for handling workflow history](/rest/api/logic/) doesn't return secured outputs.
-
-* To secure outputs from an action that obscures inputs or explicitly obscures outputs, manually turn on **Secure Outputs** in that action.
-
-* Make sure that you turn on **Secure Inputs** or **Secure Outputs** in downstream actions where you expect the run history to obscure that data.
-
- **Secure Outputs setting**
-
- When you manually turn on **Secure Outputs** in a trigger or action, Logic Apps hides these outputs in the run history. If a downstream action explicitly uses these secured outputs as inputs, Logic Apps hides this action's inputs in the run history, but *doesn't enable* the action's **Secure Inputs** setting.
-
- ![Secured outputs as inputs and downstream impact on most actions](./media/logic-apps-securing-a-logic-app/secure-outputs-as-inputs-flow.png)
-
- The Compose, Parse JSON, and Response actions has only the **Secure Inputs** setting. When turned on, the setting also hides these actions' outputs. If these actions explicitly use the upstream secured outputs as inputs, Logic Apps hides these actions' inputs and outputs, but *doesn't enable* these actions' **Secure Inputs** setting. If a downstream action explicitly uses the hidden outputs from the Compose, Parse JSON, or Response actions as inputs, Logic Apps *doesn't hide this downstream action's inputs or outputs*.
-
- ![Secured outputs as inputs with downstream impact on specific actions](./media/logic-apps-securing-a-logic-app/secure-outputs-as-inputs-flow-special.png)
-
- **Secure Inputs setting**
-
- When you manually turn on **Secure Inputs** in a trigger or action, Logic Apps hides these inputs in the run history. If a downstream action explicitly uses the visible outputs from that trigger or action as inputs, Logic Apps hides this downstream action's inputs in the run history, but *doesn't enable* **Secure Inputs** in this action and doesn't hide this action's outputs.
-
- ![Secured inputs and downstream impact on most actions](./media/logic-apps-securing-a-logic-app/secure-inputs-impact-on-downstream.png)
-
- If the Compose, Parse JSON, and Response actions explicitly use the visible outputs from the trigger or action that has the secured inputs, Logic Apps hides these actions' inputs and outputs, but *doesn't enable* these action's **Secure Inputs** setting. If a downstream action explicitly uses the hidden outputs from the Compose, Parse JSON, or Response actions as inputs, Logic Apps *doesn't hide this downstream action's inputs or outputs*.
-
- ![Secured inputs and downstream impact on specific actions](./media/logic-apps-securing-a-logic-app/secure-inputs-flow-special.png)
- <a name="secure-action-parameters"></a> ## Access to parameter inputs
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/azure-machine-learning-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
@@ -15,6 +15,35 @@ Last updated 09/10/2020
In this article, learn about Azure Machine Learning releases. For the full SDK reference content, visit the Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro?preserve-view=true&view=azure-ml-py) reference page. +
+## 2021-02-09
+
+### Azure Machine Learning SDK for Python v1.22.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed bug where an extra pip dependency was added to the conda yml file for vision models.
+ + **azureml-automl-runtime**
+ + Fixed a bug where classical forecasting models (for example, AutoArima) could receive training data in which rows with imputed target values were not present. This violated the data contract of these models.
+ + Fixed various bugs with lag-by-occurrence behavior in the time-series lagging operator. Previously, the lag-by-occurrence operation did not mark all imputed rows correctly and so would not always generate the correct occurrence lag values. Also fixed some compatibility issues between the lag operator and the rolling window operator with lag-by-occurrence behavior, which previously resulted in the rolling window operator dropping some rows from the training data that it should otherwise use.
+ + **azureml-core**
+ + Adding support for Token Authentication by audience.
+ + Add `process_count` to [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration?preserve-view=true&view=azure-ml-py) to support multi-process multi-node PyTorch jobs.
+ + **azureml-pipeline-steps**
+ + [CommandStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.commandstep?preserve-view=true&view=azure-ml-py) now GA and no longer experimental.
+ + [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig?preserve-view=true&view=azure-ml-py): add argument allowed_failed_count and allowed_failed_percent to check error threshold on mini batch level. Error threshold has 3 flavors now:
+ + error_threshold - the number of allowed failed mini batch items;
+ + allowed_failed_count - the number of allowed failed mini batches;
+ + allowed_failed_percent - the percent of allowed failed mini batches.
+
+ A job will stop if it exceeds any of them. error_threshold is required to keep backward compatibility. Set the value to -1 to ignore it.
+ + Fixed whitespace handling in AutoMLStep name.
+ + **azureml-train-core**
+ + HyperDrive runs invoked from a ScriptRun will now be considered a child run.
+ + Add `process_count` to [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration?preserve-view=true&view=azure-ml-py) to support multi-process multi-node PyTorch jobs.
+ + **azureml-widgets**
+ + Add widget ParallelRunStepDetails to visualize status of a ParallelRunStep.
+ + Allows hyperdrive users to see an additional axis on the parallel coordinates chart that shows the metric value corresponding to each set of hyperparameters for each child run.
++ ## 2021-01-31 ### Azure Machine Learning Studio Notebooks Experience (January Update) + **New features**
@@ -30,6 +59,7 @@ In this article, learn about Azure Machine Learning releases. For the full SDK
+ Improved performance + Improved speed and kernel reliability + ## 2021-01-25 ### Azure Machine Learning SDK for Python v1.21.0
@@ -140,7 +170,7 @@ In this article, learn about Azure Machine Learning releases. For the full SDK
+ HyperDriveRun.get_children_sorted_by_primary_metric() should complete faster now + Improved error handling in HyperDrive SDK. + Deprecated all estimator classes in favor of using ScriptRunConfig to configure experiment runs. Deprecated classes include:
- + MMLBaseEstimator
+ + MMLBase
+ Estimator + PyTorch + TensorFlow
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-forecast.md
@@ -189,6 +189,14 @@ automl_config = AutoMLConfig(task='forecasting',
**forecasting_parameters) ```
+The amount of data required to successfully train a forecasting model with automated ML is influenced by the `forecast_horizon`, `n_cross_validations`, and `target_lags` or `target_rolling_window_size` values specified when you configure your `AutoMLConfig`.
+
+The following formula calculates the amount of historic data that would be needed to construct time series features.
+
+Minimum historic data required: (2x `forecast_horizon`) + #`n_cross_validations` + max(max(`target_lags`), `target_rolling_window_size`)
+
+An error is raised for any series in the dataset that doesn't meet the required amount of historic data for the settings specified.
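As a rough self-check, the formula above can be sketched in plain Python (a hypothetical helper for illustration, not part of the AutoML SDK):

```python
def min_historic_rows(forecast_horizon, n_cross_validations,
                      target_lags, target_rolling_window_size):
    """Minimum rows needed per series, per the formula above."""
    # Longest lookback needed to build lag or rolling-window features.
    lookback = max(max(target_lags, default=0), target_rolling_window_size)
    return (2 * forecast_horizon) + n_cross_validations + lookback

# Horizon of 6, 3 cross-validation folds, lags [1, 2, 3], rolling window of 4:
min_historic_rows(6, 3, [1, 2, 3], 4)  # → 19
```

If a series in your dataset has fewer rows than this, the experiment fails for that series, so checking your shortest series against the formula before submitting a run can save a failed experiment.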
+ ### Featurization steps In every automated machine learning experiment, automatic scaling and normalization techniques are applied to your data by default. These techniques are types of **featurization** that help *certain* algorithms that are sensitive to features on different scales. Learn more about default featurization steps in [Featurization in AutoML](how-to-configure-auto-features.md#automatic-featurization)
@@ -366,8 +374,7 @@ day_datetime,store,week_of_year
Repeat the necessary steps to load this future data to a dataframe and then run `best_run.predict(test_data)` to predict future values. > [!NOTE]
-> Values cannot be predicted for number of periods greater than the `forecast_horizon`. The model must be re-trained with a larger horizon to predict future values beyond
-> the current horizon.
+> In-sample predictions are not supported for forecasting with automated ML when `target_lags` and/or `target_rolling_window_size` are enabled.
## Example notebooks
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
@@ -69,6 +69,19 @@ The [Azure CLI extension for machine learning](reference-azure-machine-learning-
* `--pe-vnet-name`: The existing virtual network to create the private endpoint in. * `--pe-subnet-name`: The name of the subnet to create the private endpoint in. The default value is `default`.
+These parameters are in addition to other required parameters for the create command. For example, the following command creates a new workspace in the West US region, using an existing resource group and VNet:
+
+```azurecli
+az ml workspace create -r myresourcegroup \
+ -l westus \
+ -n myworkspace \
+ --pe-name myprivateendpoint \
+ --pe-auto-approval \
+ --pe-resource-group myresourcegroup \
+ --pe-vnet-name myvnet \
+ --pe-subnet-name mysubnet
+```
+ # [Portal](#tab/azure-portal) The __Networking__ tab in Azure Machine Learning studio allows you to configure a private endpoint. However, it requires an existing virtual network. For more information, see [Create workspaces in the portal](how-to-manage-workspace.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-understand-automated-ml.md
@@ -229,10 +229,7 @@ In this example, note that the better model has a predicted vs. true line that i
While model evaluation metrics and charts are good for measuring the general quality of a model, inspecting which dataset features a model used to make its predictions is essential when practicing responsible AI. That's why automated ML provides a model interpretability dashboard to measure and report the relative contributions of dataset features.
-![Feature importances](./media/how-to-understand-automated-ml/how-to-feature-importance.gif)
-
To view the interpretability dashboard in the studio:
-
1. [Sign into the studio](https://ml.azure.com/) and navigate to your workspace
2. In the left menu, select **Experiments**
3. Select your experiment from the list of experiments
@@ -241,10 +238,11 @@ To view the interpretability dashboard in the studio:
6. In the **Explanations** tab, you may see an explanation was already created if the model was the best 7. To create a new explanation, select **Explain model** and select the remote compute with which to compute explanations
+[Learn more about model explanations in automated ML](how-to-machine-learning-interpretability-automl.md).
+ > [!NOTE] > The ForecastTCN model is not currently supported by automated ML explanations and other forecasting models may have limited access to interpretability tools. ## Next steps * Try the [automated machine learning model explanation sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model).
-* Learn more about [responsible AI offerings in automated ML](how-to-machine-learning-interpretability-automl.md).
* For automated ML specific questions, reach out to askautomatedml@microsoft.com.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-managed-identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-managed-identities.md
@@ -97,7 +97,7 @@ If you do not bring your own ACR, Azure Machine Learning service will create one
### Create compute with managed identity to access Docker images for training
-To access the workspace ACR, create machine learning compute cluster with system-assigned managed identity enabled. You can enable the identity from Azure portal or Studio when creating compute, or from Azure CLI using
+To access the workspace ACR, create a machine learning compute cluster with system-assigned managed identity enabled. You can enable the identity from the Azure portal or studio when creating the compute, or from the Azure CLI by using the following command. For more information, see [using managed identity with compute clusters](how-to-create-attach-compute-cluster.md#managed-identity).
# [Python](#tab/python)
@@ -166,7 +166,7 @@ env.python.user_managed_dependencies = True
### Build Azure Machine Learning managed environment into base image from private ACR for training or inference
-In this scenario, Azure Machine Learning service builds the training or inference environment on top of a base image you supply from a private ACR. Because the image build task happens on the workspace ACR using ACR Tasks, you must perform additional steps to allow access.
+In this scenario, Azure Machine Learning service builds the training or inference environment on top of a base image you supply from a private ACR. Because the image build task happens on the workspace ACR using ACR Tasks, you must perform more steps to allow access.
1. Create __user-assigned managed identity__ and grant the identity ACRPull access to the __private ACR__. 1. Grant the workspace __system-assigned managed identity__ a Managed Identity Operator role on the __user-assigned managed identity__ from the previous step. This role allows the workspace to assign the user-assigned managed identity to ACR Task for building the managed environment.
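The two steps above might look like the following with the Azure CLI. This is a sketch, not the documented procedure; the resource names are placeholders, and the `<...>` values must come from your own identity, ACR, and workspace resources.

```azurecli
# Step 1 (sketch): create a user-assigned identity and grant it AcrPull on the private ACR.
az identity create --resource-group myresourcegroup --name myacrpullidentity
az role assignment create --role AcrPull \
    --assignee-object-id <identity-principal-id> \
    --scope <private-acr-resource-id>

# Step 2 (sketch): grant the workspace's system-assigned identity the
# Managed Identity Operator role on the user-assigned identity.
az role assignment create --role "Managed Identity Operator" \
    --assignee-object-id <workspace-principal-id> \
    --scope <user-assigned-identity-resource-id>
```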
@@ -226,4 +226,4 @@ Once you've configured ACR without admin user as described earlier, you can acce
## Next steps
-* Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md).
+* Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-power-bi-custom-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-power-bi-custom-model.md
@@ -113,7 +113,7 @@ Create a new *code cell* in your notebook. Then copy the following code and past
import joblib from sklearn.linear_model import Ridge
-model = Ridge().fit(X,y)
+model = Ridge().fit(X_df,y_df)
joblib.dump(model, 'sklearn_regression_model.pkl') ```
@@ -281,10 +281,8 @@ We recommend that you test the web service to ensure it works as expected. To re
```python import json - input_payload = json.dumps({
- 'data': X_df[0:2].values.tolist(),
- 'method': 'predict' # If you have a classification model, you can get probabilities by changing this to 'predict_proba'.
+ 'data': X_df[0:2].values.tolist()
}) output = service.run(input_payload)
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-known-issues-limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-known-issues-limitations.md
@@ -0,0 +1,31 @@
+
+ Title: Known issues and limitations - Azure Database for PostgreSQL - Single Server and Flexible Server (Preview)
+description: Lists the known issues that customers should be aware of.
++++ Last updated : 02/05/2020+
+# Azure Database for PostgreSQL - Known issues and limitations
+
+This page provides a list of known issues in Azure Database for PostgreSQL that could impact your application. It also lists mitigations and recommendations to work around each issue.
+
+## Intelligent Performance - Query Store
+
+Applicable to Azure Database for PostgreSQL - Single Server.
+
+| Applicable | Cause | Remediation|
+| -- | | - |
+| PostgreSQL 9.6, 10, 11 | Turning on the server parameter `pg_qs.replace_parameter_placeholders` might lead to a server shutdown in some rare scenarios. | In the Azure portal, under the **Server Parameters** section, set the `pg_qs.replace_parameter_placeholders` parameter to `OFF` and save. |
+
+## Server Parameters
+
+Applicable to Azure Database for PostgreSQL - Single Server and Flexible Server
+
+| Applicable | Cause | Remediation|
+| -- | | - |
+| PostgreSQL 9.6, 10, 11 | Changing the server parameter `max_locks_per_transaction` to a higher value than what is [recommended](https://www.postgresql.org/docs/11/kernel-resources.html) could leave the server unable to come up after a restart. | Leave it at the default value (32 or 64), or change it to a reasonable value per the PostgreSQL [documentation](https://www.postgresql.org/docs/11/kernel-resources.html). <br> <br> On the service side, work is in progress to limit high values based on the SKU. |
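For a Single Server instance, the remediations above can also be applied from the Azure CLI instead of the portal. This is a sketch; the resource group and server names are placeholders for your own resources.

```azurecli
# Sketch: reset max_locks_per_transaction to a safe value on a Single Server instance.
az postgres server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name max_locks_per_transaction \
    --value 64
```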
+
+## Next steps
+- See Query Store [best practices](./concepts-query-store-best-practices.md)
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-version-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-version-policy.md
@@ -43,7 +43,7 @@ The table below provides the retirement details for PostgreSQL major versions. T
## Retired PostgreSQL engine versions not supported in Azure Database for PostgreSQL
-After the retirement date for each PostgreSQL database version, if you continue running the retired version, note the following restrictions:
+You may continue to run the retired version in Azure Database for PostgreSQL. However, please note the following restrictions after the retirement date for each PostgreSQL database version:
- As the community will not be releasing any further bug fixes or security fixes, Azure Database for PostgreSQL will not patch the retired database engine for any bugs or security issues or otherwise take security measures with regard to the retired database engine. You may experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components. - If any support issue you may experience relates to the PostgreSQL database, we may not be able to provide you with support. In such cases, you will have to upgrade your database in order for us to provide you with any support. - You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/manage-resources-created-move-process https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/manage-resources-created-move-process.md
@@ -23,10 +23,10 @@ Manually delete the move collection, and Site Recovery resources created for the
2. Check that the VM and all other source resources in the move collection have been moved/deleted. This ensures that there are no pending resources using them. 2. Delete these resources.
- - The move collection name is ```movecollection-<sourceregion>-<target-region>```.
+ - The move collection name is ```movecollection-<sourceregion>-<target-region>-<metadata-region>```.
- The cache storage account name is ```resmovecache<guid>``` - The vault name is ```ResourceMove-<sourceregion>-<target-region>-GUID```. ## Next steps
-Try [moving a VM](tutorial-move-region-virtual-machines.md) to another region with Resource Mover.
+Try [moving a VM](tutorial-move-region-virtual-machines.md) to another region with Resource Mover.
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/modify-target-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/modify-target-settings.md
@@ -1,30 +1,30 @@
Title: Modify target settings when moving Azure VMs between regions with Azure Resource Mover
-description: Learn how to modify target settings when moving Azure VMs between regions with Azure Resource Mover.
+ Title: Modify destination settings when moving Azure VMs between regions with Azure Resource Mover
+description: Learn how to modify destination settings when moving Azure VMs between regions with Azure Resource Mover.
Last updated 02/08/2021
-#Customer intent: As an Azure admin, I want modify target settings when moving resources to another region.
+#Customer intent: As an Azure admin, I want to modify destination settings when moving resources to another region.
-# Modify target settings
+# Modify destination settings
-This article describes how to modify target settings, when moving resources between Azure regions with [Azure Resource Mover](overview.md).
+This article describes how to modify destination settings, when moving resources between Azure regions with [Azure Resource Mover](overview.md).
## Modify VM settings
-When moving Azure VMs and associated resources, you can modify the target settings.
+When moving Azure VMs and associated resources, you can modify the destination settings.
-- We recommend that you only change target settings after the move collection is validated.-- We recommend that you modify settings before preparing the resources, because some target properties might be unavailable for edit after prepare is complete.
+- We recommend that you only change destination settings after the move collection is validated.
+- We recommend that you modify settings before preparing the resources, because some destination properties might be unavailable for edit after prepare is complete.
However:-- If you're moving the source resource, you can usually modify target settings until you start the initiate move process.-- If you assign an existing resource in the source region, you can modify target settings until the move commit is complete.
+- If you're moving the source resource, you can usually modify destination settings until you start the initiate move process.
+- If you assign an existing resource in the source region, you can modify destination settings until the move commit is complete.
### Settings you can modify
@@ -32,63 +32,63 @@ Configuration settings you can modify are summarized in the table.
**Resource** | **Options** | |
-**VM name** | Options:<br/><br/> - Create a new VM with the same name in the target region.<br/><br/> - Create a new VM with a different name in the target region.<br/><br/> - Use an existing VM in the target region.<br/><br/> If you create a new VM, with the exception of the settings you modify, the new target VM is assigned the same settings as the source.
-**VM availability zone** | The availability zone in which the target VM will be placed. Select **Not applicable** if you donΓÇÖt want to change the source settings, or if you donΓÇÖt want to place the VM in an availability zone.
-**VM SKU** | The [VM type](https://azure.microsoft.com/pricing/details/virtual-machines/series/) (available in the target region) that will be used for the target VM.<br/><br/> The selected target VM shouldn't be smaller than the source VM.
-**VM availability set | The availability set in which the target VM will be placed. Select **Not applicable** you donΓÇÖt want to change the source settings, or if you donΓÇÖt want to place the VM in an availability set.
+**VM name** | Options:<br/><br/> - Create a new VM with the same name in the destination region.<br/><br/> - Create a new VM with a different name in the destination region.<br/><br/> - Use an existing VM in the destination region.<br/><br/> If you create a new VM, with the exception of the settings you modify, the new destination VM is assigned the same settings as the source.
+**VM availability zone** | The availability zone in which the destination VM will be placed. Select **Not applicable** if you don't want to change the source settings, or if you don't want to place the VM in an availability zone.
+**VM SKU** | The [VM type](https://azure.microsoft.com/pricing/details/virtual-machines/series/) (available in the destination region) that will be used for the destination VM.<br/><br/> The selected destination VM shouldn't be smaller than the source VM.
+**VM availability set** | The availability set in which the destination VM will be placed. Select **Not applicable** if you don't want to change the source settings, or if you don't want to place the VM in an availability set.
**VM key vault** | The associated key vault when you enable Azure disk encryption on a VM.
**Disk encryption set** | The associated disk encryption set if the VM uses a customer-managed key for server-side encryption.
-**Resource group** | The resource group in which the target VM will be placed.
-**Networking resources** | Options for network interfaces, virtual networks (VNets/), and network security groups/network interfaces:<br/><br/> - Create a new resource with the same name in the target region.<br/><br/> - Create a new resource with a different name in the target region.<br/><br/> - Use an existing networking resource in the target region.<br/><br/> If you create a new target resource, with the exception of the settings you modify, it's assigned the same settings as the source resource.
+**Resource group** | The resource group in which the destination VM will be placed.
+**Networking resources** | Options for network interfaces, virtual networks (VNets), and network security groups/network interfaces:<br/><br/> - Create a new resource with the same name in the destination region.<br/><br/> - Create a new resource with a different name in the destination region.<br/><br/> - Use an existing networking resource in the destination region.<br/><br/> If you create a new destination resource, with the exception of the settings you modify, it's assigned the same settings as the source resource.
**Public IP address name, SKU, and zone** | Specifies the name, [SKU](../virtual-network/public-ip-addresses.md#sku), and [zone](../virtual-network/public-ip-addresses.md#standard) for standard public IP addresses.<br/><br/> If you want it to be zone redundant, specify **Zone redundant**.
**Load balancer name, SKU, and zone** | Specifies the name, SKU (Basic or Standard), and zone for the load balancer.<br/><br/> We recommend using the Standard SKU.<br/><br/> If you want it to be zone redundant, specify **Zone redundant**.
-**Resource dependencies** | Options for each dependency:<br/><br/>- The resource uses source dependent resources that will move to the target region.<br/><br/> - The resource uses different dependent resources located in the target region. In this case, you can choose from any similar resources in the target region.
+**Resource dependencies** | Options for each dependency:<br/><br/>- The resource uses source dependent resources that will move to the destination region.<br/><br/> - The resource uses different dependent resources located in the destination region. In this case, you can choose from any similar resources in the destination region.
-### Edit VM target settings
+### Edit VM destination settings
-If you don't want to dependent resources from the source region to the target, you have a couple of other options:
+If you don't want to move dependent resources from the source region to the destination, you have a couple of other options:
-- Create a new resource in the target region. Unless you specify different settings, the new resource will have the same settings as the source resource.-- Use an existing resource in the target region.
+- Create a new resource in the destination region. Unless you specify different settings, the new resource will have the same settings as the source resource.
+- Use an existing resource in the destination region.
-Exact behavior depends on the resource type. [Learn more](modify-target-settings.md) about modifying target settings.
+Exact behavior depends on the resource type. [Learn more](modify-target-settings.md) about modifying destination settings.
-You modify the target settings for a resource using the **Target configuration** entry in the resource move collection.
+You modify the destination settings for a resource using the **Destination configuration** entry in the resource move collection.
To modify a setting:
-1. In the **Across regions** page > **Target configuration** column, click the link for the resource entry.
-2. In **Configuration settings**, you can create a new VM in the target region.
-3. Assign a new availability zone, availability set, or SKU to the target VM. **Availability zone** and **SKU**.
+1. In the **Across regions** page > **Destination configuration** column, click the link for the resource entry.
+2. In **Configuration settings**, you can create a new VM in the destination region.
+3. Assign a new availability zone, availability set, or SKU to the destination VM.
Changes are only made for the resource you're editing. You need to update any dependent resource separately.

## Modify SQL settings
-When moving Azure SQL Database resources, you can modify the target settings for the move.
+When moving Azure SQL Database resources, you can modify the destination settings for the move.
- For SQL database:
- - We recommend that you modify target configuration settings before you prepare them for move.
- - You can modify the settings for the target database, and zone redundancy for the database.
+ - We recommend that you modify destination configuration settings before you prepare them for move.
+ - You can modify the settings for the destination database, and zone redundancy for the database.
- For elastic pools:
- - You can modify the target configuration anytime before initiating the move.
- - You can modify the target elastic pool, and zone redundancy for the pool.
+ - You can modify the destination configuration anytime before initiating the move.
+ - You can modify the destination elastic pool, and zone redundancy for the pool.
### SQL settings you can modify

**Setting** | **SQL database** | **Elastic pool**
 | |
-**Name** | Create a new database with the same name in the target region.<br/><br/> Create a new database with a different name in the target region.<br/><br/> Use an existing database in the target region. | Create a new elastic pool with the same name in the target region.<br/><br/> Create a new elastic pool with a different name in the target region.<br/><br/> Use an existing elastic pool in the target region.
+**Name** | Create a new database with the same name in the destination region.<br/><br/> Create a new database with a different name in the destination region.<br/><br/> Use an existing database in the destination region. | Create a new elastic pool with the same name in the destination region.<br/><br/> Create a new elastic pool with a different name in the destination region.<br/><br/> Use an existing elastic pool in the destination region.
**Zone redundancy** | To move from a region that supports zone redundancy to a region that doesn't, type **Disable** in the zone setting.<br/><br/> To move from a region that doesn't support zone redundancy to a region that does, type **Enable** in the zone setting. | To move from a region that supports zone redundancy to a region that doesn't, type **Disable** in the zone setting.<br/><br/> To move from a region that doesn't support zone redundancy to a region that does, type **Enable** in the zone setting.
-### Edit SQL target settings
+### Edit SQL destination settings
-You modify the target settings for a Azure SQL Database resource as follows:
+You modify the destination settings for an Azure SQL Database resource as follows:
-1. In **Across regions**, for the resource you want to modify, click the **Target configuration** entry.
-2. In **Configuration settings**, specify the target settings summarized in the table above.
+1. In **Across regions**, for the resource you want to modify, click the **Destination configuration** entry.
+2. In **Configuration settings**, specify the destination settings summarized in the table above.
## Next steps
-[Move an Azure VM](tutorial-move-region-virtual-machines.md) to another region.
+[Move an Azure VM](tutorial-move-region-virtual-machines.md) to another region.
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/tutorial-move-region-encrypted-virtual-machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
@@ -5,7 +5,7 @@
Previously updated : 02/04/2021 Last updated : 02/10/2021 #Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
@@ -51,26 +51,49 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
**Target region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.
-## Verify key vault permissions (Azure Disk Encryption)
+## Verify user permissions on key vault for VMs using Azure Disk Encryption (ADE)
-If you're moving VMs that have Azure disk encryption enabled, in the key vaults in the source and destination regions, verify/set permissions to ensure that moving encrypted VMs will work as expected.
+If you're moving VMs that have Azure Disk Encryption enabled, you need to run the script described [below](#copy-the-keys-to-the-destination-key-vault), and the user who runs it needs appropriate permissions. Refer to the tables below for the permissions needed. To change permissions, navigate to the key vault in the Azure portal, and under **Settings**, select **Access policies**.
-1. In the Azure portal, open the key vault in the source region.
-2. Under **Settings**, select **Access policies**.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/key-vault-access-policies.png" alt-text="Button to open key vault access policies." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/key-vault-access-policies.png":::
+If there are no user permissions, select **Add Access Policy**, and specify the permissions. If the user account already has a policy, under **User**, set the permissions listed in the tables below.
-3. If there are no user permissions, select **Add Access Policy**, and specify the permissions. If the user account already has a policy, under **User**, set the permissions.
+Azure VMs using ADE can have the following variations, and permissions need to be set accordingly for the relevant components:
+- Default option where the disk is encrypted using only secrets
+- Added security using [key encryption key](../virtual-machines/windows/disk-encryption-key-vault.md#set-up-a-key-encryption-key-kek)
- - If VMs you want to move are enabled with Azure disk encryption (ADE), In **Key Permissions** > **Key Management Operations**, select **Get** and **List** if they're not selected.
- - If you're using customer-managed keys (CMKs) to encrypt disk encryption keys used for encryption-at-rest (server-side encryption), in **Key Permissions** > **Key Management Operations**, select **Get** and **List**. Additionally, in **Cryptographic Operations**, select **Decrypt** and **Encrypt**
-
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/set-vault-permissions.png" alt-text="Dropdown list to select key vault permissions." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/set-vault-permissions.png":::
+### Source region key vault
+
+The following permissions need to be set for the user executing the script:
+
+**Component** | **Permission needed**
+ |
+Secrets | Get permission<br/><br/>In **Secret permissions** > **Secret Management Operations**, select **Get**
+Keys<br/><br/>If you're using a key encryption key (KEK), you need this permission in addition to secrets | Get and Decrypt permissions<br/><br/>In **Key Permissions** > **Key Management Operations**, select **Get**. In **Cryptographic Operations**, select **Decrypt**.
+
+### Destination region key vault
+
+In **Access policies**, make sure that **Azure Disk Encryption for volume encryption** is enabled.
+
+The following permissions need to be set for the user executing the script:
+
+**Component** | **Permission needed**
+ |
+Secrets | Set permission<br/><br/>In **Secret permissions** > **Secret Management Operations**, select **Set**
+Keys<br/><br/>If you're using a key encryption key (KEK), you need this permission in addition to secrets | Get, Create, and Encrypt permissions<br/><br/>In **Key Permissions** > **Key Management Operations**, select **Get** and **Create**. In **Cryptographic Operations**, select **Encrypt**.
+
+In addition to the above permissions, in the destination key vault you need to add permissions for the [Managed System Identity](./common-questions.md#how-is-managed-identity-used-in-resource-mover) that Resource Mover uses to access the Azure resources on your behalf.
+
+1. Under **Settings**, select **Add Access policies**.
+2. In **Select principal**, search for the MSI. The MSI name is ```movecollection-<sourceregion>-<target-region>-<metadata-region>```.
+3. Add the following permissions for the MSI:
+
+**Component** | **Permission needed**
+ |
+Secrets | Get and List permissions<br/><br/>In **Secret permissions** > **Secret Management Operations**, select **Get** and **List**
+Keys<br/><br/>If you're using a key encryption key (KEK), you need this permission in addition to secrets | Get and List permissions<br/><br/>In **Key Permissions** > **Key Management Operations**, select **Get** and **List**
-4. In **Secret permissions**, **Secret Management Operations**, select **Get**, **List**, and **Set**.
-5. If you're assigning permissions to a new user account, in **Select principal**, select the user to whom you're assigning permissions.
-6. In **Access policies**, make sure that **Azure Disk Encryption for volume encryption** is enabled.
-7. Repeat the procedure for the key vault in the destination region.
### Copy the keys to the destination key vault
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-incremental-indexing-conceptual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-incremental-indexing-conceptual.md
@@ -8,7 +8,7 @@
Previously updated : 06/18/2020 Last updated : 02/09/2021 # Incremental enrichment and caching in Azure Cognitive Search
@@ -19,7 +19,7 @@ Last updated 06/18/2020
*Incremental enrichment* is a feature that targets [skillsets](cognitive-search-working-with-skillsets.md). It leverages Azure Storage to save the processing output emitted by an enrichment pipeline for reuse in future indexer runs. Wherever possible, the indexer reuses any cached output that is still valid.
-Not only does incremental enrichment preserve your monetary investment in processing (in particular, OCR and image processing) but it also makes for a more efficient system. When structures and content are cached, an indexer can determine which skills have changed and run only those that have been modified, as well as any downstream dependent skills.
+Not only does incremental enrichment preserve your monetary investment in processing (in particular, OCR and image processing) but it also makes for a more efficient system.
A workflow that uses incremental caching includes the following steps:
@@ -91,7 +91,7 @@ Setting this parameter ensures that only updates to the skillset definition are
The following example shows an Update Skillset request with the parameter:

```http
-PUT https://customerdemos.search.windows.net/skillsets/callcenter-text-skillset?api-version=2020-06-30-Preview&disableCacheReprocessingChangeDetection=true
+PUT https://[search service].search.windows.net/skillsets/[skillset name]?api-version=2020-06-30-Preview&disableCacheReprocessingChangeDetection=true
```

### Bypass data source validation checks
@@ -99,7 +99,7 @@ PUT https://customerdemos.search.windows.net/skillsets/callcenter-text-skillset?
Most changes to a data source definition will invalidate the cache. However, for scenarios where you know that a change should not invalidate the cache - such as changing a connection string or rotating the key on the storage account - append the `ignoreResetRequirement` parameter on the data source update. Setting this parameter to `true` allows the commit to go through, without triggering a reset condition that would result in all objects being rebuilt and populated from scratch.

```http
-PUT https://customerdemos.search.windows.net/datasources/callcenter-ds?api-version=2020-06-30-Preview&ignoreResetRequirement=true
+PUT https://[search service].search.windows.net/datasources/[data source name]?api-version=2020-06-30-Preview&ignoreResetRequirement=true
```

### Force skillset evaluation
@@ -108,6 +108,10 @@ The purpose of the cache is to avoid unnecessary processing, but suppose you mak
In this case, you can use the [Reset Skills](/rest/api/searchservice/preview-api/reset-skills) to force reprocessing of a particular skill, including any downstream skills that have a dependency on that skill's output. This API accepts a POST request with a list of skills that should be invalidated and marked for reprocessing. After Reset Skills, run the indexer to invoke the pipeline.
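As a sketch, a Reset Skills request might look like the following, with placeholders for the service name, skillset name, skill name, and admin key; verify the exact endpoint and body shape against the Reset Skills preview reference:

```http
POST https://[search service].search.windows.net/skillsets/[skillset name]/resetskills?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]

{
    "skillNames": ["[skill name]"]
}
```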
+### Reset documents
+
+[Reset of an indexer](/rest/api/searchservice/reset-indexer) will result in all documents in the search corpus being reprocessed. In scenarios where only a few documents need to be reprocessed, and the data source cannot be updated, use [Reset Documents (preview)](/rest/api/searchservice/preview-api/reset-documents) to force reprocessing of specific documents. When a document is reset, the indexer invalidates the cache for that document and the document is reprocessed by reading it from the data source. For more information, see [Run or reset indexers, skills, and documents](search-howto-run-reset-indexers.md).
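+A hedged sketch of a Reset Documents request follows; the service name, indexer name, document keys, and admin key are placeholders, and the exact request shape should be confirmed against the Reset Documents preview reference:

```http
POST https://[search service].search.windows.net/indexers/[indexer name]/resetdocs?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]

{
    "documentKeys": ["[document key 1]", "[document key 2]"]
}
```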
+ ## Change detection Once you enable a cache, the indexer evaluates changes in your pipeline composition to determine which content can be reused and which needs reprocessing. This section enumerates changes that invalidate the cache outright, followed by changes that trigger incremental processing.
search https://docs.microsoft.com/en-us/azure/search/search-howto-create-indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-create-indexers.md
@@ -139,6 +139,20 @@ Scheduled processing usually coincides with a need for incremental indexing of c
+ [Azure Table Storage](search-howto-indexing-azure-tables.md) + [Azure Cosmos DB](search-howto-index-cosmosdb.md)
+## Change detection and indexer state
+
+Indexers can detect changes in the underlying data and only process new or updated documents on each indexer run. For example, if indexer status says that a run was successful with `0/0` documents processed, it means that the indexer didn't find any new or changed rows or blobs in the underlying data source.
+
+How an indexer supports change detection varies by data source:
+
++ Azure Blob Storage, Azure Table Storage, and Azure Data Lake Storage Gen2 stamp each blob or row update with a date and time. The various indexers use this information to determine which documents to update in the index. Built-in change detection means that an indexer can recognize new and updated documents, with no additional configuration required on your part.
+
++ Azure SQL and Cosmos DB provide change detection features in their platforms. You can specify the change detection policy in your data source definition.
+
+For large indexing loads, an indexer also keeps track of the last document it processed through an internal "high water mark". The marker is never exposed in the API, but internally the indexer keeps track of where it stopped. When indexing resumes, either through a scheduled run or an on-demand invocation, the indexer references the high water mark so that it can pick up where it left off.
+
+If you need to clear the high water mark to re-index in full, you can use [Reset Indexer](https://docs.microsoft.com/rest/api/searchservice/reset-indexer). For more selective re-indexing, use [Reset Skills](https://docs.microsoft.com/rest/api/searchservice/preview-api/reset-skills) or [Reset Documents](https://docs.microsoft.com/rest/api/searchservice/preview-api/reset-documents). Through the reset APIs, you can clear internal state, and also flush the cache if you enabled [incremental enrichment](search-howto-incremental-index.md). For more background and comparison of each reset option, see [Run or reset indexers, skills, and documents](search-howto-run-reset-indexers.md).
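+Clearing the high water mark with Reset Indexer can be sketched as follows (the service name, indexer name, and admin key are placeholders); after a reset, the next run reprocesses the entire corpus:

```http
POST https://[search service].search.windows.net/indexers/[indexer name]/reset?api-version=2020-06-30
api-key: [admin key]
```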
+
## Know your data

Indexers expect a tabular row set, where each row becomes a full or partial search document in the index. Often, there is a one-to-one correspondence between a row and the resulting search document, where all the fields in the row set fully populate each document. But you can use indexers to generate just part of a document, for example if you're using multiple indexers or approaches to build out the index.
search https://docs.microsoft.com/en-us/azure/search/search-howto-index-changed-deleted-blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-changed-deleted-blobs.md
@@ -11,7 +11,7 @@
Last updated 01/29/2021
-# How to set up change and deletion detection for blobs in Azure Cognitive Search indexing
+# Change and deletion detection in blob indexing (Azure Cognitive Search)
After an initial search index is created, you might want subsequent indexer jobs to only pick up new and changed documents. For search content that originates from Azure Blob storage, change detection occurs automatically when you use a schedule to trigger indexing. By default, the service reindexes only the changed blobs, as determined by the blob's `LastModified` timestamp. In contrast with other data sources supported by search indexers, blobs always have a timestamp, which eliminates the need to set up a change detection policy manually.
search https://docs.microsoft.com/en-us/azure/search/search-howto-run-reset-indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-run-reset-indexers.md
@@ -18,13 +18,11 @@ Indexer execution can occur when you first create the [indexer](search-indexer-o
You can clear the high water mark by resetting the indexer if you want to reprocess from scratch. Reset APIs are available at decreasing levels in the object hierarchy:

+ The entire search corpus (use [Reset Indexers](#reset-indexers))
-+ A specific document or list of documents (use [Reset Skills (preview)](#reset-skills))
-+ A specific skill or enrichment in a document (use [Reset Documents (preview)](#reset-docs))
++ A specific document or list of documents (use [Reset Documents - preview](#reset-docs))
++ A specific skill or enrichment in a document (use [Reset Skills - preview](#reset-skills))

The Reset APIs are used to refresh cached content (applicable in [AI enrichment](cognitive-search-concept-intro.md) scenarios), or to clear the high water mark and rebuild the index.
-If specified, the reset parameters become the sole determinant of what gets processed, regardless of other changes in the underlying data. For example, if 20 blobs were added or updated since the last indexer run, but you only reset one document, only that one document will be processed.
Reset, followed by run, can reprocess existing documents and new documents, but does not remove orphaned search documents in the search index that were created on previous runs. For more information about deletion, see [Add, Update or Delete Documents](/rest/api/searchservice/addupdate-or-delete-documents).

## Run indexers
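An on-demand run is issued through the Run Indexer REST API. A minimal sketch, with placeholders for the service name, indexer name, and admin key:

```http
POST https://[search service].search.windows.net/indexers/[indexer name]/run?api-version=2020-06-30
api-key: [admin key]
```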
@@ -68,12 +66,12 @@ A reset flag is cleared after the run is finished. Any regular change detection
<a name="reset-skills"></a>
-## Reset individual skills (preview)
+## Reset skills (preview)
> [!IMPORTANT] > [Reset Skills](/rest/api/searchservice/preview-api/reset-skills) is in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-For indexers that have skillsets, you can reset specific skills to force processing of that skill and any downstream skills that depend on its output. [Cached enrichments](search-howto-incremental-index.md) pertaining to the affected skills are also refreshed.
+For indexers that have skillsets, you can reset specific skills to force processing of that skill and any downstream skills that depend on its output. [Cached enrichments](search-howto-incremental-index.md) are also refreshed. Resetting skills invalidates the cached skill results, which is useful when a new version of a skill is deployed and you want the indexer to rerun that skill for all documents.
[Reset Skills](/rest/api/searchservice/preview-api/reset-skills) is available through REST **`api-version=2020-06-30-Preview`**.
@@ -94,14 +92,16 @@ If no skills are specified, the entire skillset is executed and if caching is en
<a name="reset-docs"></a>
-## Reset individual documents (preview)
+## Reset docs (preview)
> [!IMPORTANT] > [Reset Documents](/rest/api/searchservice/preview-api/reset-documents) is in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-The [Reset documents API](https://docs.microsoft.com/rest/api/searchservice/preview-api/reset-documents) accepts a list of document keys so that you can refresh specific documents.
+The [Reset documents API](https://docs.microsoft.com/rest/api/searchservice/preview-api/reset-documents) accepts a list of document keys so that you can refresh specific documents. If specified, the reset parameters become the sole determinant of what gets processed, regardless of other changes in the underlying data. For example, if 20 blobs were added or updated since the last indexer run, but you only reset one document, only that one document will be processed.
+
+On a per-document basis, all fields in that search document are refreshed with values from the data source. You cannot pick and choose which fields to refresh.
-All fields in the search document are refreshed from corresponding fields in the data source. If the document is enriched through a skillset and has cached data, the cached parts are also refreshed. The skillset is invoked for just the specified documents.
+If the document is enriched through a skillset and has cached data, the skillset is invoked for just the specified documents, and the cache is updated for the reprocessed documents.
When testing this API for the first time, the following APIs will help you validate and test the behaviors:
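For instance, checking an indexer's progress after a reset can be sketched with the Get Indexer Status call (the service name, indexer name, and admin key are placeholders):

```http
GET https://[search service].search.windows.net/indexers/[indexer name]/status?api-version=2020-06-30
api-key: [admin key]
```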
search https://docs.microsoft.com/en-us/azure/search/search-performance-optimization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-performance-optimization.md
@@ -83,13 +83,16 @@ For more details on this, please visit the [Azure Cognitive Search Service Level
Since replicas are copies of your data, having multiple replicas allows Azure Cognitive Search to do machine reboots and maintenance against one replica, while query execution continues on other replicas. Conversely, if you take replicas away, you'll incur query performance degradation, assuming those replicas were an under-utilized resource.
+<a name="availability-zones"></a>
+ ### Availability Zones
-[Availability Zones](https://docs.microsoft.com/azure/availability-zones/az-overview) divide a region's data centers into distinct physical location groups to provide high-availability, intra-regionally. The search service runs within one region; the replicas run in different zones.
+[Availability Zones](https://docs.microsoft.com/azure/availability-zones/az-overview) divide a region's data centers into distinct physical location groups to provide high-availability, within the same region. For Cognitive Search, individual replicas are the units for zone assignment. A search service runs within one region; its replicas run in different zones.
You can utilize Availability Zones with Azure Cognitive Search by adding two or more replicas to your search service. Each replica will be placed in a different Availability Zone within the region. If you have more replicas than Availability Zones, the replicas will be distributed across Availability Zones as evenly as possible.

Azure Cognitive Search currently supports Availability Zones for Standard tier or higher search services that were created in one of the following regions:

+ Australia East (created January 30, 2021 or later)
+ Canada Central (created January 30, 2021 or later)
+ Central US (created December 4, 2020 or later)
@@ -102,7 +105,7 @@ Azure Cognitive Search currently supports Availability Zones for Standard tier o
+ West Europe (created January 29, 2021 or later) + West US 2 (created January 30, 2021 or later)
-Availability Zones do not impact the [Azure Cognitive Search Service Level Agreement](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
+Availability Zones do not impact the [Azure Cognitive Search Service Level Agreement](https://azure.microsoft.com/support/legal/sla/search/v1_0/). You still need 3 or more replicas for query high availability.
## Scale for geo-distributed workloads and geo-redundancy
search https://docs.microsoft.com/en-us/azure/search/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/whats-new.md
@@ -19,7 +19,7 @@ Learn what's new in the service. Bookmark this page to keep up to date with the
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
||||
| [Reset Documents (preview)](search-howto-run-reset-indexers.md) | Reprocesses individually selected search documents in indexer workloads. | [Search REST API 2020-06-30-Preview](/rest/api/searchservice/index-preview) |
-| Availability Zone support | Search services with two or more replicas in certain regions, as listed in [Scale for performance](search-performance-optimization.md), gain resiliency by having replicas in two or more distinct physical locations. | The region and date of search service creation determine availability. See the performance tuning document for details. |
+| [Availability Zones](search-performance-optimization.md#availability-zones)| Search services with two or more replicas in certain regions, as listed in [this article](search-performance-optimization.md#availability-zones), gain resiliency by having replicas in two or more distinct physical locations. | The region and date of search service creation determine availability. See the performance tuning document for details. |
## January 2021
@@ -31,7 +31,8 @@ Learn what's new in the service. Bookmark this page to keep up to date with the
| Month | Feature | Description | |-||-|
-| November | [Customer-managed key encryption (extended)](search-security-manage-encryption-keys.md) | extends customer-managed encryption over the full range of assets created and managed by a search service. Generally available.|
+| November | [Customer-managed key encryption (extended)](search-security-manage-encryption-keys.md) | Extends customer-managed encryption over the full range of assets created and managed by a search service. Generally available.|
+| September | [Visual Studio Code extension for Azure Cognitive Search](search-get-started-vs-code.md) | Adds a workspace, navigation, intellisense, and templates for creating indexes, indexers, data sources, and skillsets. Public preview. |
| September | [Managed service identity (indexers)](search-howto-managed-identities-data-sources.md) | Generally available. |
| September | [Outbound requests using a private link](search-indexer-howto-access-private.md) | Generally available. |
| September | [Management REST API (2020-08-01)](/rest/api/searchmanagement/management-api-versions) | Generally available. |
security-center https://docs.microsoft.com/en-us/azure/security-center/asset-inventory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/asset-inventory.md
@@ -5,7 +5,7 @@
Previously updated : 12/22/2020 Last updated : 02/10/2021
@@ -32,7 +32,6 @@ The asset management possibilities for this tool are substantial and continue to
## Availability

|Aspect|Details|
|-|:-|
|Release state:|General Availability (GA)|
@@ -43,33 +42,36 @@ The asset management possibilities for this tool are substantial and continue to
## What are the key features of asset inventory?

The inventory page provides the following tools:

-- **Summaries** - Before you define any filters, a prominent strip of values at the top of the inventory view shows:
- - **Total resources**: The total number of resources connected to Security Center.
- - **Unhealthy resources**: Resources with active security recommendations. [Learn more about security recommendations](security-center-recommendations.md).
- - **Unmonitored resources**: Resources with agent monitoring issues - they have the Log Analytics agent deployed, but the agent isn't sending data or has other health issues.
-- **Filters** - The multiple filters at the top of the page provide a way to quickly refine the list of resources according to the question you're trying to answer. For example, if you wanted to answer the question *Which of my machines with the tag 'Production' are missing the Log Analytics agent?* you could combine the **Agent monitoring** filter with the **Tags** filter as shown in the following clip:
+### 1 - Summaries
+Before you define any filters, a prominent strip of values at the top of the inventory view shows:
- :::image type="content" source="./media/asset-inventory/filtering-to-prod-unmonitored.gif" alt-text="Filtering to production resources that aren't monitored":::
+- **Total resources**: The total number of resources connected to Security Center.
+- **Unhealthy resources**: Resources with active security recommendations. [Learn more about security recommendations](security-center-recommendations.md).
+- **Unmonitored resources**: Resources with agent monitoring issues - they have the Log Analytics agent deployed, but the agent isn't sending data or has other health issues.
+- **Unregistered subscriptions**: Any subscriptions in the selected scope that haven't yet been connected to Azure Security Center.
- As soon as you've applied filters, the summary values are updated to relate to the query results.
+### 2 - Filters
+The multiple filters at the top of the page provide a way to quickly refine the list of resources according to the question you're trying to answer. For example, if you wanted to answer the question *Which of my machines with the tag 'Production' are missing the Log Analytics agent?* you could combine the **Agent monitoring** filter with the **Tags** filter.
-- **Export options** - Inventory provides the option to export the results of your selected filter options to a CSV file. In addition, you can export the query itself to Azure Resource Graph Explorer to further refine, save, or modify the Kusto Query Language (KQL) query.
+As soon as you've applied filters, the summary values are updated to relate to the query results.
- :::image type="content" source="./media/asset-inventory/inventory-export-options.png" alt-text="Inventory's export options":::
+### 3 - Export and asset management tools
- > [!TIP]
- > The KQL documentation provides a database with some sample data together with some simple queries to get the "feel" for the language. [Learn more in this KQL tutorial](/azure/data-explorer/kusto/query/tutorial?pivots=azuredataexplorer).
+**Export options** - Inventory includes an option to export the results of your selected filter options to a CSV file. You can also export the query itself to Azure Resource Graph Explorer to further refine, save, or modify the Kusto Query Language (KQL) query.
-- **Asset management options** - Inventory lets you perform complex discovery queries. When you've found the resources that match your queries, inventory provides shortcuts for operations such as:
+> [!TIP]
+> The KQL documentation provides a database with some sample data together with some simple queries to get the "feel" for the language. [Learn more in this KQL tutorial](/azure/data-explorer/kusto/query/tutorial?pivots=azuredataexplorer).
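
Exported queries use the `securityresources` tables exposed by Azure Resource Graph. As a rough, hypothetical sketch (the exact query inventory generates will differ; only the `microsoft.security/assessments` type and its `properties.status.code`/`properties.displayName` fields are assumed from the Resource Graph schema), a KQL query counting unhealthy resources per recommendation might look like:

```kusto
// Hypothetical example: count resources with unhealthy assessments,
// grouped by recommendation name.
securityresources
| where type == "microsoft.security/assessments"
| extend statusCode = tostring(properties.status.code),
         recommendation = tostring(properties.displayName)
| where statusCode == "Unhealthy"
| summarize unhealthyCount = count() by recommendation
| order by unhealthyCount desc
```

You can paste and adjust a query like this in Azure Resource Graph Explorer before saving or exporting it.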
- - Assign tags to the filtered resources - select the checkboxes alongside the resources you want to tag.
- - Onboard new servers to Security Center - use the **Add non-Azure servers** toolbar button.
- - Automate workloads with Azure Logic Apps - use the **Trigger Logic App** button to run a logic app on one or more resources. Your logic apps have to be prepared in advance, and accept the relevant trigger type (HTTP request). [Learn more about logic apps](../logic-apps/logic-apps-overview.md).
+**Asset management options** - Inventory lets you perform complex discovery queries. When you've found the resources that match your queries, inventory provides shortcuts for operations such as:
+
+- Assign tags to the filtered resources - select the checkboxes alongside the resources you want to tag.
+- Onboard new servers to Security Center - use the **Add non-Azure servers** toolbar button.
+- Automate workloads with Azure Logic Apps - use the **Trigger Logic App** button to run a logic app on one or more resources. Your logic apps have to be prepared in advance, and accept the relevant trigger type (HTTP request). [Learn more about logic apps](../logic-apps/logic-apps-overview.md).
## How does asset inventory work?
@@ -89,8 +91,6 @@ Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), asset
1. Select the relevant options in the filters to create the specific query you want to perform.
- :::image type="content" source="./media/asset-inventory/inventory-filters.png" alt-text="Inventory's filtering options" lightbox="./media/asset-inventory/inventory-filters.png":::
By default, the resources are sorted by the number of active security recommendations.

> [!IMPORTANT]
@@ -98,6 +98,8 @@ Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), asset
>
> For example, if you've selected only one subscription, and the subscription has no resources with outstanding security recommendations to remediate (0 unhealthy resources), the **Recommendations** filter will have no options.
+ :::image type="content" source="./media/asset-inventory/filtering-to-prod-unmonitored.gif" alt-text="Using the filter options in Azure Security Center's asset inventory to filter resources to production resources that aren't monitored":::
+
1. To use the **Security findings contain** filter, enter free text from the ID, security check, or CVE name of a vulnerability finding to filter to the affected resources:

    !["Security findings contain" filter](./media/asset-inventory/security-findings-contain-elements.png)
@@ -107,7 +109,7 @@ Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), asset
1. To use the **Azure Defender** filter, select one or more options (Off, On, or Partial):
- - **Off** - Resources that aren't protected by an Azure Defender plan. You can right click on any of these and upgrade them:
+ - **Off** - Resources that aren't protected by an Azure Defender plan. You can right-click on any of these and upgrade them:
:::image type="content" source="./media/asset-inventory/upgrade-resource-inventory.png" alt-text="Upgrade a resource to Azure Defender from right click" lightbox="./media/asset-inventory/upgrade-resource-inventory.png":::
security-center https://docs.microsoft.com/en-us/azure/security-center/defender-for-app-service-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-app-service-introduction.md
@@ -19,13 +19,13 @@ Azure App Service is a fully managed platform for building and hosting your web
## Availability
-|Aspect|Details|
-|-|:-|
-|Release state:|General Availability (GA)|
-|Pricing:|[Azure Defender for App Service](azure-defender.md) is billed as shown on [the pricing page](security-center-pricing.md)<br>The pricing and settings page lists the number of instances for your **Resource Quantity**. That number is the total number of compute instances, in all App Service plans on this subscription, running at the moment you opened the pricing tier page.<br>To validate the count, open **App Service plans** in the Azure portal and check the number of compute instances used by each plan.|
-|Supported App Service plans:|![Yes](./media/icons/yes-icon.png) Basic, Standard, Premium, Isolated, or Linux<br>![No](./media/icons/no-icon.png) Free, Shared, or Consumption<br>[Learn more about App Service Plans](https://azure.microsoft.com/pricing/details/app-service/plans/)|
-|Clouds:|![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![No](./media/icons/no-icon.png) National/Sovereign (US Gov, China Gov, Other Gov)|
-|||
+| Aspect | Details |
+||:|
+| Release state: | General Availability (GA) |
+| Pricing: | [Azure Defender for App Service](azure-defender.md) is billed as shown on [the pricing page](security-center-pricing.md)<br>Billing is according to total compute instances in all plans|
+| Supported App Service plans: | All App Service plans are supported (with one exception, see below). [Learn more about App Service plans](https://azure.microsoft.com/pricing/details/app-service/plans/).<br>Azure Functions on the consumption plan isn't supported. [Learn more about Azure Functions hosting options](../azure-functions/functions-scale.md). |
+| Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![No](./media/icons/no-icon.png) National/Sovereign (US Gov, China Gov, Other Gov) |
+| | |
## What are the benefits of Azure Defender for App Service?
security-center https://docs.microsoft.com/en-us/azure/security-center/prevent-misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/prevent-misconfigurations.md
@@ -13,7 +13,7 @@
# Prevent misconfigurations with Enforce/Deny recommendations
-Security misconfigurations are a major cause of security incidents. Security Center now has the ability to help *prevent* misconfigurations of new resources with regards to specific recommendations.
+Security misconfigurations are a major cause of security incidents. Security Center now has the ability to help *prevent* misconfigurations of new resources with regard to specific recommendations.
This feature can help keep your workloads secure and stabilize your secure score.
@@ -58,40 +58,7 @@ This can be found at the top of the resource details page for selected security
These recommendations can be used with the **deny** option:

-- Access to storage accounts with firewall and virtual network configurations should be restricted
-- Azure Cache for Redis should reside within a virtual network
-- Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest
-- Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)
-- Azure Spring Cloud should use network injection
-- Cognitive Services accounts should enable data encryption with a customer-managed key (CMK)
-- Container CPU and memory limits should be enforced
-- Container images should be deployed from trusted registries only
-- Container registries should be encrypted with a customer-managed key (CMK)
-- Container with privilege escalation should be avoided
-- Containers sharing sensitive host namespaces should be avoided
-- Containers should listen on allowed ports only
-- Immutable (read-only) root filesystem should be enforced for containers
-- Key Vault keys should have an expiration date
-- Key Vault secrets should have an expiration date
-- Key vaults should have purge protection enabled
-- Key vaults should have soft delete enabled
-- Least privileged Linux capabilities should be enforced for containers
-- Only secure connections to your Redis Cache should be enabled
-- Overriding or disabling of containers AppArmor profile should be restricted
-- Privileged containers should be avoided
-- Running containers as root user should be avoided
-- Secure transfer to storage accounts should be enabled
-- Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign
-- Service Fabric clusters should only use Azure Active Directory for client authentication
-- Services should listen on allowed ports only
-- Storage accounts should be migrated to new Azure Resource Manager resources
-- Storage accounts should restrict network access using virtual network rules
-- Usage of host networking and ports should be restricted
-- Usage of pod HostPath volume mounts should be restricted to a known list to restrict node access from compromised containers
-- Validity period of certificates stored in Azure Key Vault should not exceed 12 months
-- Virtual machines should be migrated to new Azure Resource Manager resources
-- Web Application Firewall (WAF) should be enabled for Application Gateway
-- Web Application Firewall (WAF) should be enabled for Azure Front Door Service service

These recommendations can be used with the **enforce** option:
@@ -105,4 +72,4 @@ These recommendations can be used with the **enforce** option:
- Diagnostic logs in Key Vault should be enabled
- Diagnostic logs in Logic Apps should be enabled
- Diagnostic logs in Search services should be enabled
-- Diagnostic logs in Service Bus should be enabled
+- Diagnostic logs in Service Bus should be enabled
security-center https://docs.microsoft.com/en-us/azure/security-center/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
@@ -10,7 +10,7 @@ ms.devlang: na
na Previously updated : 02/04/2021 Last updated : 02/10/2021
@@ -35,6 +35,7 @@ Updates in February include:
- [Direct link to policy from recommendation details page](#direct-link-to-policy-from-recommendation-details-page) - [SQL data classification recommendation no longer affects your secure score](#sql-data-classification-recommendation-no-longer-affects-your-secure-score) - [Workflow automations can be triggered by changes to regulatory compliance assessments (preview)](#workflow-automations-can-be-triggered-by-changes-to-regulatory-compliance-assessments-preview)
+- [Asset inventory page enhancements](#asset-inventory-page-enhancements)
### Kubernetes workload protection recommendations released for General Availability (GA)
@@ -66,17 +67,31 @@ If you're reviewing the list of recommendations on our [Security recommendations
### SQL data classification recommendation no longer affects your secure score

The recommendation **Sensitive data in your SQL databases should be classified** no longer affects your secure score. This is the only recommendation in the **Apply data classification** security control, so that control now has a secure score value of 0.

### Workflow automations can be triggered by changes to regulatory compliance assessments (preview)

We've added a third data type to the trigger options for your workflow automations: changes to regulatory compliance assessments.

:::image type="content" source="media/release-notes/regulatory-compliance-triggers-workflow-automation.png" alt-text="Using changes to regulatory compliance assessments to trigger a workflow automation" lightbox="media/release-notes/regulatory-compliance-triggers-workflow-automation.png":::
+### Asset inventory page enhancements
+Security Center's asset inventory page has been improved in the following ways:
+
+- Summaries at the top of the page now include **Unregistered subscriptions**, showing the number of subscriptions without Security Center enabled.
+
+ :::image type="content" source="media/release-notes/unregistered-subscriptions.png" alt-text="Count of unregistered subscriptions in the summaries at the top of the asset inventory page":::
+
+- Filters have been expanded and enhanced to include:
+ - **Counts** - Each filter presents the number of resources that meet the criteria of each category
+
+ :::image type="content" source="media/release-notes/counts-in-inventory-filters.png" alt-text="Counts in the filters in the asset inventory page of Azure Security Center":::
+
+ - **Contains exemptions filter** (optional) - narrows the results to resources with or without exemptions. This filter isn't shown by default, but is accessible from the **Add filter** button.
+
+ :::image type="content" source="media/release-notes/adding-contains-exemption-filter.gif" alt-text="Adding the filter 'contains exemption' in Azure Security Center's asset inventory page":::
+
## January 2021

Updates in January include:
@@ -627,7 +642,7 @@ Security Center's regulatory compliance dashboard provides insights into your co
The dashboard includes a default set of regulatory standards. If any of the supplied standards isn't relevant to your organization, it's now a simple process to remove them from the UI for a subscription. Standards can be removed only at the *subscription* level; not the management group scope.
-Learn more in [Removing a standard from your dashboard](update-regulatory-compliance-packages.md#removing-a-standard-from-your-dashboard).
+Learn more in [Remove a standard from your dashboard](update-regulatory-compliance-packages.md#remove-a-standard-from-your-dashboard).
### Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-compliance-dashboard.md
@@ -11,7 +11,7 @@ ms.devlang: na
na Previously updated : 02/04/2021 Last updated : 02/10/2021
@@ -21,7 +21,7 @@ Azure Security Center helps streamline the process for meeting regulatory compli
Security Center continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
-When you enable Security Center on an Azure subscription, it is automatically assigned the [Azure Security Benchmark](../security/benchmarks/introduction.md). This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
+When you enable Security Center on an Azure subscription, the [Azure Security Benchmark](../security/benchmarks/introduction.md) is automatically assigned to that subscription. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
The regulatory compliance dashboard shows the status of all the assessments within your environment for your chosen standards and regulations. As you act on the recommendations and reduce risk factors in your environment, your compliance posture improves.
@@ -40,7 +40,7 @@ If you don't have an Azure subscription, create a [free account](https://azure
To step through the features covered in this tutorial:

- [Azure Defender](azure-defender.md) must be enabled. You can try Azure Defender for free for 30 days.
-- You need to be signed in with an account that has reader access to the policy compliance data (**Security Reader** is insufficient). The role of **Global reader** for the subscription will work. At a minimum, you'll need to have **Resource Policy Contributor** and **Security Admin** roles assigned.
+- You must be signed in with an account that has reader access to the policy compliance data (**Security Reader** is insufficient). The role of **Global reader** for the subscription will work. At a minimum, you'll need to have **Resource Policy Contributor** and **Security Admin** roles assigned.
## Assess your regulatory compliance
@@ -54,13 +54,13 @@ Use the regulatory compliance dashboard to help focus your attention on the gaps
:::image type="content" source="./media/security-center-compliance-dashboard/compliance-dashboard.png" alt-text="Regulatory compliance dashboard" lightbox="./media/security-center-compliance-dashboard/compliance-dashboard.png":::
-1. Select a tab for a compliance standard that is relevant to you (1). You'll see which subscriptions the standard is applied on (2), and the list of all controls for that standard (3). For the applicable controls, you can view the details of passing and failing assessments associated with that control (4), as well as the numbers of affected resources (5). Some controls are grayed out. These controls don't have any Security Center assessments associated with them. Check the requirements for these and assess them in your environment on your own. Some of these might be process-related and not technical.
+1. Select a tab for a compliance standard that is relevant to you (1). You'll see which subscriptions the standard is applied on (2), and the list of all controls for that standard (3). For the applicable controls, you can view the details of passing and failing assessments associated with that control (4), and the number of affected resources (5). Some controls are grayed out. These controls don't have any Security Center assessments associated with them. Check their requirements and assess them in your environment. Some of these might be process-related and not technical.
:::image type="content" source="./media/security-center-compliance-dashboard/compliance-drilldown.png" alt-text="Exploring the details of compliance with a specific standard":::

1. To generate a PDF report with a summary of your current compliance status for a particular standard, select **Download report**.
- The report provides a high-level summary of your compliance status for the selected standard based on Security Center assessments data, and is organized according to the controls of that particular standard. The report can be shared with relevant stakeholders, and might provide evidence to internal and external auditors.
+ The report provides a high-level summary of your compliance status for the selected standard based on Security Center assessments data. The report is organized according to the controls of that particular standard. The report can be shared with relevant stakeholders, and might provide evidence to internal and external auditors.
:::image type="content" source="./media/security-center-compliance-dashboard/download-report.png" alt-text="Download compliance report":::
@@ -68,7 +68,7 @@ Use the regulatory compliance dashboard to help focus your attention on the gaps
Using the information in the regulatory compliance dashboard, improve your compliance posture by resolving recommendations directly within the dashboard.
-1. Click through any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps that should be followed to resolve the issue.
+1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue.
1. Select a particular resource to view more details and resolve the recommendation for that resource. <br>For example, in the **Azure CIS 1.1.0** standard, select the recommendation **Disk encryption should be applied on virtual machines**.
@@ -80,7 +80,7 @@ Using the information in the regulatory compliance dashboard, improve your compl
For more information about how to apply recommendations, see [Implementing security recommendations in Azure Security Center](security-center-recommendations.md).
-1. After you take action to resolve recommendations, you'll see the impact in the compliance dashboard report because your compliance score improves.
+1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves.
> [!NOTE] > Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment.
@@ -111,21 +111,99 @@ Learn more in [continuously export Security Center data](continuous-export.md).
Security Center's workflow automation feature can trigger Logic Apps whenever one of your regulatory compliance assessments change state.
-For example, you might want Security Center to email a specific user when a compliance assessment fails. You'll need to create the logic app (using [Azure Logic Apps](../logic-apps/logic-apps-overview.md)) first and then setup the trigger in a new workflow automation as explained in [Automate responses to Security Center triggers](workflow-automation.md).
+For example, you might want Security Center to email a specific user when a compliance assessment fails. You'll need to create the logic app first (using [Azure Logic Apps](../logic-apps/logic-apps-overview.md)) and then set up the trigger in a new workflow automation as explained in [Automate responses to Security Center triggers](workflow-automation.md).
:::image type="content" source="media/release-notes/regulatory-compliance-triggers-workflow-automation.png" alt-text="Using changes to regulatory compliance assessments to trigger a workflow automation" lightbox="media/release-notes/regulatory-compliance-triggers-workflow-automation.png":::
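
A logic app for this scenario must start with a Request (HTTP request) trigger. As a hedged sketch only (the payload property names below are hypothetical, not the actual Security Center event schema), the trigger portion of a Logic Apps workflow definition might look like:

```json
{
  "triggers": {
    "manual": {
      "type": "Request",
      "kind": "Http",
      "inputs": {
        "schema": {
          "type": "object",
          "properties": {
            "standardName": { "type": "string" },
            "assessmentName": { "type": "string" },
            "state": { "type": "string" }
          }
        }
      }
    }
  }
}
```

The actions that follow the trigger (for example, sending the email) are defined in the same workflow definition.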
+## FAQ - Regulatory compliance dashboard
+
+- [What standards are supported in the compliance dashboard?](#what-standards-are-supported-in-the-compliance-dashboard)
+- [Why do some controls appear grayed out?](#why-do-some-controls-appear-grayed-out)
+- [How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard?](#how-can-i-remove-a-built-in-standard-like-pci-dss-iso-27001-or-soc2-tsp-from-the-dashboard)
+- [I made the suggested changes based on the recommendation, yet it isn't being reflected in the dashboard](#i-made-the-suggested-changes-based-on-the-recommendation-yet-it-isnt-being-reflected-in-the-dashboard)
+- [What permissions do I need to access the compliance dashboard?](#what-permissions-do-i-need-to-access-the-compliance-dashboard)
+- [The regulatory compliance dashboard isn't loading for me](#the-regulatory-compliance-dashboard-isnt-loading-for-me)
+- [How can I view a report of passing and failing controls per standard in my dashboard?](#how-can-i-view-a-report-of-passing-and-failing-controls-per-standard-in-my-dashboard)
+- [How can I download a report with compliance data in a format other than PDF?](#how-can-i-download-a-report-with-compliance-data-in-a-format-other-than-pdf)
+- [How can I create exceptions for some of the policies in the regulatory compliance dashboard?](#how-can-i-create-exceptions-for-some-of-the-policies-in-the-regulatory-compliance-dashboard)
+- [What Azure Defender plans or licenses do I need to use the regulatory compliance dashboard?](#what-azure-defender-plans-or-licenses-do-i-need-to-use-the-regulatory-compliance-dashboard)
+
+### What standards are supported in the compliance dashboard?
+By default, the regulatory compliance dashboard shows you the Azure Security Benchmark. The Azure Security Benchmark is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. Learn more in the [Azure Security Benchmark introduction](../security/benchmarks/introduction.md).
+
+To track your compliance with any other standard, you'll need to explicitly add it to your dashboard.
+
+You can add standards such as Azure CIS 1.1.0 (new), NIST SP 800-53 R4, NIST SP 800-171 R2, SWIFT CSP CSCF-v2020, UK Official and UK NHS, HIPAA HITRUST, Canada Federal PBMM, ISO 27001, SOC2-TSP, and PCI-DSS 3.2.1.
+
+As more standards are added to the dashboard, they'll be covered in [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
+
+### Why do some controls appear grayed out?
+For each compliance standard in the dashboard, there's a list of the standard's controls. For the applicable controls, you can view the details of passing and failing assessments.
+
+Some controls are grayed out. These controls don't have any Security Center assessments associated with them. Some may be procedure or process-related, and therefore can't be verified by Security Center. Some don't have any automated policies or assessments implemented yet, but will in the future. And some controls may be the platform's responsibility, as explained in [Shared responsibility in the cloud](../security/fundamentals/shared-responsibility.md).
+
+### How can I remove a built-in standard, like PCI-DSS, ISO 27001, or SOC2 TSP from the dashboard?
+To customize the regulatory compliance dashboard, and focus only on the standards that are applicable to you, you can remove any of the displayed regulatory standards that aren't relevant to your organization. To remove a standard, follow the instructions in [Remove a standard from your dashboard](update-regulatory-compliance-packages.md#remove-a-standard-from-your-dashboard).
+
+### I made the suggested changes based on the recommendation, yet it isn't being reflected in the dashboard
+After you take action to resolve recommendations, wait 12 hours to see the changes to your compliance data. Assessments are run approximately every 12 hours, so you will see the effect on your compliance data only after the assessments run.
+
+### What permissions do I need to access the compliance dashboard?
+To view compliance data, you need **Reader** access to the policy compliance data as well; **Security Reader** alone won't suffice. **Global reader** on the subscription is also sufficient.
+
+The minimum set of roles for accessing the dashboard and managing standards is **Resource Policy Contributor** and **Security Admin**.
+### The regulatory compliance dashboard isn't loading for me
+To use the regulatory compliance dashboard, Azure Security Center must have Azure Defender enabled at the subscription level. If the dashboard isn't loading correctly, try the following steps:
+
+1. Clear your browser's cache.
+1. Try a different browser.
1. Try opening the dashboard from a different network location.
+### How can I view a report of passing and failing controls per standard in my dashboard?
+On the main dashboard, you can see a report of passing and failing controls for (1) the four standards with the lowest compliance in the dashboard. To see the passing/failing controls status for all standards, select (2) **Show all *x*** (where x is the number of standards you're tracking). A context plane displays the compliance status for every one of your tracked standards.
+### How can I download a report with compliance data in a format other than PDF?
+When you select **Download report**, select the standard and the format (PDF or CSV). The resulting report will reflect the current set of subscriptions you've selected in the portal's filter.
+
+- The PDF report shows a summary status for the standard you selected
+- The CSV report provides detailed results per resource, as it relates to policies associated with each control
+
+Currently, there's no support for downloading a report for a custom policy; only for the supplied regulatory standards.
+
+### How can I create exceptions for some of the policies in the regulatory compliance dashboard?
+For policies that are built into Security Center and included in the secure score, you can create exemptions for one or more resources directly in the portal as explained in [Exempting resources and recommendations from your secure score](exempt-resource.md).
+
+For other policies, you can create an exemption directly in the policy itself, by following the instructions in [Azure Policy exemption structure](../governance/policy/concepts/exemption-structure.md).
+
+### What Azure Defender plans or licenses do I need to use the regulatory compliance dashboard?
+If you have any of the Azure Defender packages enabled on any of your Azure resource types, you have access to the regulatory compliance dashboard, with all of its data, in Security Center.
+
 ## Next steps
 
 In this tutorial, you learned about using Security Center's regulatory compliance dashboard to:
 
-- View and monitor your compliance posture regarding the standards and regulations that are important to you.
-- Improve your compliance status by resolving relevant recommendations and watching the compliance score improve.
+> [!div class="checklist"]
+> * View and monitor your compliance posture regarding the standards and regulations that are important to you.
+> * Improve your compliance status by resolving relevant recommendations and watching the compliance score improve.
The regulatory compliance dashboard can greatly simplify the compliance process, and significantly cut the time required for gathering compliance evidence for your Azure, hybrid, and multi-cloud environment.
 
 To learn more, see these related pages:
 
 - [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md) - Learn how to select which standards appear in your regulatory compliance dashboard.
-- [Security health monitoring in Azure Security Center](security-center-monitoring.md) - Learn how to monitor the health of your Azure resources.
-- [Managing security recommendations in Azure Security Center](security-center-recommendations.md) - Learn how to use recommendations in Azure Security Center to help protect your Azure resources.
+- [Managing security recommendations in Azure Security Center](security-center-recommendations.md) - Learn how to use recommendations in Security Center to help protect your Azure resources.
security-center https://docs.microsoft.com/en-us/azure/security-center/security-center-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-pricing.md
@@ -11,7 +11,7 @@ ms.devlang: na
na Previously updated : 01/26/2021 Last updated : 02/10/2021
security-center https://docs.microsoft.com/en-us/azure/security-center/update-regulatory-compliance-packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/update-regulatory-compliance-packages.md
@@ -87,7 +87,7 @@ The following steps explain how to add a package to monitor your compliance with
:::image type="content" source="./media/security-center-compliance-dashboard/compliance-dashboard.png" alt-text="Regulatory compliance dashboard" lightbox="./media/security-center-compliance-dashboard/compliance-dashboard.png":::
-## Removing a standard from your dashboard
+## Remove a standard from your dashboard
If any of the supplied regulatory standards isn't relevant to your organization, it's a simple process to remove them from the UI. This lets you further customize the regulatory compliance dashboard, and focus only on the standards that are applicable to you.
security https://docs.microsoft.com/en-us/azure/security/fundamentals/subdomain-takeover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/subdomain-takeover.md
@@ -116,7 +116,7 @@ If you're a global administrator of your organization's tenant, elevate your a
### Run the script
-Learn more about the PowerShell script, **Get-DanglingDnsRecords.ps1**, and download it from GitHub: https://aka.ms/DanglingDNSDomains.
+Learn more about the PowerShell script, **Get-DanglingDnsRecords.ps1**, and download it from GitHub: https://aka.ms/Get-DanglingDnsRecords.
## Remediate dangling DNS entries
sentinel https://docs.microsoft.com/en-us/azure/sentinel/create-custom-connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/create-custom-connector.md
@@ -0,0 +1,218 @@
+
+ Title: Resources for creating Azure Sentinel custom connectors | Microsoft Docs
+description: Learn about available resources for creating custom connectors for Azure Sentinel. Methods include the Log Analytics agent and API, Logstash, Logic Apps, PowerShell, and Azure Functions.
+
+documentationcenter: na
+
+editor: ''
+
+ms.devlang: na
+
+ na
+ Last updated : 02/09/2021+++
+# Resources for creating Azure Sentinel custom connectors
+
+Azure Sentinel provides a wide range of [built-in connectors for Azure services and external solutions](connect-data-sources.md), and also supports ingesting data from some sources without a dedicated connector.
+
+If you're unable to connect your data source to Azure Sentinel using any of the existing solutions available, consider creating your own data source connector.
+
+For a full list of supported connectors, see the [Azure Sentinel: The connectors grand (CEF, Syslog, Direct, Agent, Custom, and more)](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-the-connectors-grand-cef-syslog-direct-agent/ba-p/803891) blog post.
+
+## Compare methods for creating custom connectors
+
+The following table compares essential details about each method for creating custom connectors described in this article. Select the links in the table for more details about each method.
+
+|Method description |Capability | Serverless |Complexity |
+|||||
+|**[Log Analytics agent](#use-the-azure-monitor-log-analytics-agent-to-create-your-connector)** <br>Best for collecting files from on-premises and IaaS sources | File collection only | No |Low |
|**[Logstash](#use-logstash-to-create-your-connector)** <br>Best for on-premises and IaaS sources, any source for which a plugin is available, and organizations already familiar with Logstash | Available plugins, plus custom plugin capability, provide significant flexibility. | No; requires a VM or VM cluster to run | Low; supports many scenarios with plugins |
+|**[Logic Apps](#using-logic-apps-to-create-your-connector)** <br>High cost; avoid for high-volume data <br>Best for low-volume cloud sources | Codeless programming allows for limited flexibility, without support for implementing algorithms.<br><br> If no available action already supports your requirements, creating a custom action may add complexity. | Yes | Low; simple, codeless development |
+|**[PowerShell](#use-powershell-to-create-your-custom-connector)** <br>Best for prototyping and periodic file uploads | Direct support for file collection. <br><br>PowerShell can be used to collect more sources, but will require coding and configuring the script as a service. |No | Low |
+|**[Log Analytics API](#create-a-custom-connector-via-the-log-analytics-data-collector-api)** <br>Best for ISVs implementing integration, and for unique collection requirements | Supports all capabilities available with the code. | Depends on the implementation | High |
|**[Azure Functions](#use-azure-functions-to-create-your-custom-connector)** <br>Best for high-volume cloud sources, and for unique collection requirements | Supports all capabilities available with the code. | Yes | High; requires programming knowledge |
+| | | |
+
+> [!TIP]
+> For comparisons of using Logic Apps and Azure Functions for the same connector, see:
+>
+> - [Ingest Fastly Web Application Firewall logs into Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-fastly-web-application-firewall-logs-into-azure-sentinel/ba-p/1238804)
+> - Office 365 (Azure Sentinel GitHub community): [Logic App connector](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Get-O365Data) | [Azure Function connector](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/O365%20Data)
+>
+
+## Use the Azure Monitor Log Analytics agent to create your connector
+
+If your data source delivers events in files, we recommend that you use the Azure Monitor Log Analytics agent to create your custom connector.
+
+- For more information, see [Collecting custom logs in Azure Monitor](/azure/azure-monitor/platform/data-sources-custom-logs).
+
+- For an example of this method, see [Collecting custom JSON data sources with the Log Analytics agent for Linux in Azure Monitor](/azure/azure-monitor/platform/data-sources-json).
+
+## Use Logstash to create your connector
+
+If you're familiar with [Logstash](https://www.elastic.co/logstash), you may want to use Logstash with the [Logstash output plug-in for Azure Sentinel](connect-logstash.md) to create your custom connector.
+
+With the Azure Sentinel Logstash output plugin, you can use any Logstash input and filtering plugins, and configure Azure Sentinel as the output for a Logstash pipeline. Logstash has a large library of plugins that enable input from various sources, such as event hubs, Apache Kafka, files, databases, and cloud services. Use filtering plug-ins to parse events, filter unnecessary events, obfuscate values, and more.
+
+For examples of using Logstash as a custom connector, see:
+
+- [Hunting for Capital One Breach TTPs in AWS logs using Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/hunting-for-capital-one-breach-ttps-in-aws-logs-using-azure/ba-p/1019767) (blog)
+- [Radware Azure Sentinel implementation guide](https://support.radware.com/ci/okcsFattach/get/1025459_3)
+
+For examples of useful Logstash plugins, see:
+
+- [Cloudwatch input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-cloudwatch.html)
+- [Azure Event Hubs plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-azure_event_hubs.html)
+- [Google Cloud Storage input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-google_cloud_storage.html)
+- [Google_pubsub input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-google_pubsub.html)
+
+> [!TIP]
+> Logstash also enables scaled data collection using a cluster. For more information, see [Using a load-balanced Logstash VM at scale](https://techcommunity.microsoft.com/t5/azure-sentinel/scaling-up-syslog-cef-collection/ba-p/1185854).
+>
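+
+Putting the pieces together, a minimal Logstash pipeline for this scenario might look like the following sketch. The plugin name and option names follow the Azure Sentinel Logstash output plugin documentation; the input port, filter condition, table name, and workspace credentials are illustrative placeholders:

```ruby
# Sketch: ingest syslog events and forward them to Azure Sentinel (Log Analytics).
# Assumes the microsoft-logstash-output-azure-loganalytics plugin is installed.
input {
  syslog {
    port => 514                      # listen for syslog on port 514
  }
}
filter {
  # Drop noisy debug events before they consume workspace capacity
  if [message] =~ "DEBUG" { drop {} }
}
output {
  microsoft-logstash-output-azure-loganalytics {
    workspace_id          => "<your-workspace-id>"    # placeholder
    workspace_key         => "<your-workspace-key>"   # placeholder
    custom_log_table_name => "MyCustomLog"
  }
}
```

Because the destination is a custom log table, the data would appear in Log Analytics with the **_CL** suffix, as **MyCustomLog_CL**.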
+
+## Using Logic Apps to create your connector
+
+Use an [Azure Logic App](/azure/logic-apps/) to create a serverless, custom connector for Azure Sentinel.
+
+> [!NOTE]
+> While creating serverless connectors using Logic Apps may be convenient, using Logic Apps for your connectors may be costly for large volumes of data.
+>
+> We recommend that you use this method only for low-volume data sources, or enriching your data uploads.
+>
+
+1. **Use one of the following triggers to start your Logic App**:
+
+ |Trigger |Description |
+ |||
+ |**A recurring task** | For example, schedule your Logic App to retrieve data regularly from specific files, databases, or external APIs. <br>For more information, see [Create, schedule, and run recurring tasks and workflows in Azure Logic Apps](/azure/connectors/connectors-native-recurrence). |
+ |**On-demand triggering** | Run your Logic App on-demand for manual data collection and testing. <br>For more information, see [Call, trigger, or nest logic apps using HTTPS endpoints](/azure/logic-apps/logic-apps-http-endpoint). |
 |**HTTP/S endpoint** | Recommended for streaming, and if the source system can start the data transfer. <br>For more information, see [Call service endpoints over HTTP or HTTPS](/azure/connectors/connectors-native-http). |
+ | | |
+
+1. **Use any of the Logic App connectors that read information to get your events**. For example:
+
+ - [Connect to a REST API](/connectors/custom-connectors/)
+ - [Connect to a SQL Server](/connectors/sql/)
+ - [Connect to a file system](/connectors/filesystem/)
+
+ > [!TIP]
    > Custom connectors to REST APIs, SQL Servers, and file systems also support retrieving data from on-premises data sources. For more information, see the [Install on-premises data gateway](/connectors/filesystem/) documentation.
+ >
+
+1. **Prepare the information you want to retrieve**.
+
+ For example, use the [parse JSON action](/azure/logic-apps/logic-apps-perform-data-operations#parse-json-action) to access properties in JSON content, enabling you to select those properties from the dynamic content list when you specify inputs for your Logic App.
+
+ For more information, see [Perform data operations in Azure Logic Apps](/azure/logic-apps/logic-apps-perform-data-operations).
+
+1. **Write the data to Log Analytics**.
+
+ For more information, see the [Azure Log Analytics Data Collector](/connectors/azureloganalyticsdatacollector/) documentation.
+
+For examples of how you can create a custom connector for Azure Sentinel using Logic Apps, see:
+
+- [Create a data pipeline with the Data Collector API](/azure/azure-monitor/platform/create-pipeline-datacollector-api)
+- [Palo Alto Prisma Logic App connector using a webhook](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Ingest-Prisma) (Azure Sentinel GitHub community)
+- [Secure your Microsoft Teams calls with scheduled activation](https://techcommunity.microsoft.com/t5/azure-sentinel/secure-your-calls-monitoring-microsoft-teams-callrecords/ba-p/1574600) (blog)
+- [Ingesting AlienVault OTX threat indicators into Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/ingesting-alien-vault-otx-threat-indicators-into-azure-sentinel/ba-p/1086566) (blog)
+- [Sending Proofpoint TAP logs to Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-proofpoint-tap-logs-to-azure-sentinel/ba-p/767727) (blog)
+
+## Use PowerShell to create your custom connector
+
+The [Upload-AzMonitorLog PowerShell script](https://www.powershellgallery.com/packages/Upload-AzMonitorLog/) enables you to use PowerShell to stream events or context information to Azure Sentinel from the command line. This streaming effectively creates a custom connector between your data source and Azure Sentinel.
+
+For example, the following script uploads a CSV file to Azure Sentinel:
+
+```powershell
+Import-Csv .\testcsv.csv |
+  .\Upload-AzMonitorLog.ps1 `
+    -WorkspaceId '69f7ec3e-cae3-458d-b4ea-6975385-6e426' `
+    -WorkspaceKey $WSKey `
+    -LogTypeName 'MyNewCSV' `
+    -AddComputerName `
+    -AdditionalDataTaggingName "MyAdditionalField" `
+    -AdditionalDataTaggingValue "Foo"
+```
+
+The [Upload-AzMonitorLog PowerShell script](https://www.powershellgallery.com/packages/Upload-AzMonitorLog/) uses the following parameters:
+
+|Parameter |Description |
+|||
+|**WorkspaceId** | Your Azure Sentinel workspace ID, where you'll be storing your data. [Find your workspace ID and key](#find-your-workspace-id-and-key). |
+|**WorkspaceKey** | The primary or secondary key for the Azure Sentinel workspace where you'll be storing your data. [Find your workspace ID and key](#find-your-workspace-id-and-key). |
+|**LogTypeName** | The name of the custom log table where you want to store the data. A suffix of **_CL** will automatically be added to the end of your table name. |
+|**AddComputerName** | When this parameter exists, the script adds the current computer name to every log record, in a field named **Computer**. |
+|**TaggedAzureResourceId** | When this parameter exists, the script associates all uploaded log records with the specified Azure resource. <br><br>This association enables the uploaded log records for resource-context queries, and adheres to resource-centric, role-based access control. |
+|**AdditionalDataTaggingName** | When this parameter exists, the script adds another field to every log record, with the configured name, and the value that's configured for the **AdditionalDataTaggingValue** parameter. <br><br>In this case, **AdditionalDataTaggingValue** must not be empty. |
+|**AdditionalDataTaggingValue** | When this parameter exists, the script adds another field to every log record, with the configured value, and the field name configured for the **AdditionalDataTaggingName** parameter. <br><br>If the **AdditionalDataTaggingName** parameter is empty, but a value is configured, the default field name is **DataTagging**. |
+| | |
+
+### Find your workspace ID and key
+
+Find the details for the **WorkspaceID** and **WorkspaceKey** parameters in Azure Sentinel:
+
+1. In Azure Sentinel, select **Settings** on the left, and then select the **Workspace settings** tab.
+
+1. Under **Get started with Log Analytics** > **1 Connect a data source**, select **Windows and Linux agents management**.
+
+1. Find your workspace ID, primary key, and secondary key on the **Windows servers** tab.
+
## Create a custom connector via the Log Analytics Data Collector API
+
+You can stream events to Azure Sentinel by using the Log Analytics Data Collector API to call a RESTful endpoint directly.
+
+While calling a RESTful endpoint directly requires more programming, it also provides more flexibility.
+
+For more information, see the [Log Analytics Data collector API](/azure/azure-monitor/platform/data-collector-api), especially the following examples:
+
+- [C#](https://docs.microsoft.com/azure/azure-monitor/platform/data-collector-api#c-sample)
+- [Python 2](https://docs.microsoft.com/azure/azure-monitor/platform/data-collector-api#python-2-sample)
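+
+At its core, calling the endpoint means building an HMAC-SHA256 `SharedKey` authorization header and POSTing your JSON payload. The following Python sketch shows only the signing step, following the string-to-sign format documented in the Data Collector API reference; the workspace ID and key shown are placeholders, not real credentials:

```python
import base64
import hashlib
import hmac


def build_signature(workspace_id, shared_key, date, content_length,
                    method="POST", content_type="application/json",
                    resource="/api/logs"):
    """Build the SharedKey Authorization header for the Data Collector API."""
    # The string to sign combines the method, body length, content type,
    # x-ms-date header, and resource path, separated by newlines.
    string_to_hash = (f"{method}\n{content_length}\n{content_type}\n"
                      f"x-ms-date:{date}\n{resource}")
    decoded_key = base64.b64decode(shared_key)
    encoded_hash = base64.b64encode(
        hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                 digestmod=hashlib.sha256).digest()).decode()
    return f"SharedKey {workspace_id}:{encoded_hash}"


# Placeholder values, for illustration only
auth = build_signature(
    workspace_id="00000000-0000-0000-0000-000000000000",
    shared_key=base64.b64encode(b"not-a-real-key").decode(),
    date="Mon, 08 Feb 2021 12:00:00 GMT",
    content_length=52)
print(auth)
```

The resulting value is sent in the `Authorization` header, together with `Log-Type` and `x-ms-date` headers, when POSTing to the workspace's `api/logs` endpoint.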
+
+## Use Azure Functions to create your custom connector
+
+Use Azure Functions together with a RESTful API and various coding languages, such as [PowerShell](/azure/azure-functions/functions-reference-powershell), to create a serverless custom connector.
+
+For examples of this method, see:
+
+- [Connect your VMware Carbon Black Cloud Endpoint Standard to Azure Sentinel with Azure Function](connect-vmware-carbon-black.md)
+- [Connect your Okta Single Sign-On to Azure Sentinel with Azure Function](connect-okta-single-sign-on.md)
+- [Connect your Proofpoint TAP to Azure Sentinel with Azure Function](connect-proofpoint-tap.md)
+- [Connect your Qualys VM to Azure Sentinel with Azure Function](connect-qualys-vm.md)
+- [Ingesting XML, CSV, or other formats of data](/azure/azure-monitor/platform/create-pipeline-datacollector-api#ingesting-xml-csv-or-other-formats-of-data)
+- [Monitoring Zoom with Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) (blog)
+- [Deploy a Function App for getting Office 365 Management API data into Azure Sentinel](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/O365%20Data) (Azure Sentinel GitHub community)
+
+## Parsing your custom connector data
+
+You can use your custom connector's built-in parsing technique to extract the relevant information and populate the relevant fields in Azure Sentinel.
+
+For example:
+
+- **If you've used Logstash**, use the [Grok](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) filter plugin to parse your data.
+- **If you've used an Azure function**, parse your data with code. For more information, see [Parsers](normalization.md#parsers).
+
+Azure Sentinel supports parsing at query time. Parsing at query time enables you to push data in its original format, and then parse it on demand, when needed.
+
+Parsing at query time also means you don't need to know your data's exact structure ahead of time, when you create your custom connector, or even the information you'll need to extract. Instead, parse your data at any time, even during an investigation.
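+
+As an illustration of this on-demand approach, a hypothetical query-time parse might pull fields out of raw syslog messages only when a query needs them. The table and message format below are illustrative; the extraction uses the KQL `parse` operator:

```kusto
// Hypothetical: extract SSH sign-in fields from raw messages at query time.
// Nothing about this structure had to be known at ingestion time.
Syslog
| where SyslogMessage has "Failed password"
| parse SyslogMessage with * "Failed password for " User " from " SrcIp " port " SrcPort:int *
| project TimeGenerated, Computer, User, SrcIp, SrcPort
```

Saving such a query as a Log Analytics function turns it into a reusable parser that you can call by its alias from any other query.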
+
+> [!NOTE]
+> Updating your parser also applies to data that you've already ingested into Azure Sentinel.
+>
+## Next steps
+
+Use the data ingested into Azure Sentinel to secure your environment with any of the following processes:
+
+- [Get visibility into alerts](quickstart-get-visibility.md)
+- [Visualize and monitor your data](tutorial-monitor-your-data.md)
+- [Investigate incidents](tutorial-investigate-cases.md)
+- [Detect threats](tutorial-detect-threats-built-in.md)
+- [Automate threat prevention](tutorial-respond-threats-playbook.md)
+- [Hunt for threats](hunting.md)
sentinel https://docs.microsoft.com/en-us/azure/sentinel/data-source-schema-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/data-source-schema-reference.md
@@ -32,6 +32,9 @@ This article lists supported Azure and third-party data source schemas, with lin
| **Network** | NSG Flow Logs | AzureNetworkAnalytics | [Schema and data aggregation in Traffic Analytics](/azure/network-watcher/traffic-analytics-schema) | | | | | |
+> [!NOTE]
+> For more information, see the entire [Azure Monitor data reference](/azure/azure-monitor/reference/).
+>
## 3rd-party vendor data sources The following table lists supported third-party vendors and their Syslog or Common Event Format (CEF)-mapping documentation for various supported log types, which contain CEF field mappings and sample logs for each category type.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/normalization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization.md
@@ -67,6 +67,9 @@ The schema reference also includes value and format standardization. The source
## Parsers
+- [What is parsing](#what-is-parsing)
+- [Using query time parsers](#using-query-time-parsers)
+ ### What is parsing With a base set of defined normalized tables available, you will need to transform (parse/map) your data into those tables. That is, you will extract specific data from its raw form into well-known columns in the normalized schema. Parsing in Azure Sentinel happens at **query time** - parsers are built as Log Analytics user functions (using Kusto Query Language - KQL) that transform data in existing tables (such as CommonSecurityLog, custom logs tables, syslog) into the normalized tables schema.
@@ -75,6 +78,10 @@ The other kind of parsing, not yet supported in Azure Sentinel, is at **ingestio
### Using query time parsers
+- [Installing a parser](#installing-a-parser)
+- [Using the parsers](#using-the-parsers)
+- [Customizing parsers](#customizing-parsers)
+ #### Installing a parser The available query time parsers are available in the Azure Sentinel [official GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers/Normalized%20Schema%20-%20Networking%20(v1.0.0)). Each parser is versioned to allow customers to use and monitor for future updates easily. To install a parser:
@@ -116,6 +123,12 @@ On the pane on the right, expand the "Saved queries" section and find the 'Nor
You can click on each individual parser and see the underlying function it uses, and run it (or access it directly by its alias, as described above). Note that some parsers can retain the original fields side-by-side to the normalized fields for convenience. This can be easily edited in the parser's query.
+> [!TIP]
+> You can use your saved functions instead of Azure Sentinel tables in any query, including hunting and detection queries. For more information, see:
+>
+> - [Data normalization in Azure Sentinel](normalization.md#parsers)
+> - [Parse text in Azure Monitor logs](/azure/azure-monitor/log-query/parse-text)
+>
#### Customizing parsers You can repeat the above steps (finding the parser in query explorer), click on the relevant parser and see its function implementation.
@@ -129,6 +142,8 @@ Once the function is altered, click "Save" again and use the same name, alia
#### Additional information
+JSON, XML, and CSV are especially convenient for parsing at query time. Azure Sentinel has built-in parsing functions for JSON, XML, and CSV, as well as a JSON parsing tool. For more information, see [Using JSON fields in Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/tip-easily-use-json-fields-in-sentinel/ba-p/768747) (blog).
+ Learn more about [saved queries](../azure-monitor/log-query/example-queries.md) (the query time parsers implementation) in Log Analytics.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/resources.md
@@ -71,15 +71,6 @@ You can view this data by streaming it from the Azure Activity log into Azure Se
``` -
-## Vendor documentation
-
-| **Vendor** | **Use incident in Azure Sentinel** | **Link**|
-|-|-|-|
-| GitHub| Used to access Community page| <https://github.com/Azure/Azure-Sentinel> |
-| PaloAlto| Configure CEF| <https://www.paloaltonetworks.com/documentation/misc/cef.html>|
-| PluralSight | Kusto Query Language course| [https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch](https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch)|
- ## Blogs and forums We love hearing from our users!
@@ -93,8 +84,6 @@ We love hearing from our users!
- [TechCommunity](https://techcommunity.microsoft.com/t5/Azure-Sentinel/bg-p/AzureSentinelBlog) - [Microsoft Azure](https://azure.microsoft.com/blog/tag/azure-sentinel/)
-For more information about Azure security and compliance, see the [Microsoft Azure Security and Compliance blog](https://techcommunity.microsoft.com/t5/microsoft-security-and/bg-p/MicrosoftSecurityandCompliance).
- ## Next steps
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-deployment-planner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-deployment-planner.md
@@ -61,7 +61,7 @@ The tool provides the following details:
| **Category** | **VMware to Azure** |**Hyper-V to Azure**|**Azure to Azure**|**Hyper-V to secondary site**|**VMware to secondary site** --|--|--|--|--|-- Supported scenarios |Yes|Yes|No|Yes*|No
-Supported version | vCenter 6.7, 6.5, 6.0 or 5.5| Windows Server 2016, Windows Server 2012 R2 | NA |Windows Server 2016, Windows Server 2012 R2|NA
+Supported version | vCenter 7.0, 6.7, 6.5, 6.0 or 5.5| Windows Server 2016, Windows Server 2012 R2 | NA |Windows Server 2016, Windows Server 2012 R2|NA
Supported configuration|vCenter, ESXi| Hyper-V cluster, Hyper-V host|NA|Hyper-V cluster, Hyper-V host|NA| Number of servers that can be profiled per running instance of Site Recovery Deployment Planner |Single (VMs belonging to one vCenter Server or one ESXi server can be profiled at a time)|Multiple (VMs across multiple hosts or host clusters can be profiled at a time)| NA |Multiple (VMs across multiple hosts or host clusters can be profiled at a time)| NA
@@ -120,4 +120,4 @@ The latest Site Recovery Deployment Planner tool version is 2.5.
See the [Site Recovery Deployment Planner version history](./site-recovery-deployment-planner-history.md) page for the fixes that are added in each update. ## Next steps
-[Run Site Recovery Deployment Planner](site-recovery-vmware-deployment-planner-run.md)
+[Run Site Recovery Deployment Planner](site-recovery-vmware-deployment-planner-run.md)
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/vmware-physical-azure-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-physical-azure-support-matrix.md
@@ -176,6 +176,7 @@ Add disk on replicated VM | Not supported.<br/> Disable replication for the VM,
> [!NOTE] > Any change to disk identity is not supported. For example, if the disk partitioning has been changed from GPT to MBR or vice versa, then this will change the disk identity. In such a scenario, the replication will break and a fresh setup will be required.
+> For Linux machines, device name change is not supported as it has an impact on the disk identity.
## Network
storage https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-change-feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-change-feed.md
@@ -154,7 +154,7 @@ See [Process change feed logs in Azure Blob Storage](storage-blob-change-feed-ho
The change feed is a log of changes that are organized into **hourly** *segments* but appended to and updated every few minutes. These segments are created only when there are blob change events that occur in that hour. This enables your client application to consume changes that occur within specific ranges of time without having to search through the entire log. To learn more, see the [Specifications](#specifications).
-An available hourly segment of the change feed is described in a manifest file that specifies the paths to the change feed files for that segment. The listing of the `$blobchangefeed/idx/segments/` virtual directory shows these segments ordered by time. The path of the segment describes the start of the hourly time-range that the segment represents. You can use that list to filter out the segments of logs that are interest to you.
+An available hourly segment of the change feed is described in a manifest file that specifies the paths to the change feed files for that segment. The listing of the `$blobchangefeed/idx/segments/` virtual directory shows these segments ordered by time. The path of the segment describes the start of the hourly time-range that the segment represents. You can use that list to filter out the segments of logs that are of interest to you.
```text Name Blob Type Blob Tier Length Content Type
@@ -314,4 +314,4 @@ You can leverage both features as Change feed and [Blob storage events](storage-
- See an example of how to read the change feed by using a .NET client application. See [Process change feed logs in Azure Blob Storage](storage-blob-change-feed-how-to.md).
 - Learn about how to react to events in real time. See [Reacting to Blob Storage events](storage-blob-event-overview.md)
-- Learn more about detailed logging information for both successful and failed operations for all requests. See [Azure Storage analytics logging](../common/storage-analytics-logging.md)
+- Learn more about detailed logging information for both successful and failed operations for all requests. See [Azure Storage analytics logging](../common/storage-analytics-logging.md)
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-redundancy.md
@@ -182,6 +182,14 @@ The following table indicates whether your data is durable and available in a gi
<sup>1</sup> Account failover is required to restore write availability if the primary region becomes unavailable. For more information, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
+### Supported Azure Storage services
+
+The following table shows which redundancy options are supported by each Azure Storage service.
+
+| LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
+|:-|:-|:-|:-|
+| Blob storage<br />Queue storage<br />Table storage<br />Azure Files<br />Azure managed disks | Blob storage<br />Queue storage<br />Table storage<br />Azure Files | Blob storage<br />Queue storage<br />Table storage<br />Azure Files<br /> | Blob storage<br />Queue storage<br />Table storage<br />Azure Files<br /> |
+ ### Supported storage account types The following table shows which redundancy options are supported by each type of storage account. For information for storage account types, see [Storage account overview](storage-account-overview.md).
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-files-migration-storsimple-8000 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-migration-storsimple-8000.md
@@ -28,12 +28,12 @@ When you begin planning your migration, first identify all the StorSimple applia
### Migration cost summary
-Migrations to Azure file shares from StorSimple volumes via data transformation service jobs in a StorSimple Data Manager resource are free of charge. Other costs might be incurred during and after a migration:
+Migrations to Azure file shares from StorSimple volumes via migration jobs in a StorSimple Data Manager resource are free of charge. Other costs might be incurred during and after a migration:
* **Network egress:** Your StorSimple files live in a storage account within a specific Azure region. If you provision the Azure file shares you migrate into a storage account that's located in the same Azure region, no egress cost will occur. You can move your files to a storage account in a different region as part of this migration. In that case, egress costs will apply to you.
* **Azure file share transactions:** When files are copied into an Azure file share (as part of a migration or outside of one), transaction costs apply as files and metadata are being written. As a best practice, start your Azure file share on the transaction optimized tier during the migration. Switch to your desired tier after the migration is finished. The following phases will call this out at the appropriate point.
* **Change an Azure file share tier:** Changing the tier of an Azure file share costs transactions. In most cases, it will be more cost efficient to follow the advice from the previous point.
-* **Storage cost:** When this migration starts copying files into an Azure file share, Azure Files storage is consumed and billed.
+* **Storage cost:** When this migration starts copying files into an Azure file share, Azure Files storage is consumed and billed. Migrated backups will become [Azure file share snapshots](storage-snapshots-files.md). File share snapshots only consume storage capacity for the differences they contain.
* **StorSimple:** Until you have a chance to deprovision the StorSimple devices and storage accounts, StorSimple cost for storage, backups, and appliances will continue to occur.

### Direct-share-access vs. Azure File Sync
@@ -44,7 +44,7 @@ An alternative to direct access is [Azure File Sync](./storage-sync-files-planni
Azure File Sync is a Microsoft cloud service, based on two main components:
-* File synchronization and cloud tiering.
+* File synchronization and cloud tiering to create a performance access cache on any Windows Server.
* File shares as native storage in Azure that can be accessed over multiple protocols like SMB and file REST.

Azure file shares retain important file fidelity aspects on stored files like attributes, permissions, and timestamps. With Azure file shares, there's no longer a need for an application or service to interpret the files and folders stored in the cloud. You can access them natively over familiar protocols and clients like Windows File Explorer. Azure file shares allow you to store general-purpose file server data and application data in the cloud. Backup of an Azure file share is a built-in functionality and can be further enhanced by Azure Backup.
@@ -54,16 +54,16 @@ This article focuses on the migration steps. If you want to learn more about Azu
* [Azure File Sync overview](./storage-sync-files-planning.md "Overview")
* [Azure File Sync deployment guide](storage-sync-files-deployment-guide.md)
-### StorSimple Service Data Encryption Key
+### StorSimple service data encryption key
-When you first set up your StorSimple appliance, it generated a Service Data Encryption Key and instructed you to securely store the key. This key is used to encrypt all data in the associated Azure storage account where the StorSimple appliance stores your files.
+When you first set up your StorSimple appliance, it generated a "service data encryption key" and instructed you to securely store the key. This key is used to encrypt all data in the associated Azure storage account where the StorSimple appliance stores your files.
-The Service Data Encryption Key is necessary for a successful migration. Now is a good time to retrieve this key from your records, for each of the appliances in your inventory.
+The "service data encryption key" is necessary for a successful migration. Now is a good time to retrieve this key from your records, one for each of the appliances in your inventory.
If you can't find the keys in your records, you can retrieve the key from the appliance. Each appliance has a unique encryption key. To retrieve the key:
-* File a support request with Microsoft Azure through the Azure portal. The content of the request should have the StorSimple device serial numbers and the request to retrieve the "Service Data Encryption Key."
-* A StorSimple support engineer will contact you with a request for a screen sharing meeting.
+* File a support request with Microsoft Azure through the Azure portal. The request should contain your StorSimple device serial number(s) and a request to retrieve the "service data encryption key."
+* A StorSimple support engineer will contact you with a request for a virtual meeting.
* Ensure that before the meeting begins, you connect to your StorSimple appliance [via a serial console](../../storsimple/storsimple-8000-windows-powershell-administration.md#connect-to-windows-powershell-for-storsimple-via-the-device-serial-console) or through a [remote PowerShell session](../../storsimple/storsimple-8000-windows-powershell-administration.md#connect-remotely-to-storsimple-using-windows-powershell-for-storsimple).

> [!CAUTION]
@@ -76,15 +76,21 @@ If you can't find the keys in your records, you can retrieve the key from the ap
### StorSimple volume backups StorSimple offers differential backups on the volume level. Azure file shares also have this ability, called share snapshots.
+Your migration jobs can only move backups, not data from the live volume. So the most recent backup should always be on the list of backups moved in a migration.
-Decide if as part of your migration, you also have an obligation to move any backups.
+Decide if you need to move any older backups during your migration.
+Best practice is to keep this list as small as possible, so your migration jobs complete faster.
-> [!CAUTION]
-> Stop here if you must migrate backups from StorSimple volumes.
->
-> You can currently only migrate your most recent volume backup. Support for backup migration will arrive at the end of 2020. If you start now, you can't "bolt on" your backups later. In the upcoming version, backups must be "played back" to the Azure file shares from oldest to newest, with Azure file share snapshots taken in between.
+To identify critical backups that must be migrated, make a checklist of your backup policies. For instance:
+* The most recent backup. (Note: The most recent backup should always be part of this list.)
+* One backup a month for 12 months.
+* One backup a year for three years.
+
+Later on, when you create your migration jobs, you can use this list to identify the exact StorSimple volume backups that must be migrated to satisfy the requirements on your list.
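The retention checklist above can be turned into a small selection helper. This is a hypothetical sketch of the idea only (`select_backups` is not a Data Manager API); adapt the rules to your own backup policies:

```python
from datetime import date

def select_backups(backups, keep_monthly=12, keep_yearly=3):
    """Pick backups to migrate per the sample checklist above: the most
    recent backup (always required), the newest backup in each of the
    last `keep_monthly` months, and the newest backup in each of the
    last `keep_yearly` years. `backups` is a list of datetime.date."""
    backups = sorted(backups)
    chosen = {backups[-1]}  # the most recent backup must always be migrated
    by_month, by_year = {}, {}
    for b in backups:
        by_month[(b.year, b.month)] = b  # newest backup wins per month
        by_year[b.year] = b              # newest backup wins per year
    chosen.update(sorted(by_month.values())[-keep_monthly:])
    chosen.update(sorted(by_year.values())[-keep_yearly:])
    return sorted(chosen)
```

Keeping the selection small, as recommended above, makes migration jobs complete faster.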
-If you want to migrate only the live data and have no requirements for backups, you can continue following this guide. If you have a short-term backup retention requirement of, say, a month or two, you can decide to continue your migration now and deprovision your StorSimple resources after that period. This approach allows you to create as much backup history on the Azure file share side as you need. For the time you keep both systems running, additional cost applies, which makes this approach one you shouldn't consider if you need more than short-term backup retention.
+> [!CAUTION]
+> Selecting more than **50** StorSimple volume backups is not supported.
+> Your migration jobs can only move backups, never data from the live volume. Therefore the most recent backup is closest to the live data and should always be part of the list of backups to be moved in a migration.
### Map your existing StorSimple volumes to Azure file shares
@@ -94,31 +100,26 @@ If you want to migrate only the live data and have no requirements for backups,
Your migration will likely benefit from a deployment of multiple storage accounts that each hold a smaller number of Azure file shares.
-If your file shares are highly active (utilized by many users or applications), two Azure file shares might reach the performance limit of your storage account. Because of this, the best practice is to migrate to multiple storage accounts, each with their own individual file shares and typically no more than two or three shares per storage account.
+If your file shares are highly active (utilized by many users or applications), two Azure file shares might reach the performance limit of your storage account. Because of this, the best practice is to migrate to multiple storage accounts, each with their own individual file shares, and typically no more than two or three shares per storage account.
A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares into the same storage account if they are archival shares.
-These considerations apply more to [direct cloud access](#direct-share-access-vs-azure-file-sync) (through an Azure VM or service) than to Azure File Sync. If you plan to use Azure File Sync only on these shares, grouping several into a single Azure storage account is fine. Also consider you might want to lift and shift an app into the cloud that would then directly access a file share. Or you could start using a service in Azure that would also benefit from having higher IOPS and throughput numbers available.
+These considerations apply more to [direct cloud access](#direct-share-access-vs-azure-file-sync) (through an Azure VM or service) than to Azure File Sync. If you plan to exclusively use Azure File Sync on these shares, grouping several into a single Azure storage account is fine. In the future, you may want to lift and shift an app into the cloud that would then directly access a file share; that scenario, like adopting an Azure service that works against the share, would benefit from having higher IOPS and throughput available.
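The mapping guidance above (one storage account per active share, a few archival shares pooled per account) can be sketched as a planning helper. This is purely illustrative; `plan_storage_accounts` is a hypothetical function, not an Azure API:

```python
def plan_storage_accounts(shares, archive_max=3):
    """Group shares into storage accounts per the best practice above:
    each active share gets its own account; archival (low-activity)
    shares may be pooled, up to `archive_max` per account.
    `shares` is a list of (share_name, is_archival) tuples."""
    accounts, archive_bucket = [], []
    for name, is_archival in shares:
        if is_archival:
            archive_bucket.append(name)
            if len(archive_bucket) == archive_max:
                accounts.append(archive_bucket)
                archive_bucket = []
        else:
            accounts.append([name])  # one active share per storage account
    if archive_bucket:
        accounts.append(archive_bucket)
    return accounts
```

Each inner list represents the file shares placed in one storage account.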
If you've made a list of your shares, map each share to the storage account where it will reside.

> [!IMPORTANT]
> Decide on an Azure region, and ensure each storage account and Azure File Sync resource matches the region you selected.
+> Don't configure network and firewall settings for the storage accounts now. Making these configurations at this point would make a migration impossible. Configure these Azure storage settings after the migration is complete.
### Phase 1 summary

At the end of Phase 1:

* You have a good overview of your StorSimple devices and volumes.
-* The data transformation service is ready to access your StorSimple volumes in the cloud because you've retrieved your Service Data Encryption key for each StorSimple device.
-* You have a plan for which volumes need to be migrated and also how to map your volumes to the appropriate number of Azure file shares and storage accounts.
-
-> [!CAUTION]
-> If you must migrate backups from StorSimple volumes, **STOP HERE**.
->
-> This migration approach relies on new data transformation service capabilities that currently can't migrate backups. Support for backup migration will arrive at the end of 2020. You can currently only migrate your live data. If you start now, you can't "bolt on" your backups later. Backups must be "played back" to the Azure file shares from oldest to newest to live data, with Azure file share snapshots in between.
-
-If you want to migrate only the live data and have no requirements for backups, you can continue following this guide.
+* The Data Manager service is ready to access your StorSimple volumes in the cloud because you've retrieved your "service data encryption key" for each StorSimple device.
+* You have a plan for which volumes and backups (if any beyond the most recent) need to be migrated.
+* You know how to map your volumes to the appropriate number of Azure file shares and storage accounts.
## Phase 2: Deploy Azure storage and migration resources
@@ -128,9 +129,12 @@ This section discusses considerations around deploying the different resource ty
You'll likely need to deploy several Azure storage accounts. Each one will hold a smaller number of Azure file shares, as per your deployment plan, completed in the previous section of this article. Go to the Azure portal to [deploy your planned storage accounts](../common/storage-account-create.md#create-a-storage-account). Consider adhering to the following basic settings for any new storage account.
+> [!IMPORTANT]
+> Do not configure network and firewall settings for your storage accounts now. Making those configurations at this point would make a migration impossible. Configure these Azure storage settings after the migration is complete.
+ #### Subscription
-You can use the same subscription you used for your StorSimple deployment or a different one. The only limitation is that your subscription must be in the same Azure Active Directory tenant as the StorSimple subscription. Consider moving the StorSimple subscription to the correct tenant before you start a migration. You can only move the entire subscription. Individual StorSimple resources can't be moved to a different tenant or subscription.
+You can use the same subscription you used for your StorSimple deployment or a different one. The only limitation is that your subscription must be in the same Azure Active Directory tenant as the StorSimple subscription. Consider moving the StorSimple subscription to the appropriate tenant before you start a migration. You can only move the entire subscription; individual StorSimple resources can't be moved to a different tenant or subscription.
#### Resource group
@@ -192,7 +196,7 @@ Opting for the large, 100-TiB-capacity file shares has several benefits:
* Your performance is greatly increased as compared to the smaller 5-TiB-capacity file shares (for example, 10 times the IOPS).
* Your migration will finish significantly faster.
-* You ensure that a file share will have enough capacity to hold all the data you'll migrate into it.
+* You ensure that a file share will have enough capacity to hold all the data you'll migrate into it, including the storage capacity differential backups require.
* Future growth is covered.

### Azure file shares
@@ -227,24 +231,57 @@ At the end of Phase 2, you'll have deployed your storage accounts and all Azure
## Phase 3: Create and run a migration job
-This section describes how to set up a migration job and carefully map the directories on a StorSimple volume that should be copied into the target Azure file share you select. To get started, go to your StorSimple Data Manager, find **Job definitions** on the menu, and select **+ Job definition**. The target storage type is the default **Azure file share**.
+This section describes how to set up a migration job and carefully map the directories on a StorSimple volume that should be copied into the target Azure file share you select. To get started, go to your StorSimple Data Manager, find **Job definitions** on the menu, and select **+ Job definition**. The correct target storage type is the default: **Azure file share**.
![StorSimple 8000 series migration job types.](media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-new-job-type.png "A screenshot of the Job definitions Azure portal with a new Job definitions dialog box opened that asks for the type of job: Copy to a file share or a blob container.")
-> [!IMPORTANT]
-> Before you run any migration job, stop any automatically scheduled backups of your StorSimple volumes.
+ :::column:::
+ ![StorSimple 8000 series migration job.](media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-new-job.png "A screenshot of the new job creation form for a migration job.")
+ :::column-end:::
+ :::column:::
+ **Job definition name**</br>This name should indicate the set of files you're moving. Giving it a name similar to your Azure file share's is a good practice. </br></br>**Location where the job runs**</br>When selecting a region, you must select the same region as your StorSimple storage account or, if that isn't available, then a region close to it. </br></br><h3>Source</h3>**Source subscription**</br>Select the subscription in which you store your StorSimple Device Manager resource. </br></br>**StorSimple resource**</br>Select the StorSimple Device Manager that your appliance is registered with. </br></br>**Service data encryption key**</br>Check this [prior section in this article](#storsimple-service-data-encryption-key) in case you can't locate the key in your records. </br></br>**Device**</br>Select the StorSimple device that holds the volume you want to migrate. </br></br>**Volume**</br>Select the source volume. Later you'll decide if you want to migrate the whole volume or subdirectories into the target Azure file share.</br></br> **Volume backups**</br>You can select *Select volume backups* to choose specific backups to move as part of this job. An upcoming [dedicated section in this article](#selecting-volume-backups-to-migrate) covers the process in detail.</br></br><h3>Target</h3>Select the subscription, storage account, and Azure file share as the target of this migration job.</br></br><h3>Directory mapping</h3>[A dedicated section in this article](#directory-mapping) discusses all relevant details.
+ :::column-end:::
+
+### Selecting volume backups to migrate
+
+There are important aspects around choosing backups that need to be migrated:
+
+- Your migration jobs can only move backups, not data from a live volume. So the most recent backup is closest to the live data and should always be on the list of backups moved in a migration.
+- Make sure your latest backup is recent, to keep the delta to the live share as small as possible. It could be worth manually triggering and completing another volume backup before creating a migration job. A small delta to the live share will improve your migration experience. If this delta is zero (no changes were made to the StorSimple volume after the newest backup in your list was taken), then Phase 5: User cut-over will be drastically simplified and sped up.
+- Backups must be played back into the Azure file share **from oldest to newest**. An older backup cannot be "sorted into" the list of backups on the Azure file share after a migration job has run. Therefore you must ensure that your list of backups is complete *before* you create a job.
+- This list of backups in a job cannot be modified once the job is created - even if the job never ran.
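The selection and ordering rules above can be sketched as a small validation helper. This is a hypothetical illustration (`playback_order` is not part of the Data Manager API); the 50-backup limit comes from the caution later in this section:

```python
MAX_BACKUPS_PER_JOB = 50  # selecting more than 50 volume backups is not supported

def playback_order(selected_backups):
    """Validate a backup selection and return the order in which backups
    must be played back into the Azure file share: strictly oldest to
    newest. The most recent backup should always be in the selection."""
    if not selected_backups:
        raise ValueError("select at least the most recent backup")
    if len(selected_backups) > MAX_BACKUPS_PER_JOB:
        raise ValueError("selecting more than 50 volume backups is not supported")
    return sorted(selected_backups)  # oldest first, newest last
```

Because an older backup can't be "sorted into" the share after a job has run, validating the full list *before* creating the job matters.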
:::row:::
+ :::column:::
+ :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups.png" alt-text="A screenshot of the new job creation form detailing the portion where StorSimple backups are selected for migration." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-expanded.png":::
+ :::column-end:::
:::column:::
- ![StorSimple 8000 series migration job.](media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-new-job.png "A screenshot of the new job creation form for a data transformation service job.")
+ To select backups of your StorSimple volume for your migration job, select *Select volume backups* on the job creation form.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-annotated.png" alt-text="An image showing that the upper half of the blade for selecting backups lists all available backups. A selected backup will be grayed-out in this list and added to a second list on the lower half of the blade. There it can also be deleted again." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-annotated.png":::
    :::column-end:::
    :::column:::
- **Job definition name**</br>This name should indicate the set of files you're moving. Giving it a similar name as your Azure file share is a good practice. </br></br>**Location where the job runs**</br>When selecting a region, you must select the same region as your StorSimple storage account or, if that isn't available, then a region close to it. </br></br><h3>Source</h3>**Source subscription**</br>Select the subscription in which you store your StorSimple Device Manager resource. </br></br>**StorSimple resource**</br>Select your StorSimple Device Manager your appliance is registered with. </br></br>**Service data encryption key**</br>Check this [prior section in this article](#storsimple-service-data-encryption-key) in case you can't locate the key in your records. </br></br>**Device**</br>Select your StorSimple device that holds the volume where you want to migrate. </br></br>**Volume**</br>Select the source volume. Later you'll decide if you want to migrate the whole volume or subdirectories into the target Azure file share. </br></br><h3>Target</h3>Select the subscription, storage account, and Azure file share as the target of this migration job.
+ When the backup selection blade opens, it is separated into two lists. The first list displays all available backups. You can expand and narrow the result set by filtering for a specific time range (see the next section). </br></br>A selected backup displays as grayed out and is added to a second list on the lower half of the blade. The second list displays all the backups selected for migration. A backup selected in error can be removed again.
+ > [!CAUTION]
+ > You must select **all** backups you wish to migrate. You cannot add older backups later on. You cannot modify the job to change your selection once the job is created.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-time.png" alt-text="A screenshot showing the selection of a time range of the backup selection blade." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-time-expanded.png":::
+ :::column-end:::
+ :::column:::
+ By default, the list is filtered to show the StorSimple volume backups within the past seven days to make it easy to select the most recent backup. For backups further in the past, use the time range filter at the top of the blade. You can either select from an existing filter or set a custom time range to filter for only the backups taken during this period.
    :::column-end:::
:::row-end:::
-> [!IMPORTANT]
-> The latest volume backup will be used to perform the migration. Ensure at least one volume backup is present or the job will fail. Also ensure that the latest backup you have is fairly recent to keep the delta to the live share as small as possible. It could be worth manually triggering and completing another volume backup *before* running the job you just created.
+> [!CAUTION]
+> Selecting more than 50 StorSimple volume backups is not supported. Jobs with a large number of backups may fail.
### Directory mapping
@@ -305,11 +342,30 @@ Sorts multiple source locations into a new directory structure:
* Like Windows, folder names are case insensitive but case preserving.

> [!NOTE]
-> Contents of the *\System Volume Information* folder and the *$Recycle.Bin* on your StorSimple volume won't be copied by the transformation job.
+> Contents of the *\System Volume Information* folder and the *$Recycle.Bin* on your StorSimple volume won't be copied by the migration job.
+
+### Run a migration job
+
+Your migration jobs are listed under *Job definitions* in the Data Manager resource you've deployed to a resource group.
+From the list of job definitions, select the job you want to run.
+
+In the job blade that opens, you can see your job runs in the lower list. Initially, this list will be empty. At the top of the blade, there is a command called *Run job*. This command doesn't immediately run the job; it opens the **Job run** blade:
+
+ :::column:::
+ :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-run-job.png" alt-text="An image showing the job run blade with a dropdown control opened, displaying the selected backups to be migrated. The oldest backup is highlighted, it needs to be selected first." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-run-job-expanded.png":::
+ :::column-end:::
+ :::column:::
+ In this release, each job must be run several times. </br></br>**You must start with the oldest backup from your list of backups you wish to migrate.** (highlighted in the image)</br></br>You run the job again, as many times as you have backups selected, each time against a progressively newer backup.
+ </br></br>
+ > [!CAUTION]
+ > It is imperative that you run the migration job with the oldest backup selected first and then again, each time with a progressively newer backup. You always must maintain the order of your backups manually - from oldest to newest.
+ :::column-end:::
### Phase 3 summary
-At the end of Phase 3, you'll have run your data transformation service jobs from StorSimple volumes into Azure file shares. You can now focus on either setting up Azure File Sync for the share (after the migration jobs for a share have completed) or directing share access for your information workers and apps to the Azure file share.
+At the end of Phase 3, you'll have run at least one of your migration jobs from StorSimple volumes into Azure file shares. You will have run the same migration job several times, once per backup to be migrated, from oldest to newest. You can now focus on either setting up Azure File Sync for the share (once migration jobs for a share have completed) or directing share access for your information workers and apps to the Azure file share.
## Phase 4: Access your Azure file shares
@@ -386,21 +442,21 @@ Your registered on-premises Windows Server instance must be ready and connected
### Phase 4 summary
-In this phase, you've created and run multiple data transformation service jobs in your StorSimple Data Manager. Those jobs have migrated your files and folders to Azure file shares. You've also deployed Azure File Sync or prepared your network and storage accounts for direct-share-access.
+In this phase, you've created and run multiple migration jobs in your StorSimple Data Manager. Those jobs have migrated your files and folders to Azure file shares. You've also deployed Azure File Sync or prepared your network and storage accounts for direct-share-access.
## Phase 5: User cut-over

This phase is all about wrapping up your migration:

* Plan your downtime.
-* Catch up with any changes your users and apps produced on the StorSimple side while the data transformation jobs in Phase 3 have been running.
+* Catch up with any changes your users and apps produced on the StorSimple side while the migration jobs in Phase 3 have been running.
* Fail your users over to the new Windows Server instance with Azure File Sync or the Azure file shares via direct-share-access.

### Plan your downtime

This migration approach requires some downtime for your users and apps. The goal is to keep downtime to a minimum. The following considerations can help:
-* Keep your StorSimple volumes available while running your data transformation jobs.
+* Keep your StorSimple volumes available while running your migration jobs.
* When you've finished running your data migration jobs for a share, it's time to remove user access (at least write access) from the StorSimple volumes or shares. A final RoboCopy will catch up your Azure file share. Then you can cut over your users. Where you run RoboCopy depends on whether you chose to use Azure File Sync or direct-share-access. The upcoming section on RoboCopy covers that subject.
* After you've completed the RoboCopy catch-up, you're ready to expose the new location to your users by either the Azure file share directly or an SMB share on a Windows Server instance with Azure File Sync. Often a DFS-N deployment will help accomplish a cut-over quickly and efficiently. It will keep your existing share addresses consistent and repoint to a new location that contains your migrated files and folders.
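The final RoboCopy catch-up is essentially a one-way "copy what's new or newer" pass from the source to the target. A toy Python sketch of that logic follows, for intuition only; for real migrations use RoboCopy itself, which also preserves ACLs, attributes, and timestamps that this sketch ignores:

```python
import os
import shutil

def catch_up_copy(src, dst):
    """Toy one-way catch-up: copy files from src that are missing in
    dst or have a newer modification time. Returns the relative paths
    copied. A stand-in for the RoboCopy pass described above."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves modification times
                copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied
```

A second pass after no further source changes copies nothing, which is the state you want to reach before cutting users over.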
@@ -433,7 +489,7 @@ At this point, there are differences between your on-premises Windows Server ins
1. You need to catch up with the changes that users or apps produced on the StorSimple side while the migration was ongoing.
1. For cases where you use Azure File Sync: The StorSimple appliance has a populated cache versus the Windows Server instance with just a namespace with no file content stored locally at this time. The final RoboCopy can help jump-start your local Azure File Sync cache by pulling over locally cached file content as much as is available and can fit on the Azure File Sync server.
-1. Some files might have been left behind by the data transformation job because of invalid characters. If so, copy them to the Azure File Sync-enabled Windows Server instance. Later on, you can adjust them so that they will sync. If you don't use Azure File Sync for a particular share, you're better off renaming the files with invalid characters on the StorSimple volume. Then run the RoboCopy directly against the Azure file share.
+1. Some files might have been left behind by the migration job because of invalid characters. If so, copy them to the Azure File Sync-enabled Windows Server instance. Later on, you can adjust them so that they will sync. If you don't use Azure File Sync for a particular share, you're better off renaming the files with invalid characters on the StorSimple volume. Then run the RoboCopy directly against the Azure file share.
> [!WARNING]
> Robocopy in Windows Server 2019 currently experiences an issue that will cause files tiered by Azure File Sync on the target server to be recopied from the source and re-uploaded to Azure when using the /MIR function of robocopy. It is imperative that you use Robocopy on a Windows Server other than 2019. A preferred choice is Windows Server 2016. This note will be updated should the issue be resolved via Windows Update.
@@ -578,4 +634,4 @@ Your migration is complete.
* Get more familiar with [Azure File Sync: aka.ms/AFS](./storage-sync-files-planning.md).
* Understand the flexibility of [cloud tiering](storage-sync-cloud-tiering.md) policies.
* [Enable Azure Backup](../../backup/backup-afs.md#configure-backup-from-the-file-share-pane) on your Azure file shares to schedule snapshots and define backup retention schedules.
-* If you see in the Azure portal that some files are permanently not syncing, review the [Troubleshooting guide](storage-sync-files-troubleshoot.md) for steps to resolve these issues.
+* If you see in the Azure portal that some files are permanently not syncing, review the [Troubleshooting guide](storage-sync-files-troubleshoot.md) for steps to resolve these issues.
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/cicd-tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/cicd-tools.md
@@ -154,7 +154,7 @@ If you want the test validation to ignore a certain output, set the **Required**
  "ExpectedOutputs": [
    {
      "OutputAlias": [Output alias string],
- "FilePath": "Required",
+ "FilePath": [Required],
      "Required": true
    }
  ]
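A filled-in `ExpectedOutputs` entry might look like the following, generated here with a short Python sketch. The alias and file path are illustrative placeholders, not values from the source; setting `Required` to false tells the test validation to ignore that output:

```python
import json

# Hypothetical test-configuration entry for a Stream Analytics CI/CD
# test case. "MyOutput" and the file path are made-up placeholders.
expected_outputs = [
    {
        "OutputAlias": "MyOutput",
        "FilePath": "Test1/MyOutput_expected.json",
        "Required": False,  # validation ignores this output
    }
]

print(json.dumps({"ExpectedOutputs": expected_outputs}, indent=2))
```

The printed JSON can be pasted into the test-configuration file after replacing the placeholders with your real output alias and expected-output file path.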
@@ -239,4 +239,4 @@ You can use the Azure Resource Manager template and parameter files generated fr
## Next steps

* [Continuous integration and Continuous deployment for Azure Stream Analytics](cicd-overview.md)
-* [Set up CI/CD pipeline for Stream Analytics job using Azure Pipelines](set-up-cicd-pipeline.md)
+* [Set up CI/CD pipeline for Stream Analytics job using Azure Pipelines](set-up-cicd-pipeline.md)
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network.md
@@ -69,7 +69,7 @@ After the private link endpoint is created, you can access the sign-in page of t
To access the resources inside your Azure Synapse Analytics Studio workspace resource, you need to create the following: -- At least one private link endpoint with a **Dev** type of **Target sub-resource**.
+- At least one private link endpoint with a **Target sub-resource** type of **Dev**.
- Two other optional private link endpoints with types of **Sql** or **SqlOnDemand**, depending on what resources in the workspace you want to access. Creating these is similar to how you create the endpoint in the previous step.
@@ -153,4 +153,4 @@ After the virtual network link is added, you need to add the DNS record set in t
Learn more about [Managed workspace virtual network](./synapse-workspace-managed-vnet.md).
-Learn more about [Managed private endpoints](./synapse-workspace-managed-private-endpoints.md).
+Learn more about [Managed private endpoints](./synapse-workspace-managed-private-endpoints.md).
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments.md
@@ -1,12 +1,12 @@
Title: How to manage Synapse RBAC assignments in Synapse Studio description: This article describes how to assign and revoke Synapse RBAC roles to AAD security principals-+ Last updated 12/1/2020-+
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/how-to-review-synapse-rbac-role-assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-review-synapse-rbac-role-assignments.md
@@ -1,12 +1,12 @@
Title: How to review Synapse RBAC role assignments in Synapse Studio description: This article describes how to review Synapse RBAC role assignments using Synapse Studio-+ Last updated 12/1/2020-+
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/how-to-set-up-access-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-set-up-access-control.md
@@ -4,12 +4,12 @@ Title: How to set up access control for your Synapse workspace
description: This article will teach you how to control access to a Synapse workspace using Azure roles, Synapse roles, SQL permissions, and Git permissions. -+ Last updated 12/03/2020 -+
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-access-control-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md
@@ -2,12 +2,12 @@
Title: Synapse workspace access control overview description: This article describes the mechanisms used to control access to a Synapse workspace and the resources and code artifacts it contains. -+ Last updated 12/03/2020 -+ # Synapse access control
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-synapse-rbac-roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
@@ -1,12 +1,12 @@
Title: Synapse RBAC roles description: This article describes the built-in Synapse RBAC roles-+ Last updated 12/1/2020-+
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-synapse-rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-synapse-rbac.md
@@ -1,12 +1,12 @@
Title: Synapse role-based access control description: An article that explains role-based access control in Azure Synapse Analytics-+ Last updated 12/1/2020-+ # What is Synapse role-based access control (RBAC)?
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-understand-what-role-you-need https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
@@ -1,12 +1,12 @@
Title: Understand the roles required to perform common tasks in Synapse description: This article describes which built-in Synapse RBAC role(s) are required to accomplish specific tasks-+ Last updated 12/1/2020-+ # Understand the roles required to perform common tasks in Synapse
@@ -81,7 +81,7 @@ View the logs for notebook and job execution |Synapse Compute Operator|
Cancel any notebook or Spark job running on an Apache Spark pool|Synapse Compute Operator on the Apache Spark pool.|bigDataPools/useCompute Create a notebook or job definition|Synapse User, or </br>Azure Owner, Contributor, or Reader on the workspace</br> *Additional permissions are required to run, publish, or commit changes*|read</br></br></br></br></br> List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor on the workspace|artifacts/read
-Run a notebook and review its output|Synapse Apache Spark Administrator, Synapse Compute Operator on the selected Apache Spark pool|bigDataPools/useCompute
+Run a notebook and review its output, or submit a Spark job|Synapse Apache Spark Administrator, Synapse Compute Operator on the selected Apache Spark pool|bigDataPools/useCompute
Publish or delete a notebook or job definition (including output) to the service|Artifact Publisher on the workspace, Synapse Apache Spark Administrator|notebooks/write, delete Commit changes to a notebook or job definition to the Git repo|Git permissions|none PIPELINES, INTEGRATION RUNTIMES, DATAFLOWS, DATASETS & TRIGGERS|
@@ -116,4 +116,4 @@ Assign and remove Synapse RBAC role assignments for users, groups, and service p
Learn [how to review Synapse RBAC role assignments](./how-to-review-synapse-rbac-role-assignments.md)
-Learn [how to manage Synapse RBAC role assignments](./how-to-manage-synapse-rbac-role-assignments.md).
+Learn [how to manage Synapse RBAC role assignments](./how-to-manage-synapse-rbac-role-assignments.md).
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/microsoft-spark-utilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/microsoft-spark-utilities.md
@@ -1,7 +1,7 @@
Title: Introduction to Microsoft Spark utilities description: "Tutorial: MSSparkutils in Azure Synapse Analytics notebooks"-+
@@ -20,9 +20,9 @@ Microsoft Spark Utilities (MSSparkUtils) is a builtin package to help you easily
### Configure access to Azure Data Lake Storage Gen2
-Synapse notebooks use Azure active directory (Azure AD) pass-through to access the ADLS Gen2 accounts. You need to be a **Blob Storage Contributor** to access the ADLS Gen2 account (or folder).
+Synapse notebooks use Azure Active Directory (Azure AD) pass-through to access the ADLS Gen2 accounts. You need to be a **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
-Synapse pipelines use workspace identity (MSI) to access the storage accounts. To use MSSparkUtils in your pipeline activities, your workspace identity needs to be **Blob Storage Contributor** to access the ADLS Gen2 account (or folder).
+Synapse pipelines use the workspace identity (MSI) to access the storage accounts. To use MSSparkUtils in your pipeline activities, your workspace identity needs to be **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
Follow these steps to make sure your Azure AD and workspace MSI have access to the ADLS Gen2 account: 1. Open the [Azure portal](https://portal.azure.com/) and the storage account you want to access. You can navigate to the specific container you want to access.
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-integration.md
@@ -47,7 +47,8 @@ To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![TimeXtender](./media/sql-data-warehouse-partner-data-integration/timextender-logo.png) |**TimeXtender**<br>TimeXtender's Discovery Hub helps companies build a modern data estate by providing an integrated data management platform that accelerates time to data insights by up to 10 times. Going beyond everyday ETL and ELT, it provides capabilities for data access, data modeling, and compliance in a single platform. Discovery Hub provides a cohesive data fabric for cloud scale analytics. It allows you to connect and integrate various data silos, catalog, model, move, and document data for analytics and AI. | [Product page](https://www.timextender.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=timextender&page=1) | | ![Trifacta](./media/sql-data-warehouse-partner-data-integration/trifacta_logo.png) |**Trifacta Wrangler**<br> Trifacta helps individuals and organizations explore, and join together diverse data for analysis. Trifacta Wrangler is designed to handle data wrangling workloads that need to support data at scale and a large number of end users.|[Product page](https://www.trifacta.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trifactainc1587522950142.trifactaazure?tab=Overview) | | ![WhereScape](./media/sql-data-warehouse-partner-data-integration/wherescape_logo.png) |**Wherescape RED**<br> WhereScape RED is an IDE that provides teams with automation tools to streamline ETL workflows. The IDE provides best practice, optimized native code for popular data targets. Use WhereScape RED to cut the time to develop, deploy, and operate your data infrastructure.|[Product page](https://www.wherescape.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wherescapesoftware.wherescape-red?source=datamarket&tab=Overview) |
-| ![Xplenty](./media/sql-data-warehouse-partner-data-integration/xplenty-logo.png) |**Xplenty**<br> Xplenty ELT platform lets you quickly and easily prepare your data for analytics and production use cases using a simple cloud service. XplentyΓÇÖs point & click, drag & drop interface enables data integration, processing and preparation without installing, deploying, or maintaining any software. Connect and integrate with a wide set of data repositories and SaaS applications including Azure Synapse, Azure blob storage, and SQL Server. Xplenty also supports all Web Services that are accessible via Rest API.|[Product page](https://www.xplenty.com/integrations/azure-synapse-analytics/ )<br> |
+| ![Xpert BI](./media/sql-data-warehouse-partner-data-integration/xpertbi-logo.png) |**Xpert BI**<br> Xpert BI helps organizations build and maintain a robust and scalable data platform in Azure faster through metadata-based automation. It extends Azure Synapse with best practices and DataOps, for agile data development with built-in data governance functionalities. Use Xpert BI to quickly test out and switch between different Azure solutions such as Azure Synapse, Azure Data Lake Storage, and Azure SQL Database, as your business and analytics needs change and grow.|[Product page](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br> |
+| ![Xplenty](./media/sql-data-warehouse-partner-data-integration/xplenty-logo.png) |**Xplenty**<br> Xplenty ELT platform lets you quickly and easily prepare your data for analytics and production use cases using a simple cloud service. Xplenty's point & select, drag & drop interface enables data integration, processing, and preparation without installing, deploying, or maintaining any software. Connect and integrate with a wide set of data repositories and SaaS applications including Azure Synapse, Azure Blob storage, and SQL Server. Xplenty also supports all web services that are accessible via REST API.|[Product page](https://www.xplenty.com/integrations/azure-synapse-analytics/ )<br> |
## Next steps
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-management.md
@@ -20,7 +20,7 @@ This article highlights Microsoft partner companies with data management tools a
## Data management partners | Partner | Description | Website/Product link | | - | -- | -- |
-| ![Aginity](./media/sql-data-warehouse-partner-data-management/aginity-logo.png) |**Aginity**<br>Aginity is an analytics development tool, which puts the full power of MicrosoftΓÇÖs Synapse platform in the hands of analysts and engineers. The rich and intuitive SQL development environment allows team members to connect to over a dozen industry leading analytics platforms, ingest data in a variety of formats, and quickly build complex business calculation to serve the results into Business Intelligence and Machine Learning use cases. The entire application is built around a central catalog which makes collaboration across the analytics team a reality, and the sophisticated management capabilities and fine grained security make governance a breeze. |[Product page](https://www.aginity.com/databases/microsoft/)<br> |
+| ![Aginity](./media/sql-data-warehouse-partner-data-management/aginity-logo.png) |**Aginity**<br>Aginity is an analytics development tool. It puts the full power of Microsoft's Synapse platform in the hands of analysts and engineers. The rich and intuitive SQL development environment allows team members to connect to over a dozen industry-leading analytics platforms. It allows users to ingest data in a variety of formats and quickly build complex business calculations to serve the results into business intelligence and machine learning use cases. The entire application is built around a central catalog, which makes collaboration across the analytics team a reality, and the sophisticated management capabilities and fine-grained security make governance a breeze. |[Product page](https://www.aginity.com/databases/microsoft/)<br> |
| ![Alation](./media/sql-data-warehouse-partner-data-management/alation-logo.png) |**Alation**<br>Alation's data catalog dramatically improves productivity, increases accuracy, and drives confident data-driven decision making for analysts. Alation's data catalog empowers everyone in your organization to find, understand, and govern data. |[Product page](https://www.alation.com/product/data-catalog/)<br> | | ![Coffing Data Warehousing](./media/sql-data-warehouse-partner-data-management/coffing-data-warehousing-logo.png) |**Coffing Data Warehousing**<br>Coffing Data Warehousing provides Nexus Chameleon, a tool with 10 years of design dedicated to querying systems. Nexus is available as a query tool for dedicated SQL pool in Azure Synapse Analytics. Use Nexus to query in-house and cloud computers and join data across different platforms. Point-Click-Report! |[Product page](https://www.coffingdw.com/software/nexus/)<br> | | ![Inbrein](./media/sql-data-warehouse-partner-data-management/inbrein-logo.png) |**Inbrein MicroERD**<br>Inbrein MicroERD provides the tools that you need to create a precise data model, reduce data redundancy, improve productivity, and observe standards. By using its UI, which was developed based on extensive user experiences, a modeler can work on DB models easily and conveniently. You can continuously enjoy new and improved functions of MicroERD through prompt functional improvements and updates. |[Product page](http://microerd.com/)<br> |
@@ -29,8 +29,9 @@ This article highlights Microsoft partner companies with data management tools a
| ![Redpoint Global](./media/sql-data-warehouse-partner-data-management/redpoint-global-logo.png) |**RedPoint Data Management**<br>RedPoint Data Management enables marketers to apply all their data to drive cross-channel customer engagement while doing structured and unstructured data management. With RedPoint, you can maximize the value of your structured and unstructured data to deliver the hyper-personalized, contextual interactions needed to engage today's omni-channel customer. Drag-and-drop interface makes designing and executing data management processes easy. |[Product page](https://www.redpointglobal.com/customer-data-management)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/redpoint-global.redpoint-rpdm)<br> | | ![Sentry One](./media/sql-data-warehouse-partner-data-management/sql-sentry-logo.png) |**SentryOne (DW Sentry)**<br>With the intelligent data movement dashboard and event calendar, you always know exactly what is impacting your workload. Designed to give you visibility into your queries and jobs running to load, backup, or restore your data, never worry about making the most of your Azure resources. |[Product page](https://sentryone.com/platform/azure-sql-dw-performance-monitoring/)<br>[Azure Marketplace](https://sentryone.com/platform/azure-sql-dw-performance-monitoring/)<br> | | ![SqlDBM](./media/sql-data-warehouse-partner-data-management/sqldbm-logo.png) |**SqlDBM**<br>SqlDBM is a Cloud-based Data Modeling Tool that offers you an easy, convenient way to develop your database anywhere on any browser. All while incorporating any needed database rules and objects such as database keys, schemas, indexes, column constraints, and relationships. |[Product page](http://sqldbm.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sqldbm1583438206845.sqldbm-data-modeling-tool?tab=Overview)<br>|
-| ![Tamr](./media/sql-data-warehouse-partner-data-management/tamr-logo.png) |**Tamr**<br>With Tamr, organizations can supply Azure Synapse with mastered data, allowing them to get most from Azure SynapseΓÇÖs analytic capabilities. TamrΓÇÖs cloud-native data mastering solutions use machine learning to do the heavy lifting to combine, cleanse, and categorize data, with intuitive human feedback workflows to bridge the gap between data and business outcomes. Tamr integrates with AzureΓÇÖs data services, including Azure Synapse Analytics, Azure Databricks, Azure HDInsight, Azure Data Catalog, Azure Data Lake Storage, and Azure Data Factory. It allows for data mastering at scale with a lower total cost of ownership by taking advantage of the flexibility and scale of Azure. |[Product page](https://www.tamr.com/tamr-partners/microsoft-azure/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tamrinc.unify_v_2019?tab=Overview) |
+| ![Tamr](./media/sql-data-warehouse-partner-data-management/tamr-logo.png) |**Tamr**<br>With Tamr, organizations can supply Azure Synapse with mastered data, allowing them to get the most from Azure Synapse's analytic capabilities. Tamr's cloud-native data mastering solutions use machine learning to do the heavy lifting to combine, cleanse, and categorize data, with intuitive human feedback workflows to bridge the gap between data and business outcomes. Tamr integrates with Azure's data services including Azure Synapse Analytics, Azure Databricks, Azure HDInsight, Azure Data Catalog, Azure Data Lake Storage, and Azure Data Factory. It allows for data mastering at scale with a lower total cost of ownership, by taking advantage of the flexibility and scale of Azure. |[Product page](https://www.tamr.com/tamr-partners/microsoft-azure/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tamrinc.unify_v_2019?tab=Overview) |
| ![Teleran](./media/sql-data-warehouse-partner-data-management/teleran-logo.jpg) |**Teleran**<br>Teleran's Query Control prevents inappropriate and poorly formed queries from reaching Synapse and wasting compute resources. It sends intelligent messages to analytics users guiding them to more efficiently interact with the data. The goal is to ensure good business results without needlessly driving up Azure costs. Teleran Usage Analysis delivers an analysis of user, application, query, and data usage activity. It allows you to always have the entire picture of what's going on. It enables you to improve service, increase business productivity, and optimize Synapse consumption costs. |[Product page](https://teleran.com/azure-synapse-optimization-cost-control/)<br>|
+| ![Xpert BI](./media/sql-data-warehouse-partner-data-integration/xpertbi-logo.png) |**Xpert BI**<br> Xpert BI provides an intuitive and searchable catalog for the line-of-business user to find, trust, and understand data and reports. The solution covers the whole data platform including Azure Synapse Analytics, ADLS Gen 2, Azure SQL Database, Analysis Services and Power BI, and also data flows and data movement end-to-end. Data stewards can update descriptions and tag data to follow regulatory requirements. Xpert BI can be integrated via APIs to other catalogs such as Azure Purview. It supplements traditional data catalogs with a business user perspective.|[Product page](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br> |
## Next steps To learn more about other partners, see [Business Intelligence partners](sql-data-warehouse-partner-business-intelligence.md), [Data Integration partners](sql-data-warehouse-partner-data-integration.md), and [Machine Learning and AI partners](sql-data-warehouse-partner-machine-learning-ai.md).
time-series-insights https://docs.microsoft.com/en-us/azure/time-series-insights/concepts-query-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-query-overview.md
@@ -49,13 +49,12 @@ Most of these APIs support batch execution operation to enable batch CRUD operat
## Time Series Query (TSQ) APIs
-These APIs are available across both stores (Warm and Cold) in our multilayered storage solution. Query URL parameters are used to specify the [store type](/rest/api/time-series-insights/dataaccessgen2/query/execute#uri-parameters) the query should execute on:
+These APIs are available across both stores (Warm and Cold) in our multilayered storage solution.
* [Get Events API](/rest/api/time-series-insights/dataaccessgen2/query/execute#getevents): Enables query and retrieval of raw events and the associated event timestamps as they're recorded in Azure Time Series Insights Gen2 from the source provider. This API allows retrieval of raw events for a given Time Series ID and search span. This API supports pagination to retrieve the complete response dataset for the selected input. > [!IMPORTANT]-
- > * As part of the [upcoming changes to JSON flattening and escaping rules](./ingestion-rules-update.md), arrays will be stored as **Dynamic** type. Payload properties stored as this type are **ONLY accessible through the Get Events API**.
+ > As part of the [upcoming changes to JSON flattening and escaping rules](./ingestion-rules-update.md), arrays will be stored as **Dynamic** type. Payload properties stored as this type are **ONLY accessible through the Get Events API**.
* [Get Series API](/rest/api/time-series-insights/dataaccessgen2/query/execute#getseries): Enables query and retrieval of computed values and the associated event timestamps by applying calculations defined by variables on raw events. These variables can be defined in either the Time Series Model or provided inline in the query. This API supports pagination to retrieve the complete response dataset for the selected input.
@@ -65,6 +64,16 @@ These APIs are available across both stores (Warm and Cold) in our multilayered
The timestamps returned in the response set are of the left interval boundaries, not of the sampled events from the interval. +
+### Selecting Store Type
+
+The above APIs can only execute against one of the two storage types (Cold or Warm) in a single call. Query URL parameters are used to specify the [store type](/rest/api/time-series-insights/dataaccessgen2/query/execute#uri-parameters) the query should execute on.
+
+If no parameter is specified, the query executes on Cold Store by default. If a query spans a time range that overlaps both Cold and Warm store, route the query to Cold store for the best experience, because Warm store contains only partial data.
+
+The [Azure Time Series Insights Explorer](./concepts-ux-panels.md) and [Power BI Connector](./how-to-connect-power-bi.md) make calls to the above APIs and will automatically select the correct storeType parameter where relevant.
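The store-type routing described above can be sketched as a small helper that builds the query URL. This is an illustration under stated assumptions: the `storeType` URL parameter comes from the linked URI-parameter reference, but the environment FQDN, the `api-version` value, and the accepted values (`WarmStore`, `ColdStore`) shown here should be verified against that reference.

```python
from typing import Optional
from urllib.parse import urlencode

# Assumed accepted values for the storeType URL parameter.
VALID_STORE_TYPES = ("WarmStore", "ColdStore")

def build_query_url(environment_fqdn: str,
                    store_type: Optional[str] = None,
                    api_version: str = "2020-07-31") -> str:
    """Build a Time Series Query execute URL.

    Omitting store_type leaves store selection to the service,
    which defaults to Cold store.
    """
    params = {"api-version": api_version}
    if store_type is not None:
        if store_type not in VALID_STORE_TYPES:
            raise ValueError(f"store_type must be one of {VALID_STORE_TYPES}")
        params["storeType"] = store_type
    return f"https://{environment_fqdn}/timeseries/query?{urlencode(params)}"
```

A query that spans both stores would simply omit `store_type`, letting the service route it to Cold store as recommended above.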
++ ## Next steps * Read more about different variables that can be defined in the [Time Series Model](./concepts-model-overview.md).
time-series-insights https://docs.microsoft.com/en-us/azure/time-series-insights/how-to-connect-power-bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-connect-power-bi.md
@@ -34,9 +34,7 @@ Please review [environment access policies](./concepts-access-policies.md) and m
> [!IMPORTANT] > * Download and install the latest version of [Power BI Desktop](https://powerbi.microsoft.com/downloads/). To follow along with the steps in this article, please make sure you have at least the December 2020 (2.88.321.0) version of Power BI Desktop installed.
-## Connect data from Azure Time Series Insights to Power BI
-
-### 1. Export data into Power BI desktop
+## Export data from Azure Time Series Insights into Power BI desktop
To get started:
@@ -50,37 +48,36 @@ To get started:
* **Data format**: Choose whether you want to export **Aggregate data** or **Raw events** to Power BI. > [!NOTE]
- > * If you export raw events, you can aggregate that data later in Power BI. However, if you export aggregate data, you cannot revert to raw data in Power BI.
- > * There is a 250,000 event count limit for Raw Event level data.
+ > If you export raw events, you can aggregate that data later in Power BI. However, if you export aggregate data, you cannot revert to raw data in Power BI. There is a 250,000 event count limit for Raw Event level data.
* **Time Range**: Choose whether you'd like to see a **fixed** time range or the **latest** data in Power BI. Choosing the fixed time range means the data in the search span you've charted will be exported to Power BI. Choosing the latest time range means that Power BI will grab the latest data for the search span you've chosen (e.g. If you chart any 1 hour of data and choose the "latest" setting, Power BI Connector will always make queries for the latest 1 hour of data.)
- * **Store Type**: Choose whether you'd like to run your selected query against **Warm Store** or **Cold Store**.
+ * **Store Type**: Choose whether you'd like to run your selected query against **Warm Store** or **Cold Store**. If you've selected a range that spans both Cold and Warm stores, your query will be routed to Cold Store by default, since Warm store contains only the latest data. You can change the storeType parameter manually, but doing so isn't recommended.
- > [!TIP]
- > * Azure Time Series Insights Explorer will automatically select the recommended parameters depending on the data you've chosen to export.
+ > [!TIP]
+ > Azure Time Series Insights Explorer will automatically select the recommended parameters depending on the search span and view of data you've chosen to export.
1. Once you have configured your settings, select **Copy query to clipboard**. [![Azure Time Series Insights Explorer export modal](media/how-to-connect-power-bi/choose-explorer-parameters.jpg)](media/how-to-connect-power-bi/choose-explorer-parameters.jpg#lightbox)
-2. Launch Power BI Desktop.
+1. Launch Power BI Desktop.
-3. In Power BI Desktop on the **Home** tab, select **Get Data** in the upper left corner, then **More**.
+1. In Power BI Desktop on the **Home** tab, select **Get Data** in the upper left corner, then **More**.
[![Get data in Power BI](media/how-to-connect-power-bi/get-data-power-bi.jpg)](media/how-to-connect-power-bi/get-data-power-bi.jpg#lightbox)
-4. Search for **Azure Time Series Insights**, select **Azure Time Series Insights (Beta)**, then **Connect**.
+1. Search for **Azure Time Series Insights**, select **Azure Time Series Insights (Beta)**, then **Connect**.
[![Connect Power BI to Azure Time Series Insights](media/how-to-connect-power-bi/select-tsi-connector.jpg)](media/how-to-connect-power-bi/select-tsi-connector.jpg#lightbox) Alternatively, navigate to the **Azure** tab, select **Azure Time Series Insights (Beta)**, then **Connect**.
-5. Paste the query you copied from Azure Time Series Insights Explorer into the **Custom Query** field, then press **OK**.
+1. Paste the query you copied from Azure Time Series Insights Explorer into the **Custom Query** field, then press **OK**.
[![Paste in the custom query and select OK](media/how-to-connect-power-bi/custom-query-load.png)](media/how-to-connect-power-bi/custom-query-load.png#lightbox)
-6. The data table will now load. Press **Load** to load into Power BI. If you wish to make any transformations to the data, you can do so now by clicking **Transform Data**. You can also transform your data after it's loaded.
+1. The data table will now load. Press **Load** to load into Power BI. If you wish to make any transformations to the data, you can do so now by clicking **Transform Data**. You can also transform your data after it's loaded.
[![Review the data in the table and select Load](media/how-to-connect-power-bi/review-the-loaded-data-table.png)](media/how-to-connect-power-bi/review-the-loaded-data-table.png#lightbox)
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/app-attach-azure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/app-attach-azure-portal.md
@@ -59,7 +59,7 @@ Next, you'll need to download and configure the MSIX app attach management i
To set up the management interface:
-1. [Open the preview portal](https://preview.portal.azure.com/?feature.msixapplications=true#home).
+1. [Open the Azure portal](https://portal.azure.com).
2. If you get a prompt asking if you consider the extension trustworthy, select **Allow**. > [!div class="mx-imgBorder"]
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/custom-script-linux.md
@@ -3,7 +3,7 @@ Title: Run Custom Script Extension on Linux VMs in Azure
description: Automate Linux VM configuration tasks by using the Custom Script Extension v2 documentationcenter: ''-+ editor: '' tags: azure-resource-manager
@@ -14,7 +14,7 @@
vm-linux Last updated 04/25/2018-+ # Use the Azure Custom Script Extension Version 2 with Linux virtual machines
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/custom-script-windows.md
@@ -2,15 +2,15 @@
Title: Azure Custom Script Extension for Windows description: Automate Windows VM configuration tasks by using the Custom Script extension --++ vm-windows Last updated 08/31/2020-+ # Custom Script Extension for Windows
@@ -27,6 +27,7 @@ This document details how to use the Custom Script Extension using the Azure Pow
### Operating System The Custom Script Extension for Windows will run on the extension supported extension OSs;+ ### Windows * Windows Server 2008 R2
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/scheduled-events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/scheduled-events.md
@@ -68,7 +68,7 @@ If the VM is not created within a Virtual Network, the default cases for cloud s
To learn how to [discover the host endpoint](https://github.com/azure-samples/virtual-machines-python-scheduled-events-discover-endpoint-for-non-vnet-vm), see this sample. ### Version and Region Availability
-The Scheduled Events service is versioned. Versions are mandatory; the current version is `2019-01-01`.
+The Scheduled Events service is versioned. Versions are mandatory; the current version is `2019-08-01`.
| Version | Release Type | Regions | Release Notes | | - | - | - | - |
@@ -234,4 +234,4 @@ if __name__ == '__main__':
- Watch [Scheduled Events on Azure Friday](https://channel9.msdn.com/Shows/Azure-Friday/Using-Azure-Scheduled-Events-to-Prepare-for-VM-Maintenance) to see a demo. - Review the Scheduled Events code samples in the [Azure Instance Metadata Scheduled Events GitHub repository](https://github.com/Azure-Samples/virtual-machines-scheduled-events-discover-endpoint-for-non-vnet-vm). - Read more about the APIs that are available in the [Instance Metadata Service](instance-metadata-service.md).-- Learn about [planned maintenance for Linux virtual machines in Azure](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json).
+- Learn about [planned maintenance for Linux virtual machines in Azure](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json).
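The versioned Scheduled Events endpoint (`http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01`, queried with the `Metadata: true` header) returns a JSON document. A minimal parsing sketch, using a hand-written sample shaped like the documented response rather than live output:

```python
import json

# Hand-written sample matching the documented Scheduled Events response shape;
# on a real VM this JSON would come from the IMDS endpoint.
sample_response = json.dumps({
    "DocumentIncarnation": 5,
    "Events": [
        {
            "EventId": "602d9444-d2cd-49c7-8624-8643e7171297",
            "EventType": "Reboot",
            "ResourceType": "VirtualMachine",
            "Resources": ["myVm1"],
            "EventStatus": "Scheduled",
            "NotBefore": "Mon, 19 Sep 2016 18:29:47 GMT",
        }
    ],
})

def pending_reboots(document: str) -> list:
    """Return the resources affected by scheduled Reboot events."""
    doc = json.loads(document)
    return [
        resource
        for event in doc.get("Events", [])
        if event["EventType"] == "Reboot"
        for resource in event["Resources"]
    ]

print(pending_reboots(sample_response))  # → ['myVm1']
```

A service on the VM would poll the endpoint, drain or checkpoint the affected resources, then approve the event to minimize downtime.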
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/np-series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/np-series.md
@@ -0,0 +1,52 @@
+
+ Title: NP-series - Azure Virtual Machines
+description: Specifications for the NP-series VMs.
++++ Last updated : 02/09/2021+++
+# NP-series (Preview)
+
+The NP-series virtual machines are powered by [Xilinx U250](https://www.xilinx.com/products/boards-and-kits/alveo/u250.html) FPGAs for accelerating workloads including machine learning inference, video transcoding, and database search and analytics. NP-series VMs are also powered by Intel Xeon 8171M (Skylake) CPUs with an all-core turbo clock speed of 3.2 GHz.
++
+[Premium Storage](premium-storage-performance.md): Supported<br>
+[Premium Storage caching](premium-storage-performance.md): Supported<br>
+[Live Migration](maintenance-and-updates.md): Not Supported<br>
+[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br>
+VM Generation Support: Generation 1<br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+<br>
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | FPGA | FPGA memory: GiB | Max data disks | Max NICs |
+| - | - | - | - | - | - | - | - |
+| Standard_NP10s | 10 | 168 | 736 | 1 | 64 | 8 | 1 |
+| Standard_NP20s | 20 | 336 | 1474 | 2 | 128 | 16 | 2 |
+| Standard_NP40s | 40 | 672 | 2948 | 4 | 256 | 32 | 4 |
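The sizes in the table above scale linearly from the NP10s building block. A quick sanity check of that proportionality, with the values copied from the table:

```python
# Per-size specs copied from the NP-series table: (vCPU, memory GiB, FPGAs).
sizes = {
    "Standard_NP10s": (10, 168, 1),
    "Standard_NP20s": (20, 336, 2),
    "Standard_NP40s": (40, 672, 4),
}

base_vcpu, base_mem, base_fpga = sizes["Standard_NP10s"]
for name, (vcpu, mem, fpga) in sizes.items():
    # Each size is an integer multiple of the NP10s building block.
    factor = fpga // base_fpga
    assert vcpu == base_vcpu * factor and mem == base_mem * factor
    print(f"{name}: {vcpu // fpga} vCPUs and {mem // fpga} GiB per FPGA")
```

Every size exposes the same 10 vCPUs and 168 GiB of memory per FPGA, which makes capacity planning per accelerator straightforward.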
++++
+## Supported operating systems and drivers
+Visit [Xilinx Runtime (XRT) release notes](https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_2/ug1451-xrt-release-notes.pdf) to get the full list of supported operating systems.
+
+During the preview program, Microsoft Azure engineering teams will share specific instructions for driver installation.
+
+## Other sizes
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/redhat/byos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/byos.md
@@ -28,10 +28,6 @@ Red Hat Enterprise Linux (RHEL) images are available in Azure via a pay-as-you-g
- The images are unentitled. You must use Red Hat Subscription-Manager to register and subscribe the VMs to get updates from Red Hat directly. - It's possible to switch from pay-as-you-go images to BYOS using the [Azure Hybrid Benefit](../../linux/azure-hybrid-benefit-linux.md). However it's not possible to switch from an initially deployed BYOS to pay-as-you-go billing models for Linux images. To switch the billing model from BYOS to pay-as-you-go, you must redeploy the VM from the respective image.
->[!NOTE]
-> Generation 2 RHEL BYOS images aren't currently available through the marketplace offer. If you require a
-Generation 2 RHEL BYOS image, visit the Cloud Access dashboard in Red Hat subscription management. For more information, see the [Red Hat documentation](https://access.redhat.com/articles/4847681).
- ## Requirements and conditions to access the Red Hat Gold Images 1. Get familiar with the [Red Hat Cloud Access program](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) terms. Enable your Red Hat subscriptions for Cloud Access at [Red Hat Subscription-Manager](https://access.redhat.com/management/cloud). You need to have on hand the Azure subscriptions that are going to be registered for Cloud Access.
@@ -214,4 +210,4 @@ For steps to apply Azure Disk Encryption, see [Azure Disk Encryption scenarios o
- To learn more about the Red Hat Update Infrastructure, see [Azure Red Hat Update Infrastructure](./redhat-rhui.md). - To learn more about all the Red Hat images in Azure, see the [documentation page](./redhat-images.md). - For information on Red Hat support policies for all versions of RHEL, see the [Red Hat Enterprise Linux life cycle](https://access.redhat.com/support/policy/updates/errata) page.-- For additional documentation on the RHEL Gold Images, see the [Red Hat documentation](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/cloud-access-gold-images_cloud-access#proc_using-gold-images-azure_cloud-access).
+- For additional documentation on the RHEL Gold Images, see the [Red Hat documentation](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/cloud-access-gold-images_cloud-access#proc_using-gold-images-azure_cloud-access).
vpn-gateway https://docs.microsoft.com/en-us/azure/vpn-gateway/create-vpn-azure-aws-managed-solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/create-vpn-azure-aws-managed-solutions.md
@@ -1,221 +0,0 @@
- Title: 'Create a VPN between Azure and AWS using managed solutions'
-description: How to create a VPN connection between Azure and AWS using managed solutions, instead of VMs or appliances.
------ Previously updated : 02/03/2021----
-# Create a VPN connection between Azure and AWS using managed solutions
-
-You can establish a connection between Azure and AWS by using managed solutions. Previously, you were required to use an appliance or VM acting as a responder. Now, you can connect the AWS virtual private gateway to Azure VPN Gateway directly without having to worry about managing IaaS resources such as virtual machines. This article helps you create a VPN connection between Azure and AWS by using only managed solutions.
--
-## Configure Azure
-
-### Configure a virtual network
-
-Configure a virtual network. For instructions, see the [Virtual Network Quickstart](../virtual-network/quick-create-portal.md).
-
-The following example values are used in this article:
-
-* **Resource Group:** rg-azure-aws
-* **Region:** East US
-* **Virtual network name:** vnet-azure
-* **IPv4 address space:** 172.10.0.0/16
-* **Subnet name:** subnet-01
-* **Subnet address range:** 172.10.1.0/24
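The example address values in this article can be sanity-checked with Python's standard-library `ipaddress` module; the prefixes below are copied from the example values (the gateway subnet prefix comes from the gateway settings later in the article):

```python
import ipaddress

# Prefixes copied from this article's example values.
vnet = ipaddress.ip_network("172.10.0.0/16")        # virtual network
subnet_01 = ipaddress.ip_network("172.10.1.0/24")   # subnet-01
gateway_subnet = ipaddress.ip_network("172.10.0.0/27")  # gateway subnet

# Both subnets must fall inside the virtual network's address space
# and must not overlap with each other.
assert subnet_01.subnet_of(vnet)
assert gateway_subnet.subnet_of(vnet)
assert not subnet_01.overlaps(gateway_subnet)
print("address plan is consistent")
```

The same check, run against the AWS VPC CIDR, confirms that the two clouds' address spaces don't overlap before you build the tunnels.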
-
-### Create a VPN gateway
-
-Create a VPN gateway for your virtual network. For instructions, see [Tutorial: Create and manage a VPN gateway](tutorial-create-gateway-portal.md).
-
-The following example values and settings are used in this article:
-
-* **Gateway name:** vpn-azure-aws
-* **Region:** East US
-* **Gateway type:** VPN
-* **VPN type:** Route-based
-* **SKU:** VpnGw1
-* **Generation:** Generation 1
-* **Virtual network:** Must be the VNet that you want to create the gateway for.
-* **Gateway subnet address range:** 172.10.0.0/27
-* **Public IP address:** Create new
-* **Public IP address name:** pip-vpn-azure-aws
-* **Enable active-active mode:** Disable
-* **Configure BGP:** Disable
-
-Example:
--
-## Configure AWS
-
-1. Create the Virtual Private Cloud (VPC).
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-vpc.png" alt-text="Create VPC info":::
-
-1. Create a subnet inside the VPC (virtual network).
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-subnet-vpc.png" alt-text="Create the subnet":::
-
-1. Create a customer gateway that points to the public IP address of Azure VPN Gateway. The **Customer Gateway** is an AWS resource that contains information for AWS about the customer gateway device, which in this case, is the Azure VPN Gateway.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-customer-gw.png" alt-text="Create customer gateway":::
-
-1. Create the virtual private gateway.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-vpg.png" alt-text="Create virtual private gateway":::
-
-1. Attach the virtual private gateway to the VPC.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/attach-vpg.png" alt-text="Attach the VPG to the VPC":::
-
-1. Select the VPC.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/attaching-vpg.png" alt-text="Attach":::
-
-1. Create a site-to-site VPN connection.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-vpn-connection.png" alt-text="Create VPN Connection":::
-
-1. Set the routing option to **Static** and point to the Azure subnet-01 prefix **(172.10.1.0/24).**
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/set-static-route.png" alt-text="Setting a static route":::
-
-1. After you fill in the options, select **Create** to create the connection.
-
-1. Download the configuration file. To download the correct configuration, change the Vendor, Platform, and Software to **Generic**, since Azure isn't a valid option.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/download-config.png" alt-text="Download configuration":::
-
-1. The configuration file contains the Pre-Shared Key and the public IP Address for each of the two IPsec tunnels created by AWS.
-
- **Tunnel 1**
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/tunnel-1.png" alt-text="Tunnel 1":::
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/tunnel-1-config.png" alt-text="Tunnel 1 configuration":::
-
- **Tunnel 2**
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/tunnel-2.png" alt-text="Tunnel 2":::
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/tunnel-2-config.png" alt-text="Tunnel 2 configuration":::
-
-1. After the tunnels are created, you will see something similar to this example.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/aws-connection-details.png" alt-text="AWS VPN Connection Details":::
-
-## Create local network gateway
-
-In Azure, the local network gateway is an Azure resource that typically represents an on-premises location. It's populated with information used to connect to the on-premises VPN device. However, in this configuration, the local network gateway is created and populated with the AWS virtual private gateway connection information. For more information about Azure local network gateways, see [Azure VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#lng).
-
-Create a local network gateway in Azure. For steps, see [Create a local network gateway](tutorial-site-to-site-portal.md#LocalNetworkGateway).
-
-Specify the following values:
-
-* **Name:** In the example, we use lng-azure-aws.
-* **Endpoint:** IP address
-* **IP address:** The public IP address from the AWS virtual private gateway and the VPC CIDR prefix. You can find the public IP address in the configuration file you previously downloaded.
-
-AWS creates two IPsec tunnels for high availability purposes. The following example shows the public IP address from IPsec Tunnel #1.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/local-network-gateway.png" alt-text="Local network gateway":::
-
-## Create VPN connection
-
-In this section, you create the VPN connection between the Azure virtual network gateway, and the AWS gateway.
-
-1. Create the Azure connection. For steps to create a connection, see [Create a VPN connection](tutorial-site-to-site-portal.md#CreateConnection).
-
- In the following example, the Shared key was obtained from the configuration file that you downloaded earlier. In this example, we use the values for IPsec Tunnel #1 created by AWS and described in the configuration file.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-connection.png" alt-text="Azure connection object":::
-
-1. View the connection. After a few minutes, the connection is established.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/connection-established.png" alt-text="Working connection":::
-
-1. Verify that AWS IPsec Tunnel #1 is **UP**.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/aws-connection-established.png" alt-text="Verify AWS tunnel is UP":::
-
-1. Edit the route table associated with the VPC.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/edit-aws-route.png" alt-text="Edit the route":::
-
-1. Add the route to the Azure subnet. Traffic on this route travels through the VPC gateway.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/save-aws-route.png" alt-text="Save the route configuration":::
-
-## Configure second connection
-
-In this section, you create a second connection to ensure high availability.
-
-1. Create another local network gateway that points to the public IP address of the IPsec tunnel #2 on AWS. This is the standby gateway.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-lng-standby.png" alt-text="Create the local network gateway":::
-
-1. Create the second VPN connection from Azure to AWS.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-connection-standby.png" alt-text="Create the standby local network gateway connection":::
-
-1. View the Azure VPN gateway connections.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/azure-tunnels.png" alt-text="Azure connection status":::
-
-1. View the AWS connections. In this example, you can see that the connections are now established.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/aws-tunnels.png" alt-text="AWS connection status":::
-
-## To test connections
-
-1. Add an **internet gateway** to the VPC on AWS. The internet gateway is a logical connection between an Amazon VPC and the Internet. This resource lets you reach the test VM over the Internet through its AWS public IP. It isn't required for the VPN connection; we use it here only for testing.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/create-igw.png" alt-text="Create the Internet gateway":::
-
-1. Select **Attach to VPC**.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/attach-igw.png" alt-text="Attaching the Internet Gateway to VPC":::
-
-1. Select a VPC and **Attach internet gateway**.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/attach-igw-2.png" alt-text="Attach the gateway":::
-
-1. Create a route to allow connections to **0.0.0.0/0** (Internet) through the internet gateway.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/allow-internet-igw.png" alt-text="Configure the route through the gateway":::
-
-1. In Azure, the route is automatically created. You can check the route from the Azure VM by selecting **VM > Networking > Network Interface > Effective routes**. You see two routes, one per connection.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/azure-effective-routes.png" alt-text="Check the effective routes":::
-
-1. You can test this from a Linux VM on Azure. The result will appear similar to the following example.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/azure-overview.png" alt-text="Azure overview from Linux VM":::
-
-1. You can also test this from a Linux VM on AWS. The result will appear similar to the following example.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/aws-overview.png" alt-text="AWS overview from Linux VM":::
-
-1. Test the connectivity from the Azure VM.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/azure-ping.png" alt-text="Ping test from Azure":::
-
-1. Test the connectivity from the AWS VM.
-
- :::image type="content" source="./media/create-vpn-azure-aws-managed-solutions/aws-ping.png" alt-text="Ping test from AWS":::
-
-## Next steps
-
-* For more information about AWS support for IKEv2, see the [AWS article](https://aws.amazon.com/about-aws/whats-new/2019/02/aws-site-to-site-vpn-now-supports-ikev2/).
-
-* For more information about building a multicloud VPN at scale, see the video [Build the Best MultiCloud VPN at Scale](https://www.youtube.com/watch?v=p7h-frLDFE0).
vpn-gateway https://docs.microsoft.com/en-us/azure/vpn-gateway/openvpn-azure-ad-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/openvpn-azure-ad-tenant.md
@@ -113,4 +113,4 @@ Use the steps in [Add or delete users - Azure Active Directory](../active-direct
## Next steps
-To to your virtual network, you must create and configure a VPN client profile. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+Create and configure a VPN client profile. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
vpn-gateway https://docs.microsoft.com/en-us/azure/vpn-gateway/scripts/vpn-gateway-sample-site-to-site-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/scripts/vpn-gateway-sample-site-to-site-powershell.md
@@ -1,15 +1,13 @@
Title: 'Azure PowerShell script sample - Configure a Site-to-Site VPN | Microsoft Docs'
+ Title: 'Azure PowerShell script sample - Configure a Site-to-Site VPN'
description: Use PowerShell to create a route-based VPN gateway and configure your VPN device to add site-to-site connectivity. -+ Previously updated : 04/30/2018- Last updated : 02/09/2021+
@@ -17,70 +15,69 @@
This script creates a route-based VPN Gateway and adds Site-to-Site configuration. In order to create the connection, you also need to configure your VPN device. For more information, see [About VPN devices and IPsec/IKE parameters for Site-to-Site VPN Gateway connections](../vpn-gateway-about-vpn-devices.md). - ```azurepowershell-interactive # Declare variables $VNetName = "VNet1"
+ $RG = "TestRG1"
+ $Location = "East US"
$FESubName = "FrontEnd"
- $BESubName = "Backend"
+ $BESubName = "BackEnd"
$GWSubName = "GatewaySubnet"
- $VNetPrefix1 = "10.0.0.0/16"
+ $VNetPrefix1 = "10.1.0.0/16"
$FESubPrefix = "10.1.0.0/24" $BESubPrefix = "10.1.1.0/24" $GWSubPrefix = "10.1.255.0/27" $VPNClientAddressPool = "192.168.0.0/24"
- $RG = "TestRG1"
- $Location = "East US"
$GWName = "VNet1GW" $GWIPName = "VNet1GWIP" $GWIPconfName = "gwipconf"
+ $LNGName = "Site1"
# Create a resource group
-New-AzResourceGroup -Name TestRG1 -Location EastUS
+New-AzResourceGroup -Name $RG -Location $Location
# Create a virtual network $virtualNetwork = New-AzVirtualNetwork `
- -ResourceGroupName TestRG1 `
- -Location EastUS `
- -Name VNet1 `
- -AddressPrefix 10.1.0.0/16
+ -ResourceGroupName $RG `
+ -Location $Location `
+ -Name $VNetName `
+ -AddressPrefix $VNetPrefix1
# Create a subnet configuration $subnetConfig = Add-AzVirtualNetworkSubnetConfig `
- -Name Frontend `
- -AddressPrefix 10.1.0.0/24 `
+ -Name $FESubName `
+ -AddressPrefix $FESubPrefix `
-VirtualNetwork $virtualNetwork # Set the subnet configuration for the virtual network $virtualNetwork | Set-AzVirtualNetwork # Add a gateway subnet
-$vnet = Get-AzVirtualNetwork -ResourceGroupName TestRG1 -Name VNet1
-Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.1.255.0/27 -VirtualNetwork $vnet
+$vnet = Get-AzVirtualNetwork -ResourceGroupName $RG -Name $VNetName
+Add-AzVirtualNetworkSubnetConfig -Name $GWSubName -AddressPrefix $GWSubPrefix -VirtualNetwork $vnet
# Set the subnet configuration for the virtual network $vnet | Set-AzVirtualNetwork # Request a public IP address
-$gwpip= New-AzPublicIpAddress -Name VNet1GWIP -ResourceGroupName TestRG1 -Location 'East US' `
+$gwpip= New-AzPublicIpAddress -Name $GWIPName -ResourceGroupName $RG -Location $Location `
-AllocationMethod Dynamic # Create the gateway IP address configuration
-$vnet = Get-AzVirtualNetwork -Name VNet1 -ResourceGroupName TestRG1
-$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
-$gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig1 -SubnetId $subnet.Id -PublicIpAddressId $gwpip.Id
+$vnet = Get-AzVirtualNetwork -Name $VNetName -ResourceGroupName $RG
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name $GWSubName -VirtualNetwork $vnet
+$gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName -SubnetId $subnet.Id -PublicIpAddressId $gwpip.Id
# Create the VPN gateway
-New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 `
- -Location 'East US' -IpConfigurations $gwipconfig -GatewayType Vpn `
+New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
+ -Location $Location -IpConfigurations $gwipconfig -GatewayType Vpn `
-VpnType RouteBased -GatewaySku VpnGw1 # Create the local network gateway
-New-AzLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1 `
- -Location 'East US' -GatewayIpAddress '23.99.221.164' -AddressPrefix @('10.101.0.0/24','10.101.1.0/24')
+New-AzLocalNetworkGateway -Name $LNGName -ResourceGroupName $RG `
+ -Location $Location -GatewayIpAddress '23.99.221.164' -AddressPrefix @('10.101.0.0/24','10.101.1.0/24')
# Configure your on-premises VPN device # Create the VPN connection
-$gateway1 = Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1
-$local = Get-AzLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1
-New-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName TestRG1 `
- -Location 'East US' -VirtualNetworkGateway1 $gateway1 -LocalNetworkGateway2 $local `
- -ConnectionType IPsec -RoutingWeight 10 -SharedKey 'abc123'
+$gateway1 = Get-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG
+$local = Get-AzLocalNetworkGateway -Name $LNGName -ResourceGroupName $RG
+New-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName $RG `
+-Location $Location -VirtualNetworkGateway1 $gateway1 -LocalNetworkGateway2 $local `
+-ConnectionType IPsec -ConnectionProtocol IKEv2 -RoutingWeight 10 -SharedKey 'abc123'
``` ## Clean up resources
-When you no longer need the resources you created, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to delete the resource group. This will delete the resource group and all of the resources it contains.
+When you no longer need the resources you created, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to delete the resource group. This will delete the resource group and all of the resources it contains.