Updates from: 01/16/2021 04:08:37
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-twitter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-twitter.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/15/2021
ms.custom: project-no-code
ms.author: mimart
ms.subservice: B2C
@@ -31,16 +31,21 @@ zone_pivot_groups: b2c-policy-type
## Create an application
-To use Twitter as an identity provider in Azure AD B2C, you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [https://twitter.com/signup](https://twitter.com/signup).
-
-1. Sign in to the [Twitter Developers](https://developer.twitter.com/en/apps) website with your Twitter account credentials.
-1. Select **Create an app**.
-1. Enter an **App name** and an **Application description**.
-1. In **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`.
-1. For the **Callback URL**, enter `https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Replace `your-tenant` with the name of your tenant name and `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1A_signup_signin_twitter`. You need to use all lowercase letters when entering your tenant name and user flow id even if they are defined with uppercase letters in Azure AD B2C.
-1. At the bottom of the page, read and accept the terms, and then select **Create**.
-1. On the **App details** page, select **Edit > Edit details**, check the box for **Enable Sign in with Twitter**, and then select **Save**.
-1. Select **Keys and tokens** and record the **Consumer API Key** and the **Consumer API secret key** values to be used later.
+To enable sign-in for users with a Twitter account in Azure Active Directory B2C (Azure AD B2C), you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [https://twitter.com/signup](https://twitter.com/signup). You also need to [apply for a developer account](https://developer.twitter.com/en/apply/user.html). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access).
+
+1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials.
+1. Under **Standalone Apps**, select **+Create App**.
+1. Enter an **App name**, and then select **Complete**.
+1. Copy the values of the **API key** and the **API key secret**. You use both of them to configure Twitter as an identity provider in your tenant.
+1. Under **Setup your App**, select **App settings**.
+1. Under **Authentication settings**, select **Edit**.
+ 1. Select the **Enable 3-legged OAuth** checkbox.
+ 1. Select the **Request email address from users** checkbox.
+ 1. For the **Callback URLs**, enter `https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Replace `your-tenant` with your tenant name and `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1A_signup_signin_twitter`. You must use all lowercase letters when entering your tenant name and user flow ID, even if they are defined with uppercase letters in Azure AD B2C.
+ 1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`.
+ 1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application.
+ 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
+ 1. Select **Save**.
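The lowercase requirement in the callback URL above is easy to get wrong. As an illustrative sketch (this helper is hypothetical and not part of the article), the URL can be assembled like this:

```python
def b2c_twitter_callback_url(tenant: str, user_flow_id: str) -> str:
    """Build the OAuth1 callback URL for a Twitter identity provider.

    Azure AD B2C requires the tenant name and user flow ID to be
    lowercase in the callback URL, even if they are defined with
    uppercase letters in the portal.
    """
    tenant = tenant.lower()
    user_flow_id = user_flow_id.lower()
    return (f"https://{tenant}.b2clogin.com/"
            f"{tenant}.onmicrosoft.com/{user_flow_id}/oauth1/authresp")

print(b2c_twitter_callback_url("ContosoB2C", "B2C_1A_signup_signin_twitter"))
# https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/b2c_1a_signup_signin_twitter/oauth1/authresp
```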
::: zone pivot="b2c-user-flow"
@@ -51,9 +56,19 @@ To use Twitter as an identity provider in Azure AD B2C, you need to create a Twi
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **Identity providers**, then select **Twitter**.
1. Enter a **Name**. For example, *Twitter*.
-1. For the **Client ID**, enter the Consumer API Key of the Twitter application that you created earlier.
-1. For the **Client secret**, enter the Consumer API secret key that you recorded.
+1. For the **Client ID**, enter the *API Key* of the Twitter application that you created earlier.
+1. For the **Client secret**, enter the *API key secret* that you recorded.
+1. Select **Save**.
+
+## Add Twitter identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Select the user flow for which you want to add the Twitter identity provider.
+1. Under the **Social identity providers**, select **Twitter**.
1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
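`https://jwt.ms` simply decodes the returned token in the browser so you can inspect its claims. The same inspection can be done locally; a minimal sketch (the helper name is hypothetical, and skipping signature validation is only acceptable for inspection, never for trusting a token):

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload of a JWT without verifying its signature.

    For inspection only (like https://jwt.ms); always validate the
    signature before trusting a token in real code.
    """
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Toy token in header.payload.signature shape:
claims = {"idp": "twitter.com", "name": "Contoso User"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJub25lIn0.{payload}."
print(decode_jwt_claims(token)["idp"])  # twitter.com
```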
::: zone-end
@@ -99,7 +114,7 @@ You can define a Twitter account as a claims provider by adding it to the **Clai
<Item Key="request_token_endpoint">https://api.twitter.com/oauth/request_token</Item>
<Item Key="ClaimsEndpoint">https://api.twitter.com/1.1/account/verify_credentials.json?include_email=true</Item>
<Item Key="ClaimsResponseFormat">json</Item>
- <Item Key="client_id">Your Twitter application consumer key</Item>
+ <Item Key="client_id">Your Twitter application API key</Item>
</Metadata>
<CryptographicKeys>
<Key Id="client_secret" StorageReferenceId="B2C_1A_TwitterSecret" />
@@ -123,7 +138,7 @@ You can define a Twitter account as a claims provider by adding it to the **Clai
</ClaimsProvider>
```
-4. Replace the value of **client_id** with the consumer key that you previously recorded.
+4. Replace the value of **client_id** with the *API key* that you previously recorded.
5. Save the file.

### Upload the extension file for verification
@@ -170,24 +185,6 @@ Now that you have a button in place, you need to link it to an action. The actio
3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Twitter identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to the Twitter identity provider.
-1. Under the **Social identity providers**, select **Twitter**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
## Update and test the relying party file

Update the relying party (RP) file that initiates the user journey that you created.
@@ -199,4 +196,4 @@ Update the relying party (RP) file that initiates the user journey that you crea
1. Save your changes, upload the file, and then select the new policy in the list.
1. Make sure that Azure AD B2C application that you created is selected in the **Select application** field, and then test it by clicking **Run now**.
-::: zone-end
\ No newline at end of file
+::: zone-end
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
@@ -785,6 +785,7 @@ The SCIM service must have an HTTP address and server authentication certificate
* Go Daddy
* VeriSign
* WoSign
+* DST Root CA X3
The .NET Core SDK includes an HTTPS development certificate that can be used during development; the certificate is installed as part of the first-run experience. Depending on how you run the ASP.NET Core Web Application, it will listen on a different port:
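To check which CA issued the certificate a SCIM endpoint presents, you can inspect the issuer fields returned by Python's `ssl` module. A hedged sketch (function names are hypothetical; the live fetch requires network access):

```python
import socket
import ssl

def issuer_fields(cert: dict) -> dict:
    """Flatten the issuer RDN sequence returned by ssl.SSLSocket.getpeercert()
    into a plain dict, e.g. {'organizationName': 'DigiCert Inc', ...}."""
    return {key: value for rdn in cert["issuer"] for (key, value) in rdn}

def fetch_cert(host: str, port: int = 443) -> dict:
    """Retrieve the certificate a TLS endpoint presents (requires network)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Synthetic example in the shape getpeercert() returns:
sample = {"issuer": ((("countryName", "US"),),
                     (("organizationName", "DigiCert Inc"),),
                     (("commonName", "DigiCert TLS RSA SHA256 2020 CA1"),))}
print(issuer_fields(sample)["organizationName"])  # DigiCert Inc
```

For a real check, call `issuer_fields(fetch_cert("your-scim-host.example.com"))` and compare the organization against the supported CA list above.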
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-nps-extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-nps-extension.md
@@ -113,6 +113,8 @@ Additionally, connectivity to the following URLs is required to complete the [se
* *https:\//login.microsoftonline.com*
* *https:\//provisioningapi.microsoftonline.com*
* *https:\//aadcdn.msauth.net*
+* *https:\//www.powershellgallery.com*
+* *https:\//aadcdn.msftauthimages.net*
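One quick way to confirm the server can reach the required endpoints listed above is a connectivity probe. A minimal sketch (hypothetical helper; any HTTP response, even an error status, proves the firewall allows the connection):

```python
import urllib.error
import urllib.request

# URLs the NPS extension setup needs to reach (from the list above).
REQUIRED_URLS = [
    "https://login.microsoftonline.com",
    "https://provisioningapi.microsoftonline.com",
    "https://aadcdn.msauth.net",
    "https://www.powershellgallery.com",
    "https://aadcdn.msftauthimages.net",
]

def is_reachable(url: str, timeout: int = 10) -> bool:
    """Return True if an HTTPS request to the URL gets any HTTP response."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # got an HTTP status back, so the host is reachable
    except OSError:
        return False  # DNS failure, timeout, or connection refused

for url in REQUIRED_URLS:
    print(f"{url}: {'ok' if is_reachable(url) else 'BLOCKED'}")
```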
## Prepare your environment
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/terms-of-use https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/terms-of-use.md
@@ -17,41 +17,41 @@ ms.collection: M365-identity-device-management
---

# Azure Active Directory terms of use
-Azure AD terms of use provides a simple method that organizations can use to present information to end users. This presentation ensures users see relevant disclaimers for legal or compliance requirements. This article describes how to get started with terms of use (ToU).
+Azure AD terms of use policies provide a simple method that organizations can use to present information to end users. This presentation ensures users see relevant disclaimers for legal or compliance requirements. This article describes how to get started with terms of use (ToU) policies.
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-intro-sentence.md)]

## Overview videos
-The following video provides a quick overview of terms of use.
+The following video provides a quick overview of terms of use policies.
>[!VIDEO https://www.youtube.com/embed/tj-LK0abNao]

For additional videos, see:
-- [How to deploy terms of use in Azure Active Directory](https://www.youtube.com/embed/N4vgqHO2tgY)
-- [How to roll out terms of use in Azure Active Directory](https://www.youtube.com/embed/t_hA4y9luCY)
+- [How to deploy a terms of use policy in Azure Active Directory](https://www.youtube.com/embed/N4vgqHO2tgY)
+- [How to roll out a terms of use policy in Azure Active Directory](https://www.youtube.com/embed/t_hA4y9luCY)
## What can I do with terms of use?
-Azure AD terms of use has the following capabilities:
--- Require employees or guests to accept your terms of use before getting access.-- Require employees or guests to accept your terms of use on every device before getting access.-- Require employees or guests to accept your terms of use on a recurring schedule.-- Require employees or guests to accept your terms of use prior to registering security information in Azure AD Multi-Factor Authentication (MFA).-- Require employees to accept your terms of use prior to registering security information in Azure AD self-service password reset (SSPR).-- Present general terms of use for all users in your organization.-- Present specific terms of use based on a user attributes (ex. doctors vs nurses or domestic vs international employees, by using [dynamic groups](../enterprise-users/groups-dynamic-membership.md)).-- Present specific terms of use when accessing high business impact applications, like Salesforce.-- Present terms of use in different languages.-- List who has or hasn't accepted to your terms of use.
+Azure AD terms of use policies have the following capabilities:
+
+- Require employees or guests to accept your terms of use policy before getting access.
+- Require employees or guests to accept your terms of use policy on every device before getting access.
+- Require employees or guests to accept your terms of use policy on a recurring schedule.
+- Require employees or guests to accept your terms of use policy prior to registering security information in Azure AD Multi-Factor Authentication (MFA).
+- Require employees to accept your terms of use policy prior to registering security information in Azure AD self-service password reset (SSPR).
+- Present a general terms of use policy for all users in your organization.
+- Present specific terms of use policies based on user attributes (for example, doctors vs. nurses, or domestic vs. international employees, by using [dynamic groups](../enterprise-users/groups-dynamic-membership.md)).
+- Present specific terms of use policies when accessing high business impact applications, like Salesforce.
+- Present terms of use policies in different languages.
+- List who has or hasn't accepted your terms of use policies.
- Assist in meeting privacy regulations.
-- Display a log of terms of use activity for compliance and audit.
-- Create and manage terms of use using [Microsoft Graph APIs](/graph/api/resources/agreement?view=graph-rest-beta) (currently in preview).
+- Display a log of terms of use policy activity for compliance and audit.
+- Create and manage terms of use policies using [Microsoft Graph APIs](/graph/api/resources/agreement?view=graph-rest-beta) (currently in preview).
## Prerequisites
-To use and configure Azure AD terms of use, you must have:
+To use and configure Azure AD terms of use policies, you must have:
- Azure AD Premium P1, P2, EMS E3, or EMS E5 subscription.
   - If you don't have one of these subscriptions, you can [get Azure AD Premium](../fundamentals/active-directory-get-started-premium.md) or [enable Azure AD Premium trial](https://azure.microsoft.com/trial/get-started-active-directory/).
@@ -62,11 +62,11 @@ To use and configure Azure AD terms of use, you must have:
## Terms of use document
-Azure AD terms of use uses the PDF format to present content. The PDF file can be any content, such as existing contract documents, allowing you to collect end-user agreements during user sign-in. To support users on mobile devices, the recommended font size in the PDF is 24 point.
+Azure AD terms of use policies use the PDF format to present content. The PDF file can be any content, such as existing contract documents, allowing you to collect end-user agreements during user sign-in. To support users on mobile devices, the recommended font size in the PDF is 24 point.
## Add terms of use
-Once you have finalized your terms of use document, use the following procedure to add it.
+Once you have finalized your terms of use policy document, use the following procedure to add it.
1. Sign in to Azure as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
@@ -77,22 +77,22 @@ Once you have finalized your terms of use document, use the following procedure
![New term of use pane to specify your terms of use settings](./media/terms-of-use/new-tou.png)
-1. In the **Name** box, enter a name for the terms of use that will be used in the Azure portal.
+1. In the **Name** box, enter a name for the terms of use policy that will be used in the Azure portal.
1. In the **Display name** box, enter a title that users see when they sign in.
-1. For **Terms of use document**, browse to your finalized terms of use PDF and select it.
-1. Select the language for your terms of use document. The language option allows you to upload multiple terms of use, each with a different language. The version of the terms of use that an end user will see will be based on their browser preferences.
-1. To require end users to view the terms of use prior to accepting them, set **Require users to expand the terms of use** to **On**.
-1. To require end users to accept your terms of use on every device they are accessing from, set **Require users to consent on every device** to **On**. Users may be required to install additional applications if this option is enabled. For more information, see [Per-device terms of use](#per-device-terms-of-use).
-1. If you want to expire terms of use consents on a schedule, set **Expire consents** to **On**. When set to On, two additional schedule settings are displayed.
+1. For **Terms of use document**, browse to your finalized terms of use policy PDF and select it.
+1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user will see will be based on their browser preferences.
+1. To require end users to view the terms of use policy prior to accepting them, set **Require users to expand the terms of use** to **On**.
+1. To require end users to accept your terms of use policy on every device they are accessing from, set **Require users to consent on every device** to **On**. Users may be required to install additional applications if this option is enabled. For more information, see [Per-device terms of use](#per-device-terms-of-use).
+1. If you want to expire terms of use policy consents on a schedule, set **Expire consents** to **On**. When set to On, two additional schedule settings are displayed.
![Expire consents settings to set start date, frequency, and duration](./media/terms-of-use/expire-consents.png)
-1. Use the **Expire starting on** and **Frequency** settings to specify the schedule for terms of use expirations. The following table shows the result for a couple of example settings:
+1. Use the **Expire starting on** and **Frequency** settings to specify the schedule for terms of use policy expirations. The following table shows the result for a couple of example settings:
| Expire starting on | Frequency | Result |
| --- | --- | --- |
- | Today's date | Monthly | Starting today, users must accept the terms of use and then reaccept every month. |
- | Date in the future | Monthly | Starting today, users must accept the terms of use. When the future date occurs, consents will expire and then users must reaccept every month. |
+ | Today's date | Monthly | Starting today, users must accept the terms of use policy and then reaccept every month. |
+ | Date in the future | Monthly | Starting today, users must accept the terms of use policy. When the future date occurs, consents will expire and then users must reaccept every month. |
For example, if you set the expire starting on date to **Jan 1** and frequency to **Monthly**, here is how expirations might occur for two users:
@@ -101,7 +101,7 @@ Once you have finalized your terms of use document, use the following procedure
| Alice | Jan 1 | Feb 1 | Mar 1 | Apr 1 |
| Bob | Jan 15 | Feb 1 | Mar 1 | Apr 1 |
-1. Use the **Duration before reacceptance requires (days)** setting to specify the number of days before the user must reaccept the terms of use. This allows users to follow their own schedule. For example, if you set the duration to **30** days, here is how expirations might occur for two users:
+1. Use the **Duration before reacceptance requires (days)** setting to specify the number of days before the user must reaccept the terms of use policy. This allows users to follow their own schedule. For example, if you set the duration to **30** days, here is how expirations might occur for two users:
| User | First accept date | First expire date | Second expire date | Third expire date |
| --- | --- | --- | --- | --- |
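The two expiration modes described above can be sketched as date arithmetic. These helpers are hypothetical illustrations (not product code) and assume the schedule day is 28 or less, so every month contains it:

```python
from datetime import date, timedelta

def next_scheduled_expiry(accept: date, start: date) -> date:
    """Schedule mode (Expire consents + Monthly frequency): a consent
    expires on the first monthly schedule date after acceptance, so all
    users re-accept on the same day regardless of when they accepted."""
    year, month = start.year, start.month
    while date(year, month, start.day) <= accept:
        month += 1
        if month > 12:
            month, year = 1, year + 1
    return date(year, month, start.day)

def duration_expiry(accept: date, days: int) -> date:
    """Duration mode: each user expires `days` after their own accept date."""
    return accept + timedelta(days=days)

# Alice and Bob from the tables above (schedule starts Jan 1, monthly):
start = date(2021, 1, 1)
print(next_scheduled_expiry(date(2021, 1, 1), start))   # 2021-02-01 (Alice)
print(next_scheduled_expiry(date(2021, 1, 15), start))  # 2021-02-01 (Bob)
```

Note how Bob, accepting on Jan 15, still expires on Feb 1 in schedule mode, while in duration mode he would expire 30 days after his own acceptance.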
@@ -110,7 +110,7 @@ Once you have finalized your terms of use document, use the following procedure
It is possible to use the **Expire consents** and **Duration before reacceptance requires (days)** settings together, but typically you use one or the other.
-1. Under **Conditional Access**, use the **Enforce with Conditional Access policy template** list to select the template to enforce the terms of use.
+1. Under **Conditional Access**, use the **Enforce with Conditional Access policy template** list to select the template to enforce the terms of use policy.
![Conditional Access drop-down list to select a policy template](./media/terms-of-use/conditional-access-templates.png)
@@ -118,13 +118,13 @@ Once you have finalized your terms of use document, use the following procedure
| --- | --- |
| **Access to cloud apps for all guests** | A Conditional Access policy will be created for all guests and all cloud apps. This policy impacts the Azure portal. Once this is created, you might be required to sign out and sign in. |
| **Access to cloud apps for all users** | A Conditional Access policy will be created for all users and all cloud apps. This policy impacts the Azure portal. Once this is created, you will be required to sign out and sign in. |
- | **Custom policy** | Select the users, groups, and apps that this terms of use will be applied to. |
- | **Create Conditional Access policy later** | This terms of use will appear in the grant control list when creating a Conditional Access policy. |
+ | **Custom policy** | Select the users, groups, and apps that this terms of use policy will be applied to. |
+ | **Create Conditional Access policy later** | This terms of use policy will appear in the grant control list when creating a Conditional Access policy. |
>[!IMPORTANT]
- >Conditional Access policy controls (including terms of use) do not support enforcement on service accounts. We recommend excluding all service accounts from the Conditional Access policy.
+ >Conditional Access policy controls (including terms of use policies) do not support enforcement on service accounts. We recommend excluding all service accounts from the Conditional Access policy.
- Custom Conditional Access policies enable granular terms of use, down to a specific cloud application or group of users. For more information, see [Quickstart: Require terms of use to be accepted before accessing cloud apps](require-tou.md).
+ Custom Conditional Access policies enable granular terms of use policies, down to a specific cloud application or group of users. For more information, see [Quickstart: Require terms of use to be accepted before accessing cloud apps](require-tou.md).
1. Click **Create**.
@@ -132,19 +132,19 @@ Once you have finalized your terms of use document, use the following procedure
![New Conditional Access pane if you chose the custom Conditional Access policy template](./media/terms-of-use/custom-policy.png)
- You should now see your new terms of use.
+ You should now see your new terms of use policies.
![New terms of use listed in the terms of use blade](./media/terms-of-use/create-tou.png)

## View report of who has accepted and declined
-The Terms of use blade shows a count of the users who have accepted and declined. These counts and who accepted/declined are stored for the life of the terms of use.
+The Terms of use blade shows a count of the users who have accepted and declined. These counts and who accepted/declined are stored for the life of the terms of use policy.
1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).

   ![Terms of use blade listing the number of users who have accepted and declined](./media/terms-of-use/view-tou.png)
-1. For a terms of use, click the numbers under **Accepted** or **Declined** to view the current state for users.
+1. For a terms of use policy, click the numbers under **Accepted** or **Declined** to view the current state for users.
![Terms of use consents pane listing the users that have accepted](./media/terms-of-use/accepted-tou.png)
@@ -158,12 +158,12 @@ The Terms of use blade shows a count of the users who have accepted and declined
## View Azure AD audit logs
-If you want to view additional activity, Azure AD terms of use includes audit logs. Each user consent triggers an event in the audit logs that is stored for **30 days**. You can view these logs in the portal or download as a .csv file.
+If you want to view additional activity, Azure AD terms of use policies include audit logs. Each user consent triggers an event in the audit logs that is stored for **30 days**. You can view these logs in the portal or download them as a .csv file.
To get started with Azure AD audit logs, use the following procedure:

1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
-1. Select a terms of use.
+1. Select a terms of use policy.
1. Click **View audit logs**.

   ![Terms of use blade with the View audit logs option highlighted](./media/terms-of-use/audit-tou.png)
@@ -180,23 +180,23 @@ To get started with Azure AD audit logs, use the following procedure:
## What terms of use looks like for users
-Once a terms of use is created and enforced, users, who are in scope, will see the following screen during sign-in.
+Once a terms of use policy is created and enforced, users who are in scope will see the following screen during sign-in.
![Example terms of use that appears when a user signs in](./media/terms-of-use/user-tou.png)
-Users can view the terms of use and, if necessary, use buttons to zoom in and out.
+Users can view the terms of use policy and, if necessary, use buttons to zoom in and out.
![View of terms of use with zoom buttons](./media/terms-of-use/zoom-buttons.png)
-The following screen shows how terms of use looks on mobile devices.
+The following screen shows how a terms of use policy looks on mobile devices.
![Example terms of use that appears when a user signs in on a mobile device](./media/terms-of-use/mobile-tou.png)
-Users are only required to accept the terms of use once and they will not see the terms of use again on subsequent sign-ins.
+Users are only required to accept the terms of use policy once and they will not see the terms of use policy again on subsequent sign-ins.
### How users can review their terms of use
-Users can review and see the terms of use that they have accepted by using the following procedure.
+Users can review and see the terms of use policies that they have accepted by using the following procedure.
1. Sign in to [https://myapps.microsoft.com](https://myapps.microsoft.com).
1. In the upper right corner, click your name and select **Profile**.
@@ -207,23 +207,23 @@ Users can review and see the terms of use that they have accepted by using the f
![Profile page for a user showing the Review terms of use link](./media/terms-of-use/tou13a.png)
-1. From there, you can review the terms of use you have accepted.
+1. From there, you can review the terms of use policies you have accepted.
## Edit terms of use details
-You can edit some details of terms of use, but you can't modify an existing document. The following procedure describes how to edit the details.
+You can edit some details of terms of use policies, but you can't modify an existing document. The following procedure describes how to edit the details.
1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
-1. Select the terms of use you want to edit.
+1. Select the terms of use policy you want to edit.
1. Click **Edit terms**.
1. In the Edit terms of use pane, you can change the following:
 - **Name** – this is the internal name of the ToU that is not shared with end users
 - **Display name** – this is the name that end users can see when viewing the ToU
- - **Require users to expand the terms of use** ΓÇô Setting this to **On** will force the end use to expand the terms of use document before accepting it.
+ - **Require users to expand the terms of use** – Setting this to **On** will force the end user to expand the terms of use policy document before accepting it.
 - (Preview) You can **update an existing terms of use** document
 - You can add a language to an existing ToU
- If there are other settings you would like to change, such as PDF document, require users to consent on every device, expire consents, duration before reacceptance, or Conditional Access policy, you must create a new terms of use.
+ If there are other settings you would like to change, such as PDF document, require users to consent on every device, expire consents, duration before reacceptance, or Conditional Access policy, you must create a new terms of use policy.
![Edit showing different language options ](./media/terms-of-use/edit-terms-use.png)
@@ -232,7 +232,7 @@ You can edit some details of terms of use, but you can't modify an existing docu
## Update the version or pdf of an existing terms of use

1. Sign in to Azure and navigate to [Terms of use](https://aka.ms/catou)
-2. Select the terms of use you want to edit.
+2. Select the terms of use policy you want to edit.
3. Click **Edit terms**.
4. For the language that you want to update with a new version, click **Update** under the action column
@@ -249,7 +249,7 @@ You can edit some details of terms of use, but you can't modify an existing docu
## View previous versions of a terms of use

1. Sign in to Azure and navigate to **Terms of use** at https://aka.ms/catou.
-2. Select the terms of use for which you want to view a version history.
+2. Select the terms of use policy for which you want to view a version history.
3. Click on **Languages and version history**
4. Click on **See previous versions.**
@@ -271,7 +271,7 @@ You can edit some details of terms of use, but you can't modify an existing docu
The following procedure describes how to add a terms of use language.

1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
-1. Select the terms of use you want to edit.
+1. Select the terms of use policy you want to edit.
1. Click **Edit Terms**
1. Click **Add language** at the bottom of the page.
1. In the Add terms of use language pane, upload your localized PDF and select the language.
@@ -285,7 +285,7 @@ The following procedure describes how to add a terms of use language.
## Per-device terms of use
-The **Require users to consent on every device** setting enables you to require end users to accept your terms of use on every device they are accessing from. The end user will be required to register their device in Azure AD. When the device is registered, the device ID is used to enforce the terms of use on each device.
+The **Require users to consent on every device** setting enables you to require end users to accept your terms of use policy on every device they are accessing from. The end user will be required to register their device in Azure AD. When the device is registered, the device ID is used to enforce the terms of use policy on each device.
Here is a list of the supported platforms and software.
@@ -301,7 +301,7 @@ Per-device terms of use has the following constraints:
- A device can only be joined to one tenant.
- A user must have permissions to join their device.
-- The Intune Enrollment app is not supported. Ensure that it is excluded from any Conditional Access policy requiring Terms of Use.
+- The Intune Enrollment app is not supported. Ensure that it is excluded from any Conditional Access policy requiring a terms of use policy.
- Azure AD B2B users are not supported.

If the user's device is not joined, they will receive a message that they need to join their device. Their experience will be dependent on the platform and software.
@@ -330,20 +330,20 @@ If a user is using browser that is not supported, they will be asked to use a di
## Delete terms of use
-You can delete old terms of use using the following procedure.
+You can delete old terms of use policies using the following procedure.
1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
-1. Select the terms of use you want to remove.
+1. Select the terms of use policy you want to remove.
1. Click **Delete terms**.
1. In the message that appears asking if you want to continue, click **Yes**.

   ![Message asking for confirmation to delete terms of use](./media/terms-of-use/delete-tou.png)
- You should no longer see your terms of use.
+ You should no longer see your terms of use policy.
## Deleted users and active terms of use
-By default, a deleted user is in a deleted state in Azure AD for 30 days, during which time they can be restored by an administrator if necessary. After 30 days, that user is permanently deleted. In addition, using the Azure Active Directory portal, a Global Administrator can explicitly [permanently delete a recently deleted user](../fundamentals/active-directory-users-restore.md) before that time period is reached. One a user has been permanently deleted, subsequent data about that user will be removed from the active terms of use. Audit information about deleted users remains in the audit log.
+By default, a deleted user is in a deleted state in Azure AD for 30 days, during which time they can be restored by an administrator if necessary. After 30 days, that user is permanently deleted. In addition, using the Azure Active Directory portal, a Global Administrator can explicitly [permanently delete a recently deleted user](../fundamentals/active-directory-users-restore.md) before that time period is reached. Once a user has been permanently deleted, subsequent data about that user will be removed from the active terms of use policy. Audit information about deleted users remains in the audit log.
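The 30-day soft-delete window described above can be sketched as follows (an illustrative helper, not an Azure AD API; the only fact taken from the text is the 30-day figure):

```python
from datetime import datetime, timedelta

SOFT_DELETE_WINDOW = timedelta(days=30)  # per the Azure AD default described above


def permanent_deletion_date(deleted_on: datetime) -> datetime:
    """Date after which a soft-deleted user can no longer be restored."""
    return deleted_on + SOFT_DELETE_WINDOW


def is_restorable(deleted_on: datetime, now: datetime) -> bool:
    """True while an administrator can still restore the deleted user."""
    return now < permanent_deletion_date(deleted_on)
```

Note that a Global Administrator can permanently delete the user earlier, so this window is an upper bound, not a guarantee.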
## Policy changes
@@ -352,30 +352,30 @@ Conditional Access policies take effect immediately. When this happens, the admi
> [!IMPORTANT]
> Users in scope will need to sign out and sign back in to satisfy a new policy if:
>
-> - a Conditional Access policy is enabled on a terms of use
-> - or a second terms of use is created
+> - a Conditional Access policy is enabled on a terms of use policy
+> - or a second terms of use policy is created
## B2B guests
-Most organizations have a process in place for their employees to consent to their organization's terms of use and privacy statements. But how can you enforce the same consents for Azure AD business-to-business (B2B) guests when they're added via SharePoint or Teams? Using Conditional Access and terms of use, you can enforce a policy directly towards B2B guest users. During the invitation redemption flow, the user is presented with the terms of use. This support is currently in preview.
+Most organizations have a process in place for their employees to consent to their organization's terms of use policy and privacy statements. But how can you enforce the same consents for Azure AD business-to-business (B2B) guests when they're added via SharePoint or Teams? Using Conditional Access and terms of use policies, you can enforce a policy directly towards B2B guest users. During the invitation redemption flow, the user is presented with the terms of use policy. This support is currently in preview.
-Terms of use will only be displayed when the user has a guest account in Azure AD. SharePoint Online currently has an [ad hoc external sharing recipient experience](/sharepoint/what-s-new-in-sharing-in-targeted-release) to share a document or a folder that does not require the user to have a guest account. In this case, a terms of use is not displayed.
+Terms of use policies will only be displayed when the user has a guest account in Azure AD. SharePoint Online currently has an [ad hoc external sharing recipient experience](/sharepoint/what-s-new-in-sharing-in-targeted-release) to share a document or a folder that does not require the user to have a guest account. In this case, a terms of use policy is not displayed.
![Users and groups pane - Include tab with All guest users option checked](./media/terms-of-use/b2b-guests.png)

## Support for cloud apps
-Terms of use can be used for different cloud apps, such as Azure Information Protection and Microsoft Intune. This support is currently in preview.
+Terms of use policies can be used for different cloud apps, such as Azure Information Protection and Microsoft Intune. This support is currently in preview.
### Azure Information Protection
-You can configure a Conditional Access policy for the Azure Information Protection app and require a terms of use when a user accesses a protected document. This will trigger a terms of use prior to a user accessing a protected document for the first time.
+You can configure a Conditional Access policy for the Azure Information Protection app and require a terms of use policy when a user accesses a protected document. This will trigger a terms of use policy prior to a user accessing a protected document for the first time.
![Cloud apps pane with Microsoft Azure Information Protection app selected](./media/terms-of-use/cloud-app-info-protection.png)

### Microsoft Intune Enrollment
-You can configure a Conditional Access policy for the Microsoft Intune Enrollment app and require a terms of use prior to the enrollment of a device in Intune. For more information, see the Read [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
+You can configure a Conditional Access policy for the Microsoft Intune Enrollment app and require a terms of use policy prior to the enrollment of a device in Intune. For more information, read the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
![Cloud apps pane with Microsoft Intune app selected](./media/terms-of-use/cloud-app-intune.png)
@@ -384,6 +384,9 @@ You can configure a Conditional Access policy for the Microsoft Intune Enrollmen
## Frequently asked questions
+**Q: I cannot sign in using PowerShell when a terms of use policy is enabled.**<br />
+A: A terms of use policy can be accepted only when authenticating interactively.
+
**Q: How do I see when/if a user has accepted a terms of use?**<br />
A: On the Terms of use blade, click the number under **Accepted**. You can also view or search the accept activity in the Azure AD audit logs. For more information, see View report of who has accepted and declined and [View Azure AD audit logs](#view-azure-ad-audit-logs).
@@ -391,34 +394,34 @@ A: On the Terms of use blade, click the number under **Accepted**. You can also
A: The user counts in the terms of use report and who accepted/declined are stored for the life of the terms of use. The Azure AD audit logs are stored for 30 days.

**Q: Why do I see a different number of consents in the terms of use report vs. the Azure AD audit logs?**<br />
-A: The terms of use report is stored for the lifetime of that terms of use, while the Azure AD audit logs are stored for 30 days. Also, the terms of use report only displays the users current consent state. For example, if a user declines and then accepts, the terms of use report will only show that user's accept. If you need to see the history, you can use the Azure AD audit logs.
+A: The terms of use report is stored for the lifetime of that terms of use policy, while the Azure AD audit logs are stored for 30 days. Also, the terms of use report only displays the user's current consent state. For example, if a user declines and then accepts, the terms of use report will only show that user's accept. If you need to see the history, you can use the Azure AD audit logs.
-**Q: If I edit the details for a terms of use, does it require users to accept again?**<br />
-A: No, if an administrator edits the details for a terms of use (name, display name, require users to expand, or add a language), it does not require users to reaccept the new terms.
+**Q: If I edit the details for a terms of use policy, does it require users to accept again?**<br />
+A: No, if an administrator edits the details for a terms of use policy (name, display name, require users to expand, or add a language), it does not require users to reaccept the new terms.
-**Q: Can I update an existing terms of use document?**<br />
-A: Currently, you can't update an existing terms of use document. To change a terms of use document, you will have to create a new terms of use instance.
+**Q: Can I update an existing terms of use policy document?**<br />
+A: Currently, you can't update an existing terms of use policy document. To change a terms of use policy document, you will have to create a new terms of use policy instance.
-**Q: If hyperlinks are in the terms of use PDF document, will end users be able to click them?**<br />
-A: Yes, end users are able to select hyperlinks to additional pages but links to sections within the document are not supported. Also, hyperlinks in terms of use PDFs do not work when accessed from the Azure AD MyApps/MyAccount portal.
+**Q: If hyperlinks are in the terms of use policy PDF document, will end users be able to click them?**<br />
+A: Yes, end users are able to select hyperlinks to additional pages but links to sections within the document are not supported. Also, hyperlinks in terms of use policy PDFs do not work when accessed from the Azure AD MyApps/MyAccount portal.
-**Q: Can a terms of use support multiple languages?**<br />
-A: Yes. Currently there are 108 different languages an administrator can configure for a single terms of use. An administrator can upload multiple PDF documents and tag those documents with a corresponding language (up to 108). When end users sign in, we look at their browser language preference and display the matching document. If there is no match, we will display the default document, which is the first document that is uploaded.
+**Q: Can a terms of use policy support multiple languages?**<br />
+A: Yes. Currently there are 108 different languages an administrator can configure for a single terms of use policy. An administrator can upload multiple PDF documents and tag those documents with a corresponding language (up to 108). When end users sign in, we look at their browser language preference and display the matching document. If there is no match, we will display the default document, which is the first document that is uploaded.
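The language-matching behavior described in this answer can be sketched in a few lines (an illustrative model only; the real service's data structures are not documented here):

```python
def pick_tou_document(browser_langs, documents):
    """Pick the terms-of-use document matching the user's browser language
    preference, falling back to the default (first-uploaded) document.

    `documents` maps a language tag to a document name, in upload order
    (a hypothetical representation; Python dicts preserve insertion order).
    """
    for lang in browser_langs:
        if lang in documents:
            return documents[lang]
    # No match: the default is the first document that was uploaded.
    return next(iter(documents.values()))
```

For example, a user whose browser prefers `de-DE` against documents tagged `en-US` and `fr-FR` would receive the `en-US` document, because it was uploaded first.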
-**Q: When is the terms of use triggered?**<br />
-A: The terms of use is triggered during the sign-in experience.
+**Q: When is the terms of use policy triggered?**<br />
+A: The terms of use policy is triggered during the sign-in experience.
-**Q: What applications can I target a terms of use to?**<br />
+**Q: What applications can I target a terms of use policy to?**<br />
A: You can create a Conditional Access policy on the enterprise applications using modern authentication. For more information, see [enterprise applications](./../manage-apps/view-applications-portal.md).
-**Q: Can I add multiple terms of use to a given user or app?**<br />
-A: Yes, by creating multiple Conditional Access policies targeting those groups or applications. If a user falls in scope of multiple terms of use, they accept one terms of use at a time.
+**Q: Can I add multiple terms of use policies to a given user or app?**<br />
+A: Yes, by creating multiple Conditional Access policies targeting those groups or applications. If a user falls in scope of multiple terms of use policies, they accept one terms of use policy at a time.
-**Q: What happens if a user declines the terms of use?**<br />
+**Q: What happens if a user declines the terms of use policy?**<br />
A: The user is blocked from getting access to the application. The user would have to sign in again and accept the terms in order to get access.
-**Q: Is it possible to unaccept a terms of use that was previously accepted?**<br />
-A: You can [review previously accepted terms of use](#how-users-can-review-their-terms-of-use), but currently there isn't a way to unaccept.
+**Q: Is it possible to unaccept a terms of use policy that was previously accepted?**<br />
+A: You can [review previously accepted terms of use policies](#how-users-can-review-their-terms-of-use), but currently there isn't a way to unaccept.
**Q: What happens if I'm also using Intune terms and conditions?**<br /> A: If you have configured both Azure AD terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user will be required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
@@ -428,4 +431,4 @@ A: Terms of use utilizes the following endpoints for authentication: https://tok
## Next steps

-- [Quickstart: Require terms of use to be accepted before accessing cloud apps](require-tou.md)\ No newline at end of file
+- [Quickstart: Require terms of use to be accepted before accessing cloud apps](require-tou.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/access-tokens.md
@@ -288,10 +288,7 @@ A *non-password-based* login is one where the user didn't type in a password to
- Voice
- PIN
-> [!NOTE]
-> Primary Refresh Tokens (PRT) on Windows 10 are segregated based on the credential. For example, Windows Hello and password have their respective PRTs, isolated from one another. When a user signs-in with a Hello credential (PIN or biometrics) and then changes the password, the password based PRT obtained previously will be revoked. Signing back in with a password invalidates the old PRT and requests a new one.
->
-> Refresh tokens aren't invalidated or revoked when used to fetch a new access token and refresh token. However, your app should discard the old one as soon as it's used and replace it with the new one, as the new token has a new expiration time in it.
+Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) for more details on primary refresh tokens.
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-configurable-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
@@ -96,8 +96,7 @@ Confidential clients are applications that can securely store a client password
Public clients cannot securely store a client password (secret). For example, an iOS/Android app cannot obfuscate a secret from the resource owner, so it is considered a public client. You can set policies on resources to prevent refresh tokens from public clients older than a specified period from obtaining a new access/refresh token pair. To do this, use the [Refresh Token Max Inactive Time property](#refresh-token-max-inactive-time) (`MaxInactiveTime`). You also can use policies to set a period beyond which the refresh tokens are no longer accepted. To do this, use the [Single-Factor Refresh Token Max Age](#single-factor-session-token-max-age) or [Multi-Factor Session Token Max Age](#multi-factor-refresh-token-max-age) property. You can adjust the lifetime of a refresh token to control when and how often the user is required to reenter credentials, instead of being silently reauthenticated, when using a public client application.
-> [!NOTE]
-> The Max Age property is the length of time a single token can be used.
+The Max Age property is the length of time a single token can be used.
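The interaction between the two policy properties mentioned above (`MaxInactiveTime` and Max Age) can be sketched as two independent checks (an illustrative model, not the actual token-service logic):

```python
from datetime import datetime, timedelta


def refresh_token_usable(issued_at: datetime, last_used: datetime, now: datetime,
                         max_inactive_time: timedelta, max_age: timedelta) -> bool:
    """Sketch of the policy checks described above (illustrative only)."""
    if now - last_used > max_inactive_time:
        return False  # token unused for longer than MaxInactiveTime
    if now - issued_at > max_age:
        return False  # single token used beyond its maximum age
    return True
```

A token must pass both checks: a recently used token still expires once its total age exceeds the Max Age setting, and a young token is still rejected if it has sat unused past `MaxInactiveTime`.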
### Single sign-on session tokens

When a user authenticates with Microsoft identity platform, a single sign-on session (SSO) is established with the user's browser and Microsoft identity platform. The SSO token, in the form of a cookie, represents this session. The SSO session token is not bound to a specific resource/client application. SSO session tokens can be revoked, and their validity is checked every time they are used.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-optional-claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-optional-claims.md
@@ -41,7 +41,7 @@ While optional claims are supported in both v1.0 and v2.0 format tokens, as well
The set of optional claims available by default for applications to use are listed below. To add custom optional claims for your application, see [Directory Extensions](#configuring-directory-extension-optional-claims), below. When adding claims to the **access token**, the claims apply to access tokens requested *for* the application (a web API), not claims requested *by* the application. No matter how the client accesses your API, the right data is present in the access token that is used to authenticate against your API.

> [!NOTE]
-> The majority of these claims can be included in JWTs for v1.0 and v2.0 tokens, but not SAML tokens, except where noted in the Token Type column. Consumer accounts support a subset of these claims, marked in the "User Type" column. Many of the claims listed do not apply to consumer users (they have no tenant, so `tenant_ctry` has no value).
+> The majority of these claims can be included in JWTs for v1.0 and v2.0 tokens, but not SAML tokens, except where noted in the Token Type column. Consumer accounts support a subset of these claims, marked in the "User Type" column. Many of the claims listed do not apply to consumer users (they have no tenant, so `tenant_ctry` has no value).
**Table 2: v1.0 and v2.0 optional claim set**
@@ -144,13 +144,13 @@ You can configure optional claims for your application through the UI or applica
[![Configure optional claims in the UI](./media/active-directory-optional-claims/token-configuration.png)](./media/active-directory-optional-claims/token-configuration.png)

1. Under **Manage**, select **Token configuration**.
+ - The UI option **Token configuration** blade is not available for apps registered in an Azure AD B2C tenant; those apps can be configured by modifying the application manifest instead. For more information, see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md).
+
1. Select **Add optional claim**.
1. Select the token type you want to configure.
1. Select the optional claims to add.
1. Select **Add**.
-> [!NOTE]
-> The UI option **Token configuration** blade is not available for apps registered in an Azure AD B2C tenant currently. For applications registered in a B2C tenant, the optional claims can be configured by modifying the application manifest. For more information see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md)
**Configuring optional claims through the application manifest:**
@@ -223,8 +223,7 @@ In addition to the standard optional claims set, you can also configure tokens t
Schema and open extensions are not supported by optional claims, only the AAD-Graph style directory extensions. This feature is useful for attaching additional user information that your app can use, for example, an additional identifier or important configuration option that the user has set. See the bottom of this page for an example.
-> [!NOTE]
-> Directory schema extensions are an Azure AD-only feature. If your application manifest requests a custom extension and an MSA user logs in to your app, these extensions will not be returned.
+Directory schema extensions are an Azure AD-only feature. If your application manifest requests a custom extension and an MSA user logs in to your app, these extensions will not be returned.
### Directory extension formatting
@@ -286,8 +285,7 @@ This section covers the configuration options under optional claims for changing
- accessToken for the OAuth access token
- Saml2Token for SAML tokens.
- > [!NOTE]
- > The Saml2Token type applies to both SAML1.1 and SAML2.0 format tokens.
+ The Saml2Token type applies to both SAML1.1 and SAML2.0 format tokens.
For each relevant token type, modify the groups claim to use the OptionalClaims section in the manifest. The OptionalClaims schema is as follows:
@@ -311,8 +309,7 @@ This section covers the configuration options under optional claims for changing
Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values will be emitted in the role claim.
- > [!NOTE]
- > If "emit_as_roles" is used, any application roles configured that the user is assigned will not appear in the role claim.
+ If "emit_as_roles" is used, any application roles configured that the user is assigned will not appear in the role claim.
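As an illustration of the configuration this paragraph describes, a manifest fragment emitting group values in the role claim might look like the following (shown as a Python dict for readability; the field names follow the OptionalClaims schema, but the exact values are illustrative):

```python
import json

# Hypothetical manifest fragment: request the "groups" claim in the ID token,
# with "emit_as_roles" so group values are emitted in the role claim instead.
manifest_fragment = {
    "optionalClaims": {
        "idToken": [
            {
                "name": "groups",
                "source": None,
                "essential": False,
                # Caution (per the text above): with "emit_as_roles", app roles
                # assigned to the user will NOT appear in the role claim.
                "additionalProperties": ["emit_as_roles"],
            }
        ]
    }
}

print(json.dumps(manifest_fragment, indent=2))
```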
**Examples:**
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-register-app.md
@@ -96,7 +96,7 @@ There are certain restrictions on the format of the redirect URIs you add to an
## Add credentials
-Credentials are used by confidential client applications that access a web API. Examples of confidential clients are web apps, other web APIs, or service- and daemon-type applications. Credentials allow your application to authenticate as itself, requiring no interaction from a user at runtime.
+Credentials are used by [confidential client applications](msal-client-applications.md) that access a web API. Examples of confidential clients are [web apps](scenario-web-app-call-api-overview.md), other [web APIs](scenario-protected-web-api-overview.md), or [service- and daemon-type applications](scenario-daemon-overview.md). Credentials allow your application to authenticate as itself, requiring no interaction from a user at runtime.
You can add both certificates and client secrets (a string) as credentials to your confidential client app registration.
@@ -104,7 +104,7 @@ You can add both certificates and client secrets (a string) as credentials to yo
### Add a certificate
-Sometimes called a *public key*, certificates are the recommended credential type as they provide a higher level of assurance than a client secret.
+Sometimes called a *public key*, certificates are the recommended credential type as they provide a higher level of assurance than a client secret. For more information about using a certificate as an authentication method in your application, see [Microsoft identity platform application authentication certificate credentials](active-directory-certificate-credentials.md).
1. Select your application in **App registrations** in the Azure portal.
1. Select **Certificates & secrets** > **Upload certificate**.
@@ -113,7 +113,7 @@ Sometimes called a *public key*, certificates are the recommended credential typ
### Add a client secret
-The client secret, known also as an *application password*, is a string value your app can use in place of a certificate to identity itself. It's the easier of the two credential types to use and is often used during development, but is considered less secure than a certificate. You should use certificates in your applications running in production.
+The client secret, known also as an *application password*, is a string value your app can use in place of a certificate to identify itself. It's the easier of the two credential types to use and is often used during development, but is considered less secure than a certificate. You should use certificates in your applications running in production. For more information on application security recommendations, see [Microsoft identity platform best practices and recommendations](identity-platform-integration-checklist.md#security).
1. Select your application in **App registrations** in the Azure portal.
1. Select **Certificates & secrets** > **New client secret**.
@@ -122,6 +122,8 @@ The client secret, known also as an *application password*, is a string value yo
1. Select **Add**.
1. **Record the secret's value** for use in your client application code - it's *never displayed again* after you leave this page.
+**Note:** The ID generated along with the secret's value is the ID of the secret, which is different from the application (client) ID.
+
## Next steps

Client applications typically need to access resources in a web API. In addition to protecting your client application with the Microsoft identity platform, you can use the platform for authorizing scoped, permissions-based access to your web API.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-acquire-token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-acquire-token.md
@@ -417,8 +417,8 @@ To sign in a domain user on a domain or Azure AD joined machine, use Integrated
- Integrated Windows Authentication is usable for *federated* users only, that is, users created in Active Directory and backed by Azure AD. Users created directly in Azure AD without Active Directory backing, known as *managed* users, can't use this authentication flow. This limitation doesn't affect the username and password flow.
- IWA is for apps written for .NET Framework, .NET Core, and Universal Windows Platform (UWP) platforms.
- IWA doesn't bypass [multi-factor authentication (MFA)](../authentication/concept-mfa-howitworks.md). If MFA is configured, IWA might fail if an MFA challenge is required, because MFA requires user interaction.
- > [!NOTE]
- > This one is tricky. IWA is non-interactive, but MFA requires user interactivity. You don't control when the identity provider requests MFA to be performed, the tenant admin does. From our observations, MFA is required when you sign in from a different country/region, when not connected via VPN to a corporate network, and sometimes even when connected via VPN. Don't expect a deterministic set of rules. Azure AD uses AI to continuously learn if MFA is required. Fall back to a user prompt like interactive authentication or device code flow if IWA fails.
+
+ IWA is non-interactive, but MFA requires user interactivity. You don't control when the identity provider requests MFA to be performed, the tenant admin does. From our observations, MFA is required when you sign in from a different country/region, when not connected via VPN to a corporate network, and sometimes even when connected via VPN. Don't expect a deterministic set of rules. Azure AD uses AI to continuously learn if MFA is required. Fall back to a user prompt like interactive authentication or device code flow if IWA fails.
- The authority passed in `PublicClientApplicationBuilder` needs to be:
  - Tenanted of the form `https://login.microsoftonline.com/{tenant}/`, where `tenant` is either the GUID that represents the tenant ID or a domain associated with the tenant.
@@ -599,14 +599,13 @@ You can also acquire a token by providing the username and password. This flow i
### This flow isn't recommended
-This flow is *not recommended* because having your application ask a user for their password isn't secure. For more information, see [What's the solution to the growing problem of passwords?](https://news.microsoft.com/features/whats-solution-growing-problem-passwords-says-microsoft/). The preferred flow for acquiring a token silently on Windows domain joined machines is [Integrated Windows Authentication](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Integrated-Windows-Authentication). You can also use [device code flow](https://aka.ms/msal-net-device-code-flow).
+The username and password flow is *not recommended* because having your application ask a user for their password isn't secure. For more information, see [What's the solution to the growing problem of passwords?](https://news.microsoft.com/features/whats-solution-growing-problem-passwords-says-microsoft/) The preferred flow for acquiring a token silently on Windows domain joined machines is [Integrated Windows Authentication](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Integrated-Windows-Authentication). You can also use [device code flow](https://aka.ms/msal-net-device-code-flow).
-> [!NOTE]
-> Using a username and password is useful in some cases, such as DevOps scenarios. But if you want to use a username and password in interactive scenarios where you provide your own UI, think about how to move away from it. By using a username and password, you're giving up a number of things:
->
-> - Core tenets of modern identity. A password can get phished and replayed because a shared secret can be intercepted. It's incompatible with passwordless.
-> - Users who need to do MFA can't sign in because there's no interaction.
-> - Users can't do single sign-on (SSO).
+Using a username and password is useful in some cases, such as DevOps scenarios. But if you want to use a username and password in interactive scenarios where you provide your own UI, think about how to move away from it. By using a username and password, you're giving up a number of things:
+
+- Core tenets of modern identity. A password can get phished and replayed because a shared secret can be intercepted. It's incompatible with passwordless.
+- Users who need to do MFA can't sign in because there's no interaction.
+- Users can't do single sign-on (SSO).
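Given these drawbacks, a common pattern is to try a silent or non-interactive flow first and fall back to an interactive one (such as device code flow) when it fails. A minimal sketch of that fallback chain, using hypothetical zero-argument flow callables rather than real MSAL APIs:

```python
def acquire_token(flows):
    """Try each token-acquisition flow in order; return the first token obtained.

    `flows` is an ordered list of zero-argument callables, e.g.
    [try_silent, try_integrated_windows_auth, try_device_code] -- all
    hypothetical names used for illustration, not MSAL methods.
    """
    errors = []
    for flow in flows:
        try:
            return flow()
        except Exception as exc:  # e.g. MFA challenge required, no cached account
            errors.append(exc)
    raise RuntimeError(f"all flows failed: {errors}")
```

The ordering encodes the guidance above: prefer flows with no user interaction, and end with an interactive flow that can satisfy MFA.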
### Constraints
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-app-registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-app-registration.md
@@ -39,7 +39,7 @@ The redirect URIs to use in a desktop application depend on the flow you want to
- If you use interactive authentication or device code flow, use `https://login.microsoftonline.com/common/oauth2/nativeclient`. To achieve this configuration, select the corresponding URL in the **Authentication** section for your application. > [!IMPORTANT]
- > Today, MSAL.NET uses another redirect URI by default in desktop applications that run on Windows (`urn:ietf:wg:oauth:2.0:oob`). In the future, we'll want to change this default, so we recommend that you use `https://login.microsoftonline.com/common/oauth2/nativeclient`.
+ > Using `https://login.microsoftonline.com/common/oauth2/nativeclient` as the redirect URI is recommended as a security best practice. If no redirect URI is specified, MSAL.NET uses `urn:ietf:wg:oauth:2.0:oob` by default, which is not recommended. This default will be updated as a breaking change in the next major release.
- If you build a native Objective-C or Swift app for macOS, register the redirect URI based on your application's bundle identifier in the following format: `msauth.<your.app.bundle.id>://auth`. Replace `<your.app.bundle.id>` with your application's bundle identifier.
- If your app uses only Integrated Windows Authentication or a username and a password, you don't need to register a redirect URI for your application. These flows do a round trip to the Microsoft identity platform v2.0 endpoint. Your application won't be called back on any specific URI.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-windows-uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-windows-uwp.md
@@ -55,8 +55,8 @@ This section provides step-by-step instructions to integrate a Windows Desktop .
This guide creates an application that displays a button that queries the Microsoft Graph API and a button to sign out. It also displays text boxes that contain the results of the calls.
-> [!NOTE]
-> Do you want to download this sample's Visual Studio project instead of creating it? [Download a project](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip), and skip to the [application registration](#register-your-application "application registration step") step to configure the code sample before it runs.
+> [!Tip]
+> To see a completed version of the project you build in this tutorial, you can [download it from GitHub](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip).
### Create your application
@@ -288,8 +288,7 @@ private async void SignOutButton_Click(object sender, RoutedEventArgs e)
}
```
-> [!NOTE]
-> MSAL.NET uses asynchronous methods to acquire tokens or manipulate accounts. You need to support UI actions in the UI thread. This is the reason for the `Dispatcher.RunAsync` call and the precautions to call `ConfigureAwait(false)`.
+MSAL.NET uses asynchronous methods to acquire tokens or manipulate accounts. You need to support UI actions in the UI thread. This is the reason for the `Dispatcher.RunAsync` call and the precautions to call `ConfigureAwait(false)`.
#### More information about signing out<a name="more-information-on-sign-out"></a>
@@ -474,8 +473,7 @@ The Microsoft Graph API requires the `user.read` scope to read a user's profile.
To access the user's calendars in the context of an application, add the `Calendars.Read` delegated permission to the application registration information. Then add the `Calendars.Read` scope to the `acquireTokenSilent` call.
-> [!NOTE]
-> Users might be prompted for additional consents as you increase the number of scopes.
+Users might be prompted for additional consents as you increase the number of scopes.
## Known issues
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
@@ -1,5 +1,5 @@
---
-title: Microsoft identity platform scopes, permissions, and consent
+title: Microsoft identity platform scopes, permissions, & consent
description: Learn about authorization in the Microsoft identity platform endpoint, including scopes, permissions, and consent.
services: active-directory
author: rwike77
@@ -17,82 +17,98 @@ ms.custom: aaddev, fasttrack-edit, contperf-fy21q1, identityplatformtop40
# Permissions and consent in the Microsoft identity platform endpoint
-Applications that integrate with Microsoft identity platform follow an authorization model that gives users and administrators control over how data can be accessed. The implementation of the authorization model has been updated on the Microsoft identity platform endpoint, and it changes how an app must interact with the Microsoft identity platform. This article covers the basic concepts of this authorization model, including scopes, permissions, and consent.
+Applications that integrate with Microsoft identity platform follow an authorization model that gives users and administrators control over how data can be accessed. The implementation of the authorization model has been updated on the Microsoft identity platform endpoint. It changes how an app must interact with the Microsoft identity platform. This article covers the basic concepts of this authorization model, including scopes, permissions, and consent.
## Scopes and permissions
-The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-protocols.md) authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or *Application ID URI*. For example, some of Microsoft's web-hosted resources include:
+The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-protocols.md) authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or *application ID URI*.
+
+Here are some examples of Microsoft web-hosted resources:
* Microsoft Graph: `https://graph.microsoft.com`
* Microsoft 365 Mail API: `https://outlook.office.com`
* Azure Key Vault: `https://vault.azure.net`
-We strongly recommend that you use Microsoft Graph instead of Microsoft 365 Mail API, etc.
- The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources also can define a set of permissions that can be used to divide the functionality of that resource into smaller chunks. As an example, [Microsoft Graph](https://graph.microsoft.com) has defined permissions to do the following tasks, among others:
* Read a user's calendar
* Write to a user's calendar
* Send mail as a user
-By defining these types of permissions, the resource has fine-grained control over its data and how API functionality is exposed. A third-party app can request these permissions from users and administrators, who must approve the request before the app can access data or act on a user's behalf. By chunking the resource's functionality into smaller permission sets, third-party apps can be built to request only the specific permissions that they need to perform their function. Users and administrators can know exactly what data the app has access to, and they can be more confident that it isn't behaving with malicious intent. Developers should always abide by the concept of least privilege, asking for only the permissions they need for their applications to function.
+Because of these types of permission definitions, the resource has fine-grained control over its data and how API functionality is exposed. A third-party app can request these permissions from users and administrators, who must approve the request before the app can access data or act on a user's behalf.
+
+When a resource's functionality is chunked into small permission sets, third-party apps can be built to request only the permissions that they need to perform their function. Users and administrators can know what data the app can access. And they can be more confident that the app isn't behaving with malicious intent. Developers should always abide by the principle of least privilege, asking for only the permissions they need for their applications to function.
-In OAuth 2.0, these types of permissions are called *scopes*. They are also often referred to as *permissions*. A permission is represented in the Microsoft identity platform as a string value. Continuing with the Microsoft Graph example, the string value for each permission is:
+In OAuth 2.0, these types of permission sets are called *scopes*. They're also often referred to as *permissions*. In the Microsoft identity platform, a permission is represented as a string value. For the Microsoft Graph example, here's the string value for each permission:
* Read a user's calendar by using `Calendars.Read`
* Write to a user's calendar by using `Calendars.ReadWrite`
* Send mail as a user by using `Mail.Send`
-An app most commonly requests these permissions by specifying the scopes in requests to the Microsoft identity platform authorize endpoint. However, certain high privilege permissions can only be granted through administrator consent and requested/granted using the [administrator consent endpoint](#admin-restricted-permissions). Read on to learn more.
+An app most commonly requests these permissions by specifying the scopes in requests to the Microsoft identity platform authorize endpoint. However, some high-privilege permissions can be granted only through administrator consent. They can be requested or granted by using the [administrator consent endpoint](#admin-restricted-permissions). Keep reading to learn more.
## Permission types
-Microsoft identity platform supports two types of permissions: **delegated permissions** and **application permissions**.
+Microsoft identity platform supports two types of permissions: *delegated permissions* and *application permissions*.
-* **Delegated permissions** are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests, and the app is delegated permission to act as the signed-in user when making calls to the target resource. Some delegated permissions can be consented to by non-administrative users, but some higher-privileged permissions require [administrator consent](#admin-restricted-permissions). To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
+* **Delegated permissions** are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests. The app is delegated permission to act as the signed-in user when it makes calls to the target resource.
-* **Application permissions** are used by apps that run without a signed-in user present; for example, apps that run as background services or daemons. Application permissions can only be [consented by an administrator](#requesting-consent-for-an-entire-tenant).
+ Some delegated permissions can be consented to by nonadministrators. But some high-privileged permissions require [administrator consent](#admin-restricted-permissions). To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure Active Directory (Azure AD)](../roles/permissions-reference.md).
-_Effective permissions_ are the permissions that your app will have when making requests to the target resource. It's important to understand the difference between the delegated and application permissions that your app is granted and its effective permissions when making calls to the target resource.
+* **Application permissions** are used by apps that run without a signed-in user present, for example, apps that run as background services or daemons. Only [an administrator can consent to](#requesting-consent-for-an-entire-tenant) application permissions.
-- For delegated permissions, the _effective permissions_ of your app will be the least privileged intersection of the delegated permissions the app has been granted (via consent) and the privileges of the currently signed-in user. Your app can never have more privileges than the signed-in user. Within organizations, the privileges of the signed-in user may be determined by policy or by membership in one or more administrator roles. To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
+_Effective permissions_ are the permissions that your app has when it makes requests to the target resource. It's important to understand the difference between the delegated permissions and application permissions that your app is granted, and the effective permissions your app is granted when it makes calls to the target resource.
- For example, assume your app has been granted the _User.ReadWrite.All_ delegated permission. This permission nominally grants your app permission to read and update the profile of every user in an organization. If the signed-in user is a global administrator, your app will be able to update the profile of every user in the organization. However, if the signed-in user isn't in an administrator role, your app will be able to update only the profile of the signed-in user. It will not be able to update the profiles of other users in the organization because the user that it has permission to act on behalf of does not have those privileges.
+- For delegated permissions, the _effective permissions_ of your app are the least-privileged intersection of the delegated permissions the app has been granted (by consent) and the privileges of the currently signed-in user. Your app can never have more privileges than the signed-in user.
-- For application permissions, the _effective permissions_ of your app will be the full level of privileges implied by the permission. For example, an app that has the _User.ReadWrite.All_ application permission can update the profile of every user in the organization.
+ Within organizations, the privileges of the signed-in user can be determined by policy or by membership in one or more administrator roles. To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
+
+ For example, assume your app has been granted the _User.ReadWrite.All_ delegated permission. This permission nominally grants your app permission to read and update the profile of every user in an organization. If the signed-in user is a global administrator, your app can update the profile of every user in the organization. However, if the signed-in user doesn't have an administrator role, your app can update only the profile of the signed-in user. It can't update the profiles of other users in the organization because the user that it has permission to act on behalf of doesn't have those privileges.
+
+- For application permissions, the _effective permissions_ of your app are the full level of privileges implied by the permission. For example, an app that has the _User.ReadWrite.All_ application permission can update the profile of every user in the organization.
## OpenID Connect scopes
-The Microsoft identity platform implementation of OpenID Connect has a few well-defined scopes that are also hosted on the Microsoft Graph: `openid`, `email`, `profile`, and `offline_access`. The `address` and `phone` OpenID Connect scopes are not supported.
+The Microsoft identity platform implementation of OpenID Connect has a few well-defined scopes that are also hosted on Microsoft Graph: `openid`, `email`, `profile`, and `offline_access`. The `address` and `phone` OpenID Connect scopes aren't supported.
-Requesting the OIDC scopes and a token will give you a token to call the [UserInfo endpoint](userinfo.md).
+If you request the OpenID Connect scopes and a token, you'll get a token to call the [UserInfo endpoint](userinfo.md).
### openid
-If an app performs sign-in by using [OpenID Connect](active-directory-v2-protocols.md), it must request the `openid` scope. The `openid` scope shows on the work account consent page as the "Sign you in" permission, and on the personal Microsoft account consent page as the "View your profile and connect to apps and services using your Microsoft account" permission. With this permission, an app can receive a unique identifier for the user in the form of the `sub` claim. It also gives the app access to the UserInfo endpoint. The `openid` scope can be used at the Microsoft identity platform token endpoint to acquire ID tokens, which can be used by the app for authentication.
+If an app signs in by using [OpenID Connect](active-directory-v2-protocols.md), it must request the `openid` scope. The `openid` scope appears on the work account consent page as the **Sign you in** permission. On the personal Microsoft account consent page, it appears as the **View your profile and connect to apps and services using your Microsoft account** permission.
+
+By using this permission, an app can receive a unique identifier for the user in the form of the `sub` claim. The permission also gives the app access to the UserInfo endpoint. The `openid` scope can be used at the Microsoft identity platform token endpoint to acquire ID tokens. The app can use these tokens for authentication.
### email
-The `email` scope can be used with the `openid` scope and any others. It gives the app access to the user's primary email address in the form of the `email` claim. The `email` claim is included in a token only if an email address is associated with the user account, which isn't always the case. If it uses the `email` scope, your app should be prepared to handle a case in which the `email` claim does not exist in the token.
+The `email` scope can be used with the `openid` scope and any other scopes. It gives the app access to the user's primary email address in the form of the `email` claim.
+
+The `email` claim is included in a token only if an email address is associated with the user account, which isn't always the case. If your app uses the `email` scope, the app needs to be able to handle a case in which no `email` claim exists in the token.
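As a sketch of handling the optional `email` claim, the snippet below decodes a JWT ID token payload without signature validation (illustration only; real apps must validate tokens, and the token here is a fabricated placeholder) and falls back gracefully when no `email` claim is present:

```python
import base64
import json

def claims_from_id_token(id_token: str) -> dict:
    """Decode the payload segment of a JWT ID token WITHOUT validating
    its signature -- for illustration only; production code must
    validate the token before trusting any claim."""
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated unsigned token whose payload has a sub claim but no email claim.
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "AAAAAAAAAAAAAAAAAAAAAIkzqFVrSaSaFHy782bbtaQ"}).encode()
).rstrip(b"=").decode()
token = f"eyJhbGciOiJub25lIn0.{payload}."

claims = claims_from_id_token(token)
email = claims.get("email")  # None here -- the app must handle this case
print(claims["sub"], email)
```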
### profile
-The `profile` scope can be used with the `openid` scope and any others. It gives the app access to a substantial amount of information about the user. The information it can access includes, but isn't limited to, the user's given name, surname, preferred username, and object ID. For a complete list of the profile claims available in the id_tokens parameter for a specific user, see the [`id_tokens` reference](id-tokens.md).
+The `profile` scope can be used with the `openid` scope and any other scope. It gives the app access to a large amount of information about the user. The information it can access includes, but isn't limited to, the user's given name, surname, preferred username, and object ID.
+
+For a complete list of the `profile` claims available in the `id_tokens` parameter for a specific user, see the [`id_tokens` reference](id-tokens.md).
### offline_access
-The [`offline_access` scope](https://openid.net/specs/openid-connect-core-1_0.html#OfflineAccess) gives your app access to resources on behalf of the user for an extended time. On the consent page, this scope appears as the "Maintain access to data you have given it access to" permission. When a user approves the `offline_access` scope, your app can receive refresh tokens from the Microsoft identity platform token endpoint. Refresh tokens are long-lived. Your app can get new access tokens as older ones expire.
+The [`offline_access` scope](https://openid.net/specs/openid-connect-core-1_0.html#OfflineAccess) gives your app access to resources on behalf of the user for an extended time. On the consent page, this scope appears as the **Maintain access to data you have given it access to** permission.
+
+When a user approves the `offline_access` scope, your app can receive refresh tokens from the Microsoft identity platform token endpoint. Refresh tokens are long-lived. Your app can get new access tokens as older ones expire.
> [!NOTE]
-> This permission appears on all consent screens today, even for flows that don't provide a refresh token (the [implicit flow](v2-oauth2-implicit-grant-flow.md)). This is to cover scenarios where a client can begin within the implicit flow, and then move to the code flow where a refresh token is expected.
+> This permission currently appears on all consent pages, even for flows that don't provide a refresh token (such as the [implicit flow](v2-oauth2-implicit-grant-flow.md)). This setup addresses scenarios where a client can begin within the implicit flow and then move to the code flow where a refresh token is expected.
-On the Microsoft identity platform (requests made to the v2.0 endpoint), your app must explicitly request the `offline_access` scope, to receive refresh tokens. This means that when you redeem an authorization code in the [OAuth 2.0 authorization code flow](active-directory-v2-protocols.md), you'll receive only an access token from the `/token` endpoint. The access token is valid for a short time. The access token usually expires in one hour. At that point, your app needs to redirect the user back to the `/authorize` endpoint to get a new authorization code. During this redirect, depending on the type of app, the user might need to enter their credentials again or consent again to permissions.
+On the Microsoft identity platform (requests made to the v2.0 endpoint), your app must explicitly request the `offline_access` scope to receive refresh tokens. So when you redeem an authorization code in the [OAuth 2.0 authorization code flow](active-directory-v2-protocols.md), you'll receive only an access token from the `/token` endpoint.
+
+The access token is valid for a short time. It usually expires in one hour. At that point, your app needs to redirect the user back to the `/authorize` endpoint to get a new authorization code. During this redirect, depending on the type of app, the user might need to enter their credentials again or consent again to permissions.
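Once the app holds a refresh token, it can redeem it at the `/token` endpoint instead of redirecting the user. As a minimal sketch (the client ID and refresh token below are placeholders), the refresh request body looks like this:

```python
from urllib.parse import urlencode, quote

# POST body for redeeming a refresh token at the v2.0 /token endpoint.
# All values are hypothetical placeholders for illustration.
token_endpoint = "https://login.microsoftonline.com/common/oauth2/v2.0/token"
body = urlencode({
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "grant_type": "refresh_token",
    "refresh_token": "0.ARoA-truncated-placeholder",
    # offline_access must have been consented for a refresh token to exist.
    "scope": "https://graph.microsoft.com/mail.read offline_access",
}, quote_via=quote)
print(body)
```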
For more information about how to get and use refresh tokens, see the [Microsoft identity platform protocol reference](active-directory-v2-protocols.md).

## Requesting individual user consent
-In an [OpenID Connect or OAuth 2.0](active-directory-v2-protocols.md) authorization request, an app can request the permissions it needs by using the `scope` query parameter. For example, when a user signs in to an app, the app sends a request like the following example (with line breaks added for legibility):
+In an [OpenID Connect or OAuth 2.0](active-directory-v2-protocols.md) authorization request, an app can request the permissions it needs by using the `scope` query parameter. For example, when a user signs in to an app, the app sends a request like the following example. (Line breaks are added for legibility.)
```HTTP
GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
@@ -106,74 +122,84 @@ https%3A%2F%2Fgraph.microsoft.com%2Fmail.send
&state=12345
```
-The `scope` parameter is a space-separated list of delegated permissions that the app is requesting. Each permission is indicated by appending the permission value to the resource's identifier (the Application ID URI). In the request example, the app needs permission to read the user's calendar and send mail as the user.
+The `scope` parameter is a space-separated list of delegated permissions that the app is requesting. Each permission is indicated by appending the permission value to the resource's identifier (the application ID URI). In the request example, the app needs permission to read the user's calendar and send mail as the user.
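A request like the one above can be assembled with standard URL encoding; the space-separated scope list is percent-encoded as a single parameter. This sketch uses hypothetical client values for illustration:

```python
from urllib.parse import urlencode, quote

# Hypothetical client values for illustration only.
params = {
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "response_type": "code",
    "redirect_uri": "http://localhost/myapp/",
    "response_mode": "query",
    # Space-separated list: each delegated permission is the resource's
    # application ID URI plus the permission value.
    "scope": "https://graph.microsoft.com/calendars.read "
             "https://graph.microsoft.com/mail.send",
    "state": "12345",
}
url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
    + urlencode(params, quote_via=quote)
)
print(url)
```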
-After the user enters their credentials, the Microsoft identity platform endpoint checks for a matching record of *user consent*. If the user has not consented to any of the requested permissions in the past, nor has an administrator consented to these permissions on behalf of the entire organization, the Microsoft identity platform endpoint asks the user to grant the requested permissions.
+After the user enters their credentials, the Microsoft identity platform endpoint checks for a matching record of *user consent*. If the user hasn't consented to any of the requested permissions in the past, and if the administrator hasn't consented to these permissions on behalf of the entire organization, the Microsoft identity platform endpoint asks the user to grant the requested permissions.
-At this time, the `offline_access` ("Maintain access to data you have given it access to") and `user.read` ("Sign you in and read your profile") permissions are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality - `offline_access` gives the app access to refresh tokens, critical for native and web apps, while `user.read` gives access to the `sub` claim, allowing the client or app to correctly identify the user over time and access rudimentary user information.
+At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `user.read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `user.read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information.
-![Example screenshot that shows work account consent](./media/v2-permissions-and-consent/work_account_consent.png)
+![Example screenshot that shows work account consent.](./media/v2-permissions-and-consent/work_account_consent.png)
-When the user approves the permission request, consent is recorded and the user doesn't have to consent again on subsequent sign-ins to the application.
+When the user approves the permission request, consent is recorded. The user doesn't have to consent again when they later sign in to the application.
## Requesting consent for an entire tenant
-Often, when an organization purchases a license or subscription for an application, the organization wants to proactively set up the application for use by all members of the organization. As part of this process, an administrator can grant consent for the application to act on behalf of any user in the tenant. If the admin grants consent for the entire tenant, the organization's users won't see a consent page for the application.
+When an organization purchases a license or subscription for an application, the organization often wants to proactively set up the application for use by all members of the organization. As part of this process, an administrator can grant consent for the application to act on behalf of any user in the tenant. If the admin grants consent for the entire tenant, the organization's users don't see a consent page for the application.
To request consent for delegated permissions for all users in a tenant, your app can use the admin consent endpoint.
-Additionally, applications must use the admin consent endpoint to request Application Permissions.
+Additionally, applications must use the admin consent endpoint to request application permissions.
## Admin-restricted permissions
-Some high-privilege permissions in the Microsoft ecosystem can be set to *admin-restricted*. Examples of these kinds of permissions include the following:
+Some high-privilege permissions in Microsoft resources can be set to *admin-restricted*. Here are some examples of these kinds of permissions:
* Read all users' full profiles by using `User.Read.All`
* Write data to an organization's directory by using `Directory.ReadWrite.All`
* Read all groups in an organization's directory by using `Groups.Read.All`
-Although a consumer user might grant an application access to this kind of data, organizational users are restricted from granting access to the same set of sensitive company data. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions.
+Although a consumer user might grant an application access to this kind of data, organizational users can't grant access to the same set of sensitive company data. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions.
-If your app requires access to admin-restricted scopes for organizations, you should request them directly from a company administrator, also by using the admin consent endpoint, described next.
+If your app requires scopes for admin-restricted permissions, an organization's administrator must consent to those scopes on behalf of the organization's users. To avoid displaying prompts to users that request consent for permissions they can't grant, your app can use the admin consent endpoint. The admin consent endpoint is covered in the next section.
-If the application is requesting high privilege delegated permissions and an administrator grants these permissions via the admin consent endpoint, consent is granted for all users in the tenant.
+If the application requests high-privilege delegated permissions and an administrator grants these permissions through the admin consent endpoint, consent is granted for all users in the tenant.
-If the application is requesting application permissions and an administrator grants these permissions via the admin consent endpoint, this grant isn't done on behalf of any specific user. Instead, the client application is granted permissions *directly*. These types of permissions are only used by daemon services and other non-interactive applications that run in the background.
+If the application requests application permissions and an administrator grants these permissions through the admin consent endpoint, this grant isn't done on behalf of any specific user. Instead, the client application is granted permissions *directly*. These types of permissions are used only by daemon services and other noninteractive applications that run in the background.
## Using the admin consent endpoint
-After granting admin consent using the admin consent endpoint, you have finished granting admin consent and users do not need to perform any further additional actions. After granting admin consent, users can get an access token via a typical auth flow and the resulting access token will have the consented permissions.
+After you use the admin consent endpoint to grant admin consent, you're finished. Users don't need to take any further action. After admin consent is granted, users can get an access token through a typical auth flow. The resulting access token has the consented permissions.
-When a Company Administrator uses your application and is directed to the authorize endpoint, Microsoft identity platform will detect the user's role and ask them if they would like to consent on behalf of the entire tenant for the permissions you have requested. However, there is also a dedicated admin consent endpoint you can use if you would like to proactively request that an administrator grants permission on behalf of the entire tenant. Using this endpoint is also necessary for requesting Application Permissions (which can't be requested using the authorize endpoint).
+When a company administrator uses your application and is directed to the authorize endpoint, Microsoft identity platform detects the user's role. It asks if the company administrator wants to consent on behalf of the entire tenant for the permissions you requested. You could instead use a dedicated admin consent endpoint to proactively request an administrator to grant permission on behalf of the entire tenant. This endpoint is also necessary for requesting application permissions. Application permissions can't be requested by using the authorize endpoint.
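As a sketch, a request to the dedicated admin consent endpoint is an ordinary GET URL; the tenant, client ID, and redirect URI below are hypothetical placeholders:

```python
from urllib.parse import urlencode, quote

# Hypothetical tenant and client values for illustration.
tenant = "contoso.onmicrosoft.com"
params = {
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "state": "12345",
    "redirect_uri": "http://localhost/myapp/permissions",
    # Application permissions are requested through the /.default scope.
    "scope": "https://graph.microsoft.com/.default",
}
admin_consent_url = (
    f"https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?"
    + urlencode(params, quote_via=quote)
)
print(admin_consent_url)
```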
-If you follow these steps, your app can request permissions for all users in a tenant, including admin-restricted scopes. This is a high privilege operation and should only be done if necessary for your scenario.
+If you follow these steps, your app can request permissions for all users in a tenant, including admin-restricted scopes. This operation is high privilege. Use the operation only if necessary for your scenario.
-To see a code sample that implements the steps, see the [admin-restricted scopes sample](https://github.com/Azure-Samples/active-directory-dotnet-admin-restricted-scopes-v2).
+To see a code sample that implements the steps, see the [admin-restricted scopes sample](https://github.com/Azure-Samples/active-directory-dotnet-admin-restricted-scopes-v2) in GitHub.
### Request the permissions in the app registration portal
-Applications are able to note which permissions they require (both delegated and application) in the app registration portal. This allows use of the `/.default` scope and the Azure portal's "Grant admin consent" option. In general, it's best practice to ensure that the permissions statically defined for a given application are a superset of the permissions that it will be requesting dynamically/incrementally.
+In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `/.default` scope and the Azure portal's **Grant admin consent** option.
+
+In general, the permissions should be statically defined for a given application. They should be a superset of the permissions that the app will request dynamically or incrementally.
> [!NOTE]
->Application permissions can only be requested through the use of [`/.default`](#the-default-scope) - so if your app needs application permissions, make sure they're listed in the app registration portal.
+>Application permissions can be requested only through the use of [`/.default`](#the-default-scope). So if your app needs application permissions, make sure they're listed in the app registration portal.
-#### To configure the list of statically requested permissions for an application
+To configure the list of statically requested permissions for an application:
1. Go to your application in the <a href="https://go.microsoft.com/fwlink/?linkid=2083908" target="_blank">Azure portal - App registrations<span class="docon docon-navigate-external x-hidden-focus"></span></a> quickstart experience.
1. Select an application, or [create an app](quickstart-register-app.md) if you haven't already.
1. On the application's **Overview** page, under **Manage**, select **API Permissions** > **Add a permission**.
-1. Select **Microsoft Graph** from the list of available APIs and then add the permissions that your app requires.
+1. Select **Microsoft Graph** from the list of available APIs. Then add the permissions that your app requires.
1. Select **Add Permissions**.
-### Recommended: Sign the user into your app
+### Recommended: Sign the user in to your app
-Typically, when you build an application that uses the admin consent endpoint, the app needs a page or view in which the admin can approve the app's permissions. This page can be part of the app's sign-up flow, part of the app's settings, or it can be a dedicated "connect" flow. In many cases, it makes sense for the app to show this "connect" view only after a user has signed in with a work or school Microsoft account.
+Typically, when you build an application that uses the admin consent endpoint, the app needs a page or view in which the admin can approve the app's permissions. This page can be:
-When you sign the user into your app, you can identify the organization to which the admin belongs before asking them to approve the necessary permissions. Although not strictly necessary, it can help you create a more intuitive experience for your organizational users. To sign the user in, follow our [Microsoft identity platform protocol tutorials](active-directory-v2-protocols.md).
+* Part of the app's sign-up flow.
+* Part of the app's settings.
+* A dedicated "connect" flow.
+
+In many cases, it makes sense for the app to show this "connect" view only after a user has signed in with a work Microsoft account or school Microsoft account.
+
+When you sign the user in to your app, you can identify the organization to which the admin belongs before you ask them to approve the necessary permissions. Although this step isn't strictly necessary, it can help you create a more intuitive experience for your organizational users.
+
+To sign the user in, follow the [Microsoft identity platform protocol tutorials](active-directory-v2-protocols.md).
### Request the permissions from a directory admin
-When you're ready to request permissions from your organization's admin, you can redirect the user to the Microsoft identity platform *admin consent endpoint*.
+When you're ready to request permissions from your organization's admin, you can redirect the user to the Microsoft identity platform admin consent endpoint.
```HTTP
// Line breaks are for legibility only.
```
@@ -189,14 +215,14 @@ https://graph.microsoft.com/mail.send
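The admin consent request above can be sketched by assembling the URL in Python. This is a minimal illustration, not part of the original article: the tenant ID and redirect URI are placeholders modeled on the surrounding response examples, and the client ID is a stand-in.

```python
from urllib.parse import urlencode

# Placeholder values for illustration only; substitute your app's
# registered client ID, redirect URI, and target tenant.
tenant = "a8990e1f-ff32-408a-9f8e-78d3b9139b95"
params = {
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "redirect_uri": "http://localhost/myapp/permissions",
    "state": "12345",
    "scope": "https://graph.microsoft.com/calendars.read https://graph.microsoft.com/mail.send",
}
# The v2.0 admin consent endpoint lives under /{tenant}/v2.0/adminconsent.
url = f"https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?{urlencode(params)}"
print(url)
```

Redirecting the tenant administrator to this URL prompts them to consent on behalf of the whole tenant for the listed scopes.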
| Parameter | Condition | Description |
|:--------------|:--------------|:-----------------------------------------------------------------------------------------|
-| `tenant` | Required | The directory tenant that you want to request permission from. Can be provided in GUID or friendly name format OR generically referenced with organizations as seen in the example. Do not use 'common', as personal accounts cannot provide admin consent except in the context of a tenant. To ensure best compatibility with personal accounts that manage tenants, use the tenant ID when possible. |
-| `client_id` | Required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
+| `tenant` | Required | The directory tenant that you want to request permission from. It can be provided in a GUID or friendly name format. Or it can be generically referenced with organizations, as seen in the example. Don't use "common," because personal accounts can't provide admin consent except in the context of a tenant. To ensure the best compatibility with personal accounts that manage tenants, use the tenant ID when possible. |
+| `client_id` | Required | The application (client) ID that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
| `redirect_uri` | Required |The redirect URI where you want the response to be sent for your app to handle. It must exactly match one of the redirect URIs that you registered in the app registration portal. |
| `state` | Recommended | A value included in the request that will also be returned in the token response. It can be a string of any content you want. Use the state to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-|`scope` | Required | Defines the set of permissions being requested by the application. This can be either static (using [`/.default`](#the-default-scope)) or dynamic scopes. This can include the OIDC scopes (`openid`, `profile`, `email`). If you need application permissions, you must use `/.default` to request the statically configured list of permissions. |
+|`scope` | Required | Defines the set of permissions being requested by the application. Scopes can be either static (using [`/.default`](#the-default-scope)) or dynamic. This set can include the OpenID Connect scopes (`openid`, `profile`, `email`). If you need application permissions, you must use `/.default` to request the statically configured list of permissions. |
-At this point, Azure AD requires a tenant administrator to sign in to complete the request. The administrator is asked to approve all the permissions that you have requested in the `scope` parameter. If you've used a static (`/.default`) value, it will function like the v1.0 admin consent endpoint and request consent for all scopes found in the required permissions for the app.
+At this point, Azure AD requires a tenant administrator to sign in to complete the request. The administrator is asked to approve all the permissions that you requested in the `scope` parameter. If you used a static (`/.default`) value, it will function like the v1.0 admin consent endpoint and request consent for all scopes found in the required permissions for the app.
#### Successful response
@@ -214,7 +240,7 @@ GET http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b
#### Error response
-If the admin does not approve the permissions for your app, the failed response looks like this:
+If the admin doesn't approve the permissions for your app, the failed response looks like this:
```HTTP
GET http://localhost/myapp/permissions?error=permission_denied&error_description=The+admin+canceled+the+request
```
@@ -222,14 +248,14 @@ GET http://localhost/myapp/permissions?error=permission_denied&error_description
| Parameter | Description |
| --- | --- |
-| `error` | An error code string that can be used to classify types of errors that occur, and can be used to react to errors. |
+| `error` | An error code string that can be used to classify types of errors that occur. It can also be used to react to errors. |
| `error_description` | A specific error message that can help a developer identify the root cause of an error. |

After you've received a successful response from the admin consent endpoint, your app has gained the permissions it requested. Next, you can request a token for the resource you want.

## Using permissions
-After the user consents to permissions for your app, your app can acquire access tokens that represent your app's permission to access a resource in some capacity. An access token can be used only for a single resource, but encoded inside the access token is every permission that your app has been granted for that resource. To acquire an access token, your app can make a request to the Microsoft identity platform token endpoint, like this:
+After the user consents to permissions for your app, your app can acquire access tokens that represent the app's permission to access a resource in some capacity. An access token can be used only for a single resource. But encoded inside the access token is every permission that your app has been granted for that resource. To acquire an access token, your app can make a request to the Microsoft identity platform token endpoint, like this:
```HTTP
POST common/oauth2/v2.0/token HTTP/1.1
@@ -246,76 +272,86 @@ Content-Type: application/json
}
```
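The token request can be sketched in Python by building the form-encoded body for the authorization code grant. This is an illustrative sketch only; the client ID, authorization code, and redirect URI are placeholders, not values from the original article.

```python
from urllib.parse import urlencode

# Placeholder values for illustration; a real request needs the authorization
# code returned from the authorize endpoint and your app's registered values.
body = urlencode({
    "grant_type": "authorization_code",
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "code": "placeholder-authorization-code",
    "redirect_uri": "http://localhost/myapp/",
    "scope": "https://graph.microsoft.com/mail.read",
})
# POST this body, with Content-Type application/x-www-form-urlencoded, to
# https://login.microsoftonline.com/common/oauth2/v2.0/token
print(body)
```

In production code, a library such as MSAL handles this exchange for you rather than hand-built HTTP requests.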
-You can use the resulting access token in HTTP requests to the resource. It reliably indicates to the resource that your app has the proper permission to perform a specific task.
+You can use the resulting access token in HTTP requests to the resource. It reliably indicates to the resource that your app has the proper permission to do a specific task.
For more information about the OAuth 2.0 protocol and how to get access tokens, see the [Microsoft identity platform endpoint protocol reference](active-directory-v2-protocols.md).

## The /.default scope
-You can use the `/.default` scope to help migrate your apps from the v1.0 endpoint to the Microsoft identity platform endpoint. This is a built-in scope for every application that refers to the static list of permissions configured on the application registration. A `scope` value of `https://graph.microsoft.com/.default` is functionally the same as the v1.0 endpoints `resource=https://graph.microsoft.com` - namely, it requests a token with the scopes on Microsoft Graph that the application has registered for in the Azure portal. It is constructed using the resource URI + `/.default` (e.g. if the resource URI is `https://contosoApp.com`, then the scope requested would be `https://contosoApp.com/.default`). See the [section on trailing slashes](#trailing-slash-and-default) for cases where you must include a second slash to correctly request the token.
+You can use the `/.default` scope to help migrate your apps from the v1.0 endpoint to the Microsoft identity platform endpoint. The `/.default` scope is built in for every application that refers to the static list of permissions configured on the application registration.
-The /.default scope can be used in any OAuth 2.0 flow, but is necessary in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md), as well as when using the v2 admin consent endpoint to request application permissions.
+A `scope` value of `https://graph.microsoft.com/.default` is functionally the same as `resource=https://graph.microsoft.com` on the v1.0 endpoint. By specifying the `https://graph.microsoft.com/.default` scope in its request, your application is requesting an access token that includes scopes for every Microsoft Graph permission you've selected for the app in the app registration portal. The scope is constructed by using the resource URI and `/.default`. So if the resource URI is `https://contosoApp.com`, the scope requested is `https://contosoApp.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default).
-Clients can't combine static (`/.default`) and dynamic consent in a single request. Thus, `scope=https://graph.microsoft.com/.default+mail.read` will result in an error due to the combination of scope types.
+The `/.default` scope can be used in any OAuth 2.0 flow. But it's necessary in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md). You also need it when you use the v2 admin consent endpoint to request application permissions.
+
+Clients can't combine static (`/.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default+mail.read` results in an error because it combines scope types.
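The combination rule can be illustrated with a small Python check. This validation logic is hypothetical, written only to mirror the documented rule, and isn't the actual server-side behavior; note that OIDC scopes (`openid`, `profile`, `email`) are still allowed alongside `/.default`.

```python
OIDC_SCOPES = {"openid", "profile", "email", "offline_access"}

def validate_scope_param(scope: str) -> bool:
    # Hypothetical client-side check mirroring the documented rule:
    # /.default can't be combined with dynamic resource scopes in one
    # request, though OIDC scopes may accompany it.
    scopes = scope.split()
    static = [s for s in scopes if s.endswith("/.default")]
    dynamic = [s for s in scopes
               if not s.endswith("/.default") and s not in OIDC_SCOPES]
    return not (static and dynamic)

print(validate_scope_param("https://graph.microsoft.com/.default"))            # True
print(validate_scope_param("https://graph.microsoft.com/.default mail.read"))  # False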
### /.default and consent
-The `/.default` scope triggers the v1.0 endpoint behavior for `prompt=consent` as well. It requests consent for all permissions registered by the application, regardless of the resource. If included as part of the request, the `/.default` scope returns a token that contains the scopes for the resource requested.
+The `/.default` scope triggers the v1.0 endpoint behavior for `prompt=consent` as well. It requests consent for all permissions that the application registered, regardless of the resource. If it's included as part of the request, the `/.default` scope returns a token that contains the scopes for the resource requested.
### /.default when the user has already given consent
-Because `/.default` is functionally identical to the `resource`-centric v1.0 endpoint's behavior, it brings with it the consent behavior of the v1.0 endpoint as well. Namely, `/.default` only triggers a consent prompt if no permission has been granted between the client and the resource by the user. If any such consent exists, then a token will be returned containing all scopes granted by the user for that resource. However, if no permission has been granted, or the `prompt=consent` parameter has been provided, a consent prompt will be shown for all scopes registered by the client application.
+The `/.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `/.default` triggers a consent prompt only if the user has granted no permission between the client and the resource.
+
+If any such consent exists, the returned token contains all scopes the user granted for that resource. However, if no permission has been granted or if the `prompt=consent` parameter has been provided, a consent prompt is shown for all scopes that the client application registered.
#### Example 1: The user, or tenant admin, has granted permissions
-In this example, the user (or a tenant administrator) has granted the client the Microsoft Graph permissions `mail.read` and `user.read`. If the client makes a request for `scope=https://graph.microsoft.com/.default`, then no consent prompt will be shown regardless of the contents of the client applications registered permissions for Microsoft Graph. A token would be returned containing the scopes `mail.read` and `user.read`.
+In this example, the user or a tenant administrator has granted the `mail.read` and `user.read` Microsoft Graph permissions to the client.
+
+If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `mail.read` and `user.read`.
#### Example 2: The user hasn't granted permissions between the client and the resource
-In this example, no consent for the user exists between the client and Microsoft Graph. The client has registered for the `user.read` and `contacts.read` permissions, as well as the Azure Key Vault scope `https://vault.azure.net/user_impersonation`. When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user will see a consent screen for the `user.read`, `contacts.read`, and the Key Vault `user_impersonation` scopes. The token returned will have just the `user.read` and `contacts.read` scopes in it and only be usable against Microsoft Graph.
+In this example, the user hasn't granted consent between the client and Microsoft Graph. The client has registered for the permissions `user.read` and `contacts.read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`.
+
+When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the `user.read` scope, the `contacts.read` scope, and the Key Vault `user_impersonation` scopes. The returned token contains only the `user.read` and `contacts.read` scopes. It can be used only against Microsoft Graph.
+
+#### Example 3: The user has consented, and the client requests more scopes
-#### Example 3: The user has consented and the client requests additional scopes
+In this example, the user has already consented to `mail.read` for the client. The client has registered for the `contacts.read` scope.
-In this example, the user has already consented to `mail.read` for the client. The client has registered for the `contacts.read` scope in its registration. When the client makes a request for a token using `scope=https://graph.microsoft.com/.default` and requests consent through `prompt=consent`, then the user will see a consent screen for all (and only) the permissions registered by the application. `contacts.read` will be present in the consent screen, but `mail.read` will not. The token returned will be for Microsoft Graph and will contain `mail.read` and `contacts.read`.
+When the client requests a token by using `scope=https://graph.microsoft.com/.default` and requests consent through `prompt=consent`, the user sees a consent page for all (and only) the permissions that the application registered. The `contacts.read` scope is on the consent page but `mail.read` isn't. The token returned is for Microsoft Graph. It contains `mail.read` and `contacts.read`.
### Using the /.default scope with the client
-A special case of the `/.default` scope exists where a client requests its own `/.default` scope. The following example demonstrates this scenario.
+In some cases, a client can request its own `/.default` scope. The following example demonstrates this scenario.
```HTTP
// Line breaks are for legibility only.
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
-response_type=token //code or a hybrid flow is also possible here
+response_type=token //Code or a hybrid flow is also possible here
&client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5
&scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default
&redirect_uri=https%3A%2F%2Flocalhost
&state=1234
```
-This produces a consent screen for all registered permissions (if applicable based on the above descriptions of consent and `/.default`), then returns an id_token, rather than an access token. This behavior exists for certain legacy clients moving from ADAL to MSAL, and **should not** be used by new clients targeting the Microsoft identity platform endpoint.
+This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `/.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token.
-### Client credentials grant flow and /.default
+This behavior accommodates some legacy clients that are moving from Azure AD Authentication Library (ADAL) to Microsoft Authentication Library (MSAL). This setup *shouldn't* be used by new clients that target the Microsoft identity platform endpoint.
-Another use of `/.default` is when requesting application permissions (or *roles*) in a non-interactive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
+### Client credentials grant flow and /.default
-To create application permissions (roles) for a web API, see [How to: Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
+Another use of `/.default` is to request application permissions (or *roles*) in a noninteractive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
-Client credentials requests in your client app **must** include `scope={resource}/.default`, where `{resource}` is the web API that your app intends to call. Issuing a client credentials request with individual application permissions (roles) is **not** supported. All the application permissions (roles) that have been granted for that web API will be included in the returned access token.
+To create application permissions (roles) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
-To grant access to the application permissions you define, including granting admin consent for the application, see [Quickstart: Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
+Client credentials requests in your client app *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the application permissions (roles) that have been granted for that web API are included in the returned access token.
-### Trailing slash and /.default
+To grant access to the application permissions you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
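As a hedged sketch (Python, with placeholder values rather than anything from the article), the client credentials token request body with the mandatory `{resource}/.default` scope might be assembled like this, using Microsoft Graph as the target resource:

```python
from urllib.parse import urlencode

# Placeholder values; the client_secret here is a stand-in, never a real secret.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "9ada6f8a-6d83-41bc-b169-a306c21527a5",
    "client_secret": "placeholder-secret",
    # Individual application permissions (roles) can't be requested here;
    # /.default returns every role granted for the target web API.
    "scope": "https://graph.microsoft.com/.default",
})
# POST this body to https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
print(body)
```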
-Some resource URIs have a trailing slash (`https://contoso.com/` as opposed to `https://contoso.com`), which can cause problems with token validation. This can occur primarily when requesting a token for Azure Resource Management (`https://management.azure.com/`), which has a trailing slash on their resource URI and requires it to be present when the token is requested. Thus, when requesting a token for `https://management.azure.com/` and using `/.default`, you must request `https://management.azure.com//.default` - note the double slash!
+### Trailing slash and /.default
-In general - if you've validated that the token is being issued, and the token is being rejected by the API that should accept it, consider adding a second slash and trying again. This happens because the login server emits a token with the audience matching the URIs in the `scope` parameter - with `/.default` removed from the end. If this removes the trailing slash, the login server still processes the request and validates it against the resource URI, even though they no longer match - this is non-standard and should not be relied on by your application.
+Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `/.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again.
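The double-slash behavior follows mechanically from how the scope is built. A minimal Python sketch (illustrative only, not the platform's implementation) shows that `/.default` is appended verbatim, so a trailing slash on the resource URI survives:

```python
def default_scope(resource_uri: str) -> str:
    # /.default is appended verbatim, so a resource URI that carries a
    # trailing slash (like Azure Resource Manager's) yields a double slash.
    return resource_uri + "/.default"

print(default_scope("https://management.azure.com/"))  # https://management.azure.com//.default
print(default_scope("https://contoso.com"))            # https://contoso.com/.default
```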
## Troubleshooting permissions and consent
-If you or your application's users are seeing unexpected errors during the consent process, see this article for troubleshooting steps: [Unexpected error when performing consent to an application](../manage-apps/application-sign-in-unexpected-user-consent-error.md).
+For troubleshooting steps, see [Unexpected error when performing consent to an application](../manage-apps/application-sign-in-unexpected-user-consent-error.md).
## Next steps
-* [ID tokens | Microsoft identity platform](id-tokens.md)
-* [Access tokens | Microsoft identity platform](access-tokens.md)
+* [ID tokens in Microsoft identity platform](id-tokens.md)
+* [Access tokens in Microsoft identity platform](access-tokens.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/device-management-azure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-management-azure-portal.md
@@ -163,10 +163,10 @@ You must be assigned one of the following roles to view or manage device setting
- **Additional local administrators on Azure AD joined devices** - You can select the users that are granted local administrator rights on a device. These users are added to the *Device Administrators* role in Azure AD. Global administrators in Azure AD and device owners are granted local administrator rights by default. This option is a premium edition capability available through products such as Azure AD Premium or the Enterprise Mobility Suite (EMS).
- **Users may register their devices with Azure AD** - You need to configure this setting to allow Windows 10 personal, iOS, Android, and macOS devices to be registered with Azure AD. If you select **None**, devices are not allowed to register with Azure AD. Enrollment with Microsoft Intune or Mobile Device Management (MDM) for Microsoft 365 requires registration. If you have configured either of these services, **ALL** is selected and **NONE** is not available.
-- **Require Multi-Factor Auth to join devices** - You can choose whether users are required to provide an additional authentication factor to join or register their device to Azure AD. The default is **No**. We recommend requiring multi-factor authentication when registering or joining a device. Before you enable multi-factor authentication for this service, you must ensure that multi-factor authentication is configured for the users that register their devices. For more information on different Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md).
+- **Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication** - You can choose whether users are required to provide an additional authentication factor to join or register their device to Azure AD. The default is **No**. We recommend requiring multi-factor authentication when registering or joining a device. Before you enable multi-factor authentication for this service, you must ensure that multi-factor authentication is configured for the users that register their devices. For more information on different Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md).
> [!NOTE]
-> **Require Multi-Factor Auth to join devices** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting does not apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-windows#enabling-azure-ad-login-in-for-windows-vm-in-azure) and Azure AD joined devices using [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
+> **Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting does not apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-windows#enabling-azure-ad-login-in-for-windows-vm-in-azure) and Azure AD joined devices using [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
- **Maximum number of devices** - This setting enables you to select the maximum number of Azure AD joined or Azure AD registered devices that a user can have in Azure AD. If a user reaches this quota, they are not able to add additional devices until one or more of the existing devices are removed. The default value is **50**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/governance/access-reviews-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/access-reviews-overview.md
@@ -98,7 +98,7 @@ Here are some example license scenarios to help you determine the number of lice
| An administrator creates an access review of Group B with 500 users and 3 group owners, and assigns the 3 group owners as reviewers. | 3 licenses for each group owner as reviewers | 3 |
| An administrator creates an access review of Group B with 500 users. Makes it a self-review. | 500 licenses for each user as self-reviewers | 500 |
| An administrator creates an access review of Group C with 50 member users and 25 guest users. Makes it a self-review. | 50 licenses for each user as self-reviewers.* | 50 |
-| An administrator creates an access review of Group D with 6 member users and 108 guest users. Makes it a self-review. | 6 licenses for each user as self-reviewers. Guest users are billed on a monthly active user (MAU) basis. No additional licenses are required. * | - |
+| An administrator creates an access review of Group D with 6 member users and 108 guest users. Makes it a self-review. | 6 licenses for each user as self-reviewers. Guest users are billed on a monthly active user (MAU) basis. No additional licenses are required. * | 6 |
\* Azure AD External Identities (guest user) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model. For more information, see Billing model for Azure AD External Identities.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-group-writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-group-writeback.md
@@ -42,7 +42,13 @@ To enable group writeback, use the following steps:
```Powershell
$AzureADConnectSWritebackAccountDN = <MSOL_ account DN>
Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AdSyncConfig\AdSyncConfig.psm1"
+
+# To grant the <MSOL_account> permission to all domains in the forest:
Set-ADSyncUnifiedGroupWritebackPermissions -ADConnectorAccountDN $AzureADConnectSWritebackAccountDN
+
+# To grant the <MSOL_account> permission to specific OU (eg. the OU chosen to writeback Office 365 Groups to):
+$GroupWritebackOU = <DN of OU where groups are to be written back to>
+Set-ADSyncUnifiedGroupWritebackPermissions -ADConnectorAccountDN $AzureADConnectSWritebackAccountDN -ADObjectDN $GroupWritebackOU
```

For additional information on configuring Microsoft 365 groups, see [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#enable-group-writeback-in-azure-ad-connect).
@@ -65,4 +71,4 @@ To disable Group Writeback, use the following steps:
> Disabling Group Writeback will set the Full Import and Full Synchronization flags to 'true' on the Azure Active Directory Connector, causing the rule changes to propagate through on the next synchronization cycle, deleting the groups that were previously written back to your Active Directory.

## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
\ No newline at end of file
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-staged-rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
@@ -63,7 +63,7 @@ The following scenarios are supported for staged rollout. The feature works only
The following scenarios are not supported for staged rollout: -- Applications or cloud services use legacy authentication such as POP3 and SMTP.
+- Legacy authentication such as POP3 and SMTP is not supported.
- Certain applications send the "domain_hint" query parameter to Azure AD during authentication. These flows will continue, and users who are enabled for staged rollout will continue to use federation for authentication.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/reference-connect-sync-functions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-sync-functions-reference.md
@@ -660,7 +660,7 @@ The possible values for the format can be found here: [Custom date and time form
**Example:**
-`FormatDateTime(CDate("12/25/2007"),"yyyy-mm-dd")`
+`FormatDateTime(CDate("12/25/2007"),"yyyy-MM-dd")`
Results in "2007-12-25". `FormatDateTime(DateFromNum([pwdLastSet]),"yyyyMMddHHmmss.0Z")`
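The fix matters because .NET custom date and time format specifiers are case-sensitive: `MM` is the month, while `mm` is the minutes, so the earlier `yyyy-mm-dd` pattern substituted a minutes field into the date. Python's `strftime` has the analogous (case-reversed) pitfall, which makes for a quick sketch of why the pattern change produces a different string:

```python
from datetime import datetime

# FormatDateTime(CDate("12/25/2007"), "yyyy-MM-dd") uses .NET specifiers,
# where "MM" is the month and "mm" is minutes. In Python's strftime the
# cases are reversed: %m is the month, %M is minutes.
d = datetime(2007, 12, 25)  # time defaults to 00:00
print(d.strftime("%Y-%m-%d"))  # 2007-12-25 (month, the intended result)
print(d.strftime("%Y-%M-%d"))  # 2007-00-25 (minutes field, the analogous bug)
```

The takeaway for the sync rule is the same: only `"yyyy-MM-dd"` yields "2007-12-25".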
@@ -1392,4 +1392,4 @@ Would return "has"
## Additional Resources * [Understanding Declarative Provisioning Expressions](concept-azure-ad-connect-sync-declarative-provisioning-expressions.md) * [Azure AD Connect Sync: Customizing Synchronization options](how-to-connect-sync-whatis.md)
-* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
\ No newline at end of file
+* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/tshoot-connect-password-hash-synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-password-hash-synchronization.md
@@ -377,7 +377,7 @@ if ($aadConnectors -ne $null -and $adConnectors -ne $null)
{ if ($aadConnectors.Count -eq 1) {
- $features = Get-ADSyncAADCompanyFeature -ConnectorName $aadConnectors[0].Name
+ $features = Get-ADSyncAADCompanyFeature
Write-Host Write-Host "Password sync feature enabled in your Azure AD directory: " $features.PasswordHashSync foreach ($adConnector in $adConnectors)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/custom-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-overview.md
@@ -34,7 +34,7 @@ Built-in roles are out of box roles that have a fixed set of permissions. These
Once you've created your custom role definition (or using a built-in role), you can assign it to a user by creating a role assignment. A role assignment grants the user the permissions in a role definition at a specified scope. This two-step process allows you to create a single role definition and assign it many times at different scopes. A scope defines the set of Azure AD resources the role member has access to. The most common scope is organization-wide (org-wide) scope. A custom role can be assigned at org-wide scope, meaning the role member has the role permissions over all resources in the organization. A custom role can also be assigned at an object scope. An example of an object scope would be a single application. The same role can be assigned to one user over all applications in the organization and then to another user with a scope of only the Contoso Expense Reports app.
-Azure AD built-in and custom roles operate on concepts similar to [Azure role-based access control (Azure RBAC)](../../active-directory-b2c/overview.md). The [difference between these two role-based access control systems](../../role-based-access-control/rbac-and-directory-admin-roles.md) is that Azure RBAC controls access to Azure resources such as virtual machines or storage using Azure Resource Management, and Azure AD custom roles control access to Azure AD resources using Graph API. Both systems leverage the concept of role definitions and role assignments. Azure AD RBAC permissions cannot be included in Azure roles and vice versa.
+Azure AD built-in and custom roles operate on concepts similar to [Azure role-based access control (Azure RBAC)](https://docs.microsoft.com/azure/active-directory/develop/access-tokens#payload-claims). The [difference between these two role-based access control systems](../../role-based-access-control/rbac-and-directory-admin-roles.md) is that Azure RBAC controls access to Azure resources such as virtual machines or storage using Azure Resource Management, and Azure AD custom roles control access to Azure AD resources using Graph API. Both systems leverage the concept of role definitions and role assignments. Azure AD RBAC permissions cannot be included in Azure roles and vice versa.
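The definition-plus-assignment model can be sketched as the two objects involved. This is an illustrative Python sketch only: the field names follow the shape of Microsoft Graph role management objects, but all IDs and the permission string are invented placeholders, not values from a real tenant.

```python
# One role definition, assigned twice at different scopes.
# All IDs below are hypothetical placeholders.
role_definition = {
    "displayName": "Application Support Administrator",
    "description": "Can update basic properties of application registrations.",
    "isEnabled": True,
    "rolePermissions": [
        {"allowedResourceActions": ["microsoft.directory/applications/basic/update"]}
    ],
}

# Org-wide assignment: a scope of "/" grants the role's permissions
# over all resources in the organization.
org_wide_assignment = {
    "principalId": "user-object-id",
    "roleDefinitionId": "role-definition-id",
    "directoryScopeId": "/",
}

# Object-scoped assignment: the same role definition, limited to a
# single application (e.g. the Contoso Expense Reports app object).
object_scoped_assignment = {
    "principalId": "another-user-object-id",
    "roleDefinitionId": "role-definition-id",
    "directoryScopeId": "/contoso-expense-reports-object-id",
}

print(org_wide_assignment["directoryScopeId"])
```

The point of the sketch is that both assignments reuse one `roleDefinitionId`; only the scope field differs.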
### How Azure AD determines if a user has access to a resource
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/adobe-identity-management-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/adobe-identity-management-tutorial.md
@@ -2,22 +2,16 @@
title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Adobe Identity Management | Microsoft Docs' description: Learn how to configure single sign-on between Azure Active Directory and Adobe Identity Management. services: active-directory
-documentationCenter: na
author: jeevansd
-manager: mtillman
-ms.reviewer: barbkess
-
-ms.assetid: 9db7f01d-7f15-492f-a839-55963790a12e
+manager: CelesteDG
+ms.reviewer: CelesteDG
ms.service: active-directory ms.subservice: saas-app-tutorial ms.workload: identity
-ms.tgt_pltfrm: na
-ms.devlang: na
ms.topic: tutorial
-ms.date: 09/26/2019
+ms.date: 01/15/2021
ms.author: jeedes
-ms.collection: M365-identity-device-management
--- # Tutorial: Azure Active Directory single sign-on (SSO) integration with Adobe Identity Management
@@ -28,8 +22,6 @@ In this tutorial, you'll learn how to integrate Adobe Identity Management with A
* Enable your users to be automatically signed-in to Adobe Identity Management with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis).
- ## Prerequisites To get started, you need the following items:
@@ -47,18 +39,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Adobe Identity Management into Azure AD, you need to add Adobe Identity Management from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Adobe Identity Management** in the search box. 1. Select **Adobe Identity Management** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Adobe Identity Management
+## Configure and test Azure AD SSO for Adobe Identity Management
Configure and test Azure AD SSO with Adobe Identity Management using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Adobe Identity Management.
-To configure and test Azure AD SSO with Adobe Identity Management, complete the following building blocks:
+To configure and test Azure AD SSO with Adobe Identity Management, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -71,15 +63,15 @@ To configure and test Azure AD SSO with Adobe Identity Management, complete the
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Adobe Identity Management** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Adobe Identity Management** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png) 1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Sign on URL** text box, type a URL:
+ a. In the **Sign on URL** text box, type the URL:
`https://adobe.com` b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
@@ -115,38 +107,63 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Adobe Identity Management**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Adobe Identity Management SSO
-To configure single sign-on on **Adobe Identity Management** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Adobe Identity Management support team](mailto:identity@adobe.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. To automate the configuration within Adobe Identity Management, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding the extension to the browser, clicking **Set up Adobe Identity Management** will direct you to the Adobe Identity Management application. From there, provide the admin credentials to sign in to Adobe Identity Management. The browser extension will automatically configure the application for you and automate steps 3-8.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Adobe Identity Management manually, open a different web browser window and sign in to your Adobe Identity Management company site as an administrator.
+
+4. Go to the **Settings** tab and click on **Create Directory**.
+
+ ![Adobe Identity Management settings](./media/adobe-identity-management-tutorial/settings.png)
+
+5. Enter the directory name in the text box, select **Federated ID**, and click **Next**.
+
+ ![Adobe Identity Management create directory](./media/adobe-identity-management-tutorial/create-directory.png)
+
+6. Select **Other SAML Providers** and click **Next**.
+
+ ![Adobe Identity Management saml providers](./media/adobe-identity-management-tutorial/saml-providers.png)
+
+7. Click on **select** to upload the **Metadata XML** file that you downloaded from the Azure portal.
+
+ ![Adobe Identity Management saml configuration](./media/adobe-identity-management-tutorial/saml-configuration.png)
+
+8. Click on **Done**.
### Create Adobe Identity Management test user
-In this section, you create a user called B.Simon in Adobe Identity Management. Work with [Adobe Identity Management support team](mailto:identity@adobe.com) to add the users in the Adobe Identity Management platform. Users must be created and activated before you use single sign-on.
+1. Go to the **Users** tab and click on **Add User**.
+
+ ![Adobe Identity Management add user](./media/adobe-identity-management-tutorial/add-user.png)
+
+2. In the **Enter user's email address** textbox, enter the **email address**.
-## Test SSO
+ ![Adobe Identity Management save user](./media/adobe-identity-management-tutorial/save-user.png)
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+3. Click **Save**.
-When you click the Adobe Identity Management tile in the Access Panel, you should be automatically signed in to the Adobe Identity Management for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+## Test SSO
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list)
+* Click on **Test this application** in the Azure portal. This will redirect to the Adobe Identity Management Sign-on URL where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis)
+* Go to the Adobe Identity Management Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview)
+* You can use Microsoft My Apps. When you click the Adobe Identity Management tile in My Apps, this will redirect to the Adobe Identity Management Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try Adobe Identity Management with Azure AD](https://aad.portal.azure.com/)
+## Next steps
+Once you configure Adobe Identity Management you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/blink-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/blink-provisioning-tutorial.md
@@ -45,7 +45,7 @@ Before configuring and enabling automatic user provisioning, you should decide w
## Setup Blink for provisioning
-1. Log a [Support Case](https://support.joinblink.com) or email **Blink support** at support@joinblink.com to request a SCIM token. .
+1. Log a [Support Case](https://support.joinblink.com) or email **Blink support** at support@joinblink.com to request a SCIM token.
2. Copy the **SCIM Authentication Token**. This value will be entered in the Secret Token field in the Provisioning tab of your Blink application in the Azure portal.
@@ -112,7 +112,23 @@ This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to Blink in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Blink for update operations. Select the **Save** button to commit any changes.
- ![Blink User Attributes](media/blink-provisioning-tutorial/new-user-attributes.png)
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |active|Boolean|
+ |title|String|
+ |emails[type eq "work"].value|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |phoneNumbers[type eq "mobile"].value|String|
+ |externalId|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
+ |urn:ietf:params:scim:schemas:extension:blink:2.0:User:company|String|
+ |urn:ietf:params:scim:schemas:extension:blink:2.0:User:description|String|
+ |urn:ietf:params:scim:schemas:extension:blink:2.0:User:location|String|
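The attribute paths in the table follow SCIM 2.0 conventions: bracketed expressions such as `emails[type eq "work"].value` select one entry of a multi-valued attribute, and the `urn:...` names are extension-schema attributes nested under their schema URI. A hypothetical payload (all values invented) showing where each mapped attribute lives:

```python
# Hypothetical SCIM user resource illustrating the mapped attribute paths.
# All values are invented for illustration, not from a real tenant.
ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
BLINK = "urn:ietf:params:scim:schemas:extension:blink:2.0:User"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE, BLINK],
    "userName": "b.simon@contoso.com",  # matching attribute, filterable
    "active": True,
    "title": "Engineer",
    "name": {"givenName": "B", "familyName": "Simon"},
    "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
    "phoneNumbers": [
        {"type": "work", "value": "+1 555 0100"},
        {"type": "mobile", "value": "+1 555 0101"},
    ],
    "externalId": "aad-object-id",
    # Enterprise and Blink extension attributes nest under their schema URI:
    ENTERPRISE: {"department": "IT", "employeeNumber": "1001"},
    BLINK: {"company": "Contoso", "description": "Test user", "location": "Seattle"},
}

# A path filter such as emails[type eq "work"].value resolves like this:
work_email = next(e["value"] for e in user["emails"] if e["type"] == "work")
print(work_email)  # b.simon@contoso.com
```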
10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
@@ -132,15 +148,23 @@ This operation starts the initial synchronization of all users defined in **Scop
For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
++ ## Change log
-* 01/14/2021 - Custom extension attribute **company** , **description** and **location** has been added.
+* 01/14/2021 - Custom extension attributes **company**, **description**, and **location** have been added.
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
\ No newline at end of file
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/cofense-provision-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cofense-provision-tutorial.md
@@ -161,9 +161,13 @@ This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Change log
+
+* 01/15/2020 - Change from "Only during Object Creation" to "Always" has been implemented for objectId -> externalId mapping.
## Additional resources
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/intacct-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/intacct-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 08/05/2020
+ms.date: 01/15/2021
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Sage Intacct with Azure Active D
* Enable your users to be automatically signed-in to Sage Intacct with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -35,13 +33,12 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Sage Intacct supports **IDP** initiated SSO
-* Once you configure Sage Intacct you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding Sage Intacct from the gallery To configure the integration of Sage Intacct into Azure AD, you need to add Sage Intacct from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
@@ -52,7 +49,7 @@ To configure the integration of Sage Intacct into Azure AD, you need to add Sage
Configure and test Azure AD SSO with Sage Intacct using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Sage Intacct.
-To configure and test Azure AD SSO with Sage Intacct, complete the following building blocks:
+To configure and test Azure AD SSO with Sage Intacct, complete the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
@@ -65,9 +62,9 @@ To configure and test Azure AD SSO with Sage Intacct, complete the following bui
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Sage Intacct** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Sage Intacct** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -128,15 +125,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Sage Intacct**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Sage Intacct SSO
@@ -207,16 +198,13 @@ To set up Azure AD users so they can sign in to Sage Intacct, they must be provi
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Sage Intacct tile in the Access Panel, you should be automatically signed in to the Sage Intacct for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Sage Intacct for which you set up the SSO
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Sage Intacct tile in My Apps, you should be automatically signed in to the Sage Intacct for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure Sage Intacct you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/international-sos-assistance-products-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/international-sos-assistance-products-tutorial.md new file mode 100644
@@ -0,0 +1,140 @@
+---
+title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with International SOS Assistance Products | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and International SOS Assistance Products.
+services: active-directory
+author: jeevansd
+manager: CelesteDG
+ms.reviewer: CelesteDG
+ms.service: active-directory
+ms.subservice: saas-app-tutorial
+ms.workload: identity
+ms.topic: tutorial
+ms.date: 01/15/2021
+ms.author: jeedes
+
+---
+
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with International SOS Assistance Products
+
+In this tutorial, you'll learn how to integrate International SOS Assistance Products with Azure Active Directory (Azure AD). When you integrate International SOS Assistance Products with Azure AD, you can:
+
+* Control in Azure AD who has access to International SOS Assistance Products.
+* Enable your users to be automatically signed-in to International SOS Assistance Products with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* International SOS Assistance Products single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* International SOS Assistance Products supports **SP** initiated SSO
+
+* International SOS Assistance Products supports **Just In Time** user provisioning
++
+## Adding International SOS Assistance Products from the gallery
+
+To configure the integration of International SOS Assistance Products into Azure AD, you need to add International SOS Assistance Products from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **International SOS Assistance Products** in the search box.
+1. Select **International SOS Assistance Products** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for International SOS Assistance Products
+
+Configure and test Azure AD SSO with International SOS Assistance Products using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in International SOS Assistance Products.
+
+To configure and test Azure AD SSO with International SOS Assistance Products, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure International SOS Assistance Products SSO](#configure-international-sos-assistance-products-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create International SOS Assistance Products test user](#create-international-sos-assistance-products-test-user)** - to have a counterpart of B.Simon in International SOS Assistance Products that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **International SOS Assistance Products** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.outsystemsenterprise.com/myassist`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.internationalsos.com/sso/saml2/<CUSTOM_ID>`
+
+ c. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<IN>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Sign on URL, Reply URL, and Identifier. Contact [International SOS Assistance Products Client support team](mailto:onlinehelp@internationalsos.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
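The three URL patterns in this step can be sketched in shell. Every value below (the subdomain, custom ID, and service-provider ID) is a made-up placeholder; the real values come from the support team, as the note above explains.

```shell
# Build the Basic SAML Configuration values from placeholders.
# All three inputs are assumptions for illustration only; request the
# real values from the International SOS support team.
SUBDOMAIN="contoso"                  # placeholder tenant subdomain
CUSTOM_ID="abc123"                   # placeholder custom ID
SP_ID="exk0000000000000000"          # placeholder Okta service-provider ID

SIGN_ON_URL="https://${SUBDOMAIN}.outsystemsenterprise.com/myassist"
REPLY_URL="https://${SUBDOMAIN}.internationalsos.com/sso/saml2/${CUSTOM_ID}"
ENTITY_ID="https://www.okta.com/saml2/service-provider/${SP_ID}"

printf '%s\n%s\n%s\n' "$SIGN_ON_URL" "$REPLY_URL" "$ENTITY_ID"
```

Once the support team confirms the real values, paste the resulting URLs into the corresponding fields.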
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
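As background, the **App Federation Metadata Url** you just copied serves a SAML federation metadata XML document. The sketch below extracts the `entityID` from a minimal sample document; the XML here is invented for illustration, and real tenant metadata also contains certificates and endpoint definitions.

```shell
# Write a tiny sample of federation metadata; a real document retrieved
# from the App Federation Metadata Url is much larger.
cat > /tmp/federationmetadata.xml <<'EOF'
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
</EntityDescriptor>
EOF

# Extract the entityID attribute (a quick sed instead of a full XML parser).
ENTITY_ID=$(sed -n 's/.*entityID="\([^"]*\)".*/\1/p' /tmp/federationmetadata.xml)
echo "$ENTITY_ID"
```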
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
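If you prefer a script to the portal steps above, here is a hedged sketch of the same user creation with the Azure CLI. `az ad user create` is a real command, but the domain below is a placeholder and the call is kept behind a dry-run flag; adapt it before running against a real tenant.

```shell
# Sketch: create the B.Simon test user via the Azure CLI instead of the
# portal. DRY_RUN=1 only prints the command; set it to 0 after `az login`
# and after replacing the placeholder domain and password.
DISPLAY_NAME="B.Simon"
DOMAIN="contoso.com"                 # placeholder; use your verified domain
UPN="${DISPLAY_NAME}@${DOMAIN}"
DRY_RUN=1

CMD="az ad user create --display-name ${DISPLAY_NAME} --user-principal-name ${UPN} --password <strong-password>"
if [ "$DRY_RUN" -eq 1 ]; then
  echo "$CMD"
else
  eval "$CMD"
fi
```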
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to International SOS Assistance Products.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **International SOS Assistance Products**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure International SOS Assistance Products SSO
+
+To configure single sign-on on the **International SOS Assistance Products** side, you need to send the **App Federation Metadata Url** to the [International SOS Assistance Products support team](mailto:onlinehelp@internationalsos.com). They use this value to configure the SAML SSO connection properly on both sides.
+
+### Create International SOS Assistance Products test user
+
+In this section, a user called Britta Simon is created in International SOS Assistance Products. International SOS Assistance Products supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in International SOS Assistance Products, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the International SOS Assistance Products Sign-on URL, where you can initiate the login flow.
+
+* Go to the International SOS Assistance Products Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the International SOS Assistance Products tile in My Apps, you're redirected to the International SOS Assistance Products Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure International SOS Assistance Products, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/navex-one-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/navex-one-tutorial.md new file mode 100644
@@ -0,0 +1,130 @@
+---
+title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with NAVEX One | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and NAVEX One.
+services: active-directory
+author: jeevansd
+manager: CelesteDG
+ms.reviewer: CelesteDG
+ms.service: active-directory
+ms.subservice: saas-app-tutorial
+ms.workload: identity
+ms.topic: tutorial
+ms.date: 01/13/2021
+ms.author: jeedes
+
+---
+
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with NAVEX One
+
+In this tutorial, you'll learn how to integrate NAVEX One with Azure Active Directory (Azure AD). When you integrate NAVEX One with Azure AD, you can:
+
+* Control in Azure AD who has access to NAVEX One.
+* Enable your users to be automatically signed-in to NAVEX One with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* NAVEX One single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* NAVEX One supports **SP** initiated SSO
+
+## Adding NAVEX One from the gallery
+
+To configure the integration of NAVEX One into Azure AD, you need to add NAVEX One from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **NAVEX One** in the search box.
+1. Select **NAVEX One** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for NAVEX One
+
+Configure and test Azure AD SSO with NAVEX One using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in NAVEX One.
+
+To configure and test Azure AD SSO with NAVEX One, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure NAVEX One SSO](#configure-navex-one-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create NAVEX One test user](#create-navex-one-test-user)** - to have a counterpart of B.Simon in NAVEX One that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **NAVEX One** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CLIENT_KEY>.navexglobal.com`
+
+ > [!NOTE]
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [NAVEX One Client support team](mailto:ethicspoint@navexglobal.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
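As a quick sanity check (illustrative only; `acme` is a made-up client key), you can confirm a candidate Sign-on URL matches the documented `https://<CLIENT_KEY>.navexglobal.com` pattern before pasting it into the configuration.

```shell
# Validate a candidate NAVEX One Sign-on URL against the documented pattern.
CANDIDATE="https://acme.navexglobal.com"   # placeholder client key

if printf '%s' "$CANDIDATE" | grep -Eq '^https://[A-Za-z0-9-]+\.navexglobal\.com$'; then
  RESULT=ok
else
  RESULT=bad
fi
echo "$RESULT"
```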
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to NAVEX One.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **NAVEX One**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure NAVEX One SSO
+
+To configure single sign-on on the **NAVEX One** side, you need to send the **App Federation Metadata Url** to the [NAVEX One support team](mailto:ethicspoint@navexglobal.com). They use this value to configure the SAML SSO connection properly on both sides.
+
+### Create NAVEX One test user
+
+In this section, you create a user called Britta Simon in NAVEX One. Work with [NAVEX One support team](mailto:ethicspoint@navexglobal.com) to add the users in the NAVEX One platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the NAVEX One Sign-on URL, where you can initiate the login flow.
+
+* Go to the NAVEX One Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the NAVEX One tile in My Apps, you're redirected to the NAVEX One Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure NAVEX One, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/user-help-auth-app-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-faq.md
@@ -10,7 +10,7 @@ ms.service: active-directory
ms.workload: identity ms.subservice: user-help ms.topic: end-user-help
-ms.date: 12/09/2020
+ms.date: 01/15/2020
ms.author: curtand ms.reviewer: olhaun ---
@@ -29,13 +29,17 @@ The Microsoft Authenticator app replaced the Azure Authenticator app, and it's t
**A**: Registering a device gives your device access to your organization's services and doesn't allow your organization access to your device.
-## Too many app permissions
+### Too many app permissions
**Q**: Why does the app request so many permissions?
-**A**: Here's the full list of permissions that might be asked for, and how they're used by the app. The specific permissions you see will depend on the type of phone you have.
+**A**: Here's the full list of permissions that might be asked for, and how they're used by the app. The specific permissions you see will depend on the type of phone you have. Sometimes your organization wants to know your **Location** before allowing you to access certain resources. The app will request this permission only if your organization has a policy requiring location.
-- **Location**. Sometimes your organization wants to know your location before allowing you to access certain resources. The app will request this permission only if your organization has a policy requiring location.
+### Error adding account
+
+**Q**: When I try to add my account, I get an error message saying "The account you're trying to add is not valid at this time. Contact your admin to fix this issue (uniqueness validation)." What should I do?
+
+**A**: Reach out to your admin and let them know you're prevented from adding your account to Authenticator because of a uniqueness validation issue. You'll need to provide your sign-in username so that your admin can look you up in your organization.
### Legacy APNs support deprecated
advisor https://docs.microsoft.com/en-us/azure/advisor/advisor-performance-recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/advisor-performance-recommendations.md
@@ -173,7 +173,7 @@ Learn more about [Immersive reader SDK](../cognitive-services/immersive-reader/i
## Improve VM performance by changing the maximum session limit
-Advisor detects that you have a host pool that has depth first set as the load balancing algorithm, and that host pool's max session limit is greater than or equal to 99999. Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions will be directed to the same session host, and this will cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, you must set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs.
+Advisor detects that you have a host pool that has depth first set as the load balancing algorithm, and that host pool's max session limit is greater than or equal to 999999. Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions will be directed to the same session host, and this will cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, you must set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs.
To learn more about load balancing in Windows Virtual Desktop, see [Configure the Windows Virtual Desktop load-balancing method](/azure/virtual-desktop/troubleshoot-set-up-overview).
aks https://docs.microsoft.com/en-us/azure/aks/azure-files-volume https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-files-volume.md
@@ -144,7 +144,6 @@ spec:
storage: 5Gi accessModes: - ReadWriteMany
- storageClassName: azurefile
azureFile: secretName: azure-secret shareName: aksshare
@@ -172,7 +171,6 @@ spec:
storage: 5Gi accessModes: - ReadWriteMany
- storageClassName: azurefile
azureFile: secretName: azure-secret shareName: aksshare
@@ -196,7 +194,7 @@ metadata:
spec: accessModes: - ReadWriteMany
- storageClassName: azurefile
+ storageClassName: ""
resources: requests: storage: 5Gi
aks https://docs.microsoft.com/en-us/azure/aks/ingress-own-tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-own-tls.md
@@ -360,7 +360,7 @@ kubectl delete -f hello-world-ingress.yaml
Delete the certificate Secret: ```console
-kubectl delete secret aks-ingress-tls
+kubectl delete secret aks-ingress-tls --namespace ingress-basic
``` Finally, you can delete the namespace itself. Use the `kubectl delete` command and specify your namespace name:
aks https://docs.microsoft.com/en-us/azure/aks/ingress-static-ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-static-ip.md
@@ -101,7 +101,7 @@ No ingress rules have been created yet, so the NGINX ingress controller's defaul
You can verify that the DNS name label has been applied by querying the FQDN on the public IP address as follows: ```azurecli-interactive
-az network public-ip list --resource-group MC_myResourceGroup_myAKSCluster_eastus --query "[?name=='myAKSPublicIP'].[dnsSettings.fqdn]" -o tsv
+az network public-ip list --resource-group MC_myResourceGroup_myAKSCluster_eastus --query "[?ipAddress=='myAKSPublicIP'].[dnsSettings.fqdn]" -o tsv
``` The ingress controller is now accessible through the IP address or the FQDN.
aks https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
@@ -54,17 +54,17 @@ Azure Kubernetes Service offers multiple Kubernetes versions. As new versions be
To learn more about lifecycle versions, see [Supported Kubernetes versions in AKS][aks-supported versions]. For steps on how to upgrade, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
-### GPU enabled nodes
+### GPU-enabled nodes
-AKS supports the creation of GPU enabled node pools. Azure currently provides single or multiple GPU enabled VMs. GPU enabled VMs are designed for compute-intensive, graphics-intensive, and visualization workloads.
+AKS supports the creation of GPU-enabled node pools. Azure currently provides single or multiple GPU-enabled VMs. GPU-enabled VMs are designed for compute-intensive, graphics-intensive, and visualization workloads.
For more information, see [Using GPUs on AKS][aks-gpu]. ### Confidential computing nodes (public preview)
-AKS supports the creation of Intel SGX based confidential computing node pools (DCSv2 VMs). Confidential computing nodes allow containers to run in a hardware based trusted and isolated execution environment (enclaves). Isolation between containers combined with code integrity through attestation can help with your defense-in-depth container security strategy. Confidential computing nodes supports both confidential containers (existing docker apps) and enclave aware containers.
+AKS supports the creation of Intel SGX based confidential computing node pools (DCSv2 VMs). Confidential computing nodes allow containers to run in a hardware-based trusted execution environment (enclaves). Isolation between containers, combined with code integrity through attestation, can help with your defense-in-depth container security strategy. Confidential computing nodes supports both confidential containers (existing Docker apps) and enclave-aware containers.
-For more information, see [Confidential computing nodes on AKS][conf-com-node]
+For more information, see [Confidential computing nodes on AKS][conf-com-node].
### Storage volume support
@@ -76,7 +76,7 @@ Get started with dynamic persistent volumes using [Azure Disks][azure-disk] or [
## Virtual networks and ingress
-An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network, and can directly communicate with other pods in the cluster, and other nodes in the virtual network. Pods can connect also to other services in a peered virtual network, and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
+An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network, and can directly communicate with other pods in the cluster, and other nodes in the virtual network. Pods can also connect to other services in a peered virtual network, and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
For more information, see the [Network concepts for applications in AKS][aks-networking].
@@ -94,15 +94,15 @@ Kubernetes has a rich ecosystem of development and management tools such as Helm
Additionally, Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams. With minimal configuration, you can run and debug containers directly in AKS. To get started, see [Azure Dev Spaces][azure-dev-spaces].
-The Azure DevOps project provides a simple solution for bringing existing code and Git repository into Azure. The DevOps project automatically creates Azure resources such as AKS, a release pipeline in Azure DevOps Services that includes a build pipeline for CI, sets up a release pipeline for CD, and then creates an Azure Application Insights resource for monitoring.
+DevOps Starter provides a simple solution for bringing existing code and Git repositories into Azure. DevOps Starter automatically creates Azure resources such as AKS, a release pipeline in Azure DevOps Services that includes a build pipeline for CI, sets up a release pipeline for CD, and then creates an Azure Application Insights resource for monitoring.
-For more information, see [Azure DevOps project][azure-devops].
+For more information, see [DevOps Starter][azure-devops].
## Docker image support and private container registry AKS supports the Docker image format. For private storage of your Docker images, you can integrate AKS with Azure Container Registry (ACR).
-To create private image store, see [Azure Container Registry][acr-docs].
+To create a private image store, see [Azure Container Registry][acr-docs].
## Kubernetes certification
aks https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
@@ -4,7 +4,7 @@ titleSuffix: Azure Kubernetes Service
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal. services: container-service ms.topic: quickstart
-ms.date: 10/06/2020
+ms.date: 01/13/2021
ms.custom: mvc, seo-javascript-october2019
@@ -67,6 +67,9 @@ Open Cloud Shell using the `>_` button on the top of the Azure portal.
![Open the Azure Cloud Shell in the portal](media/kubernetes-walkthrough-portal/aks-cloud-shell.png)
+> [!NOTE]
+> To perform these operations in a local shell installation, you'll first need to verify Azure CLI is installed, then connect to Azure via the `az login` command.
+ To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them. The following example gets credentials for the cluster name *myAKSCluster* in the resource group named *myResourceGroup*: ```azurecli
@@ -276,7 +279,7 @@ To learn more about AKS, and walk through a complete code to deployment example,
<!-- LINKS - internal --> [kubernetes-concepts]: concepts-clusters-workloads.md
-[az-aks-get-credentials]: /cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials
+[az-aks-get-credentials]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-get-credentials
[az-aks-delete]: /cli/azure/aks#az-aks-delete [aks-monitor]: ../azure-monitor/insights/container-insights-overview.md [aks-network]: ./concepts-network.md
aks https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-powershell.md
@@ -3,7 +3,7 @@ title: 'Quickstart: Deploy an AKS cluster by using PowerShell'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using PowerShell. services: container-service ms.topic: quickstart
-ms.date: 09/11/2020
+ms.date: 01/13/2021
ms.custom: devx-track-azurepowershell
@@ -86,7 +86,7 @@ containers is also enabled by default. This takes several minutes to complete.
> [Why are two resource groups created with AKS?](./faq.md#why-are-two-resource-groups-created-with-aks) ```azurepowershell-interactive
-New-AzAks -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
+New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
``` After a few minutes, the command completes and returns information about the cluster.
aks https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-rm-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-rm-template.md
@@ -3,7 +3,7 @@ title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster
description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS) services: container-service ms.topic: quickstart
-ms.date: 09/11/2020
+ms.date: 01/13/2021
ms.custom: mvc,subject-armqs, devx-track-azurecli
@@ -308,10 +308,10 @@ To learn more about AKS, and walk through a complete code to deployment example,
[kubernetes-concepts]: concepts-clusters-workloads.md [aks-monitor]: ../azure-monitor/insights/container-insights-onboard.md [aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
-[az-aks-browse]: /cli/azure/aks?view=azure-cli-latest#az-aks-browse
-[az-aks-create]: /cli/azure/aks?view=azure-cli-latest#az-aks-create
-[az-aks-get-credentials]: /cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials
-[az-aks-install-cli]: /cli/azure/aks?view=azure-cli-latest#az-aks-install-cli
+[az-aks-browse]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-browse
+[az-aks-create]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-create
+[az-aks-get-credentials]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-get-credentials
+[az-aks-install-cli]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-install-cli
[az-group-create]: /cli/azure/group#az-group-create [az-group-delete]: /cli/azure/group#az-group-delete [azure-cli-install]: /cli/azure/install-azure-cli
aks https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough.md
@@ -3,7 +3,7 @@ title: 'Quickstart: Deploy an AKS cluster by using Azure CLI'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI. services: container-service ms.topic: quickstart
-ms.date: 09/11/2020
+ms.date: 01/12/2021
ms.custom: [H1Hack27Feb2017, mvc, devcenter, seo-javascript-september2019, seo-javascript-october2019, seo-python-october2019, devx-track-azurecli, contperf-fy21q1]
@@ -39,7 +39,7 @@ The following example creates a resource group named *myResourceGroup* in the *e
az group create --name myResourceGroup --location eastus ```
-The following example output shows the resource group created successfully:
+Output similar to the following example indicates the resource group has been created successfully:
```json {
@@ -287,10 +287,10 @@ To learn more about AKS, and walk through a complete code to deployment example,
[kubernetes-concepts]: concepts-clusters-workloads.md [aks-monitor]: ../azure-monitor/insights/container-insights-onboard.md [aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
-[az-aks-browse]: /cli/azure/aks?view=azure-cli-latest#az-aks-browse
-[az-aks-create]: /cli/azure/aks?view=azure-cli-latest#az-aks-create
-[az-aks-get-credentials]: /cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials
-[az-aks-install-cli]: /cli/azure/aks?view=azure-cli-latest#az-aks-install-cli
+[az-aks-browse]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-browse
+[az-aks-create]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-create
+[az-aks-get-credentials]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-get-credentials
+[az-aks-install-cli]: /cli/azure/aks?view=azure-cli-latest&preserve-view=true#az-aks-install-cli
[az-group-create]: /cli/azure/group#az-group-create [az-group-delete]: /cli/azure/group#az-group-delete [azure-cli-install]: /cli/azure/install-azure-cli
aks https://docs.microsoft.com/en-us/azure/aks/manage-azure-rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/manage-azure-rbac.md
@@ -112,7 +112,7 @@ AKS provides the following four built-in roles:
| Role | Description | |-------------------------------------|--------------|
-| Azure Kubernetes Service RBAC Viewer | Allows read-only access to see most objects in a namespace. It doesn't allow viewing roles or role bindings. This role doesn't allow viewing `Secrets`, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation) |
+| Azure Kubernetes Service RBAC Reader | Allows read-only access to see most objects in a namespace. It doesn't allow viewing roles or role bindings. This role doesn't allow viewing `Secrets`, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation) |
| Azure Kubernetes Service RBAC Writer | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing `Secrets` and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. | | Azure Kubernetes Service RBAC Admin | Allows admin access, intended to be granted within a namespace. Allows read/write access to most resources in a namespace (or cluster scope), including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. | | Azure Kubernetes Service RBAC Cluster Admin | Allows super-user access to perform any action on any resource. It gives full control over every resource in the cluster and in all namespaces. |
aks https://docs.microsoft.com/en-us/azure/aks/quickstart-helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quickstart-helm.md
@@ -4,7 +4,7 @@ description: Use Helm with AKS and Azure Container Registry to package and run a
services: container-service author: zr-msft ms.topic: article
-ms.date: 07/28/2020
+ms.date: 01/12/2021
ms.author: zarhoads ---
@@ -139,7 +139,7 @@ For example:
replicaCount: 1 image:
- repository: *myhelmacr.azurecr.io*/webfrontend
+ repository: myhelmacr.azurecr.io/webfrontend
pullPolicy: IfNotPresent ... service:
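The hunk above corrects the `repository:` value in the chart's `values.yaml`. A scripted way to make the same substitution might look like the following sketch; `myhelmacr.azurecr.io` is the article's placeholder registry, and the `<REGISTRY_NAME>` token is an assumption for illustration.

```shell
# Recreate a minimal values.yaml with a placeholder registry token.
cat > values.yaml <<'EOF'
replicaCount: 1
image:
  repository: <REGISTRY_NAME>.azurecr.io/webfrontend
  pullPolicy: IfNotPresent
EOF
# Substitute the token with your ACR login server (GNU sed in-place edit).
sed -i 's|<REGISTRY_NAME>.azurecr.io|myhelmacr.azurecr.io|' values.yaml
grep 'repository:' values.yaml
```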
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-app-update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-app-update.md
@@ -3,7 +3,7 @@ title: Kubernetes on Azure tutorial - Update an application
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to update an existing application deployment to AKS with a new version of the application code. services: container-service ms.topic: tutorial
-ms.date: 09/30/2020
+ms.date: 01/12/2021
ms.custom: mvc
@@ -62,7 +62,7 @@ docker-compose up --build -d
To verify that the updated container image shows your changes, open a local web browser to `http://localhost:8080`.
-:::image type="content" source="media/container-service-kubernetes-tutorials/vote-app-updated.png" alt-text="Screenshot showing an example of the updated container image Azure Voting App opened with a local web browser and local host.":::
+:::image type="content" source="media/container-service-kubernetes-tutorials/vote-app-updated.png" alt-text="Screenshot showing an example of the updated container image Azure Voting App running locally opened in a local web browser":::
The updated values provided in the *config_file.cfg* file are displayed in your running application.
@@ -141,9 +141,9 @@ To view the update application, first get the external IP address of the `azure-
kubectl get service azure-vote-front ```
-Now open a local web browser to the IP address of your service:
+Now open a web browser to the IP address of your service:
-:::image type="content" source="media/container-service-kubernetes-tutorials/vote-app-updated-external.png" alt-text="Screenshot showing an example of the updated application Azure Voting App opened in a local web browser.":::
+:::image type="content" source="media/container-service-kubernetes-tutorials/vote-app-updated-external.png" alt-text="Screenshot showing an example of the updated image Azure Voting App running in an AKS cluster opened in a local web browser.":::
## Next steps
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-deploy-application.md
@@ -3,7 +3,7 @@ title: Kubernetes on Azure tutorial - Deploy an application
description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using a custom image stored in Azure Container Registry. services: container-service ms.topic: tutorial
-ms.date: 09/30/2020
+ms.date: 01/12/2021
ms.custom: mvc
@@ -19,7 +19,7 @@ Kubernetes provides a distributed platform for containerized applications. You b
> * Run an application in Kubernetes > * Test the application
-In additional tutorials, this application is scaled out and updated.
+In later tutorials, this application is scaled out and updated.
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
@@ -47,7 +47,7 @@ The sample manifest file from the git repo cloned in the first tutorial uses the
vi azure-vote-all-in-one-redis.yaml ```
-Replace *microsoft* with your ACR login server name. The image name is found on line 51 of the manifest file. The following example shows the default image name:
+Replace *microsoft* with your ACR login server name. The image name is found on line 60 of the manifest file. The following example shows the default image name:
```yaml containers:
@@ -75,7 +75,7 @@ kubectl apply -f azure-vote-all-in-one-redis.yaml
The following example output shows the resources successfully created in the AKS cluster:
-```
+```console
$ kubectl apply -f azure-vote-all-in-one-redis.yaml deployment "azure-vote-back" created
@@ -96,19 +96,19 @@ kubectl get service azure-vote-front --watch
Initially the *EXTERNAL-IP* for the *azure-vote-front* service is shown as *pending*:
-```
+```output
azure-vote-front LoadBalancer 10.0.34.242 <pending> 80:30676/TCP 5s ``` When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
-```
+```output
azure-vote-front LoadBalancer 10.0.34.242 52.179.23.131 80:30676/TCP 67s ``` To see the application in action, open a web browser to the external IP address of your service:
-![Image of Kubernetes cluster on Azure](media/container-service-kubernetes-tutorials/azure-vote.png)
+:::image type="content" source="./media/container-service-kubernetes-tutorials/azure-vote.png" alt-text="Screenshot showing the container image Azure Voting App running in an AKS cluster opened in a local web browser" lightbox="./media/container-service-kubernetes-tutorials/azure-vote.png":::
If the application didn't load, it might be due to an authorization problem with your image registry. To view the status of your containers, use the `kubectl get pods` command. If the container images can't be pulled, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](cluster-container-registry-integration.md).
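Instead of watching `kubectl get service` interactively, a script can poll until the *EXTERNAL-IP* is populated. This is a generic retry sketch, not an official pattern; the `kubectl` jsonpath call in the comment assumes a reachable cluster with the *azure-vote-front* service.

```shell
# Retry a command until it prints a non-empty value, up to N attempts.
wait_for_value() {
  attempts="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    out=$("$@")
    if [ -n "$out" ]; then
      echo "$out"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example against a live cluster (hypothetical service name):
# wait_for_value 60 kubectl get service azure-vote-front \
#   -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```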
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-deploy-cluster.md
@@ -3,7 +3,7 @@ title: Kubernetes on Azure tutorial - Deploy a cluster
description: In this Azure Kubernetes Service (AKS) tutorial, you create an AKS cluster and use kubectl to connect to the Kubernetes master node. services: container-service ms.topic: tutorial
-ms.date: 09/30/2020
+ms.date: 01/12/2021
ms.custom: mvc, devx-track-azurecli
@@ -19,7 +19,7 @@ Kubernetes provides a distributed platform for containerized applications. With
> * Install the Kubernetes CLI (kubectl) > * Configure kubectl to connect to your AKS cluster
-In additional tutorials, the Azure Vote application is deployed to the cluster, scaled, and updated.
+In later tutorials, the Azure Vote application is deployed to the cluster, scaled, and updated.
## Before you begin
@@ -31,9 +31,9 @@ This tutorial requires that you're running the Azure CLI version 2.0.53 or later
AKS clusters can use Kubernetes role-based access control (Kubernetes RBAC). These controls let you define access to resources based on roles assigned to users. Permissions are combined if a user is assigned multiple roles, and permissions can be scoped to either a single namespace or across the whole cluster. By default, the Azure CLI automatically enables Kubernetes RBAC when you create an AKS cluster.
-Create an AKS cluster using [az aks create][]. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. The following example does not specify a region so the AKS cluster is also created in the *eastus* region. See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions] for more information about resource limits and region availability for AKS.
+Create an AKS cluster using [az aks create][]. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. The following example does not specify a region so the AKS cluster is also created in the *eastus* region. For more information about resource limits and region availability for AKS, see [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions].
-To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is automatically created, since you did not specify one. Here, this service principal is [granted the right to pull images][container-registry-integration] from the Azure Container Registry (ACR) instance you created in the previous tutorial. Note that you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
+To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is automatically created, since you did not specify one. Here, this service principal is [granted the right to pull images][container-registry-integration] from the Azure Container Registry (ACR) instance you created in the previous tutorial. To execute the command successfully, you're required to have an **Owner** or **Azure account administrator** role on the Azure subscription.
```azurecli az aks create \
@@ -44,7 +44,7 @@ az aks create \
--attach-acr <acrName> ```
-You can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md).
+To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
After a few minutes, the deployment completes, and returns JSON-formatted information about the AKS deployment.
@@ -74,8 +74,9 @@ To verify the connection to your cluster, run the [kubectl get nodes][kubectl-ge
``` $ kubectl get nodes
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-12345678-0 Ready agent 32m v1.14.8
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-37463671-vmss000000 Ready agent 2m37s v1.18.10
+aks-nodepool1-37463671-vmss000001 Ready agent 2m28s v1.18.10
``` ## Next steps
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-acr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-prepare-acr.md
@@ -3,7 +3,7 @@ title: Kubernetes on Azure tutorial - Create a container registry
description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload a sample application container image. services: container-service ms.topic: tutorial
-ms.date: 09/30/2020
+ms.date: 01/12/2021
ms.custom: mvc, devx-track-azurecli
@@ -20,7 +20,7 @@ Azure Container Registry (ACR) is a private registry for container images. A pri
> * Upload the image to ACR > * View images in your registry
-In additional tutorials, this ACR instance is integrated with a Kubernetes cluster in AKS, and an application is deployed from the image.
+In later tutorials, this ACR instance is integrated with a Kubernetes cluster in AKS, and an application is deployed from the image.
## Before you begin
@@ -58,12 +58,12 @@ The command returns a *Login Succeeded* message once completed.
To see a list of your current local images, use the [docker images][docker-images] command:
-```azurecli
+```console
$ docker images ```
-The above command output shows list of your current local images:
+The above command's output shows a list of your current local images:
-```
+```output
REPOSITORY TAG IMAGE ID CREATED SIZE mcr.microsoft.com/azuredocs/azure-vote-front v1 84b41c268ad9 7 minutes ago 944MB mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c 2 days ago 103MB
@@ -120,7 +120,7 @@ az acr repository list --name <acrName> --output table
The following example output lists the *azure-vote-front* image as available in the registry:
-```
+```output
Result ---------------- azure-vote-front
@@ -134,7 +134,7 @@ az acr repository show-tags --name <acrName> --repository azure-vote-front --out
The following example output shows the *v1* image tagged in a previous step:
-```
+```output
Result -------- v1
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-prepare-app.md
@@ -3,7 +3,7 @@ title: Kubernetes on Azure tutorial - Prepare an application
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to prepare and build a multi-container app with Docker Compose that you can then deploy to AKS. services: container-service ms.topic: tutorial
-ms.date: 09/30/2020
+ms.date: 01/12/2021
ms.custom: mvc
@@ -21,9 +21,9 @@ In this tutorial, part one of seven, a multi-container application is prepared f
Once completed, the following application runs in your local development environment:
-![Image of Kubernetes cluster on Azure](./media/container-service-tutorial-kubernetes-prepare-app/azure-vote.png)
+:::image type="content" source="./media/container-service-kubernetes-tutorials/azure-vote-local.png" alt-text="Screenshot showing the container image Azure Voting App running locally opened in a local web browser" lightbox="./media/container-service-kubernetes-tutorials/azure-vote-local.png":::
-In additional tutorials, the container image is uploaded to an Azure Container Registry, and then deployed into an AKS cluster.
+In later tutorials, the container image is uploaded to an Azure Container Registry, and then deployed into an AKS cluster.
## Before you begin
@@ -31,11 +31,12 @@ This tutorial assumes a basic understanding of core Docker concepts such as cont
To complete this tutorial, you need a local Docker development environment running Linux containers. Docker provides packages that configure Docker on a [Mac][docker-for-mac], [Windows][docker-for-windows], or [Linux][docker-for-linux] system.
-Azure Cloud Shell does not include the Docker components required to complete every step in these tutorials. Therefore, we recommend using a full Docker development environment.
+> [!NOTE]
+> Azure Cloud Shell does not include the Docker components required to complete every step in these tutorials. Therefore, we recommend using a full Docker development environment.
## Get application code
-The sample application used in this tutorial is a basic voting app. The application consists of a front-end web component and a back-end Redis instance. The web component is packaged into a custom container image. The Redis instance uses an unmodified image from Docker Hub.
+The [sample application][sample-application] used in this tutorial is a basic voting app consisting of a front-end web component and a back-end Redis instance. The web component is packaged into a custom container image. The Redis instance uses an unmodified image from Docker Hub.
Use [git][] to clone the sample application to your development environment:
@@ -49,7 +50,35 @@ Change into the cloned directory.
cd azure-voting-app-redis ```
-Inside the directory is the application source code, a pre-created Docker compose file, and a Kubernetes manifest file. These files are used throughout the tutorial set.
+Inside the directory is the application source code, a pre-created Docker compose file, and a Kubernetes manifest file. These files are used throughout the tutorial set. The contents and structure of the directory are as follows:
+
+```output
+azure-voting-app-redis
+│   azure-vote-all-in-one-redis.yaml
+│   docker-compose.yaml
+│   LICENSE
+│   README.md
+│
+├───azure-vote
+│   │   app_init.supervisord.conf
+│   │   Dockerfile
+│   │   Dockerfile-for-app-service
+│   │   sshd_config
+│   │
+│   └───azure-vote
+│       │   config_file.cfg
+│       │   main.py
+│       │
+│       ├───static
+│       │       default.css
+│       │
+│       └───templates
+│               index.html
+│
+└───jenkins-tutorial
+        config-jenkins.sh
+        deploy-jenkins-vm.sh
+```
## Create container images
@@ -86,11 +115,11 @@ d10e5244f237 mcr.microsoft.com/azuredocs/azure-vote-front:v1 "/entrypoi
To see the running application, enter `http://localhost:8080` in a local web browser. The sample application loads, as shown in the following example:
-![Image of Kubernetes cluster on Azure](./media/container-service-tutorial-kubernetes-prepare-app/azure-vote.png)
+:::image type="content" source="./media/container-service-kubernetes-tutorials/azure-vote-local.png" alt-text="Screenshot showing the container image Azure Voting App running locally opened in a local web browser" lightbox="./media/container-service-kubernetes-tutorials/azure-vote-local.png":::
## Clean up resources
-Now that the application's functionality has been validated, the running containers can be stopped and removed. Do not delete the container images - in the next tutorial, the *azure-vote-front* image is uploaded to an Azure Container Registry instance.
+Now that the application's functionality has been validated, the running containers can be stopped and removed. ***Do not delete the container images*** - in the next tutorial, the *azure-vote-front* image is uploaded to an Azure Container Registry instance.
Stop and remove the container instances and resources with the [docker-compose down][docker-compose-down] command:
@@ -124,6 +153,7 @@ Advance to the next tutorial to learn how to store container images in Azure Con
[docker-ps]: https://docs.docker.com/engine/reference/commandline/ps/ [docker-compose-down]: https://docs.docker.com/compose/reference/down [git]: https://git-scm.com/downloads
+[sample-application]: https://github.com/Azure-Samples/azure-voting-app-redis
<!-- LINKS - internal --> [aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-scale.md
@@ -3,7 +3,7 @@ title: Kubernetes on Azure tutorial - Scale Application
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to scale nodes and pods in Kubernetes, and implement horizontal pod autoscaling. services: container-service ms.topic: tutorial
-ms.date: 09/30/2020
+ms.date: 01/12/2021
ms.custom: mvc
@@ -19,7 +19,7 @@ If you've followed the tutorials, you have a working Kubernetes cluster in AKS a
> * Manually scale Kubernetes pods that run your application > * Configure autoscaling pods that run the app front-end
-In additional tutorials, the Azure Vote application is updated to a new version.
+In later tutorials, the Azure Vote application is updated to a new version.
## Before you begin
@@ -37,7 +37,7 @@ kubectl get pods
The following example output shows one front-end pod and one back-end pod:
-```
+```output
NAME READY STATUS RESTARTS AGE azure-vote-back-2549686872-4d2r5 1/1 Running 0 31m azure-vote-front-848767080-tf34m 1/1 Running 0 31m
@@ -49,7 +49,7 @@ To manually change the number of pods in the *azure-vote-front* deployment, use
kubectl scale --replicas=5 deployment/azure-vote-front ```
-Run [kubectl get pods][kubectl-get] again to verify that AKS creates the additional pods. After a minute or so, the additional pods are available in your cluster:
+Run [kubectl get pods][kubectl-get] again to verify that AKS successfully creates the additional pods. After a minute or so, the pods are available in your cluster:
```console kubectl get pods
@@ -129,7 +129,7 @@ spec:
Use `kubectl apply` to apply the autoscaler defined in the `azure-vote-hpa.yaml` manifest file.
-```
+```console
kubectl apply -f azure-vote-hpa.yaml ```
@@ -156,7 +156,7 @@ az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3
When the cluster has successfully scaled, the output is similar to following example:
-```
+```output
"agentPoolProfiles": [ { "count": 3,
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-upgrade-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-upgrade-cluster.md
@@ -3,7 +3,7 @@ title: Kubernetes on Azure tutorial - Upgrade a cluster
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. services: container-service ms.topic: tutorial
-ms.date: 09/30/2020
+ms.date: 01/12/2021
ms.custom: mvc, devx-track-azurecli
@@ -35,22 +35,22 @@ Before you upgrade a cluster, use the [az aks get-upgrades][] command to check w
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster ```
-In the following example, the current version is *1.15.11*, and the available versions are shown under *upgrades*.
+In the following example, the current version is *1.18.10*, and the available versions are shown under *upgrades*.
```json { "agentPoolProfiles": null, "controlPlaneProfile": {
- "kubernetesVersion": "1.15.11",
+ "kubernetesVersion": "1.18.10",
... "upgrades": [ { "isPreview": null,
- "kubernetesVersion": "1.16.8"
+ "kubernetesVersion": "1.19.1"
}, { "isPreview": null,
- "kubernetesVersion": "1.16.9"
+ "kubernetesVersion": "1.19.3"
} ] },
@@ -80,7 +80,7 @@ az aks upgrade \
> [!NOTE] > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but cannot upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
-The following condensed example output shows the result of upgrading to *1.16.8*. Notice the *kubernetesVersion* now reports *1.16.8*:
+The following condensed example output shows the result of upgrading to *1.19.1*. Notice the *kubernetesVersion* now reports *1.19.1*:
```json {
@@ -98,7 +98,7 @@ The following condensed example output shows the result of upgrading to *1.16.8*
"enableRbac": false, "fqdn": "myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io", "id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
- "kubernetesVersion": "1.16.8",
+ "kubernetesVersion": "1.19.1",
"location": "eastus", "name": "myAKSCluster", "type": "Microsoft.ContainerService/ManagedClusters"
@@ -113,12 +113,12 @@ Confirm that the upgrade was successful using the [az aks show][] command as fol
az aks show --resource-group myResourceGroup --name myAKSCluster --output table ```
-The following example output shows the AKS cluster runs *KubernetesVersion 1.16.8*:
+The following example output shows the AKS cluster runs *KubernetesVersion 1.19.1*:
-```
+```output
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn ------------ ---------- --------------- ------------------- ------------------- ----------------------------------------------------------------
-myAKSCluster eastus myResourceGroup 1.16.8 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
+myAKSCluster eastus myResourceGroup 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
``` ## Delete the cluster
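The one-minor-version-at-a-time rule from the note in the hunks above can be expressed as a small check. This helper is a hypothetical sketch (it ignores the major version, which is always 1 for current Kubernetes releases):

```shell
# Return success if the target version is the same minor version as the
# current one, or exactly one minor version ahead.
can_upgrade() {
  cur_minor=$(echo "$1" | cut -d. -f2)
  tgt_minor=$(echo "$2" | cut -d. -f2)
  [ "$tgt_minor" -eq "$cur_minor" ] || [ "$tgt_minor" -eq $((cur_minor + 1)) ]
}

can_upgrade 1.18.10 1.19.1 && echo "allowed"
can_upgrade 1.18.10 1.20.0 || echo "not allowed: upgrade one minor version at a time"
```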
aks https://docs.microsoft.com/en-us/azure/aks/windows-container-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-powershell.md
@@ -30,7 +30,11 @@ If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell][install-azure-powershell].
+[Install Azure PowerShell][install-azure-powershell]. You also must install the Az.Aks PowerShell module:
+
+```azurepowershell-interactive
+Install-Module Az.Aks
+```
[!INCLUDE [cloud-shell-try-it](../../includes/cloud-shell-try-it.md)]
@@ -101,7 +105,7 @@ network resources if they don't exist.
```azurepowershell-interactive $Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
-New-AzAKS -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -KubernetesVersion 1.16.7 -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -WindowsProfileAdminUserName akswinuser -WindowsProfileAdminUserPassword $Password
+New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -KubernetesVersion 1.16.7 -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -WindowsProfileAdminUserName akswinuser -WindowsProfileAdminUserPassword $Password
``` > [!Note]
api-management https://docs.microsoft.com/en-us/azure/api-management/plan-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/plan-manage-costs.md
@@ -18,7 +18,7 @@ Costs for API Management are only a portion of the monthly costs in your Azure b
## Prerequisites
-Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](https://docs.microsoft.com/azure/cost-management-billing/costs/understand-cost-mgt-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](https://docs.microsoft.com/azure/cost-management/assign-access-acm-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Estimate costs before using API Management
@@ -49,7 +49,7 @@ You can pay for API Management charges with your EA monetary commitment credit.
## Monitor costs
-As you use Azure resources with API Management, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as API Management use starts, costs are incurred and you can see the costs in [cost analysis](https://docs.microsoft.com/azure/cost-management/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+As you use Azure resources with API Management, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as API Management use starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
When you use cost analysis, you view API Management costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
@@ -73,13 +73,13 @@ In the preceding example, you see the current cost for the service. Costs by Azu
## Create budgets
-You can create [budgets](https://docs.microsoft.com/azure/cost-management/tutorial-acm-create-budgets?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](https://docs.microsoft.com/azure/cost-management/cost-mgt-alerts-monitor-usage-spending?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+You can create [budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you when create a budget, see [Group and filter options](https://docs.microsoft.com/azure/cost-management-billing/costs/group-filter?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Export cost data
-You can also [export your cost data](https://docs.microsoft.com/azure/cost-management-billing/costs/tutorial-export-acm-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need others to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need others to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
## Other ways to manage and reduce costs for API Management
@@ -102,9 +102,9 @@ As you add or remove units, capacity and cost scale proportionally. For example,
## Next steps -- Learn [how to optimize your cloud investment with Azure Cost Management](https://docs.microsoft.com/azure/cost-management-billing/costs/cost-mgt-best-practices?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Learn more about managing costs with [cost analysis](https://docs.microsoft.com/azure/cost-management-billing/costs/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Learn about how to [prevent unexpected costs](https://docs.microsoft.com/azure/cost-management-billing/manage/getting-started?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/manage/getting-started.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Take the [Cost Management](https://docs.microsoft.com/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. - Learn about API Management [capacity](api-management-capacity.md). - See steps to scale and upgrade API Management using the [Azure portal](upgrade-and-scale.md), and learn about [autoscaling](api-management-howto-autoscale.md).\ No newline at end of file
app-service https://docs.microsoft.com/en-us/azure/app-service/faq-configuration-and-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/faq-configuration-and-management.md
@@ -313,3 +313,8 @@ You also can specify the specific dynamic and static MIME types that you want to
## How do I migrate from an on-premises environment to App Service? To migrate sites from Windows and Linux web servers to App Service, you can use Azure App Service Migration Assistant. The migration tool creates web apps and databases in Azure as needed, and then publishes the content. For more information, see [Azure App Service Migration Assistant](https://appmigration.microsoft.com/).+
+## Why is my certificate issued for 11 months and not for a full year?
+
+For all certificates issued after 9/1/2020, the maximum validity period is now 397 days. Certificates issued before 9/1/2020 keep their maximum validity of 825 days until they are renewed, rekeyed, and so on. Any certificate renewed after 9/1/2020 is affected by this change, and users may notice a shorter validity period on their renewed certificates.
+GoDaddy has implemented a subscription service that meets the new requirements while honoring existing customer certificates. Thirty days before the newly issued certificate expires, the service automatically issues a second certificate that extends the duration to the original expiration date. App Service is working with GoDaddy to address this change and make sure that our customers receive the full duration of their certificates.
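The shortened lifetime is easy to observe on any certificate issued after the cutoff. As a quick local illustration (a sketch using `openssl` and GNU `date`; the subject name is a placeholder), a certificate minted with a 397-day lifetime shows exactly that window:

```shell
# Create a throwaway self-signed certificate with a 397-day lifetime,
# standing in for a certificate issued after 9/1/2020.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 397 -nodes -subj "/CN=contoso.example"

# Print the validity window and compute its length in days.
openssl x509 -in cert.pem -noout -startdate -enddate
start=$(date -d "$(openssl x509 -in cert.pem -noout -startdate | cut -d= -f2)" +%s)
end=$(date -d "$(openssl x509 -in cert.pem -noout -enddate | cut -d= -f2)" +%s)
echo "validity: $(( (end - start) / 86400 )) days"   # should print 397
```

The same `openssl x509 -noout -enddate` check works on a certificate downloaded from a live site if you pipe it through `openssl s_client`.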
app-service https://docs.microsoft.com/en-us/azure/app-service/overview-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-manage-costs.md
@@ -21,7 +21,7 @@ ms.date: 01/01/2021
<!-- Note for Azure service writer: Modify the following for your service. -->
-This article describes how you plan for and manage costs for Azure App Service. First, you use the Azure pricing calculator to help plan for App Service costs before you add any resources for the service to estimate costs. Next, as you add Azure resources, review the estimated costs. After you've started using App Service resources, use [Cost Management](https://docs.microsoft.com/azure/cost-management-billing/) features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure App Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for App Service, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
+This article describes how you plan for and manage costs for Azure App Service. First, you use the Azure pricing calculator to help plan for App Service costs before you add any resources for the service to estimate costs. Next, as you add Azure resources, review the estimated costs. After you've started using App Service resources, use [Cost Management](https://docs.microsoft.com/azure/cost-management-billing/?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure App Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for App Service, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
## Relevant costs for App Service
@@ -80,7 +80,7 @@ To create an app and view the estimated price:
![Review estimated cost for each pricing tier in the portal](media/overview-manage-costs/pricing-estimates.png)
-If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../billing/billing-spending-limit.md).
+If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Optimize costs
@@ -125,7 +125,7 @@ The **Isolated** tier (App Service environment) also supports 1-year and 3-year
## Monitor costs
-As you use Azure resources with App Service, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days). As soon as App Service use starts, costs are incurred and you can see the costs in [cost analysis](https://docs.microsoft.com/azure/cost-management/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+As you use Azure resources with App Service, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days). As soon as App Service use starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
When you use cost analysis, you view App Service costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
@@ -151,20 +151,20 @@ In the preceding example, you see the current cost for the service. Costs by Azu
<!-- Note to Azure service writer: Modify the following as needed for your service. -->
-You can create [budgets](https://docs.microsoft.com/azure/cost-management/tutorial-acm-create-budgets?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](https://docs.microsoft.com/azure/cost-management/cost-mgt-alerts-monitor-usage-spending?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+You can create [budgets](../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you extra money. For more information about the filter options available when you create a budget, see [Group and filter options](https://docs.microsoft.com/azure/cost-management-billing/costs/group-filter?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you extra money. For more information about the filter options available when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Export cost data
-You can also [export your cost data](https://docs.microsoft.com/azure/cost-management-billing/costs/tutorial-export-acm-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do more data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do more data analysis of costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
## Next steps - Learn more on how pricing works with Azure Storage. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).-- Learn [how to optimize your cloud investment with Azure Cost Management](https://docs.microsoft.com/azure/cost-management-billing/costs/cost-mgt-best-practices?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Learn more about managing costs with [cost analysis](https://docs.microsoft.com/azure/cost-management-billing/costs/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Learn about how to [prevent unexpected costs](https://docs.microsoft.com/azure/cost-management-billing/manage/getting-started?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/manage/getting-started.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Take the [Cost Management](https://docs.microsoft.com/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. <!-- Insert links to other articles that might help users save and manage costs for you service here.
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/update-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/update-management.md
@@ -2,7 +2,7 @@
title: Troubleshoot Azure Automation Update Management issues description: This article tells how to troubleshoot and resolve issues with Azure Automation Update Management. services: automation
-ms.date: 12/04/2020
+ms.date: 01/13/2021
ms.topic: conceptual ms.service: automation ---
@@ -139,13 +139,11 @@ This issue can be caused by local configuration issues or by improperly configur
| summarize by Computer, Solutions ```
-4. If you don't see your machine in the query results, it hasn't recently checked in. There's probably a local configuration issue and you should [reinstall the agent](../../azure-monitor/learn/quick-collect-windows-computer.md#install-the-agent-for-windows).
+ If you don't see your machine in the query results, it hasn't recently checked in. There's probably a local configuration issue and you should [reinstall the agent](../../azure-monitor/learn/quick-collect-windows-computer.md#install-the-agent-for-windows).
-5. If your machine shows up in the query results, check for scope configuration problems. The [scope configuration](../update-management/scope-configuration.md) determines which machines are configured for Update Management.
+ If your machine is listed in the query results, verify that **updates** appears under the **Solutions** property. This confirms that it's registered with Update Management. If it isn't, check for scope configuration problems. The [scope configuration](../update-management/scope-configuration.md) determines which machines are configured for Update Management. To configure the scope configuration to target the machine, see [Enable machines in the workspace](../update-management/enable-from-automation-account.md#enable-machines-in-the-workspace).
-6. If your machine is showing up in your workspace but not in Update Management, you must configure the scope configuration to target the machine. To learn how to do this, see [Enable machines in the workspace](../update-management/enable-from-automation-account.md#enable-machines-in-the-workspace).
-
-7. In your workspace, run this query.
+4. In your workspace, run this query.
```kusto Operation
@@ -153,9 +151,9 @@ This issue can be caused by local configuration issues or by improperly configur
| sort by TimeGenerated desc ```
-8. If you get a `Data collection stopped due to daily limit of free data reached. Ingestion status = OverQuota` result, the quota defined on your workspace has been reached, which has stopped data from being saved. In your workspace, go to **data volume management** under **Usage and estimated costs**, and change or remove the quota.
+ If you get a `Data collection stopped due to daily limit of free data reached. Ingestion status = OverQuota` result, the quota defined on your workspace has been reached, which has stopped data from being saved. In your workspace, go to **data volume management** under **Usage and estimated costs**, and change or remove the quota.
-9. If your issue is still unresolved, follow the steps in [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md) to reinstall the Hybrid Worker for Windows. For Linux, follow the steps in [Deploy a Linux Hybrid Runbook Worker](../automation-linux-hrw-install.md).
+5. If your issue is still unresolved, follow the steps in [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md) to reinstall the Hybrid Worker for Windows. For Linux, follow the steps in [Deploy a Linux Hybrid Runbook Worker](../automation-linux-hrw-install.md).
## <a name="rp-register"></a>Scenario: Unable to register Automation resource provider for subscriptions
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-instance-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-instance-management.md
@@ -296,6 +296,10 @@ public static async Task Run(
{ log.LogInformation(JsonConvert.SerializeObject(instance)); }
+
+ // Note: ListInstancesAsync only returns the first page of results.
+ // To request additional pages, provide the result.ContinuationToken
+ // to the OrchestrationStatusQueryCondition's ContinuationToken property.
} ```
@@ -1030,4 +1034,4 @@ func durable delete-task-hub --task-hub-name UserTest
> [Learn how to handle versioning](durable-functions-versioning.md) > [!div class="nextstepaction"]
-> [Built-in HTTP API reference for instance management](durable-functions-http-api.md)
\ No newline at end of file
+> [Built-in HTTP API reference for instance management](durable-functions-http-api.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-grid-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-grid-trigger.md
@@ -600,7 +600,7 @@ The subscription validation request will be received first; ignore any validatio
### Manually post the request
-Run your Event Grid function locally.
+Run your Event Grid function locally. The `Content-Type` and `aeg-event-type` headers must be set manually, while all other values can be left at their defaults.
Use a tool such as [Postman](https://www.getpostman.com/) or [curl](https://curl.haxx.se/docs/httpscripting.html) to create an HTTP POST request:
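For instance, a minimal `curl` sketch against the local Functions host might look like the following. The function name `EventGridTrigger1`, the port `7071`, and the event payload are assumptions — substitute your own values for your app:

```shell
# Sample Event Grid event envelope (all field values are illustrative).
cat > sample-event.json <<'EOF'
[{
  "id": "1",
  "eventType": "Microsoft.Storage.BlobCreated",
  "subject": "/blobServices/default/containers/test/blobs/sample.txt",
  "eventTime": "2021-01-15T00:00:00Z",
  "data": {},
  "dataVersion": "1",
  "topic": "/example/topic"
}]
EOF

# Both headers are required; aeg-event-type tells the runtime this is a
# notification rather than a subscription validation request.
curl -sS -X POST "http://localhost:7071/runtime/webhooks/eventgrid?functionName=EventGridTrigger1" \
  -H "Content-Type: application/json" \
  -H "aeg-event-type: Notification" \
  -d @sample-event.json \
  || echo "request failed (is the Functions host running locally?)"
```

If the request is accepted, the function's log output in the `func` host window shows the event being processed.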
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compliance/azure-services-in-fedramp-auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
@@ -23,12 +23,12 @@ This article provides a detailed list of in-scope cloud services across Azure Pu
* 3PAO = Third Party Assessment Organization * JAB = Joint Authorization Board * :heavy_check_mark: = indicates the service has achieved this audit scope.
-* Planned 2020 = indicates the service will be reviewed by 3PAO and JAB in 2020. Once the service is authorized, status will be updated
+* Planned 2021 = indicates the service will be reviewed by 3PAO and JAB in 2021. Once the service is authorized, the status will be updated.
## Azure public services by audit scope | _Last Updated: November 2020_ |
-| Azure Service| DoD CC SRG IL 2 | FedRAMP Moderate | FedRAMP High | Planned 2020 |
+| Azure Service| DoD CC SRG IL 2 | FedRAMP Moderate | FedRAMP High | Planned 2021 |
| ------------ |:---------------:|:----------------:|:------------:|:------------:| | [API Management](https://azure.microsoft.com/services/api-management/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
@@ -225,7 +225,7 @@ This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | :heavy_check_mark: | | | | :heavy_check_mark: |
-| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: |
| [Azure Intune](/intune/what-is-intune) | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | | [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/api-custom-events-metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-custom-events-metrics.md
@@ -141,7 +141,9 @@ telemetry.trackEvent({name: "WinGame"});
### Custom events in Analytics
-The telemetry is available in the `customEvents` table in [Application Insights Analytics](../log-query/log-query-overview.md). Each row represents a call to `trackEvent(..)` in your app.
+The telemetry is available in the `customEvents` table on the [Application Insights Logs tab](../log-query/log-query-overview.md) or in the [Usage experience](usage-overview.md). Events may come from `trackEvent(..)` or from the [Click Analytics Auto-collection plugin](javascript-click-analytics-plugin.md).
+
+
If [sampling](./sampling.md) is in operation, the itemCount property shows a value greater than 1. For example itemCount==10 means that of 10 calls to trackEvent(), the sampling process only transmitted one of them. To get a correct count of custom events, you should therefore use code such as `customEvents | summarize sum(itemCount)`.
@@ -1119,4 +1121,3 @@ To determine how long data is kept, see [Data retention and privacy](./data-rete
* [Search events and logs](./diagnostic-search.md) * [Troubleshooting](../faq.md)-
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-core.md
@@ -65,7 +65,7 @@ For Visual Studio for Mac use the [manual guidance](#enable-application-insights
```xml <ItemGroup>
- <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.13.1" />
+ <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.16.0" />
</ItemGroup> ```
@@ -228,7 +228,7 @@ See the [configurable settings in `ApplicationInsightsServiceOptions`](https://g
### Configuration Recommendation for Microsoft.ApplicationInsights.AspNetCore SDK 2.15.0 & above
-Starting from Microsoft.ApplicationInsights.AspNetCore SDK version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) the recommendation is to configure every setting available in `ApplicationInsightsServiceOptions`, including instrumentationkey using applications `IConfiguration` instance. The settings must be under the section "ApplicationInsights", as shown in the below example. The following section from appsettings.json configures instrumentation key, and also disable adaptive sampling and performance counter collection.
+Starting from Microsoft.ApplicationInsights.AspNetCore SDK version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0), the recommendation is to configure every setting available in `ApplicationInsightsServiceOptions`, including the instrumentation key, using the application's `IConfiguration` instance. The settings must be under the section "ApplicationInsights", as shown in the following example. The following section from appsettings.json configures the instrumentation key and also disables adaptive sampling and performance counter collection.
```json {
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/ip-addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-addresses.md
@@ -23,7 +23,7 @@ You need to open some outgoing ports in your server's firewall to allow the Appl
| Purpose | URL | IP | Ports | | --- | --- | --- | --- |
-| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231<br/>13.69.65.22<br/>13.78.108.165<br/>13.70.72.233<br/>20.44.8.7<br/>13.86.218.248<br/>40.79.138.41<br/>52.231.18.241<br/>13.75.38.7<br/>102.133.155.50<br/>52.162.110.67<br/>191.233.204.248<br/>13.69.66.140<br/>13.77.52.29<br/>51.107.59.180<br/>40.71.12.235<br/>20.44.8.10<br/>40.71.13.169<br/>13.66.141.156<br/>40.71.13.170<br/>13.69.65.23<br/>20.44.17.0<br/>20.36.114.207 <br/>51.116.155.246 <br/>51.107.155.178 <br/>51.140.212.64 <br/>13.86.218.255 <br/>20.37.74.240 <br/>65.52.250.236 <br/>13.69.229.240 <br/>52.236.186.210<br/>52.167.107.65<br/>40.71.12.237<br/>40.78.229.32<br/>40.78.229.33 | 443 |
+| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231<br/>13.69.65.22<br/>13.78.108.165<br/>13.70.72.233<br/>20.44.8.7<br/>13.86.218.248<br/>40.79.138.41<br/>52.231.18.241<br/>13.75.38.7<br/>102.133.155.50<br/>52.162.110.67<br/>191.233.204.248<br/>13.69.66.140<br/>13.77.52.29<br/>51.107.59.180<br/>40.71.12.235<br/>20.44.8.10<br/>40.71.13.169<br/>13.66.141.156<br/>40.71.13.170<br/>13.69.65.23<br/>20.44.17.0<br/>20.36.114.207 <br/>51.116.155.246 <br/>51.107.155.178 <br/>51.140.212.64 <br/>13.86.218.255 <br/>20.37.74.240 <br/>65.52.250.236 <br/>13.69.229.240 <br/>52.236.186.210<br/>52.167.107.65<br/>40.71.12.237<br/>40.78.229.32<br/>40.78.229.33<br/>51.105.67.161<br/>40.124.64.192 | 443 |
| Live Metrics Stream | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com|23.96.28.38<br/>13.92.40.198<br/>40.112.49.101<br/>40.117.80.207<br/>157.55.177.6<br/>104.44.140.84<br/>104.215.81.124<br/>23.100.122.113| 443 | ## Status Monitor
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-in-process-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
@@ -30,11 +30,11 @@ The 3.0 agent supports Java 8 and above.
> Please review all the [configuration options](./java-standalone-config.md) carefully, > as the json structure has completely changed, in addition to the file name itself which went all lowercase.
-Download [applicationinsights-agent-3.0.0.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.0.0/applicationinsights-agent-3.0.0.jar)
+Download [applicationinsights-agent-3.0.1.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.0.1/applicationinsights-agent-3.0.1.jar)
**2. Point the JVM to the agent**
-Add `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` to your application's JVM args
+Add `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` to your application's JVM args
Typical JVM args include `-Xmx512m` and `-XX:+UseG1GC`. So if you know where to add these, then you already know where to add this.
@@ -50,7 +50,7 @@ Point the agent to your Application Insights resource, either by setting an envi
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=... ```
-Or by creating a configuration file named `applicationinsights.json`, and placing it in the same directory as `applicationinsights-agent-3.0.0.jar`, with the following content:
+Or by creating a configuration file named `applicationinsights.json`, and placing it in the same directory as `applicationinsights-agent-3.0.1.jar`, with the following content:
```json {
@@ -258,7 +258,7 @@ try {
### Add request custom dimensions using the 2.x SDK > [!NOTE]
-> This feature is only in 3.0.1-BETA and later
+> This feature is only in 3.0.1 and later
Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
@@ -282,7 +282,7 @@ requestTelemetry.getProperties().put("mydimension", "myvalue");
### Set the request telemetry user_Id using the 2.x SDK > [!NOTE]
-> This feature is only in 3.0.1-BETA and later
+> This feature is only in 3.0.1 and later
Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
@@ -306,7 +306,7 @@ requestTelemetry.getContext().getUser().setId("myuser");
### Override the request telemetry name using the 2.x SDK > [!NOTE]
-> This feature is only in 3.0.1-BETA and later
+> This feature is only in 3.0.1 and later
Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-arguments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-arguments.md
@@ -18,24 +18,24 @@ Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.0.0.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.0.1.jar -jar <myapp.jar>
``` ## Spring Boot via Docker entry point
-If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.0.0.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.0.1.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.0.0.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.0.1.jar", "-jar", "<myapp.jar>"]
```
-If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` somewhere before `-jar`, for example:
+If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.0.0.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.0.1.jar -jar <myapp.jar>
``` ## Tomcat 8 (Linux)
@@ -45,7 +45,7 @@ ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.0.0.jar -jar <mya
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.0.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.1.jar"
``` ### Tomcat installed via download and unzip
@@ -53,10 +53,10 @@ JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.0.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.0.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.0.1.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` to `CATALINA_OPTS`.
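For example, a `setenv.sh` that already sets other JVM options might end up looking like this (the heap settings shown are illustrative placeholders, not recommendations):

```shell
CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m -javaagent:path/to/applicationinsights-agent-3.0.1.jar"
```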
## Tomcat 8 (Windows)
@@ -66,36 +66,36 @@ If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content:

```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.0.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.1.jar
```

Quotes are not necessary, but if you want to include them, the proper placement is:

```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.0.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.0.1.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7

### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java
...
- JAVA_OPTS="<b>-javaagent:path/to/applicationinsights-agent-3.0.0.jar</b> -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.0.1.jar -Xms1303m -Xmx1303m ..."
...
```

### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml
...
@@ -105,7 +105,7 @@ Add `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` to the existing `jv
<jvm-options>
   <option value="-server"/>
   <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.0.0.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.0.1.jar"/>
   <option value="-XX:MetaspaceSize=96m"/>
   <option value="-XX:MaxMetaspaceSize=256m"/>
</jvm-options>
@@ -145,20 +145,20 @@ Add these lines to `start.ini`
```
--exec
--javaagent:path/to/applicationinsights-agent-3.0.0.jar
+-javaagent:path/to/applicationinsights-agent-3.0.1.jar
```

## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.0.0.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.0.1.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml
...
<java-config ...>
  <!--Edit the JVM options here-->
  <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.0.0.jar>
+    -javaagent:path/to/applicationinsights-agent-3.0.1.jar
  </jvm-options>
  ...
</java-config>
@@ -175,7 +175,7 @@ Java and Process Management > Process definition > Java Virtual Machine
```

In "Generic JVM arguments" add the following:

```
--javaagent:path/to/applicationinsights-agent-3.0.0.jar
+-javaagent:path/to/applicationinsights-agent-3.0.1.jar
```

After that, save and restart the application server.
@@ -184,5 +184,5 @@ After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line:

```
--javaagent:path/to/applicationinsights-agent-3.0.0.jar
+-javaagent:path/to/applicationinsights-agent-3.0.1.jar
```
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
@@ -36,14 +36,14 @@ You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.0 expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.0.0.jar`.
+By default, Application Insights Java 3.0 expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.0.1.jar`.
You can specify your own configuration file path using either:

* `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or
* `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.0.0.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.0.1.jar` is located.
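As a sketch, either of the following points the agent at a custom configuration file; the `/etc/myapp/applicationinsights.json` path and `myapp.jar` are placeholders for this example:

```
# Option 1: environment variable (path is a placeholder)
export APPLICATIONINSIGHTS_CONFIGURATION_FILE=/etc/myapp/applicationinsights.json

# Option 2: Java system property
java -javaagent:path/to/applicationinsights-agent-3.0.1.jar \
     -Dapplicationinsights.configuration.file=/etc/myapp/applicationinsights.json \
     -jar myapp.jar
```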
## Connection string
@@ -168,7 +168,7 @@ If you want to add custom dimensions to all of your telemetry:
`${...}` can be used to read the value from the specified environment variable at startup.

> [!NOTE]
-> Starting from version 3.0.1-BETA, if you add a custom dimension named `service.version`, the value will be stored
+> Starting from version 3.0.1, if you add a custom dimension named `service.version`, the value will be stored
> in the `application_Version` column in the Application Insights Logs table instead of as a custom dimension.

## Telemetry processors (preview)
@@ -245,7 +245,7 @@ To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## Suppressing specific auto-collected telemetry
-Starting from version 3.0.1-BETA.2, specific auto-collected telemetry can be suppressed using these configuration options:
+Starting from version 3.0.1, specific auto-collected telemetry can be suppressed using these configuration options:
```json
{
@@ -344,7 +344,7 @@ and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`.

`path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.0.0.jar` is located.
+`applicationinsights-agent-3.0.1.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
@@ -11,7 +11,7 @@ In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file
-By default, the Java 3.0 agent for Application Insights produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.0.0.jar` file.
+By default, the Java 3.0 agent for Application Insights produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.0.1.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/javascript-click-analytics-plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-click-analytics-plugin.md new file mode 100644
@@ -0,0 +1,313 @@
+---
+title: Click Analytics Auto-collection plugin for Application Insights JavaScript SDK
+description: How to install and use Click Analytics Auto-collection plugin for Application Insights JavaScript SDK.
+services: azure-monitor
+author: lgayhardt
+
+ms.workload: tbd
+ms.tgt_pltfrm: ibiza
+ms.topic: conceptual
+ms.date: 01/14/2021
+ms.author: lagayhar
+---
+
+# Click Analytics Auto-collection plugin for Application Insights JavaScript SDK
+
+The Click Analytics Auto-collection plugin for the Application Insights JavaScript SDK enables automatic tracking of click events on web pages based on `data-*` meta tags. The plugin uses these `data-*` global attributes to capture click events and populate telemetry data.
+
+## Getting started
+
+Users can set up the Click Analytics Auto-collection plugin via npm.
+
+### npm setup
+
+Install npm package:
+
+```bash
+npm install --save @microsoft/applicationinsights-clickanalytics-js @microsoft/applicationinsights-web
+```
+
+```js
+
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
+
+const clickPluginInstance = new ClickAnalyticsPlugin();
+// Click Analytics configuration
+const clickPluginConfig = {
+ autoCapture: true
+};
+// Application Insights Configuration
+const configObj = {
+ instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ extensions: [clickPluginInstance],
+ extensionConfig: {
+ [clickPluginInstance.identifier]: clickPluginConfig
+ },
+};
+
+const appInsights = new ApplicationInsights({ config: configObj });
+appInsights.loadAppInsights();
+```
+
+## How to effectively use the plugin
+
+1. Telemetry data generated from the click events is stored as `customEvents` in the Application Insights section of the Azure portal.
+2. The `name` of the customEvent is populated based on the following rules:
+    1. The `id` provided in the `data-*-id` attribute will be used as the customEvent name. For example, if the clicked HTML element has the attribute `data-sample-id="button1"`, then "button1" will be the customEvent name.
+ 2. If no such attribute exists and if the `useDefaultContentNameOrId` is set to `true` in the configuration, then the clicked element's HTML attribute `id` or content name of the element will be used as the customEvent name.
+ 3. If `useDefaultContentNameOrId` is false, then the customEvent name will be "not_specified".
+
+ > [!TIP]
+   > Our recommendation is to set `useDefaultContentNameOrId` to `true` to generate meaningful data.
+3. `parentDataTag` does two things:
+ 1. If this tag is present, the plugin will fetch the `data-*` attributes and values from all the parent HTML elements of the clicked element.
+    2. To improve efficiency, the plugin uses this tag as a flag; when it is encountered, the plugin stops processing the DOM (Document Object Model) any further upwards.
+
+ > [!CAUTION]
+ > Once `parentDataTag` is used, it has a persistent effect across your whole application and not just the HTML element you used it in.
+4. `customDataPrefix` provided by the user should always start with `data-`, for example `data-sample-`. In HTML, the `data-*` global attributes form a class of attributes called custom data attributes, which allow proprietary information to be exchanged between the HTML and its DOM representation by scripts. Older browsers (Internet Explorer, Safari) will drop attributes that they don't understand, unless the attributes start with `data-`.
+
+ The `*` in `data-*` may be replaced by any name following the [production rule of XML names](https://www.w3.org/TR/REC-xml/#NT-Name) with the following restrictions:
+ - The name must not start with "xml", whatever case is used for these letters.
+   - The name must not contain any colon (U+003A).
+ - The name must not contain capital letters.
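The name-resolution rules above can be sketched in plain JavaScript. This is an illustrative model only, not the plugin's actual implementation; the plain object standing in for a clicked element, the `data-sample-id` attribute, and the `resolveEventName` helper are all hypothetical:

```javascript
// Illustrative sketch of the customEvent name-resolution rules (not plugin code).
function resolveEventName(element, useDefaultContentNameOrId) {
  // Rule 1: an id supplied via a data-*-id attribute (here: data-sample-id) wins.
  if (element.dataSampleId) {
    return element.dataSampleId;
  }
  // Rule 2: when allowed, fall back to the element's HTML id or content name.
  if (useDefaultContentNameOrId) {
    return element.id || element.contentName || "not_specified";
  }
  // Rule 3: otherwise the name is "not_specified".
  return "not_specified";
}

console.log(resolveEventName({ dataSampleId: "button1" }, false)); // "button1"
console.log(resolveEventName({ id: "cta" }, true));                // "cta"
console.log(resolveEventName({ id: "cta" }, false));               // "not_specified"
```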
+
+## Configuration
+
+| Name | Type | Default | Description |
+| --------------------- | -----------------------------------| --------| ---------------------------------------------------------------------------------------------------------------------------------------- |
+| autoCapture | boolean | true | Automatic capture configuration. |
+| callback | [IValueCallback](#ivaluecallback) | null | Callbacks configuration. |
+| pageTags | string | null | Page tags. |
+| dataTags | [ICustomDataTags](#icustomdatatags)| null | Custom Data Tags provided to override default tags used to capture click data. |
+| urlCollectHash | boolean | false | Enables the logging of values after a "#" character of the URL. |
+| urlCollectQuery | boolean | false | Enables the logging of the query string of the URL. |
+| behaviorValidator | Function | null | Callback function to use for the `data-*-bhvr` value validation. For more information, go to [behaviorValidator section](#behaviorvalidator).|
+| defaultRightClickBhvr | string (or) number | '' | Default Behavior value when Right Click event has occurred. This value will be overridden if the element has the `data-*-bhvr` attribute. |
+| dropInvalidEvents | boolean | false | Flag to drop events that do not have useful click data. |
+
+### IValueCallback
+
+| Name | Type | Default | Description |
+| ------------------ | -------- | ------- | --------------------------------------------------------------------------------------- |
+| pageName | Function | null | Function to override the default pageName capturing behavior. |
+| pageActionPageTags | Function | null | A callback function to augment the default pageTags collected during pageAction event. |
+| contentName | Function | null | A callback function to populate customized contentName. |
+
+### ICustomDataTags
+
+| Name | Type | Default | Description |
+|---------------------------|---------|-----------|---------------------------------------------------------------------------------------------------|
+| useDefaultContentNameOrId | boolean | false     | When an element is not tagged with the provided customDataPrefix, or when no customDataPrefix is provided by the user, this flag collects a standard HTML attribute as the contentName. |
+| customDataPrefix          | string  | `data-`   | Automatically capture the content name and value of elements tagged with the provided prefix. |
+| aiBlobAttributeTag        | string  | `ai-blob` | The plugin supports a JSON blob of content metadata in this attribute instead of individual `data-*` attributes. |
+| metaDataPrefix            | string  | null      | Automatically capture the HTML head's meta element names and content with the provided prefix. |
+| captureAllMetaDataContent | boolean | false     | Automatically capture all of the HTML head's meta element names and content. If enabled, this overrides the provided metaDataPrefix. |
+| parentDataTag             | string  | null      | Stops traversing up the DOM to capture content names and values of elements when this tag is encountered. |
+| dntDataTag                | string  | `ai-dnt`  | HTML elements with this attribute are ignored by the plugin; no telemetry data is captured for them. |
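As a sketch of how these options might be combined, the following `dataTags` configuration is illustrative only; the `data-sample-` prefix and the `group` parent tag name are assumptions for this example, not defaults:

```javascript
// Illustrative Click Analytics plugin configuration (values are examples, not defaults).
const clickPluginConfig = {
  autoCapture: true,
  dataTags: {
    useDefaultContentNameOrId: true,   // fall back to the element's HTML id/content name
    customDataPrefix: "data-sample-",  // must start with "data-"
    parentDataTag: "group",            // stop DOM traversal when this tag is encountered
    dntDataTag: "ai-dnt"               // elements with this attribute are ignored
  }
};

console.log(clickPluginConfig.dataTags.customDataPrefix); // "data-sample-"
```

This object would be passed as the plugin's entry in `extensionConfig`, as shown in the npm setup example earlier.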
+
+### behaviorValidator
+
+Use the behaviorValidator function when you want to ensure, through automatic checks, that tagged behaviors in code conform to a pre-defined taxonomy of known and accepted behaviors within your enterprise. Most Azure Monitor customers are not expected to need this, but it's available for advanced scenarios. Three behaviorValidator callback functions are exposed as part of this extension; you can also supply your own callback function if the exposed ones don't meet your requirements. The intent is that you bring your own behaviors data structure, and the plugin uses the validator function while extracting behaviors from the data tags.
+
+| Name | Description |
+| ---------------------- | -----------------------------------------------------------------------------------|
+| BehaviorValueValidator | Use this callback function if your behaviors data structure is an array of strings.|
+| BehaviorMapValidator | Use this callback function if your behaviors data structure is a dictionary. |
+| BehaviorEnumValidator | Use this callback function if your behaviors data structure is an Enum. |
+
+#### Sample usage with behaviorValidator
+
+```js
+var clickPlugin = Microsoft.ApplicationInsights.ClickAnalyticsPlugin;
+var clickPluginInstance = new clickPlugin();
+
+// Behavior enum values
+var behaviorMap = {
+ UNDEFINED: 0, // default, Undefined
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Page Experience [1-19]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ NAVIGATIONBACK: 1, // Advancing to the previous index position within a webpage
+ NAVIGATION: 2, // Advancing to a specific index position within a webpage
+ NAVIGATIONFORWARD: 3, // Advancing to the next index position within a webpage
+ APPLY: 4, // Applying filter(s) or making selections
+ REMOVE: 5, // Applying filter(s) or removing selections
+ SORT: 6, // Sorting content
+ EXPAND: 7, // Expanding content or content container
+    REDUCE: 8,                // Reducing content or content container
+ CONTEXTMENU: 9, // Context Menu
+ TAB: 10, // Tab control
+ COPY: 11, // Copy the contents of a page
+ EXPERIMENTATION: 12, // Used to identify a third party experimentation event
+ PRINT: 13, // User printed page
+ SHOW: 14, // Displaying an overlay
+ HIDE: 15, // Hiding an overlay
+ MAXIMIZE: 16, // Maximizing an overlay
+ MINIMIZE: 17, // Minimizing an overlay
+ BACKBUTTON: 18, // Clicking the back button
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Scenario Process [20-39]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ STARTPROCESS: 20, // Initiate a web process unique to adopter
+ PROCESSCHECKPOINT: 21, // Represents a checkpoint in a web process unique to adopter
+ COMPLETEPROCESS: 22, // Page Actions that complete a web process unique to adopter
+ SCENARIOCANCEL: 23, // Actions resulting from cancelling a process/scenario
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Download [40-59]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ DOWNLOADCOMMIT: 40, // Initiating an unmeasurable off-network download
+ DOWNLOAD: 41, // Initiating a download
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Search [60-79]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ SEARCHAUTOCOMPLETE: 60, // Auto-completing a search query during user input
+ SEARCH: 61, // Submitting a search query
+ SEARCHINITIATE: 62, // Initiating a search query
+ TEXTBOXINPUT: 63, // Typing or entering text in the text box
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Commerce [80-99]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ VIEWCART: 82, // Viewing the cart
+ ADDWISHLIST: 83, // Adding a physical or digital good or services to a wishlist
+ FINDSTORE: 84, // Finding a physical store
+ CHECKOUT: 85, // Before you fill in credit card info
+ REMOVEFROMCART: 86, // Remove an item from the cart
+ PURCHASECOMPLETE: 87, // Used to track the pageView event that happens when the CongratsPage or Thank You page loads after a successful purchase
+ VIEWCHECKOUTPAGE: 88, // View the checkout page
+ VIEWCARTPAGE: 89, // View the cart page
+ VIEWPDP: 90, // View a PDP
+ UPDATEITEMQUANTITY: 91, // Update an item's quantity
+ INTENTTOBUY: 92, // User has the intent to buy an item
+ PUSHTOINSTALL: 93, // User has selected the push to install option
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Authentication [100-119]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ SIGNIN: 100, // User sign-in
+ SIGNOUT: 101, // User sign-out
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Social [120-139]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ SOCIALSHARE: 120, // "Sharing" content for a specific social channel
+ SOCIALLIKE: 121, // "Liking" content for a specific social channel
+ SOCIALREPLY: 122, // "Replying" content for a specific social channel
+ CALL: 123, // Click on a "call" link
+ EMAIL: 124, // Click on an "email" link
+ COMMUNITY: 125, // Click on a "community" link
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Feedback [140-159]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ VOTE: 140, // Rating content or voting for content
+    SURVEYCHECKPOINT: 145,      // Reaching the survey page/form
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Registration, Contact [160-179]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ REGISTRATIONINITIATE: 161, // Initiating a registration process
+ REGISTRATIONCOMPLETE: 162, // Completing a registration process
+ CANCELSUBSCRIPTION: 163, // Canceling a subscription
+ RENEWSUBSCRIPTION: 164, // Renewing a subscription
+ CHANGESUBSCRIPTION: 165, // Changing a subscription
+ REGISTRATIONCHECKPOINT: 166, // Reaching the registration page/form
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Chat [180-199]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ CHATINITIATE: 180, // Initiating a chat experience
+ CHATEND: 181, // Ending a chat experience
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Trial [200-209]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ TRIALSIGNUP: 200, // Signing-up for a trial
+ TRIALINITIATE: 201, // Initiating a trial
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Signup [210-219]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ SIGNUP: 210, // Signing-up for a notification or service
+ FREESIGNUP: 211, // Signing-up for a free service
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+    // Referrals [220-229]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ PARTNERREFERRAL: 220, // Navigating to a partner's web property
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Intents [230-239]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ LEARNLOWFUNNEL: 230, // Engaging in learning behavior on a commerce page (ex. "Learn more click")
+ LEARNHIGHFUNNEL: 231, // Engaging in learning behavior on a non-commerce page (ex. "Learn more click")
+ SHOPPINGINTENT: 232, // Shopping behavior prior to landing on a commerce page
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Video [240-259]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ VIDEOSTART: 240, // Initiating a video
+ VIDEOPAUSE: 241, // Pausing a video
+ VIDEOCONTINUE: 242, // Pausing or resuming a video.
+ VIDEOCHECKPOINT: 243, // Capturing predetermined video percentage complete.
+ VIDEOJUMP: 244, // Jumping to a new video location.
+ VIDEOCOMPLETE: 245, // Completing a video (or % proxy)
+ VIDEOBUFFERING: 246, // Capturing a video buffer event
+ VIDEOERROR: 247, // Capturing a video error
+ VIDEOMUTE: 248, // Muting a video
+ VIDEOUNMUTE: 249, // Unmuting a video
+ VIDEOFULLSCREEN: 250, // Making a video full screen
+ VIDEOUNFULLSCREEN: 251, // Making a video return from full screen to original size
+ VIDEOREPLAY: 252, // Making a video replay
+ VIDEOPLAYERLOAD: 253, // Loading the video player
+ VIDEOPLAYERCLICK: 254, // Click on a button within the interactive player
+ VIDEOVOLUMECONTROL: 255, // Click on video volume control
+ VIDEOAUDIOTRACKCONTROL: 256, // Click on audio control within a video
+ VIDEOCLOSEDCAPTIONCONTROL: 257, // Click on the closed caption control
+ VIDEOCLOSEDCAPTIONSTYLE: 258, // Click to change closed caption style
+ VIDEORESOLUTIONCONTROL: 259, // Click to change resolution
+
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ // Advertisement Engagement [280-299]
+ ///////////////////////////////////////////////////////////////////////////////////////////////////
+ ADBUFFERING: 283, // Ad is buffering
+ ADERROR: 284, // Ad error
+ ADSTART: 285, // Ad start
+ ADCOMPLETE: 286, // Ad complete
+ ADSKIP: 287, // Ad skipped
+ ADTIMEOUT: 288, // Ad timed-out
+ OTHER: 300 // Other
+};
+
+// Application Insights Configuration
+var configObj = {
+ instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ extensions: [clickPluginInstance],
+ extensionConfig: {
+ [clickPluginInstance.identifier]: {
+ behaviorValidator: Microsoft.ApplicationInsights.BehaviorMapValidator(behaviorMap),
+ defaultRightClickBhvr: 9
+ },
+ },
+};
+var appInsights = new Microsoft.ApplicationInsights.ApplicationInsights({
+ config: configObj
+});
+appInsights.loadAppInsights();
+```
+
+## Sample app
+
+[Simple web app with Click Analytics Auto-collection Plugin enabled](https://go.microsoft.com/fwlink/?linkid=2152871).
+
+## Next steps
+
+- Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
+- Find click data under the content field within the customDimensions attribute in the customEvents table in [Log Analytics](../log-query/log-analytics-tutorial.md#write-a-query).
+- Build a [Workbook](../platform/workbooks-overview.md) to create custom visualizations of click data.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript.md
@@ -261,7 +261,8 @@ Currently, we offer a separate [React plugin](javascript-react-plugin.md), which
|---------------| | [React](javascript-react-plugin.md)| | [React Native](javascript-react-native-plugin.md)|
-| [Angular](javascript-angular-plugin.md) |
+| [Angular](javascript-angular-plugin.md)|
+| [Click Analytics Auto-collection](javascript-click-analytics-plugin.md)|
## Explore browser/client-side data
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/usage-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-overview.md
@@ -75,7 +75,9 @@ The retention controls on top allow you to define specific events and time range
## Custom business events
-To get a clear understanding of what users do with your app, it's useful to insert lines of code to log custom events. These events can track anything from detailed user actions such as clicking specific buttons, to more significant business events such as making a purchase or winning a game.
+To get a clear understanding of what users do with your app, it's useful to insert lines of code to log custom events. These events can track anything from detailed user actions such as clicking specific buttons, to more significant business events such as making a purchase or winning a game.
+
+You can also use the [Click Analytics Auto-collection Plugin](javascript-click-analytics-plugin.md) to collect custom events.
Although in some cases, page views can represent useful events, it isn't true in general. A user can open a product page without buying the product.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/azure-monitor-operations-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/azure-monitor-operations-manager.md new file mode 100644
@@ -0,0 +1,150 @@
+---
+title: Azure Monitor for existing Operations Manager customers
+description: Guidance for existing users of Operations Manager to transition monitoring of certain workloads to Azure Monitor as part of a transition to the cloud.
+ms.subservice:
+ms.topic: conceptual
+author: bwren
+ms.author: bwren
+ms.date: 01/11/2021
+
+---
+
+# Azure Monitor for existing Operations Manager customers
+This article provides guidance for customers who currently use [System Center Operations Manager](https://docs.microsoft.com/system-center/scom/welcome) and are planning a transition to [Azure Monitor](overview.md) as they migrate business applications and other resources into Azure. It assumes that your ultimate goal is a full transition into the cloud, replacing as much Operations Manager functionality as possible with Azure Monitor, without compromising your business and IT operational requirements.
+
+The specific recommendations made in this article will change as Azure Monitor and Operations Manager add features. The fundamental strategy, though, will remain consistent.
+
+> [!IMPORTANT]
+> There is a cost to implementing several Azure Monitor features described here, so you should evaluate their value before deploying across your entire environment.
+
+## Prerequisites
+This article assumes that you already use [Operations Manager](https://docs.microsoft.com/system-center/scom) and at least have a basic understanding of [Azure Monitor](overview.md). For a complete comparison between the two, see [Cloud monitoring guide: Monitoring platforms overview](/azure/cloud-adoption-framework/manage/monitor/platform-overview). That article details specific feature differences between the two to help you understand some of the recommendations made here.
+
+## General strategy
+There are no migration tools to convert assets from Operations Manager to Azure Monitor since the platforms are fundamentally different. Your migration will instead constitute a [standard Azure Monitor implementation](deploy.md) while you continue to use Operations Manager. As you customize Azure Monitor to meet your requirements for different applications and components and as it gains more features, then you can start to retire different management packs and agents in Operations Manager.
+
+The general strategy recommended in this article is the same as in the [Cloud Monitoring Guide](https://docs.microsoft.com/azure/cloud-adoption-framework/manage/monitor/), which recommends a [Hybrid cloud monitoring](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview#hybrid-cloud-monitoring) strategy that allows you to make a gradual transition to the cloud. Even though some features may overlap, this strategy will allow you to maintain your existing business processes as you become more familiar with the new platform. Only move away from Operations Manager functionality as you can replace it with Azure Monitor. Using multiple monitoring tools does add complexity, but it allows you to take advantage of Azure Monitor's ability to monitor next generation cloud workloads while retaining Operations Manager's ability to monitor server software and infrastructure components that may be on-premises or in other clouds.
+
+## Components to monitor
+It helps to categorize the different types of workloads that you need to monitor in order to determine a distinct monitoring strategy for each. [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy#high-level-modeling) provides a detailed breakdown of the different layers in your environment that need monitoring as you progress from legacy enterprise applications to modern applications in the cloud.
+
+Before the cloud, you used Operations Manager to monitor all layers. As you start your transition with Infrastructure as a Service (IaaS), you continue to use Operations Manager for your virtual machines but start to use Azure Monitor for your cloud resources. As you further transition to modern applications using Platform as a Service (PaaS), you can focus more on Azure Monitor and start to retire Operations Manager functionality.
+
+![Cloud Models](https://docs.microsoft.com/azure/cloud-adoption-framework/strategy/media/monitoring-strategy/cloud-models.png)
+
+These layers can be simplified into the following categories, which are further described in the rest of this article. While not every monitoring workload in your environment will fit neatly into one of these categories, each should be close enough to a particular category for the general recommendations to apply.
+
+**Business applications.** Applications that provide functionality specific to your business. They may be internal or external and are often developed internally using custom code. Your legacy applications will typically be hosted on virtual or physical machines running either Windows or Linux, while your newer applications will be based on application services in Azure such as Azure Web Apps and Azure Functions.
+
+**Azure services.** Resources in Azure that support your business applications that have migrated to the cloud. This includes services such as Azure Storage, Azure SQL, and Azure IoT. This also includes Azure virtual machines since they are monitored like other Azure services, but the applications and software running on the guest operating system of those virtual machines require more monitoring beyond the host.
+
+**Server software.** Software running on virtual and physical machines that supports your business applications, or packaged applications that provide general functionality to your business. Examples include Internet Information Services (IIS), SQL Server, Exchange, and SharePoint. This also includes the Windows or Linux operating system on your virtual and physical machines.
+
+**Local infrastructure.** Components specific to your on-premises environment that require monitoring. This includes such resources as physical servers, storage, and network components. These are the components that are virtualized when you move to the cloud.
+
+## Sample walkthrough
+The following is a hypothetical walkthrough of a migration from Operations Manager to Azure Monitor. This is not intended to represent the full complexity of an actual migration, but it does at least provide the basic steps and sequence. The sections below describe each of these steps in more detail.
+
+Your environment prior to moving any components into Azure is based on virtual and physical machines located on-premises or with a managed service provider. It relies on Operations Manager to monitor business applications, server software, and other infrastructure components in your environment such as physical servers and networks. You use standard management packs for server software such as IIS, SQL Server, and various vendor software, and you tune those management packs for your specific requirements. You create custom management packs for your business applications and other components that can't be monitored with existing management packs and configure Operations Manager to support your business processes.
+
+Your migration to Azure starts with IaaS, moving virtual machines supporting business applications into Azure. The monitoring requirements for these applications and the server software they depend on don't change, and you continue using Operations Manager on these servers with your existing management packs.
+
+Azure Monitor is enabled for your Azure services as soon as you create an Azure subscription. It automatically collects platform metrics and the Activity log, and you configure resource logs to be collected so you can interactively analyze all available telemetry using log queries. You enable Azure Monitor for VMs on your virtual machines to analyze monitoring data across your entire environment together and to discover relationships between machines and processes. You extend your use of Azure Monitor to your on-premises physical and virtual machines by enabling Azure Arc enabled servers on them.
+
+You enable Application Insights for each of your business applications. It identifies the different components of each application, begins to collect usage and performance data, and identifies any errors that occur in the code. You create availability tests to proactively test your external applications and alert you to any performance or availability problems. While Application Insights gives you powerful features that you don't have in Operations Manager, you continue to rely on custom management packs that you developed for your business applications since they include monitoring scenarios not yet covered by Azure Monitor.
+
+As you gain familiarity with Azure Monitor, you start to create alert rules that are able to replace some management pack functionality and start to evolve your business processes to use the new monitoring platform. This allows you to start removing machines and management packs from the Operations Manager management group. You continue to use management packs for critical server software and on-premises infrastructure but continue to watch for new features in Azure Monitor that will allow you to retire additional functionality.
+
+## Monitor Azure services
+Azure services actually require Azure Monitor to collect telemetry, and it's enabled the moment that you create an Azure subscription. The [Activity log](platform/activity-log.md) is automatically collected for the subscription, and [platform metrics](platform/data-platform-metrics.md) are automatically collected from any Azure resources you create. You can immediately start using [metrics explorer](platform/metrics-getting-started.md), which is similar to performance views in the Operations console, but it provides interactive analysis and [advanced aggregations](platform/metrics-charts.md) of data. [Create a metric alert](platform/alerts-metric.md) to be notified when a value crosses a threshold or [add a chart to an Azure dashboard](platform/metrics-charts.md#pinning-to-dashboards) for visibility.
+
+[![Metrics explorer](media/azure-monitor-operations-manager/metrics-explorer.png)](media/azure-monitor-operations-manager/metrics-explorer.png#lightbox)
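
To build intuition for how a static-threshold metric alert rule behaves, here's a minimal conceptual sketch in Python. This is not Azure Monitor's implementation; the aggregation and operator names mirror the choices you make in the portal, and the sample values are invented:

```python
# Conceptual sketch of how a static-threshold metric alert evaluates.
# Not Azure Monitor's implementation; names mirror the portal's
# aggregation and operator choices, and the sample data is invented.

def evaluate_metric_alert(values, threshold, aggregation="Average", operator="GreaterThan"):
    """Aggregate raw metric values over the look-back window, then test the condition."""
    aggregations = {
        "Average": lambda v: sum(v) / len(v),
        "Min": min,
        "Max": max,
        "Total": sum,
        "Count": len,
    }
    operators = {
        "GreaterThan": lambda a, t: a > t,
        "LessThan": lambda a, t: a < t,
    }
    aggregated = aggregations[aggregation](values)
    return operators[operator](aggregated, threshold)

# Percentage CPU samples over a 5-minute look-back window, threshold 70
cpu_samples = [55, 82, 91, 77, 88]
print(evaluate_metric_alert(cpu_samples, 70))  # True: average 78.6 exceeds 70
```

The real alert rule re-runs this kind of check at the frequency you configure, against whatever window of data the period defines.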
+
+[Create a diagnostic setting](platform/diagnostic-settings.md) for each Azure resource to send metrics and [resource logs](platform/resource-logs.md), which provide details about the internal operation of each resource, to a Log Analytics workspace. This gives you all available telemetry for your resources and allows you to use [Log Analytics](log-query/log-analytics-overview.md) to interactively analyze log and performance data using an advanced query language that has no equivalent in Operations Manager. You can also create [log query alerts](platform/alerts-log-query.md), which can use complex logic to determine alerting conditions and correlate data across multiple resources.
+
+[![Logs Analytics](media/azure-monitor-operations-manager/log-analytics.png)](media/azure-monitor-operations-manager/log-analytics.png#lightbox)
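
As one illustration of the kind of logic a log query alert can express, consider the common pattern "machines that haven't reported a heartbeat recently." In practice this is a Kusto query over collected log data; the Python sketch below is only a conceptual stand-in, with invented machine names and sample times:

```python
# Conceptual stand-in for a log query alert: find machines with no recent
# heartbeat. In Azure Monitor this is a Kusto query over collected log data;
# the names and sample times here are invented for illustration.

def stale_computers(last_heartbeat, now, max_age_minutes=10):
    """Return computers whose last heartbeat is older than max_age_minutes."""
    return sorted(
        computer
        for computer, last_seen in last_heartbeat.items()
        if now - last_seen > max_age_minutes
    )

# Minutes since some arbitrary start time
last_seen = {"web01": 58, "web02": 45, "sql01": 59}
print(stale_computers(last_seen, now=60))  # ['web02']
```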
+
+[Insights](monitor-reference.md) in Azure Monitor are similar to management packs in that they provide unique monitoring for a particular Azure service. Insights are currently available for several services including networking, storage, and containers, and others are continuously being added.
+
+[![Insight example](media/azure-monitor-operations-manager/insight.png)](media/azure-monitor-operations-manager/insight.png#lightbox)
+
+Insights are based on [workbooks](platform/workbooks-overview.md) in Azure Monitor, which combine metrics and log queries into rich interactive reports. Create your own workbooks to combine data from multiple services similar to how you might create custom views and reports in the Operations console.
+
+### Azure management pack
+The [Azure management pack](https://www.microsoft.com/download/details.aspx?id=50013) allows Operations Manager to discover Azure resources and monitor their health based on a particular set of monitoring scenarios. This management pack does require you to perform additional configuration for each resource in Azure, but it may be helpful to provide some visibility of your Azure resources in the Operations Console until you evolve your business processes to focus on Azure Monitor.
+
+[![Azure management pack](media/azure-monitor-operations-manager/operations-console.png)](media/azure-monitor-operations-manager/operations-console.png#lightbox)
+
+You may choose to use the Azure management pack if you want visibility for certain Azure resources in the Operations console and to integrate some basic alerting with your existing processes. It actually uses data collected by Azure Monitor. For long-term, complete monitoring of your Azure resources, though, you should look to Azure Monitor.
+
+## Monitor server software and local infrastructure
+When you move machines to the cloud, the monitoring requirements for their software don't change. You no longer need to monitor their physical components since they're virtualized, but the guest operating system and its workloads have the same requirements regardless of their environment.
+
+[Azure Monitor for VMs](insights/vminsights-overview.md) is the primary feature in Azure Monitor for monitoring virtual machines, their guest operating system, and their workloads. Like Operations Manager, Azure Monitor for VMs uses an agent to collect data from the guest operating system of virtual machines. This is the same performance and event data typically used by management packs for analysis and alerting. There are no preexisting rules, though, to identify and alert on issues for the business applications and server software running in those machines. You must create your own alert rules to be proactively notified of any detected issues.
+
+[![Azure Monitor for VMs performance](media/azure-monitor-operations-manager/vm-insights-performance.png)](media/azure-monitor-operations-manager/vm-insights-performance.png#lightbox)
+
+Azure Monitor also doesn't measure the health of different applications and services running on a virtual machine. Metric alerts can automatically resolve when a value drops below a threshold, but Azure Monitor doesn't currently have the ability to define health criteria for applications and services running on the machine, nor does it provide health rollup to group the health of related components.
+
+> [!NOTE]
+> A new [guest health feature for Azure Monitor for VMs](insights/vminsights-health-overview.md) is now in public preview and does alert based on the health state of a set of performance metrics. It's initially limited, though, to a specific set of performance counters related to the guest operating system, not to applications or other workloads running in the virtual machine.
+>
+> [![Azure Monitor for VMs guest health](media/azure-monitor-operations-manager/vm-insights-guest-health.png)](media/azure-monitor-operations-manager/vm-insights-guest-health.png#lightbox)
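
To clarify what health rollup means (a concept from Operations Manager's health model, not anything Azure Monitor currently exposes), a "worst state wins" rollup can be sketched as follows; the component names are hypothetical:

```python
# Illustrative "worst state wins" health rollup, as in the Operations Manager
# health model. Hypothetical component names; Azure Monitor does not
# currently expose this model.

SEVERITY = {"Healthy": 0, "Warning": 1, "Critical": 2}

def rollup(child_states):
    """A parent's health is the worst health state among its children."""
    return max(child_states, key=lambda state: SEVERITY[state])

web_tier = rollup(["Healthy", "Warning"])               # "Warning"
app_health = rollup([web_tier, "Healthy", "Critical"])  # worst child wins
print(app_health)  # Critical
```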
+
+Monitoring the software on your machines in a hybrid environment will typically use a combination of Azure Monitor for VMs and Operations Manager, depending on the requirements of each machine and on your maturity developing operational processes around Azure Monitor. The Microsoft Monitoring Agent (referred to as the Log Analytics agent in Azure Monitor) is used by both platforms, so a single machine can be monitored by both simultaneously.
+
+> [!NOTE]
+> In the future, Azure Monitor for VMs will transition to the [Azure Monitor agent](platform/azure-monitor-agent-overview.md), which is currently in public preview. It will be compatible with the Microsoft Monitoring Agent, so the same virtual machine can continue to be monitored by both platforms.
+
+Continue to use Operations Manager for functionality that Azure Monitor cannot yet provide. This includes management packs for critical server software like IIS, SQL Server, or Exchange. You may also have custom management packs developed for on-premises infrastructure that can't be reached with Azure Monitor. Also continue to use Operations Manager if it's tightly integrated into your operational processes, until you can modernize your service operations to a point where Azure Monitor and other Azure services can augment or replace it.
+
+Use Azure Monitor for VMs to enhance your current monitoring even if it doesn't immediately replace Operations Manager. Examples of features unique to Azure Monitor include the following:
+
+- Discover and monitor relationships between virtual machines and their external dependencies.
+- View aggregated performance data across multiple virtual machines in interactive charts and workbooks.
+- Use [log queries](log-query/log-query-overview.md) to interactively analyze telemetry from your virtual machines with data from your other Azure resources.
+- Create [log alert rules](platform/alerts-log-query.md) based on complex logic across multiple virtual machines.
+
+[![Azure Monitor for VMs map](media/azure-monitor-operations-manager/vm-insights-map.png)](media/azure-monitor-operations-manager/vm-insights-map.png#lightbox)
+
+In addition to Azure virtual machines, Azure Monitor for VMs can monitor machines on-premises and in other clouds using [Azure Arc enabled servers](../azure-arc/servers/overview.md). Arc enabled servers allow you to manage your Windows and Linux machines hosted outside of Azure, on your corporate network, or in another cloud, consistent with how you manage native Azure virtual machines.
+
+## Monitor business applications
+You typically require custom management packs to monitor your business applications with Operations Manager, leveraging agents installed on each virtual machine. Application Insights in Azure Monitor monitors web-based applications whether they're in Azure, other clouds, or on-premises, so it can be used for all of your applications whether or not they've been migrated to Azure.
+
+If your monitoring of a business application is limited to functionality provided by the [.NET app performance template]() in Operations Manager, then you can most likely migrate to Application Insights with no loss of functionality. In fact, Application Insights will include a significant number of additional features including the following:
+
+- Automatically discover and monitor application components.
+- Collect detailed application usage and performance data such as response time, failure rates, and request rates.
+- Collect browser data such as page views and load performance.
+- Detect exceptions and drill into stack trace and related requests.
+- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing.md) and [smart detection](app/proactive-diagnostics.md).
+- Use [metrics explorer](platform/metrics-getting-started.md) to interactively analyze performance data.
+- Use [log queries](log-query/log-query-overview.md) to interactively analyze collected telemetry together with data collected for Azure services and Azure Monitor for VMs.
+
+[![Application Insights](media/azure-monitor-operations-manager/application-insights.png)](media/azure-monitor-operations-manager/application-insights.png#lightbox)
+
+There are certain scenarios though where you may need to continue using Operations Manager in addition to Application Insights until you're able to achieve required functionality. Examples where you may need to continue with Operations Manager include the following:
+
+- [Availability tests](app/monitor-web-app-availability.md), which allow you to monitor and alert on the availability and responsiveness of your applications, require incoming requests from the IP addresses of web test agents. If your policy won't allow such access, you may need to keep using [Web Application Availability Monitors](/system-center/scom/web-application-availability-monitoring-template) in Operations Manager.
+- In Operations Manager you can set any polling interval for availability tests, with many customers checking every 60-120 seconds. Application Insights has a minimum polling interval of 5 minutes which may be too long for some customers.
+- A significant amount of monitoring in Operations Manager is performed by collecting events generated by applications and by running scripts on the local agent. These aren't standard options in Application Insights, so you may require custom work to achieve your business requirements. This might include custom alert rules using event data stored in a Log Analytics workspace and scripts launched in a virtual machine's guest using a [hybrid runbook worker](../automation/automation-hybrid-runbook-worker.md).
+- Depending on the language that your application is written in, you may be limited in the [instrumentation you can use with Application Insights](app/platforms.md).
+
+Following the basic strategy in the other sections of this guide, continue to use Operations Manager for your business applications, but take advantage of additional features provided by Application Insights. As you're able to replace critical functionality with Azure Monitor, you can start to retire your custom management packs.
+
+## Next steps
+
+- See the [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/) for a detailed comparison of Azure Monitor and System Center Operations Manager and more details on designing and implementing a hybrid monitoring environment.
+- Read more about [monitoring Azure resources in Azure Monitor](insights/monitor-azure-resource.md).
+- Read more about [monitoring Azure virtual machines in Azure Monitor](insights/monitor-vm-azure.md).
+- Read more about [Azure Monitor for VMs](insights/vminsights-overview.md).
+- Read more about [Application Insights](app/app-insights-overview.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-analyze https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/container-insights-analyze.md
@@ -125,7 +125,7 @@ In metrics explorer, you can view aggregated node and pod utilization metrics fr
| insights.container/pods | | | | PodCount | A pod count from Kubernetes.|
-You can [split](../platform/metrics-charts.md#apply-splitting-to-a-chart) a metric to view it by dimension and visualize how different segments of it compare to each other. For a node, you can segment the chart by the *host* dimension. From a pod, you can segment it by the following dimensions:
+You can [split](../platform/metrics-charts.md#apply-splitting) a metric to view it by dimension and visualize how different segments of it compare to each other. For a node, you can segment the chart by the *host* dimension. From a pod, you can segment it by the following dimensions:
* Controller * Kubernetes namespace
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/vminsights-enable-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/vminsights-enable-overview.md
@@ -49,6 +49,9 @@ Azure Monitor for VMs is available for Azure Arc enabled servers in regions wher
Azure Monitor for VMs supports any operating system that supports the Log Analytics agent and Dependency agent. See [Overview of Azure Monitor agents ](../platform/agents-overview.md#supported-operating-systems) for a complete list.
+> [!IMPORTANT]
+> The Azure Monitor for VMs guest health feature has more limited operating system support while it's in public preview. See [Enable Azure Monitor for VMs guest health (preview)](vminsights-health-enable.md) for a detailed list.
+ See the following list of considerations on Linux support of the Dependency agent that supports Azure Monitor for VMs: - Only default and SMP Linux kernel releases are supported.
@@ -60,7 +63,7 @@ See the following list of considerations on Linux support of the Dependency agen
## Log Analytics workspace Azure Monitor for VMs requires a Log Analytics workspace. See [Configure Log Analytics workspace for Azure Monitor for VMs](vminsights-configure-workspace.md) for details and requirements of this workspace. ## Agents
-Azure Monitor for VMs requires the following two agents to be installed on each virtual machine or virtual machine scale set to be monitored. Installing these agents and connecting them to the workspace is the only requirement to onboard the resource.
+Azure Monitor for VMs requires the following two agents to be installed on each virtual machine or virtual machine scale set to be monitored. To onboard the resource, install these agents and connect them to the workspace. See [Network requirements](../platform/log-analytics-agent.md#network-requirements) for the network requirements for these agents.
- [Log Analytics agent](../platform/log-analytics-agent.md). Collects events and performance data from the virtual machine or virtual machine scale set and delivers it to the Log Analytics workspace. Deployment methods for the Log Analytics agent on Azure resources use the VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md). - Dependency agent. Collects discovered data about processes running on the virtual machine and external process dependencies, which are used by the [Map feature in Azure Monitor for VMs](vminsights-maps.md). The Dependency agent relies on the Log Analytics agent to deliver its data to Azure Monitor. Deployment methods for the Dependency agent on Azure resources use the VM extension for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/vminsights-log-search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/vminsights-log-search.md
@@ -88,7 +88,7 @@ Here are some important points to consider:
#### Naming and Classification
-For convenience, the IP address of the remote end of a connection is included in the RemoteIp property. For inbound connections, RemoteIp is the same as SourceIp, while for outbound connections, it is the same as DestinationIp. The RemoteDnsCanonicalNames property represents the DNS canonical names reported by the machine for RemoteIp. The RemoteDnsQuestions and RemoteClassification properties are reserved for future use.
+For convenience, the IP address of the remote end of a connection is included in the RemoteIp property. For inbound connections, RemoteIp is the same as SourceIp, while for outbound connections, it is the same as DestinationIp. The RemoteDnsCanonicalNames property represents the DNS canonical names reported by the machine for RemoteIp. The RemoteDnsQuestions property represents the DNS questions reported by the machine for RemoteIp. The RemoteClassification property is reserved for future use.
#### Geolocation
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/learn/tutorial-metrics-explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/learn/tutorial-metrics-explorer.md
@@ -65,20 +65,20 @@ Use the **time brush** to investigate an interesting area of the chart such as a
## Apply dimension filters and splitting See the following references for advanced features that allow you to perform additional analysis on your metrics and identify potential outliers in your data. -- [Filtering](../platform/metrics-charts.md#apply-filters-to-charts) lets you choose which dimension values are included in the chart. For example, you might want to show only successful requests when charting a *server response time* metric.
+- [Filtering](../platform/metrics-charts.md#filters) lets you choose which dimension values are included in the chart. For example, you might want to show only successful requests when charting a *server response time* metric.
-- [Splitting](../platform/metrics-charts.md#apply-splitting-to-a-chart) controls whether the chart displays separate lines for each value of a dimension, or aggregates the values into a single line. For example, you might want to see one line for an average response time across all server instances or you may want separate lines for each server.
+- [Splitting](../platform/metrics-charts.md#apply-splitting) controls whether the chart displays separate lines for each value of a dimension, or aggregates the values into a single line. For example, you might want to see one line for an average response time across all server instances or you may want separate lines for each server.
See [examples of the charts](../platform/metric-chart-samples.md) that have filtering and splitting applied. ## Advanced chart settings
-You can customize chart style, title, and modify advanced chart settings. When done with customization, pin it to a dashboard to save your work. You can also configure metrics alerts. See [Advanced features of Azure Metrics Explorer](../platform/metrics-charts.md#lock-boundaries-of-chart-y-axis) to learn about these and other advanced features of Azure Monitor metrics explorer.
+You can customize the chart style and title, and modify advanced chart settings. When you're done customizing, pin the chart to a dashboard to save your work. You can also configure metric alerts. See [Advanced features of Azure Metrics Explorer](../platform/metrics-charts.md#locking-the-range-of-the-y-axis) to learn about these and other advanced features of Azure Monitor metrics explorer.
## Next steps Now that you've learned how to work with metrics in Azure Monitor, learn how to use metrics to send proactive alerts. > [!div class="nextstepaction"]
-> [Create, view, and manage metric alerts using Azure Monitor](../platform/metrics-charts.md#create-alert-rules)
+> [Create, view, and manage metric alerts using Azure Monitor](../platform/metrics-charts.md#alert-rules)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/agents-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/agents-overview.md
@@ -134,7 +134,7 @@ The following tables list the operating systems that are supported by the Azure
### Windows
-| Operations system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
+| Operating system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
|:---|:---:|:---:|:---:|:---:| | Windows Server 2019 | X | X | X | X | | Windows Server 2016 | X | X | X | X |
@@ -148,7 +148,7 @@ The following tables list the operating systems that are supported by the Azure
### Linux
-| Operations system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
+| Operating system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
|:---|:---:|:---:|:---:|:---: | Amazon Linux 2017.09 | | X | | | | CentOS Linux 8 | | X | X | |
@@ -156,7 +156,7 @@ The following tables list the operating systems that are supported by the Azure
| CentOS Linux 6 | | X | | | | CentOS Linux 6.5+ | | X | X | X | | Debian 9 | X | X | x | X |
-| Debian 8 | | X | X | X |
+| Debian 8 | | X | X | |
| Debian 7 | | | | X | | OpenSUSE 13.1+ | | | | X | | Oracle Linux 8 | | X | | |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-metric-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-metric-overview.md
@@ -22,7 +22,7 @@ Let's say you have created a simple static threshold metric alert rule as follow
- Target Resource (the Azure resource you want to monitor): myVM - Metric: Percentage CPU - Condition Type: Static-- Time Aggregation (Statistic that is run over raw metric values. [Supported time aggregations](metrics-charts.md#changing-aggregation) are Min, Max, Avg, Total, Count): Average
+- Time Aggregation (Statistic that is run over raw metric values. [Supported time aggregations](metrics-charts.md#aggregation) are Min, Max, Avg, Total, Count): Average
- Period (The look back window over which metric values are checked): Over the last 5 mins - Frequency (The frequency with which the metric alert checks if the conditions are met): 1 min - Operator: Greater Than
@@ -39,7 +39,7 @@ Let's say you have created a simple Dynamic Thresholds metric alert rule as foll
- Target Resource (the Azure resource you want to monitor): myVM - Metric: Percentage CPU - Condition Type: Dynamic-- Time Aggregation (Statistic that is run over raw metric values. [Supported time aggregations](metrics-charts.md#changing-aggregation) are Min, Max, Avg, Total, Count): Average
+- Time Aggregation (Statistic that is run over raw metric values. [Supported time aggregations](metrics-charts.md#aggregation) are Min, Max, Avg, Total, Count): Average
- Period (The look back window over which metric values are checked): Over the last 5 mins - Frequency (The frequency with which the metric alert checks if the conditions are met): 1 min - Operator: Greater Than
@@ -176,7 +176,7 @@ You can find the full list of supported resource types in this [article](./alert
## Next steps - [Learn how to create, view, and manage metric alerts in Azure](alerts-metric.md)-- [Learn how to create alerts within Azure Montior Metrics Explorer](./metrics-charts.md#create-alert-rules)
+- [Learn how to create alerts within Azure Monitor Metrics Explorer](./metrics-charts.md#alert-rules)
- [Learn how to deploy metric alerts using Azure Resource Manager templates](./alerts-metric-create-templates.md) - [Learn more about action groups](action-groups.md) - [Learn more about Dynamic Thresholds condition type](alerts-dynamic-thresholds.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/autoscale-troubleshoot.md
@@ -83,7 +83,7 @@ The chart on the bottom shows a few values.
- The **Observed Capacity** (purple) shows the instance count seen by autoscale engine. - The **Metric Threshold** (light green) is set to 10.
-If there are multiple scale action rules, you can use splitting or the **add filter** option in the Metrics explorer chart to look at metric by a specific source or rule. For more information on splitting a metric chart, see [Advanced features of metric charts - splitting](metrics-charts.md#apply-splitting-to-a-chart)
+If there are multiple scale action rules, you can use splitting or the **add filter** option in the Metrics explorer chart to look at metric by a specific source or rule. For more information on splitting a metric chart, see [Advanced features of metric charts - splitting](metrics-charts.md#apply-splitting)
## Example 3 - Understanding autoscale events
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-aggregation-explained https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-aggregation-explained.md new file mode 100644
@@ -0,0 +1,282 @@
+---
+title: Azure Monitor Metrics aggregation and display explained
+description: Detailed information on how metrics are aggregated in Azure Monitor
+author: rboucher
+ms.author: robb
+services: azure-monitor
+ms.topic: conceptual
+ms.date: 01/12/2020
+ms.subservice: metrics
+---
+
+# Azure Monitor Metrics aggregation and display explained
+
+This article explains the aggregation of metrics in the Azure Monitor time-series database that backs Azure Monitor [platform metrics](data-platform.md) and [custom metrics](metrics-custom-overview.md). This article also applies to standard [Application Insights metrics](../app/app-insights-overview.md).
+
+This is a complex topic; you don't need to understand everything in this article to use Azure Monitor metrics effectively.
+
+## Overview and terms
+
+When you add a metric to a chart, metrics explorer automatically pre-selects its default aggregation. The default makes sense in the basic scenarios, but you can use different aggregations to gain more insights about the metric. Viewing different aggregations on a chart requires that you understand how metrics explorer handles them.
+
+Let's define a few terms clearly first:
+
+- **Metric value** - A single measurement value gathered for a specific resource.
+- **Time period** - A generic period of time.
+- **Time interval** - The period of time between the gathering of two metric values.
+- **Time range** - The time period displayed on a chart. The typical default is 24 hours. Only specific ranges are available.
+- **Time granularity** or **time grain** - The time period used to aggregate values together for display on a chart. Only specific ranges are available. The current minimum is 1 minute. The time granularity value should be smaller than the selected time range to be useful; otherwise, just one value is shown for the entire chart.
+- **Aggregation type** - A type of statistic calculated from multiple metric values.
+- **Aggregate** - The process of taking multiple input values and producing a single output value via the rules defined by the aggregation type. For example, taking an average of multiple values.
+
+Metrics are a series of metric values captured at a regular time interval. When you plot a chart, the values of the selected metric are separately aggregated over the time granularity (also known as time grain). You select the size of the time granularity using the [Metrics Explorer time picker panel](metrics-getting-started.md#select-a-time-range). If you don't make an explicit selection, the time granularity is automatically selected based on the currently selected time range. Once selected, the metric values that were captured during each time granularity interval are aggregated and placed onto the chart - one datapoint per interval.
+
+## Aggregation types
+
+There are five basic aggregation types available in the metrics explorer. Metrics explorer hides the aggregations that are irrelevant and cannot be used for a given metric.
+
+- **Sum** - The sum of all values captured over the aggregation interval. Sometimes referred to as the Total aggregation.
+- **Count** - The number of measurements captured over the aggregation interval. Count doesn't look at the value of the measurement, only the number of records.
+- **Average** - The average of the metric values captured over the aggregation interval. For most metrics, this value is Sum/Count.
+- **Min** - The smallest value captured over the aggregation interval.
+- **Max** - The largest value captured over the aggregation interval.
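As a minimal sketch (the measurement values are hypothetical, not from the article), the five basic aggregation types can be computed over the values captured in one aggregation interval like this:

```python
# Hypothetical metric values captured during a single time grain.
values = [4, 1, 7, 2, 6]

aggregations = {
    "Sum": sum(values),
    "Count": len(values),                  # number of records, not their values
    "Average": sum(values) / len(values),  # Sum/Count for most metrics
    "Min": min(values),
    "Max": max(values),
}
print(aggregations)
```

Note how Count ignores the measurement values themselves, and Average is simply Sum divided by Count.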
+
+For example, suppose a chart is showing the **Network Out Total** metric for a VM using the **SUM** aggregation over the last 24-hour time span. The time range and granularity can be changed from the upper right of the chart as seen in the following screenshot.
+
+:::image type="content" source="media/metrics-aggregation-explained/time-range-granularity-picker.png" alt-text="Screenshot showing time range and time granularity picker" border="true":::
+
+For time granularity = 30 minutes and the time range = 24 hours:
+
+- The chart is drawn from 48 datapoints. That is, 24 hours x 2 datapoints per hour (60 min/30 min), each aggregating the underlying 1-minute datapoints.
+- The line chart connects 48 dots in the chart plot area.
+- Each datapoint represents the sum of all network out bytes sent out during each of the relevant 30-min time periods.
+
+:::image type="content" source="media/metrics-aggregation-explained/24-hour-30-min-gran.png" alt-text="Screenshot showing data on a line graph set to 24-hour time range and 30-minute time granularity" border="true" lightbox="media/metrics-aggregation-explained/24-hour-30-min-gran.png":::
+
+*Click on the images in this section to see larger versions.*
+
+If you switch the time granularity to 15 minutes, the chart is drawn from 96 aggregated data points. That is, 60min/15min = 4 datapoints per hour x 24 hours.
+
+:::image type="content" source="media/metrics-aggregation-explained/24-hour-15-min-gran.png" alt-text="Screenshot showing data on a line graph set to 24-hour time range and 15-minute time granularity" border="true" lightbox="media/metrics-aggregation-explained/24-hour-15-min-gran.png":::
+
+For time granularity of 5 minutes, you get 24 x (60/5) = 288 points.
+
+:::image type="content" source="media/metrics-aggregation-explained/24-hour-5-min-gran.png" alt-text="Screenshot showing data on a line graph set to 24-hour time range and 5-minute time granularity" border="true" lightbox="media/metrics-aggregation-explained/24-hour-5-min-gran.png":::
+
+For time granularity of 1 minute (the smallest possible on the chart), you get 24 x 60/1 = 1440 points.
+
+:::image type="content" source="media/metrics-aggregation-explained/24-hour-1-min-gran.png" alt-text="Screenshot showing data on a line graph set to 24-hour time range and 1-minute time granularity" border="true" lightbox="media/metrics-aggregation-explained/24-hour-1-min-gran.png":::
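The datapoint counts above all follow the same arithmetic: time range divided by time granularity. A quick sketch of that calculation for the four granularities shown:

```python
time_range_min = 24 * 60  # 24-hour time range, in minutes

# Datapoints drawn on the chart for each time granularity (in minutes).
points_per_grain = {grain: time_range_min // grain for grain in (30, 15, 5, 1)}

for grain, points in points_per_grain.items():
    print(f"{grain}-minute granularity -> {points} datapoints")
```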
+
+The charts look different for these summations as shown in the previous screenshots. Notice how this VM has a lot of output in a small time period relative to the rest of the time window.
+
+The time granularity allows you to adjust the "signal-to-noise" ratio on a chart. Larger time granularities remove noise and smooth out spikes. Notice the variations in the 1-minute chart at the bottom, and how they smooth out as you move to larger granularity values.
+
+This smoothing behavior is important when you send this data to other systems--for example, alerts. Typically, you don't want to be alerted by very short spikes of CPU over 90%. But if the CPU stays at 90% for 5 minutes, that's likely important. If you set up an alert rule on CPU (or any metric), making the time granularity larger can reduce the number of false alerts you receive.
+
+It is important to establish what's "normal" for your workload to know what time interval is best. This is one of the benefits of [dynamic alerts](alerts-dynamic-thresholds.md), which is a different topic not covered here.
+
+## How the system collects metrics
+
+Data collection varies by metric. There are two types of collection frequency.
+
+### Measurement collection frequency
+
+- **Regular** - The metric is gathered at a consistent time interval that does not vary.
+
+- **Activity-based** - The metric is gathered based on when a transaction of a certain type occurs. Each transaction has a metric entry and a time stamp. They are not gathered at regular intervals so there are a varying number of records over a given time period.
+
+### Granularity
+
+The minimum time interval is 1 minute, but the underlying system may capture data faster depending on the metric. For example, CPU percentage is tracked every 15 seconds at a regular interval. Because HTTP failures are tracked as transactions, they can easily occur many times per minute. Other metrics, such as SQL Storage, are captured every 20 minutes. This choice is up to the individual resource provider and type. Most try to provide the smallest interval possible.
+
+### Dimensions, splitting, and filtering
+
+Metrics are captured for each individual resource. However, the level at which metrics are collected, stored, and able to be charted may vary. This level of detail is represented by **metric dimensions**. Each resource provider defines how detailed the data it collects is. Azure Monitor only defines how such detail should be presented and stored.
+
+When you chart a metric in metric explorer, you have the option to "split" the chart by a dimension. Splitting a chart means that you are looking into the underlying data for more detail and seeing that data charted or filtered in metric explorer.
+
+For example, [Microsoft.ApiManagement/service](metrics-supported.md#microsoftapimanagementservice) has *Location* as a dimension for many metrics.
+
+- **Capacity** is one such metric. Having the *Location* dimension implies that the underlying system is storing a metric record for the capacity of each location, rather than just one for the aggregate amount. You can then retrieve or split out that information in a metric chart.
+
+- Looking at **Overall Duration of Gateway Requests**, there are two dimensions, *Location* and *Hostname*, which tell you the location of a duration measurement and the hostname it came from.
+
+- One of the more flexible metrics, **Requests**, has 7 different dimensions.
+
+Check the Azure Monitor [metrics supported](metrics-supported.md) article for details on each metric and the dimensions available. In addition, the documentation for each resource provider and type may provide additional information on the dimensions and what they measure.
+
+You can use splitting and filtering together to dig into a problem. Below is an example of a chart showing the *Avg Disk Write Bytes* for a group of VMs in a resource group. We have a rollup of all the VMs with this metric, but we may want to dig in to see which VMs are actually responsible for the peaks around 6 AM. Are they the same machine? How many machines are involved?
+
+:::image type="content" source="media/metrics-aggregation-explained/total-disk write-bytes-all-VMs.png" alt-text="Screenshot showing total Disk Write Bytes for all virtual machines in Contoso Hotels resource group" border="true" lightbox="media/metrics-aggregation-explained/total-disk write-bytes-all-VMs.png":::
+
+*Click on the images in this section to see larger versions.*
+
+When we apply splitting, we can see the underlying data, but it's a bit of a mess. It turns out that 20 VMs are being aggregated into the chart above. Hovering over the large peak at 6 AM shows that CH-DCVM11 is the cause, but it's hard to see the rest of the data associated with that VM because other VMs clutter the chart.
+
+:::image type="content" source="media/metrics-aggregation-explained/split-total-disk write-bytes-all-VMs.png" alt-text="Screenshot showing Disk Write Bytes for all virtual machines in Contoso Hotels resource group split by virtual machine name" border="true" lightbox="media/metrics-aggregation-explained/split-total-disk write-bytes-all-VMs.png":::
+
+Using filtering allows us to clean up the chart to see what's really happening. You can check or uncheck the VMs you want to see. Notice the dotted lines. Those are mentioned in a later section.
+
+:::image type="content" source="media/metrics-aggregation-explained/split-filter-total-disk write-bytes-all-VMs.png" alt-text="Screenshot showing Disk Write Bytes for all virtual machines in Contoso Hotels resource group split and filtered by virtual machine name" border="true" lightbox="media/metrics-aggregation-explained/split-filter-total-disk write-bytes-all-VMs.png":::
+
+For more information on how to show split dimension data on a metric explorer chart, see [Advanced features of metrics explorer- filters and splitting](metrics-charts.md#filters).
+
+### NULL and zero values
+
+When the system expects metric data from a resource but doesn't receive it, it records a NULL value. NULL is different from a zero value, which becomes important in the calculation of aggregations and charting. NULL values are not counted as valid measurements.
+
+NULLs show up differently on different charts. Scatter plots skip showing a dot on the chart. Bar charts skip showing the bar. On line charts, NULL can show up as [dotted or dashed lines](metrics-troubleshoot.md#chart-shows-dashed-line) like those shown in the screenshot in the previous section. When calculating averages that include NULLs, there are fewer data points to take the average from. This behavior can sometimes result in an unexpected drop in values on a chart, though usually less so than if the value was converted to a zero and used as a valid datapoint.
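A minimal sketch of the averaging difference, using hypothetical readings where one resource fails to report (`None` stands in for NULL):

```python
# One missing reading among otherwise-hypothetical metric values.
readings = [40, 50, None, 60]

# NULLs are skipped: fewer datapoints contribute to the average.
valid = [v for v in readings if v is not None]
avg_skipping_nulls = sum(valid) / len(valid)

# If the gap were instead recorded as zero, it would drag the average down.
avg_if_zeros = sum(v if v is not None else 0 for v in readings) / len(readings)

print(avg_skipping_nulls, avg_if_zeros)
```

Skipping NULLs yields 50.0 here, while treating the gap as a zero yields 37.5, which is why NULL handling matters for alert thresholds.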
+
+[Custom metrics](metrics-custom-overview.md) always use NULLs when no data is received. With [platform metrics](data-platform.md), each resource provider decides whether to use zeros or NULLs based on what makes the most sense for a given metric.
+
+Azure Monitor alerts use the values the resource provider writes to the metric database, so it's important to know how the resource provider handles NULLs by viewing the data first.
+
+## How aggregation works
+
+The metric charts in the previous sections show different types of aggregated data. The system pre-aggregates the data so that the requested charts can render quickly without a lot of repeated computation.
+
+In this example:
+
+- We are collecting a **fictitious** transactional metric called **HTTP failures**
+- *Server* is a dimension for the **HTTP failures** metric.
+- We have 3 servers - Server A, B, and C.
+
+To simplify the explanation, we'll start with the SUM aggregation type only.
+
+### Sub minute to 1-minute aggregation
+
+First, raw metric data is collected and stored in the Azure Monitor metrics database. Because *Server* is a dimension, each server has transaction records stored with a timestamp. Given that the smallest time period you can view as a customer is 1 minute, those records are first aggregated into 1-minute metric values for each individual server. The aggregation process for Server B is shown in the graphic below. Servers A and C are handled the same way, with different data.
+
+:::image type="content" source="media/metrics-aggregation-explained/sub-minute-transaction.png" alt-text="Screenshot showing sub minute transactional entries into 1-minute aggregations. " border="false":::
+
+The resulting 1-minute aggregated values are stored as new entries in the metrics database so they can be gathered for later calculations.
+
+:::image type="content" source="media/metrics-aggregation-explained/sub-minute-transaction-dimension-aggregated.png" alt-text="Screenshot showing multiple 1-minute aggregated entries across dimension of server. Server A, B, and C shown individually" border="false":::
+
+### Dimension aggregation
+
+The 1-minute calculations are then collapsed by dimension and again stored as individual records. In this case, all the data from all the individual servers are aggregated into a 1-minute interval metric and stored in the metrics database for use in later aggregations.
+
+:::image type="content" source="media/metrics-aggregation-explained/1-minute-transaction-dimension-flattened-aggregated.png" alt-text="Screenshot showing multiple 1-minute aggregated entries of Server A, B, and C aggregated into 1-minute All Servers entries" border="false":::
+
+For clarity, the following table shows the method of aggregation.
+
+| Period | Server A | Server B | Server C | Sum (A+B+C)|
+| -------- | -------- | -------- | -------- | -------- |
+| Minute 1 | 1 | 1 | 1 | 3 |
+| Minute 2 | 0 | 5 | 1 | 6 |
+| Minute 3 | 0 | 5 | 1 | 6 |
+| Minute 4 | 2 | 3 | 4 | 9 |
+| Minute 5 | 1 | 0 | 3 | 4 |
+| Minute 6 | 1 | 0 | 4 | 5 |
+| Minute 7 | 1 | 2 | 4 | 7 |
+| Minute 8 | 0 | 1 | 0 | 1 |
+| Minute 9 | 1 | 1 | 4 | 6 |
+| Minute 10| 2 | 1 | 0 | 3 |
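The collapse across the *Server* dimension in the table above can be sketched in a few lines (the per-server values are taken from the table):

```python
# Per-server 1-minute SUM values from the table, minutes 1 through 10.
per_server = {
    "A": [1, 0, 0, 2, 1, 1, 1, 0, 1, 2],
    "B": [1, 5, 5, 3, 0, 0, 2, 1, 1, 1],
    "C": [1, 1, 1, 4, 3, 4, 4, 0, 4, 0],
}

# Collapse the Server dimension: sum the three servers for each minute.
all_servers = [sum(minute_vals) for minute_vals in zip(*per_server.values())]
print(all_servers)  # [3, 6, 6, 9, 4, 5, 7, 1, 6, 3]
```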
+
+Only one dimension is shown above, but this same aggregation and storage process occurs for **all dimensions** that a metric supports.
+
+- Collect values into a 1-minute aggregated set for each value of that dimension. Store those values.
+- Collapse the dimension into a 1-minute aggregated SUM. Store those values.
+
+Let's introduce another dimension of HTTP failures called NetworkAdapter. Let's say we had a varying number of adapters per server.
+
+- Server A has 1 adapter
+- Server B has 2 adapters
+- Server C has 3 adapters
+
+We'd collect data for the following transactions separately. They would be marked with:
+
+- A time
+- A value
+- The server the transaction came from
+- The adapter that the transaction came from
+
+Each of those subminute streams would then be aggregated into 1-minute time-series values and stored in the Azure Monitor metric database:
+
+- Server A, Adapter 1
+- Server B, Adapter 1
+- Server B, Adapter 2
+- Server C, Adapter 1
+- Server C, Adapter 2
+- Server C, Adapter 3
+
+In addition, the following collapsed aggregations would also be stored:
+
+- Server A, Adapter 1 (because there is nothing to collapse, it would be stored again)
+- Server B, Adapter 1+2
+- Server C, Adapter 1+2+3
+- Servers ALL, Adapters ALL
+
+This shows that metrics with large numbers of dimensions have a larger number of aggregations. It's not important to know all the permutations, just understand the reasoning. The system wants to have both the individual data and the aggregated data stored for quick retrieval for access on any chart. The system picks either the most relevant stored aggregation or the underlying raw data depending on what you choose to display.
+
+### Aggregation with no dimensions
+
+Because this metric has a dimension *Server*, you can get to the underlying data for servers A, B, and C above via splitting and filtering, as explained earlier in this article. If the metric didn't have *Server* as a dimension, you as a customer could only access the aggregated 1-minute sums shown in black on the diagram. That is, the values of 3, 6, 6, 9, and so on. The system also would not do the underlying work to aggregate split values, because it would never use them in metric explorer or send them out via the metrics REST API.
+
+## Viewing time granularities above 1 minute
+
+If you ask for metrics at a larger granularity, the system uses the 1-minute aggregated sums to calculate the sums for the larger time granularities. Below, dotted lines show the summation method for the 2-minute and 5-minute time granularities. Again, we are showing just the SUM aggregation type for simplicity.
+
+:::image type="content" source="media/metrics-aggregation-explained/1-minute-to-2-min-5-min.png" alt-text="Screenshot showing multiple 1-minute aggregated entries across dimension of server aggregated into 2-min and 5-min time periods." border="false":::
+
+For the 2-minute time granularity:
+
+| Period | Sums |
+| -------------|-------------|
+| Minute 1 & 2 | (3 + 6) = 9 |
+| Minute 3 & 4 | (6 + 9) = 15|
+| Minute 5 & 6 | (4 + 5) = 9 |
+| Minute 7 & 8 | (7 + 1) = 8 |
+| Minute 9 & 10| (6 + 3) = 9 |
+
+For the 5-minute time granularity:
+
+| Period | Sums |
+|---------------------|------------------------|
+| Minute 1 through 5 | 3 + 6 + 6 + 9 + 4 = 28 |
+| Minute 6 through 10 | 5 + 7 + 1 + 6 + 3 = 22 |
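The rollups in the two tables above are just sums of consecutive groups of the stored 1-minute sums, which a short sketch can reproduce:

```python
# The stored 1-minute SUM values for "all servers", minutes 1 through 10.
one_min_sums = [3, 6, 6, 9, 4, 5, 7, 1, 6, 3]

def rollup(values, grain):
    """Sum consecutive groups of `grain` 1-minute values into larger grains."""
    return [sum(values[i:i + grain]) for i in range(0, len(values), grain)]

print(rollup(one_min_sums, 2))  # [9, 15, 9, 8, 9]
print(rollup(one_min_sums, 5))  # [28, 22]
```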
+
+The system uses the stored aggregated data that gives the best performance.
+
+Below is the larger diagram for the above 1-minute aggregation process, with some of the arrows left out to improve readability.
+
+:::image type="content" source="media/metrics-aggregation-explained/sum-aggregation-full.png" alt-text="Screenshot showing consolidation of previous 3 screenshots. Multiple 1-minute aggregated entries across dimension of server aggregated in 1-minute, 2-minute, and 5-minute intervals. Server A, B, and C shown individually" border="false":::
+
+## More complex example
+
+Following is a larger example using values for a fictitious metric called HTTP Response time in milliseconds. Here we introduce additional levels of complexity.
+
+1. We show aggregation for Sum, Count, Min, and Max and the calculation for Average.
+2. We show NULL values and how they affect calculations.
+
+Consider the following example. The boxes and arrows show examples of how the values are aggregated and calculated.
+
+The same 1-minute preaggregation process described in the previous section occurs for Sum, Count, Minimum, and Maximum. However, Average is NOT pre-aggregated. It is always recalculated from the aggregated data to avoid calculation errors.
+
+:::image type="content" source="media/metrics-aggregation-explained/full-aggregation-example-all-types.png" alt-text="Screenshot showing complex example of aggregation and calculation of sum, count, min, max and average from 1 minute to 10 minutes." border="false" lightbox="media/metrics-aggregation-explained/full-aggregation-example-all-types.png":::
+
+Consider minute 6 for the 1-minute aggregation as highlighted above. This minute is the point where Server B went offline and stopped reporting data, perhaps due to a reboot.
+
+From Minute 6 above, the calculated 1-minute aggregation types are:
+
+| Aggregation type | Value | Notes |
+|------------------|--------------|-------|
+| Sum | 53+20=73 | |
+| Count | 2 | Shows the effect of NULLs. The value would have been 3 if the server had been online. |
+| Minimum | 20 | |
+| Maximum | 53 | |
+| Average | 73 / 2 | Always the Sum divided by the Count. It's never stored and always recalculated for each time granularity using the aggregated numbers for that granularity. Notice the recalculation for the 5-minute and 10-minute time granularities as highlighted above. |
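The minute-6 calculation in the table above can be sketched directly. `None` stands in for the NULL from offline Server B; the values 53 and 20 come from the example:

```python
# Minute 6: Server B is offline, so its reading is NULL (None).
minute_6 = [53, None, 20]

# NULLs aren't valid measurements, so only two values participate.
valid = [v for v in minute_6 if v is not None]
agg = {
    "Sum": sum(valid),    # 73
    "Count": len(valid),  # 2, not 3, because of the NULL
    "Min": min(valid),    # 20
    "Max": max(valid),    # 53
}
# Average is never stored; it's always recalculated as Sum/Count.
agg["Average"] = agg["Sum"] / agg["Count"]
print(agg)
```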
+
+The red text color indicates values that might be considered out of the normal range and shows how they propagate (or fail to propagate) as the time granularity goes up. Notice how *Min* and *Max* indicate that there are underlying anomalies, while *Average* and *Sum* lose that information as the time granularity goes up.
+
+You can also see that the NULLs give a better calculation of average than if zeros were used instead.
+
+> [!NOTE]
+> Though not the case in this example, *Count* is equal to *Sum* in cases where a metric is always captured with the value of 1. This is common when a metric tracks the occurrence of a transactional event--for example, the number of HTTP failures mentioned in a previous example in this article.
+
+## Next steps
+
+- [Getting started with metrics explorer](metrics-getting-started.md)
+- [Advanced Metrics explorer](metrics-charts.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-charts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-charts.md
@@ -1,6 +1,6 @@
---
-title: Advanced features of Azure Metrics Explorer
-description: Learn about advanced features of Azure Monitor Metrics Explorer
+title: Advanced features of the Azure metrics explorer
+description: Learn about advanced uses of the Azure metrics explorer.
author: vgorbenko services: azure-monitor
@@ -10,213 +10,222 @@ ms.author: vitalyg
ms.subservice: metrics ---
-# Advanced features of Azure Metrics Explorer
+# Advanced features of the Azure metrics explorer
> [!NOTE]
-> This article assumes that you are familiar with basic features of Metrics Explorer. If you are a new user and want to learn how to create your first metric chart, see [Getting started with Azure Metrics Explorer](metrics-getting-started.md).
+> This article assumes you're familiar with basic features of the Azure metrics explorer feature of Azure Monitor. If you're a new user and want to learn how to create your first metric chart, see [Getting started with the metrics explorer](metrics-getting-started.md).
-## Metrics in Azure
+In Azure Monitor, [metrics](data-platform-metrics.md) are a series of measured values and counts that are collected and stored over time. Metrics can be standard (also called "platform") or custom.
-[Metrics in Azure Monitor](data-platform-metrics.md) are the series of measured values and counts that are collected and stored over time. There are standard (or "platform") metrics, and custom metrics. The standard metrics are provided to you by the Azure platform itself. Standard metrics reflect the health and usage statistics of your Azure resources. Whereas custom metrics are sent to Azure by your applications using the [Application Insights API for custom events and metrics](../app/api-custom-events-metrics.md), [Windows Azure Diagnostics (WAD) extension](./diagnostics-extension-overview.md), or by [Azure Monitor REST API](./metrics-store-custom-rest-api.md).
+Standard metrics are provided by the Azure platform. They reflect the health and usage statistics of your Azure resources.
## Resource scope picker
-The resource scope picker allows you to view metrics across single and multiple resources. Below are instructions on how to use the resource scope picker.
+The resource scope picker allows you to view metrics across single resources and multiple resources. The following sections explain how to use the resource scope picker.
-### Selecting a single resource
-Select **Metrics** from the **Azure Monitor** menu or from the **Monitoring** section of a resource's menu. Click on the "Select a scope" button to open the scope picker, which will allow you to select the resource(s) you want to see metrics for. This should already be populated if you opened metrics explorer from a resource's menu.
+### Select a single resource
+Select **Metrics** from the **Azure Monitor** menu or from the **Monitoring** section of a resource's menu. Then choose **Select a scope** to open the scope picker.
-![Screenshot of the resource scope picker](./media/metrics-charts/scope-picker.png)
+Use the scope picker to select the resources whose metrics you want to see. The scope should be populated if you opened the Azure metrics explorer from a resource's menu.
-For certain resources, you can only view a single resource's metrics at a time. These resources are under the "All resource types" section in the Resource types dropdown.
+![Screenshot showing how to open the resource scope picker.](./media/metrics-charts/scope-picker.png)
-![Screenshot of single resource](./media/metrics-charts/single-resource-scope.png)
+For some resources, you can view only one resource's metrics at a time. In the **Resource types** menu, these resources are in the **All resource types** section.
-After clicking your desired resource, you will see all subscriptions and resource groups that contain that resource.
+![Screenshot showing a single resource.](./media/metrics-charts/single-resource-scope.png)
-![Screenshot of available resources](./media/metrics-charts/available-single-resource.png)
+After selecting a resource, you see all subscriptions and resource groups that contain that resource.
+
+![Screenshot showing available resources.](./media/metrics-charts/available-single-resource.png)
> [!TIP]
-> If you'd like to view multiple resource's metrics at the same time, or metrics across a subscription or resource group, click the Upvote button.
+> If you want the capability to view the metrics for multiple resources at the same time, or to view metrics across a subscription or resource group, select **Upvote**.
-Once you're satisfied with your selection click "Apply".
+When you're satisfied with your selection, select **Apply**.
-### Viewing metrics across multiple resources
-Some resource types have enabled the ability to query for metrics over multiple resources, as long as they are within the same subscription and location. These resource types can be found at the top of the "Resource Types" dropdown. To get more details on how to view metrics across multiple resources view [this document](metrics-dynamic-scope.md#selecting-multiple-resources).
+### View metrics across multiple resources
+Some resource types can query for metrics over multiple resources. The resources must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu.
-![Screenshot of cross resource types](./media/metrics-charts/multi-resource-scope.png)
+For more information, see [Select multiple resources](metrics-dynamic-scope.md#select-multiple-resources).
-For multi-resource compatible types, you can also query for metrics across a subscription or multiple resource groups. To learn how to do this, view [this article](metrics-dynamic-scope.md#selecting-a-resource-group-or-subscription)
+![Screenshot showing cross-resource types.](./media/metrics-charts/multi-resource-scope.png)
+For types that are compatible with multiple resources, you can query for metrics across a subscription or multiple resource groups. For more information, see [Select a resource group or subscription](metrics-dynamic-scope.md#select-a-resource-group-or-subscription).
-## Create views with multiple metrics and charts
+## Multiple metric lines and charts
-You can create charts that plot multiple metrics lines or show multiple metric charts at once. This functionality allows you to:
+In the Azure metrics explorer, you can create charts that plot multiple metric lines or show multiple metric charts at the same time. This functionality allows you to:
-- correlate related metrics on the same graph to see how one value is related to another-- display metrics with different units of measure in close proximity-- visually aggregate and compare metrics from multiple resources
+- Correlate related metrics on the same graph to see how one value relates to another.
+- Display metrics that use different units of measure in close proximity.
+- Visually aggregate and compare metrics from multiple resources.
-For example, if you have 5 storage accounts and you want to know how much total space is consumed between them, you can create a (stacked) area chart which shows the individual and sum of all the values at particular points in time.
+For example, imagine you have five storage accounts, and you want to know how much space they consume together. You can create a (stacked) area chart that shows the individual values and the sum of all the values at particular points in time.
### Multiple metrics on the same chart
-First, [create a new chart](metrics-getting-started.md#create-your-first-metric-chart). Click **Add Metric** and repeat the steps to add another metric on the same chart.
+To view multiple metrics on the same chart, first [create a new chart](metrics-getting-started.md#create-your-first-metric-chart). Then select **Add metric**. Repeat this step to add another metric on the same chart.
- > [!NOTE]
 - > You typically don't want to have metrics with different units of measure (i.e. "milliseconds" and "kilobytes") or with significantly different scale on one chart. Instead, consider using multiple charts. Click on the Add Chart button to create multiple charts in metrics explorer.
+> [!NOTE]
+> Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly.
+>
+> In these cases, consider using multiple charts instead. In the metrics explorer, select **Add chart** to create a new chart.
### Multiple charts
-Click the **Add chart** and create another chart with a different metric.
+To create another chart that uses a different metric, select **Add chart**.
-### Order or delete multiple charts
+To reorder or delete multiple charts, select the ellipsis (**...**) button to open the chart menu. Then choose **Move up**, **Move down**, or **Delete**.
-To order or delete multiple charts, click on the ellipses ( **...** ) symbol to open the chart menu and choose the appropriate menu item of **Move up**, **Move down**, or **Delete**.
+## Aggregation
-## Changing aggregation
+When you add a metric to a chart, the metrics explorer automatically applies a default aggregation. The default makes sense in basic scenarios. But you can use a different aggregation to gain more insights about the metric.
-When you add a metric to a chart, metrics explorer automatically pre-selects its default aggregation. The default makes sense in the basic scenarios, but you can use a different aggregation to gain additional insights about the metric. Viewing different aggregations on a chart requires that you understand how metrics explorer handles them.
+Before you use different aggregations on a chart, you should understand how the metrics explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time grain*.
-Metrics are the series of measurements (or "metric values") captured over the time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time grain*. You select the size of the time grain [using the Metrics Explorer time picker panel](metrics-getting-started.md#select-a-time-range). If you don't make an explicit selection of the time grain, the time granularity is automatically selected based on the currently selected time range. Once the time grain is determined, the metric values that were captured during each time grain interval are aggregated and placed onto the chart - one datapoint per time grain.
+You select the size of the time grain by using the metrics explorer's [time picker panel](metrics-getting-started.md#select-a-time-range). If you don't explicitly select the time grain, it's chosen automatically based on the currently selected time range. After the time grain is determined, the metric values that were captured during each time grain are aggregated on the chart, one data point per time grain.
-For example, suppose the chart is showing the **Server Response Time** metric using the **Average** aggregation over the **last 24 hours** time span:
+For example, suppose a chart shows the *Server response time* metric. It uses the *average* aggregation over a time span of the *last 24 hours*. In this example:
-- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated datapoints (e.g. the line chart connects 48 dots in the chart plot area). That is, 24 hours x 2 datapoints per hour. Each datapoint represents the *average* of all captured response times for server requests that occurred during each of the relevant 30 min time periods.
-- If you switch the time granularity to 15 minutes, you get 96 aggregated datapoints. That is, 24 hours x 4 datapoints per hour.
+- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated data points. That is, the line chart connects 48 dots in the chart plot area (24 hours x 2 data points per hour). Each data point represents the *average* of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.
+- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, you get 24 hours x 4 data points per hour.
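The data-point arithmetic in this example can be sketched in Python. The sample values and timestamps below are illustrative only, not the metrics explorer's actual implementation:

```python
def aggregate_average(samples, grain_minutes, total_hours=24):
    """Bucket (minute_offset, value) samples into time-grain intervals
    and average each bucket: one data point per time grain."""
    buckets = {}
    for minute, value in samples:
        buckets.setdefault(minute // grain_minutes, []).append(value)
    # Number of data points the chart draws over the whole time span.
    point_count = total_hours * 60 // grain_minutes
    averages = {k: sum(v) / len(v) for k, v in buckets.items()}
    return point_count, averages

# Hypothetical response-time samples: (minutes since span start, milliseconds).
samples = [(5, 120.0), (12, 80.0), (47, 200.0)]
points_30, avg_30 = aggregate_average(samples, 30)  # 24 hours x 2 points per hour
points_15, avg_15 = aggregate_average(samples, 15)  # 24 hours x 4 points per hour
```

With a 30-minute grain, the first bucket averages the two samples captured in minutes 0-29, and the chart would draw 48 points over 24 hours.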
-There are five basic stats aggregation types available in the metrics explorer: **Sum**, **Count**, **Min**, **Max**, and **Average**. The **Sum** aggregation is sometimes referred as **Total** aggregation. For many metrics, Metrics Explorer will hide the aggregations that are totally irrelevant and cannot be used.
+The metrics explorer has five basic statistical aggregation types: sum, count, min, max, and average. The *sum* aggregation is sometimes called the *total* aggregation. For many metrics, the metrics explorer hides the aggregations that are irrelevant and can't be used.
-**Sum** – the sum of all values captured over the aggregation interval
+* **Sum**: The sum of all values captured during the aggregation interval.
-![Screenshot of sum of request](./media/metrics-charts/request-sum.png)
+ ![Screenshot of a sum request.](./media/metrics-charts/request-sum.png)
-**Count** – the number of measurements captured over the aggregation interval. Note that **Count** will be equal to **Sum** in the case where the metric is always captured with the value of 1. This is common when the metric tracks the count of distinct events, and each measurement represents one event (i.e. the code fires off a metric record every time a new request comes in)
+* **Count**: The number of measurements captured during the aggregation interval.
+
+ When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. For example, the code might emit a metric record every time a new request arrives.
-![Screenshot of count of request](./media/metrics-charts/request-count.png)
+ ![Screenshot of a count request.](./media/metrics-charts/request-count.png)
-**Average** – the average of the metric values captured over the aggregation interval
+* **Average**: The average of the metric values captured during the aggregation interval.
-![Screenshot of average request](./media/metrics-charts/request-avg.png)
+ ![Screenshot of an average request.](./media/metrics-charts/request-avg.png)
-**Min** – the smallest value captured over the aggregation interval
+* **Min**: The smallest value captured during the aggregation interval.
-![Screenshot of minimum request](./media/metrics-charts/request-min.png)
+ ![Screenshot of a minimum request.](./media/metrics-charts/request-min.png)
-**Max** – the largest value captured over the aggregation interval
+* **Max**: The largest value captured during the aggregation interval.
-![Screenshot of max request](./media/metrics-charts/request-max.png)
+ ![Screenshot of a maximum request.](./media/metrics-charts/request-max.png)
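As a sketch, with made-up measurement values, the five aggregations reduce the measurements captured in one aggregation interval like this:

```python
def aggregations(values):
    """Apply the five basic aggregations to one interval's measurements."""
    return {
        "sum": sum(values),
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "average": sum(values) / len(values),
    }

# Four hypothetical request events, each recorded with the value 1.
# Count equals sum here, because every measurement is 1.
event_agg = aggregations([1, 1, 1, 1])

# Hypothetical response times, in milliseconds, for the same interval.
time_agg = aggregations([120, 80, 200])
```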
-## Apply filters to charts
+## Filters
-You can apply filters to the charts that show metrics with dimensions. For example, if the metric "Transaction count" has a dimension, "Response type", which indicates whether the response from transactions succeeded or failed then filtering on this dimension would plot a chart line for only successful (or only failed) transactions.
+You can apply filters to charts whose metrics have dimensions. For example, imagine a "Transaction count" metric that has a "Response type" dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, you'll see a chart line for only successful (or only failed) transactions.
-### To add a filter
+### Add a filter
-1. Select **Add filter** above the chart
+1. Above the chart, select **Add filter**.
-2. Select which dimension (property) you want to filter
+2. Select a dimension (property) to filter.
![Screenshot that shows the dimensions (properties) you can filter.](./media/metrics-charts/028.png)
-3. Select which dimension values you want to include when plotting the chart (this example shows filtering out the successful storage transactions):
+3. Select the dimension values you want to include when you plot the chart. The following example filters out the successful storage transactions:
- ![Screenshot that shows the filtering out of the successful storage transactions.](./media/metrics-charts/029.png)
+ ![Screenshot that shows filtering out the successful storage transactions.](./media/metrics-charts/029.png)
-4. After selecting the filter values, click away from the Filter Selector to close it. Now the chart shows how many storage transactions have failed:
+4. Select outside the **Filter Selector** to close it. Now the chart shows how many storage transactions have failed:
- ![Screenshot that shows how many storage transactions have failed](./media/metrics-charts/030.png)
+ ![Screenshot that shows how many storage transactions have failed.](./media/metrics-charts/030.png)
-5. You can repeat steps 1-4 to apply multiple filters to the same charts.
+You can repeat these steps to apply multiple filters to the same charts.
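Conceptually, a dimension filter keeps only the data points whose dimension value matches, before they're aggregated onto the chart. The following is a rough Python sketch that uses hypothetical transaction records, not real Azure Monitor data:

```python
def filter_by_dimension(points, dimension, allowed_values):
    """Keep only the points whose dimension value is in allowed_values."""
    return [p for p in points if p.get(dimension) in allowed_values]

# Hypothetical "Transaction count" points with a "ResponseType" dimension.
points = [
    {"ResponseType": "Success", "count": 10},
    {"ResponseType": "ClientOtherError", "count": 3},
    {"ResponseType": "Success", "count": 7},
]
# Filtering out successes leaves only the failed transactions on the chart.
failed = filter_by_dimension(points, "ResponseType", {"ClientOtherError"})
failed_total = sum(p["count"] for p in failed)
```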
-## Apply splitting to a chart
+## Metric splitting
-You can split a metric by dimension to visualize how different segments of the metric compare against each other, and identify the outlying segments of a dimension.
+You can split a metric by dimension to visualize how different segments of the metric compare. Splitting can also help you identify the outlying segments of a dimension.
### Apply splitting
-1. Click on **Apply splitting** above the chart.
+1. Above the chart, select **Apply splitting**.
> [!NOTE]
- > Splitting cannot be used with charts that have multiple metrics. Also, you can have multiple filters but only one splitting dimension applied to any single chart.
+ > Charts that have multiple metrics can't use the splitting functionality. Also, although a chart can have multiple filters, it can have only one splitting dimension.
-2. Choose a dimension on which you want to segment your chart:
+2. Choose a dimension on which to segment your chart:
- ![Screenshot that shows the selected dimension on which you segment your chart.](./media/metrics-charts/031.png)
+ ![Screenshot that shows the selected dimension on which to segment the chart.](./media/metrics-charts/031.png)
- Now the chart now shows multiple lines, one for each segment of dimension:
+ The chart now shows multiple lines, one for each dimension segment:
- ![Screenshot that shows multiple lines, one for each segment of dimension.](./media/metrics-charts/032.png)
+ ![Screenshot that shows lines for each dimension segment.](./media/metrics-charts/032.png)
-3. Click away from the **Grouping Selector** to close it.
+3. Select outside the **Grouping Selector** to close it.
> [!NOTE]
- > Use both Filtering and Splitting on the same dimension to hide the segments that are irrelevant for your scenario and make charts easier to read.
+ > To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension.
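Splitting can be thought of as grouping the data points by distinct dimension value, producing one chart line per segment. A minimal sketch with hypothetical data (the dimension name and values are illustrative):

```python
from collections import defaultdict

def split_by_dimension(points, dimension):
    """Group points into one series per distinct dimension value."""
    series = defaultdict(list)
    for p in points:
        series[p[dimension]].append(p["value"])
    return dict(series)

points = [
    {"ApiName": "GetBlob", "value": 5},
    {"ApiName": "PutBlob", "value": 2},
    {"ApiName": "GetBlob", "value": 8},
]
# One chart line per API name.
lines = split_by_dimension(points, "ApiName")
```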
-## Lock boundaries of chart y-axis
+## Locking the range of the y-axis
-Locking the range of the y-axis becomes important when the chart shows smaller fluctuations of larger values.
+Locking the range of the value (y) axis becomes important in charts that show small fluctuations of large values.
-For example, when the volume of successful requests drops down from 99.99% to 99.5%, it may represent a significant reduction in the quality of service. However, noticing a small numeric value fluctuation would be difficult or even impossible from the default chart settings. In this case you could lock the lowest boundary of the chart to 99%, which would make this small drop more apparent.
+For example, a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. But noticing a small numeric value fluctuation would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent.
-Another example is a fluctuation in the available memory, where the value will technically never reach 0. Fixing the range to a higher value may make the drops in available memory easier to spot.
+Another example is a fluctuation in the available memory. In this scenario, the value will technically never reach 0. Fixing the range to a higher value might make drops in available memory easier to spot.
-To control the y-axis range, use the “…” chart menu, and select **Chart settings** to access advanced chart settings.
+To control the y-axis range, open the chart menu (**...**). Then select **Chart settings** to access advanced chart settings.
-![Screenshot that highlights the chart settings option.](./media/metrics-charts/033.png)
+![Screenshot that highlights the chart settings selection.](./media/metrics-charts/033.png)
- Modify the values in the Y-Axis Range section, or use **Auto** button to revert to defaults.
+Modify the values in the **Y-axis range** section, or select **Auto** to revert to the default values.
![Screenshot that highlights the Y-axis range section.](./media/metrics-charts/034.png)

> [!WARNING]
-> Locking the boundaries of y-axis for the charts that track various counts or sums over a period of time (and thus use count, sum, minimum, or maximum aggregations) usually requires specifying a fixed time granularity rather than relying on the automatic defaults. This is necessary is because the values on charts change when the time granularity is automatically modified by the user resizing browser window or going from one screen resolution to another. The resulting change in time granularity effects the look of the chart, invalidating current selection of y-axis range.
+> If you need to lock the boundaries of the y-axis for charts that track counts or sums over a period of time (by using count, sum, min, or max aggregations), you should usually specify a fixed time granularity. In this case, you shouldn't rely on the automatic defaults.
+>
+> You choose a fixed time granularity because chart values change when the time granularity is automatically modified after a user resizes a browser window or changes screen resolution. The resulting change in time granularity affects the look of the chart, invalidating the current selection of the y-axis range.
-## Change colors of chart lines
+## Line colors
After you configure the charts, the chart lines are automatically assigned a color from a default palette. You can change those colors.
-To change the color of a chart line, click on the colored bar in the legend that corresponds to the chart. The color picker dialog will open. Use the color picker to configure the color for the line.
+To change the color of a chart line, select the colored bar in the legend that corresponds to the chart. The color picker dialog box opens. Use the color picker to configure the line color.
-![Screenshot that shows how to change color](./media/metrics-charts/035.png)
+![Screenshot that shows how to change color.](./media/metrics-charts/035.png)
-After the chart colors are configured, they will remain that way when you pin the chart to a dashboard. The following section shows you how to pin a chart.
+Your customized colors are preserved when you pin the chart to a dashboard. The following section shows how to pin a chart.
-## Pin charts to dashboards
+## Pinning to dashboards
-After configuring the charts, you may want to add it to the dashboards so that you can view it again, possibly in context of other monitoring telemetry, or share with your team.
+After you configure a chart, you might want to add it to a dashboard. By pinning a chart to a dashboard, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring telemetry.
-To pin a configured chart to a dashboard:
+To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Pin to dashboard**.
-After configuring your chart, click **Pin to dashboard** in the right top corner of the chart.
+![Screenshot showing how to pin a chart to a dashboard.](./media/metrics-charts/036.png)
-![Screenshot that shows you how to pin to chart](./media/metrics-charts/036.png)
+## Alert rules
-## Create alert rules
+You can use your visualization criteria to create a metric-based alert rule. The new alert rule will include your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the alert rule creation pane.
-You can use the criteria you have set to visualize your metrics as the basis of a metric based alert rule. The new alerting rule will include your target resource, metric, splitting, and filter dimensions from your chart. You will be able to modify these settings later on the alert rule creation pane.
+To begin, select **New alert rule**.
-### To create a new alert rule, click **New Alert rule**
+![Screenshot that shows the New alert rule button highlighted in red.](./media/metrics-charts/042.png)
-![New alert rule button highlighted in red](./media/metrics-charts/042.png)
+The alert rule creation pane opens. In the pane, you see the chart's metric dimensions. The fields in the pane are prepopulated to help you customize the rule.
-You will be taken to the alert rule creation pane with the underlying metric dimensions from your chart pre-populated to make it easier to generate custom alert rules.
+![Screenshot showing the rule creation pane.](./media/metrics-charts/041.png)
-![Create alert rule](./media/metrics-charts/041.png)
-
-Check out this [article](alerts-metric.md) to learn more about setting up metric alerts.
+For more information, see [Create, view, and manage metric alerts](alerts-metric.md).
## Troubleshooting
-*I don't see any data on my chart.*
+If you don't see any data on your chart, review the following troubleshooting information:
-* Filters apply to all the charts on the pane. Make sure that, while you're focusing on one chart, you didn't set a filter that excludes all the data on another.
+* Filters apply to all of the charts on the pane. While you focus on a chart, make sure that you don't set a filter that excludes all the data on another chart.
-* If you want to set different filters on different charts, create them in different blades, save them as separate favorites. If you want, you can pin them to the dashboard so that you can see them alongside each other.
+* To set different filters on different charts, create the charts in different blades. Then save the charts as separate favorites. If you want, you can pin the charts to the dashboard so you can see them together.
-* If you segment a chart by a property that is not defined on the metric, then there will be nothing on the chart. Try clearing the segmentation (splitting), or choose a different property.
+* If you segment a chart by a property that the metric doesn't define, the chart displays no content. Try clearing the segmentation (splitting), or choose a different property.
## Next steps
- Read [Creating custom KPI dashboards](../learn/tutorial-app-dashboards.md) to learn about the best practices for creating actionable dashboards with metrics.
+To create actionable dashboards by using metrics, see [Creating custom KPI dashboards](../learn/tutorial-app-dashboards.md).
+
+
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-dynamic-scope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-dynamic-scope.md
@@ -1,6 +1,6 @@
---
-title: Viewing multiple resources in Metrics Explorer
-description: Learn how to visualize multiple resources on Azure Monitor Metrics Explorer
+title: View multiple resources in the Azure metrics explorer
+description: Learn how to visualize multiple resources by using the Azure metrics explorer.
author: ritaroloff services: azure-monitor
@@ -10,77 +10,80 @@ ms.author: riroloff
ms.subservice: metrics ---
-# Viewing multiple resources in Metrics Explorer
+# View multiple resources in the Azure metrics explorer
-The resource scope picker allows you to view metrics across multiple resources that are within the same subscription and region. Below are instructions on how to view multiple resources in Azure Monitor Metrics Explorer.
+The resource scope picker allows you to view metrics across multiple resources that are within the same subscription and region. This article explains how to view multiple resources by using the Azure metrics explorer feature of Azure Monitor.
-## Selecting a resource
+## Select a resource
-Select **Metrics** from the **Azure Monitor** menu or from the **Monitoring** section of a resource's menu. Click on the "Select a scope" button to open the resource scope picker, which will allow you to select the resource(s) you want to see metrics for. This should already be populated if you opened metrics explorer from a resource's menu.
+Select **Metrics** from the **Azure Monitor** menu or from the **Monitoring** section of a resource's menu. Then choose **Select a scope** to open the scope picker.
-![Screenshot of resource scope picker highlighted in red](./media/metrics-charts/019.png)
+Use the scope picker to select the resources whose metrics you want to see. The scope should be populated if you opened the metrics explorer from a resource's menu.
-## Selecting multiple resources
+![Screenshot showing how to open the resource scope picker.](./media/metrics-charts/019.png)
-Some resource types have enabled the ability to query for metrics over multiple resources, as long as they are within the same subscription and location. These resource types can be found at the top of the "Resource Types" dropdown.
+## Select multiple resources
-![Screenshot that shows a dropdown of resources that are multi-resource compatible ](./media/metrics-charts/020.png)
+Some resource types can query for metrics over multiple resources. The metrics must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu.
+
+![Screenshot that shows a menu of resources that are compatible with multiple resources.](./media/metrics-charts/020.png)
> [!WARNING]
-> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups or a subscription. In order to do this, please follow the instructions in [this document](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal).
+> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups, or a subscription. For more information, see [Add or remove Azure role assignments by using the Azure portal](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal).
-In order to visualize metrics over multiple resources, start by selecting multiple resources within the resource scope picker.
+To visualize metrics over multiple resources, start by selecting multiple resources within the resource scope picker.
-![Screenshot that shows how to select multiple resources](./media/metrics-charts/021.png)
+![Screenshot that shows how to select multiple resources.](./media/metrics-charts/021.png)
> [!NOTE]
-> You are only able to select multiple resources within the same resource type, location and subscription. Resources outside of this criteria will not be selectable.
+> The resources you select must be within the same resource type, location, and subscription. Resources that don't fit these criteria aren't selectable.
-When you are done selecting, click on the "Apply" button to save your selection.
+When you finish, choose **Apply** to save your selections.
-## Selecting a resource group or subscription
+## Select a resource group or subscription
> [!WARNING]
-> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups or a subscription.
+> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups, or a subscription.
-For multi-resource compatible types, you can also query for metrics across a subscription or multiple resource groups. Start by selecting a subscription or one or more resource groups:
+For types that are compatible with multiple resources, you can query for metrics across a subscription or multiple resource groups. Start by selecting a subscription or one or more resource groups:
-![Screenshot that shows how to query across multiple resource groups ](./media/metrics-charts/022.png)
+![Screenshot that shows how to query across multiple resource groups.](./media/metrics-charts/022.png)
-You will then need to select a resource type and location before you can continue applying your new scope.
+Select a resource type and location.
-![Screenshot that shows the selected resource groups ](./media/metrics-charts/023.png)
+![Screenshot that shows the selected resource groups.](./media/metrics-charts/023.png)
-You are also able to expand the selected scopes to verify which resources this will apply to.
+You can expand the selected scopes to verify the resources your selections apply to.
-![Screenshot that shows the selected resources within the groups ](./media/metrics-charts/024.png)
+![Screenshot that shows the selected resources within the groups.](./media/metrics-charts/024.png)
-Once you are finished selecting your scopes, click "Apply" to save your selections.
+When you finish selecting scopes, select **Apply**.
-## Splitting and filtering by resource group or resources
+## Split and filter by resource group or resources
-After plotting your resources, you can use the splitting and filtering tool to gain more insight into your data.
+After plotting your resources, you can use splitting and filtering to gain more insight into your data.
-Splitting allows you to visualize how different segments of the metric compare with each other. For instance, when you are plotting a metric for multiple resources you can use the "Apply splitting" tool to split by resource id or resource group. This will allow you to easily compare a single metric across multiple resources or resource groups.
+Splitting allows you to visualize how different segments of the metric compare with each other. For instance, when you plot a metric for multiple resources, you can choose **Apply splitting** to split by resource ID or resource group. The split allows you to compare a single metric across multiple resources or resource groups.
-For example, below is a chart of the percentage CPU across 9VMs. By splitting by resource id, you can easily see how percentage CPU differs per VM.
+For example, the following chart shows the percentage CPU across nine VMs. When you split by resource ID, you see how percentage CPU differs by VM.
-![Screenshot that shows how you can use splitting to see percentage CPU per VM](./media/metrics-charts/026.png)
+![Screenshot that shows how to use splitting to see the percentage CPU across VMs.](./media/metrics-charts/026.png)
-In addition to splitting, you can use the filtering feature to only display the resource groups that you want to see. For instance, if you want to view the percentage CPU for VMs for a certain resource group, you can use the "Add filter" tool to filter by resource group. In this example we filter by TailspinToysDemo, which removes metrics associated with resources in TailspinToys.
+Along with splitting, you can use filtering to display only the resource groups that you want to see. For instance, to view the percentage CPU for VMs for a certain resource group, you can select **Add filter** to filter by resource group.
-![Screenshot that shows how you can filter by resource group](./media/metrics-charts/027.png)
+In this example, we filter by TailspinToysDemo. Here, the filter removes metrics associated with resources in TailspinToys.
-## Pinning your multi-resource charts
+![Screenshot that shows how to filter by resource group.](./media/metrics-charts/027.png)
-> [!WARNING]
-> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups or a subscription. In order to do this, please follow the instructions in [this document](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal).
+## Pin multiple-resource charts
+
+Multiple-resource charts that visualize metrics across resource groups and subscriptions require the user to have *Monitoring Reader* permission at the subscription level. Ensure that all users of the dashboards to which you pin multiple-resource charts have sufficient permissions. For more information, see [Add or remove Azure role assignments by using the Azure portal](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal).
-To pin your multi-resource chart, please follow the instructions [here](https://docs.microsoft.com/azure/azure-monitor/platform/metrics-charts#pin-charts-to-dashboards).
+To pin your multiple-resource chart to a dashboard, see [Pinning to dashboards](https://docs.microsoft.com/azure/azure-monitor/platform/metrics-charts#pinning-to-dashboards).
## Next steps
-* [Troubleshooting Metrics Explorer](metrics-troubleshoot.md)
+* [Troubleshoot the metrics explorer](metrics-troubleshoot.md)
* [See a list of available metrics for Azure services](metrics-supported.md)
* [See examples of configured charts](metric-chart-samples.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-getting-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-getting-started.md
@@ -34,7 +34,7 @@ To create a metric chart, from your resource, resource group, subscription, or A
> ![Select a metric](./media/metrics-getting-started/metrics-dropdown.png)
-4. Optionally, you can [change the metric aggregation](metrics-charts.md#changing-aggregation). For example, you might want your chart to show minimum, maximum, or average values of the metric.
+4. Optionally, you can [change the metric aggregation](metrics-charts.md#aggregation). For example, you might want your chart to show minimum, maximum, or average values of the metric.
> [!TIP]
> Use the **Add metric** button and repeat these steps if you want to see multiple metrics plotted in the same chart. For multiple charts in one view, select the **Add chart** button on top.
@@ -53,7 +53,7 @@ By default, the chart shows the most recent 24 hours of metrics data. Use the **
## Apply dimension filters and splitting
-[Filtering](metrics-charts.md#apply-filters-to-charts) and [splitting](metrics-charts.md#apply-splitting-to-a-chart) are powerful diagnostic tools for the metrics that have dimensions. These features show how various metric segments ("dimension values") impact the overall value of the metric, and allow you to identify possible outliers.
+[Filtering](metrics-charts.md#filters) and [splitting](metrics-charts.md#apply-splitting) are powerful diagnostic tools for the metrics that have dimensions. These features show how various metric segments ("dimension values") impact the overall value of the metric, and allow you to identify possible outliers.
- **Filtering** lets you choose which dimension values are included in the chart. For example, you might want to show successful requests when charting the *server response time* metric. You would need to apply the filter on the *success of request* dimension.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/metrics-troubleshoot.md
@@ -44,7 +44,7 @@ Some resources don't constantly emit their metrics. For example, Azure will no
### All metric values were outside of the locked y-axis range
-By [locking the boundaries of chart y-axis](metrics-charts.md#lock-boundaries-of-chart-y-axis), you can unintentionally make the chart display area not show the chart line. For example, if the y-axis is locked to a range between 0% and 50%, and the metric has a constant value of 100%, the line is always rendered outside of the visible area, making the chart appear blank.
+By [locking the boundaries of chart y-axis](metrics-charts.md#locking-the-range-of-the-y-axis), you can unintentionally make the chart display area not show the chart line. For example, if the y-axis is locked to a range between 0% and 50%, and the metric has a constant value of 100%, the line is always rendered outside of the visible area, making the chart appear blank.
**Solution:** Verify that the y-axis boundaries of the chart aren't locked outside of the range of the metric values. If the y-axis boundaries are locked, you may want to temporarily reset them to ensure that the metric values don't fall outside of the chart range. Locking the y-axis range isn't recommended with automatic granularity for the charts with **sum**, **min**, and **max** aggregation because their values will change with granularity by resizing browser window or going from one screen resolution to another. Switching granularity may leave the display area of your chart empty.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/powerbi.md
@@ -24,6 +24,9 @@ To import data from a [Log Analytics workspace](manage-access.md) in Azure Monit
## Export query

Start by creating a [log query](../log-query/log-query-overview.md) that returns the data that you want to populate the Power BI dataset. You then export that query to [Power Query (M) language](/powerquery-m/power-query-m-language-specification) which can be used by Power BI Desktop.
+> [!WARNING]
+> Be careful to [optimize your query](../log-query/query-optimization.md) so that it doesn't take excessively long to run, or it may time out. Note the **timespan** value in the exported query, which defines the timespan of data that the query will retrieve. Use the smallest timespan that you require to limit the amount of data that the query returns.
+ 1. [Create the log query in Log Analytics](../log-query/log-analytics-tutorial.md) to extract the data for your dataset. 2. Select **Export** > **Power BI Query (M)**. This exports the query to a text file called **PowerBIQuery.txt**.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/resource-logs-categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/resource-logs-categories.md
@@ -13,7 +13,7 @@ ms.date: 12/09/2020
[Azure Monitor resource logs](./platform-logs-overview.md) are logs emitted by Azure services that describe the operation of those services or resources. All resource logs available through Azure Monitor share a common top-level schema, with flexibility for each service to emit unique properties for their own events.
-A combination of the resource type (available in the `resourceId` property) and the `category` uniquely identify a schema. There is a common schema for all resource logs with service specific fields then added for different log categories. For more information, see [Common and service specific schema for Azure Resource Logs]()
+A combination of the resource type (available in the `resourceId` property) and the `category` uniquely identify a schema. There is a common schema for all resource logs with service-specific fields then added for different log categories. For more information, see [Common and service-specific schema for Azure Resource Logs]()
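That identification can be sketched as a small helper (illustrative only; not an Azure SDK API) that pairs the resource type parsed out of `resourceId` with the `category` value:

```python
def schema_key(record):
    """Identify a resource log record's schema by (resource type, category)."""
    parts = record["resourceId"].split("/")
    # The resource type is the <Namespace>/<Type> segment after "providers".
    i = [p.lower() for p in parts].index("providers")
    resource_type = "/".join(parts[i + 1:i + 3])
    return resource_type, record["category"]

key = schema_key({
    "resourceId": "/subscriptions/0000/resourceGroups/rg/providers/Microsoft.KeyVault/vaults/myvault",
    "category": "AuditEvent",
})
print(key)  # ('Microsoft.KeyVault/vaults', 'AuditEvent')
```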
## Costs
@@ -29,7 +29,7 @@ Some categories may only be supported for specific types of resources. See the r
If you think something is missing, you can open a GitHub comment at the bottom of this article. ## Microsoft.AnalysisServices/servers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -39,7 +39,7 @@ Cost: Free
## Microsoft.ApiManagement/service
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -48,7 +48,7 @@ Cost: Free
## Microsoft.AppPlatform/Spring
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -58,7 +58,7 @@ Cost: Free
## Microsoft.Automation/automationAccounts
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -69,7 +69,7 @@ Cost: Free
## Microsoft.Batch/batchAccounts
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -78,7 +78,7 @@ Cost: Free
## Microsoft.BatchAI/workspaces
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -89,7 +89,7 @@ Cost: Free
## Microsoft.Blockchain/blockchainMembers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -101,7 +101,7 @@ Cost: Free
## Microsoft.Blockchain/cordaMembers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -110,16 +110,16 @@ Cost: Free
## Microsoft.Cdn/cdnwebapplicationfirewallpolicies
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
-|WebApplicationFirewallLogs|Web Appliation Firewall Logs|
+|WebApplicationFirewallLogs|Web Application Firewall Logs|
## Microsoft.Cdn/profiles
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -128,7 +128,7 @@ Cost: Free
## Microsoft.Cdn/profiles/endpoints
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -137,7 +137,7 @@ Cost: Free
## Microsoft.ClassicNetwork/networksecuritygroups
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -146,7 +146,7 @@ Cost: Free
## Microsoft.CognitiveServices/accounts
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -157,7 +157,7 @@ Cost: Free
## Microsoft.ContainerRegistry/registries
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -167,7 +167,7 @@ Cost: Free
## Microsoft.ContainerService/managedClusters
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -180,7 +180,7 @@ Cost: Free
## Microsoft.CustomProviders/resourceproviders
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -189,7 +189,7 @@ Cost: Free
## Microsoft.Databricks/workspaces
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -207,7 +207,7 @@ Cost: Free
## Microsoft.DataFactory/factories
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -218,7 +218,7 @@ Cost: Free
## Microsoft.DataLakeStore/accounts
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -228,7 +228,7 @@ Cost: Free
## Microsoft.DataShare/accounts
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -240,7 +240,7 @@ Cost: Free
## Microsoft.DBforMariaDB/servers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -250,7 +250,7 @@ Cost: Free
## Microsoft.DBforMySQL/flexibleServers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -260,7 +260,7 @@ Cost: Free
## Microsoft.DBforMySQL/servers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -270,7 +270,7 @@ Cost: Free
## Microsoft.DBforPostgreSQL/flexibleServers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -279,7 +279,7 @@ Cost: Free
## Microsoft.DBforPostgreSQL/servers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -290,7 +290,7 @@ Cost: Free
## Microsoft.DBforPostgreSQL/serversv2
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -299,7 +299,7 @@ Cost: Free
## Microsoft.DesktopVirtualization/applicationgroups
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -310,7 +310,7 @@ Cost: Free
## Microsoft.DesktopVirtualization/hostpools
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -323,7 +323,7 @@ Cost: Free
## Microsoft.DesktopVirtualization/workspaces
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -335,7 +335,7 @@ Cost: Free
## Microsoft.Devices/IotHubs
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -357,7 +357,7 @@ Cost: Free
## Microsoft.Devices/provisioningServices
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -367,7 +367,7 @@ Cost: Free
## Microsoft.DocumentDB/databaseAccounts
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -383,7 +383,7 @@ Cost: Free
## Microsoft.EventGrid/domains
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -393,7 +393,7 @@ Cost: Free
## Microsoft.EventGrid/systemTopics
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -402,7 +402,7 @@ Cost: Free
## Microsoft.EventGrid/topics
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -412,7 +412,7 @@ Cost: Free
## Microsoft.EventHub/namespaces
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -427,7 +427,7 @@ Cost: Free
## Microsoft.HealthcareApis/services
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -436,7 +436,7 @@ Cost: Free
## Microsoft.Insights/AutoscaleSettings
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -446,7 +446,7 @@ Cost: Free
## Microsoft.Insights/Components
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -465,7 +465,7 @@ Cost: Free
## Microsoft.KeyVault/vaults
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -474,7 +474,7 @@ Cost: Free
## Microsoft.Kusto/Clusters
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -489,7 +489,7 @@ Cost: Free
## Microsoft.Logic/integrationAccounts
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -498,7 +498,7 @@ Cost: Free
## Microsoft.Logic/workflows
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -507,7 +507,7 @@ Cost: Free
## Microsoft.MachineLearningServices/workspaces
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -520,7 +520,7 @@ Cost: Free
## Microsoft.Media/mediaservices
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -529,7 +529,7 @@ Cost: Free
## Microsoft.Network/applicationGateways
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -540,7 +540,7 @@ Cost: Free
## Microsoft.Network/azurefirewalls
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -550,7 +550,7 @@ Cost: Free
## Microsoft.Network/bastionHosts
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -559,7 +559,7 @@ Cost: Free
## Microsoft.Network/expressRouteCircuits
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -568,7 +568,7 @@ Cost: Free
## Microsoft.Network/frontdoors
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -578,7 +578,7 @@ Cost: Free
## Microsoft.Network/loadBalancers
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -588,7 +588,7 @@ Cost: Free
## Microsoft.Network/networksecuritygroups
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -599,7 +599,7 @@ Cost: Free
## Microsoft.Network/publicIPAddresses
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -610,7 +610,7 @@ Cost: Free
## Microsoft.Network/trafficManagerProfiles
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -619,7 +619,7 @@ Cost: Free
## Microsoft.Network/virtualNetworkGateways
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -632,7 +632,7 @@ Cost: Free
## Microsoft.Network/virtualNetworks
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -641,7 +641,7 @@ Cost: Free
## Microsoft.PowerBIDedicated/capacities
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -650,7 +650,7 @@ Cost: Free
## Microsoft.RecoveryServices/Vaults
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -672,7 +672,7 @@ Cost: Free
## Microsoft.Relay/namespaces
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -681,7 +681,7 @@ Cost: Free
## Microsoft.Search/searchServices
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -690,7 +690,7 @@ Cost: Free
## Microsoft.ServiceBus/namespaces
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -699,7 +699,7 @@ Cost: Free
## Microsoft.SignalRService/SignalR
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -708,7 +708,7 @@ Cost: Free
## Microsoft.Sql/managedInstances
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -719,7 +719,7 @@ Cost: Free
## Microsoft.Sql/managedInstances/databases
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -731,7 +731,7 @@ Cost: Free
## Microsoft.Sql/servers/databases
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -755,7 +755,7 @@ Cost: Free
## Microsoft.Storage/storageAccounts/blobServices
-Cost: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.](https://azure.microsoft.com/pricing/details/monitor/)
+Cost to export: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.](https://azure.microsoft.com/pricing/details/monitor/)
|Category |Category Display Name| |---|---|
@@ -766,7 +766,7 @@ Cost: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.]
## Microsoft.Storage/storageAccounts/fileServices
-Cost: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.](https://azure.microsoft.com/pricing/details/monitor/)
+Cost to export: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.](https://azure.microsoft.com/pricing/details/monitor/)
|Category |Category Display Name| |---|---|
@@ -777,7 +777,7 @@ Cost: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.]
## Microsoft.Storage/storageAccounts/queueServices
-Cost: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.](https://azure.microsoft.com/pricing/details/monitor/)
+Cost to export: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.](https://azure.microsoft.com/pricing/details/monitor/)
|Category |Category Display Name| |---|---|
@@ -788,7 +788,7 @@ Cost: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.]
## Microsoft.Storage/storageAccounts/tableServices
-Cost: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.](https://azure.microsoft.com/pricing/details/monitor/)
+Cost to export: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.](https://azure.microsoft.com/pricing/details/monitor/)
|Category |Category Display Name| |---|---|
@@ -799,7 +799,7 @@ Cost: Paid as outlined in Platform Logs section of [Azure Monitor Pricing page.]
## Microsoft.StreamAnalytics/streamingjobs
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -809,7 +809,7 @@ Cost: Free
## Microsoft.Synapse/workspaces
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -821,7 +821,7 @@ Cost: Free
## Microsoft.Synapse/workspaces/bigDataPools
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -830,7 +830,7 @@ Cost: Free
## Microsoft.Synapse/workspaces/sqlPools
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -844,7 +844,7 @@ Cost: Free
## microsoft.web/hostingenvironments
-Cost: Free
+Cost to export: Free
|Category |Category Display Name| |---|---|
@@ -853,7 +853,7 @@ Cost: Free
## microsoft.web/sites
-Cost: Free
+Cost to export: Free
|Category |Category Display Name|
@@ -868,7 +868,7 @@ Cost: Free
## microsoft.web/sites/slots
-Cost: Free
+Cost to export: Free
|Category |Category Display Name|
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/dynamic-change-volume-service-level https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/dynamic-change-volume-service-level.md
@@ -13,14 +13,13 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: how-to
-ms.date: 11/12/2020
+ms.date: 01/14/2021
ms.author: b-juche --- # Dynamically change the service level of a volume > [!IMPORTANT]
-> * The public preview registration for this feature is on hold until further notice.
-> * Dynamically changing the service level of a replication destination volume is currently not supported.
+> Dynamically changing the service level of a replication destination volume is currently not supported.
You can change the service level of an existing volume by moving the volume to another capacity pool that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not impact access to the volume.
@@ -33,7 +32,7 @@ The capacity pool that you want to move the volume to must already exist. The ca
* After the volume is moved to another capacity pool, you will no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool. * If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*).
-<!--
+ ## Register the feature The feature to move a volume to another capacity pool is currently in preview. If you are using this feature for the first time, you need to register the feature first.
@@ -53,7 +52,7 @@ The feature to move a volume to another capacity pool is currently in preview. I
Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFTierChange ``` You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
+
## Move a volume to another capacity pool 1. On the Volumes page, right-click the volume whose service level you want to change. Select **Change Pool**.
@@ -71,3 +70,4 @@ You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view
* [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) * [Set up a capacity pool](azure-netapp-files-set-up-capacity-pool.md)
+* [Troubleshoot issues for changing the capacity pool of a volume](troubleshoot-capacity-pools.md#issues-when-changing-the-capacity-pool-of-a-volume)
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/troubleshoot-capacity-pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-capacity-pools.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: troubleshooting
-ms.date: 11/06/2020
+ms.date: 01/14/2021
ms.author: b-juche --- # Troubleshoot capacity pool issues
@@ -30,9 +30,6 @@ This article describes resolutions to issues you might have when managing capaci
## Issues when changing the capacity pool of a volume
-> [!IMPORTANT]
-> The [Dynamically change the service level of a volume](dynamic-change-volume-service-level.md) public preview registration is on hold until further notice.
- | Error condition | Resolution | |-|-| | Changing the capacity pool for a volume is not permitted. | You might not be authorized yet to use this feature. <br> The feature to move a volume to another capacity pool is currently in preview. If you are using this feature for the first time, you need to register the feature first and set `-FeatureName ANFTierChange`. See the registration steps in [Dynamically change the service level of a volume](dynamic-change-volume-service-level.md). |
azure-relay https://docs.microsoft.com/en-us/azure/azure-relay/relay-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-faq.md
@@ -65,7 +65,7 @@ Relays that are opened by using the **netTCPRelay** WCF binding treat messages n
## Quotas | Quota name | Scope | Notes | Value | | --- | --- | --- | --- |
-| Concurrent listeners on a relay |Entity |Subsequent requests for additional connections are rejected and an exception is received by the calling code. |25 |
+| Concurrent listeners on a relay |Entity (hybrid connection or WCF relay) |Subsequent requests for additional connections are rejected and an exception is received by the calling code. |25 |
| Concurrent relay connections per all relay endpoints in a service namespace |Namespace |- |5,000 | | Relay endpoints per service namespace |Namespace |- |10,000 | | Message size for [NetOnewayRelayBinding](/dotnet/api/microsoft.servicebus.netonewayrelaybinding) and [NetEventRelayBinding](/dotnet/api/microsoft.servicebus.neteventrelaybinding) relays |Namespace |Incoming messages that exceed these quotas are rejected and an exception is received by the calling code. |64 KB |
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
@@ -2,7 +2,7 @@
title: Deploy resources with Azure CLI and template description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Resource Manager template. ms.topic: conceptual
-ms.date: 10/22/2020
+ms.date: 01/15/2021
--- # Deploy resources with ARM templates and Azure CLI
@@ -129,7 +129,7 @@ To avoid conflicts with concurrent deployments and to ensure unique entries in t
Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. This feature is currently in preview.
-The following examples show how to create and deploy a template spec. These commands are only available if you've [signed up for the preview](https://aka.ms/templateSpecOnboarding).
+The following examples show how to create and deploy a template spec.
First, create the template spec by providing the ARM template.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
@@ -2,7 +2,7 @@
title: Deploy resources with PowerShell and template description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Resource Manager template. ms.topic: conceptual
-ms.date: 10/22/2020
+ms.date: 01/15/2021
--- # Deploy resources with ARM templates and Azure PowerShell
@@ -130,7 +130,7 @@ To avoid conflicts with concurrent deployments and to ensure unique entries in t
Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. This feature is currently in preview.
-The following examples show how to create and deploy a template spec. These commands are only available if you've [signed up for the preview](https://aka.ms/templateSpecOnboarding).
+The following examples show how to create and deploy a template spec.
First, create the template spec by providing the ARM template.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/always-encrypted-enclaves-configure-attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-configure-attestation.md new file mode 100644
@@ -0,0 +1,149 @@
+---
+title: "Configure Azure Attestation for your Azure SQL logical server"
+description: "Configure Azure Attestation for Always Encrypted with secure enclaves in Azure SQL Database."
+keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
+services: sql-database
+ms.service: sql-database
+ms.subservice: security
+ms.devlang:
+ms.topic: how-to
+author: jaszymas
+ms.author: jaszymas
+ms.reviewer: vanto
+ms.date: 01/15/2021
+---
+
+# Configure Azure Attestation for your Azure SQL logical server
+
+[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
+
+> [!NOTE]
+> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
+
+[Microsoft Azure Attestation](../../attestation/overview.md) is a solution for attesting Trusted Execution Environments (TEEs), including Intel Software Guard Extensions (Intel SGX) enclaves.
+
+To use Azure Attestation for attesting Intel SGX enclaves used for [Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database, you need to:
+
+1. Create an [attestation provider](../../attestation/basic-concepts.md#attestation-provider) and configure it with the recommended attestation policy.
+
+2. Grant your Azure SQL logical server access to your attestation provider.
+
+> [!NOTE]
+> Configuring attestation is the responsibility of the attestation administrator. See [Roles and responsibilities when configuring SGX enclaves and attestation](always-encrypted-enclaves-plan.md#roles-and-responsibilities-when-configuring-sgx-enclaves-and-attestation).
+
+## Requirements
+
+The Azure SQL logical server and the attestation provider must belong to the same Azure Active Directory tenant. Cross-tenant interactions aren't supported.
+
+The Azure SQL logical server must have an Azure AD identity assigned to it. As the attestation administrator, you need to obtain the Azure AD identity of the server from the Azure SQL Database administrator for that server. You will use this identity to grant the server access to the attestation provider.
+
+For instructions on how to create a server with an identity or assign an identity to an existing server using PowerShell and Azure CLI, see [Assign an Azure AD identity to your server](transparent-data-encryption-byok-configure.md#assign-an-azure-active-directory-azure-ad-identity-to-your-server).
+
+## Create and configure an attestation provider
+
+An [attestation provider](../../attestation/basic-concepts.md#attestation-provider) is a resource in Azure Attestation that evaluates [attestation requests](../../attestation/basic-concepts.md#attestation-request) against [attestation policies](../../attestation/basic-concepts.md#attestation-policy) and issues [attestation tokens](../../attestation/basic-concepts.md#attestation-token).
+
+Attestation policies are specified using the [claim rule grammar](../../attestation/claim-rule-grammar.md).
+
+Microsoft recommends the following policy for attesting Intel SGX enclaves used for Always Encrypted in Azure SQL Database:
+
+```output
+version= 1.0;
+authorizationrules
+{
+ [ type=="x-ms-sgx-is-debuggable", value==false ]
+ && [ type=="x-ms-sgx-product-id", value==4639 ]
+ && [ type=="x-ms-sgx-svn", value>= 0 ]
+ && [ type=="x-ms-sgx-mrsigner", value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
+ => permit();
+};
+```
+
+The above policy verifies:
+
+- The enclave inside Azure SQL Database doesn't support debugging (which would reduce the level of protection the enclave provides).
+- The product ID of the library inside the enclave is the product ID assigned to Always Encrypted with secure enclaves (4639).
+- The version ID (svn) of the library is greater than or equal to 0.
+- The library in the enclave has been signed using the Microsoft signing key (the value of the x-ms-sgx-mrsigner claim is the hash of the signing key).
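The four rules above can be read as a simple predicate over the SGX claims. A minimal Python sketch (illustrative only; Azure Attestation evaluates the claim-rule policy itself):

```python
# Values taken from the recommended policy above.
MRSIGNER = "e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"

def policy_permits(claims):
    """Mirror the recommended policy: permit only a non-debuggable enclave
    running the Always Encrypted library (product ID 4639) signed by Microsoft."""
    return (claims.get("x-ms-sgx-is-debuggable") is False
            and claims.get("x-ms-sgx-product-id") == 4639
            and claims.get("x-ms-sgx-svn", -1) >= 0
            and claims.get("x-ms-sgx-mrsigner") == MRSIGNER)

ok = policy_permits({
    "x-ms-sgx-is-debuggable": False,
    "x-ms-sgx-product-id": 4639,
    "x-ms-sgx-svn": 1,
    "x-ms-sgx-mrsigner": MRSIGNER,
})
print(ok)  # True
```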
+
+> [!IMPORTANT]
+> An attestation provider gets created with the default policy for Intel SGX enclaves, which does not validate the code running inside the enclave. Microsoft strongly advises that you set the recommended policy above, rather than use the default policy, for Always Encrypted with secure enclaves.
+
+For instructions on how to create an attestation provider and configure it with an attestation policy, see:
+
+- [Quickstart: Set up Azure Attestation with Azure portal](../../attestation/quickstart-portal.md)
+ > [!IMPORTANT]
+ > When you configure your attestation policy with Azure portal, set Attestation Type to `SGX-IntelSDK`.
+- [Quickstart: Set up Azure Attestation with Azure PowerShell](../../attestation/quickstart-powershell.md)
+ > [!IMPORTANT]
+ > When you configure your attestation policy with Azure PowerShell, set the `Tee` parameter to `SgxEnclave`.
+- [Quickstart: Set up Azure Attestation with Azure CLI](../../attestation/quickstart-azure-cli.md)
+ > [!IMPORTANT]
+ > When you configure your attestation policy with Azure CLI, set the `attestation-type` parameter to `SGX-IntelSDK`.
+
+## Determine the attestation URL for your attestation policy
+
+After you've configured an attestation policy, you need to share the attestation URL referencing the policy with administrators of applications that use Always Encrypted with secure enclaves in Azure SQL Database. Application administrators and/or application users will need to configure their apps with the attestation URL, so that they can run statements that use secure enclaves.
+
+### Use PowerShell to determine the attestation URL
+
+Use the following script to determine your attestation URL:
+
+```powershell
+$attestationProvider = Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroupName
+$attestationUrl = $attestationProvider.AttestUri + "/attest/SgxEnclave"
+Write-Host "Your attestation URL is: " $attestationUrl
+```
+
+### Use Azure portal to determine the attestation URL
+
+1. In the Overview pane for your attestation provider, copy the value of the **Attest URI** property to the clipboard. An Attest URI should look like this: `https://MyAttestationProvider.us.attest.azure.net`.
+
+2. Append the following to the Attest URI: `/attest/SgxEnclave`.
+
+The resulting attestation URL should look like this: `https://MyAttestationProvider.us.attest.azure.net/attest/SgxEnclave`
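The two portal steps above amount to simple string concatenation, as this small sketch shows:

```python
def attestation_url(attest_uri: str) -> str:
    """Append the SGX enclave attestation path to a provider's Attest URI."""
    return attest_uri.rstrip("/") + "/attest/SgxEnclave"

print(attestation_url("https://MyAttestationProvider.us.attest.azure.net"))
# https://MyAttestationProvider.us.attest.azure.net/attest/SgxEnclave
```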
+
+## Grant your Azure SQL logical server access to your attestation provider
+
+During the attestation workflow, the Azure SQL logical server containing your database calls the attestation provider to submit an attestation request. For the Azure SQL logical server to be able to submit attestation requests, the server must have permission for the `Microsoft.Attestation/attestationProviders/attestation/read` action on the attestation provider. The recommended way to grant the permission is for the administrator of the attestation provider to assign the Azure AD identity of the server to the Attestation Reader role for the attestation provider, or its containing resource group.
+
+### Use Azure portal to assign permission
+
+To assign the identity of an Azure SQL server to the Attestation Reader role for an attestation provider, follow the general instructions in [Add or remove Azure role assignments using the Azure portal](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal). When you are in the **Add role assignment** pane:
+
+1. In the **Role** drop-down, select the **Attestation Reader** role.
+1. In the **Select** field, enter the name of your Azure SQL server to search for it.
+
+See the below screenshot for an example.
+
+![attestation reader role assignment](./media/always-encrypted-enclaves/attestation-provider-role-assigment.png)
+
+> [!NOTE]
+> For a server to show up in the **Add role assignment** pane, the server must have an Azure AD identity assigned - see [Requirements](#requirements).
+
+### Use PowerShell to assign permission
+
+1. Find your Azure SQL logical server.
+
+```powershell
+$serverResourceGroupName = "<server resource group name>"
+$serverName = "<server name>"
+$server = Get-AzSqlServer -ServerName $serverName -ResourceGroupName $serverResourceGroupName
+```
+
+2. Assign the server to the Attestation Reader role for the resource group containing your attestation provider.
+
+```powershell
+$attestationResourceGroupName = "<attestation provider resource group name>"
+New-AzRoleAssignment -ObjectId $server.Identity.PrincipalId -RoleDefinitionName "Attestation Reader" -ResourceGroupName $attestationResourceGroupName
+```
+
+For more information, see [Add or remove Azure role assignments using Azure PowerShell](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-powershell#add-a-role-assignment).
+
+## Next Steps
+
+- [Manage keys for Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/relational-databases/security/encryption/always-encrypted-enclaves-manage-keys)
+
+## See also
+
+- [Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database](always-encrypted-enclaves-getting-started.md)
\ No newline at end of file
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/always-encrypted-enclaves-enable-sgx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-enable-sgx.md new file mode 100644
@@ -0,0 +1,40 @@
+---
+title: "Enable Intel SGX for your Azure SQL Database"
+description: "Learn how to enable Intel SGX for Always Encrypted with secure enclaves in Azure SQL Database by selecting an SGX-enabled hardware generation."
+keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
+services: sql-database
+ms.service: sql-database
+ms.subservice: security
+ms.devlang:
+ms.topic: conceptual
+author: jaszymas
+ms.author: jaszymas
+ms.reviewer: vanto
+ms.date: 01/15/2021
+---
+# Enable Intel SGX for your Azure SQL Database
+
+[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
+
+> [!NOTE]
+> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
+
+[Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves. For Intel SGX to be available, the database must use the [vCore model](service-tiers-vcore.md) and the [DC-series](service-tiers-vcore.md#dc-series) hardware generation.
+
+Configuring the DC-series hardware generation to enable Intel SGX enclaves is the responsibility of the Azure SQL Database administrator. See [Roles and responsibilities when configuring SGX enclaves and attestation](always-encrypted-enclaves-plan.md#roles-and-responsibilities-when-configuring-sgx-enclaves-and-attestation).
+
+> [!NOTE]
+> Intel SGX is not available in hardware generations other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).
+
+> [!IMPORTANT]
+> Before you configure the DC-series hardware generation for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For more information, see [DC-series](service-tiers-vcore.md#dc-series).
+
+For detailed instructions for how to configure a new or existing database to use a specific hardware generation, see [Selecting a hardware generation](service-tiers-vcore.md#selecting-a-hardware-generation).
+
+## Next steps
+
+- [Configure Azure Attestation for your Azure SQL database server](always-encrypted-enclaves-configure-attestation.md)
+
+## See also
+
+- [Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database](always-encrypted-enclaves-getting-started.md)
\ No newline at end of file
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/always-encrypted-enclaves-getting-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-getting-started.md new file mode 100644
@@ -0,0 +1,359 @@
+---
+title: "Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database"
+description: This tutorial teaches you how to create a basic environment for Always Encrypted with secure enclaves in Azure SQL Database, and how to encrypt data in place and issue rich confidential queries against encrypted columns using SQL Server Management Studio (SSMS).
+keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
+services: sql-database
+ms.service: sql-database
+ms.subservice: security
+ms.devlang:
+ms.topic: tutorial
+author: jaszymas
+ms.author: jaszymas
+ms.reviewer: vanto
+ms.date: 01/15/2021
+---
+# Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
+
+[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
+
+> [!NOTE]
+> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
+
+This tutorial teaches you how to get started with [Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database. It will show you:
+
+> [!div class="checklist"]
+> - How to create an environment for testing and evaluating Always Encrypted with secure enclaves.
+> - How to encrypt data in-place and issue rich confidential queries against encrypted columns using SQL Server Management Studio (SSMS).
+
+## Prerequisites
+
+This tutorial requires Azure PowerShell and [SSMS](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms).
+
+### PowerShell requirements
+
+See [Overview of Azure PowerShell](https://docs.microsoft.com/powershell/azure) for information on how to install and run Azure PowerShell.
+
+The minimum versions of the Az modules required to support attestation operations are:
+
+- Az 4.5.0
+- Az.Accounts 1.9.2
+- Az.Attestation 0.1.8
+
+Run the following command to verify the installed versions of all Az modules:
+
+```powershell
+Get-InstalledModule
+```
+
+If any installed version is lower than the minimum requirement, update it by running the `Update-Module` command.
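+
+For example, a minimal way to update the two attestation-related modules (an illustration, assuming they were installed from the PowerShell Gallery):
+
+```powershell
+# Update the Az modules needed for attestation operations to their latest versions.
+Update-Module -Name Az.Accounts
+Update-Module -Name Az.Attestation
+```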
+
+The PowerShell Gallery has deprecated Transport Layer Security (TLS) versions 1.0 and 1.1. TLS 1.2 or a later version is recommended. You may receive the following errors if you are using a TLS version lower than 1.2:
+
+- `WARNING: Unable to resolve package source 'https://www.powershellgallery.com/api/v2'`
+- `PackageManagement\Install-Package: No match was found for the specified search criteria and module name.`
+
+To continue to interact with the PowerShell Gallery, run the following command before running the `Install-Module` commands:
+
+```powershell
+[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+```
+
+### SSMS requirements
+
+See [Download SQL Server Management Studio (SSMS)](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms) for information on how to download SSMS.
+
+The required minimum version of SSMS is 18.8.
+
+## Step 1: Create a server and a DC-series database
+
+In this step, you will create a new Azure SQL Database logical server and a new database using the DC-series hardware configuration. Always Encrypted with secure enclaves in Azure SQL Database uses Intel SGX enclaves, which are supported in the DC-series hardware configuration. For more information, see [DC-series](service-tiers-vcore.md#dc-series).
+
+1. Open a PowerShell console and sign into Azure. If needed, [switch to the subscription](https://docs.microsoft.com/powershell/azure/manage-subscriptions-azureps) you are using for this tutorial.
+
+   ```powershell
+   Connect-AzAccount
+   $subscriptionId = "<your subscription ID>"
+   Set-AzContext -Subscription $subscriptionId
+   ```
+
+2. Create a resource group to contain your database server.
+
+ ```powershell
+ $serverResourceGroupName = "<server resource group name>"
+ $serverLocation = "<Azure region that supports DC-series in SQL Database>"
+ New-AzResourceGroup -Name $serverResourceGroupName -Location $serverLocation
+ ```
+
+ > [!IMPORTANT]
+ > You need to create your resource group in a region that supports the DC-series hardware configuration. For the list of currently supported regions, see [DC-series availability](service-tiers-vcore.md#dc-series-1).
+
+3. Create a database server. When prompted, enter the server administrator name and a password.
+
+ ```powershell
+ $serverName = "<server name>"
+ New-AzSqlServer -ServerName $serverName -ResourceGroupName $serverResourceGroupName -Location $serverLocation
+ ```
+
+4. Create a server firewall rule that allows access from the specified IP range.
+
+ ```powershell
+ # The ip address range that you want to allow to access your server
+ $startIp = "<start of IP range>"
+ $endIp = "<end of IP range>"
+   $serverFirewallRule = New-AzSqlServerFirewallRule -ResourceGroupName $serverResourceGroupName `
+ -ServerName $serverName `
+ -FirewallRuleName "AllowedIPs" -StartIpAddress $startIp -EndIpAddress $endIp
+ ```
+
+5. Assign a managed system identity to your server. You'll need it later to grant your server access to Microsoft Azure Attestation.
+
+ ```powershell
+ Set-AzSqlServer -ServerName $serverName -ResourceGroupName $serverResourceGroupName -AssignIdentity
+ ```
+
+6. Retrieve an object ID of the identity assigned to your server. Save the resulting object ID. You'll need the ID in a later section.
+
+ > [!NOTE]
+   > It might take a few seconds for the newly assigned managed system identity to propagate in Azure Active Directory. If the following script returns an empty result, retry it.
+
+ ```PowerShell
+ $server = Get-AzSqlServer -ServerName $serverName -ResourceGroupName $serverResourceGroupName
+ $serverObjectId = $server.Identity.PrincipalId
+ $serverObjectId
+ ```
+
+7. Create a DC-series database.
+
+ ```powershell
+ $databaseName = "ContosoHR"
+ $edition = "GeneralPurpose"
+ $vCore = 2
+ $generation = "DC"
+ New-AzSqlDatabase -ResourceGroupName $serverResourceGroupName -ServerName $serverName -DatabaseName $databaseName -Edition $edition -Vcore $vCore -ComputeGeneration $generation
+ ```
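+
+Optionally, you can confirm that the new database uses the DC-series hardware generation. This is a sketch using the variables defined in the previous steps:
+
+```powershell
+# The service objective for a DC-series General Purpose database should
+# reference the DC compute generation (for example, GP_DC_2).
+$database = Get-AzSqlDatabase -ResourceGroupName $serverResourceGroupName -ServerName $serverName -DatabaseName $databaseName
+$database.CurrentServiceObjectiveName
+```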
+
+## Step 2: Configure an attestation provider
+
+In this step, you'll create and configure an attestation provider in Microsoft Azure Attestation. The attestation provider is needed to attest the secure enclave in your database server.
+
+1. Copy the following attestation policy and save it in a text file (.txt). For information about the policy, see [Create and configure an attestation provider](always-encrypted-enclaves-configure-attestation.md#create-and-configure-an-attestation-provider).
+
+ ```output
+ version= 1.0;
+ authorizationrules
+ {
+ [ type=="x-ms-sgx-is-debuggable", value==false ]
+ && [ type=="x-ms-sgx-product-id", value==4639 ]
+ && [ type=="x-ms-sgx-svn", value>= 0 ]
+ && [ type=="x-ms-sgx-mrsigner", value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
+ => permit();
+ };
+ ```
+
+2. Import the required versions of `Az.Accounts` and `Az.Attestation`.
+
+ ```powershell
+ Import-Module "Az.Accounts" -MinimumVersion "1.9.2"
+ Import-Module "Az.Attestation" -MinimumVersion "0.1.8"
+ ```
+
+3. Create a resource group for the attestation provider.
+
+ ```powershell
+ $attestationLocation = $serverLocation
+ $attestationResourceGroupName = "<attestation provider resource group name>"
+   New-AzResourceGroup -Name $attestationResourceGroupName -Location $attestationLocation
+ ```
+
+4. Create an attestation provider.
+
+ ```powershell
+ $attestationProviderName = "<attestation provider name>"
+ New-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroupName -Location $attestationLocation
+ ```
+
+5. Configure your attestation policy.
+
+ ```powershell
+   $policyFile = "<path to the policy file you saved in step 1 of this section>"
+ $teeType = "SgxEnclave"
+ $policyFormat = "Text"
+ $policy=Get-Content -path $policyFile -Raw
+ Set-AzAttestationPolicy -Name $attestationProviderName -ResourceGroupName $attestationResourceGroupName -Tee $teeType -Policy $policy -PolicyFormat $policyFormat
+ ```
+
+6. Grant your Azure SQL logical server access to your attestation provider. This step uses the object ID of the managed service identity that you assigned to your server earlier.
+
+ ```powershell
+ New-AzRoleAssignment -ObjectId $serverObjectId -RoleDefinitionName "Attestation Reader" -ResourceGroupName $attestationResourceGroupName
+ ```
+
+7. Retrieve the attestation URL.
+
+ ```powershell
+ $attestationProvider = Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroupName
+   $attestationUrl = $attestationProvider.AttestUri + "/attest/SgxEnclave"
+ Write-Host "Your attestation URL is: " $attestationUrl
+ ```
+
+8. Save the resulting attestation URL that points to an attestation policy you configured for the SGX enclave. You'll need it later. The attestation URL should look like this: `https://contososqlattestation.uks.attest.azure.net/attest/SgxEnclave`
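+
+Optionally, you can confirm the policy that is now in effect for SGX enclaves on your attestation provider. This is a sketch using the variables from the previous steps:
+
+```powershell
+# Retrieve the currently configured attestation policy for SGX enclaves.
+Get-AzAttestationPolicy -Name $attestationProviderName -ResourceGroupName $attestationResourceGroupName -Tee SgxEnclave
+```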
+
+## Step 3: Populate your database
+
+In this step, you'll create a table and populate it with some data that you'll later encrypt and query.
+
+1. Open SSMS and connect to the **ContosoHR** database in the Azure SQL logical server you created **without** Always Encrypted enabled in the database connection.
+ 1. In the **Connect to Server** dialog, specify your server name (for example, *myserver123.database.windows.net*), and enter the user name and the password you configured earlier.
+ 2. Click **Options >>** and select the **Connection Properties** tab. Make sure to select the **ContosoHR** database (not the default, master database).
+ 3. Select the **Always Encrypted** tab.
+ 4. Make sure the **Enable Always Encrypted (column encryption)** checkbox is **not** selected.
+
+ ![Connect without Always Encrypted](media/always-encrypted-enclaves/connect-without-always-encrypted-ssms.png)
+
+ 5. Click **Connect**.
+
+2. Create a new table, named **Employees**.
+
+ ```sql
+ CREATE SCHEMA [HR];
+ GO
+
+ CREATE TABLE [HR].[Employees]
+ (
+ [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
+ [SSN] [char](11) NOT NULL,
+ [FirstName] [nvarchar](50) NOT NULL,
+ [LastName] [nvarchar](50) NOT NULL,
+ [Salary] [money] NOT NULL
+ ) ON [PRIMARY];
+ GO
+ ```
+
+3. Add a few employee records to the **Employees** table.
+
+ ```sql
+ INSERT INTO [HR].[Employees]
+ ([SSN]
+ ,[FirstName]
+ ,[LastName]
+ ,[Salary])
+ VALUES
+ ('795-73-9838'
+ , N'Catherine'
+ , N'Abel'
+ , $31692);
+
+ INSERT INTO [HR].[Employees]
+ ([SSN]
+ ,[FirstName]
+ ,[LastName]
+ ,[Salary])
+ VALUES
+ ('990-00-6818'
+ , N'Kim'
+ , N'Abercrombie'
+ , $55415);
+ ```
+
+## Step 4: Provision enclave-enabled keys
+
+In this step, you'll create a column master key and a column encryption key that allow enclave computations.
+
+1. Using the SSMS instance from the previous step, in **Object Explorer**, expand your database and navigate to **Security** > **Always Encrypted Keys**.
+1. Provision a new enclave-enabled column master key:
+ 1. Right-click **Always Encrypted Keys** and select **New Column Master Key...**.
+ 2. Select your column master key name: **CMK1**.
+ 3. Make sure you select either **Windows Certificate Store (Current User or Local Machine)** or **Azure Key Vault**.
+ 4. Select **Allow enclave computations**.
+ 5. If you selected Azure Key Vault, sign into Azure and select your key vault. For more information on how to create a key vault for Always Encrypted, see [Manage your key vaults from Azure portal](/archive/blogs/kv/manage-your-key-vaults-from-new-azure-portal).
+   6. Select your certificate or Azure Key Vault key if it already exists, or click the **Generate Certificate** button to create a new one.
+ 7. Select **OK**.
+
+ ![Allow enclave computations](media/always-encrypted-enclaves/allow-enclave-computations.png)
+
+1. Create a new enclave-enabled column encryption key:
+
+ 1. Right-click **Always Encrypted Keys** and select **New Column Encryption Key**.
+ 2. Enter a name for the new column encryption key: **CEK1**.
+ 3. In the **Column master key** dropdown, select the column master key you created in the previous steps.
+ 4. Select **OK**.
+
+## Step 5: Encrypt some columns in place
+
+In this step, you'll encrypt the data stored in the **SSN** and **Salary** columns inside the server-side enclave, and then test a SELECT query on the data.
+
+1. Open a new SSMS instance and connect to your database **with** Always Encrypted enabled for the database connection.
+ 1. Start a new instance of SSMS.
+ 2. In the **Connect to Server** dialog, specify your server name, select an authentication method, and specify your credentials.
+ 3. Click **Options >>** and select the **Connection Properties** tab. Make sure to select the **ContosoHR** database (not the default, master database).
+ 4. Select the **Always Encrypted** tab.
+ 5. Make sure the **Enable Always Encrypted (column encryption)** checkbox is selected.
+   6. Specify the enclave attestation URL that you obtained in [Step 2: Configure an attestation provider](#step-2-configure-an-attestation-provider). See the following screenshot.
+
+ ![Connect with attestation](media/always-encrypted-enclaves/connect-to-server-configure-attestation.png)
+
+ 7. Select **Connect**.
+ 8. If you're prompted to enable Parameterization for Always Encrypted queries, select **Enable**.
+
+1. Using the same SSMS instance (with Always Encrypted enabled), open a new query window and encrypt the **SSN** and **Salary** columns by running the below statements.
+
+ ```sql
+ ALTER TABLE [HR].[Employees]
+ ALTER COLUMN [SSN] [char] (11) COLLATE Latin1_General_BIN2
+ ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
+ WITH
+ (ONLINE = ON);
+
+ ALTER TABLE [HR].[Employees]
+ ALTER COLUMN [Salary] [money]
+ ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
+ WITH
+ (ONLINE = ON);
+
+ ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
+ ```
+
+ > [!NOTE]
+   > Notice the `ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE` statement at the end of the above script, which clears the query plan cache for the database. After you alter the table, you need to clear the plans for all batches and stored procedures that access the table, so that they refresh their parameter encryption information.
+
+1. To verify that the **SSN** and **Salary** columns are now encrypted, open a new query window in the SSMS instance **without** Always Encrypted enabled for the database connection, and execute the following statement. The query window should return encrypted values in the **SSN** and **Salary** columns. If you execute the same query using the SSMS instance with Always Encrypted enabled, the data should appear decrypted.
+
+ ```sql
+ SELECT * FROM [HR].[Employees];
+ ```
+
+## Step 6: Run rich queries against encrypted columns
+
+You can run rich queries against the encrypted columns. Some query processing will be performed inside your server-side enclave.
+
+1. In the SSMS instance **with** Always Encrypted enabled, make sure Parameterization for Always Encrypted is also enabled.
+ 1. Select **Tools** from the main menu of SSMS.
+ 2. Select **Options...**.
+ 3. Navigate to **Query Execution** > **SQL Server** > **Advanced**.
+ 4. Ensure that **Enable Parameterization for Always Encrypted** is checked.
+ 5. Select **OK**.
+2. Open a new query window, paste in the following query, and execute it. The query should return plaintext values and only the rows meeting the specified search criteria.
+
+ ```sql
+ DECLARE @SSNPattern [char](11) = '%6818';
+ DECLARE @MinSalary [money] = $1000;
+ SELECT * FROM [HR].[Employees]
+ WHERE SSN LIKE @SSNPattern AND [Salary] >= @MinSalary;
+ ```
+
+3. Try the same query again in the SSMS instance that doesn't have Always Encrypted enabled. A failure should occur, because the connection can't transparently encrypt the query parameters for comparison against the encrypted columns.
+
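+As a further example of enclave computations, you can create an index on an enclave-enabled column that uses randomized encryption. The following is a minimal sketch, run from the SSMS instance **with** Always Encrypted enabled; the tutorial on indexes linked in the next steps covers this scenario in depth:
+
+```sql
+-- Create an index on the encrypted SSN column.
+-- Indexing a column using randomized encryption requires an enclave-enabled
+-- column encryption key and a connection with Always Encrypted enabled.
+CREATE INDEX IX_SSN ON [HR].[Employees] ([SSN]);
+```
+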
+## Next steps
+
+After completing this tutorial, you can go to one of the following tutorials:
+
+- [Tutorial: Develop a .NET application using Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/connect/ado-net/sql/tutorial-always-encrypted-enclaves-develop-net-apps)
+- [Tutorial: Develop a .NET Framework application using Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/relational-databases/security/tutorial-always-encrypted-enclaves-develop-net-framework-apps)
+- [Tutorial: Creating and using indexes on enclave-enabled columns using randomized encryption](https://docs.microsoft.com/sql/relational-databases/security/tutorial-creating-using-indexes-on-enclave-enabled-columns-using-randomized-encryption)
+
+## See also
+
+- [Configure and use Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/relational-databases/security/encryption/configure-always-encrypted-enclaves)
\ No newline at end of file
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/always-encrypted-enclaves-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-plan.md new file mode 100644
@@ -0,0 +1,60 @@
+---
+title: "Plan for Intel SGX enclaves and attestation in Azure SQL Database"
+description: "Plan the deployment of Always Encrypted with secure enclaves in Azure SQL Database."
+keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
+services: sql-database
+ms.service: sql-database
+ms.subservice: security
+ms.devlang:
+ms.topic: conceptual
+author: jaszymas
+ms.author: jaszymas
+ms.reviewer: vanto
+ms.date: 01/15/2021
+---
+# Plan for Intel SGX enclaves and attestation in Azure SQL Database
+
+[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
+
+> [!NOTE]
+> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
+
+[Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves and requires [Microsoft Azure Attestation](https://docs.microsoft.com/sql/relational-databases/security/encryption/always-encrypted-enclaves#secure-enclave-attestation).
+
+## Plan for Intel SGX in Azure SQL Database
+
+Intel SGX is a hardware-based trusted execution environment technology. Intel SGX is available for databases that use the [vCore model](service-tiers-vcore.md) and the [DC-series](service-tiers-vcore.md#dc-series) hardware generation. Therefore, to use Always Encrypted with secure enclaves in your database, either select the DC-series hardware generation when you create the database, or update your existing database to use the DC-series hardware generation.
+
+> [!NOTE]
+> Intel SGX is not available in hardware generations other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).
+
+> [!IMPORTANT]
+> Before you configure the DC-series hardware generation for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For details, see [DC-series](service-tiers-vcore.md#dc-series).
+
+## Plan for attestation in Azure SQL Database
+
+[Microsoft Azure Attestation](../../attestation/overview.md) (preview) is a solution for attesting Trusted Execution Environments (TEEs), including Intel SGX enclaves in Azure SQL databases using the DC-series hardware generation.
+
+To use Azure Attestation for attesting Intel SGX enclaves in Azure SQL Database, you need to:
+
+1. Create an [attestation provider](../../attestation/basic-concepts.md#attestation-provider) and configure it with an attestation policy.
+
+2. Grant your Azure SQL logical server access to the created attestation provider.
+
+## Roles and responsibilities when configuring SGX enclaves and attestation
+
+Configuring your environment to support Intel SGX enclaves and attestation for Always Encrypted in Azure SQL Database involves setting up components of different types: Microsoft Azure Attestation, Azure SQL Database, and applications that trigger enclave attestation. Components of each type are configured by users assuming one of the following distinct roles:
+
+- Attestation administrator - creates an attestation provider in Microsoft Azure Attestation, authors the attestation policy, grants Azure SQL logical server access to the attestation provider, and shares the attestation URL that points to the policy to application administrators.
+- Azure SQL Database administrator - enables SGX enclaves in databases by selecting the DC-series hardware generation, and provides the attestation administrator with the identity of the Azure SQL logical server that needs to access the attestation provider.
+- Application administrator - configures applications with the attestation URL obtained from the attestation administrator.
+
+In production environments (those handling real sensitive data), it's important that your organization adhere to role separation when configuring attestation, with each distinct role assumed by a different person. In particular, if the goal of deploying Always Encrypted in your organization is to reduce the attack surface area by ensuring Azure SQL Database administrators can't access sensitive data, then Azure SQL Database administrators shouldn't control attestation policies.
+
+## Next steps
+
+- [Enable Intel SGX for your Azure SQL database](always-encrypted-enclaves-enable-sgx.md)
+
+## See also
+
+- [Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database](always-encrypted-enclaves-getting-started.md)
\ No newline at end of file
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/cost-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/cost-management.md new file mode 100644
@@ -0,0 +1,117 @@
+---
+title: Plan and manage costs for Azure SQL Database
+description: Learn how to plan for and manage costs for Azure SQL Database by using cost analysis in the Azure portal.
+author: stevestein
+ms.author: sstein
+ms.custom: subject-cost-optimization
+ms.service: sql-database
+ms.topic: how-to
+ms.date: 01/15/2021
+---
+
+# Plan and manage costs for Azure SQL Database
+
+This article describes how you plan for and manage costs for Azure SQL Database. First, you use the Azure pricing calculator to add Azure resources and review the estimated costs. After you've started using Azure SQL Database resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act.
+
+Costs for Azure SQL Database are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure SQL Database, you're billed for all Azure services and resources used in your Azure subscription, including any third-party services.
+
+## Prerequisites
+
+Cost analysis supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account.
+
+For information about assigning access to Azure Cost Management data, see [Assign access to data](../../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## SQL Database initial cost considerations
+
+When working with Azure SQL Database, there are several cost-saving features to consider:
+
+### vCore or DTU purchasing models
+
+Azure SQL Database supports two purchasing models: vCore and DTU. The way you get charged varies between the purchasing models so it's important to understand the model that works best for your workload when planning and considering costs. For information about vCore and DTU purchasing models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md).
+
+### Provisioned or serverless
+
+In the vCore purchasing model, Azure SQL Database also supports two types of compute tiers: provisioned throughput and serverless. The way you get charged for each compute tier varies so it's important to understand what works best for your workload when planning and considering costs. For details, see [vCore model overview - compute tiers](service-tiers-vcore.md#compute-tiers).
+
+In the provisioned compute tier of the vCore-based purchasing model, you can exchange your existing licenses for discounted rates. For details, see [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md).
+
+### Elastic pools
+
+For environments with multiple databases that have varying and unpredictable usage demands, elastic pools can provide cost savings compared to provisioning the same amount of single databases. For details, see [Elastic pools](elastic-pool-overview.md).
+
+## Estimate Azure SQL Database costs
+
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs for different Azure SQL Database configurations. The information and pricing in the following image are for example purposes only:
+
+:::image type="content" source="media/cost-management/pricing-calc.png" alt-text="Azure SQL Database pricing calculator example":::
+
+You can also estimate how different Retention Policy options affect cost. The information and pricing in the following image are for example purposes only:
+
+:::image type="content" source="media/cost-management/backup-storage.png" alt-text="Azure SQL Database pricing calculator example for storage":::
+
+## Understand the full billing model for Azure SQL Database
+
+Azure SQL Database runs on Azure infrastructure that accrues costs along with Azure SQL Database when you deploy the new resource. It's important to understand that additional infrastructure might accrue cost. You need to manage that cost when you make changes to deployed resources.
+
+Azure SQL Database (with the exception of serverless) is billed at a predictable, hourly rate. If a database exists for any part of an hour, you're billed for that full hour at the highest service tier, provisioned storage, and IO that applied during the hour, regardless of usage or how long the database was active.
+
+### Using Monetary Credit with Azure SQL Database
+
+You can pay for Azure SQL Database charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+
+## Review estimated costs in the Azure portal
+
+As you go through the process of creating an Azure SQL Database, you can see the estimated costs during configuration of the compute tier.
+
+To access this screen, select **Configure database** on the **Basics** tab of the **Create SQL Database** page. The information and pricing in the following image are for example purposes only:
+
+ :::image type="content" source="media/cost-management/cost-estimate.png" alt-text="Example showing cost estimate in the Azure portal":::
+
+If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](https://docs.microsoft.com/azure/billing/billing-spending-limit).
+
+## Monitor costs
+
+As you start using Azure SQL Database, you can see the estimated costs in the portal. Use the following steps to review the cost estimate:
+
+1. Sign into the Azure portal and navigate to your Azure SQL database's resource group. You can locate the resource group by navigating to your database and selecting **Resource group** in the **Overview** section.
+1. In the menu, select **Cost analysis**.
+1. View **Accumulated costs** and set the chart at the bottom to **Service name**. This chart shows an estimate of your current SQL Database costs. To narrow costs for the entire page to Azure SQL Database, select **Add filter** and then, select **Azure SQL Database**. The information and pricing in the following image are for example purposes only:
+
+ :::image type="content" source="media/cost-management/cost-analysis.png" alt-text="Example showing accumulated costs in the Azure portal":::
+
+From here, you can explore costs on your own. For more information about the different cost analysis settings, see [Start analyzing costs](../../cost-management/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Create budgets
+
+<!-- Note to Azure service writer: Modify the following as needed for your service. -->
+
+You can create [budgets](../../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../cost-management/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you create a budget, see [Group and filter options](../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Export cost data
+
+You can also [export your cost data](../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+
+## Other ways to manage and reduce costs for Azure SQL Database
+
+Azure SQL Database also enables you to scale resources up or down to control costs based on your application needs. For details, see [Dynamically scale database resources](scale-resources.md).
+
+Save money by committing to a reservation for compute resources for one to three years. For details, see [Save costs for resources with reserved capacity](reserved-capacity-overview.md).
+
+## Next steps
+
+- Learn [how to optimize your cloud investment with Azure Cost Management](../../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../../cost-management-billing/manage/getting-started.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](https://docs.microsoft.com/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/file-space-manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/file-space-manage.md
@@ -10,7 +10,7 @@ ms.topic: conceptual
author: oslake ms.author: moslake ms.reviewer: jrasnick, sstein
-ms.date: 03/12/2019
+ms.date: 12/22/2020
--- # Manage file space for databases in Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
@@ -78,7 +78,7 @@ Modify the following query to return the amount of database data space used. Un
SELECT TOP 1 storage_in_megabytes AS DatabaseDataSpaceUsedInMB FROM sys.resource_stats WHERE database_name = 'db1'
-ORDER BY end_time DESC
+ORDER BY end_time DESC;
``` ### Database data space allocated and unused allocated space
@@ -92,7 +92,7 @@ SELECT SUM(size/128.0) AS DatabaseDataSpaceAllocatedInMB,
SUM(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0) AS DatabaseDataSpaceAllocatedUnusedInMB FROM sys.database_files GROUP BY type_desc
-HAVING type_desc = 'ROWS'
+HAVING type_desc = 'ROWS';
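-- Note: sys.database_files reports size in 8 KB pages, so size/128.0 converts
-- pages to MB (1 MB = 1024 KB = 128 pages of 8 KB). For example, a data file of
-- 12,800 pages corresponds to 12800 / 128.0 = 100 MB; the page count here is an
-- illustrative value, not output from this query.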
``` ### Database data max size
@@ -102,7 +102,7 @@ Modify the following query to return the database data max size. Units of the q
```sql -- Connect to database -- Database data max size in bytes
-SELECT DATABASEPROPERTYEX('db1', 'MaxSizeInBytes') AS DatabaseDataMaxSizeInBytes
+SELECT DATABASEPROPERTYEX('db1', 'MaxSizeInBytes') AS DatabaseDataMaxSizeInBytes;
``` ## Understanding types of storage space for an elastic pool
@@ -116,6 +116,9 @@ Understanding the following storage space quantities are important for managing
|**Data space allocated but unused**|The difference between the amount of data space allocated and data space used by all databases in the elastic pool.|This quantity represents the maximum amount of space allocated for the elastic pool that can be reclaimed by shrinking database data files.| |**Data max size**|The maximum amount of data space that can be used by the elastic pool for all of its databases.|The space allocated for the elastic pool should not exceed the elastic pool max size. If this condition occurs, then space allocated that is unused can be reclaimed by shrinking database data files.|
+> [!NOTE]
+> The error message "The elastic pool has reached its storage limit" indicates that the database objects have been allocated enough space to meet the elastic pool storage limit, but there may be unused space in the data space allocation. Consider increasing the elastic pool's storage limit, or as a short-term solution, freeing up data space using the [**Reclaim unused allocated space**](#reclaim-unused-allocated-space) section below. You should also be aware of the potential negative performance impact of shrinking database files; see the [**Rebuild indexes**](#rebuild-indexes) section below.
+ ## Query an elastic pool for storage space information The following queries can be used to determine storage space quantities for an elastic pool.
@@ -130,7 +133,7 @@ Modify the following query to return the amount of elastic pool data space used.
SELECT TOP 1 avg_storage_percent / 100.0 * elastic_pool_storage_limit_mb AS ElasticPoolDataSpaceUsedInMB FROM sys.elastic_pool_resource_stats WHERE elastic_pool_name = 'ep1'
-ORDER BY end_time DESC
+ORDER BY end_time DESC;
``` ### Elastic pool data space allocated and unused allocated space
@@ -181,7 +184,7 @@ The following screenshot is an example of the output of the script:
### Elastic pool data max size
-Modify the following T-SQL query to return the elastic pool data max size. Units of the query result are in MB.
+Modify the following T-SQL query to return the last recorded elastic pool data max size. Units of the query result are in MB.
```sql -- Connect to master
@@ -189,13 +192,13 @@ Modify the following T-SQL query to return the elastic pool data max size. Unit
SELECT TOP 1 elastic_pool_storage_limit_mb AS ElasticPoolMaxSizeInMB FROM sys.elastic_pool_resource_stats WHERE elastic_pool_name = 'ep1'
-ORDER BY end_time DESC
+ORDER BY end_time DESC;
``` ## Reclaim unused allocated space > [!NOTE]
-> This command can impact database performance while it is running, and if possible should be run during periods of low usage.
+> Shrink commands impact database performance while running, and if possible should be run during periods of low usage.
### DBCC shrink
@@ -203,24 +206,28 @@ Once databases have been identified for reclaiming unused allocated space, modif
```sql -- Shrink database data space allocated.
-DBCC SHRINKDATABASE (N'db1')
+DBCC SHRINKDATABASE (N'db1');
```
-This command can impact database performance while it is running, and if possible should be run during periods of low usage.
+Shrink commands impact database performance while running, and if possible should be run during periods of low usage.
-For more information about this command, see [SHRINKDATABASE](/sql/t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql).
+You should also be aware of the potential negative performance impact of shrinking database files; see the [**Rebuild indexes**](#rebuild-indexes) section below.
+
+For more information about this command, see [SHRINKDATABASE](/sql/t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql.md).
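A more targeted option is `DBCC SHRINKFILE`, which shrinks a single file toward a target size specified in MB. The sketch below is illustrative only; the logical file name `db1_data` and the 10-GB target are assumptions you would replace with values from `sys.database_files`:

```sql
-- Look up the logical data file name first:
-- SELECT name, type_desc FROM sys.database_files;
-- Then shrink that file toward a 10 GB (10240 MB) target.
DBCC SHRINKFILE (N'db1_data', 10240);
```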
### Auto-shrink Alternatively, auto shrink can be enabled for a database. Auto shrink reduces file management complexity and is less impactful to database performance than `SHRINKDATABASE` or `SHRINKFILE`. Auto shrink can be particularly helpful for managing elastic pools with many databases. However, auto shrink can be less effective in reclaiming file space than `SHRINKDATABASE` and `SHRINKFILE`.
+By default, auto shrink is disabled, which is recommended for most databases. For more information, see [Considerations for AUTO_SHRINK](/troubleshoot/sql/admin/considerations-autogrow-autoshrink#considerations-for-auto_shrink).
+ To enable auto shrink, modify the name of the database in the following command. ```sql -- Enable auto-shrink for the database.
-ALTER DATABASE [db1] SET AUTO_SHRINK ON
+ALTER DATABASE [db1] SET AUTO_SHRINK ON;
```
-For more information about this command, see [DATABASE SET](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current) options.
+For more information about this command, see [DATABASE SET](/sql/t-sql/statements/alter-database-transact-sql-set-options) options.
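To confirm the setting took effect, you can check the database's current auto-shrink state (a sketch, assuming the same database name `db1` used above):

```sql
-- is_auto_shrink_on is 1 when AUTO_SHRINK is enabled, 0 when disabled.
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE name = 'db1';
```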
### Rebuild indexes
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/firewall-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/firewall-configure.md
@@ -264,7 +264,7 @@ Consider the following points when access to Azure SQL Database doesn't behave a
## Next steps - Confirm that your corporate network environment allows inbound communication from the compute IP address ranges (including SQL ranges) that are used by the Azure datacenters. You might have to add those IP addresses to the allow list. See [Microsoft Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653). -- For a quickstart about creating a server-level IP firewall rule, see [Create a single database in Azure SQL Database](single-database-create-quickstart.md).
+- See our quickstart about [creating a single database in Azure SQL Database](single-database-create-quickstart.md).
- For help with connecting to a database in Azure SQL Database from open-source or third-party applications, see [Client quickstart code samples to Azure SQL Database](connect-query-content-reference-guide.md#libraries). - For information about additional ports that you may need to open, see the "SQL Database: Outside vs inside" section of [Ports beyond 1433 for ADO.NET 4.5 and SQL Database](adonet-v12-develop-direct-route-ports.md) - For an overview of Azure SQL Database security, see [Securing your database](security-overview.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/quota-increase-request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/quota-increase-request.md
@@ -14,7 +14,7 @@ ms.date: 06/04/2020
# Request quota increases for Azure SQL Database and SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-This article explains how to request a quota increase for Azure SQL Database and Azure SQL Managed Instance. It also explains how to enable subscription access to a region.
+This article explains how to request a quota increase for Azure SQL Database and Azure SQL Managed Instance. It also explains how to enable subscription access to a region and how to request enabling specific hardware in a region.
## <a id="newquota"></a> Create a new support request
@@ -57,8 +57,7 @@ The following sections describe the quota increase options for the **SQL Databas
- Database transaction units (DTUs) per server - Servers per subscription-- M-series region access-- Region access
+- Region access for subscriptions or specific hardware
### Database transaction units (DTUs) per server
@@ -104,30 +103,15 @@ If your subscription needs access in a particular region, select the **Region ac
![Request region access](./media/quota-increase-request/quota-request.png)
-<!--
-### <a id="mseries"></a> Enable M-series access to a region
+### Request enabling specific hardware in a region
-To enable M-series hardware for a subscription and region, a support request must be opened.
+If a [hardware generation](service-tiers-vcore.md#hardware-generations) you want to use is not available in your region (see [Hardware availability](service-tiers-vcore.md#hardware-availability)), you may request it using the following steps.
-1. Select the **M-series region access** quota type.
+1. Select the **Other quota request** quota type.
-1. In the **Select a location** list, select the Azure region to use. The quota is per subscription in each region.
--
- ![Request M-series region access](./media/quota-increase-request/quota-m-series.png)
-
-## <a id="sqlmiquota"></a> SQL Managed Instance quota type
-
-For the **SQL Managed Instance** quota type, use the following steps:
-
-1. In the **Region** list, select the Azure region to target.
-
-1. Enter the new limits you are requesting for **Subnet** and **vCore**.
-
- ![SQL Managed Instance quota details](./media/quota-increase-request/quota-details-managed-instance.png)
+1. In the **Description** field, state your request, including the name of the hardware generation and the name of the region you need it in.
-For more information, see [Overview Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md).
+ ![Request hardware in a new region](./media/quota-increase-request/hardware-in-new-region.png)
## Submit your request
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-vcore-elastic-pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
@@ -10,7 +10,7 @@ ms.topic: reference
author: oslake ms.author: moslake ms.reviewer: sstein
-ms.date: 10/15/2020
+ms.date: 01/15/2021
--- # Resource limits for elastic pools using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
@@ -229,6 +229,39 @@ You can set the service tier, compute size (service objective), and storage amou
<sup>3</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled. +
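The per-database worker rescaling described in footnote 3 is a simple linear calculation. The following T-SQL sketch reproduces the two examples from the footnote, using the 100-workers-per-vCore Gen5 figure stated above:

```sql
-- On Gen5, the per-database worker cap scales at 100 workers per vCore.
SELECT 2.0 * 100 AS WorkersAtMaxVcore2,    -- max vCore per database = 2 gives 200
       0.5 * 100 AS WorkersAtMaxVcoreHalf; -- max vCore per database = 0.5 gives 50
```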
+## General purpose - provisioned compute - DC-series
+
+|Compute size (service objective)|GP_DC_2|GP_DC_4|GP_DC_6|GP_DC_8|
+|:--- | --: |--: |--: |--: |
+|Compute generation|DC|DC|DC|DC|
+|vCores|2|4|6|8|
+|Memory (GB)|9|18|27|36|
+|Max number DBs per pool <sup>1</sup>|100|400|400|400|
+|Columnstore support|Yes|Yes|Yes|Yes|
+|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|
+|Max data size (GB)|756|1536|2048|2048|
+|Max log size (GB)|227|461|614|614|
+|TempDB max data size (GB)|64|128|192|256|
+|Storage type|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|Premium (Remote) Storage|
+|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
+|Max data IOPS per pool <sup>2</sup>|800|1600|2400|3200|
+|Max log rate per pool (MBps)|9.4|18.8|28.1|32.8|
+|Max concurrent workers per pool (requests) <sup>3</sup>|168|336|504|672|
+|Max concurrent logins per pool (requests) <sup>3</sup>|168|336|504|672|
+|Max concurrent sessions|30,000|30,000|30,000|30,000|
+|Min/max elastic pool vCore choices per database|2|2...4|2...6|2...8|
+|Number of replicas|1|1|1|1|
+|Multi-AZ|N/A|N/A|N/A|N/A|
+|Read Scale-out|N/A|N/A|N/A|N/A|
+|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|
+
+<sup>1</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
+
+<sup>2</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
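Because the IOPS cap in footnote 2 applies to IOs of up to 64 KB, it also implies an approximate throughput ceiling at the largest governed IO size. The following sketch shows the arithmetic for the GP_DC_2 pool size above; the MB/s figure is derived here for illustration and is not a separately documented limit:

```sql
-- 800 IOPS x 64 KB per IO = 51,200 KB/s, or about 50 MB/s.
SELECT 800 * 64 / 1024.0 AS ApproxMaxThroughputMBps; -- 50
```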
+
+<sup>3</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
+ ## Business critical - provisioned compute - Gen4 > [!IMPORTANT]
@@ -400,8 +433,6 @@ You can set the service tier, compute size (service objective), and storage amou
If all vCores of an elastic pool are busy, then each database in the pool receives an equal amount of compute resources to process queries. Azure SQL Database provides resource sharing fairness between databases by ensuring equal slices of compute time. Elastic pool resource sharing fairness is in addition to any amount of resource otherwise guaranteed to each database when the vCore min per database is set to a non-zero value. -- ### M-series compute generation (part 2) |Compute size (service objective)|BC_M_20|BC_M_24|BC_M_32|BC_M_64|BC_M_128|
@@ -435,6 +466,37 @@ If all vCores of an elastic pool are busy, then each database in the pool receiv
If all vCores of an elastic pool are busy, then each database in the pool receives an equal amount of compute resources to process queries. Azure SQL Database provides resource sharing fairness between databases by ensuring equal slices of compute time. Elastic pool resource sharing fairness is in addition to any amount of resource otherwise guaranteed to each database when the vCore min per database is set to a non-zero value.
+## Business critical - provisioned compute - DC-series
+
+|Compute size (service objective)|BC_DC_2|BC_DC_4|BC_DC_6|BC_DC_8|
+|:--- | --: |--: |--: |--: |
+|Compute generation|DC|DC|DC|DC|
+|vCores|2|4|6|8|
+|Memory (GB)|9|18|27|36|
+|Max number DBs per pool <sup>1</sup>|50|100|100|100|
+|Columnstore support|Yes|Yes|Yes|Yes|
+|In-memory OLTP storage (GB)|1.7|3.7|5.9|8.2|
+|Max data size (GB)|768|768|768|768|
+|Max log size (GB)|230|230|230|230|
+|TempDB max data size (GB)|64|128|192|256|
+|Storage type|Local SSD|Local SSD|Local SSD|Local SSD|
+|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
+|Max data IOPS per pool <sup>2</sup>|15750|31500|47250|56000|
+|Max log rate per pool (MBps)|20|60|90|120|
+|Max concurrent workers per pool (requests) <sup>3</sup>|168|336|504|672|
+|Max concurrent logins per pool (requests) <sup>3</sup>|168|336|504|672|
+|Max concurrent sessions|30,000|30,000|30,000|30,000|
+|Min/max elastic pool vCore choices per database|2|2...4|2...6|2...8|
+|Number of replicas|4|4|4|4|
+|Multi-AZ|No|No|No|No|
+|Read Scale-out|Yes|Yes|Yes|Yes|
+|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|
+
+<sup>1</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
+
+<sup>2</sup> The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
+
+<sup>3</sup> For the max concurrent workers (requests) for any individual database, see [Single database resource limits](resource-limits-vcore-single-databases.md). For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5 there are a max of 100 concurrent workers per vCore. For other max vCore settings per database that are 1 vCore or less, the number of max concurrent workers is similarly rescaled.
## Database properties for pooled databases
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-vcore-single-databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
@@ -10,7 +10,7 @@ ms.topic: reference
author: stevestein ms.author: sstein ms.reviewer:
-ms.date: 10/15/2020
+ms.date: 01/15/2021
--- # Resource limits for single databases using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
@@ -225,6 +225,37 @@ The [serverless compute tier](serverless-tier-overview.md) is currently availabl
**Note 2**: Latency is 1-2 ms for data on local compute replica SSD, which caches most used data pages. Higher latency for data retrieved from page servers.
+## Hyperscale - provisioned compute - DC-series
+
+|Compute size (service objective)|HS_DC_2|HS_DC_4|HS_DC_6|HS_DC_8|
+|:--- | --: |--: |--: |--: |
+|Compute generation|DC-series|DC-series|DC-series|DC-series|
+|vCores|2|4|6|8|
+|Memory (GB)|9|18|27|36|
+|[RBPEX](service-tier-hyperscale.md#compute) Size|3X Memory|3X Memory|3X Memory|3X Memory|
+|Columnstore support|Yes|Yes|Yes|Yes|
+|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|
+|Max data size (TB)|100 |100 |100 |100 |
+|Max log size (TB)|Unlimited |Unlimited |Unlimited |Unlimited |
+|TempDB max data size (GB)|64|128|192|256|
+|Storage type| [Note 1](#notes) |[Note 1](#notes)|[Note 1](#notes) |[Note 1](#notes) |
+|Max local SSD IOPS *|8000 |16000 |24000 |32000 |
+|Max log rate (MBps)|100 |100 |100 |100 |
+|IO latency (approximate)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|
+|Max concurrent workers (requests)|160|320|480|640|
+|Max concurrent sessions|30,000|30,000|30,000|30,000|
+|Secondary replicas|0-4|0-4|0-4|0-4|
+|Multi-AZ|N/A|N/A|N/A|N/A|
+|Read Scale-out|Yes|Yes|Yes|Yes|
+|Backup storage retention|7 days|7 days|7 days|7 days|
+|||
+
+### Notes
+
+**Note 1**: Hyperscale is a multi-tiered architecture with separate compute and storage components: [Hyperscale Service Tier Architecture](service-tier-hyperscale.md#distributed-functions-architecture)
+
+**Note 2**: Latency is 1-2 ms for data on local compute replica SSD, which caches most used data pages. Higher latency for data retrieved from page servers.
+ ## General purpose - provisioned compute - Gen4 > [!IMPORTANT]
@@ -384,6 +415,32 @@ The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Read Scale-out|N/A|N/A|N/A|N/A|N/A|N/A| |Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
+\* The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
+
+## General purpose - provisioned compute - DC-series
+
+|Compute size (service objective)|GP_DC_2|GP_DC_4|GP_DC_6|GP_DC_8|
+|:---| ---:|---:|---:|---:|
+|Compute generation|DC-series|DC-series|DC-series|DC-series|
+|vCores|2|4|6|8|
+|Memory (GB)|9|18|27|36|
+|Columnstore support|Yes|Yes|Yes|Yes|
+|In-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|
+|Max data size (GB)|1024|1536|3072|3072|
+|Max log size (GB)|307|461|922|922|
+|TempDB max data size (GB)|64|128|192|256|
+|Storage type|Remote SSD|Remote SSD|Remote SSD|Remote SSD|
+|IO latency (approximate)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|5-7 ms (write)<br>5-10 ms (read)|
+|Max data IOPS *|640|1280|1920|2560|
+|Max log rate (MBps)|9|18|27|36|
+|Max concurrent workers (requests)|160|320|480|640|
+|Max concurrent sessions|30,000|30,000|30,000|30,000|
+|Number of replicas|1|1|1|1|
+|Multi-AZ|N/A|N/A|N/A|N/A|
+|Read Scale-out|N/A|N/A|N/A|N/A|
+|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|
++ \* The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance). ## Business critical - provisioned compute - Gen4
@@ -557,6 +614,31 @@ The [serverless compute tier](serverless-tier-overview.md) is currently availabl
> [!IMPORTANT] > Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
+## Business critical - provisioned compute - DC-series
+
+|Compute size (service objective)|BC_DC_2|BC_DC_4|BC_DC_6|BC_DC_8|
+|:--- | --: |--: |--: |--: |
+|Compute generation|DC-series|DC-series|DC-series|DC-series|
+|vCores|2|4|6|8|
+|Memory (GB)|9|18|27|36|
+|Columnstore support|Yes|Yes|Yes|Yes|
+|In-memory OLTP storage (GB)|1.7|3.7|5.9|8.2|
+|Max data size (GB)|768|768|768|768|
+|Max log size (GB)|230|230|230|230|
+|TempDB max data size (GB)|64|128|192|256|
+|Storage type|Local SSD|Local SSD|Local SSD|Local SSD|
+|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
+|Max data IOPS *|14000|28000|42000|56000|
+|Max log rate (MBps)|24|48|72|96|
+|Max concurrent workers (requests)|200|400|600|800|
+|Max concurrent logins|200|400|600|800|
+|Max concurrent sessions|30,000|30,000|30,000|30,000|
+|Number of replicas|4|4|4|4|
+|Multi-AZ|No|No|No|No|
+|Read Scale-out|No|No|No|No|
+|Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|
+
+\* The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance).
## Next steps
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-vcore.md
@@ -9,7 +9,7 @@ ms.topic: conceptual
author: stevestein ms.author: sstein ms.reviewer: sashan, moslake
-ms.date: 09/30/2020
+ms.date: 01/15/2021
--- # vCore model overview - Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
@@ -63,7 +63,7 @@ The [serverless compute tier](serverless-tier-overview.md) auto-scales compute r
## Hardware generations
-Hardware generation options in the vCore model include Gen 4/5, M-series, and Fsv2-series. The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload.
+Hardware generation options in the vCore model include Gen 4/5, M-series, Fsv2-series, and DC-series. The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload.
### Gen4/Gen5
@@ -79,7 +79,6 @@ For regions where Gen4/Gen5 is available, see [Gen4/Gen5 availability](#gen4gen5
Fsv2-series is only supported in the General Purpose tier. For regions where Fsv2-series is available, see [Fsv2-series availability](#fsv2-series-1). - ### M-series - M-series is a memory optimized hardware option for workloads demanding more memory and higher compute limits than provided by Gen5.
@@ -95,6 +94,22 @@ To access M-series, the subscription must be a paid offer type including Pay-As-
To enable M-series hardware for a subscription and region, a support request must be opened. The subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). If the support request is approved, then the selection and provisioning experience of M-series follows the same pattern as for other hardware generations. For regions where M-series is available, see [M-series availability](#m-series). -->
+### DC-series
+
+> [!NOTE]
+> DC-series is currently in **public preview**.
+
+- DC-series hardware uses Intel processors with Software Guard Extensions (Intel SGX) technology.
+- DC-series is required for [Always Encrypted with secure enclaves](https://docs.microsoft.com/sql/relational-databases/security/encryption/always-encrypted-enclaves), which is not supported with other hardware configurations.
+- DC-series is designed for workloads that process sensitive data and demand confidential query processing capabilities, provided by Always Encrypted with secure enclaves.
+- DC-series hardware provides balanced compute and memory resources.
+
+DC-series is only supported for the Provisioned compute tier (Serverless is not supported), and it does not support zone redundancy. For regions where DC-series is available, see [DC-series availability](#dc-series-1).
+
+#### Azure offer types supported by DC-series
+
+To access DC-series, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by DC-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
+ ### Compute and memory specifications
@@ -104,6 +119,7 @@ To enable M-series hardware for a subscription and region, a support request mus
|Gen5 |**Provisioned compute**<br>- Intel® E5-2673 v4 (Broadwell) 2.3-GHz, Intel® SP-8160 (Skylake)\*, and Intel® 8272CL (Cascade Lake) 2.5 GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)<br><br>**Serverless compute**<br>- Intel® E5-2673 v4 (Broadwell) 2.3-GHz and Intel® SP-8160 (Skylake)* processors<br>- Auto-scale up to 40 vCores (1 vCore = 1 hyper-thread)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max| |Fsv2-series |- Intel® 8168 (Skylake) processors<br>- Featuring a sustained all core turbo clock speed of 3.4 GHz and a maximum single core turbo clock speed of 3.7 GHz.<br>- Provision up to 72 vCores (1 vCore = 1 hyper-thread)|- 1.9 GB per vCore<br>- Provision up to 136 GB| |M-series |- Intel® E7-8890 v3 2.5 GHz and Intel® 8280M 2.7 GHz (Cascade Lake) processors<br>- Provision up to 128 vCores (1 vCore = 1 hyper-thread)|- 29 GB per vCore<br>- Provision up to 3.7 TB|
+|DC-series | - Intel XEON E-2288G processors<br>- Featuring Intel Software Guard Extensions (Intel SGX)<br>- Provision up to 8 vCores (1 vCore = 1 physical core) | 4.5 GB per vCore |
\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for databases using Intel® SP-8160 (Skylake) processors appears as Gen6, while hardware generation for databases using Intel® 8272CL (Cascade Lake) appears as Gen7. Resource limits for all Gen5 databases are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
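The DC-series memory figures in the resource-limit tables earlier follow directly from the 4.5 GB-per-vCore ratio in this table; a quick cross-check:

```sql
-- DC-series: 4.5 GB per vCore across the 2, 4, 6, and 8 vCore sizes
-- gives 9, 18, 27, and 36 GB, matching the Memory (GB) rows in the tables above.
SELECT vCores, vCores * 4.5 AS MemoryGB
FROM (VALUES (2), (4), (6), (8)) AS t(vCores);
```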
@@ -222,6 +238,15 @@ On the **Details** page, provide the following:
Approved support requests are typically fulfilled within 5 business days. -->
+#### DC-series
+
+> [!NOTE]
+> DC-series is currently in **public preview**.
+
+DC-series is available in the following regions: Canada Central, Canada East, East US, North Europe, UK South, West Europe, West US.
+
+If you need DC-series in a currently unsupported region, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) following the instructions in [Request quota increases for Azure SQL Database and SQL Managed Instance](quota-increase-request.md).
+ ## Next steps To get started, see:
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/troubleshoot-common-errors-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-errors-issues.md
@@ -15,7 +15,7 @@ ms.date: 01/14/2021
# Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-You receive error messages when the connection to Azure SQL Database or Azure SQL Managed Instance fails. These connection problems can be caused by reconfiguration, firewall settings, a connection timeout, incorrect login information or failure to apply best practices and design guidelines during the [application design](develop-overview.md) process. Additionally, if the maximum limit on some Azure SQL Database or SQL Managed Instance resources is reached, you can no longer connect.
+You receive error messages when the connection to Azure SQL Database or Azure SQL Managed Instance fails. These connection problems can be caused by reconfiguration, firewall settings, a connection timeout, incorrect login information, or failure to apply best practices and design guidelines during the [application design](develop-overview.md) process. Additionally, if the maximum limit on some Azure SQL Database or SQL Managed Instance resources is reached, you can no longer connect.
## Transient fault error messages (40197, 40613 and others)
@@ -37,13 +37,13 @@ The Azure infrastructure has the ability to dynamically reconfigure servers when
### Steps to resolve transient connectivity issues 1. Check the [Microsoft Azure Service Dashboard](https://azure.microsoft.com/status) for any known outages that occurred during the time during which the errors were reported by the application.
-2. Applications that connect to a cloud service such as Azure SQL Database should expect periodic reconfiguration events and implement retry logic to handle these errors instead of surfacing these as application errors to users.
+2. Applications that connect to a cloud service such as Azure SQL Database should expect periodic reconfiguration events and implement retry logic to handle these errors instead of surfacing application errors to users.
3. As a database approaches its resource limits, it can seem to be a transient connectivity issue. See [Resource limits](resource-limits-logical-server.md#what-happens-when-database-resource-limits-are-reached). 4. If connectivity problems continue, or if the duration for which your application encounters the error exceeds 60 seconds or if you see multiple occurrences of the error in a given day, file an Azure support request by selecting **Get Support** on the [Azure Support](https://azure.microsoft.com/support/options) site. #### Implementing Retry Logic
-It is strongly recommended that your client program has retry logic so that it could reestablish a connection after giving the transient fault time to correct itself. We recommend that you delay for 5 seconds before your first retry. Retrying after a delay shorter than 5 seconds risks overwhelming the cloud service. For each subsequent retry the delay should grow exponentially, up to a maximum of 60 seconds.
+It is strongly recommended that your client program has retry logic so that it can reestablish a connection after giving the transient fault time to correct itself. We recommend that you delay for 5 seconds before your first retry. Retrying after a delay shorter than 5 seconds risks overwhelming the cloud service. For each subsequent retry the delay should grow exponentially, up to a maximum of 60 seconds.
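The backoff policy described above can be sketched as follows. This is a minimal illustration, not the official samples; `run_with_retry` and its parameters are hypothetical names, and the `is_transient` predicate stands in for real transient-error classification (for example, matching error codes 40197 or 40613):

```python
import time

def run_with_retry(operation, is_transient, first_delay=5.0,
                   max_delay=60.0, max_retries=6, sleep=time.sleep):
    """Retry operation() on transient errors with exponential backoff.

    Delays grow 5s, 10s, 20s, 40s, then cap at 60s, matching the
    guidance above. Non-transient errors are re-raised immediately.
    """
    delay = first_delay
    for _ in range(max_retries):
        try:
            return operation()
        except Exception as exc:
            if not is_transient(exc):
                raise
            sleep(min(delay, max_delay))
            delay *= 2
    return operation()  # final attempt; let any error propagate
```

The injectable `sleep` parameter is only there so the policy can be unit-tested without real waits; production code can leave the default.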
For code examples of retry logic, see:
@@ -99,49 +99,46 @@ To resolve this issue, contact your service administrator to provide you with a
Typically, the service administrator can use the following steps to add the login credentials: 1. Log in to the server by using SQL Server Management Studio (SSMS).
-2. Run the following SQL query to check whether the login name is disabled:
+2. Run the following SQL query in the master database to check whether the login name is disabled:
```sql
- SELECT name, is_disabled FROM sys.sql_logins
+ SELECT name, is_disabled FROM sys.sql_logins;
``` 3. If the corresponding name is disabled, enable it by using the following statement: ```sql
- Alter login <User name> enable
+ ALTER LOGIN <User name> ENABLE;
```
-4. If the SQL login user name doesn't exist, create it by following these steps:
-
- 1. In SSMS, double-click **Security** to expand it.
- 2. Right-click **Logins**, and then select **New login**.
- 3. In the generated script with placeholders, edit and run the following SQL query:
+4. If the SQL login user name doesn't exist, edit and run the following SQL query to create a new SQL login:
```sql CREATE LOGIN <SQL_login_name, sysname, login_name>
- WITH PASSWORD = '<password, sysname, Change_Password>'
+ WITH PASSWORD = '<password, sysname, Change_Password>';
GO ```
-5. Double-click **Database**.
+5. In SSMS Object Explorer, expand **Databases**.
6. Select the database that you want to grant the user permission to.
-7. Double-click **Security**.
-8. Right-click **Users**, and then select **New User**.
-9. In the generated script with placeholders, edit and run the following SQL query:
+7. Right-click **Security**, and then select **New** > **User**.
+8. In the generated script with placeholders, edit and run the following SQL query:
```sql CREATE USER <user_name, sysname, user_name> FOR LOGIN <login_name, sysname, login_name>
- WITH DEFAULT_SCHEMA = <default_schema, sysname, dbo>
+ WITH DEFAULT_SCHEMA = <default_schema, sysname, dbo>;
GO
- -- Add user to the database owner role
- EXEC sp_addrolemember N'db_owner', N'<user_name, sysname, user_name>'
+ -- Add user to the database owner role
+ EXEC sp_addrolemember N'db_owner', N'<user_name, sysname, user_name>';
GO ```
+ You can also use `sp_addrolemember` to map specific users to specific database roles.
+ > [!NOTE]
- > You can also use `sp_addrolemember` to map specific users to specific database roles.
+ > In Azure SQL Database, consider the newer [ALTER ROLE](/sql/t-sql/statements/alter-role-transact-sql) syntax for managing database role membership.
For more information, see [Managing databases and logins in Azure SQL Database](./logins-create-manage.md).
@@ -178,7 +175,7 @@ To work around this issue, try one of the following methods:
- Verify whether there are long-running queries. > [!NOTE]
- > This is a minimalist approach that might not resolve the issue. For detailed information on troubleshooting query blocking, see [Understand and resolve Azure SQL blocking problems](understand-resolve-blocking.md).
+ > This is a minimalist approach that might not resolve the issue. For detailed information on troubleshooting long-running or blocking queries, see [Understand and resolve Azure SQL Database blocking problems](understand-resolve-blocking.md).
1. Run the following SQL query to check the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) view to see any blocking requests:
@@ -186,12 +183,15 @@ To work around this issue, try one of the following methods:
SELECT * FROM sys.dm_exec_requests; ```
-2. Determine the **input buffer** for the head blocker.
-3. Tune the head blocker query.
+1. Determine the **input buffer** for the head blocker by using the [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) dynamic management function with the session_id of the offending query, for example:
+
+ ```sql
+ SELECT * FROM sys.dm_exec_input_buffer (100,0);
+ ```
- For an in-depth troubleshooting procedure, see [Is my query running fine in the cloud?](/archive/blogs/sqlblog/is-my-query-running-fine-in-the-cloud).
+1. Tune the head blocker query.
-If the database consistently reaches its limit despite addressing blocking and long-running queries, consider upgrading to an edition with more resources [Editions](https://azure.microsoft.com/pricing/details/sql-database/)).
+If the database consistently reaches its limit despite addressing blocking and long-running queries, consider upgrading to an edition with more resources (see [Editions](https://azure.microsoft.com/pricing/details/sql-database/)).
For more information about database limits, see [SQL Database resource limits for servers](./resource-limits-logical-server.md).
@@ -249,12 +249,18 @@ If you repeatedly encounter this error, try to resolve the issue by following th
SELECT * FROM sys.dm_exec_requests; ```
-2. Determine the input buffer for the long-running query.
+2. Determine the **input buffer** for the long-running query by using the [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) dynamic management function with the session_id of the offending query, for example:
+
+ ```sql
+ SELECT * FROM sys.dm_exec_input_buffer (100,0);
+ ```
+ 3. Tune the query.
-Also consider batching your queries. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).
+ > [!NOTE]
+ > For more information on troubleshooting blocking in Azure SQL Database, see [Understand and resolve Azure SQL Database blocking problems](understand-resolve-blocking.md).
-For an in-depth troubleshooting procedure, see [Is my query running fine in the cloud?](/archive/blogs/sqlblog/is-my-query-running-fine-in-the-cloud).
+Also consider batching your queries. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).
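The batching idea can be illustrated with a minimal sketch. Python's built-in sqlite3 module is used here purely as a stand-in for a database connection (the `orders` table and sample data are hypothetical); the same pattern, one parameterized statement executed for many rows inside a single transaction, applies to Azure SQL Database client libraries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

rows = [(i, i * 1.5) for i in range(1, 1001)]

# Batched: one parameterized statement covers all 1,000 rows in a single
# transaction, instead of 1,000 separate INSERTs each with its own commit.
with conn:
    conn.executemany("INSERT INTO orders (id, amount) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

Fewer round trips and fewer commits reduce lock duration and log pressure, which is what makes batching effective against errors 40549 and 40550.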
### Error 40551: The session has been terminated because of excessive TEMPDB usage
@@ -292,7 +298,7 @@ For an in-depth troubleshooting procedure, see [Is my query running fine in the
| Error code | Severity | Description | | ---:| ---:|:--- | | 10928 |20 |Resource ID: %d. The %s limit for the database is %d and has been reached. For more information, see [SQL Database resource limits for single and pooled databases](resource-limits-logical-server.md).<br/><br/>The Resource ID indicates the resource that has reached the limit. For worker threads, the Resource ID = 1. For sessions, the Resource ID = 2.<br/><br/>For more information about this error and how to resolve it, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). |
-| 10929 |20 |Resource ID: %d. The %s minimum guarantee is %d, maximum limit is %d, and the current usage for the database is %d. However, the server is currently too busy to support requests greater than %d for this database. The Resource ID indicates the resource that has reached the limit. For worker threads, the Resource ID = 1. For sessions, the Resource ID = 2. For more information, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). <br/>Otherwise, please try again later. |
+| 10929 |20 |Resource ID: %d. The %s minimum guarantee is %d, maximum limit is %d, and the current usage for the database is %d. However, the server is currently too busy to support requests greater than %d for this database. The Resource ID indicates the resource that has reached the limit. For worker threads, the Resource ID = 1. For sessions, the Resource ID = 2. For more information, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). <br/>Otherwise, try again later. |
| 40544 |20 |The database has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions. For database scaling, see [Scale single database resources](single-database-scale.md) and [Scale elastic pool resources](elastic-pool-scale.md).| | 40549 |16 |Session is terminated because you have a long-running transaction. Try shortening your transaction. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).| | 40550 |16 |The session has been terminated because it has acquired too many locks. Try reading or modifying fewer rows in a single transaction. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).|
@@ -306,14 +312,14 @@ The following errors are related to creating and using elastic pools:
| Error code | Severity | Description | Corrective action | |:--- |:--- |:--- |:--- |
-| 1132 | 17 |The elastic pool has reached its storage limit. The storage usage for the elastic pool cannot exceed (%d) MBs. Attempting to write data to a database when the storage limit of the elastic pool has been reached. For information on resource limits, see: <br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md). <br/> |Consider increasing the DTUs of and/or adding storage to the elastic pool if possible in order to increase its storage limit, reduce the storage used by individual databases within the elastic pool, or remove databases from the elastic pool. For elastic pool scaling, see [Scale elastic pool resources](elastic-pool-scale.md).|
-| 10929 | 16 |The %s minimum guarantee is %d, maximum limit is %d, and the current usage for the database is %d. However, the server is currently too busy to support requests greater than %d for this database. For information on resource limits, see: <br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md). <br/> Otherwise, please try again later. DTU / vCore min per database; DTU / vCore max per database. The total number of concurrent workers (requests) across all databases in the elastic pool attempted to exceed the pool limit. |Consider increasing the DTUs or vCores of the elastic pool if possible in order to increase its worker limit, or remove databases from the elastic pool. |
+| 1132 | 17 |The elastic pool has reached its storage limit. The storage usage for the elastic pool cannot exceed (%d) MBs. Attempting to write data to a database when the storage limit of the elastic pool has been reached. For information on resource limits, see: <br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md). <br/> |Consider increasing the DTUs of and/or adding storage to the elastic pool if possible in order to increase its storage limit, reduce the storage used by individual databases within the elastic pool, or remove databases from the elastic pool. For elastic pool scaling, see [Scale elastic pool resources](elastic-pool-scale.md). For more information on removing unused space from databases, see [Manage file space for databases in Azure SQL Database](file-space-manage.md).|
+| 10929 | 16 |The %s minimum guarantee is %d, maximum limit is %d, and the current usage for the database is %d. However, the server is currently too busy to support requests greater than %d for this database. For information on resource limits, see: <br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md). <br/> Otherwise, try again later. DTU / vCore min per database; DTU / vCore max per database. The total number of concurrent workers (requests) across all databases in the elastic pool attempted to exceed the pool limit. |Consider increasing the DTUs or vCores of the elastic pool if possible in order to increase its worker limit, or remove databases from the elastic pool. |
| 40844 | 16 |Database '%ls' on Server '%ls' is a '%ls' edition database in an elastic pool and cannot have a continuous copy relationship. |N/A | | 40857 | 16 |Elastic pool not found for server: '%ls', elastic pool name: '%ls'. Specified elastic pool does not exist in the specified server. | Provide a valid elastic pool name. | | 40858 | 16 |Elastic pool '%ls' already exists in server: '%ls'. Specified elastic pool already exists in the specified server. | Provide new elastic pool name. | | 40859 | 16 |Elastic pool does not support service tier '%ls'. Specified service tier is not supported for elastic pool provisioning. |Provide the correct edition or leave service tier blank to use the default service tier. | | 40860 | 16 |Elastic pool '%ls' and service objective '%ls' combination is invalid. Elastic pool and service tier can be specified together only if resource type is specified as 'ElasticPool'. |Specify correct combination of elastic pool and service tier. |
-| 40861 | 16 |The database edition '%.*ls' cannot be different than the elastic pool service tier which is '%.*ls'. The database edition is different than the elastic pool service tier. |Do not specify a database edition which is different than the elastic pool service tier. Note that the database edition does not need to be specified. |
+| 40861 | 16 |The database edition '%.*ls' cannot be different than the elastic pool service tier which is '%.*ls'. The database edition is different than the elastic pool service tier. |Do not specify a database edition that is different than the elastic pool service tier. Note that the database edition does not need to be specified. |
| 40862 | 16 |Elastic pool name must be specified if the elastic pool service objective is specified. Elastic pool service objective does not uniquely identify an elastic pool. |Specify the elastic pool name if using the elastic pool service objective. | | 40864 | 16 |The DTUs for the elastic pool must be at least (%d) DTUs for service tier '%.*ls'. Attempting to set the DTUs for the elastic pool below the minimum limit. |Retry setting the DTUs for the elastic pool to at least the minimum limit. | | 40865 | 16 |The DTUs for the elastic pool cannot exceed (%d) DTUs for service tier '%.*ls'. Attempting to set the DTUs for the elastic pool above the maximum limit. |Retry setting the DTUs for the elastic pool to no greater than the maximum limit. |
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/glossary-terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/glossary-terms.md
@@ -11,7 +11,7 @@ ms.topic: reference
author: stevestein ms.author: sstein ms.reviewer:
-ms.date: 01/22/2020
+ms.date: 12/09/2020
--- # Azure SQL Database glossary of terms [!INCLUDE[appliesto-asf](includes/appliesto-asf.md)]
@@ -28,10 +28,10 @@ ms.date: 01/22/2020
|Service tier|Basic, Standard, Premium, General Purpose, Hyperscale, Business Critical|For service tiers in the vCore model, see [SQL Database service tiers](database/service-tiers-vcore.md#service-tiers). For service tiers in the DTU model, see [DTU model](database/service-tiers-dtu.md#compare-the-dtu-based-service-tiers).| |Compute tier|Serverless compute|[Serverless compute](database/service-tiers-vcore.md#compute-tiers) ||Provisioned compute|[Provisioned compute](database/service-tiers-vcore.md#compute-tiers)
-|Compute generation|Gen5, M-series, Fsv2-series|[Hardware generations](database/service-tiers-vcore.md#hardware-generations)
+|Compute generation|Gen5, M-series, Fsv2-series, DC-series|[Hardware generations](database/service-tiers-vcore.md#hardware-generations)
|Server entity| Server |[Logical SQL servers](database/logical-servers.md)| |Resource type|vCore|A CPU core provided to the compute resource for a single database, elastic pool. |
-||Compute size and storage amount|Compute size is the maximum amount of CPU, memory and other non-storage related resources available for a single database, or elastic pool. Storage size is the maximum amount of storage available for a single database, or elastic pool. For sizing options in the vcore model, see [vCore single databases](database/resource-limits-vcore-single-databases.md), and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md). (../managed-instance/resource-limits.md). For sizing options in the DTU model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
+||Compute size and storage amount|Compute size is the maximum amount of CPU, memory and other non-storage related resources available for a single database, or elastic pool. Storage size is the maximum amount of storage available for a single database, or elastic pool. For sizing options in the vCore model, see [vCore single databases](database/resource-limits-vcore-single-databases.md) and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md). For sizing options in the DTU model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
## Azure SQL Managed Instance
@@ -47,7 +47,3 @@ ms.date: 01/22/2020
|Server entity|Managed instance or instance| N/A as the SQL Managed Instance is in itself the server | |Resource type|vCore|A CPU core provided to the compute resource for SQL Managed Instance.| ||Compute size and storage amount|Compute size is the maximum amount of CPU, memory and other non-storage related resources for SQL Managed Instance. Storage size is the maximum amount of storage available for a SQL Managed Instance. For sizing options, [SQL Managed Instances](managed-instance/resource-limits.md). |-
-## SQL on Azure VM
-
-need more stuff here
batch https://docs.microsoft.com/en-us/azure/batch/batch-pool-cloud-service-to-virtual-machine-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-cloud-service-to-virtual-machine-configuration.md
@@ -17,7 +17,7 @@ This article describes how to migrate to 'virtualMachineConfiguration'.
Existing active pools cannot be updated from 'cloudServiceConfiguration' to 'virtualMachineConfiguration', new pools must be created. Creating pools using 'virtualMachineConfiguration' is supported by all Batch APIs, command-line tools, Azure portal, and the Batch Explorer UI.
-The [.NET](tutorial-parallel-dotnet.md) and [Python](tutorial-parallel-python.md) tutorials provide examples of pool creation using 'virtualMachineConfiguration'.
+**The [.NET](tutorial-parallel-dotnet.md) and [Python](tutorial-parallel-python.md) tutorials provide examples of pool creation using 'virtualMachineConfiguration'.**
## Pool configuration differences
batch https://docs.microsoft.com/en-us/azure/batch/batch-rendering-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-applications.md
@@ -5,7 +5,7 @@ ms.date: 09/19/2019
ms.topic: how-to ---
-# Pre-installed applications on rendering VM images
+# Pre-installed applications on Batch rendering VM images
It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed.
@@ -82,4 +82,4 @@ The following list applies to Windows Server 2016, version 1.3.7 rendering image
## Next steps
-To use the rendering VM images, they need to be specified in the pool configuration when a pool is created; see the [Batch pool capabilities for rendering](./batch-rendering-functionality.md#batch-pools).
+To use the rendering VM images, they need to be specified in the pool configuration when a pool is created; see the [Batch pool capabilities for rendering](./batch-rendering-functionality.md).
batch https://docs.microsoft.com/en-us/azure/batch/batch-rendering-functionality https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-functionality.md
@@ -3,7 +3,7 @@ title: Rendering capabilities
description: Standard Azure Batch capabilities are used to run rendering workloads and apps. Batch includes specific features to support rendering workloads. author: mscurrell ms.author: markscu
-ms.date: 08/02/2018
+ms.date: 01/14/2021
ms.topic: how-to ---
@@ -13,7 +13,15 @@ Standard Azure Batch capabilities are used to run rendering workloads and applic
For an overview of Batch concepts, including pools, jobs, and tasks, see [this article](./batch-service-workflow-features.md).
-## Batch Pools
+## Batch pools using custom VM images and standard application licensing
+
+As with other workloads and types of application, a custom VM image can be created with the required rendering applications and plug-ins. The custom VM image is placed in the [Shared Image Gallery](../virtual-machines/shared-image-galleries.md) and [can be used to create Batch Pools](batch-sig-images.md).
+
+The task command line strings will need to reference the applications and paths used when creating the custom VM image.
+
+Most rendering applications will require licenses obtained from a license server. If there's an existing on-premises license server, then both the pool and license server need to be on the same [virtual network](../virtual-network/virtual-networks-overview.md). It is also possible to run a license server on an Azure VM, with the Batch pool and license server VM being on the same virtual network.
+
+## Batch pools using rendering VM images
### Rendering application installation
@@ -66,13 +74,13 @@ Arnold 2017 command line|kick.exe|ARNOLD_2017_EXEC|
|Arnold 2018 command line|kick.exe|ARNOLD_2018_EXEC| |Blender|blender.exe|BLENDER_2018_EXEC|
-### Azure VM families
+## Azure VM families
As with other workloads, rendering application system requirements vary, and performance requirements vary for jobs and projects. A large variety of VM families are available in Azure depending on your requirements: lowest cost, best price/performance, best performance, and so on. Some rendering applications, such as Arnold, are CPU-based; others such as V-Ray and Blender Cycles can use CPUs and/or GPUs. For a description of available VM families and VM sizes, [see VM types and sizes](../virtual-machines/sizes.md).
-### Low-priority VMs
+## Low-priority VMs
As with other workloads, low-priority VMs can be utilized in Batch pools for rendering. Low-priority VMs perform the same as regular dedicated VMs but utilize surplus Azure capacity and are available for a large discount. The tradeoff for using low-priority VMs is that those VMs may not be available to be allocated or may be preempted at any time, depending on available capacity. For this reason, low-priority VMs aren't going to be suitable for all rendering jobs. For example, if images take many hours to render then it's likely that having the rendering of those images interrupted and restarted due to VMs being preempted wouldn't be acceptable.
batch https://docs.microsoft.com/en-us/azure/batch/batch-rendering-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-service.md
@@ -3,7 +3,7 @@ title: Rendering overview
description: Introduction of using Azure for rendering and an overview of Azure Batch rendering capabilities author: mscurrell ms.author: markscu
-ms.date: 08/02/2018
+ms.date: 01/14/2021
ms.topic: how-to ---
@@ -13,11 +13,11 @@ Rendering is the process of taking 3D models and converting them into 2D images.
The rendering workload is heavily used for special effects (VFX) in the Media and Entertainment industry. Rendering is also used in many other industries such as advertising, retail, oil and gas, and manufacturing.
-The process of rendering is computationally intensive; there can be many frames/images to produce and each image can take many hours to render. Rendering is therefore a perfect batch processing workload that can leverage Azure and Azure Batch to run many renders in parallel.
+The process of rendering is computationally intensive; there can be many frames/images to produce and each image can take many hours to render. Rendering is therefore a perfect batch processing workload that can leverage Azure to run many renders in parallel and utilize a wide range of hardware, including GPUs.
## Why use Azure for rendering?
-For many reasons, rendering is a workload perfectly suited for Azure and Azure Batch:
+For many reasons, rendering is a workload perfectly suited for Azure:
* Rendering jobs can be split into many pieces that can be run in parallel using multiple VMs: * Animations consist of many frames and each frame can be rendered in parallel. The more VMs available to process each frame, the faster all the frames and the animation can be produced.
@@ -31,68 +31,31 @@ For many reasons, rendering is a workload perfectly suited for Azure and Azure B
* Choose from a wide selection of hardware according to application, workload, and timeframe: * There's a wide selection of hardware available in Azure that can be allocated and managed with Batch. * Depending on the project, the requirement may be for the best price/performance or the best overall performance. Different scenes and/or rendering applications will have different memory requirements. Some rendering applications can leverage GPUs for the best performance or certain features.
-* Low-priority VMs reduce costs:
- * Low-priority VMs are available for a large discount compared to regular on-demand VMs and are suitable for some job types.
- * Low-priority VMs can be allocated by Azure Batch, with Batch providing flexibility on how they are used to cater for a broad set of requirements. Batch pools can consist of both dedicated and low-priority VMs, with it being possible to change the mix of VM types at any time.
+* Low-priority or [Spot VMs](https://azure.microsoft.com/pricing/spot/) reduce cost:
+ * Low-priority and Spot VMs are available for a large discount compared to standard VMs and are suitable for some job types.
+
+## Existing on-premises rendering environment
-## Options for rendering on Azure
+The most common case is an existing on-premises render farm managed by a render management application such as PipelineFX Qube, Royal Render, Thinkbox Deadline, or a custom application. The requirement is to extend the on-premises render farm capacity using Azure VMs.
-There are a range of Azure capabilities that can be used for rendering workloads. Which capabilities to use depends on any existing environment and requirements.
+Azure infrastructure and services are used to create a hybrid environment where Azure is used to supplement the on-premises capacity. For example:
-### Existing on-premises rendering environment using a render management application
+* Use a [Virtual Network](../virtual-network/virtual-networks-overview.md) to place the Azure resources on the same network as the on-premises render farm.
+* Use [Avere vFXT for Azure](../avere-vfxt/avere-vfxt-overview.md) or [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) to cache source files in Azure to reduce bandwidth use and latency, maximizing performance.
+* Ensure the existing license server is on the virtual network and purchase the additional licenses required to cater for the extra Azure-based capacity.
-The most common case is for there to be an existing on-premises render farm being managed by a render management application such as PipelineFX Qube, Royal Render, or Thinkbox Deadline. The requirement is to extend the on-premises render farm capacity using Azure VMs.
+## No existing render farm
-The render management software either has Azure support built-in or we make available plug-ins that add Azure support. For more information on the supported render managers and functionality enabled, see the article on [using render managers](./batch-rendering-render-managers.md).
+Client workstations may be performing rendering, but the rendering load is increasing and rendering solely on workstation capacity is taking too long.
-### Custom rendering workflow
+There are two main options available:
-The requirement is for VMs to extend an existing render farm. Azure Batch pools can allocate large numbers of VMs, allow low-priority VMs to be used and dynamically auto-scaled with full-priced VMs, and provide pay-for-use licensing for popular rendering applications.
+* Deploy an on-premises render manager, such as Royal Render, and configure a hybrid environment to use Azure when further capacity or performance is required. A render manager is specifically tailored for rendering workloads and will include plug-ins for the popular client applications, enabling easy submission of rendering jobs.
-### No existing render farm
-
-Client workstations may be performing rendering, but the rendering workload is increasing and it is taking too long to solely use workstation capacity. Azure Batch can be used to both allocate render farm compute on-demand as well as schedule the render jobs to the Azure render farm.
-
-## Azure Batch rendering capabilities
-
-Azure Batch allows parallel workloads to be run in Azure. It enables the creation and management of large numbers of VMs on which applications are installed and run. It also provides comprehensive job scheduling capabilities to run instances of those applications, providing the assignment of tasks to VMs, queuing, application monitoring, and so on.
-
-Azure Batch is used for many workloads, but the following capabilities are available to specifically make it easier and quicker to run rendering workloads.
-
-* VM images with pre-installed graphics and rendering applications:
- * Azure Marketplace VM images are available that contain popular graphics and rendering applications, avoiding the need to install the applications yourself or create your own custom images with the applications installed.
-* Pay-per-use licensing for rendering applications:
- * You can choose to pay for the applications by the minute, in addition to paying for the compute VMs, which avoids having to buy licenses and potentially configure a license server for the applications. Paying for use also means that it is possible to cater for varying and unexpected load as there is not a fixed number of licenses.
- * It is also possible to use the pre-installed applications with your own licenses and not use the pay-per-use licensing. To do this, typically you install an on-premises or Azure-based license server and use an Azure virtual network to connect the rendering pool to the license server.
-* Plug-ins for client design and modeling applications:
- * Plug-ins allow end-users to utilize Azure Batch directly from client application, such as Autodesk Maya, enabling them to create pools, submit jobs and make use of more compute capacity to perform faster renders.
-* Render manager integration:
- * Azure Batch is integrated into render management applications or plug-ins are available to provide the Azure Batch integration.
-
-There are several ways to use Azure Batch, all of which also apply to Azure Batch rendering.
-
-* APIs:
- * Write code using the [REST](/rest/api/batchservice), [.NET](/dotnet/api/overview/azure/batch), [Python](/python/api/overview/azure/batch), [Java](/java/api/overview/azure/batch), or other supported APIs. Developers can integrate Azure Batch capabilities into their existing applications or workflow, whether cloud or based on-premises. For example, the [Autodesk Maya plug-in](https://github.com/Azure/azure-batch-maya) utilizes the Batch Python API to invoke Batch, creating and managing pools, submitting jobs and tasks, and monitoring status.
-* Command-line tools:
- * The [Azure command line](/cli/azure/) or [Azure PowerShell](/powershell/azure/) can be used to script Batch use.
- * In particular, the Batch CLI template support makes it much easier to create pools and submit jobs.
-* UIs:
- * [Batch Explorer](https://github.com/Azure/BatchExplorer) is a cross-platform client tool that also allows Batch accounts to be managed and monitored, but provides some richer capabilities compared to the Azure portal UI. A set of pool and job templates are provided that are tailored for each supported application and can be used to easily create pools and to submit jobs.
- * The Azure portal can be used to manage and monitor Azure Batch.
-* Client application plug-ins:
- * Plug-ins are available that allow Batch rendering to be used from directly within the client design and modeling applications. The plug-ins mainly invoke the Batch Explorer application with contextual information about the current 3D model.
- * The following plug-ins are available:
- * [Azure Batch for Maya](https://github.com/Azure/azure-batch-maya)
- * [3ds Max](https://github.com/Azure/azure-batch-rendering/tree/master/plugins/3ds-max)
- * [Blender](https://github.com/Azure/azure-batch-rendering/tree/master/plugins/blender)
-
-## Getting started with Azure Batch rendering
-
-See the following introductory tutorials to try Azure Batch rendering:
-
-* [Use Batch Explorer to render a Blender scene](./tutorial-rendering-batchexplorer-blender.md)
-* [Use the Batch CLI to render an Autodesk 3ds Max scene](./tutorial-rendering-cli.md)
+* A custom solution using Azure Batch to allocate and manage the compute capacity as well as providing the job scheduling to run the render jobs.
## Next steps
-Determine the list of rendering applications and versions included on the Azure Marketplace VM images in [this article](./batch-rendering-applications.md).
+Learn how to [use Azure infrastructure and services to extend an existing on-premises render farm](https://azure.microsoft.com/solutions/high-performance-computing/rendering/).
+
+Learn more about [Azure Batch rendering capabilities](batch-rendering-functionality.md).
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-guestos-msrc-releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
@@ -10,7 +10,7 @@ ms.service: cloud-services
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: tbd
-ms.date: 12/21/2020
+ms.date: 1/15/2021
ms.author: yohaddad
---
@@ -18,35 +18,32 @@ ms.author: yohaddad
The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.

## December 2020 Guest OS
->[!NOTE]
-
->The December Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the December Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
| --- | --- | --- | --- | --- |
-| Rel 20-12 | [4592440] | Latest Cumulative Update | 6.26 | Dec 8, 2020 |
-| Rel 20-12 | [4580325] | Flash update | 3.92, 4.85, 5.50, 6.26 | Oct 13, 2020 |
-| Rel 20-12 | [4586768] | IE Cumulative Updates | 2.105, 3.92, 4.85 | Nov 10, 2020 |
-| Rel 20-12 | [4593226] | Latest Cumulative Update | 5.50 | Dec 8, 2020 |
-| Rel 20-12 | [4052623] | Defender | 5.50, 6.26 | Dec 13, 2020 |
-| Rel 20-12 | [4578952] | .NET Framework 3.5 Security and Quality Rollup | 2.105 | Nov 10, 2020 |
-| Rel 20-12 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | 2.105 | Nov 10, 2020 |
-| Rel 20-12 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | 4.85 | Nov 10, 2020 |
-| Rel 20-12 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | 4.85 | Nov 10, 2020 |
-| Rel 20-12 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | 3.92 | Nov 10, 2020 |
-| Rel 20-12 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | 3.92 | Nov 10, 2020 |
-| Rel 20-12 | [4578966] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.26 | Oct 13, 2020 |
-| Rel 20-12 | [4592471] | Monthly Rollup | 2.105 | Dec 8, 2020 |
-| Rel 20-12 | [4592468] | Monthly Rollup | 3.92 | Dec 8, 2020 |
-| Rel 20-12 | [4592484] | Monthly Rollup | 4.85 | Dec 8, 2020 |
-| Rel 20-12 | [4566426] | Servicing Stack update | 3.92 | Jul 14, 2020 |
-| Rel 20-12 | [4566425] | Servicing Stack update | 4.85 | Jul 14, 2020 |
-| Rel 20-12 OOB | [4578013] | Standalone Security Update | 4.85 | Aug 19, 2020 |
-| Rel 20-12 | [4576750] | Servicing Stack update | 5.50 | Sep 8, 2020 |
-| Rel 20-12 | [4592510] | Servicing Stack update | 2.105 | Dec 8, 2020 |
-| Rel 20-12 | [4587735] | Servicing Stack update | 6.26 | Nov 10, 2020 |
-| Rel 20-12 | [4494175] | Microcode | 5.50 | Sep 1, 2020 |
-| Rel 20-12 | [4494174] | Microcode | 6.26 | Sep 3, 2020 |
+| Rel 20-12 | [4592440] | Latest Cumulative Update | [6.26] | Dec 8, 2020 |
+| Rel 20-12 | [4580325] | Flash update | [3.92], [4.85], [5.50], [6.26] | Oct 13, 2020 |
+| Rel 20-12 | [4586768] | IE Cumulative Updates | [2.105], [3.92], [4.85] | Nov 10, 2020 |
+| Rel 20-12 | [4593226] | Latest Cumulative Update | [5.50] | Dec 8, 2020 |
+| Rel 20-12 | [4052623] | Defender | [5.50], [6.26] | Dec 13, 2020 |
+| Rel 20-12 | [4578952] | .NET Framework 3.5 Security and Quality Rollup | [2.105] | Nov 10, 2020 |
+| Rel 20-12 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | [2.105] | Nov 10, 2020 |
+| Rel 20-12 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | [4.85] | Nov 10, 2020 |
+| Rel 20-12 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | [4.85] | Nov 10, 2020 |
+| Rel 20-12 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | [3.92] | Nov 10, 2020 |
+| Rel 20-12 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.92] | Nov 10, 2020 |
+| Rel 20-12 | [4578966] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.26] | Oct 13, 2020 |
+| Rel 20-12 | [4592471] | Monthly Rollup | [2.105] | Dec 8, 2020 |
+| Rel 20-12 | [4592468] | Monthly Rollup | [3.92] | Dec 8, 2020 |
+| Rel 20-12 | [4592484] | Monthly Rollup | [4.85] | Dec 8, 2020 |
+| Rel 20-12 | [4566426] | Servicing Stack update | [3.92] | Jul 14, 2020 |
+| Rel 20-12 | [4566425] | Servicing Stack update | [4.85] | Jul 14, 2020 |
+| Rel 20-12 OOB | [4578013] | Standalone Security Update | [4.85] | Aug 19, 2020 |
+| Rel 20-12 | [4576750] | Servicing Stack update | [5.50] | Sep 8, 2020 |
+| Rel 20-12 | [4592510] | Servicing Stack update | [2.105] | Dec 8, 2020 |
+| Rel 20-12 | [4587735] | Servicing Stack update | [6.26] | Nov 10, 2020 |
+| Rel 20-12 | [4494175] | Microcode | [5.50] | Sep 1, 2020 |
+| Rel 20-12 | [4494174] | Microcode | [6.26] | Sep 3, 2020 |
[4592440]: https://support.microsoft.com/kb/4592440
[4580325]: https://support.microsoft.com/kb/4580325
@@ -71,7 +68,11 @@ The following tables show the Microsoft Security Response Center (MSRC) updates
[4587735]: https://support.microsoft.com/kb/4587735
[4494175]: https://support.microsoft.com/kb/4494175
[4494174]: https://support.microsoft.com/kb/4494174
-
+[2.105]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.92]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.85]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.50]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.26]: ./cloud-services-guestos-update-matrix.md#family-6-releases
## November 2020 Guest OS
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-guestos-update-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
@@ -10,7 +10,7 @@ ms.service: cloud-services
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: tbd
-ms.date: 1/4/2021
+ms.date: 1/15/2021
ms.author: yohaddad
---

# Azure Guest OS releases and SDK compatibility matrix
@@ -36,6 +36,9 @@ Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **January 15, 2021**
+The December Guest OS has released.
+
###### **December 19, 2020**
The November Guest OS has released.
@@ -143,8 +146,9 @@ The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-6.26_202012-01 | January 15, 2021 | Post 6.28 |
| WA-GUEST-OS-6.25_202011-01 | December 19, 2020 | Post 6.27 |
-| WA-GUEST-OS-6.24_202010-02 | November 17, 2020 | Post 6.26 |
+|~~WA-GUEST-OS-6.24_202010-02~~| November 17, 2020 | January 15, 2021 |
|~~WA-GUEST-OS-6.23_202009-01~~| October 10, 2020 | December 19, 2020 |
|~~WA-GUEST-OS-6.22_202008-02~~| September 5, 2020 | November 17, 2020 |
|~~WA-GUEST-OS-6.21_202007-01~~| August 17, 2020 | October 10, 2020 |
@@ -181,8 +185,9 @@ The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-5.50_202012-01 | January 15, 2021 | Post 5.52 |
| WA-GUEST-OS-5.49_202011-01 | December 19, 2020 | Post 5.51 |
-| WA-GUEST-OS-5.48_202010-02 | November 17, 2020 | Post 5.50 |
+|~~WA-GUEST-OS-5.48_202010-02~~| November 17, 2020 | January 15, 2021 |
|~~WA-GUEST-OS-5.47_202009-01~~| October 10, 2020 | December 19, 2020 |
|~~WA-GUEST-OS-5.46_202008-02~~| September 5, 2020 | November 17, 2020 |
|~~WA-GUEST-OS-5.45_202007-01~~| August 17, 2020 | October 10, 2020 |
@@ -216,8 +221,9 @@ The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-4.85_202012-01 | January 15, 2021 | Post 4.87 |
| WA-GUEST-OS-4.84_202011-01 | December 19, 2020 | Post 4.86 |
-| WA-GUEST-OS-4.83_202010-02 | November 17, 2020 | Post 4.85 |
+|~~WA-GUEST-OS-4.83_202010-02~~| November 17, 2020 | January 15, 2021 |
|~~WA-GUEST-OS-4.82_202009-01~~| October 10, 2020 | December 19, 2020 |
|~~WA-GUEST-OS-4.81_202008-02~~| September 5, 2020 | November 17, 2020 |
|~~WA-GUEST-OS-4.80_202007-01~~| August 17, 2020 | October 10, 2020 |
@@ -251,8 +257,9 @@ The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-3.92_202012-01 | January 15, 2021 | Post 3.94 |
| WA-GUEST-OS-3.91_202011-01 | December 19, 2020 | Post 3.93 |
-| WA-GUEST-OS-3.90_202010-02 | November 17, 2020 | Post 3.92 |
+|~~WA-GUEST-OS-3.90_202010-02~~| November 17, 2020 | January 15, 2021 |
|~~WA-GUEST-OS-3.89_202009-01~~| October 10, 2020 | December 19, 2020 |
|~~WA-GUEST-OS-3.88_202008-02~~| September 5, 2020 | November 17, 2020 |
|~~WA-GUEST-OS-3.87_202007-01~~| August 17, 2020 | October 10, 2020 |
@@ -286,8 +293,9 @@ The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-2.105_202012-01 | January 15, 2021 | Post 2.107 |
| WA-GUEST-OS-2.104_202011-01 | December 19, 2020 | Post 2.106 |
-| WA-GUEST-OS-2.103_202010-02 | November 17, 2020 | Post 2.105 |
+|~~WA-GUEST-OS-2.103_202010-02~~| November 17, 2020 | January 15, 2021 |
|~~WA-GUEST-OS-2.102_202009-01~~| October 10, 2020 | December 19, 2020 |
|~~WA-GUEST-OS-2.101_202008-02~~| September 5, 2020 | November 17, 2020 |
|~~WA-GUEST-OS-2.100_202007-01~~| August 17, 2020 | October 10, 2020 |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/deploy-computer-vision-on-premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/deploy-computer-vision-on-premises.md
@@ -254,6 +254,8 @@ By design, each v3 container has a dispatcher and a recognition worker. The disp
The container receiving the request can split the task into single page sub-tasks, and add them to the universal queue. Any recognition worker from a less busy container can consume single page sub-tasks from the queue, perform recognition, and upload the result to the storage. The throughput can be improved up to `n` times, depending on the number of containers that are deployed.
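The dispatcher-and-worker pattern described above can be sketched as follows. This is an illustrative sketch only, not the container's actual implementation; the function names and the in-process queue are stand-ins for the containers' shared Azure Storage queue:

```python
from queue import Queue

def dispatch(page_count, task_queue):
    """Dispatcher: split a multi-page recognition task into
    single-page sub-tasks on the shared (universal) queue."""
    for page in range(1, page_count + 1):
        task_queue.put(page)

def worker(task_queue, results):
    """Recognition worker: a worker from any less busy container can
    drain sub-tasks from the queue and upload results to storage."""
    while not task_queue.empty():
        page = task_queue.get()
        results.append(f"page-{page}")  # stand-in for real recognition output
```

Because any worker from any deployed container can consume sub-tasks, throughput scales with the number of containers rather than with the single container that received the request.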
+The v3 container exposes the liveness probe API under the `/ContainerLiveness` path. Use the following deployment example to configure a liveness probe for Kubernetes.
+
Copy and paste the following YAML into a file named `deployment.yaml`. Replace the `# {ENDPOINT_URI}` and `# {API_KEY}` comments with your own values. Replace the `# {AZURE_STORAGE_CONNECTION_STRING}` comment with your Azure Storage Connection String. Configure `replicas` to the number you want, which is set to `3` in the following example.

```yaml
@@ -289,6 +291,13 @@ spec:
          value: # {AZURE_STORAGE_CONNECTION_STRING}
        - name: Queue__Azure__ConnectionString
          value: # {AZURE_STORAGE_CONNECTION_STRING}
+ livenessProbe:
+ httpGet:
+ path: /ContainerLiveness
+ port: 5000
+ initialDelaySeconds: 60
+ periodSeconds: 60
+ timeoutSeconds: 20
---
apiVersion: v1
kind: Service
@@ -370,4 +379,4 @@ For more details on installing applications with Helm in Azure Kubernetes Servic
<!-- LINKS - internal --> [vision-container-host-computer]: computer-vision-how-to-install-containers.md#the-host-computer [installing-helm-apps-in-aks]: ../../aks/kubernetes-helm.md
-[cog-svcs-containers]: ../cognitive-services-container-support.md
\ No newline at end of file
+[cog-svcs-containers]: ../cognitive-services-container-support.md
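The liveness probe values in the deployment above determine how long an unresponsive container can run before Kubernetes restarts it. A rough back-of-envelope sketch, assuming the Kubernetes default `failureThreshold` of 3 (not set explicitly in the YAML above):

```python
def worst_case_restart_seconds(initial_delay, period, failure_threshold=3):
    """Rough upper bound on how long a permanently unresponsive container
    runs before the kubelet restarts it: wait out the initial delay, then
    one probe per period until failure_threshold consecutive failures."""
    return initial_delay + failure_threshold * period

# With initialDelaySeconds=60 and periodSeconds=60 as in the YAML above,
# an unresponsive container is restarted within roughly four minutes.
```

Tune `periodSeconds` down if you need faster failure detection, at the cost of more frequent probe traffic to `/ContainerLiveness`.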
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/choose-training-images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/choose-training-images.md
@@ -22,3 +22,6 @@ Additionally, make sure all of your training images meet the following criteria:
* .jpg, .png, .bmp, or .gif format
* no greater than 6MB in size (4MB for prediction images)
* no less than 256 pixels on the shortest edge; any images shorter than this will be automatically scaled up by the Custom Vision Service
+
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
\ No newline at end of file
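The criteria above can be expressed as a simple pre-upload check. The following is a hypothetical sketch (the function and constant names are illustrative, not part of the Custom Vision SDK); it treats the 256-pixel minimum as a hard check even though the service scales shorter images up automatically:

```python
ALLOWED_EXTS = {".jpg", ".png", ".bmp", ".gif"}
MAX_TRAINING_BYTES = 6 * 1024 * 1024   # 6MB limit for training images
MIN_SHORTEST_EDGE = 256                # shorter images are auto-scaled up

def meets_training_criteria(ext, size_bytes, width, height):
    """Return True if an image meets the training criteria listed above:
    supported format, within the size limit, shortest edge at least 256px."""
    return (ext.lower() in ALLOWED_EXTS
            and size_bytes <= MAX_TRAINING_BYTES
            and min(width, height) >= MIN_SHORTEST_EDGE)
```

Running such a check locally before upload avoids failed batches; remember the tighter 4MB limit applies to prediction images.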
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial-od.md
@@ -138,6 +138,9 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region.

[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/CustomVision/ObjectDetection/Program.cs?name=snippet_upload_regions)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial.md
@@ -143,6 +143,9 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
Then define a helper method to upload the images in this directory. You may need to edit the **GetFiles** argument to point to the location where your images are saved.

[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/CustomVision/ImageClassification/Program.cs?name=snippet_loadimages)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial-od.md
@@ -149,6 +149,9 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region.

> [!NOTE]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial.md
@@ -151,6 +151,9 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
[!code-java[](~/cognitive-services-quickstart-code/java/CustomVision/src/main/java/com/microsoft/azure/cognitiveservices/vision/customvision/samples/CustomVisionSamples.java?name=snippet_upload)]

The previous code snippet makes use of two helper functions that retrieve the images as resource streams and upload them to the service (you can upload up to 64 images in a single batch).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial-object-detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial-object-detection.md
@@ -120,6 +120,9 @@ Start a new function to contain all of your Custom Vision function calls. Add th
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag. When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. For this tutorial, the regions are hardcoded inline with the code. The regions specify the bounding box in normalized coordinates, and the coordinates are given in the order: left, top, width, height. You can upload up to 64 images in a single batch.

[!code-javascript[](~/cognitive-services-quickstart-code/javascript/CustomVision/ObjectDetection/CustomVisionQuickstart.js?name=snippet_upload)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial.md
@@ -125,6 +125,9 @@ To create classification tags to your project, add the following code to your fu
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag.

[!code-javascript[](~/cognitive-services-quickstart-code/javascript/CustomVision/ImageClassification/CustomVisionQuickstart.js?name=snippet_upload)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial-od.md
@@ -107,6 +107,9 @@ To create object tags in your project, add the following code:
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region. The regions specify the bounding box in normalized coordinates, and the coordinates are given in the order: left, top, width, height.

[!code-python[](~/cognitive-services-quickstart-code/python/CustomVision/ObjectDetection/CustomVisionQuickstart.py?name=snippet_tagging)]
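Normalized coordinates are just pixel values divided by the image dimensions. As a minimal sketch (a hypothetical helper, not part of the Custom Vision SDK), converting a pixel-space bounding box to the expected (left, top, width, height) order might look like:

```python
def normalize_region(left_px, top_px, width_px, height_px, img_w, img_h):
    """Convert a pixel-space bounding box to normalized
    (left, top, width, height), each value in the range [0, 1]."""
    return (left_px / img_w, top_px / img_h,
            width_px / img_w, height_px / img_h)
```

For example, a 320x240 box at pixel position (160, 120) inside a 640x480 image normalizes to (0.25, 0.25, 0.5, 0.5).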
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial.md
@@ -104,6 +104,9 @@ To add classification tags to your project, add the following code:
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag. You can upload up to 64 images in a single batch.

[!code-python[](~/cognitive-services-quickstart-code/python/CustomVision/ImageClassification/CustomVisionQuickstart.py?name=snippet_upload)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/rest-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/rest-tutorial.md
@@ -98,6 +98,9 @@ You'll get a JSON response like the following. Save the `"id"` value of each tag
Next, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.
+> [!NOTE]
+> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+
Use the following command to upload the images and apply tags; once for the "Hemlock" images, and separately for the "Japanese Cherry" images. See the [Create Images From Data](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb5) API for more options.

:::code language="shell" source="~/cognitive-services-quickstart-code/curl/custom-vision/image-classifier.sh" ID="uploadimages":::
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/includes/quickstart-sdk-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-csharp.md
@@ -4,6 +4,9 @@ description: This quickstart shows how to get started with the QnA Maker client
ms.topic: quickstart
ms.date: 06/18/2020
---
+
+# [QnA Maker GA (stable release)](#tab/version-1)
+
Use the QnA Maker client library for .NET to:

 * Create a knowledgebase
@@ -12,28 +15,65 @@ Use the QnA Maker client library for .NET to:
 * Get prediction runtime endpoint key
 * Wait for long-running task
 * Download a knowledgebase
- * Get answer
+ * Get an answer from a knowledgebase
* Delete a knowledgebase
-[Reference documentation](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/cognitiveservices/Knowledge.QnAMaker) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker/) | [C# Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/QnAMaker/SDK-based-quickstart)
+[Reference documentation](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/cognitiveservices/Knowledge.QnAMaker) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker/2.0.1) | [C# Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/QnAMaker/SDK-based-quickstart)
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+Use the QnA Maker client library for .NET to:
+
+ * Create a knowledgebase
+ * Update a knowledgebase
+ * Publish a knowledgebase
+ * Wait for long-running task
+ * Download a knowledgebase
+ * Get an answer from a knowledgebase
+ * Delete a knowledgebase
+
+[Reference documentation](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker?view=azure-dotnet) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/cognitiveservices/Knowledge.QnAMaker) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker/3.0.0-preview.1) | [C# Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/QnAMaker/Preview-sdk-based-quickstart)
+
+---
[!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]

## Prerequisites
+# [QnA Maker GA (stable release)](#tab/version-1)
+
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
* The [Visual Studio IDE](https://visualstudio.microsoft.com/vs/) or current version of [.NET Core](https://dotnet.microsoft.com/download/dotnet-core).
* Once you have your Azure subscription, create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) in the Azure portal to get your authoring key and resource name. After it deploys, select **Go to resource**.
* You will need the key and resource name from the resource you create to connect your application to the QnA Maker API. You'll paste your key and resource name into the code below later in the quickstart.
* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* The [Visual Studio IDE](https://visualstudio.microsoft.com/vs/) or current version of [.NET Core](https://dotnet.microsoft.com/download/dotnet-core).
+* Once you have your Azure subscription, create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) in the Azure portal to get your authoring key and endpoint.
+ * NOTE: Be sure to select the **Managed** checkbox.
+ * After your QnA Maker resource deploys, select **Go to resource**. You will need the key and endpoint from the resource you create to connect your application to the QnA Maker API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+
+---
+ ## Setting up
-#### [Visual Studio IDE](#tab/visual-studio)
+### Visual Studio IDE
-Using Visual Studio, create a .NET Core application and install the client library by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens select **Browse**, check **Include prerelease**, and search for `Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker`. Select version `2.0.0-preview.1`, and then **Install**.
+# [QnA Maker GA (stable release)](#tab/version-1)
-#### [CLI](#tab/cli)
+Using Visual Studio, create a .NET Core application and install the client library by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens, select **Browse** and search for `Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker`. Select version `2.0.1`, and then **Install**.
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+Using Visual Studio, create a .NET Core application and install the client library by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens, select **Browse**, check **Include prerelease**, and search for `Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker`. Select version `3.0.0-preview.1`, and then **Install**.
+
+---
+
+### CLI
In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `qna-maker-quickstart`. This command creates a simple "Hello World" C# project with a single source file: *program.cs*.
@@ -59,36 +99,90 @@ Build succeeded.
Within the application directory, install the QnA Maker client library for .NET with the following command:
+# [QnA Maker GA (stable release)](#tab/version-1)
+ ```console
-dotnet add package Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker --version 2.0.0-preview.1
+dotnet add package Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker --version 2.0.1
```
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+```console
+dotnet add package Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker --version 3.0.0-preview.1
+```
---
+# [QnA Maker GA (stable release)](#tab/version-1)
+
> [!TIP]
> Want to view the whole quickstart code file at once? You can find it on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/QnAMaker/SDK-based-quickstart/Program.cs), which contains the code examples in this quickstart.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+> [!TIP]
+> Want to view the whole quickstart code file at once? You can find it on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs), which contains the code examples in this quickstart.
+
+---
+
+### Using directives
+ From the project directory, open the *program.cs* file and add the following `using` directives:
-[!code-csharp[Dependencies](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=Dependencies&highlight=1-2)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-csharp[Dependencies](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=Dependencies)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-csharp[Dependencies](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=Dependencies)]
+
+---
+
+### Subscription key and resource endpoints
In the application's `Main` method, add the variables and code shown in the following section to perform the common tasks in this quickstart.
+# [QnA Maker GA (stable release)](#tab/version-1)
+
> [!IMPORTANT]
> Go to the Azure portal and find the key and endpoint for the QnA Maker resource you created in the prerequisites. They will be located on the resource's **key and endpoint** page, under **resource management**.
-> You need the entire key to create your knowledgebase. You need only the resource name from the endpoint. The format is `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
+
+- Create environment variables named QNA_MAKER_SUBSCRIPTION_KEY, QNA_MAKER_ENDPOINT, and QNA_MAKER_RUNTIME_ENDPOINT to store these values.
+- The value of QNA_MAKER_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
+- The value of QNA_MAKER_RUNTIME_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.azurewebsites.net`.
+- For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
[!code-csharp[Set the resource key and resource name](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=Resourcevariables)]
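Reading these values from the environment rather than hard-coding them can be sketched outside C# as well. A minimal Python sketch of the same fail-fast pattern (`load_qna_settings` is a hypothetical helper, not part of any SDK):

```python
import os

def load_qna_settings():
    """Read the QnA Maker credentials from environment variables.

    Failing with a clear message naming the missing variables beats
    failing later with an unauthenticated API call.
    """
    required = [
        "QNA_MAKER_SUBSCRIPTION_KEY",
        "QNA_MAKER_ENDPOINT",
        "QNA_MAKER_RUNTIME_ENDPOINT",
    ]
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError("Set these environment variables: " + ", ".join(missing))
    return {name: os.environ[name] for name in required}
```

The quickstart's C# code follows the same idea with `Environment.GetEnvironmentVariable`.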
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+> [!IMPORTANT]
+> Go to the Azure portal and find the key and endpoint for the QnA Maker resource you created in the prerequisites. They will be located on the resource's **key and endpoint** page, under **resource management**.
+
+- Create environment variables named QNA_MAKER_SUBSCRIPTION_KEY and QNA_MAKER_ENDPOINT to store these values.
+- The value of QNA_MAKER_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
+- For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
+
+[!code-csharp[Set the resource key and resource name](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=Resourcevariables)]
+
+---
## Object models
-QnA Maker uses two different object models:
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[QnA Maker](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker?view=azure-dotnet) uses two different object models:
* **[QnAMakerClient](#qnamakerclient-object-model)** is the object to create, manage, publish, and download the knowledgebase.
* **[QnAMakerRuntime](#qnamakerruntimeclient-object-model)** is the object to query the knowledge base with the GenerateAnswer API and send new suggested questions using the Train API (as part of [active learning](../concepts/active-learning-suggestions.md)).
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[QnA Maker](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker?view=azure-dotnet) uses the following object model:
+* **[QnAMakerClient](#qnamakerclient-object-model)** is the object to create, manage, publish, download, and query the knowledgebase.
+
+---
+
[!INCLUDE [Get KBinformation](./quickstart-sdk-cognitive-model.md)]

### QnAMakerClient object model
@@ -101,14 +195,24 @@ Manage your knowledge base by sending a JSON object. For immediate operations, a
### QnAMakerRuntimeClient object model
+# [QnA Maker GA (stable release)](#tab/version-1)
+
The prediction QnA Maker client is a [QnAMakerRuntimeClient](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.qnamakerruntimeclient?view=azure-dotnet) object. It authenticates to Azure using Microsoft.Rest.ServiceClientCredentials, which contains your prediction runtime key, returned by the authoring client call `client.EndpointKeys.GetKeys` after the knowledgebase is published. Use the [GenerateAnswer](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.runtimeextensions) method to get an answer from the query runtime.
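Under the hood, GenerateAnswer is a single REST call against the runtime endpoint. A hypothetical Python helper that only assembles that call, as a sketch; the route and `EndpointKey` authorization scheme are assumptions based on the GA runtime API, and nothing is sent here:

```python
def build_generate_answer_request(runtime_endpoint, kb_id, endpoint_key,
                                  question, top=1):
    """Assemble the GenerateAnswer call the runtime client issues.

    runtime_endpoint has the form https://YOUR-RESOURCE-NAME.azurewebsites.net,
    and endpoint_key is the key returned by client.EndpointKeys.GetKeys.
    """
    url = f"{runtime_endpoint}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {endpoint_key}",
        "Content-Type": "application/json",
    }
    body = {"question": question, "top": top}
    return url, headers, body
```

The SDK's QnAMakerRuntimeClient wraps exactly this kind of request and deserializes the JSON response for you.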
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+A QnA Maker managed resource does not require the use of the **QnAMakerRuntimeClient** object. Instead, you call the [QnAMakerClient.Knowledgebase](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.qnamakerclient.knowledgebase?view=azure-dotnet-preview).[GenerateAnswerAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.knowledgebaseextensions.generateanswerasync?view=azure-dotnet-preview) method.
+
+---
+
## Code examples

These code snippets show you how to do the following with the QnA Maker client library for .NET:
+# [QnA Maker GA (stable release)](#tab/version-1)
+
* [Authenticate the authoring client](#authenticate-the-client-for-authoring-the-knowledge-base)
* [Create a knowledge base](#create-a-knowledge-base)
* [Update a knowledge base](#update-a-knowledge-base)
@@ -120,14 +224,33 @@ These code snippets show you how to do the following with the QnA Maker client l
* [Authenticate the query runtime client](#authenticate-the-runtime-for-generating-an-answer)
* [Generate an answer from the knowledge base](#generate-an-answer-from-the-knowledge-base)
+# [QnA Maker managed (preview release)](#tab/version-2)
+* [Authenticate the authoring client](#authenticate-the-client-for-authoring-the-knowledge-base)
+* [Create a knowledge base](#create-a-knowledge-base)
+* [Update a knowledge base](#update-a-knowledge-base)
+* [Download a knowledge base](#download-a-knowledge-base)
+* [Publish a knowledge base](#publish-a-knowledge-base)
+* [Delete a knowledge base](#delete-a-knowledge-base)
+* [Get status of an operation](#get-status-of-an-operation)
+* [Generate an answer from the knowledge base](#generate-an-answer-from-the-knowledge-base)
+
+---
## Authenticate the client for authoring the knowledge base

Instantiate a [QnAMakerClient](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.qnamakerclient?view=azure-dotnet) with your endpoint and key: create a [ServiceClientCredentials](/dotnet/api/microsoft.rest.serviceclientcredentials?view=azure-dotnet) object from your key, then use it together with your endpoint to construct the client.
+# [QnA Maker GA (stable release)](#tab/version-1)
+ [!code-csharp[Create QnAMakerClient object with key and endpoint](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=AuthorizationAuthor)]
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-csharp[Create QnAMakerClient object with key and endpoint](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=AuthorizationAuthor)]
+
+---
+
## Create a knowledge base

A knowledge base stores question and answer pairs for the [CreateKbDTO](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.createkbdto?view=azure-dotnet) object from three sources:
@@ -146,7 +269,15 @@ Call the [CreateAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.q
The final line of the following code returns the knowledge base ID from the response from MonitorOperation.
-[!code-csharp[Create a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=CreateKBMethod&highlight=31)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-csharp[Create a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=CreateKBMethod)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-csharp[Create a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=CreateKBMethod)]
+
+---
Make sure to include the [`MonitorOperation`](#get-status-of-an-operation) function, referenced in the code above, in order to successfully create a knowledge base.
@@ -154,7 +285,15 @@ Make sure the include the [`MonitorOperation`](#get-status-of-an-operation) func
You can update a knowledge base by passing in the knowledge base ID and an [UpdatekbOperationDTO](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdto?view=azure-dotnet) containing [add](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtoadd?view=azure-dotnet), [update](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtoupdate?view=azure-dotnet), and [delete](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtodelete?view=azure-dotnet) DTO objects to the [UpdateAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.knowledgebaseextensions.updateasync?view=azure-dotnet) method. Use the [MonitorOperation](#get-status-of-an-operation) method to determine if the update succeeded.
-[!code-csharp[Update a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=UpdateKBMethod&highlight=8)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-csharp[Update a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=UpdateKBMethod)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-csharp[Update a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=UpdateKBMethod)]
+
+---
Make sure to include the [`MonitorOperation`](#get-status-of-an-operation) function, referenced in the code above, in order to successfully update a knowledge base.
@@ -162,15 +301,31 @@ Make sure the include the [`MonitorOperation`](#get-status-of-an-operation) func
Use the [DownloadAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.knowledgebaseextensions.downloadasync?view=azure-dotnet) method to download the database as a list of [QnADocumentsDTO](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.qnadocumentsdto?view=azure-dotnet). This is _not_ equivalent to the QnA Maker portal's export from the **Settings** page because the result of this method is not a file.
-[!code-csharp[Download a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=DownloadKB&highlight=3)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-csharp[Download a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=DownloadKB)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-csharp[Download a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=DownloadKB)]
+
+---
## Publish a knowledge base

Publish the knowledge base using the [PublishAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.knowledgebaseextensions.publishasync?view=azure-dotnet) method. This takes the current saved and trained model, referenced by the knowledge base ID, and publishes it at your endpoint. Publishing is a necessary step before you can query your knowledgebase.
-[!code-csharp[Publish a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=PublishKB&highlight=3)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-csharp[Publish a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=PublishKB)]
+# [QnA Maker managed (preview release)](#tab/version-2)
+[!code-csharp[Publish a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=PublishKB)]
+
+---
+
+# [QnA Maker GA (stable release)](#tab/version-1)
## Get query runtime key
@@ -180,7 +335,7 @@ Use the [EndpointKeys](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.q
Use either of the key properties returned in the object to query the knowledgebase.
-[!code-csharp[Get query runtime key](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=GetQueryEndpointKey&highlight=3)]
+[!code-csharp[Get query runtime key](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=GetQueryEndpointKey)]
A runtime key is necessary to query your knowledgebase.
@@ -196,27 +351,35 @@ Use the QnAMakerRuntimeClient to:
## Generate an answer from the knowledge base
-### [QnA Maker GA (stable release)](#tab/v1)
- Generate an answer from a published knowledgebase using the [RuntimeClient](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.qnamakerclient.knowledgebase?view=azure-dotnet#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_QnAMakerClient_Knowledgebase).[GenerateAnswerAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.runtimeextensions.generateanswerasync?view=azure-dotnet) method. This method accepts the knowledge base ID and the [QueryDTO](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto?view=azure-dotnet). Access additional properties of the QueryDTO, such a [Top](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto.top?view=azure-dotnet#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_Models_QueryDTO_Top) and [Context](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto.context?view=azure-dotnet) to use in your chat bot.
-[!code-csharp[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=GenerateAnswer&highlight=3)]
+[!code-csharp[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=GenerateAnswer)]
+# [QnA Maker managed (preview release)](#tab/version-2)
-### [QnA Maker managed (preview release)](#tab/v2)
+## Generate an answer from the knowledge base
Generate an answer from a published knowledgebase using the [QnAMakerClient.Knowledgebase](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.qnamakerclient.knowledgebase?view=azure-dotnet-preview).[GenerateAnswerAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.knowledgebaseextensions.generateanswerasync?view=azure-dotnet-preview) method. This method accepts the knowledge base ID and the [QueryDTO](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto?view=azure-dotnet-preview). Access additional properties of the QueryDTO, such as [Top](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto.top?view=azure-dotnet#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_Models_QueryDTO_Top), [Context](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto.context?view=azure-dotnet-preview#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_Models_QueryDTO_Context), and [AnswerSpanRequest](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.models.querydto.answerspanrequest?view=azure-dotnet-preview#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_Models_QueryDTO_AnswerSpanRequest) to use in your chat bot.
-[!code-csharp[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=GenerateAnswer&highlight=3)]
+[!code-csharp[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=GenerateAnswer)]
-This is a simple example querying the knowledgebase. To understand advanced querying scenarios, review [other query examples](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md?pivots=url-test-tool-curl#use-curl-to-query-for-a-chit-chat-answer).
+---
+This is a simple example of querying the knowledgebase. To understand advanced querying scenarios, review [other query examples](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md?pivots=url-test-tool-curl#use-curl-to-query-for-a-chit-chat-answer).
## Delete a knowledge base

Delete the knowledgebase using the [DeleteAsync](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.knowledgebaseextensions.deleteasync?view=azure-dotnet) method, passing the knowledge base ID as the parameter.
-[!code-csharp[Delete a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=DeleteKB&highlight=3)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-csharp[Delete a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=DeleteKB)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-csharp[Delete a knowledge base](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=DeleteKB)]
+
+---
## Get status of an operation
@@ -224,7 +387,15 @@ Some methods, such as create and update, can take enough time that instead of wa
The _loop_ and _Task.Delay_ in the following code block simulate retry logic; replace them with your own retry logic.
-[!code-csharp[Monitor an operation](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=MonitorOperation&highlight=10)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-csharp[Monitor an operation](~/cognitive-services-quickstart-code/dotnet/QnAMaker/SDK-based-quickstart/Program.cs?name=MonitorOperation)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-csharp[Monitor an operation](~/cognitive-services-quickstart-code/dotnet/QnAMaker/Preview-sdk-based-quickstart/Program.cs?name=MonitorOperation)]
+
+---
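The monitoring pattern itself, polling until the operation leaves its in-flight states, is language-neutral. A minimal Python sketch of the same loop (`monitor_operation` and its `get_state` callback are hypothetical; the quickstart's C# version wraps the SDK's operation-details call):

```python
import time

def monitor_operation(get_state, delay_seconds=5.0, max_checks=20):
    """Poll get_state until the operation leaves Running/NotStarted.

    get_state is any zero-argument callable returning the operation's
    current state string. Returns the terminal state, or raises if the
    operation is still in flight after max_checks polls.
    """
    for _ in range(max_checks):
        state = get_state()
        if state not in ("Running", "NotStarted"):
            return state
        time.sleep(delay_seconds)
    raise TimeoutError("Operation did not reach a terminal state in time")
```

A fixed delay is used here to mirror the quickstart; production code would typically honor the service's retry hints or use exponential backoff.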
## Run the application
@@ -234,4 +405,12 @@ Run the application with the `dotnet run` command from your application director
```console
dotnet run
```
+# [QnA Maker GA (stable release)](#tab/version-1)
+
The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/QnAMaker/SDK-based-quickstart).
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/QnAMaker/Preview-sdk-based-quickstart).
+
+---
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/includes/quickstart-sdk-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-java.md
@@ -9,6 +9,9 @@ ms.topic: include
ms.date: 09/04/2020
ms.author: v-jawe
---
+
+# [QnA Maker GA (stable release)](#tab/version-1)
+
Use the QnA Maker client library for Java to:

* Create a knowledgebase
@@ -17,21 +20,50 @@ Use the QnA Maker client library for Java to:
* Get prediction runtime endpoint key
* Wait for long-running task
* Download a knowledgebase
-* Get answer
+* Get an answer from a knowledgebase
+* Delete knowledge base
+
+[Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker) | [Package](https://mvnrepository.com/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-qnamaker/1.0.0-beta.1) | [Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/java/qnamaker/sdk/quickstart.java)
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+Use the QnA Maker client library for Java to:
+
+* Create a knowledgebase
+* Update a knowledgebase
+* Publish a knowledgebase
+* Wait for long-running task
+* Download a knowledgebase
+* Get an answer from a knowledgebase
* Delete knowledge base
-[Library source code (authoring)](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker) | [Package](https://mvnrepository.com/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-qnamaker) | [Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/java/qnamaker)
+[Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker) | [Package](https://mvnrepository.com/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-qnamaker/1.0.0-beta.2) | [Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/java/qnamaker/sdk/preview-sdk/quickstart.java)
+
+---
[!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]

## Prerequisites
+# [QnA Maker GA (stable release)](#tab/version-1)
+
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
* [JDK](https://www.oracle.com/java/technologies/javase-downloads.html)
* Once you have your Azure subscription, create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) in the Azure portal to get your authoring key and endpoint. After it deploys, select **Go to resource**.
* You will need the key and endpoint from the resource you create to connect your application to the QnA Maker API. You'll paste your key and endpoint into the code below later in the quickstart.
* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* [JDK](https://www.oracle.com/java/technologies/javase-downloads.html)
+* Once you have your Azure subscription, create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) in the Azure portal to get your authoring key and endpoint.
+ * NOTE: Be sure to select the **Managed** checkbox.
+ * After your QnA Maker resource deploys, select **Go to resource**. You will need the key and endpoint from the resource you create to connect your application to the QnA Maker API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+
+---
+
## Setting up

### Install the client libraries
@@ -42,23 +74,58 @@ After installing Java, you can install the client libraries using [Maven](https:
Create a new file named `quickstart.java` and import the following libraries.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="dependencies":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Dependencies](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=dependencies)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Dependencies](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=dependencies)]
+
+---
Create variables for your resource's Azure endpoint and key.
+# [QnA Maker GA (stable release)](#tab/version-1)
+
> [!IMPORTANT]
> Go to the Azure portal and find the key and endpoint for the QnA Maker resource you created in the prerequisites. They will be located on the resource's **key and endpoint** page, under **resource management**.
-> You need the entire key to create your knowledgebase. You need only the resource name from the endpoint. The format is `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="resourceKeys":::
+- Create environment variables named QNA_MAKER_SUBSCRIPTION_KEY, QNA_MAKER_ENDPOINT, and QNA_MAKER_RUNTIME_ENDPOINT to store these values.
+- The value of QNA_MAKER_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
+- The value of QNA_MAKER_RUNTIME_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.azurewebsites.net`.
+- For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
+
+[!code-java[Resource variables](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=resourceKeys)]
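
As an illustrative sketch only (this is not the sample's code; the placeholder fallback strings are assumptions), reading the environment variables listed above in Java might look like this:

```java
public class ResourceConfig {
    // Read a configuration value from the environment, falling back to a
    // placeholder so the sketch still runs before the variables are set.
    static String getOrDefault(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String subscriptionKey = getOrDefault("QNA_MAKER_SUBSCRIPTION_KEY", "<your-authoring-key>");
        String endpoint = getOrDefault("QNA_MAKER_ENDPOINT", "https://<your-resource-name>.cognitiveservices.azure.com");
        String runtimeEndpoint = getOrDefault("QNA_MAKER_RUNTIME_ENDPOINT", "https://<your-resource-name>.azurewebsites.net");

        System.out.println("Authoring endpoint: " + endpoint);
        System.out.println("Runtime endpoint:   " + runtimeEndpoint);
    }
}
```

Remember that the authoring endpoint and the runtime endpoint are different hosts, as the formats above show.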
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+> [!IMPORTANT]
+> Go to the Azure portal and find the key and endpoint for the QnA Maker resource you created in the prerequisites. They will be located on the resource's **key and endpoint** page, under **resource management**.
+
+- Create environment variables named QNA_MAKER_SUBSCRIPTION_KEY and QNA_MAKER_ENDPOINT to store these values.
+- The value of QNA_MAKER_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
+- For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
+
+[!code-java[Resource variables](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=resourceKeys)]
+
+---
## Object models
+# [QnA Maker GA (stable release)](#tab/version-1)
+QnA Maker uses two different object models:
+
+* **[QnAMakerClient](#qnamakerclient-object-model)** is the object to create, manage, publish, and download the knowledgebase.
+* **[QnAMakerRuntime](#qnamakerruntimeclient-object-model)** is the object to query the knowledge base with the GenerateAnswer API and send new suggested questions using the Train API (as part of [active learning](../concepts/active-learning-suggestions.md)).
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+QnA Maker uses the following object model:
+* **[QnAMakerClient](#qnamakerclient-object-model)** is the object to create, manage, publish, download, and query the knowledgebase.
+
+---
+[!INCLUDE [Get KBinformation](./quickstart-sdk-cognitive-model.md)]
+
+### QnAMakerClient object model
@@ -71,17 +138,33 @@ For immediate operations, a method usually returns the result, if any. For long-
### QnAMakerRuntimeClient object model
+# [QnA Maker GA (stable release)](#tab/version-1)
+ The runtime QnA Maker client is a [QnAMakerRuntimeClient](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/QnAMakerClient.java) object. After you publish your knowledge base using the authoring client, use the runtime client's [generateAnswer](https://github.com/Azure/azure-sdk-for-java/blob/b455a61f4c6daece13590a0f4136bab3c4f30546/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/Runtimes.java#L36) method to get an answer from the knowledge base. You create a runtime client by calling [QnAMakerRuntimeManager.authenticate](https://github.com/Azure/azure-sdk-for-java/blob/b455a61f4c6daece13590a0f4136bab3c4f30546/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/QnAMakerRuntimeManager.java#L29) and passing a runtime endpoint key. To obtain the runtime endpoint key, use the authoring client to call [getKeys](https://github.com/Azure/azure-sdk-for-java/blob/b455a61f4c6daece13590a0f4136bab3c4f30546/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/EndpointKeys.java#L30).
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+A QnA Maker managed resource does not require the use of the QnAMakerRuntimeClient object. Instead, you call [generateAnswer](https://github.com/Azure/azure-sdk-for-java/blob/657e9a47e4b4c7e7e7eee4100273c09468a30c63/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/Knowledgebases.java#L308) directly on the [QnAMakerClient](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/QnAMakerClient.java) object.
+
+---
+## Authenticate the client for authoring the knowledge base
+
+Instantiate a client with your authoring endpoint and subscription key.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="authenticate":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Authenticate](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=authenticate)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Authenticate](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=authenticate)]
+
+---
## Create a knowledge base
@@ -96,7 +179,15 @@ Call the [create](https://github.com/Azure/azure-sdk-for-java/blob/b455a61f4c6da
The final line of the following code returns the knowledge base ID.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="createKb":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Create knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=createKb)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Create knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=createKb)]
+
+---
## Update a knowledge base
@@ -107,22 +198,48 @@ You can update a knowledge base by calling [update](https://github.com/Azure/azu
Pass the `operationId` property of the returned operation to the [getDetails](#get-status-of-an-operation) method to poll for status.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="updateKb":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Update knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=updateKb)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Update knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=updateKb)]
+
+---
+## Download a knowledge base
+
+Use the [download](https://github.com/Azure/azure-sdk-for-java/blob/b455a61f4c6daece13590a0f4136bab3c4f30546/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/Knowledgebases.java#L196) method to download the database as a list of [QnADocumentsDTO](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/models/QnADocumentsDTO.java). This is _not_ equivalent to the QnA Maker portal's export from the **Settings** page because the result of this method is not a TSV file.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="downloadKb":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Download knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=downloadKb)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Download knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=downloadKb)]
+
+---
+## Publish a knowledge base
+
+Publish the knowledge base using the [publish](https://github.com/Azure/azure-sdk-for-java/blob/b455a61f4c6daece13590a0f4136bab3c4f30546/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/Knowledgebases.java#L196) method. This takes the current saved and trained model, referenced by the knowledge base ID, and publishes that at an endpoint.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="publishKb":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Publish knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=publishKb)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Publish knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=publishKb)]
+
+---
## Generate an answer from the knowledge base
+# [QnA Maker GA (stable release)](#tab/version-1)
+ Once a knowledge base is published, you need the runtime endpoint key to query the knowledge base. This is not the same as the subscription key used to create the authoring client. Use the [getKeys](https://github.com/Azure/azure-sdk-for-java/blob/b637366e32edefb0fe63962983715a02c1ad2631/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/EndpointKeys.java#L30) method to get an [EndpointKeysDTO](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/models/EndpointKeysDTO.java) object.
@@ -131,7 +248,15 @@ Create a runtime client by calling [QnAMakerRuntimeManager.authenticate](https:/
Generate an answer from a published knowledge base using the [generateAnswer](https://github.com/Azure/azure-sdk-for-java/blob/b455a61f4c6daece13590a0f4136bab3c4f30546/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/Runtimes.java#L36) method. This method accepts the knowledge base ID and a [QueryDTO](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/models/QueryDTO.java) object.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="queryKb":::
+[!code-java[Query knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=queryKb)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+Generate an answer from a published knowledge base using the [generateAnswer](https://github.com/Azure/azure-sdk-for-java/blob/657e9a47e4b4c7e7e7eee4100273c09468a30c63/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/Knowledgebases.java#L308) method. This method accepts the knowledge base ID and a [QueryDTO](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/models/QueryDTO.java) object.
+
+[!code-java[Query knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=queryKb)]
+
+---
This is a simple example of querying a knowledge base. To understand advanced querying scenarios, review [other query examples](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md?pivots=url-test-tool-curl#use-curl-to-query-for-a-chit-chat-answer).
@@ -139,19 +264,43 @@ This is a simple example of querying a knowledge base. To understand advanced qu
Delete the knowledge base using the [delete](https://github.com/Azure/azure-sdk-for-java/blob/b455a61f4c6daece13590a0f4136bab3c4f30546/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/Knowledgebases.java#L81) method with a parameter of the knowledge base ID.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="deleteKb":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Delete knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=deleteKb)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Delete knowledgebase](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=deleteKb)]
+
+---
+## Get status of an operation
+
+Some methods, such as create and update, can take enough time that instead of waiting for the process to finish, an [operation](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cognitiveservices/ms-azure-cs-qnamaker/src/main/java/com/microsoft/azure/cognitiveservices/knowledge/qnamaker/models/Operation.java) is returned. Use the operation ID from the operation to poll (with retry logic) to determine the status of the original method.
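
The poll-with-retry pattern can be sketched independently of the SDK. In this sketch the status strings and the `Supplier` stand in for calling `getDetails(operationId)` and reading the operation state; they are illustrative assumptions, not the SDK's API:

```java
import java.util.function.Supplier;

public class OperationPoller {
    // Polls a status supplier until it reports a terminal state, doubling the
    // delay between attempts (simple exponential backoff).
    public static String waitForOperation(Supplier<String> getStatus, long initialDelayMs, int maxAttempts) throws InterruptedException {
        long delay = initialDelayMs;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            String state = getStatus.get();
            if (!state.equals("Running") && !state.equals("NotStarted")) {
                return state; // terminal state, e.g. Succeeded or Failed
            }
            Thread.sleep(delay);
            delay *= 2;
        }
        return "Timeout";
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated operation that succeeds on the third poll.
        int[] calls = {0};
        String result = waitForOperation(() -> ++calls[0] < 3 ? "Running" : "Succeeded", 10, 10);
        System.out.println(result); // Succeeded
    }
}
```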
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="waitForOperation":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Wait for operation](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=waitForOperation)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Wait for operation](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=waitForOperation)]
+
+---
+## Run the application
+
+Here is the main method for the application.
-:::code language="java" source="~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java" id="main":::
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-java[Main method](~/cognitive-services-quickstart-code/java/qnamaker/sdk/quickstart.java?name=main)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-java[Main method](~/cognitive-services-quickstart-code/java/qnamaker/sdk/preview-sdk/quickstart.java?name=main)]
+
+---
Run the application as follows. This presumes your class name is `Quickstart` and your dependencies are in a subfolder named `lib` below the current folder. These commands use the Windows classpath separator (`;`); on Linux and macOS, use `:` instead.
@@ -160,4 +309,12 @@ javac Quickstart.java -cp .;lib\*
java -cp .;lib\* Quickstart
```
-The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/java/qnamaker/sdk/quickstart.java).
\ No newline at end of file
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/java/qnamaker/sdk/quickstart.java).
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/java/qnamaker/sdk/preview-sdk/quickstart.java).
+
+---
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/includes/quickstart-sdk-nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-nodejs.md
@@ -5,6 +5,9 @@ ms.topic: quickstart
ms.date: 06/18/2020
ms.custom: devx-track-js
---
+
+# [QnA Maker GA (stable release)](#tab/version-1)
+Use the QnA Maker client library for Node.js to:
+
+* Create a knowledgebase
@@ -13,21 +16,50 @@ Use the QnA Maker client library for Node.js to:
* Get prediction runtime endpoint key
* Wait for long-running task
* Download a knowledgebase
-* Get answer
+* Get an answer from a knowledgebase
* Delete knowledge base
-[Reference documentation](/javascript/api/@azure/cognitiveservices-qnamaker/?view=azure-node-latest) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/cognitiveservices/cognitiveservices-qnamaker) | [Package (npm)](https://www.npmjs.com/package/@azure/cognitiveservices-qnamaker) | [Node.js Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/QnAMaker/sdk/qnamaker_quickstart.js)
+[Reference documentation](https://docs.microsoft.com/javascript/api/@azure/cognitiveservices-qnamaker/?view=azure-node-latest) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/cognitiveservices/cognitiveservices-qnamaker) | [Package (npm)](https://www.npmjs.com/package/@azure/cognitiveservices-qnamaker) | [Node.js Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/QnAMaker/sdk/qnamaker_quickstart.js)
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+Use the QnA Maker client library for Node.js to:
+
+* Create a knowledgebase
+* Update a knowledgebase
+* Publish a knowledgebase
+* Wait for long-running task
+* Download a knowledgebase
+* Get an answer from a knowledgebase
+* Delete knowledge base
+
+[Reference documentation](https://docs.microsoft.com/javascript/api/@azure/cognitiveservices-qnamaker/?view=azure-node-latest) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/cognitiveservices/cognitiveservices-qnamaker) | [Package (npm)](https://www.npmjs.com/package/@azure/cognitiveservices-qnamaker) | [Node.js Samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/QnAMaker/sdk/preview-sdk/quickstart.js)
+
+---
[!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]

## Prerequisites
+# [QnA Maker GA (stable release)](#tab/version-1)
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* The current version of [Node.js](https://nodejs.org).
+* Once you have your Azure subscription, create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) in the Azure portal to get your authoring key and resource. After it deploys, select **Go to resource**.
+    * You will need the key and resource name from the resource you create to connect your application to the QnA Maker API. You'll paste your key and resource name into the code below later in the quickstart.
+    * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* The current version of [Node.js](https://nodejs.org).
+* Once you have your Azure subscription, create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) in the Azure portal to get your authoring key and endpoint.
+ * NOTE: Be sure to select the **Managed** checkbox.
+ * After your QnA Maker resource deploys, select **Go to resource**. You will need the key and endpoint from the resource you create to connect your application to the QnA Maker API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+
+---
+## Setting up
+
+### Create a new Node.js application
@@ -46,6 +78,8 @@ npm init -y
### Install the client library
+# [QnA Maker GA (stable release)](#tab/version-1)
Install the following NPM packages:

```console
@@ -54,27 +88,72 @@ npm install @azure/cognitiveservices-qnamaker-runtime
npm install @azure/ms-rest-js
```
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+Install the following NPM packages:
+
+```console
+npm install @azure/cognitiveservices-qnamaker
+npm install @azure/ms-rest-js
+```
+
+---
+ Your app's `package.json` file is updated with the dependencies. Create a file named index.js and import the following libraries:
+# [QnA Maker GA (stable release)](#tab/version-1)
+ [!code-javascript[Dependencies](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=Dependencies)]
-Create a variable for your resource's Azure key and resource name. Both the authoring and prediction URLs use the resource name as the subdomain.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-javascript[Dependencies](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=Dependencies)]
+
+---
+
+Create a variable for your resource's Azure key and resource name.
+
+# [QnA Maker GA (stable release)](#tab/version-1)
> [!IMPORTANT]
> Go to the Azure portal and find the key and endpoint for the QnA Maker resource you created in the prerequisites. They will be located on the resource's **key and endpoint** page, under **resource management**.
-> You need the entire key to create your knowledgebase. You need only the resource name from the endpoint. The format is `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
+
+- Create environment variables named QNA_MAKER_SUBSCRIPTION_KEY, QNA_MAKER_ENDPOINT, and QNA_MAKER_RUNTIME_ENDPOINT to store these values.
+- The value of QNA_MAKER_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
+- The value of QNA_MAKER_RUNTIME_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.azurewebsites.net`.
+- For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
[!code-javascript[Set the resource key and resource name](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=Resourcevariables)]
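
As an illustrative sketch (the variable names come from the list above; the placeholder fallbacks are assumptions, not the sample's code), the resource variables can be read from the environment like this:

```javascript
// Read configuration from the environment variables created above, falling
// back to placeholders so the sketch runs before the variables are set.
const subscription_key = process.env.QNA_MAKER_SUBSCRIPTION_KEY || "<your-authoring-key>";
const endpoint = process.env.QNA_MAKER_ENDPOINT || "https://<your-resource-name>.cognitiveservices.azure.com";
const runtime_endpoint = process.env.QNA_MAKER_RUNTIME_ENDPOINT || "https://<your-resource-name>.azurewebsites.net";

console.log(`Authoring endpoint: ${endpoint}`);
```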
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+> [!IMPORTANT]
+> Go to the Azure portal and find the key and endpoint for the QnA Maker resource you created in the prerequisites. They will be located on the resource's **key and endpoint** page, under **resource management**.
+
+- Create environment variables named QNA_MAKER_SUBSCRIPTION_KEY and QNA_MAKER_ENDPOINT to store these values.
+- The value of QNA_MAKER_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
+- For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
+
+[!code-javascript[Set the resource key and resource name](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=Resourcevariables)]
+
+---
+ ## Object models
-QnA Maker uses two different object models:
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[QnA Maker](https://docs.microsoft.com/javascript/api/@azure/cognitiveservices-qnamaker/?view=azure-node-latest) uses two different object models:
* **[QnAMakerClient](#qnamakerclient-object-model)** is the object to create, manage, publish, and download the knowledgebase.
* **[QnAMakerRuntime](#qnamakerruntimeclient-object-model)** is the object to query the knowledge base with the GenerateAnswer API and send new suggested questions using the Train API (as part of [active learning](../concepts/active-learning-suggestions.md)).
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[QnA Maker](https://docs.microsoft.com/javascript/api/@azure/cognitiveservices-qnamaker/?view=azure-node-latest) uses the following object model:
+* **[QnAMakerClient](#qnamakerclient-object-model)** is the object to create, manage, publish, download, and query the knowledgebase.
+
+---
### QnAMakerClient object model
@@ -86,13 +165,22 @@ Manage your knowledge base by sending a JSON object. For immediate operations, a
### QnAMakerRuntimeClient object model
+# [QnA Maker GA (stable release)](#tab/version-1)
+The prediction QnA Maker client is a QnAMakerRuntimeClient object. It authenticates to Azure using Microsoft.Rest.ServiceClientCredentials, which contains your prediction runtime key. You get that key from the authoring client call [client.EndpointKeys.getKeys](/javascript/api/@azure/cognitiveservices-qnamaker/endpointkeys?view=azure-node-latest#getkeys-msrest-requestoptionsbase-) after the knowledgebase is published.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+A QnA Maker managed resource does not require the use of the QnAMakerRuntimeClient object. Instead, you call [generateAnswer](https://docs.microsoft.com/javascript/api/@azure/cognitiveservices-qnamaker/knowledgebase?view=azure-node-latest#generateAnswer_string__QueryDTO__msRest_RequestOptionsBase_) directly on the [QnAMakerClient](https://docs.microsoft.com/javascript/api/@azure/cognitiveservices-qnamaker/qnamakerclient?view=azure-node-latest) object.
+
+---
## Code examples

These code snippets show you how to do the following with the QnA Maker client library for Node.js:
+# [QnA Maker GA (stable release)](#tab/version-1)
+* [Authenticate the authoring client](#authenticate-the-client-for-authoring-the-knowledge-base)
+* [Create a knowledge base](#create-a-knowledge-base)
+* [Update a knowledge base](#update-a-knowledge-base)
@@ -104,14 +192,33 @@ These code snippets show you how to do the following with the QnA Maker client l
* [Authenticate the query runtime client](#authenticate-the-runtime-for-generating-an-answer)
* [Generate an answer from the knowledge base](#generate-an-answer-from-the-knowledge-base)
+# [QnA Maker managed (preview release)](#tab/version-2)
+* [Authenticate the authoring client](#authenticate-the-client-for-authoring-the-knowledge-base)
+* [Create a knowledge base](#create-a-knowledge-base)
+* [Update a knowledge base](#update-a-knowledge-base)
+* [Download a knowledge base](#download-a-knowledge-base)
+* [Publish a knowledge base](#publish-a-knowledge-base)
+* [Delete a knowledge base](#delete-a-knowledge-base)
+* [Get status of an operation](#get-status-of-an-operation)
+* [Generate an answer from the knowledge base](#generate-an-answer-from-the-knowledge-base)
+
+---
## Authenticate the client for authoring the knowledge base

Instantiate a client with your endpoint and key. Create a ServiceClientCredentials object with your key, and use it with your endpoint to create a [QnAMakerClient](/javascript/api/@azure/cognitiveservices-qnamaker/qnamakerclient?view=azure-node-latest) object.
+# [QnA Maker GA (stable release)](#tab/version-1)
+ [!code-javascript[Create QnAMakerClient object with key and endpoint](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=AuthorizationAuthor)]
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-javascript[Create QnAMakerClient object with key and endpoint](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=AuthorizationAuthor)]
+
+---
+## Create a knowledge base
+
+A knowledge base stores question and answer pairs for the [CreateKbDTO](/javascript/api/@azure/cognitiveservices-qnamaker/createkbdto?view=azure-node-latest) object from three sources:
@@ -130,15 +237,31 @@ Call the [create](/javascript/api/@azure/cognitiveservices-qnamaker/knowledgebas
When the create method returns, pass the returned operation ID to the [wait_for_operation](#get-status-of-an-operation) method to poll for status. The wait_for_operation method returns when the operation completes. Parse the `resourceLocation` header value of the returned operation to get the new knowledge base ID.
-[!code-javascript[Create a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=CreateKBMethod&highlight=39,46)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-javascript[Create knowledgebase](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=CreateKBMethod)]
-Make sure the include the [`wait_for_operation`](#get-status-of-an-operation) function, referenced in the above code, in order to successfully create a knowledge base.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-javascript[Create knowledgebase](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=CreateKBMethod)]
+
+---
+
+Make sure to include the [`wait_for_operation`](#get-status-of-an-operation) function, referenced in the above code, in order to successfully create a knowledge base.
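
The step of parsing the operation's `resourceLocation` to get the new knowledge base ID can be sketched as follows; the `/knowledgebases/<id>` shape and the helper name are illustrative assumptions:

```javascript
// Extract the new knowledge base ID from the operation returned by create.
// resourceLocation is assumed to have the form "/knowledgebases/<kb-id>".
function knowledgeBaseIdFromOperation(operation) {
    return operation.resourceLocation.replace("/knowledgebases/", "");
}

const operation = { operationState: "Succeeded", resourceLocation: "/knowledgebases/140a46f3-b248-4f1b-9349-614bfd6e5563" };
console.log(knowledgeBaseIdFromOperation(operation)); // 140a46f3-b248-4f1b-9349-614bfd6e5563
```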
## Update a knowledge base

You can update a knowledge base by passing in the knowledge base ID and an [UpdateKbOperationDTO](/javascript/api/@azure/cognitiveservices-qnamaker/updatekboperationdto?view=azure-node-latest) containing [add](/javascript/api/@azure/cognitiveservices-qnamaker/updatekboperationdto?view=azure-node-latest#add), [update](/javascript/api/@azure/cognitiveservices-qnamaker/updatekboperationdto?view=azure-node-latest#update), and [delete](/javascript/api/@azure/cognitiveservices-qnamaker/updatekboperationdto?view=azure-node-latest#deleteproperty) DTO objects to the [update](/javascript/api/@azure/cognitiveservices-qnamaker/knowledgebase?view=azure-node-latest#update-string--updatekboperationdto--msrest-requestoptionsbase-) method. The DTOs are also basically JSON objects. Use the [wait_for_operation](#get-status-of-an-operation) method to determine if the update succeeded.
-[!code-javascript[Update a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=UpdateKBMethod&highlight=74,81)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-javascript[Update a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=UpdateKBMethod)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-javascript[Update a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=UpdateKBMethod)]
+
+---
Make sure to include the [`wait_for_operation`](#get-status-of-an-operation) function, referenced in the above code, in order to successfully update a knowledge base.
@@ -146,17 +269,35 @@ Make sure the include the [`wait_for_operation`](#get-status-of-an-operation) fu
Use the [download](/javascript/api/@azure/cognitiveservices-qnamaker/knowledgebase?view=azure-node-latest#download-string--models-environmenttype--msrest-requestoptionsbase-) method to download the database as a list of [QnADocumentsDTO](/javascript/api/@azure/cognitiveservices-qnamaker/qnadocumentsdto?view=azure-node-latest). This is _not_ equivalent to the QnA Maker portal's export from the **Settings** page because the result of this method is not a TSV file.
-[!code-javascript[Download a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=DownloadKB&highlight=2)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+[!code-javascript[Download a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=DownloadKB)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-javascript[Download a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=DownloadKB)]
+
+---
## Publish a knowledge base

Publish the knowledge base using the [publish](/javascript/api/@azure/cognitiveservices-qnamaker/knowledgebase?view=azure-node-latest#publish-string--msrest-requestoptionsbase-) method. This takes the current saved and trained model, referenced by the knowledge base ID, and publishes that at an endpoint. Check the HTTP response code to validate that the publish succeeded.
-[!code-javascript[Publish a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=PublishKB&highlight=3)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-javascript[Publish a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=PublishKB)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-javascript[Publish a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=PublishKB)]
+
+---
+
+## Query a knowledge base
+# [QnA Maker GA (stable release)](#tab/version-1)
-## Get query runtime key
+### Get query runtime key
Once a knowledgebase is published, you need the query runtime key to query the runtime. This isn't the same key used to create the original client object.
@@ -164,9 +305,9 @@ Use the [EndpointKeys.getKeys](/javascript/api/@azure/cognitiveservices-qnamaker
Use either of the key properties returned in the object to query the knowledgebase.
-[!code-javascript[Get query runtime key](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=GetQueryEndpointKey&highlight=4)]
+[!code-javascript[Get query runtime key](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=GetQueryEndpointKey)]
-## Authenticate the runtime for generating an answer
+### Authenticate the runtime for generating an answer
Create a QnAMakerRuntimeClient to query the knowledge base to generate an answer or train from active learning.
@@ -174,11 +315,21 @@ Create a QnAMakerRuntimeClient to query the knowledge base to generate an answer
Use the QnAMakerRuntimeClient to get an answer from the knowledge base or to send new suggested questions to the knowledge base for [active learning](../concepts/active-learning-suggestions.md).
-## Generate an answer from the knowledge base
+### Generate an answer from the knowledge base
-Generate an answer from a published knowledge base using the RuntimeClient.runtime.generateAnswer method. This method accepts the knowledge base ID and the QueryDTO. Access additional properties of the QueryDTO, such a Top and Context to use in your chat bot.
+Generate an answer from a published knowledge base using the RuntimeClient.runtime.generateAnswer method. This method accepts the knowledge base ID and the [QueryDTO](https://docs.microsoft.com/javascript/api/@azure/cognitiveservices-qnamaker/querydto). Access additional properties of the QueryDTO, such as Top and Context, to use in your chat bot.
-[!code-javascript[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=GenerateAnswer&highlight=3)]
+[!code-javascript[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=GenerateAnswer)]
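The QueryDTO passed to generateAnswer ultimately maps to a JSON request body. The sketch below (plain Python, no SDK required) illustrates that shape; the field names (`question`, `top`, `context.previousQnAId`) follow the GenerateAnswer REST API and should be treated as assumptions, and `build_query_payload` is a hypothetical helper, not part of the SDK.

```python
import json

# Illustrative sketch, not the SDK itself: a QueryDTO-like payload with the
# optional multi-turn Context that a chat bot would use for follow-up prompts.
def build_query_payload(question, top=3, previous_qna_id=None):
    payload = {"question": question, "top": top}
    if previous_qna_id is not None:
        # Context carries multi-turn state (the previous QnA pair's ID).
        payload["context"] = {"previousQnAId": previous_qna_id}
    return payload

body = build_query_payload("How do I manage my knowledgebase?", top=1)
print(json.dumps(body))
```

`top` limits how many candidate answers come back; omit `context` for single-turn queries.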
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+### Generate an answer from the knowledge base
+
+Generate an answer from a published knowledge base using the QnAMakerClient.knowledgebase.generateAnswer method. This method accepts the knowledge base ID and the [QueryDTO](https://docs.microsoft.com/javascript/api/@azure/cognitiveservices-qnamaker/querydto). Access additional properties of the QueryDTO, such as Top and Context, to use in your chat bot.
+
+[!code-javascript[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=GenerateAnswer)]
+
+---
This is a simple example querying the knowledge base. To understand advanced querying scenarios, review [other query examples](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md?pivots=url-test-tool-curl#use-curl-to-query-for-a-chit-chat-answer).
@@ -186,7 +337,15 @@ This is a simple example querying the knowledge base. To understand advanced que
Delete the knowledge base using the [delete](/javascript/api/@azure/cognitiveservices-qnamaker/knowledgebase?view=azure-node-latest#deletemethod-string--msrest-requestoptionsbase-) method with a parameter of the knowledge base ID.
-[!code-javascript[Delete a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=DeleteKB&highlight=3)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-javascript[Delete a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=DeleteKB)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-javascript[Delete a knowledge base](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=DeleteKB)]
+
+---
## Get status of an operation
@@ -194,7 +353,15 @@ Some methods, such as create and update, can take enough time that instead of wa
The _delayTimer_ call in the following code block is used to simulate the retry logic. Replace this with your own retry logic.
-[!code-javascript[Monitor an operation](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=MonitorOperation&highlight=8)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-javascript[Monitor an operation](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/qnamaker_quickstart.js?name=MonitorOperation)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-javascript[Monitor an operation](~/cognitive-services-quickstart-code/javascript/QnAMaker/sdk/preview-sdk/quickstart.js?name=MonitorOperation)]
+
+---
## Run the application
@@ -204,4 +371,12 @@ Run the application with `node index.js` command from your application directory
```console
node index.js
```
-The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/QnAMaker/sdk/qnamaker_quickstart.js).
\ No newline at end of file
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/QnAMaker/sdk/qnamaker_quickstart.js).
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/QnAMaker/sdk/preview-sdk/quickstart.js).
+
+---
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/includes/quickstart-sdk-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-python.md
@@ -5,6 +5,8 @@ ms.topic: include
ms.date: 06/18/2020
---
+# [QnA Maker GA (stable release)](#tab/version-1)
+
Use the QnA Maker client library for Python to:

* Create a knowledgebase
@@ -13,125 +15,252 @@ Use the QnA Maker client library for python to:
* Get prediction runtime endpoint key
* Wait for long-running task
* Download a knowledgebase
-* Get answer
+* Get an answer from a knowledgebase
+* Delete knowledge base
+
+[Reference documentation](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker?view=azure-python) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cognitiveservices/azure-cognitiveservices-knowledge-qnamaker) | [Package (PyPi)](https://pypi.org/project/azure-cognitiveservices-knowledge-qnamaker/0.2.0/) | [Python samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/QnAMaker/sdk/quickstart.py)
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+Use the QnA Maker client library for python to:
+
+* Create a knowledgebase
+* Update a knowledgebase
+* Publish a knowledgebase
+* Wait for long-running task
+* Download a knowledgebase
+* Get an answer from a knowledgebase
* Delete knowledge base
-[Reference documentation](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker?view=azure-python) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cognitiveservices/azure-cognitiveservices-knowledge-qnamaker) | [Package (PyPi)](https://pypi.org/project/azure-cognitiveservices-knowledge-qnamaker/) | [Python samples](https://github.com/Azure-Samples/cognitive-services-qnamaker-python/blob/master/documentation-samples/quickstarts/knowledgebase_quickstart/knowledgebase_quickstart.py)
+[Reference documentation](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker?view=azure-python) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cognitiveservices/azure-cognitiveservices-knowledge-qnamaker) | [Package (PyPi)](https://pypi.org/project/azure-cognitiveservices-knowledge-qnamaker/) | [Python samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/QnAMaker/sdk/preview-sdk/quickstart.py)
+
+---
[!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]

## Prerequisites
+# [QnA Maker GA (stable release)](#tab/version-1)
+
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
* [Python 3.x](https://www.python.org/)
* Once you have your Azure subscription, create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) in the Azure portal to get your authoring key and endpoint. After it deploys, select **Go to resource**.
* You will need the key and endpoint from the resource you create to connect your application to the QnA Maker API. You'll paste your key and endpoint into the code below later in the quickstart.
* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* [Python 3.x](https://www.python.org/)
+* Once you have your Azure subscription, create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) in the Azure portal to get your authoring key and endpoint.
+ * NOTE: Be sure to select the **Managed** checkbox.
+ * After your QnA Maker resource deploys, select **Go to resource**. You will need the key and endpoint from the resource you create to connect your application to the QnA Maker API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+
+---
+
## Setting up

### Install the client library
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+After installing Python, you can install the client library with:
+
+```console
+pip install azure-cognitiveservices-knowledge-qnamaker==0.2.0
+```
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
After installing Python, you can install the client library with:

```console
pip install azure-cognitiveservices-knowledge-qnamaker
```
+---
+
### Create a new Python application

Create a new Python file named `quickstart-file.py` and import the following libraries.
-[!code-python[Dependencies](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=Dependencies&highlight=4,5)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-python[Dependencies](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=Dependencies)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-python[Dependencies](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=Dependencies)]
+
+---
Create variables for your resource's Azure endpoint and key.
+# [QnA Maker GA (stable release)](#tab/version-1)
+
> [!IMPORTANT]
> Go to the Azure portal and find the key and endpoint for the QnA Maker resource you created in the prerequisites. They will be located on the resource's **key and endpoint** page, under **resource management**.
-> You need the entire key to create your knowledgebase. You need only the resource name from the endpoint. The format is `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
+
+- Create environment variables named QNA_MAKER_SUBSCRIPTION_KEY, QNA_MAKER_ENDPOINT, and QNA_MAKER_RUNTIME_ENDPOINT to store these values.
+- The value of QNA_MAKER_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
+- The value of QNA_MAKER_RUNTIME_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.azurewebsites.net`.
+- For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
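The environment variables listed above can be read as in the following sketch. The `setdefault` placeholder values are hypothetical and exist only so the snippet runs standalone; in practice, set real values in your shell or a secret store, never in source code.

```python
import os

# Placeholders so the sketch runs without a configured environment;
# real deployments should set these variables outside the code.
os.environ.setdefault("QNA_MAKER_SUBSCRIPTION_KEY", "<your-authoring-key>")
os.environ.setdefault("QNA_MAKER_ENDPOINT", "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com")
os.environ.setdefault("QNA_MAKER_RUNTIME_ENDPOINT", "https://YOUR-RESOURCE-NAME.azurewebsites.net")

subscription_key = os.environ["QNA_MAKER_SUBSCRIPTION_KEY"]
authoring_endpoint = os.environ["QNA_MAKER_ENDPOINT"]
runtime_endpoint = os.environ["QNA_MAKER_RUNTIME_ENDPOINT"]
print(authoring_endpoint)
```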
[!code-python[Resource variables](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=Resourcevariables)]
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+> [!IMPORTANT]
+> Go to the Azure portal and find the key and endpoint for the QnA Maker resource you created in the prerequisites. They will be located on the resource's **key and endpoint** page, under **resource management**.
+
+- Create environment variables named QNA_MAKER_SUBSCRIPTION_KEY and QNA_MAKER_ENDPOINT to store these values.
+- The value of QNA_MAKER_ENDPOINT has the format `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`.
+- For production, consider using a secure way of storing and accessing your credentials. For example, [Azure key vault](../../../key-vault/general/overview.md) provides secure key storage.
+
+[!code-python[Resource variables](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=Resourcevariables)]
+
+---
+
## Object models
-[QnA Maker](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker?view=azure-python) Maker uses two different object models:
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[QnA Maker](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker?view=azure-python) uses two different object models:
* **[QnAMakerClient](#qnamakerclient-object-model)** is the object to create, manage, publish, and download the knowledgebase.
* **[QnAMakerRuntime](#qnamakerruntimeclient-object-model)** is the object to query the knowledge base with the GenerateAnswer API and send new suggested questions using the Train API (as part of [active learning](../concepts/active-learning-suggestions.md)).
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[QnA Maker](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker?view=azure-python) uses the following object model:
+* **[QnAMakerClient](#qnamakerclient-object-model)** is the object to create, manage, publish, download, and query the knowledgebase.
+
+---
+
[!INCLUDE [Get KB information](./quickstart-sdk-cognitive-model.md)]

### QnAMakerClient object model
-The authoring QnA Maker client is a [QnAMakerClient](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker?view=azure-python) object that authenticates to Azure using Microsoft.Rest.ServiceClientCredentials, which contains your key.
+The authoring QnA Maker client is a [QnAMakerClient](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.qn_amaker_client.qnamakerclient?view=azure-python) object that authenticates to Azure using Microsoft.Rest.ServiceClientCredentials, which contains your key.
-Once the client is created, use the [Knowledge base](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python) property to create, manage, and publish your knowledge base.
+Once the client is created, use the [Knowledge base](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebase_operations.knowledgebaseoperations?view=azure-python) property to create, manage, and publish your knowledge base.
-Manage your knowledge base by sending a JSON object. For immediate operations, a method usually returns a JSON object indicating status. For long-running operations, the response is the operation ID. Call the [operations.get_details](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.operation(class)?view=azure-python#get-details-operation-id--custom-headers-none--raw-false----operation-config-) method with the operation ID to determine the [status of the request](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.operation(class)?view=azure-python).
+Manage your knowledge base by sending a JSON object. For immediate operations, a method usually returns a JSON object indicating status. For long-running operations, the response is the operation ID. Call the [operations.get_details](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebase_operations.knowledgebaseoperations?view=azure-python#get-details-kb-id--custom-headers-none--raw-false----operation-config-) method with the operation ID to determine the [status of the request](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.operationstatetype?view=azure-python).
### QnAMakerRuntimeClient object model
-The prediction QnA Maker client is a [QnAMakerRuntimeClient](/javascript/api/@azure/cognitiveservices-qnamaker-runtime/qnamakerruntimeclient?view=azure-node-latest) object that authenticates to Azure using Microsoft.Rest.ServiceClientCredentials, which contains your prediction runtime key, returned from the authoring client call, [client.EndpointKeysOperations.get_keys](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.endpointkeysoperations?view=azure-python) after the knowledgebase is published.
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+The prediction QnA Maker client is a `QnAMakerRuntimeClient` object that authenticates to Azure using Microsoft.Rest.ServiceClientCredentials, which contains your prediction runtime key, returned from the authoring client call, [client.EndpointKeysOperations.get_keys](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.endpoint_keys_operations.endpointkeysoperations?view=azure-python#get-keys-custom-headers-none--raw-false----operation-config-) after the knowledgebase is published.
+
+Use the `generate_answer` method to get an answer from the query runtime.
-Use the [generate_answer](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker.runtime.-ctor?view=azure-dotnet#Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_Runtime__ctor_Microsoft_Azure_CognitiveServices_Knowledge_QnAMaker_QnAMakerRuntimeClient_#generate-answer-kb-id--generate-answer-payload--custom-headers-none--raw-false----operation-config-) method to get an answer from the query runtime.
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+A QnA Maker managed resource does not require the use of the QnAMakerRuntimeClient object. Instead, you call [generate_answer](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebase_operations.knowledgebaseoperations?view=azure-python#generate-answer-kb-id--generate-answer-payload--custom-headers-none--raw-false----operation-config-) directly on the [QnAMakerClient](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.qn_amaker_client.qnamakerclient?view=azure-python) object.
+
+---
## Authenticate the client for authoring the knowledge base
-Instantiate a client with your endpoint and key. Create an CognitiveServicesCredentials object with your key, and use it with your endpoint to create an [QnAMakerClient](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.qnamakerclient?view=azure-python) object.
+Instantiate a client with your endpoint and key. Create a CognitiveServicesCredentials object with your key, and use it with your endpoint to create a [QnAMakerClient](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.qn_amaker_client.qnamakerclient?view=azure-python) object.
+
+# [QnA Maker GA (stable release)](#tab/version-1)
[!code-python[Authorization to resource key](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=AuthorizationAuthor)]
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-python[Authorization to resource key](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=AuthorizationAuthor)]
+
+---
+ ## Create a knowledge base
- Use the client object to get a [knowledge base operations](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python) object.
+Use the client object to get a [knowledge base operations](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python) object.
-A knowledge base stores question and answer pairs for the [CreateKbDTO](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.create_kb_dto) object from three sources:
+A knowledge base stores question and answer pairs for the [CreateKbDTO](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.createkbdto?view=azure-python) object from three sources:
-* For **editorial content**, use the [QnADTO](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.qnadto?view=azure-python) object.
+* For **editorial content**, use the [QnADTO](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.qnadto?view=azure-python) object.
* To use metadata and follow-up prompts, use the editorial context, because this data is added at the individual QnA pair level.
-* For **files**, use the [FileDTO](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.file_dto) object. The FileDTO includes the filename as well as the public URL to reach the file.
+* For **files**, use the [FileDTO](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.filedto?view=azure-python) object. The FileDTO includes the filename as well as the public URL to reach the file.
* For **URLs**, use a list of strings to represent publicly available URLs.
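The three content sources above can be combined into one create payload. The sketch below uses plain dicts so it runs without the SDK installed; the field names mirror the QnADTO/FileDTO structures named in the text but should be treated as an illustrative assumption, and the file and page URLs are hypothetical examples.

```python
# Illustrative CreateKbDTO-shaped payload assembled from the three sources:
# editorial QnA pairs, files, and public URLs.
create_kb_payload = {
    "name": "Example KB",
    # Editorial content: one entry per QnA pair; metadata and follow-up
    # prompts attach at this per-pair level.
    "qna_list": [
        {
            "id": 0,
            "answer": "You can change the default message in the bot settings.",
            "questions": ["How can I change the default message?"],
            "metadata": [{"name": "category", "value": "api"}],
        }
    ],
    # Files: each entry names the file and a public URL where it can be fetched.
    "files": [
        {"file_name": "faq.tsv", "file_uri": "https://example.com/faq.tsv"}
    ],
    # URLs: plain strings pointing at publicly reachable pages.
    "urls": ["https://example.com/product-faq"],
}

print(len(create_kb_payload["qna_list"]))
```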
-Call the [create](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python) method then pass the returned operation ID to the [Operations.getDetails](#get-status-of-an-operation) method to poll for status.
+Call the [create](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python#create-create-kb-payload--custom-headers-none--raw-false----operation-config-) method, then pass the returned operation ID to the [Operations.getDetails](#get-status-of-an-operation) method to poll for status.
The final line of the following code returns the knowledge base ID from the MonitorOperation response.
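That final step can be sketched as pulling the ID out of the finished operation's resource location. The `/knowledgebases/<id>` path format is an assumption about the service's operation response, and the `operation` dict below is a stub standing in for the object MonitorOperation returns.

```python
# Extract the knowledge base ID from a completed operation's resource location.
def knowledge_base_id_from_operation(operation):
    resource_location = operation["resourceLocation"]
    prefix = "/knowledgebases/"
    if not resource_location.startswith(prefix):
        raise ValueError(f"Unexpected resource location: {resource_location}")
    return resource_location[len(prefix):]

# Stubbed completed operation, standing in for the real monitored response.
operation = {"operationState": "Succeeded", "resourceLocation": "/knowledgebases/abc-123"}
kb_id = knowledge_base_id_from_operation(operation)
print(kb_id)  # abc-123
```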
-[!code-python[Create knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=CreateKBMethod&highlight=36,38)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-python[Create knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=CreateKBMethod)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-python[Create knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=CreateKBMethod)]
+
+---
Make sure to include the [`_monitor_operation`](#get-status-of-an-operation) function, referenced in the above code, in order to successfully create a knowledge base.

## Update a knowledge base
-You can update a knowledge base by passing in the knowledge base ID and an [UpdateKbOperationDTO](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdto?view=azure-python) containing [add](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtoadd?view=azure-python), [update](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtoupdate?view=azure-python), and [delete](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtodelete?view=azure-python) DTO objects to the [update](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebase_operations.knowledgebaseoperations?view=azure-python) method. Use the [Operation.getDetail](#get-status-of-an-operation) method to determine if the update succeeded.
+You can update a knowledge base by passing in the knowledge base ID and an [UpdateKbOperationDTO](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdto?view=azure-python) containing [add](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtoadd?view=azure-python), [update](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtoupdate?view=azure-python), and [delete](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.updatekboperationdtodelete?view=azure-python) DTO objects to the [update](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebase_operations.knowledgebaseoperations?view=azure-python) method. Use the [Operation.getDetail](#get-status-of-an-operation) method to determine if the update succeeded.
-[!code-python[Update a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=UpdateKBMethod&highlight=68,69)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-python[Update a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=UpdateKBMethod)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-python[Update a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=UpdateKBMethod)]
+
+---
Make sure to include the [`_monitor_operation`](#get-status-of-an-operation) function, referenced in the above code, in order to successfully update a knowledge base.

## Download a knowledge base
-Use the [download](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python) method to download the database as a list of [QnADocumentsDTO](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.qnadocumentsdto?view=azure-python). This is _not_ equivalent to the QnA Maker portal's export from the **Settings** page because the result of this method is not a TSV file.
+Use the [download](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python) method to download the database as a list of [QnADocumentsDTO](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.qnadocumentsdto?view=azure-python). This is _not_ equivalent to the QnA Maker portal's export from the **Settings** page because the result of this method is not a TSV file.
+
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-python[Download a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=DownloadKB)]
-[!code-python[Download a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=DownloadKB&highlight=2)]
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-python[Download a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=DownloadKB)]
+
+---
## Publish a knowledge base
-Publish the knowledge base using the [publish](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebase_operations.knowledgebaseoperations?view=azure-python) method. This takes the current saved and trained model, referenced by the knowledge base ID, and publishes that at an endpoint.
+Publish the knowledge base using the [publish](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python#publish-kb-id--custom-headers-none--raw-false----operation-config-) method. This takes the current saved and trained model, referenced by the knowledge base ID, and publishes that at an endpoint.
+
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-python[Publish a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=PublishKB)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
-[!code-python[Publish a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=PublishKB&highlight=2)]
+[!code-python[Publish a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=PublishKB)]
-## Get query runtime key
+---
+
+## Query a knowledge base
+
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+### Get query runtime key
Once a knowledgebase is published, you need the query runtime key to query the runtime. This isn't the same key used to create the original client object.
-Use the [EndpointKeysOperations.get_keys](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.endpointkeysoperations?view=azure-python) method to get the [EndpointKeysDTO](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.endpointkeysdto?view=azure-python) class.
+Use the [EndpointKeysOperations.get_keys](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.endpointkeysoperations?view=azure-python#get-keys-custom-headers-none--raw-false----operation-config-) method to get the [EndpointKeysDTO](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.endpointkeysdto?view=azure-python) class.
Use either of the key properties returned in the object to query the knowledge base.
-[!code-python[Get query runtime key](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=GetQueryEndpointKey&highlight=2)]
+[!code-python[Get query runtime key](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=GetQueryEndpointKey)]
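As a small illustration of "either of the key properties", this sketch prefers the primary endpoint key and falls back to the secondary. The `EndpointKeysDTO` namedtuple is a stand-in for the SDK's DTO; its attribute names are an assumption to verify against the EndpointKeysDTO reference:

```python
# Stand-in for the SDK's EndpointKeysDTO; attribute names assumed from the
# reference docs (primary_endpoint_key / secondary_endpoint_key).
from collections import namedtuple

EndpointKeysDTO = namedtuple(
    "EndpointKeysDTO", ["primary_endpoint_key", "secondary_endpoint_key"]
)


def pick_runtime_key(keys):
    # Either key authorizes runtime queries; prefer the primary,
    # fall back to the secondary if the primary is empty.
    return keys.primary_endpoint_key or keys.secondary_endpoint_key


keys = EndpointKeysDTO("primary-key-value", "secondary-key-value")
print(pick_runtime_key(keys))  # primary-key-value
```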
-
-## Authenticate the runtime for generating an answer
+### Authenticate the runtime for generating an answer
Create a [QnAMakerRuntimeClient](/javascript/api/@azure/cognitiveservices-qnamaker-runtime/qnamakerruntimeclient?view=azure-node-latest) to query the knowledge base to generate an answer or train from active learning.
@@ -139,27 +268,53 @@ Create a [QnAMakerRuntimeClient](/javascript/api/@azure/cognitiveservices-qnamak
Use the QnAMakerRuntimeClient to get an answer from the knowledge base or to send new suggested questions to the knowledge base for [active learning](../concepts/active-learning-suggestions.md).
-## Generate an answer from the knowledge base
+### Generate an answer from the knowledge base
+
+Generate an answer from a published knowledge base using the QnAMakerRuntimeClient.runtime.generate_answer method. This method accepts the knowledge base ID and the [QueryDTO](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.querydto?view=azure-python). Access additional properties of the QueryDTO, such as Top and Context, to use in your chat bot.
+
+[!code-python[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=GenerateAnswer)]
-Generate an answer from a published knowledge base using the RuntimeClient.runtime.generateAnswer method. This method accepts the knowledge base ID and the QueryDTO. Access additional properties of the QueryDTO, such a Top and Context to use in your chat bot.
+# [QnA Maker managed (preview release)](#tab/version-2)
-[!code-python[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=GenerateAnswer&highlight=5)]
+### Generate an answer from the knowledge base
-This is a simple example querying the knowledge base. To understand advanced querying scenarios, review [other query examples](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md?pivots=url-test-tool-curl#use-curl-to-query-for-a-chit-chat-answer).
+Generate an answer from a published knowledge base using the [generate_answer](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python#generate-answer-kb-id--generate-answer-payload--custom-headers-none--raw-false----operation-config-) method. This method accepts the knowledge base ID and the [QueryDTO](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.querydto?view=azure-python). Access additional properties of the QueryDTO, such as Top and Context, to use in your chat bot.
+
+[!code-python[Generate an answer from a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=GenerateAnswer)]
+
+---
+
+This is a simple example of querying the knowledge base. To understand advanced querying scenarios, review [other query examples](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md?pivots=url-test-tool-curl#use-curl-to-query-for-a-chit-chat-answer).
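For orientation, the QueryDTO passed to the generate-answer call boils down to a small payload. This sketch builds it as a plain dict; the field names (`question`, `top`, `strictFilters`) follow the QnA Maker generateAnswer REST body and should be treated as assumptions to check against the QueryDTO reference:

```python
# Sketch of the query payload a QueryDTO represents; field names assumed
# from the generateAnswer REST body.
def build_query(question, top=1, strict_filters=None):
    payload = {"question": question, "top": top}
    if strict_filters:
        # Optional metadata filters narrow which QnA pairs are considered.
        payload["strictFilters"] = strict_filters
    return payload


query = build_query("How do I manage my knowledgebase?", top=3)
print(query)  # {'question': 'How do I manage my knowledgebase?', 'top': 3}
```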
## Delete a knowledge base
-Delete the knowledge base using the [delete](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebase_operations.knowledgebaseoperations?view=azure-python) method with a parameter of the knowledge base ID.
+Delete the knowledge base using the [delete](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.operations.knowledgebaseoperations?view=azure-python#delete-kb-id--custom-headers-none--raw-false----operation-config-) method, passing the knowledge base ID as a parameter.
+
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-python[Delete a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=DeleteKB)]
-[!code-python[Delete a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=DeleteKB&highlight=2)]
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-python[Delete a knowledge base](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=DeleteKB)]
+
+---
## Get status of an operation
-Some methods, such as create and update, can take enough time that instead of waiting for the process to finish, an [operation](/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.operation(class)?view=azure-python) is returned. Use the operation ID from the operation to poll (with retry logic) to determine the status of the original method.
+Some methods, such as create and update, can take enough time that instead of waiting for the process to finish, an [operation](https://docs.microsoft.com/python/api/azure-cognitiveservices-knowledge-qnamaker/azure.cognitiveservices.knowledge.qnamaker.models.operation(class)?view=azure-python) is returned. Use the operation ID from the operation to poll (with retry logic) to determine the status of the original method.
The delay in the following code block is used to simulate asynchronous code. Replace it with retry logic.
-[!code-python[Monitor an operation](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=MonitorOperation&highlight=7)]
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+[!code-python[Monitor an operation](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/quickstart.py?name=MonitorOperation)]
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+[!code-python[Monitor an operation](~/cognitive-services-quickstart-code/python/QnAMaker/sdk/preview-sdk/quickstart.py?name=MonitorOperation)]
+
+---
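The retry logic mentioned above can be sketched independently of the SDK: keep fetching the operation state until it leaves `NotStarted`/`Running`, with a delay between polls. Here a stubbed status sequence stands in for repeated operation-details calls:

```python
# Standalone polling sketch; the stubbed status iterator stands in for
# repeatedly fetching the operation's details from the service.
import time


def wait_for_operation(get_state, delay=0.0, max_polls=30):
    """Poll get_state() until the operation settles or max_polls is hit."""
    for _ in range(max_polls):
        state = get_state()
        if state not in ("NotStarted", "Running"):
            return state  # typically Succeeded or Failed
        time.sleep(delay)  # real code should honor the service's retry hint
    raise TimeoutError("operation did not settle in time")


states = iter(["NotStarted", "Running", "Running", "Succeeded"])
print(wait_for_operation(lambda: next(states)))  # Succeeded
```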
## Run the application
@@ -169,4 +324,12 @@ Run the application with the python command on your quickstart file.
```
python quickstart-file.py
```
-The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/QnAMaker/sdk/quickstart.py).
\ No newline at end of file
+# [QnA Maker GA (stable release)](#tab/version-1)
+
+The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/QnAMaker/sdk/quickstart.py).
+
+# [QnA Maker managed (preview release)](#tab/version-2)
+
+The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/QnAMaker/sdk/preview-sdk/quickstart.py).
+
+---
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/troubleshooting.md
@@ -348,6 +348,31 @@ The disk space for your app service might be full. Steps to fix your disk space:
1. Start the App service.
1. Access your knowledge base to verify it works now.
+</details>
+<details>
+<summary><b>Why is my Application Insights not working?</b></summary>
+
+**Answer**:
+Cross-check and update the following settings to fix the issue:
+
+1. In App Service -> Settings group -> Configuration section -> Application settings, verify that the "UserAppInsightsKey" parameter is configured and set to the GUID shown as "Instrumentation Key" on the Application Insights resource's Overview tab.
+
+1. In App Service -> Settings group -> "Application Insights" section -> Make sure app insights is enabled and connected to respective application insights resource.
+
+</details>
+
+<details>
+<summary><b>My Application Insights is enabled but why is it not working properly?</b></summary>
+
+**Answer**:
+Follow these steps:
+
+1. Copy the value of the "APPINSIGHTS_INSTRUMENTATIONKEY" setting into the 'UserAppInsightsKey' setting, overriding any value already present there.
+
+1. If the 'UserAppInsightsKey' key does not exist in app settings, add a new key with that name and copy the value into it.
+
+1. Save the settings. The app service restarts automatically, which should resolve the issue.
</details>

# [QnA Maker managed (preview release)](#tab/v2)
@@ -520,4 +545,4 @@ When you create your QnA Maker service, you selected an Azure region. Your knowl
</details> \ No newline at end of file
+---
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/get-speech-sdk-cpp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/get-speech-sdk-cpp.md
@@ -26,20 +26,6 @@ The C++ Speech SDK can be installed from the **Package Manager** with the follow
```
Install-Package Microsoft.CognitiveServices.Speech
```
-#### C++ binaries and header files
-
-Alternatively, the C++ Speech SDK can be installed from binaries. Download the SDK as a <a href="https://aka.ms/csspeech/linuxbinary" target="_blank">.tar package <span class="docon docon-navigate-external x-hidden-focus"></span></a> and unpack the files in a directory of your choice. The contents of this package (which include header files for both x86 and x64 target architectures) are structured as follows:
-
- | Path | Description |
- |------------------------|------------------------------------------------------|
- | `license.md` | License |
- | `ThirdPartyNotices.md` | Third-party notices |
- | `include` | Header files for C++ |
- | `lib/x64` | Native x64 library for linking with your application |
- | `lib/x86` | Native x86 library for linking with your application |
-
- To create an application, copy or move the required binaries (and libraries) into your development environment. Include them as required in your build process.
- #### Additional resources - <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/cpp" target="_blank">Windows, Linux, and macOS quickstart C++ source code <span class="docon docon-navigate-external x-hidden-focus"></span></a>\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/plan-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/plan-manage-costs.md
@@ -16,7 +16,7 @@ This article describes how you plan for and manage costs for Azure Cognitive Ser
## Prerequisites
-Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](https://docs.microsoft.com/azure/cost-management-billing/costs/understand-cost-mgt-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](https://docs.microsoft.com/azure/cost-management/assign-access-acm-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
<!--Note for Azure service writer: If you have other prerequisites for your service, insert them here -->
@@ -62,13 +62,13 @@ You can pay for Cognitive Services charges with your EA monetary commitment cred
## Create budgets
-You can create [budgets](https://docs.microsoft.com/azure/cost-management/tutorial-acm-create-budgets?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](https://docs.microsoft.com/azure/cost-management/cost-mgt-alerts-monitor-usage-spending?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+You can create [budgets](../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you when create a budget, see [Group and filter options](https://docs.microsoft.com/azure/cost-management-billing/costs/group-filter?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Export cost data
-You can also [export your cost data](https://docs.microsoft.com/azure/cost-management-billing/costs/tutorial-export-acm-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis for costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
<!-- ## Other ways to manage and reduce costs for Cognitive Services
@@ -79,7 +79,7 @@ Work with Dean to complete this section in 2021.
## Next steps -- Learn [how to optimize your cloud investment with Azure Cost Management](https://docs.microsoft.com/azure/cost-management-billing/costs/cost-mgt-best-practices?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Learn more about managing costs with [cost analysis](https://docs.microsoft.com/azure/cost-management-billing/costs/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Learn about how to [prevent unexpected costs](https://docs.microsoft.com/azure/cost-management-billing/manage/getting-started?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/manage/getting-started.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Take the [Cost Management](https://docs.microsoft.com/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/metrics.md
@@ -25,7 +25,7 @@ There are two types of requests that are represented within Communication Servic
Both Chat and SMS API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`.
-More information on supported aggregation types and time series aggregations can be found [Advanced features of Azure Metrics Explorer](../../azure-monitor/platform/metrics-charts.md#changing-aggregation)
+More information on supported aggregation types and time series aggregations can be found [Advanced features of Azure Metrics Explorer](../../azure-monitor/platform/metrics-charts.md#aggregation)
- **Operation** - All operations or routes that can be called on the ACS Chat gateway.
- **Status Code** - The status code response sent after the request.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/includes/private-preview-include https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/includes/private-preview-include.md
@@ -6,5 +6,5 @@ ms.date: 9/1/2020
ms.author: mikben
---
> [!IMPORTANT]
-> Functionality described on this document is currently in private preview.
+> Functionality described in this document is currently in private preview. Private preview includes access to client libraries and documentation for testing purposes that are not yet publicly available.
> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/meeting-interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
@@ -25,11 +25,7 @@ Get started with Azure Communication Services by connecting your chat solution t
A Communication Services user that joins a Teams meeting as a guest user can access the meeting's chat only when they've joined the Teams meeting call. See the [Teams interop](../voice-video-calling/get-started-teams-interop.md) documentation to learn how to add a Communication Services user to a Teams meeting call.
-The Teams interoperability feature is currently in private preview. To enable this feature for your Communication Services resource, please email acsfeedback@microsoft.com with:
-1. The Subscription ID of the Azure subscription that contains your Communication Services resource.
-2. Your Teams tenant ID. The easiest way to obtain this is to obtain and share a link to the Team.
-
-You must be a member of the owning organization of both entities to use this feature.
+You must be a member of the owning organization of both entities to use this feature.
[!INCLUDE [Join Teams meetings](./includes/meeting-interop-javascript.md)]
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/includes/teams-interop-javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/includes/teams-interop-javascript.md
@@ -12,14 +12,6 @@ ms.service: azure-communication-services
- A working [Communication Services calling app](../getting-started-with-calling.md).
- A [Teams deployment](/deployoffice/teams-install).
-## Enable Teams Interoperability
-
-The Teams interoperability feature is currently in private preview. To enable this feature for your Communication Services resource, please email [acsfeedback@microsoft.com](mailto:acsfeedback@microsoft.com) with:
-
-1. The Subscription ID of the Azure subscription that contains your Communication Services resource.
-2. Your Teams tenant ID. The easiest way to obtain this is to [obtain and share a link to the Team](https://support.microsoft.com/office/create-a-link-or-a-code-for-joining-a-team-11b0de3b-9288-4cb4-bc49-795e7028296f).
-
-You must be a member of the owning organization of both entities to use this feature.
## Add the Teams UI controls
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-tasks-cross-registry-authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tasks-cross-registry-authentication.md
@@ -38,8 +38,8 @@ For demonstration purposes, as a one-time operation, run [az acr import][az-acr-
```azurecli
az acr import --name mybaseregistry \
- --source docker.io/library/node:9-alpine \
- --image baseimages/node:9-alpine
+ --source docker.io/library/node:15-alpine \
+ --image baseimages/node:15-alpine
```

## Define task steps in YAML file
@@ -185,8 +185,8 @@ Waiting for an agent...
2019/06/14 22:47:45 Launching container with name: acb_step_0
Sending build context to Docker daemon 25.6kB
Step 1/6 : ARG REGISTRY_NAME
-Step 2/6 : FROM ${REGISTRY_NAME}/baseimages/node:9-alpine
-9-alpine: Pulling from baseimages/node
+Step 2/6 : FROM ${REGISTRY_NAME}/baseimages/node:15-alpine
+15-alpine: Pulling from baseimages/node
[...]
Successfully built 41b49a112663
Successfully tagged myregistry.azurecr.io/hello-world:cf10
@@ -206,7 +206,7 @@ The push refers to repository [myregistry.azurecr.io/hello-world]
runtime-dependency: registry: mybaseregistry.azurecr.io repository: baseimages/node
- tag: 9-alpine
+ tag: 15-alpine
digest: sha256:e8e92cffd464fce3be9a3eefd1b65dc9cbe2484da31c11e813a4effc6105c00f git: git-head-revision: 0f988779c97fe0bfc7f2f74b88531617f4421643
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/autoscale-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/autoscale-faq.md
@@ -43,7 +43,7 @@ Yes. When you purchase reserved capacity for accounts with single write regions,
Multi-write region reserved capacity works the same for autoscale and standard (manual) provisioned throughput. See [Azure Cosmos DB reserved capacity](cosmos-db-reserved-capacity.md).

### Does autoscale work with free tier?
-Yes. In free tier, you can use autoscale throughput on a container. Support for autoscale shared throughput databases with custom max RU/s is not yet available. See how [free tier billing works with autoscale](understand-your-bill.md#billing-examples-with-free-tier-accounts).
+Yes. In free tier, you can use autoscale throughput on a container. Support for autoscale shared throughput databases with custom max RU/s is not yet available. See how [free tier billing works with autoscale](understand-your-bill.md#azure-free-tier).
### Is autoscale supported for all APIs?

Yes, autoscale is supported for all APIs: Core (SQL), Gremlin, Table, Cassandra, and API for MongoDB.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
@@ -251,7 +251,8 @@ The following table lists the limits for the [Try Azure Cosmos DB for Free](http
Try Cosmos DB supports global distribution in only the Central US, North Europe, and Southeast Asia regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
-## Free tier account limits
+## Azure Cosmos DB free tier account limits
The following table lists the limits for [Azure Cosmos DB free tier accounts](optimize-dev-test.md#azure-cosmos-db-free-tier).

| Resource | Default limit |
@@ -263,7 +264,10 @@ The following table lists the limits for [Azure Cosmos DB free tier accounts.](o
| Maximum number of shared throughput databases | 5 |
| Maximum number of containers in a shared throughput database | 25 <br>In free tier accounts, the minimum RU/s for a shared throughput database with up to 25 containers is 400 RU/s. |
- In addition to the above, the [Per-account limits](#per-account-limits) also apply to free tier accounts.
+In addition to the above, the [Per-account limits](#per-account-limits) also apply to free tier accounts.
+
+> [!NOTE]
+> Azure Cosmos DB free tier is different from the Azure free account. The Azure free account offers Azure credits and resources for free for a limited time. When using Azure Cosmos DB as a part of this free account, you get 25-GB storage and 400 RU/s of provisioned throughput for 12 months.
## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/create-cosmosdb-resources-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cosmosdb-resources-portal.md
@@ -49,7 +49,7 @@ Go to the [Azure portal](https://portal.azure.com/) to create an Azure Cosmos DB
|Account Name|A unique name|Enter a name to identify your Azure Cosmos account. Because *documents.azure.com* is appended to the name that you provide to create your URI, use a unique name.<br><br>The name can only contain lowercase letters, numbers, and the hyphen (-) character. It must be between 3-31 characters in length.|
|API|The type of account to create|Select **Core (SQL)** to create a document database and query by using SQL syntax. <br><br>The API determines the type of account to create. Azure Cosmos DB provides five APIs: Core (SQL) and MongoDB for document data, Gremlin for graph data, Azure Table, and Cassandra. Currently, you must create a separate account for each API. <br><br>[Learn more about the SQL API](introduction.md).|
|Capacity mode|Provisioned throughput or Serverless|Select **Provisioned throughput** to create an account in [provisioned throughput](set-throughput.md) mode. Select **Serverless** to create an account in [serverless](serverless.md) mode.|
- |Apply Free Tier Discount|Apply or Do not apply|With Azure Cosmos DB free tier, you will get the first 400 RU/s and 5 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/).|
+ |Apply Azure Cosmos DB free tier discount|Apply or Do not apply|With Azure Cosmos DB free tier, you will get the first 400 RU/s and 5 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/).|
|Location|The region closest to your users|Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data.|
|Account Type|Production or Non-Production|Select **Production** if the account will be used for a production workload. Select **Non-Production** if the account will be used for non-production, e.g. development, testing, QA, or staging. This is an Azure resource tag setting that tunes the Portal experience but does not affect the underlying Azure Cosmos DB account. You can change this value anytime.|
|Geo-Redundancy|Enable or Disable|Enable or disable global distribution on your account by pairing your region with a pair region. You can add more regions to your account later.|
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/how-pricing-works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-pricing-works.md
@@ -34,7 +34,7 @@ Azure Cosmos DB offers many options for developers to use it for free. These options
* **Azure Cosmos DB free tier**: Azure Cosmos DB free tier makes it easy to get started, develop and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 400 RU/s and 5 GB of storage in the account free, for the lifetime of the account. You can have up to one free tier account per Azure subscription and must opt-in when creating the account. To get started, [create a new account in Azure portal with free tier enabled](create-cosmosdb-resources-portal.md) or use an [ARM Template](./manage-with-templates.md#free-tier).
-* **Azure free account**: Azure offers a [free tier](https://azure.microsoft.com/free/) that gives you $200 in Azure credits for the first 30 days and a limited quantity of free services for 12 months. For more information, see [Azure free account](../cost-management-billing/manage/avoid-charges-free-account.md). Azure Cosmos DB is a part of Azure free account. Specifically for Azure Cosmos DB, this free account offers 5-GB storage and 400 RU/s of provisioned throughput for the entire year.
+* **Azure free account**: Azure offers a [free tier](https://azure.microsoft.com/free/) that gives you $200 in Azure credits for the first 30 days and a limited quantity of free services for 12 months. For more information, see [Azure free account](../cost-management-billing/manage/avoid-charges-free-account.md). Azure Cosmos DB is a part of Azure free account. Specifically for Azure Cosmos DB, this free account offers 25-GB storage and 400 RU/s of provisioned throughput for the entire year.
* **Try Azure Cosmos DB for free**: Azure Cosmos DB offers a time-limited experience by using try Azure Cosmos DB for free accounts. You can create an Azure Cosmos DB account, create database and collections and run a sample application by using the Quickstarts and tutorials. You can run the sample application without subscribing to an Azure account or using your credit card. [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) offers Azure Cosmos DB for one month, with the ability to renew your account any number of times.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/optimize-dev-test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/optimize-dev-test.md
@@ -32,7 +32,7 @@ Free tier lasts indefinitely for the lifetime of the account and comes with all
## Azure free account
-Azure Cosmos DB is included in the [Azure free account](https://azure.microsoft.com/free), which offers Azure credits and resources for free for a certain time period. Specifically for Azure Cosmos DB, this free account offers 5-GB storage and 400 RUs of provisioned throughput for the entire year. This experience enables any developer to easily test the features of Azure Cosmos DB or integrate it with other Azure services at zero cost. With Azure free account, you get a $200 credit to spend in the first 30 days. You won't be charged, even if you start using the services until you choose to upgrade. To get started, visit [Azure free account](https://azure.microsoft.com/free) page.
+Azure Cosmos DB is included in the [Azure free account](https://azure.microsoft.com/free), which offers Azure credits and resources for free for a certain time period. Specifically for Azure Cosmos DB, this free account offers 25-GB storage and 400 RUs of provisioned throughput for the entire year. This experience enables any developer to easily test the features of Azure Cosmos DB or integrate it with other Azure services at zero cost. With Azure free account, you get a $200 credit to spend in the first 30 days. You won't be charged, even if you start using the services until you choose to upgrade. To get started, visit [Azure free account](https://azure.microsoft.com/free) page.
## Azure Cosmos DB serverless
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/plan-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/plan-manage-costs.md
@@ -66,7 +66,7 @@ As you start using Azure Cosmos DB resources from Azure portal, you can see the
:::image type="content" source="./media/plan-manage-costs/cost-estimate-portal.png" alt-text="Cost estimate in Azure portal":::
-If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md).
+If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
You can pay for Azure Cosmos DB charges with your Azure Enterprise Agreement monetary commitment credit. However, you can't use the monetary commitment credit to pay for charges for third party products and services including those from the Azure Marketplace.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/synapse-link-power-bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-power-bi.md
@@ -36,7 +36,7 @@ Make sure to create the following resources before you start:
## Create a database and views
-Creating views in the master or default databases is not recommended or supported. So you need to start this step by creating a database. From the Synapse workspace go to the **Develop** tab, select the **+** icon, and select **SQL Script**.
+From the Synapse workspace go to the **Develop** tab, select the **+** icon, and select **SQL Script**.
:::image type="content" source="./media/synapse-link-power-bi/add-sql-script.png" alt-text="Add a SQL script to the Synapse Analytics workspace":::
@@ -44,7 +44,7 @@ Every workspace comes with a serverless SQL endpoint. After creating a SQL scrip
:::image type="content" source="./media/synapse-link-power-bi/enable-sql-on-demand-endpoint.png" alt-text="Enable the SQL script to use the serverless SQL endpoint in the workspace":::
-Create a new database, named **RetailCosmosDB**, and a SQL view over the Synapse Link enabled containers. The following command shows how to create a database:
+Creating views in the **master** or **default** databases is not recommended or supported. Create a new database, named **RetailCosmosDB**, and a SQL view over the Synapse Link enabled containers. The following command shows how to create a database:
```sql -- Create database
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/understand-your-bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/understand-your-bill.md
@@ -210,10 +210,14 @@ The total monthly bill will be (assuming 30 days/720 hours in a month) will be c
| | |Throughput bill for 2 additional regions: East US, North Europe (all regions are writable) |`(1 + 1) * (70 K RU/sec /100 * $0.016) * 20 hours = $448` |$224 | || |**Total Monthly Cost** | |**$38,688** |
-## Billing examples with free tier accounts
+## <a id="azure-free-tier"></a>Billing examples with Azure Cosmos DB free tier accounts
+ With Azure Cosmos DB free tier, you'll get the first 400 RU/s and 5 GB of storage in your account for free, applied at the account level. Any RU/s and storage beyond 400 RU/s and 5 GB will be billed at the regular pricing rates per the pricing page. On the bill, you will not see a charge or line item for the free 400 RU/s and 5 GB, only the RU/s and storage beyond what is covered by the free tier. The 400 RU/s applies to any type of RU/s - provisioned throughput, autoscale, and multi-region writes.
+> [!NOTE]
+> Azure Cosmos DB free tier is different from the Azure free account. The Azure free account offers Azure credits and resources for free for a limited time. When using Azure Cosmos DB as a part of this free account, you get 25-GB storage and 400 RU/s of provisioned throughput for 12 months.
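The free-tier accounting described above reduces to simple arithmetic. As an illustrative sketch only (the 400 RU/s and 5 GB allowances come from the passage; actual rates for any excess come from the pricing page):

```python
# Azure Cosmos DB free tier allowances, per the passage above.
FREE_RUS = 400        # RU/s covered at the account level
FREE_STORAGE_GB = 5   # GB of storage covered at the account level

def billable_usage(provisioned_rus, storage_gb):
    """Return the (RU/s, GB) amounts beyond the free tier that get billed."""
    extra_rus = max(0, provisioned_rus - FREE_RUS)
    extra_gb = max(0, storage_gb - FREE_STORAGE_GB)
    return extra_rus, extra_gb

# A 400 RU/s, 5 GB resource is fully covered; nothing appears on the bill.
print(billable_usage(400, 5))    # (0, 0)
# A 1,000 RU/s, 25 GB resource is billed only for the 600 RU/s and 20 GB excess.
print(billable_usage(1000, 25))  # (600, 20)
```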
+### Billing example - container or database with provisioned throughput
+
+- Let's suppose we create a database or container in a free tier account with 400 RU/s and 5 GB of storage.
+- Your bill will not show any charge for this resource. Your hourly and monthly cost will be $0.
@@ -314,4 +318,4 @@ Next you can proceed to learn about cost optimization in Azure Cosmos DB with th
* Learn more about [Optimizing storage cost](optimize-cost-storage.md) * Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md)
-* Learn more about [Optimizing the cost of multi-region Azure Cosmos accounts](optimize-cost-regions.md)
\ No newline at end of file
+* Learn more about [Optimizing the cost of multi-region Azure Cosmos accounts](optimize-cost-regions.md)
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription.md new file mode 100644
@@ -0,0 +1,41 @@
+---
+title: Create Azure subscriptions programmatically
+description: This article helps you understand options available to programmatically create Azure subscriptions.
+author: bandersmsft
+ms.service: cost-management-billing
+ms.subservice: billing
+ms.topic: how-to
+ms.date: 01/13/2021
+ms.reviewer: andalmia
+ms.author: banders
+ms.custom: devx-track-azurepowershell, devx-track-azurecli
+---
+
+# Create Azure subscriptions programmatically
+
+This article helps you understand options available to programmatically create Azure subscriptions.
+
+Using various REST APIs, you can create a subscription for the following Azure agreement types:
+
+- Enterprise Agreement (EA)
+- Microsoft Customer Agreement (MCA)
+- Microsoft Partner Agreement (MPA)
+
+You can't programmatically create additional subscriptions for other agreement types with REST APIs.
+
+Requirements and details to create subscriptions differ for different agreements and API versions. See the following articles that apply to your situation:
+
+Latest APIs:
+
+- [Create EA subscriptions](programmatically-create-subscription-enterprise-agreement.md)
+- [Create MCA subscriptions](programmatically-create-subscription-microsoft-customer-agreement.md)
+- [Create MPA subscriptions](programmatically-create-subscription-microsoft-partner-agreement.md)
+
+If you're still using [preview APIs](programmatically-create-subscription-preview.md), you can continue to create subscriptions with them.
+
+You can also [create subscriptions with an ARM template](create-subscription-template.md). An ARM template helps automate the subscription creation process with REST APIs.
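The subscription-creation APIs linked above are plain REST calls. As an illustrative sketch of the general pattern only (the aliases endpoint path, `api-version`, and body shape here are assumptions; the linked per-agreement articles are the authoritative reference):

```python
import json

def build_create_subscription_request(alias_name, display_name, billing_scope):
    """Assemble the PUT request for a hypothetical subscription-alias call."""
    url = (
        "https://management.azure.com/providers/Microsoft.Subscription"
        f"/aliases/{alias_name}?api-version=2020-09-01"
    )
    body = {
        "properties": {
            "displayName": display_name,
            "billingScope": billing_scope,  # format depends on your agreement type
            "workload": "Production",
        }
    }
    return url, json.dumps(body)

# "<billing scope>" is a placeholder; use the scope for your EA/MCA/MPA account.
url, payload = build_create_subscription_request(
    "sampleAlias", "Dev subscription", "<billing scope>")
```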
+
+## Next steps
+
+* After you create a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md).
+* For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-quickbooks.md
@@ -11,7 +11,7 @@ ms.service: data-factory
ms.workload: data-services ms.topic: conceptual ms.custom: seo-lt-2019
-ms.date: 08/03/2020
+ms.date: 01/15/2021
--- # Copy data from QuickBooks Online using Azure Data Factory (Preview)
@@ -50,8 +50,8 @@ The following properties are supported for QuickBooks linked service:
| ***Under `connectionProperties`:*** | | | | endpoint | The endpoint of the QuickBooks Online server. (that is, quickbooks.api.intuit.com) | Yes | | companyId | The company ID of the QuickBooks company to authorize. For info about how to find the company ID, see [How do I find my Company ID](https://quickbooks.intuit.com/community/Getting-Started/How-do-I-find-my-Company-ID/m-p/185551). | Yes |
-| consumerKey | The consumer key for OAuth 2.0 authentication. | Yes |
-| consumerSecret | The consumer secret for OAuth 2.0 authentication. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| consumerKey | The client ID of your QuickBooks Online application for OAuth 2.0 authentication. Learn more from [here](https://developer.intuit.com/app/developer/qbo/docs/develop/authentication-and-authorization/oauth-2.0#obtain-oauth2-credentials-for-your-app). | Yes |
+| consumerSecret | The client secret of your QuickBooks Online application for OAuth 2.0 authentication. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| refreshToken | The OAuth 2.0 refresh token associated with the QuickBooks application. Learn more from [here](https://developer.intuit.com/app/developer/qbo/docs/develop/authentication-and-authorization/oauth-2.0#obtain-oauth2-credentials-for-your-app). Note that the refresh token expires after 180 days, so customers need to update it regularly. <br/>Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md).| Yes | | useEncryptedEndpoints | Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true. | No |
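Because the refresh token above expires after 180 days, client code has to rotate it periodically using an OAuth 2.0 `refresh_token` grant. A hedged sketch of what that rotation request looks like (the token endpoint URL is an assumption for illustration; the Intuit OAuth 2.0 guide linked in the table is authoritative):

```python
import base64
from urllib.parse import urlencode

# Assumed token endpoint for illustration; verify against Intuit's OAuth docs.
TOKEN_URL = "https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer"

def build_refresh_request(client_id, client_secret, refresh_token):
    """Return the headers and form body for a refresh_token grant."""
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {basic}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    })
    return headers, body

# Placeholders only; POST headers/body to TOKEN_URL and store the new
# refresh token securely (for example, in Azure Key Vault).
headers, body = build_refresh_request("<client ID>", "<client secret>", "<token>")
```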
data-factory https://docs.microsoft.com/en-us/azure/data-factory/parameterize-linked-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/parameterize-linked-services.md
@@ -6,7 +6,7 @@ documentationcenter: ''
ms.service: data-factory ms.workload: data-services ms.topic: conceptual
-ms.date: 12/09/2020
+ms.date: 01/15/2021
author: dcstwh ms.author: weetok manager: anandsub
@@ -37,6 +37,7 @@ When authoring linked service on UI, Data Factory provides built-in parameteriz
- Azure Cosmos DB (SQL API) - Azure Database for MySQL - Azure Databricks
+- Azure Key Vault
- Azure SQL Database - Azure SQL Managed Instance - Azure Synapse Analytics
data-factory https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-python.md
@@ -11,7 +11,7 @@ ms.service: data-factory
ms.workload: data-services ms.devlang: python ms.topic: quickstart
-ms.date: 01/22/2018
+ms.date: 01/15/2021
ms.custom: seo-python-october2019, devx-track-python ---
@@ -67,12 +67,20 @@ Pipelines can ingest data from disparate data stores. Pipelines process or trans
The [Python SDK for Data Factory](https://github.com/Azure/azure-sdk-for-python) supports Python 2.7, 3.3, 3.4, 3.5, 3.6, and 3.7.
+4. To install the Python package for Azure Identity authentication, run the following command:
+
+    ```console
+ pip install azure-identity
+ ```
+ > [!NOTE]
+    > The "azure-identity" package might conflict with "azure-cli" on some common dependencies. If you encounter any authentication issues, remove "azure-cli" and its dependencies, or use a clean machine that doesn't have the "azure-cli" package installed.
+
## Create a data factory client 1. Create a file named **datafactory.py**. Add the following statements to add references to namespaces. ```python
- from azure.common.credentials import ServicePrincipalCredentials
+ from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient from azure.mgmt.datafactory import DataFactoryManagementClient from azure.mgmt.datafactory.models import *
@@ -117,21 +125,21 @@ Pipelines can ingest data from disparate data stores. Pipelines process or trans
def main(): # Azure subscription ID
- subscription_id = '<Specify your Azure Subscription ID>'
+ subscription_id = '<subscription ID>'
# This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
- rg_name = 'ADFTutorialResourceGroup'
+ rg_name = '<resource group>'
# The data factory name. It must be globally unique.
- df_name = '<Specify a name for the data factory. It must be globally unique>'
+ df_name = '<factory name>'
# Specify your Active Directory client ID, client secret, and tenant ID
- credentials = ServicePrincipalCredentials(client_id='<Active Directory application/client ID>', secret='<client secret>', tenant='<Active Directory tenant ID>')
+ credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
resource_client = ResourceManagementClient(credentials, subscription_id) adf_client = DataFactoryManagementClient(credentials, subscription_id)
- rg_params = {'location':'eastus'}
- df_params = {'location':'eastus'}
+ rg_params = {'location':'westus'}
+ df_params = {'location':'westus'}
``` ## Create a data factory
@@ -144,7 +152,7 @@ Add the following code to the **Main** method that creates a **data factory**. I
resource_client.resource_groups.create_or_update(rg_name, rg_params) #Create a data factory
- df_resource = Factory(location='eastus')
+ df_resource = Factory(location='westus')
df = adf_client.factories.create_or_update(rg_name, df_name, df_resource) print_item(df) while df.provisioning_state != 'Succeeded':
@@ -160,12 +168,12 @@ You create linked services in a data factory to link your data stores and comput
```python # Create an Azure Storage linked service
- ls_name = 'storageLinkedService'
+ ls_name = 'storageLinkedService001'
# IMPORTANT: specify the name and key of your Azure Storage account.
- storage_string = SecureString(value='DefaultEndpointsProtocol=https;AccountName=<storageaccountname>;AccountKey=<storageaccountkey>')
+ storage_string = SecureString(value='DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;EndpointSuffix=<suffix>')
- ls_azure_storage = AzureStorageLinkedService(connection_string=storage_string)
+ ls_azure_storage = LinkedServiceResource(properties=AzureStorageLinkedService(connection_string=storage_string))
ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage) print_item(ls) ```
@@ -183,10 +191,12 @@ You define a dataset that represents the source data in Azure Blob. This Blob da
# Create an Azure blob dataset (input) ds_name = 'ds_in' ds_ls = LinkedServiceReference(reference_name=ls_name)
- blob_path= 'adfv2tutorial/input'
- blob_filename = 'input.txt'
- ds_azure_blob= AzureBlobDataset(linked_service_name=ds_ls, folder_path=blob_path, file_name = blob_filename)
- ds = adf_client.datasets.create_or_update(rg_name, df_name, ds_name, ds_azure_blob)
+ blob_path = '<container>/<folder path>'
+ blob_filename = '<file name>'
+ ds_azure_blob = DatasetResource(properties=AzureBlobDataset(
+ linked_service_name=ds_ls, folder_path=blob_path, file_name=blob_filename))
+ ds = adf_client.datasets.create_or_update(
+ rg_name, df_name, ds_name, ds_azure_blob)
print_item(ds) ```
@@ -199,9 +209,10 @@ You define a dataset that represents the source data in Azure Blob. This Blob da
```python # Create an Azure blob dataset (output) dsOut_name = 'ds_out'
- output_blobpath = 'adfv2tutorial/output'
- dsOut_azure_blob = AzureBlobDataset(linked_service_name=ds_ls, folder_path=output_blobpath)
- dsOut = adf_client.datasets.create_or_update(rg_name, df_name, dsOut_name, dsOut_azure_blob)
+ output_blobpath = '<container>/<folder path>'
+ dsOut_azure_blob = DatasetResource(properties=AzureBlobDataset(linked_service_name=ds_ls, folder_path=output_blobpath))
+ dsOut = adf_client.datasets.create_or_update(
+ rg_name, df_name, dsOut_name, dsOut_azure_blob)
print_item(dsOut) ```
@@ -231,7 +242,7 @@ Add the following code to the **Main** method that creates a **pipeline with a c
Add the following code to the **Main** method that **triggers a pipeline run**. ```python
- #Create a pipeline run.
+ # Create a pipeline run
run_response = adf_client.pipelines.create_run(rg_name, df_name, p_name, parameters={}) ```
@@ -240,9 +251,10 @@ Add the following code to the **Main** method that **triggers a pipeline run**.
To monitor the pipeline run, add the following code to the **Main** method: ```python
- #Monitor the pipeline run
+ # Monitor the pipeline run
time.sleep(30)
- pipeline_run = adf_client.pipeline_runs.get(rg_name, df_name, run_response.run_id)
+ pipeline_run = adf_client.pipeline_runs.get(
+ rg_name, df_name, run_response.run_id)
print("\n\tPipeline run status: {}".format(pipeline_run.status)) filter_params = RunFilterParameters( last_updated_after=datetime.now() - timedelta(1), last_updated_before=datetime.now() + timedelta(1))
@@ -264,14 +276,13 @@ main()
Here is the full Python code: ```python
-from azure.common.credentials import ServicePrincipalCredentials
+from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient from azure.mgmt.datafactory import DataFactoryManagementClient from azure.mgmt.datafactory.models import * from datetime import datetime, timedelta import time - def print_item(group): """Print an Azure object instance.""" print("\tName: {}".format(group.name))
@@ -282,28 +293,22 @@ def print_item(group):
print("\tTags: {}".format(group.tags)) if hasattr(group, 'properties'): print_properties(group.properties)
- print("\n")
- def print_properties(props): """Print a ResourceGroup properties instance.""" if props and hasattr(props, 'provisioning_state') and props.provisioning_state: print("\tProperties:") print("\t\tProvisioning State: {}".format(props.provisioning_state))
- print("\n")
-
+ print("\n\n")
def print_activity_run_details(activity_run): """Print activity run details.""" print("\n\tActivity run details\n") print("\tActivity run status: {}".format(activity_run.status)) if activity_run.status == 'Succeeded':
- print("\tNumber of bytes read: {}".format(
- activity_run.output['dataRead']))
- print("\tNumber of bytes written: {}".format(
- activity_run.output['dataWritten']))
- print("\tCopy duration: {}".format(
- activity_run.output['copyDuration']))
+ print("\tNumber of bytes read: {}".format(activity_run.output['dataRead']))
+ print("\tNumber of bytes written: {}".format(activity_run.output['dataWritten']))
+ print("\tCopy duration: {}".format(activity_run.output['copyDuration']))
else: print("\tErrors: {}".format(activity_run.error['message']))
@@ -311,29 +316,28 @@ def print_activity_run_details(activity_run):
def main(): # Azure subscription ID
- subscription_id = '<your Azure subscription ID>'
+ subscription_id = '<subscription ID>'
# This program creates this resource group. If it's an existing resource group, comment out the code that creates the resource group
- rg_name = '<Azure resource group name>'
+ rg_name = '<resource group>'
# The data factory name. It must be globally unique.
- df_name = '<Your data factory name>'
+ df_name = '<factory name>'
# Specify your Active Directory client ID, client secret, and tenant ID
- credentials = ServicePrincipalCredentials(
- client_id='<Active Directory client ID>', secret='<client secret>', tenant='<tenant ID>')
+ credentials = ClientSecretCredential(client_id='<service principal ID>', client_secret='<service principal key>', tenant_id='<tenant ID>')
resource_client = ResourceManagementClient(credentials, subscription_id) adf_client = DataFactoryManagementClient(credentials, subscription_id)
- rg_params = {'location': 'eastus'}
- df_params = {'location': 'eastus'}
-
+ rg_params = {'location':'westus'}
+ df_params = {'location':'westus'}
+
# create the resource group # comment out if the resource group already exists resource_client.resource_groups.create_or_update(rg_name, rg_params) # Create a data factory
- df_resource = Factory(location='eastus')
+ df_resource = Factory(location='westus')
df = adf_client.factories.create_or_update(rg_name, df_name, df_resource) print_item(df) while df.provisioning_state != 'Succeeded':
@@ -341,33 +345,30 @@ def main():
time.sleep(1) # Create an Azure Storage linked service
- ls_name = 'storageLinkedService'
+ ls_name = 'storageLinkedService001'
- # Specify the name and key of your Azure Storage account
- storage_string = SecureString(
- value='DefaultEndpointsProtocol=https;AccountName=<storage account name>;AccountKey=<storage account key>')
+ # IMPORTANT: specify the name and key of your Azure Storage account.
+ storage_string = SecureString(value='DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;EndpointSuffix=<suffix>')
- ls_azure_storage = AzureStorageLinkedService(
- connection_string=storage_string)
- ls = adf_client.linked_services.create_or_update(
- rg_name, df_name, ls_name, ls_azure_storage)
+ ls_azure_storage = LinkedServiceResource(properties=AzureStorageLinkedService(connection_string=storage_string))
+ ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
print_item(ls) # Create an Azure blob dataset (input) ds_name = 'ds_in' ds_ls = LinkedServiceReference(reference_name=ls_name)
- blob_path = 'adfv2tutorial/input'
- blob_filename = 'input.txt'
- ds_azure_blob = AzureBlobDataset(
- linked_service_name=ds_ls, folder_path=blob_path, file_name=blob_filename)
+ blob_path = '<container>/<folder path>'
+ blob_filename = '<file name>'
+ ds_azure_blob = DatasetResource(properties=AzureBlobDataset(
+ linked_service_name=ds_ls, folder_path=blob_path, file_name=blob_filename))
ds = adf_client.datasets.create_or_update( rg_name, df_name, ds_name, ds_azure_blob) print_item(ds) # Create an Azure blob dataset (output) dsOut_name = 'ds_out'
- output_blobpath = 'adfv2tutorial/output'
- dsOut_azure_blob = AzureBlobDataset(linked_service_name=ds_ls, folder_path=output_blobpath)
+ output_blobpath = '<container>/<folder path>'
+ dsOut_azure_blob = DatasetResource(properties=AzureBlobDataset(linked_service_name=ds_ls, folder_path=output_blobpath))
dsOut = adf_client.datasets.create_or_update( rg_name, df_name, dsOut_name, dsOut_azure_blob) print_item(dsOut)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/tutorial-copy-data-portal-private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-portal-private.md
@@ -6,12 +6,11 @@ documentationcenter: ''
author: linda33wj manager: shwang ms.reviewer: douglasl- ms.service: data-factory ms.workload: data-services ms.topic: tutorial ms.custom: seo-lt-2019
-ms.date: 05/15/2020
+ms.date: 01/15/2021
ms.author: jingwang ---
@@ -103,7 +102,8 @@ In this step, you create an Azure integration runtime and enable Data Factory Ma
1. In the Data Factory portal, go to **Manage** and select **New** to create a new Azure integration runtime. ![Screenshot that shows creating a new Azure integration runtime.](./media/tutorial-copy-data-portal-private/create-new-azure-ir.png)
-1. Choose to create an **Azure** integration runtime.
+1. On the **Integration runtime setup** page, choose which integration runtime to create based on required capabilities. In this tutorial, select **Azure, Self-Hosted** and then click **Continue**.
+1. Select **Azure** and then click **Continue** to create an Azure Integration runtime.
![Screenshot that shows a new Azure integration runtime.](./media/tutorial-copy-data-portal-private/azure-ir.png) 1. Under **Virtual network configuration (Preview)**, select **Enable**.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/tutorial-data-flow-private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-private.md
@@ -7,7 +7,7 @@ ms.reviewer: makromer
ms.service: data-factory ms.topic: conceptual ms.custom: seo-lt-2019
-ms.date: 05/19/2019
+ms.date: 01/15/2021
--- # Transform data securely by using mapping data flow
@@ -29,6 +29,7 @@ In this tutorial, you do the following steps:
> * Monitor a data flow activity. ## Prerequisites+ * **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. * **Azure storage account**. You use Data Lake Storage as *source* and *sink* data stores. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) for steps to create one. *Ensure the storage account allows access only from selected networks.*
@@ -59,12 +60,14 @@ In this step, you create a data factory and open the Data Factory UI to create a
1. Select **Author & Monitor** to launch the Data Factory UI in a separate tab. ## Create an Azure IR in Data Factory Managed Virtual Network+ In this step, you create an Azure IR and enable Data Factory Managed Virtual Network. 1. In the Data Factory portal, go to **Manage**, and select **New** to create a new Azure IR. ![Screenshot that shows creating a new Azure IR.](./media/tutorial-copy-data-portal-private/create-new-azure-ir.png)
-1. Select the **Azure** IR option.
+1. On the **Integration runtime setup** page, choose which integration runtime to create based on required capabilities. In this tutorial, select **Azure, Self-Hosted** and then click **Continue**.
+1. Select **Azure** and then click **Continue** to create an Azure Integration runtime.
![Screenshot that shows a new Azure IR.](./media/tutorial-copy-data-portal-private/azure-ir.png)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-configure-with-sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-configure-with-sentinel.md
@@ -60,5 +60,5 @@ After connecting a **Subscription**, the hub data is available in Azure Sentinel
In this document, you learned how to connect Defender for IoT to Azure Sentinel. To learn more about threat detection and security data access, see the following articles: -- Learn how to use Azure Sentinel to [get visibility into your data, and potential threats](https://docs.microsoft.com/azure/sentinel/quickstart-get-visibility).-- Learn how to [Access your IoT security data](how-to-security-data-access.md)
+- Learn how to use Azure Sentinel to [get visibility into your data, and potential threats](../sentinel/quickstart-get-visibility.md).
+- Learn how to [Access your IoT security data](how-to-security-data-access.md)
\ No newline at end of file
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-identify-required-appliances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-identify-required-appliances.md
@@ -4,7 +4,7 @@ description: Learn about hardware and virtual appliances for certified Defender
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 12/21/2020
+ms.date: 01/13/2021
ms.topic: how-to ms.service: azure ---
@@ -45,7 +45,7 @@ See [Appliance specifications](#appliance-specifications) for vendor details.
About preconfigured sensors: Microsoft has partnered with Arrow to provide preconfigured sensors. To purchase a preconfigured sensor, contact Arrow at the following address: <hardware.sales@arrow.com>
-About bringing your own appliance: Review supported models described here. After you've acquired your appliance, go to **Defender for IoT** > **Network Sensors ISO** > **Installation** to download the software.
+About bringing your own appliance: Review the supported models described here. After you've acquired your appliance, go to **Defender for IoT** > **Network Sensors ISO** > **Installation** to download the software.
:::image type="content" source="media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png" alt-text="Network sensors ISO.":::
@@ -250,28 +250,6 @@ After you purchase the appliance, go to **Defender for IoT** > **Network Sensors
:::image type="content" source="media/how-to-prepare-your-network/enterprise-deployment-for-azure-defender-for-iot-dell-r340-bom.png" alt-text="Dell R340 BOM.":::
-## SMB deployment: Neousys Nuvo-5006LP
-
-| Component | Technical specifications |
-|--|--|
-| Construction | Aluminum, fanless and dust-proof design |
-| Dimensions | 240 mm (W) x 225 mm (D) x 77 mm (H) |
-| Weight | 3.1 kg (including CPU, memory, and HDD) |
-| CPU | Intel Core i5-6500TE (6M Cache, up to 3.30 GHz) S1151 |
-| Chipset | Intel Q170 Platform Controller Hub |
-| Memory | 8-GB DDR4 2133 MHz Wide Temperature SODIMM |
-| Storage | 128-GB 3ME3 Wide Temperature mSATA SSD |
-| Network controller | 6x Gigabit Ethernet ports by Intel I219 |
-| Device access | 4 USBs: Two fronts, two rears, one internal |
-| Power adapter | 120/240VAC-20VDC/6A |
-| Mounting | Mounting kit, DIN rail |
-| Operating Temperature | \-25°C ~ 70°C |
-| Storage Temperature | \-40°C ~ 85°C |
-| Humidity | 10% ~ 90%, non-condensing |
-| Vibration | Operating, 5 Grms, 5-500 Hz, 3 axes <br>(w/ SSD, according to IEC60068-2-64) |
-| Shock | Operating, 50 Grms, half-sine 11-ms duration (w/ SSD, according to IEC60068-2-27) |
-| EMC | CE/FCC Class A, according to EN 55022, EN 55024, and EN 55032 |
- ## Next steps [About Azure Defender for IoT installation](how-to-install-software.md)
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-install-software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-install-software.md
@@ -13,7 +13,7 @@ ms.service: azure
This article describes how to install the following elements of Azure Defender for IoT: -- **Sensor**: Defender for IoT sensors collects ICS network traffic by using passive (agentless) monitoring. Passive and nonintrusive, the sensors have zero impact on OT and IoT networks and devices. The sensor connects to a SPAN port or network TAP and immediately begins monitoring your network. Detections appear in the sensor console. There, you can view, investigate, and analyze them in a network map, device inventory, and an extensive range of reports. Examples include risk assessment reports, data mining queries, and attack vectors. Read more about sensor capabilities in the [Defender for IoT Sensor User Guide (direct download)](https://aka.ms/AzureDefenderforIoTUserGuide).
+- **Sensor**: Defender for IoT sensors collect ICS network traffic by using passive (agentless) monitoring. Passive and nonintrusive, the sensors have zero impact on OT and IoT networks and devices. The sensor connects to a SPAN port or network TAP and immediately begins monitoring your network. Detections appear in the sensor console. There, you can view, investigate, and analyze them in a network map, device inventory, and an extensive range of reports. Examples include risk assessment reports, data mining queries, and attack vectors. Read more about sensor capabilities in the [Defender for IoT Sensor User Guide (direct download)](./getting-started.md).
- **On-premises management console**: The on-premises management console lets you carry out device management, risk management, and vulnerability management. You can also use it to carry out threat monitoring and incident response across your enterprise. It provides a unified view of all network devices, key IoT, and OT risk indicators and alerts detected in facilities where sensors are deployed. Use the on-premises management console to view and manage sensors in air-gapped networks.
@@ -1068,4 +1068,4 @@ To enable tunneling:
### Next steps
-[Set up your network](how-to-set-up-your-network.md)
+[Set up your network](how-to-set-up-your-network.md)
\ No newline at end of file
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-individual-sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-individual-sensors.md
@@ -79,7 +79,7 @@ You'll receive an error message if the activation file could not be uploaded. Th
- **For locally connected sensors**: The activation file is not valid. If the file is not valid, go to the Defender for IoT portal. On the **Sensor Management** page, select the sensor with the invalid file, and download a new activation file.

-- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that \*.azure-devices.net:443 is allowed in the firewall and/or proxy. If wildcards are not supported or you want more control, the FQDN for your specific Defender for IoT hub should be opened in your firewall and/or proxy. For details, see [Reference - IoT Hub endpoints](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-endpoints).
+- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that \*.azure-devices.net:443 is allowed in the firewall and/or proxy. If wildcards are not supported or you want more control, the FQDN for your specific Defender for IoT hub should be opened in your firewall and/or proxy. For details, see [Reference - IoT Hub endpoints](../iot-hub/iot-hub-devguide-endpoints.md).
- **For cloud-connected sensors**: The activation file is valid but Defender for IoT rejected it. If you can't resolve this problem, you can download another activation from the **Sensor Management** page of the Defender for IoT portal. If this doesn't work, contact Microsoft Support.
@@ -455,4 +455,4 @@ To access system properties:
[Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
-[Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
+[Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
\ No newline at end of file
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-security-data-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-security-data-access.md
@@ -33,7 +33,7 @@ To access your alerts and recommendations in your Log Analytics workspace after
1. Choose an alert or recommendation in Defender for IoT.
1. Click **further investigation**, then click **To see which devices have this alert click here and view the DeviceId column**.
-For details on querying data from Log Analytics, see [Get started with queries in Log Analytics](/azure/azure-monitor/log-query/get-started-queries).
+For details on querying data from Log Analytics, see [Get started with queries in Log Analytics](../azure-monitor/log-query/get-started-queries.md).
## Security alerts
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/overview-security-agents https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/overview-security-agents.md
@@ -28,7 +28,7 @@ The Defender for IoT security agents handle raw event collection from the device
Use the following workflow to deploy and test your Defender for IoT security agents:

1. [Enable Defender for IoT service to your IoT Hub](quickstart-onboard-iot-hub.md)
-1. If your IoT Hub has no registered devices, [Register a new device](../iot-accelerators/quickstart-device-simulation-deploy.md).
+1. If your IoT Hub has no registered devices, [Register a new device](../iot-accelerators/iot-accelerators-device-simulation-overview.md).
1. [Create an azureiotsecurity security module](quickstart-create-security-twin.md) for your devices.
1. To install the agent on an Azure simulated device instead of installing on an actual device, [spin up a new Azure Virtual Machine (VM)](../virtual-machines/linux/quick-create-portal.md) in an available zone.
1. [Deploy a Defender for IoT security agent](how-to-deploy-linux-cs.md) on your IoT device, or new VM.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/resources-frequently-asked-questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/resources-frequently-asked-questions.md
@@ -35,7 +35,7 @@ Azure Defender for IoT provides comprehensive protocol support. In addition to e
This unique solution for developing protocols as plugins does not require dedicated developer teams or version releases to support a new protocol. Developers, partners, and customers can securely develop protocols and share insights and knowledge using Horizon.

## Do I have to purchase hardware appliances from Microsoft partners?
-Azure Defender for IoT sensor runs on specific hardware specs as described in the [Hardware Specifications Guide](https://aka.ms/AzureDefenderforIoTBareMetalAppliance), customers can purchase certified hardware from Microsoft partners or use the supplied bill of materials (BOM) and purchase it on their own.
+Azure Defender for IoT sensor runs on specific hardware specs as described in the [Hardware Specifications Guide](./how-to-identify-required-appliances.md). Customers can purchase certified hardware from Microsoft partners or use the supplied bill of materials (BOM) and purchase it on their own.
Certified hardware has been tested in our labs for driver stability, packet drops and network sizing.
@@ -83,4 +83,4 @@ To learn more about how to get started with Defender for IoT, see the following
- Read the Defender for IoT [overview](overview.md)
- Verify the [System prerequisites](quickstart-system-prerequisites.md)
- Learn more about how to [Getting started with Defender for IoT](getting-started.md)
-- Understand [Defender for IoT security alerts](concept-security-alerts.md)
+- Understand [Defender for IoT security alerts](concept-security-alerts.md)
\ No newline at end of file
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/security-baseline.md
@@ -392,7 +392,7 @@ Use workflow automation features in Azure Security Center and Azure Sentinel to
## Posture and Vulnerability Management
-*For more information, see the [Azure Security Benchmark: Posture and Vulnerability Management](/azure/security/benchmarks/security-controls-v2-posture-vulnerability-management).*
+*For more information, see the [Azure Security Benchmark: Posture and Vulnerability Management](../security/benchmarks/security-controls-v2-posture-vulnerability-management.md).*
### PV-8: Conduct regular attack simulation
@@ -442,9 +442,9 @@ For more information, see the following references:
- [Cloud Adoption Framework - Azure data security and encryption best practices](../security/fundamentals/data-encryption-best-practices.md?bc=%2fazure%2fcloud-adoption-framework%2f_bread%2ftoc.json&toc=%2fazure%2fcloud-adoption-framework%2ftoc.json) -- [Azure Security Benchmark - Asset management](/azure/security/benchmarks/security-controls-v2-asset-management)
+- [Azure Security Benchmark - Asset management](../security/benchmarks/security-controls-v2-asset-management.md)
-- [Azure Security Benchmark - Data Protection](/azure/security/benchmarks/security-controls-v2-data-protection)
+- [Azure Security Benchmark - Data Protection](../security/benchmarks/security-controls-v2-data-protection.md)
**Azure Security Center monitoring**: Not applicable
@@ -472,7 +472,7 @@ Ensure that the segmentation strategy is implemented consistently across control
**Guidance**: Continuously measure and mitigate risks to your individual assets and the environment they are hosted in. Prioritize high value assets and highly-exposed attack surfaces, such as published applications, network ingress and egress points, user and administrator endpoints, etc. -- [Azure Security Benchmark - Posture and vulnerability management](/azure/security/benchmarks/security-controls-v2-posture-vulnerability-management)
+- [Azure Security Benchmark - Posture and vulnerability management](../security/benchmarks/security-controls-v2-posture-vulnerability-management.md)
**Azure Security Center monitoring**: Not applicable
@@ -513,7 +513,7 @@ This strategy should include documented guidance, policy, and standards for the
For more information, see the following references: - [Azure Security Best Practice 11 - Architecture. Single unified security strategy](/azure/cloud-adoption-framework/security/security-top-10#11-architecture-establish-a-single-unified-security-strategy) -- [Azure Security Benchmark - Network Security](/azure/security/benchmarks/security-controls-v2-network-security)
+- [Azure Security Benchmark - Network Security](../security/benchmarks/security-controls-v2-network-security.md)
- [Azure network security overview](../security/fundamentals/network-overview.md)
@@ -541,9 +541,9 @@ This strategy should include documented guidance, policy, and standards for the
For more information, see the following references: -- [Azure Security Benchmark - Identity management](/azure/security/benchmarks/security-controls-v2-identity-management)
+- [Azure Security Benchmark - Identity management](../security/benchmarks/security-controls-v2-identity-management.md)
-- [Azure Security Benchmark - Privileged access](/azure/security/benchmarks/security-controls-v2-privileged-access)
+- [Azure Security Benchmark - Privileged access](../security/benchmarks/security-controls-v2-privileged-access.md)
- [Azure Security Best Practice 11 - Architecture. Single unified security strategy](/azure/cloud-adoption-framework/security/security-top-10#11-architecture-establish-a-single-unified-security-strategy)
@@ -575,9 +575,9 @@ This strategy should include documented guidance, policy, and standards for the
For more information, see the following references: -- [Azure Security Benchmark - Logging and threat detection](/azure/security/benchmarks/security-controls-v2-logging-threat-detection)
+- [Azure Security Benchmark - Logging and threat detection](../security/benchmarks/security-controls-v2-logging-threat-detection.md)
-- [Azure Security Benchmark - Incident response](/azure/security/benchmarks/security-controls-v2-incident-response)
+- [Azure Security Benchmark - Incident response](../security/benchmarks/security-controls-v2-incident-response.md)
- [Azure Security Best Practice 4 - Process. Update Incident Response Processes for Cloud](/azure/cloud-adoption-framework/security/security-top-10#4-process-update-incident-response-ir-processes-for-cloud)
dns https://docs.microsoft.com/en-us/azure/dns/dns-faq-private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-faq-private.md
@@ -5,7 +5,7 @@ services: dns
author: rohinkoul ms.service: dns ms.topic: article
-ms.date: 10/05/2019
+ms.date: 01/15/2021
ms.author: rohink --- # Azure Private DNS FAQ
@@ -83,6 +83,10 @@ If your existing private DNS zone were created using preview API, you must migra
We strongly recommend that you migrate to the new resource model as soon as possible. The legacy resource model will continue to be supported, but no further features will be developed on top of it, and we intend to deprecate it in favor of the new resource model. For guidance on how to migrate your existing private DNS zones to the new resource model, see the [migration guide for Azure DNS private zones](private-dns-migration-guide.md).
+### Does Azure DNS private zones store any customer content?
+
+No, Azure DNS private zones doesn't store any customer content.
+
## Next steps

-- [Learn more about Azure Private DNS](private-dns-overview.md)
\ No newline at end of file
+- [Learn more about Azure Private DNS](private-dns-overview.md)
dns https://docs.microsoft.com/en-us/azure/dns/dns-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-faq.md
@@ -190,10 +190,6 @@ Internationalized domain names (IDNs) encode each DNS name by using [punycode](h
To configure IDNs in Azure DNS, convert the zone name or record set name to punycode. Azure DNS doesn't currently support built-in conversion to or from punycode.
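Because Azure DNS doesn't do the conversion for you, the name must be converted with an IDNA-aware tool before it's used in a zone or record set. As an illustrative sketch (the tooling choice here is an assumption, not part of Azure DNS), Python's standard-library `idna` codec performs the per-label encoding:

```python
# Convert an internationalized DNS name to punycode before handing it to
# Azure DNS. Python's built-in "idna" codec (IDNA 2003) encodes each label.
zone = "bücher.example"
punycode = zone.encode("idna").decode("ascii")
print(punycode)  # xn--bcher-kva.example
```

Any equivalent IDNA library works; the point is that only the punycode form is passed to Azure DNS.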
-### Does Azure DNS private zones store any customer content?
-
-No, Azure DNS private zones doesn't store any customer content.
-
## Next steps

- [Learn more about Azure DNS](dns-overview.md).
event-grid https://docs.microsoft.com/en-us/azure/event-grid/delivery-and-retry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-and-retry.md
@@ -52,7 +52,7 @@ For more information on using Azure CLI with Event Grid, see [Route storage even
When EventGrid receives an error for an event delivery attempt, EventGrid decides whether it should retry the delivery or dead-letter or drop the event based on the type of the error.
-If the error returned by the subscribed endpoint is configuration related error which can't be fixed with retries (for example, if the endpoint is deleted), EventGrid will either perform dead lettering the event or drop the event if dead letter is not configured.
+If the error returned by the subscribed endpoint is a configuration-related error that can't be fixed with retries (for example, if the endpoint is deleted), EventGrid will either dead-letter the event or drop it if dead-lettering is not configured.
Following are the types of endpoints for which retry doesn't happen:
@@ -62,7 +62,7 @@ Following are the types of endpoints for which retry doesn't happen:
| Webhook | 400 Bad Request, 413 Request Entity Too Large, 403 Forbidden, 404 Not Found, 401 Unauthorized | > [!NOTE]
-> If Dead-Letter is not configured for endpoint, events will be dropped when above errors happen, so consider configuring Dead-Letter, if you don't want these kinds of events to be dropped.
+> If Dead-Letter is not configured for the endpoint, events will be dropped when the above errors happen. Consider configuring Dead-Letter if you don't want these kinds of events to be dropped.
If the error returned by the subscribed endpoint is not among the above list, EventGrid performs the retry using policies described below:
@@ -75,7 +75,10 @@ Event Grid waits 30 seconds for a response after delivering a message. After 30
- 10 minutes
- 30 minutes
- 1 hour
-- Hourly for up to 24 hours
+- 3 hours
+- 6 hours
+- Every 12 hours up to 24 hours
+ If the endpoint responds within 3 minutes, Event Grid will attempt to remove the event from the retry queue on a best effort basis but duplicates may still be received.
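The updated backoff steps can be modeled as a small schedule generator. This is only an illustrative sketch of the intervals listed above, not an Event Grid SDK API, and the shorter early steps of the full published schedule are omitted because they are elided in this excerpt:

```python
from datetime import timedelta

# Illustrative model of the retry intervals listed above (the earlier,
# shorter steps of the full schedule are not shown in this excerpt).
FIXED_STEPS = [
    timedelta(minutes=10),
    timedelta(minutes=30),
    timedelta(hours=1),
    timedelta(hours=3),
    timedelta(hours=6),
]

def retry_delays(max_age=timedelta(hours=24)):
    """Yield successive retry delays until the 24-hour window closes."""
    elapsed = timedelta()
    for step in FIXED_STEPS:
        if elapsed + step > max_age:
            return
        elapsed += step
        yield step
    # After the fixed steps, retry every 12 hours up to the 24-hour cap.
    step = timedelta(hours=12)
    while elapsed + step <= max_age:
        elapsed += step
        yield step
```

Running `list(retry_delays())` shows why only one 12-hour retry fits: the fixed steps already consume 10 hours 40 minutes of the 24-hour window.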
@@ -99,7 +102,7 @@ When Event Grid can't deliver an event within a certain time period or after try
If either of the conditions is met, the event is dropped or dead-lettered. By default, Event Grid doesn't turn on dead-lettering. To enable it, you must specify a storage account to hold undelivered events when creating the event subscription. You pull events from this storage account to resolve deliveries.
-Event Grid sends an event to the dead-letter location when it has tried all of its retry attempts. If Event Grid receives a 400 (Bad Request) or 413 (Request Entity Too Large) response code, it immediately sends the event to the dead-letter endpoint. These response codes indicate delivery of the event will never succeed.
+Event Grid sends an event to the dead-letter location when it has tried all of its retry attempts. If Event Grid receives a 400 (Bad Request) or 413 (Request Entity Too Large) response code, it immediately schedules the event for dead-lettering. These response codes indicate delivery of the event will never succeed.
The time-to-live expiration is checked ONLY at the next scheduled delivery attempt. Therefore, even if time-to-live expires before the next scheduled delivery attempt, event expiry is checked only at the time of the next delivery and then subsequently dead-lettered.
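Putting the rules in this section together, the failure handling can be sketched as one decision function. The status-code set and function name are illustrative only, drawn from the webhook table above, and not part of any Event Grid SDK:

```python
# Webhook response codes from the table above for which Event Grid does not
# retry: delivery goes straight to the dead-letter location when one is
# configured, otherwise the event is dropped.
NO_RETRY_CODES = {400, 401, 403, 404, 413}

def delivery_action(status_code, retries_exhausted, ttl_expired, dead_letter_configured):
    """Return what happens to an event after a failed delivery attempt."""
    if status_code in NO_RETRY_CODES or retries_exhausted or ttl_expired:
        return "dead-letter" if dead_letter_configured else "drop"
    return "retry"
```

For example, a 503 from the endpoint leads to a retry, while a 400 is dead-lettered (or dropped) immediately.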
expressroute https://docs.microsoft.com/en-us/azure/expressroute/plan-manage-cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/plan-manage-cost.md
@@ -20,9 +20,9 @@ Keep in mind that costs for ExpressRoute are only a portion of the monthly costs
## Prerequisites
-Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](https://docs.microsoft.com/azure/cost-management-billing/costs/understand-cost-mgt-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account.
+Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account.
-For information about assigning access to Azure Cost Management data, see [Assign access to data](https://docs.microsoft.com/azure/cost-management/assign-access-acm-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Local vs. Standard vs. Premium
@@ -78,7 +78,7 @@ You can pay for ExpressRoute charges with your EA monetary commitment credit. Ho
## Monitor costs
-As you use Azure resources with ExpressRoute, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as ExpressRoute use starts, costs are incurred and you can see the costs in [cost analysis](https://docs.microsoft.com/azure/cost-management/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+As you use Azure resources with ExpressRoute, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as ExpressRoute use starts, costs are incurred and you can see the costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
When you use cost analysis, you view ExpressRoute circuit costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
@@ -103,18 +103,18 @@ In the preceding example, you see the current cost for the service. Costs by Azu
## Create budgets and alerts
-You can create [budgets](https://docs.microsoft.com/azure/cost-management/tutorial-acm-create-budgets?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](https://docs.microsoft.com/azure/cost-management/cost-mgt-alerts-monitor-usage-spending?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+You can create [budgets](../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you create a budget, see [Group and filter options](https://docs.microsoft.com/azure/cost-management-billing/costs/group-filter?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more about the filter options when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Export cost data
-You can also [export your cost data](https://docs.microsoft.com/azure/cost-management-billing/costs/tutorial-export-acm-data?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis of costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
## Next steps

- Learn more on how pricing works with Azure ExpressRoute. See [Azure ExpressRoute Overview pricing](https://azure.microsoft.com/en-us/pricing/details/expressroute/).
-- Learn [how to optimize your cloud investment with Azure Cost Management](https://docs.microsoft.com/azure/cost-management-billing/costs/cost-mgt-best-practices?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Learn more about managing costs with [cost analysis](https://docs.microsoft.com/azure/cost-management-billing/costs/quick-acm-cost-analysis?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Learn about how to [prevent unexpected costs](https://docs.microsoft.com/azure/cost-management-billing/manage/getting-started?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/manage/getting-started.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Take the [Cost Management](https://docs.microsoft.com/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/query-language.md
@@ -1,7 +1,7 @@
--- title: Understand the query language description: Describes Resource Graph tables and the available Kusto data types, operators, and functions usable with Azure Resource Graph.
-ms.date: 11/18/2020
+ms.date: 01/14/2021
ms.topic: conceptual --- # Understanding the Azure Resource Graph query language
@@ -25,16 +25,19 @@ Resource Graph provides several tables for the data it stores about Azure Resour
types and their properties. Some tables can be used with `join` or `union` operators to get properties from related resource types. Here is the list of tables available in Resource Graph:
-|Resource Graph table |Can `join`? |Description |
+|Resource Graph table |Can `join` other tables? |Description |
|---|---|---|
|Resources |Yes |The default table if none defined in the query. Most Resource Manager resource types and properties are here. |
|ResourceContainers |Yes |Includes subscription (in preview -- `Microsoft.Resources/subscriptions`) and resource group (`Microsoft.Resources/subscriptions/resourcegroups`) resource types and data. |
-|AdvisorResources |No |Includes resources _related_ to `Microsoft.Advisor`. |
-|AlertsManagementResources |No |Includes resources _related_ to `Microsoft.AlertsManagement`. |
+|AdvisorResources |Yes (preview) |Includes resources _related_ to `Microsoft.Advisor`. |
+|AlertsManagementResources |Yes (preview) |Includes resources _related_ to `Microsoft.AlertsManagement`. |
|GuestConfigurationResources |No |Includes resources _related_ to `Microsoft.GuestConfiguration`. |
-|MaintenanceResources |No |Includes resources _related_ to `Microsoft.Maintenance`. |
+|MaintenanceResources |Partial, join _to_ only. (preview) |Includes resources _related_ to `Microsoft.Maintenance`. |
+|PatchAssessmentResources|No |Includes resources _related_ to Azure Virtual Machines patch assessment. |
+|PatchInstallationResources|No |Includes resources _related_ to Azure Virtual Machines patch installation. |
|PolicyResources |No |Includes resources _related_ to `Microsoft.PolicyInsights`. (**Preview**)|
-|SecurityResources |No |Includes resources _related_ to `Microsoft.Security`. |
+|RecoveryServicesResources |Partial, join _to_ only. (preview) |Includes resources _related_ to `Microsoft.DataProtection` and `Microsoft.RecoveryServices`. |
+|SecurityResources |Partial, join _to_ only. (preview) |Includes resources _related_ to `Microsoft.Security`. |
|ServiceHealthResources |No |Includes resources _related_ to `Microsoft.ResourceHealth`. |

For a complete list including resource types, see
@@ -52,7 +55,7 @@ resource types the given Resource Graph table supports that exist in your enviro
The following query shows a simple `join`. The query result blends the columns together and any duplicate column names from the joined table, _ResourceContainers_ in this example, are appended with **1**. As _ResourceContainers_ table has types for both subscriptions and resource groups,
-either type might be used to join to the resource from _resources_ table.
+either type might be used to join to the resource from _Resources_ table.
```kusto
Resources
@@ -60,18 +63,20 @@ Resources
| limit 1
```
-The following query shows a more complex use of `join`. The query limits the joined table to
-subscriptions resources and with `project` to include only the original field _subscriptionId_ and
-the _name_ field renamed to _SubName_. The field rename avoids `join` adding it as _name1_ since the
-field already exists in _Resources_. The original table is filtered with `where` and the following
-`project` includes columns from both tables. The query result is a single key vault displaying type,
-the name of the key vault, and the name of the subscription it's in.
+The following query shows a more complex use of `join`. First, the query uses `project` to get the
+fields from _Resources_ for the Azure Key Vault vaults resource type. The next step uses `join` to
+merge the results with _ResourceContainers_ where the type is a subscription _on_ a property that is
+both in the first table's `project` and the joined table's `project`. The field rename avoids `join`
+adding it as _name1_ since the property already is projected from _Resources_. The query result is a
+single key vault displaying type, the name, location, and resource group of the key vault, along
+with the name of the subscription it's in.
```kusto
Resources
| where type == 'microsoft.keyvault/vaults'
+| project name, type, location, subscriptionId, resourceGroup
| join (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId
-| project type, name, SubName
+| project type, name, location, resourceGroup, SubName
| limit 1
```
@@ -154,7 +159,7 @@ Here is the list of KQL tabular operators supported by Resource Graph with speci
|[count](/azure/kusto/query/countoperator) |[Count key vaults](../samples/starter.md#count-keyvaults) | |
|[distinct](/azure/kusto/query/distinctoperator) |[Show resources that contain storage](../samples/starter.md#show-storage) | |
|[extend](/azure/kusto/query/extendoperator) |[Count virtual machines by OS type](../samples/starter.md#count-os) | |
-|[join](/azure/kusto/query/joinoperator) |[Key vault with subscription name](../samples/advanced.md#join) |Join flavors supported: [innerunique](/azure/kusto/query/joinoperator#default-join-flavor), [inner](/azure/kusto/query/joinoperator#inner-join), [leftouter](/azure/kusto/query/joinoperator#left-outer-join). Limit of 3 `join` in a single query. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use `join`, see [Resource Graph tables](#resource-graph-tables). |
+|[join](/azure/kusto/query/joinoperator) |[Key vault with subscription name](../samples/advanced.md#join) |Join flavors supported: [innerunique](/azure/kusto/query/joinoperator#default-join-flavor), [inner](/azure/kusto/query/joinoperator#inner-join), [leftouter](/azure/kusto/query/joinoperator#left-outer-join). Limit of 3 `join` in a single query, 1 of which may be a cross-table `join`. If all cross-table `join` use is between _Resources_ and _ResourceContainers_, then 3 cross-table `join` are allowed. Custom join strategies, such as broadcast join, aren't allowed. For which tables can use `join`, see [Resource Graph tables](#resource-graph-tables). |
|[limit](/azure/kusto/query/limitoperator) |[List all public IP addresses](../samples/starter.md#list-publicip) |Synonym of `take`. Doesn't work with [Skip](./work-with-data.md#skipping-records). |
|[mvexpand](/azure/kusto/query/mvexpandoperator) | | Legacy operator, use `mv-expand` instead. _RowLimit_ max of 400. The default is 128. |
|[mv-expand](/azure/kusto/query/mvexpandoperator) |[List Cosmos DB with specific write locations](../samples/advanced.md#mvexpand-cosmosdb) |_RowLimit_ max of 400. The default is 128. |
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/reference/supported-tables-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/reference/supported-tables-resources.md
@@ -1,7 +1,7 @@
---
title: Supported Azure Resource Manager resource types
description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History.
-ms.date: 11/20/2020
+ms.date: 01/06/2021
ms.topic: reference
ms.custom: generated
---
@@ -31,33 +31,57 @@ part of a **table** in Resource Graph.
- microsoft.maintenance/configurationassignments
- microsoft.maintenance/updates
+## patchassessmentresources
+
+- microsoft.compute/virtualmachines/patchassessmentresults
+- microsoft.compute/virtualmachines/patchassessmentresults/softwarepatches
+- microsoft.hybridcompute/machines/patchassessmentresults
+- microsoft.hybridcompute/machines/patchassessmentresults/softwarepatches
+
+## patchinstallationresources
+
+- microsoft.compute/virtualmachines/patchinstallationresults
+- microsoft.compute/virtualmachines/patchinstallationresults/softwarepatches
+- microsoft.hybridcompute/machines/patchinstallationresults
+- microsoft.hybridcompute/machines/patchinstallationresults/softwarepatches
+
## policyresources
- microsoft.policyinsights/policystates
+## recoveryservicesresources
+
+- microsoft.dataprotection/backupvaults/backupinstances
+- microsoft.dataprotection/backupvaults/backupjobs
+- microsoft.dataprotection/backupvaults/backuppolicies
+- microsoft.recoveryservices/vaults/alerts
+- Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems (Backup Items)
+- microsoft.recoveryservices/vaults/backupjobs
+- microsoft.recoveryservices/vaults/backuppolicies
+
## resourcecontainers
-- microsoft.resources/subscriptions
-- microsoft.resources/subscriptions/resourcegroups
+- microsoft.resources/subscriptions (Subscriptions)
+- Microsoft.Resources/subscriptions/resourceGroups (Resource groups)
## resources
-- 84codes.cloudamqp/servers
-- citrix.services/xenappessentials
-- citrix.services/xendesktopessentials
-- conexlink.mycloudit/accounts
-- crypteron.datasecurity/apps
+- 84codes.CloudAMQP/servers (CloudAMQP)
+- Citrix.Services/XenAppEssentials (Citrix Virtual Apps Essentials)
+- Citrix.Services/XenDesktopEssentials (Citrix Virtual Desktops Essentials)
+- Conexlink.MyCloudIt/accounts (MyCloudIT - Azure Desktop Hosting)
+- Crypteron.DataSecurity/apps (Crypteron)
- gridpro.evops/accounts
- gridpro.evops/accounts/eventrules
- gridpro.evops/accounts/requesttemplates
- gridpro.evops/accounts/views
-- hive.streaming/services
+- Hive.Streaming/services (Hive Streaming)
- incapsula.waf/accounts
-- livearena.broadcast/services
-- mailjet.email/services
-- microsoft.aad/domainservices
+- LiveArena.Broadcast/services (LiveArena Broadcast)
+- Mailjet.Email/services (Mailjet Email Service)
+- Microsoft.AAD/domainServices (Azure AD Domain Services)
- microsoft.aadiam/azureadmetrics
-- microsoft.aadiam/privatelinkforazuread
+- microsoft.aadiam/privateLinkForAzureAD (Private Link for Azure AD)
- microsoft.aadiam/tenants
- microsoft.agfoodplatform/farmbeats
- microsoft.aisupercomputer/accounts
@@ -66,29 +90,30 @@ part of a **table** in Resource Graph.
- microsoft.alertsmanagement/actionrules
- microsoft.alertsmanagement/resourcehealthalertrules
- microsoft.alertsmanagement/smartdetectoralertrules
-- microsoft.analysisservices/servers
-- microsoft.apimanagement/service
+- Microsoft.AnalysisServices/servers (Analysis Services)
+- microsoft.anybuild/clusters
+- Microsoft.ApiManagement/service (API Management services)
- microsoft.appassessment/migrateprojects
-- microsoft.appconfiguration/configurationstores
-- microsoft.appplatform/spring
+- Microsoft.AppConfiguration/configurationStores (App Configuration)
+- Microsoft.AppPlatform/Spring (Azure Spring Cloud)
- microsoft.archive/collections
-- microsoft.attestation/attestationproviders
-- microsoft.authorization/resourcemanagementprivatelinks
+- Microsoft.Attestation/attestationProviders (Attestation providers)
+- Microsoft.Authorization/resourceManagementPrivateLinks (Resource management private links)
- microsoft.automanage/accounts
- microsoft.automanage/configurationprofilepreferences
-- microsoft.automation/automationaccounts
+- Microsoft.Automation/AutomationAccounts (Automation Accounts)
- microsoft.automation/automationaccounts/configurations
-- microsoft.automation/automationaccounts/runbooks
+- Microsoft.Automation/automationAccounts/runbooks (Runbook)
- microsoft.autonomousdevelopmentplatform/accounts
-- microsoft.autonomoussystems/workspaces
-- microsoft.avs/privateclouds
+- Microsoft.AutonomousSystems/workspaces (Bonsai)
+- Microsoft.AVS/privateClouds (AVS Private clouds)
- microsoft.azconfig/configurationstores
-- microsoft.azureactivedirectory/b2cdirectories
-- microsoft.azureactivedirectory/guestusages
-- microsoft.azurearcdata/datacontrollers
-- microsoft.azurearcdata/postgresinstances
-- microsoft.azurearcdata/sqlmanagedinstances
-- microsoft.azurearcdata/sqlserverinstances
+- Microsoft.AzureActiveDirectory/b2cDirectories (B2C Tenants)
+- Microsoft.AzureActiveDirectory/guestUsages (Guest Usages)
+- Microsoft.AzureArcData/dataControllers (Azure Arc data controllers)
+- Microsoft.AzureArcData/postgresInstances (Azure Database for PostgreSQL server groups - Azure Arc)
+- Microsoft.AzureArcData/sqlManagedInstances (SQL managed instances - Azure Arc)
+- Microsoft.AzureArcData/sqlServerInstances (SQL Server - Azure Arc)
- microsoft.azuredata/datacontrollers
- microsoft.azuredata/hybriddatamanagers
- microsoft.azuredata/postgresinstances
@@ -96,87 +121,92 @@ part of a **table** in Resource Graph.
- microsoft.azuredata/sqlinstances
- microsoft.azuredata/sqlmanagedinstances
- microsoft.azuredata/sqlserverinstances
-- microsoft.azuredata/sqlserverregistrations
+- Microsoft.AzureData/sqlServerRegistrations (SQL Server registries)
- microsoft.azurestack/edgesubscriptions
- microsoft.azurestack/linkedsubscriptions
-- microsoft.azurestack/registrations
-- microsoft.azurestackhci/clusters
+- Microsoft.Azurestack/registrations (Azure Stack Hubs)
+- Microsoft.AzureStackHCI/clusters (Azure Stack HCI)
- microsoft.baremetal/consoleconnections
-- microsoft.baremetal/crayservers
-- microsoft.baremetal/monitoringservers
-- microsoft.baremetalinfrastructure/baremetalinstances
-- microsoft.batch/batchaccounts
+- Microsoft.BareMetal/crayServers (Cray Servers)
+- Microsoft.BareMetal/monitoringServers (Monitoring Servers)
+- Microsoft.BareMetalInfrastructure/bareMetalInstances (BareMetal Instances)
+- Microsoft.Batch/batchAccounts (Batch accounts)
- microsoft.batchai/clusters
- microsoft.batchai/fileservers
- microsoft.batchai/jobs
- microsoft.batchai/workspaces
-- microsoft.bing/accounts
-- microsoft.bingmaps/mapapis
+- Microsoft.Bing/accounts (Bing Resources)
+- Microsoft.BingMaps/mapApis (Bing Maps API for Enterprise)
- microsoft.biztalkservices/biztalk
-- microsoft.blockchain/blockchainmembers
-- microsoft.blockchain/cordamembers
-- microsoft.blockchain/watchers
-- microsoft.botservice/botservices
-- microsoft.cache/redis
-- microsoft.cache/redisenterprise
-- microsoft.cdn/cdnwebapplicationfirewallpolicies
-- microsoft.cdn/profiles
-- microsoft.cdn/profiles/endpoints
-- microsoft.certificateregistration/certificateorders
+- Microsoft.Blockchain/blockchainMembers (Azure Blockchain Service)
+- Microsoft.Blockchain/cordaMembers (Corda)
+- Microsoft.Blockchain/watchers (Blockchain Data Manager)
+- Microsoft.BotService/botServices (Bot Services)
+- Microsoft.Cache/Redis (Azure Cache for Redis)
+- Microsoft.Cache/RedisEnterprise (Redis Enterprise)
+- Microsoft.Cdn/CdnWebApplicationFirewallPolicies (Web application firewall policies (WAF))
+- microsoft.cdn/profiles (CDN profiles)
+- microsoft.cdn/profiles/afdendpoints
+- microsoft.cdn/profiles/endpoints (Endpoints)
+- Microsoft.CertificateRegistration/certificateOrders (App Service Certificates)
- microsoft.chaos/chaosexperiments
-- microsoft.classiccompute/domainnames
-- microsoft.classiccompute/virtualmachines
-- microsoft.classicnetwork/networksecuritygroups
-- microsoft.classicnetwork/reservedips
-- microsoft.classicnetwork/virtualnetworks
-- microsoft.classicstorage/storageaccounts
+- microsoft.classicCompute/domainNames (Cloud services (classic))
+- Microsoft.ClassicCompute/VirtualMachines (Virtual machines (classic))
+- Microsoft.ClassicNetwork/networkSecurityGroups (Network security groups (classic))
+- Microsoft.ClassicNetwork/reservedIps (Reserved IP addresses (classic))
+- Microsoft.ClassicNetwork/virtualNetworks (Virtual networks (classic))
+- Microsoft.ClassicStorage/StorageAccounts (Storage accounts (classic))
- microsoft.cloudes/accounts
- microsoft.cloudsearch/indexes
-- microsoft.cloudtest/accounts
-- microsoft.cloudtest/hostedpools
-- microsoft.cloudtest/images
-- microsoft.cloudtest/pools
+- Microsoft.CloudTest/accounts (CloudTest Accounts)
+- Microsoft.CloudTest/hostedpools (1ES Hosted Pools)
+- Microsoft.CloudTest/images (CloudTest Images)
+- Microsoft.CloudTest/pools (CloudTest Pools)
- microsoft.codespaces/plans
-- microsoft.cognition/syntheticsaccounts
-- microsoft.cognitiveservices/accounts
-- microsoft.compute/availabilitysets
-- microsoft.compute/cloudservices
-- microsoft.compute/diskaccesses
-- microsoft.compute/diskencryptionsets
-- microsoft.compute/disks
-- microsoft.compute/galleries
+- Microsoft.Cognition/syntheticsAccounts (Synthetics Accounts)
+- Microsoft.CognitiveServices/accounts (Cognitive Services)
+- Microsoft.Compute/availabilitySets (Availability sets)
+- microsoft.compute/capacityreservationgroups
+- microsoft.compute/capacityreservationgroups/capacityreservations
+- microsoft.compute/capacityreservations
+- Microsoft.Compute/cloudServices (Cloud services (extended support))
+- Microsoft.Compute/diskAccesses (Disk Accesses)
+- Microsoft.Compute/diskEncryptionSets (Disk Encryption Sets)
+- Microsoft.Compute/disks (Disks)
+- Microsoft.Compute/galleries (Shared image galleries)
- microsoft.compute/galleries/applications
- microsoft.compute/galleries/applications/versions
-- microsoft.compute/galleries/images
-- microsoft.compute/galleries/images/versions
-- microsoft.compute/hostgroups
-- microsoft.compute/hostgroups/hosts
-- microsoft.compute/images
-- microsoft.compute/proximityplacementgroups
+- Microsoft.Compute/galleries/images (Image definitions)
+- Microsoft.Compute/galleries/images/versions (Image versions)
+- Microsoft.Compute/hostgroups (Host groups)
+- Microsoft.Compute/hostgroups/hosts (Hosts)
+- Microsoft.Compute/images (Images)
+- Microsoft.Compute/ProximityPlacementGroups (Proximity placement groups)
- microsoft.compute/restorepointcollections
- microsoft.compute/sharedvmextensions
- microsoft.compute/sharedvmextensions/versions
- microsoft.compute/sharedvmimages
- microsoft.compute/sharedvmimages/versions
-- microsoft.compute/snapshots
-- microsoft.compute/sshpublickeys
+- Microsoft.Compute/snapshots (Snapshots)
+- Microsoft.Compute/sshPublicKeys (SSH keys)
- microsoft.compute/swiftlets
-- microsoft.compute/virtualmachines
+- Microsoft.Compute/VirtualMachines (Virtual machines)
- microsoft.compute/virtualmachines/extensions
- microsoft.compute/virtualmachines/runcommands
-- microsoft.compute/virtualmachinescalesets
-- microsoft.confluent/organizations
-- microsoft.connectedcache/cachenodes
-- microsoft.containerinstance/containergroups
-- microsoft.containerregistry/registries
+- Microsoft.Compute/virtualMachineScaleSets (Virtual machine scale sets)
+- Microsoft.Confluent/organizations (Confluent Organizations)
+- Microsoft.ConnectedCache/cacheNodes (Connected Cache Resources)
+- microsoft.connectedvehicle/platformaccounts
+- Microsoft.ContainerInstance/containerGroups (Container instances)
+- Microsoft.ContainerRegistry/registries (Container registries)
- microsoft.containerregistry/registries/agentpools
- microsoft.containerregistry/registries/buildtasks
-- microsoft.containerregistry/registries/replications
+- Microsoft.ContainerRegistry/registries/replications (Container registry replications)
- microsoft.containerregistry/registries/taskruns
- microsoft.containerregistry/registries/tasks
-- microsoft.containerregistry/registries/webhooks
-- microsoft.containerservice/containerservices
-- microsoft.containerservice/managedclusters
+- Microsoft.ContainerRegistry/registries/webhooks (Container registry webhooks)
+- Microsoft.ContainerService/containerServices (Container services (deprecated))
+- Microsoft.ContainerService/managedClusters (Kubernetes services)
- microsoft.containerservice/openshiftmanagedclusters
- microsoft.contoso/clusters
- microsoft.contoso/employees
@@ -184,230 +214,244 @@ part of a **table** in Resource Graph.
- microsoft.costmanagement/connectors
- microsoft.customproviders/resourceproviders
- microsoft.d365customerinsights/instances
-- microsoft.databox/jobs
-- microsoft.databoxedge/databoxedgedevices
-- microsoft.databricks/workspaces
-- microsoft.datacatalog/catalogs
+- Microsoft.DataBox/jobs (Data Box)
+- Microsoft.DataBoxEdge/dataBoxEdgeDevices (Azure Stack Edge / Data Box Gateway)
+- Microsoft.Databricks/workspaces (Azure Databricks Services)
+- Microsoft.DataCatalog/catalogs (Data Catalog)
- microsoft.datacatalog/datacatalogs
-- microsoft.datacollaboration/workspaces
-- microsoft.datadog/monitors
-- microsoft.datafactory/datafactories
-- microsoft.datafactory/factories
-- microsoft.datalakeanalytics/accounts
-- microsoft.datalakestore/accounts
-- microsoft.datamigration/services
-- microsoft.datamigration/services/projects
+- Microsoft.DataCollaboration/workspaces (Data Collaborations)
+- Microsoft.Datadog/monitors (Datadog)
+- Microsoft.DataFactory/dataFactories (Data factories)
+- Microsoft.DataFactory/factories (Data factories (V2))
+- Microsoft.DataLakeAnalytics/accounts (Data Lake Analytics)
+- Microsoft.DataLakeStore/accounts (Data Lake Storage Gen1)
+- microsoft.datamigration/controllers
+- Microsoft.DataMigration/services (Azure Database Migration Services)
+- Microsoft.DataMigration/services/projects (Azure Database Migration Projects)
- microsoft.datamigration/slots
-- microsoft.dataprotection/backupvaults
+- Microsoft.DataProtection/BackupVaults (Backup vaults)
- microsoft.dataprotection/resourceoperationgatekeepers
-- microsoft.datashare/accounts
-- microsoft.dbformariadb/servers
-- microsoft.dbformysql/flexibleservers
-- microsoft.dbformysql/servers
-- microsoft.dbforpostgresql/flexibleservers
-- microsoft.dbforpostgresql/servergroups
-- microsoft.dbforpostgresql/servers
-- microsoft.dbforpostgresql/serversv2
+- Microsoft.DataShare/accounts (Data Shares)
+- Microsoft.DBforMariaDB/servers (Azure Database for MariaDB servers)
+- Microsoft.DBforMySQL/flexibleServers (Azure Database for MySQL flexible servers)
+- Microsoft.DBforMySQL/servers (Azure Database for MySQL servers)
+- Microsoft.DBforPostgreSQL/flexibleServers (Azure Database for PostgreSQL flexible servers)
+- Microsoft.DBforPostgreSQL/serverGroups (Azure Database for PostgreSQL server groups)
+- Microsoft.DBforPostgreSQL/servers (Azure Database for PostgreSQL servers)
+- Microsoft.DBforPostgreSQL/serversv2 (Azure Database for PostgreSQL servers v2)
- microsoft.dbforpostgresql/singleservers
- microsoft.delegatednetwork/controller
- microsoft.delegatednetwork/delegatedsubnets
- microsoft.delegatednetwork/orchestratorinstances
- microsoft.deploymentmanager/artifactsources
-- microsoft.deploymentmanager/rollouts
+- Microsoft.DeploymentManager/Rollouts (Rollouts)
- microsoft.deploymentmanager/servicetopologies
- microsoft.deploymentmanager/servicetopologies/services
- microsoft.deploymentmanager/servicetopologies/services/serviceunits
- microsoft.deploymentmanager/steps
-- microsoft.desktopvirtualization/applicationgroups
-- microsoft.desktopvirtualization/hostpools
-- microsoft.desktopvirtualization/workspaces
+- Microsoft.DesktopVirtualization/ApplicationGroups (Application groups)
+- Microsoft.DesktopVirtualization/HostPools (Host pools)
+- microsoft.desktopvirtualization/scalingplans
+- Microsoft.DesktopVirtualization/Workspaces (Workspaces)
- microsoft.devices/elasticpools
- microsoft.devices/elasticpools/iothubtenants
-- microsoft.devices/iothubs
-- microsoft.devices/provisioningservices
-- microsoft.deviceupdate/accounts
+- Microsoft.Devices/IotHubs (IoT Hub)
+- Microsoft.Devices/ProvisioningServices (Device Provisioning Services)
+- Microsoft.DeviceUpdate/Accounts (Device Update for IoT Hubs)
- microsoft.deviceupdate/accounts/instances
-- microsoft.devops/pipelines
+- microsoft.devops/pipelines (DevOps Starter)
- microsoft.devspaces/controllers
- microsoft.devtestlab/labcenters
-- microsoft.devtestlab/labs
+- Microsoft.DevTestLab/labs (DevTest Labs)
- microsoft.devtestlab/labs/servicerunners
-- microsoft.devtestlab/labs/virtualmachines
+- Microsoft.DevTestLab/labs/virtualMachines (Virtual machines)
- microsoft.devtestlab/schedules
-- microsoft.digitaltwins/digitaltwinsinstances
-- microsoft.documentdb/databaseaccounts
-- microsoft.domainregistration/domains
+- Microsoft.DigitalTwins/digitalTwinsInstances (Azure Digital Twins)
+- Microsoft.DocumentDb/databaseAccounts (Azure Cosmos DB accounts)
+- Microsoft.DomainRegistration/domains (App Service Domains)
+- Microsoft.Elastic/monitors (Elastic)
- microsoft.enterpriseknowledgegraph/services
-- microsoft.eventgrid/domains
-- microsoft.eventgrid/partnernamespaces
-- microsoft.eventgrid/partnerregistrations
-- microsoft.eventgrid/partnertopics
-- microsoft.eventgrid/systemtopics
-- microsoft.eventgrid/topics
-- microsoft.eventhub/clusters
-- microsoft.eventhub/namespaces
-- microsoft.experimentation/experimentworkspaces
-- microsoft.extendedlocation/customlocations
+- Microsoft.EventGrid/domains (Event Grid Domains)
+- Microsoft.EventGrid/partnerNamespaces (Event Grid Partner Namespaces)
+- Microsoft.EventGrid/partnerRegistrations (Event Grid Partner Registrations)
+- Microsoft.EventGrid/partnerTopics (Event Grid Partner Topics)
+- Microsoft.EventGrid/systemTopics (Event Grid System Topics)
+- Microsoft.EventGrid/topics (Event Grid Topics)
+- Microsoft.EventHub/clusters (Event Hubs Clusters)
+- Microsoft.EventHub/namespaces (Event Hubs Namespaces)
+- Microsoft.Experimentation/experimentWorkspaces (Experiment Workspaces)
+- Microsoft.ExtendedLocation/CustomLocations (Custom Locations)
- microsoft.falcon/namespaces
- microsoft.footprintmonitoring/profiles
- microsoft.gaming/titles
-- microsoft.genomics/accounts
+- Microsoft.Genomics/accounts (Genomics accounts)
- microsoft.guestconfiguration/automanagedaccounts
-- microsoft.hanaonazure/hanainstances
-- microsoft.hanaonazure/sapmonitors
+- Microsoft.HanaOnAzure/hanaInstances (SAP HANA on Azure)
+- Microsoft.HanaOnAzure/sapMonitors (Azure Monitors for SAP Solutions)
- microsoft.hardwaresecuritymodules/dedicatedhsms
-- microsoft.hdinsight/clusters
-- microsoft.healthcareapis/services
+- Microsoft.HDInsight/clusters (HDInsight clusters)
+- Microsoft.HealthBot/healthBots (Azure Health Bot)
+- Microsoft.HealthcareApis/services (Azure API for FHIR)
- microsoft.healthcareapis/services/privateendpointconnections
-- microsoft.hybridcompute/machines
+- microsoft.healthcareapis/workspaces
+- microsoft.healthcareapis/workspaces/dicomservices
+- Microsoft.HybridCompute/machines (Servers - Azure Arc)
- microsoft.hybridcompute/machines/extensions
-- microsoft.hybridcompute/privatelinkscopes
-- microsoft.hybriddata/datamanagers
-- microsoft.hybridnetwork/devices
-- microsoft.hybridnetwork/networkfunctions
+- Microsoft.HybridCompute/privateLinkScopes (Azure Arc Private Link Scopes)
+- Microsoft.HybridData/dataManagers (StorSimple Data Managers)
+- Microsoft.HybridNetwork/devices (Azure Network Function Manager – Devices)
+- Microsoft.HybridNetwork/networkFunctions (Azure Network Function Manager – Network Functions)
- microsoft.hybridnetwork/virtualnetworkfunctions
-- microsoft.importexport/jobs
+- Microsoft.ImportExport/jobs (Import/export jobs)
- microsoft.industrydatalifecycle/basemodels
- microsoft.industrydatalifecycle/custodiancollaboratives
- microsoft.industrydatalifecycle/derivedmodels
- microsoft.industrydatalifecycle/membercollaboratives
+- microsoft.industrydatalifecycle/modelmappings
- microsoft.industrydatalifecycle/pipelinesets
- microsoft.insights/actiongroups
- microsoft.insights/activitylogalerts
- microsoft.insights/alertrules
- microsoft.insights/autoscalesettings
-- microsoft.insights/components
-- microsoft.insights/datacollectionrules
+- microsoft.insights/components (Application Insights)
+- microsoft.insights/datacollectionrules (Data Collection Rules)
- microsoft.insights/guestdiagnosticsettings
- microsoft.insights/metricalerts
- microsoft.insights/notificationgroups
- microsoft.insights/notificationrules
-- microsoft.insights/privatelinkscopes
+- Microsoft.Insights/privateLinkScopes (Azure Monitor Private Link Scopes)
- microsoft.insights/querypacks
- microsoft.insights/scheduledqueryrules
-- microsoft.insights/webtests
-- microsoft.insights/workbooks
-- microsoft.insights/workbooktemplates
-- microsoft.intelligentitdigitaltwin/digitaltwins
-- microsoft.iotcentral/iotapps
-- microsoft.iotspaces/graph
+- microsoft.insights/webtests (Availability tests)
+- microsoft.insights/workbooks (Azure Workbooks)
+- microsoft.insights/workbooktemplates (Azure Workbook Templates)
+- Microsoft.IntelligentITDigitalTwin/digitalTwins (Minervas)
+- microsoft.intelligentitdigitaltwin/digitaltwins/assets
+- Microsoft.IoTCentral/IoTApps (IoT Central Applications)
+- Microsoft.IoTSpaces/Graph (Digital Twins (Deprecated))
- microsoft.keyvault/hsmpools
- microsoft.keyvault/managedhsms
-- microsoft.keyvault/vaults
-- microsoft.kubernetes/connectedclusters
-- microsoft.kusto/clusters
-- microsoft.kusto/clusters/databases
-- microsoft.labservices/labaccounts
-- microsoft.logic/integrationaccounts
-- microsoft.logic/integrationserviceenvironments
-- microsoft.logic/integrationserviceenvironments/managedapis
-- microsoft.logic/workflows
-- microsoft.machinelearning/commitmentplans
-- microsoft.machinelearning/webservices
-- microsoft.machinelearning/workspaces
+- Microsoft.KeyVault/vaults (Key vaults)
+- Microsoft.Kubernetes/connectedClusters (Kubernetes - Azure Arc)
+- Microsoft.Kusto/clusters (Azure Data Explorer Clusters)
+- Microsoft.Kusto/clusters/databases (Azure Data Explorer Databases)
+- Microsoft.LabServices/labAccounts (Lab Services)
+- Microsoft.LoadTestService/LoadTests (Cloud Native Load Tests)
+- Microsoft.Logic/integrationAccounts (Integration accounts)
+- Microsoft.Logic/integrationServiceEnvironments (Integration Service Environments)
+- Microsoft.Logic/integrationServiceEnvironments/managedApis (Managed Connector)
+- Microsoft.Logic/workflows (Logic apps)
+- Microsoft.Logz/monitors (Logz Main Account)
+- Microsoft.Logz/monitors/accounts (Logz SubAccount)
+- Microsoft.MachineLearning/commitmentPlans (Machine Learning Studio (classic) web service plans)
+- Microsoft.MachineLearning/webServices (Machine Learning Studio (classic) web services)
+- Microsoft.MachineLearning/workspaces (Machine Learning Studio (classic) workspaces)
- microsoft.machinelearningcompute/operationalizationclusters
-- microsoft.machinelearningservices/workspaces
+- microsoft.machinelearningservices/modelinventories
+- microsoft.machinelearningservices/modelinventory
+- Microsoft.MachineLearningServices/workspaces (Machine Learning)
- microsoft.machinelearningservices/workspaces/batchendpoints
+- microsoft.machinelearningservices/workspaces/batchendpoints/deployments
- microsoft.machinelearningservices/workspaces/inferenceendpoints
- microsoft.machinelearningservices/workspaces/inferenceendpoints/deployments
-- microsoft.machinelearningservices/workspaces/onlineendpoints
-- microsoft.machinelearningservices/workspaces/onlineendpoints/deployments
-- microsoft.maintenance/maintenanceconfigurations
+- Microsoft.MachineLearningServices/workspaces/onlineEndpoints (ML Apps)
+- Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments (ML App Deployments)
+- Microsoft.Maintenance/maintenanceConfigurations (Maintenance Configurations)
- microsoft.maintenance/maintenancepolicies
- microsoft.managedidentity/groups
-- microsoft.managedidentity/userassignedidentities
+- Microsoft.ManagedIdentity/userAssignedIdentities (Managed Identities)
- microsoft.managednetwork/managednetworkgroups
- microsoft.managednetwork/managednetworkpeeringpolicies
- microsoft.managednetwork/managednetworks
- microsoft.managednetwork/managednetworks/managednetworkgroups
- microsoft.managednetwork/managednetworks/managednetworkpeeringpolicies
-- microsoft.maps/accounts
+- Microsoft.Maps/accounts (Azure Maps Accounts)
- microsoft.maps/accounts/creators
-- microsoft.maps/accounts/privateatlases
-- microsoft.marketplaceapps/classicdevservices
-- microsoft.media/mediaservices
-- microsoft.media/mediaservices/liveevents
-- microsoft.media/mediaservices/streamingendpoints
+- Microsoft.Maps/accounts/privateAtlases (Azure Maps Creator Resources)
+- Microsoft.MarketplaceApps/classicDevServices (Classic Dev Services)
+- microsoft.media/mediaservices (Media Services)
+- microsoft.media/mediaservices/liveevents (Live events)
+- microsoft.media/mediaservices/streamingEndpoints (Streaming Endpoints)
- microsoft.media/mediaservices/transforms
- microsoft.microservices4spring/appclusters
- microsoft.migrate/assessmentprojects
- microsoft.migrate/migrateprojects
- microsoft.migrate/movecollections
-- microsoft.migrate/projects
-- microsoft.mixedreality/holographicsbroadcastaccounts
-- microsoft.mixedreality/objectunderstandingaccounts
-- microsoft.mixedreality/remoterenderingaccounts
-- microsoft.mixedreality/spatialanchorsaccounts
+- Microsoft.Migrate/projects (Migration projects)
+- Microsoft.MixedReality/holographicsBroadcastAccounts (Holographics Broadcast Accounts)
+- Microsoft.MixedReality/objectUnderstandingAccounts (Object Understanding Accounts)
+- Microsoft.MixedReality/remoteRenderingAccounts (Remote Rendering Accounts)
+- Microsoft.MixedReality/spatialAnchorsAccounts (Spatial Anchors Accounts)
- microsoft.mixedreality/surfacereconstructionaccounts
-- microsoft.netapp/netappaccounts
+- Microsoft.NetApp/netAppAccounts (NetApp accounts)
- microsoft.netapp/netappaccounts/backuppolicies
-- microsoft.netapp/netappaccounts/capacitypools
-- microsoft.netapp/netappaccounts/capacitypools/volumes
+- Microsoft.NetApp/netAppAccounts/capacityPools (Capacity pools)
+- Microsoft.NetApp/netAppAccounts/capacityPools/Volumes (Volumes)
- microsoft.netapp/netappaccounts/capacitypools/volumes/mounttargets
-- microsoft.netapp/netappaccounts/capacitypools/volumes/snapshots
-- microsoft.network/applicationgateways
-- microsoft.network/applicationgatewaywebapplicationfirewallpolicies
-- microsoft.network/applicationsecuritygroups
-- microsoft.network/azurefirewalls
-- microsoft.network/bastionhosts
-- microsoft.network/connections
+- Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots (Snapshots)
+- Microsoft.Network/applicationGateways (Application gateways)
+- Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies (Web application firewall policies (WAF))
+- Microsoft.Network/applicationSecurityGroups (Application security groups)
+- Microsoft.Network/azureFirewalls (Firewalls)
+- Microsoft.Network/bastionHosts (Bastions)
+- Microsoft.Network/connections (Connections)
- microsoft.network/customipprefixes
- microsoft.network/ddoscustompolicies
-- microsoft.network/ddosprotectionplans
-- microsoft.network/dnszones
+- Microsoft.Network/ddosProtectionPlans (DDoS protection plans)
+- Microsoft.Network/dnsZones (DNS zones)
- microsoft.network/dscpconfigurations
-- microsoft.network/expressroutecircuits
+- Microsoft.Network/expressRouteCircuits (ExpressRoute circuits)
- microsoft.network/expressroutecrossconnections
- microsoft.network/expressroutegateways
-- microsoft.network/expressrouteports
-- microsoft.network/firewallpolicies
-- microsoft.network/frontdoors
-- microsoft.network/frontdoorwebapplicationfirewallpolicies
+- Microsoft.Network/expressRoutePorts (ExpressRoute Direct)
+- Microsoft.Network/firewallPolicies (Firewall Policies)
+- Microsoft.Network/frontdoors (Front Doors)
+- Microsoft.Network/FrontDoorWebApplicationFirewallPolicies (Web Application Firewall policies (WAF))
- microsoft.network/ipallocations
-- microsoft.network/ipgroups
-- microsoft.network/loadbalancers
-- microsoft.network/localnetworkgateways
+- Microsoft.Network/ipGroups (IP Groups)
+- Microsoft.Network/LoadBalancers (Load balancers)
+- Microsoft.Network/localnetworkgateways (Local network gateways)
- microsoft.network/mastercustomipprefixes
-- microsoft.network/natgateways
-- microsoft.network/networkexperimentprofiles
+- Microsoft.Network/natGateways (NAT gateways)
+- Microsoft.Network/NetworkExperimentProfiles (Internet Analyzer profiles)
- microsoft.network/networkintentpolicies
-- microsoft.network/networkinterfaces
-- microsoft.network/networkmanagers
+- Microsoft.Network/networkinterfaces (Network interfaces)
+- Microsoft.Network/networkManagers (Network Managers)
- microsoft.network/networkprofiles
-- microsoft.network/networksecuritygroups
+- Microsoft.Network/NetworkSecurityGroups (Network security groups)
- microsoft.network/networkvirtualappliances
-- microsoft.network/networkwatchers
+- microsoft.network/networkwatchers (Network Watchers)
- microsoft.network/networkwatchers/connectionmonitors
-- microsoft.network/networkwatchers/flowlogs
+- microsoft.network/networkwatchers/flowlogs (NSG Flow Logs)
- microsoft.network/networkwatchers/lenses
- microsoft.network/networkwatchers/pingmeshes
- microsoft.network/p2svpngateways
-- microsoft.network/privatednszones
+- Microsoft.Network/privateDnsZones (Private DNS zones)
- microsoft.network/privatednszones/virtualnetworklinks
- microsoft.network/privateendpointredirectmaps
-- microsoft.network/privateendpoints
-- microsoft.network/privatelinkservices
-- microsoft.network/publicipaddresses
-- microsoft.network/publicipprefixes
-- microsoft.network/routefilters
-- microsoft.network/routetables
+- Microsoft.Network/privateEndpoints (Private endpoints)
+- Microsoft.Network/privateLinkServices (Private link services)
+- Microsoft.Network/PublicIpAddresses (Public IP addresses)
+- Microsoft.Network/publicIpPrefixes (Public IP Prefixes)
+- Microsoft.Network/routeFilters (Route filters)
+- Microsoft.Network/routeTables (Route tables)
- microsoft.network/sampleresources
- microsoft.network/securitypartnerproviders
-- microsoft.network/serviceendpointpolicies
-- microsoft.network/trafficmanagerprofiles
+- Microsoft.Network/serviceEndpointPolicies (Service endpoint policies)
+- Microsoft.Network/trafficmanagerprofiles (Traffic Manager profiles)
- microsoft.network/virtualhubs
- microsoft.network/virtualhubs/bgpconnections
- microsoft.network/virtualhubs/ipconfigurations
-- microsoft.network/virtualnetworkgateways
-- microsoft.network/virtualnetworks
+- Microsoft.Network/virtualNetworkGateways (Virtual network gateways)
+- Microsoft.Network/virtualNetworks (Virtual networks)
- microsoft.network/virtualnetworktaps
- microsoft.network/virtualrouters
-- microsoft.network/virtualwans
+- Microsoft.Network/virtualWans (Virtual WANs)
- microsoft.network/vpngateways
- microsoft.network/vpnserverconfigurations
- microsoft.network/vpnsites
-- microsoft.notificationhubs/namespaces
-- microsoft.notificationhubs/namespaces/notificationhubs
+- Microsoft.NotificationHubs/namespaces (Notification Hub Namespaces)
+- Microsoft.NotificationHubs/namespaces/notificationHubs (Notification Hubs)
- microsoft.nutanix/interfaces
- microsoft.nutanix/nodes
- microsoft.objectstore/osnamespaces
@@ -416,161 +460,163 @@ part of a **table** in Resource Graph.
- microsoft.offazure/mastersites
- microsoft.offazure/serversites
- microsoft.offazure/vmwaresites
-- microsoft.openlogisticsplatform/workspaces
+- Microsoft.OpenLogisticsPlatform/workspaces (Open Supply Chain Platform)
- microsoft.operationalinsights/clusters
-- microsoft.operationalinsights/querypacks
-- microsoft.operationalinsights/workspaces
-- microsoft.operationsmanagement/solutions
+- Microsoft.OperationalInsights/querypacks (Log Analytics query packs)
+- Microsoft.OperationalInsights/workspaces (Log Analytics workspaces)
+- Microsoft.OperationsManagement/solutions (Solutions)
- microsoft.operationsmanagement/views
- microsoft.orbital/contactprofiles
- microsoft.orbital/spacecrafts
-- microsoft.peering/peerings
-- microsoft.peering/peeringservices
-- microsoft.portal/dashboards
+- Microsoft.Peering/peerings (Peerings)
+- Microsoft.Peering/peeringServices (Peering Services)
+- Microsoft.Portal/dashboards (Shared dashboards)
- microsoft.portalsdk/rootresources
- microsoft.powerbi/privatelinkservicesforpowerbi
- microsoft.powerbi/tenants
- microsoft.powerbi/workspacecollections
-- microsoft.powerbidedicated/capacities
-- microsoft.projectbabylon/accounts
-- microsoft.purview/accounts
-- microsoft.quantum/workspaces
-- microsoft.recoveryservices/vaults
-- microsoft.redhatopenshift/openshiftclusters
-- microsoft.relay/namespaces
+- Microsoft.PowerBIDedicated/capacities (Power BI Embedded)
+- Microsoft.ProjectBabylon/Accounts (Babylon accounts)
+- Microsoft.Purview/Accounts (Purview accounts)
+- Microsoft.Quantum/Workspaces (Quantum Workspaces)
+- Microsoft.RecoveryServices/vaults (Recovery Services vaults)
+- Microsoft.RedHatOpenShift/openShiftClusters (OpenShift clusters)
+- Microsoft.Relay/namespaces (Relays)
- microsoft.remoteapp/collections
- microsoft.resiliency/chaosexperiments
-- microsoft.resourcegraph/queries
-- microsoft.resources/deploymentscripts
-- microsoft.resources/templatespecs
+- microsoft.resourceconnector/appliances
+- Microsoft.resourcegraph/queries (Resource Graph queries)
+- Microsoft.Resources/deploymentScripts (Deployment Scripts)
+- Microsoft.Resources/templateSpecs (Template specs)
- microsoft.resources/templatespecs/versions
-- microsoft.saas/applications
-- microsoft.scheduler/jobcollections
+- Microsoft.SaaS/applications (Software as a Service (classic))
+- Microsoft.Scheduler/jobCollections (Scheduler Job Collections)
- microsoft.scvmm/clouds
-- microsoft.scvmm/virtualmachines
+- Microsoft.scvmm/virtualMachines (SCVMM virtual machine - Azure Arc)
- microsoft.scvmm/virtualmachinetemplates
- microsoft.scvmm/virtualnetworks
- microsoft.scvmm/vmmservers
-- microsoft.search/searchservices
+- Microsoft.Search/searchServices (Search services)
- microsoft.security/automations
- microsoft.security/iotsecuritysolutions
-- microsoft.securitydetonation/chambers
-- microsoft.servicebus/namespaces
-- microsoft.servicefabric/clusters
+- Microsoft.SecurityDetonation/chambers (Security Detonation Chambers)
+- Microsoft.ServiceBus/namespaces (Service Bus Namespaces)
+- Microsoft.ServiceFabric/clusters (Service Fabric clusters)
- microsoft.servicefabric/containergroupsets
-- microsoft.servicefabric/managedclusters
-- microsoft.servicefabricmesh/applications
+- Microsoft.ServiceFabric/managedclusters (Managed Service Fabric clusters)
+- Microsoft.ServiceFabricMesh/applications (Mesh applications)
- microsoft.servicefabricmesh/gateways
- microsoft.servicefabricmesh/networks
- microsoft.servicefabricmesh/secrets
- microsoft.servicefabricmesh/volumes
-- microsoft.serviceshub/connectors
-- microsoft.signalrservice/signalr
+- Microsoft.ServicesHub/connectors (Services Hub Connectors)
+- Microsoft.SignalRService/SignalR (SignalR)
- microsoft.singularity/accounts
- microsoft.solutions/appliancedefinitions
- microsoft.solutions/appliances
-- microsoft.solutions/applicationdefinitions
-- microsoft.solutions/applications
+- Microsoft.Solutions/applicationDefinitions (Service catalog managed application definitions)
+- Microsoft.Solutions/applications (Managed applications)
- microsoft.solutions/jitrequests
- microsoft.spoolservice/spools
-- microsoft.sql/instancepools
-- microsoft.sql/managedinstances
-- microsoft.sql/managedinstances/databases
-- microsoft.sql/servers
-- microsoft.sql/servers/databases
-- microsoft.sql/servers/elasticpools
+- Microsoft.Sql/instancePools (Instance pools)
+- Microsoft.Sql/managedInstances (SQL managed instances)
+- Microsoft.Sql/managedInstances/databases (Managed databases)
+- Microsoft.Sql/servers (SQL servers)
+- Microsoft.Sql/servers/databases (SQL databases)
+- Microsoft.Sql/servers/elasticpools (SQL elastic pools)
- microsoft.sql/servers/jobaccounts
-- microsoft.sql/servers/jobagents
-- microsoft.sql/virtualclusters
+- Microsoft.Sql/servers/jobAgents (Elastic Job agents)
+- Microsoft.Sql/virtualClusters (Virtual clusters)
- microsoft.sqlvirtualmachine/sqlvirtualmachinegroups
-- microsoft.sqlvirtualmachine/sqlvirtualmachines
+- Microsoft.SqlVirtualMachine/SqlVirtualMachines (SQL virtual machines)
- microsoft.sqlvm/dwvm
-- microsoft.storage/storageaccounts
-- microsoft.storagecache/caches
-- microsoft.storagesync/storagesyncservices
-- microsoft.storagesyncdev/storagesyncservices
-- microsoft.storagesyncint/storagesyncservices
-- microsoft.storsimple/managers
-- microsoft.streamanalytics/clusters
-- microsoft.streamanalytics/streamingjobs
+- Microsoft.Storage/StorageAccounts (Storage accounts)
+- Microsoft.StorageCache/caches (HPC caches)
+- microsoft.storagepool/diskpools
+- Microsoft.StorageSync/storageSyncServices (Storage Sync Services)
+- Microsoft.StorageSyncDev/storageSyncServices (Storage Sync Services)
+- Microsoft.StorageSyncInt/storageSyncServices (Storage Sync Services)
+- Microsoft.StorSimple/Managers (StorSimple Device Managers)
+- Microsoft.StreamAnalytics/clusters (Stream Analytics clusters)
+- Microsoft.StreamAnalytics/StreamingJobs (Stream Analytics jobs)
- microsoft.swiftlet/virtualmachines
- microsoft.swiftlet/virtualmachinesnapshots
-- microsoft.synapse/privatelinkhubs
-- microsoft.synapse/workspaces
-- microsoft.synapse/workspaces/bigdatapools
+- Microsoft.Synapse/privateLinkHubs (Azure Synapse Analytics (private link hubs))
+- Microsoft.Synapse/workspaces (Azure Synapse Analytics)
+- Microsoft.Synapse/workspaces/bigDataPools (Apache Spark pools)
- microsoft.synapse/workspaces/sqldatabases
-- microsoft.synapse/workspaces/sqlpools
+- Microsoft.Synapse/workspaces/sqlPools (Dedicated SQL pools)
- microsoft.terraformoss/providerregistrations
-- microsoft.timeseriesinsights/environments
-- microsoft.timeseriesinsights/environments/eventsources
-- microsoft.timeseriesinsights/environments/referencedatasets
+- Microsoft.TimeSeriesInsights/environments (Time Series Insights environments)
+- Microsoft.TimeSeriesInsights/environments/eventsources (Time Series Insights event sources)
+- Microsoft.TimeSeriesInsights/environments/referenceDataSets (Time Series Insights reference data sets)
- microsoft.token/stores
- microsoft.tokenvault/vaults
- microsoft.virtualmachineimages/imagetemplates
-- microsoft.visualstudio/account
+- microsoft.visualstudio/account (Azure DevOps organizations)
- microsoft.visualstudio/account/extension
-- microsoft.visualstudio/account/project
+- microsoft.visualstudio/account/project (DevOps Starter)
- microsoft.vmware/arczones
- microsoft.vmware/resourcepools
- microsoft.vmware/vcenters
-- microsoft.vmware/virtualmachines
+- Microsoft.VMware/VirtualMachines (AVS virtual machines)
- microsoft.vmware/virtualmachinetemplates
- microsoft.vmware/virtualnetworks
-- microsoft.vmwarecloudsimple/dedicatedcloudnodes
-- microsoft.vmwarecloudsimple/dedicatedcloudservices
-- microsoft.vmwarecloudsimple/virtualmachines
+- Microsoft.VMwareCloudSimple/dedicatedCloudNodes (CloudSimple Nodes)
+- Microsoft.VMwareCloudSimple/dedicatedCloudServices (CloudSimple Services)
+- Microsoft.VMwareCloudSimple/virtualMachines (CloudSimple Virtual Machines)
- microsoft.vmwareonazure/privateclouds
- microsoft.vmwarevirtustream/privateclouds
- microsoft.vsonline/accounts
-- microsoft.vsonline/plans
+- Microsoft.VSOnline/Plans (Visual Studio Online Plans)
- microsoft.web/apimanagementaccounts
- microsoft.web/apimanagementaccounts/apis
- microsoft.web/certificates
-- microsoft.web/connectiongateways
-- microsoft.web/connections
-- microsoft.web/customapis
-- microsoft.web/hostingenvironments
-- microsoft.web/kubeenvironments
-- microsoft.web/serverfarms
-- microsoft.web/sites
+- Microsoft.Web/connectionGateways (On-premises Data Gateways)
+- Microsoft.Web/connections (API Connections)
+- Microsoft.Web/customApis (Logic Apps Custom Connector)
+- Microsoft.Web/HostingEnvironments (App Service Environments)
+- Microsoft.Web/KubeEnvironments (App Service Kubernetes Environments)
+- Microsoft.Web/serverFarms (App Service plans)
+- Microsoft.Web/sites (App Services)
- microsoft.web/sites/premieraddons
-- microsoft.web/sites/slots
-- microsoft.web/staticsites
-- microsoft.windowsesu/multipleactivationkeys
-- microsoft.windowsiot/deviceservices
+- Microsoft.Web/sites/slots (App Service (Slots))
+- Microsoft.Web/StaticSites (Static Web Apps (Preview))
+- Microsoft.WindowsESU/multipleActivationKeys (Windows Multiple Activation Keys)
+- Microsoft.WindowsIoT/DeviceServices (Windows 10 IoT Core Services)
- microsoft.workloadbuilder/workloads
-- myget.packagemanagement/services
-- paraleap.cloudmonix/services
-- pokitdok.platform/services
-- providers.test/statefulibizaengines
+- MyGet.PackageManagement/services (MyGet - Hosted NuGet, NPM, Bower and Vsix)
+- Paraleap.CloudMonix/services (CloudMonix)
+- Pokitdok.Platform/services (PokitDok Platform)
+- Providers.Test/statefulIbizaEngines (Application assessments)
- providers.test/statefulresources
- providers.test/statefulresources/nestedresources
- providers.test/statelessresources
-- ravenhq.db/databases
-- raygun.crashreporting/apps
-- sendgrid.email/accounts
-- sparkpost.basic/services
-- stackify.retrace/services
+- RavenHq.Db/databases (RavenHQ)
+- Raygun.CrashReporting/apps (Raygun)
+- Sendgrid.Email/accounts (SendGrid Accounts)
+- Sparkpost.Basic/services (SparkPost)
+- stackify.retrace/services (Stackify)
- test.shoebox/testresources
- test.shoebox/testresources2
-- trendmicro.deepsecurity/accounts
-- u2uconsult.theidentityhub/services
-- wandisco.fusion/fusiongroups
-- wandisco.fusion/fusiongroups/azurezones
-- wandisco.fusion/fusiongroups/azurezones/plugins
-- wandisco.fusion/fusiongroups/hivereplicationrules
-- wandisco.fusion/fusiongroups/managedonpremzones
+- TrendMicro.DeepSecurity/accounts (Deep Security SaaS)
+- U2uconsult.TheIdentityHub/services (The Identity Hub)
+- Wandisco.Fusion/fusionGroups (LiveData Planes)
+- Wandisco.Fusion/fusionGroups/azureZones (Azure Zones)
+- Wandisco.Fusion/fusionGroups/azureZones/plugins (Plugins)
+- Wandisco.Fusion/fusionGroups/hiveReplicationRules (Hive Replication Rules)
+- Wandisco.Fusion/fusionGroups/managedOnPremZones (On-premises Zones)
- wandisco.fusion/fusiongroups/onpremzones
-- wandisco.fusion/fusiongroups/replicationrules
-- wandisco.fusion/migrators
-- wandisco.fusion/migrators/livedatamigrations
-- wandisco.fusion/migrators/targets
+- Wandisco.Fusion/fusionGroups/replicationRules (Replication Rules)
+- Wandisco.Fusion/migrators (LiveData Migrators)
+- Wandisco.Fusion/migrators/liveDataMigrations (Migrations)
+- Wandisco.Fusion/migrators/targets (Targets)
## securityresources
- microsoft.security/assessments
- microsoft.security/assessments/subassessments
-- microsoft.security/locations/alerts
+- microsoft