Updates from: 08/16/2021 03:05:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/create-access-review.md
For more information, see [License requirements](access-reviews-overview.md#lice
15. In the **Advanced settings** section, you can choose the following:
    - Set **Justification required** to **Enable** to require the reviewer to supply a reason for approval.
    - Set **email notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes.
- - Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review. These reminders will be self half-way through the duration of the review.
+ - Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to all reviewers. Reviewers will receive the reminders halfway through the duration of the review, regardless of whether they have completed their review at that time.
- The content of the email sent to reviewers is autogenerated based on the review details, such as review name, resource name, due date, etc. If you need a way to communicate additional information such as additional instructions or contact information, you can specify these details in the **Additional content for reviewer email** section. The information that you enter is included in the invitation and reminder emails sent to assigned reviewers. The section highlighted in the image below shows where this information is displayed. ![additional content for reviewer](./media/create-access-review/additional-content-reviewer.png)
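The justification and notification settings described above can also be supplied when a review is created programmatically. As a rough, hedged sketch, they appear to map to properties on the `settings` object of a Microsoft Graph access review schedule definition (property names are based on our reading of the Graph `accessReviewScheduleDefinition` resource and should be verified against the current API reference before use):

```json
{
  "settings": {
    "justificationRequiredOnApproval": true,
    "mailNotificationsEnabled": true,
    "reminderNotificationsEnabled": true
  }
}
```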
active-directory Pim How To Start Security Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-start-security-review.md
This article describes how to create one or more access reviews for privileged A
> [!NOTE] > Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
-12. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **(Preview) eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **(Preview) active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
+12. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
<kbd> ![Reviewers list of assignment types](./media/pim-how-to-start-security-review/assignment-type-select.png) </kbd>
This article describes how to create one or more access reviews for privileged A
1. Set **Mail notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes.
-1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review.
+1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to all reviewers. Reviewers will receive the reminders halfway through the duration of the review, regardless of whether they have completed their review at that time.
1. The content of the email sent to reviewers is auto-generated based on the review details, such as review name, resource name, due date, etc. If you need a way to communicate additional information such as additional instructions or contact information, you can specify these details in the **Additional content for reviewer email** which will be included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information will be displayed. ![Content of the email sent to reviewers with highlights](./media/pim-how-to-start-security-review/email-info.png)
active-directory Pim Resource Roles Start Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-start-access-review.md
The need for access to privileged Azure resource roles by employees changes over
> Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews. If you are creating an access review of **Azure AD roles**, the following shows an example of the Review membership list.
-1. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **(Preview) eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **(Preview) active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
+1. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
![Reviewers list of assignment types](./media/pim-resource-roles-start-access-review/assignment-type-select.png)
The need for access to privileged Azure resource roles by employees changes over
1. Set **Mail notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes.
-1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review.
+1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to all reviewers. Reviewers will receive the reminders halfway through the duration of the review, regardless of whether they have completed their review at that time.
1. The content of the email sent to reviewers is autogenerated based on the review details, such as review name, resource name, due date, etc. If you need a way to communicate additional information such as additional instructions or contact information, you can specify these details in the **Additional content for reviewer email** which will be included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information will be displayed. ![Content of the email sent to reviewers with highlights](./media/pim-resource-roles-start-access-review/email-info.png)
active-directory Atea Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/atea-provisioning-tutorial.md
Title: 'Tutorial: Configure Atea for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Atea. writer: twimmers - ms.assetid: b788328b-10fd-4eaa-a4bc-909d738d8b8b -+ Last updated 01/25/2021
active-directory Britive Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/britive-provisioning-tutorial.md
Title: 'Tutorial: Configure Britive for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Britive. writer: twimmers - ms.assetid: 622688b3-9d20-482e-aab9-ce2a1f01e747 -+ Last updated 03/05/2021
active-directory Cybsafe Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cybsafe-provisioning-tutorial.md
Title: 'Tutorial: Configure CybSafe for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to CybSafe. writer: twimmers - ms.assetid: 7255fe44-1662-4ae4-9ff3-9492911b7ce0 -+ Last updated 11/12/2020
active-directory Fortes Change Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/fortes-change-cloud-provisioning-tutorial.md
Title: 'Tutorial: Configure Fortes Change Cloud for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Fortes Change Cloud. writer: twimmers - ms.assetid: ef9a8f5e-0bf0-46d6-8e17-3bcf1a5b0a6b -+ Last updated 01/15/2021
active-directory Grouptalk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/grouptalk-provisioning-tutorial.md
Title: 'Tutorial: Configure GroupTalk for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to GroupTalk. writer: twimmers - ms.assetid: e537d393-2724-450f-9f5b-4611cdc9237c -+ Last updated 11/18/2020
active-directory Helloid Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/helloid-provisioning-tutorial.md
Title: 'Tutorial: Configure HelloID for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to HelloID. writer: twimmers - ms.assetid: ffd450a5-03ec-4364-8921-5c468e119c4d -+ Last updated 01/15/2021
active-directory Holmes Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/holmes-cloud-provisioning-tutorial.md
Title: 'Tutorial: Configure Holmes Cloud for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Holmes Cloud. writer: twimmers - ms.assetid: b1088904-2ea2-4440-b39e-c4b7712b8229 -+ Last updated 06/07/2021
active-directory Hoxhunt Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/hoxhunt-provisioning-tutorial.md
Title: 'Tutorial: Configure Hoxhunt for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Hoxhunt. writer: twimmers - ms.assetid: 24fbe0a4-ab2d-4e10-93a6-c87d634ffbcf -+ Last updated 01/28/2021
active-directory Insite Lms Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/insite-lms-provisioning-tutorial.md
Title: 'Tutorial: Configure Insite LMS for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Insite LMS. writer: twimmers - ms.assetid: c4dbe83d-b5b4-4089-be89-b357e8d6f359 -+ Last updated 04/30/2021
active-directory Iris Intranet Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/iris-intranet-provisioning-tutorial.md
Title: 'Tutorial: Configure Iris Intranet for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Iris Intranet. writer: twimmers - ms.assetid: 38db8479-6d33-43de-9f71-1f1bd184fe69 -+ Last updated 01/15/2021
active-directory Jostle Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jostle-provisioning-tutorial.md
Title: 'Tutorial: Configure Jostle for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Jostle. writer: twimmers - ms.assetid: 6dbb744f-8b8e-4988-b293-ebe079c8c5c5 -+ Last updated 04/05/2021
active-directory Kpifire Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kpifire-provisioning-tutorial.md
Title: 'Tutorial: Configure kpifire for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to kpifire. writer: twimmers - ms.assetid: 8c5dd093-20da-4ff6-a9b2-8071f44accd6 -+ Last updated 04/23/2021
active-directory Limblecmms Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/limblecmms-provisioning-tutorial.md
Title: 'Tutorial: Configure LimbleCMMS for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to LimbleCMMS. writer: twimmers - ms.assetid: 5e0d5369-7230-4a16-bc3f-9eac2bc80a8c -+ Last updated 06/07/2021
active-directory Linkedin Learning Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/linkedin-learning-provisioning-tutorial.md
Title: 'Tutorial: Configure LinkedIn Learning for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to LinkedIn Learning. writer: Zhchia - ms.assetid: 21e2f470-4eb1-472c-adb9-4203c00300be Last updated 06/30/2020
active-directory Olfeo Saas Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/olfeo-saas-provisioning-tutorial.md
Title: 'Tutorial: Configure Olfeo SAAS for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Olfeo SAAS. writer: twimmers - ms.assetid: 5f6b0320-dfe7-451c-8cd8-6ba7f2e40434 -+ Last updated 02/26/2021
active-directory Papercut Cloud Print Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/papercut-cloud-print-management-provisioning-tutorial.md
Title: 'Tutorial: Configure PaperCut Cloud Print Management for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to PaperCut Cloud Print Management. writer: twimmers - ms.assetid: 7e65d727-2951-4aec-a7a3-7bde49ed09e2 -+ Last updated 11/18/2020
active-directory Parsable Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/parsable-provisioning-tutorial.md
Title: 'Tutorial: Configure Parsable for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Parsable. writer: twimmers - ms.assetid: 1ec33ea6-bff4-4665-bf2b-f4037ff28c09 -+ Last updated 11/18/2020
active-directory Preciate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/preciate-provisioning-tutorial.md
Title: 'Tutorial: Configure Preciate for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Preciate. writer: twimmers - ms.assetid: fa640971-87e7-49f2-933b-bc7c95fe51e2 -+ Last updated 12/09/2020
active-directory Sosafe Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sosafe-provisioning-tutorial.md
Title: 'Tutorial: Configure SoSafe for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to SoSafe. writer: twimmers
ms.assetid: 30de9f90-482e-43ef-9fcb-f3d4f5eac533
-+ Last updated 06/07/2021
active-directory Timeclock 365 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/timeclock-365-provisioning-tutorial.md
Title: 'Tutorial: Configure TimeClock 365 for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to TimeClock 365. writer: Zhchia - ms.assetid: dc5e95c8-d878-43dd-918e-69e1686b4db6 -+ Last updated 07/16/2021
active-directory Tribeloo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tribeloo-provisioning-tutorial.md
Title: 'Tutorial: Configure Tribeloo for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Tribeloo. writer: twimmers - ms.assetid: d1063ef2-5d39-4480-a1e2-f58ebe7f98c3 -+ Last updated 06/07/2021
active-directory Unifi Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/unifi-provisioning-tutorial.md
Title: 'Tutorial: Configure UNIFI for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to UNIFI. writer: twimmers - ms.assetid: 924c603f-574e-4e0a-9345-0cb0c7593dbb -+ Last updated 04/20/2021
active-directory Vonage Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/vonage-provisioning-tutorial.md
Title: 'Tutorial: Configure Vonage for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Vonage. writer: twimmers - ms.assetid: dfb7e9bb-c29e-4476-adad-4ab254658e83 -+ Last updated 06/07/2021
active-directory Wedo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/wedo-provisioning-tutorial.md
Title: 'Tutorial: Configure WEDO for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to WEDO. writer: twimmers - ms.assetid: 3088D3EB-CED5-45A5-BD7E-E20B1D7C40F6 -+ Last updated 11/24/2020
active-directory Zip Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zip-provisioning-tutorial.md
Title: 'Tutorial: Configure Zip for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Zip. writer: Zhchia - ms.assetid: 8aea0505-a3a1-4f84-8deb-6e557997c815 -+ Last updated 07/16/2021
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-nodejs.md
To show the current Node.js version, run the following command in the [Cloud She
az webapp config appsettings list --name <app-name> --resource-group <resource-group-name> --query "[?name=='WEBSITE_NODE_DEFAULT_VERSION'].value"
```
-To show all supported Node.js versions, run the following command in the [Cloud Shell](https://shell.azure.com):
+To show all supported Node.js versions, navigate to `https://<sitename>.scm.azurewebsites.net/api/diagnostics/runtime` or run the following command in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive
az webapp list-runtimes | grep node
```
To set your app to a [supported Node.js version](#show-nodejs-version), run the
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings WEBSITE_NODE_DEFAULT_VERSION="10.15"
```
-This setting specifies the Node.js version to use, both at runtime and during automated package restore during App Service build automation.
+This setting specifies the Node.js version to use, both at runtime and during automated package restore during App Service build automation. This setting recognizes only major.minor versions; the _LTS_ moniker is not supported.
> [!NOTE] > You should set the Node.js version in your project's `package.json`. The deployment engine runs in a separate process that contains all the supported Node.js versions.
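The note above recommends setting the Node.js version in your project's `package.json`. A minimal sketch of what that might look like, written here via a shell heredoc (the app name and version range are illustrative placeholders, not values from this article):

```shell
# Write a minimal package.json that pins the Node.js version through the
# "engines" field (name and version range below are placeholders).
cat > package.json <<'EOF'
{
  "name": "sample-app",
  "version": "1.0.0",
  "engines": {
    "node": "~10.15"
  }
}
EOF
cat package.json
```

Note that, consistent with the setting described above, the `engines` range pins a major.minor version rather than an _LTS_ alias.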
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{names
## Overview

Creating the Azure Arc data controller has the following high-level steps:
-1. Create the custom resource definitions for the Arc data controller, Azure SQL managed instance, and PostgreSQL Hyperscale. **[Requires Kubernetes Cluster Administrator Permissions]**
-2. Create a namespace in which the data controller will be created. **[Requires Kubernetes Cluster Administrator Permissions]**
-3. Create the bootstrapper service including the replica set, service account, role, and role binding.
-4. Create a secret for the data controller administrator username and password.
-5. Create the data controller.
-6. Create the webhook deployment job, cluster role and cluster role binding.
+
+ > [!IMPORTANT]
+ > Some of the steps below require Kubernetes cluster administrator permissions.
+
+1. Create the custom resource definitions for the Arc data controller, Azure SQL managed instance, and PostgreSQL Hyperscale.
+1. Create a namespace in which the data controller will be created.
+1. Create the bootstrapper service including the replica set, service account, role, and role binding.
+1. Create a secret for the data controller administrator username and password.
+1. Create the webhook deployment job, cluster role and cluster role binding.
+1. Create the data controller.
+ ## Create the custom resource definitions
-Run the following command to create the custom resource definitions. **[Requires Kubernetes Cluster Administrator Permissions]**
+Run the following command to create the custom resource definitions.
+
+ > [!IMPORTANT]
+ > Requires Kubernetes cluster administrator permissions.
```console
kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
```
kubectl create --namespace arc -f <path to your data controller secret file>
kubectl create --namespace arc -f C:\arc-data-services\controller-login-secret.yaml
```
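As a hedged illustration of what the data controller secret file may contain: Kubernetes `Secret` values under `data` must be base64-encoded, and a manifest along these lines can be generated with a short shell snippet. The key names (`username`, `password`) and credential values here are assumptions for illustration; compare against the sample secret template in the azure_arc repository before using it.

```shell
# Generate base64-encoded credentials (values are illustrative placeholders)
# and write a Kubernetes Secret manifest for the data controller login.
USERNAME_B64=$(printf 'arcadmin' | base64)
PASSWORD_B64=$(printf 'MyStrongP@ssw0rd' | base64)

cat > controller-login-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: controller-login-secret
type: Opaque
data:
  username: ${USERNAME_B64}
  password: ${PASSWORD_B64}
EOF
cat controller-login-secret.yaml
```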
+## Create the webhook deployment job, cluster role and cluster role binding
+
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/web-hook.yaml) locally on your computer so that you can modify some of the settings.
+
+Edit the file and replace `{{namespace}}` in all places with the name of the namespace you created in the previous step. **Save the file.**
+
+Run the following command to create the cluster role and cluster role bindings.
+
+ > [!IMPORTANT]
+ > Requires Kubernetes cluster administrator permissions.
+
+```console
+kubectl create -n arc -f <path to the edited template file on your computer>
+```
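The `{{namespace}}` substitution described above can also be done non-interactively with `sed`. A small sketch (the stand-in file below only illustrates the substitution; run the `sed` command against the real `web-hook.yaml` you downloaded, and a namespace of `arc` is assumed here):

```shell
# Create a tiny stand-in file containing the placeholder, then replace
# every occurrence of {{namespace}} with the target namespace in place.
printf 'metadata:\n  namespace: {{namespace}}\n' > web-hook-sample.yaml
sed -i 's/{{namespace}}/arc/g' web-hook-sample.yaml
cat web-hook-sample.yaml
```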
+
## Create the data controller

Now you are ready to create the data controller itself.
kubectl describe pod/<pod name> --namespace arc
#kubectl describe pod/control-2g7bl --namespace arc
```
-## Create the webhook deployment job, cluster role and cluster role binding
-
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/web-hook.yaml) locally on your computer so that you can modify some of the settings.
-
-Edit the file and replace `{{namespace}}` in three places with the name of the namespace you created in the previous step. **Save the file.**
-
-Run the following command to create the cluster role and cluster role bindings. **[Requires Kubernetes Cluster Administrator Permissions]**
-
-```console
-kubectl create -n arc -f <path to the edited template file on your computer>
-```
-
## Troubleshooting creation problems

If you encounter any issues during creation, see the [troubleshooting guide](troubleshoot-guide.md).
azure-functions Durable Functions Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-orchestrations.md
public static async Task CheckSiteAvailable(
DurableHttpResponse response = await context.CallHttpAsync(HttpMethod.Get, url);
- if (response.StatusCode >= 400)
+ if ((int)response.StatusCode == 400)
{
    // handling of error codes goes here
}
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-activity-log.md
Title: Create, view, and manage activity log alerts in Azure Monitor description: Create activity log alerts by using the Azure portal, an Azure Resource Manager template, and Azure PowerShell. Previously updated : 06/25/2019 -+ Last updated : 08/12/2021
These alerts are for Azure resources and can be created by using an Azure Resour
> [!IMPORTANT] > Alerts on service health notification can't be created via the interface for activity log alert creation. To learn more about how to create and use service health notifications, see [Receive activity log alerts on service health notifications](../../service-health/alerts-activity-log-service-notifications-portal.md).
-When you create alert rules, ensure the following:
+When you create alert rules, ensure the following points:
- The subscription in the scope isn't different from the subscription where the alert is created.
- The criteria must be the level, status, caller, resource group, resource ID, or resource type event category on which the alert is configured.
- - Only one "allOf" condition is allowed.
- - 'AnyOf' can be used to allow multiple conditions over multiple fields (for example, if either the "status" or the "subStatus" fields equal a certain value). Note that the use of 'AnyOf' is currently limited to creating the alert rule using an ARM template deployment.
- - 'ContainsAny' can be used to allow multiple values of the same field (for example, if "operation" equals either 'delete' or 'modify'). Note that the use of 'ContainsAny' is currently limited to creating the alert rule using an ARM template deployment.
+- There's no "anyOf" condition or nested conditions in the alert configuration JSON. Basically, only one "allOf" condition is allowed with no further "allOf" or "anyOf" conditions.
- When the category is "administrative," you must specify at least one of the preceding criteria in your alert. You may not create an alert that activates every time an event is created in the activity logs.
- Alerts cannot be created for events in the Alert category of the activity log.

## Azure portal

You can use the Azure portal to create and modify activity log alert rules. The experience is integrated with an Azure activity log to ensure seamless alert creation for specific events of interest.
+In the Azure portal, you can create a new activity log alert rule either from the Azure Monitor alerts blade or from the Azure Monitor activity log blade.
-### Create with the Azure portal
-Use the following procedure.
+### Create an alert rule from the Azure Monitor alerts blade
-1. In the Azure portal, select **Monitor** > **Alerts**.
-2. Select **New alert rule** in the upper-left corner of the **Alerts** window.
+The following procedure describes how to create an activity log alert rule in the Azure portal:
- ![New alert rule](media/alerts-activity-log/AlertsPreviewOption.png)
+1. In the [Azure portal](https://portal.azure.com), click **Monitor**. The Monitor blade consolidates all your monitoring settings and data in one view.
- The **Create rule** window appears.
+2. Click **Alerts**, then click **+ New alert rule**.
- ![New alert rule options](media/alerts-activity-log/create-new-alert-rule-options.png)
+ :::image type="content" source="media/alerts-activity-log/create-alert-rule-button-new.png" alt-text="Screen shot of new alert rule button.":::
+ > [!TIP]
+ > Most resource blades also have **Alerts** in their resource menu under **Monitoring**; you can create alerts from there as well.
-3. Under **Define alert condition**, provide the following information, and select **Done**:
-
- - **Alert target:** To view and select the target for the new alert, use **Filter by subscription** / **Filter by resource type**. Select the resource or resource group from the list displayed.
-
- > [!NOTE]
- >
- > You can select only [Azure Resource Manager](../../azure-resource-manager/management/overview.md) tracked resource, resource group, or an entire subscription for an activity log signal.
-
- **Alert target sample view**
-
- ![Select target](media/alerts-activity-log/select-target.png)
+3. Click **Select target**. In the context pane that loads, select the target resource that you want to alert on. Use the **Subscription** and **Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
+
+ > [!NOTE]
+ > As a target, you can select an entire subscription, a resource group, or a specific resource. If you choose a subscription or a resource group as the target and also select a resource type, the rule will apply to all resources of that type within the selected subscription or resource group. If you choose a specific target resource, the rule will apply only to that resource. You can't explicitly select multiple subscriptions, resource groups, or resources using the target selector.
- - Under **Target criteria**, select **Add criteria**. All available signals for the target are displayed, which includes those from various categories of **Activity Log**. The category name is appended to the **Monitor Service** name.
+4. If the selected resource has activity log operations you can create alerts on, **Available signals** on the bottom right will include Activity Log. You can view the full list of resource types supported for activity log alerts in this [article](../../role-based-access-control/resource-provider-operations.md).
- - Select the signal from the list displayed of various operations possible for the type **Activity Log**.
+ :::image type="content" source="media/alerts-activity-log/select-target-new.png" alt-text="Screen shot of target selection blade." lightbox="media/alerts-activity-log/select-target-new.png":::
- You can select the log history timeline and the corresponding alert logic for this target signal:
+5. Once you have selected a target resource, click **Add condition**.
- **Add criteria screen**
+6. You will see a list of signals supported for the resource, which includes those from various categories of **Activity Log**. Select the activity log signal/operation you want to create an alert on.
- ![Add criteria](media/alerts-activity-log/add-criteria.png)
-
- > [!NOTE]
- >
- > In order to have a high quality and effective rules, we ask to add at least one more condition to rules with the signal "All Administrative".
- > As a part of the definition of the alert you must fill one of the drop downs: "Event level", "Status" or "Initiated by" and by that the rule will be more specific.
+7. You will see a chart of the activity log operation for the last six hours. Use the **Chart period** dropdown to see a longer history for the operation.
- - **History time**: Events available for the selected operation can be plotted over the last 6, 12, or 24 hours or over the last week.
+8. Under **Alert logic**, you can optionally define more filtering criteria:
- - **Alert logic**:
+ - **Event level**: The severity level of the event: _Verbose_, _Informational_, _Warning_, _Error_, or _Critical_.
+ - **Status**: The status of the event: _Started_, _Failed_, or _Succeeded_.
+ - **Event initiated by**: Also known as the caller. The email address or Azure Active Directory identifier of the user who performed the operation.
- - **Event level**: The severity level of the event: _Verbose_, _Informational_, _Warning_, _Error_, or _Critical_.
- - **Status**: The status of the event: _Started_, _Failed_, or _Succeeded_.
- - **Event initiated by**: Also known as the caller. The email address or Azure Active Directory identifier of the user who performed the operation.
+ > [!NOTE]
+ > To ensure high-quality, effective rules, when the alert scope is an entire subscription and the selected signal is "All Administrative Operations", you must fill in at least one of the alert logic drop-downs ("Event level", "Status", or "Initiated by") so that the rule is more specific.
+
+9. Click **Done**.
- This sample signal graph has the alert logic applied:
+ :::image type="content" source="media/alerts-activity-log/condition-selected-new.png" alt-text="Screenshot of condition selection blade." lightbox="media/alerts-activity-log/condition-selected-new.png":::
- ![Criteria selected](media/alerts-activity-log/criteria-selected.png)
+10. Fill in **Alert details**, such as **Alert rule name**, **Description**, and **Severity**.
-4. Under **Define alert details**, provide the following details:
+ > [!NOTE]
+ > The alert severity for activity log alerts can't currently be configured by the user, and it always defaults to Sev4.
- - **Alert rule name**: The name for the new alert rule.
- - **Description**: The description for the new alert rule.
- - **Save alert to resource group**: Select the resource group where you want to save this new rule.
+11. Add an action group to the alert rule, either by selecting an existing action group or by creating a new one.
-5. Under **Action group**, from the drop-down menu, specify the action group that you want to assign to this new alert rule. Or, [create a new action group](./action-groups.md) and assign it to the new rule. To create a new group, select **+ New group**.
+12. Click **Done** to save the activity log alert rule.
+
+
+### Create an alert rule from the Azure Monitor activity log blade
-6. To enable the rules after you create them, select **Yes** for the **Enable rule upon creation** option.
-7. Select **Create alert rule**.
+An alternative way to create an activity log alert is to start with an activity log event that already occurred, via the [activity log in the Azure portal](../essentials/activity-log.md#view-the-activity-log).
- The new alert rule for the activity log is created, and a confirmation message appears in the upper-right corner of the window.
+1. In the **Azure Monitor - Activity log** blade, you can filter or find the desired event and then create an alert on future similar events by using the **Add activity log alert** button.
- You can enable, disable, edit, or delete a rule. Learn more about how to manage activity log rules.
+ :::image type="content" source="media/alerts-activity-log/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot of alert rule creation from an activity log event." lightbox="media/alerts-activity-log/create-alert-rule-from-activity-log-event-new.png":::
+2. The alert rule creation blade opens with the alert rule scope and condition already filled in according to the previously selected activity log event. You can modify the scope and condition at this stage if needed. Note that by default, the exact scope and condition for the new rule are copied as is from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name that initiated the event, are included by default in the new alert rule. If you want to make the alert rule more general, modify the scope and condition accordingly, as explained in steps 3-9 above.
-A simple analogy for understanding conditions on which alert rules can be created in an activity log is to explore or filter events via the [activity log in the Azure portal](../essentials/activity-log.md#view-the-activity-log). In the **Azure Monitor - Activity log** screen, you can filter or find the necessary event and then create an alert by using the **Add activity log alert** button. Then follow steps 4 through 7 as previously shown.
+3. Then follow steps 10 through 12 as previously shown.
- ![Add alert from activity log](media/alerts-activity-log/add-activity-log.png)
-
- ### View and manage in the Azure portal 1. In the Azure portal, select **Monitor** > **Alerts**. Select **Manage alert rules** in the upper-left corner of the window.
- ![Screenshot shows the activity log with the search box highlighted.](media/alerts-activity-log/manage-alert-rules.png)
-
+ :::image type="content" source="media/alerts-activity-log/manage-alert-rules-button-new.png" alt-text="Screenshot of manage alert rules button.":::
+
The list of available rules appears.
-2. Search for the activity log rule to modify.
+2. Filter or search for the activity log rule to modify.
- ![Search activity log alert rules](media/alerts-activity-log/searth-activity-log-rule-to-edit.png)
+ :::image type="content" source="media/alerts-activity-log/manage-alert-rules-new.png" alt-text="Screenshot of alert rules management blade." lightbox="media/alerts-activity-log/manage-alert-rules-new.png":::
You can use the available filters, _Subscription_, _Resource group_, _Resource_, _Signal type_, or _Status_, to find the activity log rule that you want to edit.
- > [!NOTE]
- >
- > You can edit only **Description**, **Target criteria**, and **Action groups**.
-
-3. Select the rule, and double-click to edit the rule options. Make the required changes, and then select **Save**.
-
- ![Manage alert rules](media/alerts-activity-log/activity-log-rule-edit-page.png)
-
-4. You can enable, disable, or delete a rule. Select the appropriate option at the top of the window after you select the rule as described in step 2.
-
+
+3. Select the rule, and double-click to edit the rule options. Make the required changes, and then select **Save**.
## Azure Resource Manager template To create an activity log alert rule by using an Azure Resource Manager template, you create a resource of the type `microsoft.insights/activityLogAlerts`. Then you fill in all related properties. Here's a template that creates an activity log alert rule:
To create an activity log alert rule by using an Azure Resource Manager template
] } ```
-The previous sample JSON can be saved as, for example, sampleActivityLogAlert.json for the purpose of this walk-through and can be deployed by using [Azure Resource Manager in the Azure portal](../../azure-resource-manager/templates/deploy-portal.md).
+The previous sample JSON can be saved as, for example, sampleActivityLogAlert.json and can be deployed by using [Azure Resource Manager in the Azure portal](../../azure-resource-manager/templates/deploy-portal.md).
- > [!NOTE]
- >
- > Notice that the highest-level activity log alerts can be defined is subscription.
- > Meaning there is no option to define alert on couple of subscriptions, therefore the definition should be alert per subscription.
+> [!NOTE]
+>
+> The highest level at which activity log alerts can be defined is the subscription.
+> You can't define a single alert rule that spans multiple subscriptions, so create one alert rule per subscription.
The following fields are the options that you can use in the Azure Resource Manager template for the condition fields. Notice that "Resource Health", "Advisor", and "Service Health" have extra properties fields for their special fields.
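As an illustration of how the `condition` property of a `microsoft.insights/activityLogAlerts` resource fits together, the sketch below composes the `allOf` list of field/equals pairs; the helper function and the sample values are hypothetical examples for illustration, not part of the template above:

```python
import json

def activity_log_condition(pairs):
    """Compose the 'condition' property of an activity log alert rule:
    every field/equals pair in 'allOf' must match for the rule to fire."""
    return {"allOf": [{"field": field, "equals": value} for field, value in pairs]}

condition = activity_log_condition([
    ("category", "Administrative"),
    ("operationName", "Microsoft.Compute/virtualMachines/delete"),
    ("status", "Succeeded"),
])
print(json.dumps(condition, indent=2))
```

The `category` field is always present; the remaining pairs narrow the rule to a specific operation and status.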
Dedicated Azure CLI commands under the set [az monitor activity-log alert](/cli/
To create a new activity log alert rule, use the following commands in this order:
-1. [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az_monitor_activity_log_alert_create): Create a new activity log alert rule resource.
-1. [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
-1. [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
+1. [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
+2. [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
+3. [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
-To retrieve one activity log alert rule resource, use the Azure CLI command [az monitor activity-log alert show](/cli/azure/monitor/activity-log/alert#az_monitor_activity_log_alert_show
-). To view all activity log alert rule resources in a resource group, use [az monitor activity-log alert list](/cli/azure/monitor/activity-log/alert#az_monitor_activity_log_alert_list).
-Activity log alert rule resources can be removed by using the Azure CLI command [az monitor activity-log alert delete](/cli/azure/monitor/activity-log/alert#az_monitor_activity_log_alert_delete).
+To retrieve one activity log alert rule resource, use the Azure CLI command [az monitor activity-log alert show](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-show
+). To view all activity log alert rule resources in a resource group, use [az monitor activity-log alert list](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-list).
+Activity log alert rule resources can be removed by using the Azure CLI command [az monitor activity-log alert delete](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-delete).
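The three-step create sequence above can be sketched as assembled command strings; the resource names below are placeholders, and the exact flag spellings should be verified against `az monitor activity-log alert --help`:

```python
def build_alert_commands(name, resource_group, scope, action_group):
    """Assemble the create / scope add / action-group add sequence
    described above (placeholder names; verify flags with --help)."""
    return [
        f"az monitor activity-log alert create --name {name} "
        f"--resource-group {resource_group} --condition category=Administrative",
        f"az monitor activity-log alert scope add --name {name} "
        f"--resource-group {resource_group} --scope {scope}",
        f"az monitor activity-log alert action-group add --name {name} "
        f"--resource-group {resource_group} --action-group {action_group}",
    ]

for cmd in build_alert_commands("myAlert", "myRG",
                                "/subscriptions/0000/resourceGroups/myRG",
                                "myActionGroup"):
    print(cmd)
```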
## Next steps - Learn about [webhook schema for activity logs](./activity-log-alerts-webhook.md). - Read an [overview of activity logs](./activity-log-alerts.md).-- Learn more about [action groups](./action-groups.md).
+- Learn more about [action groups](../platform/action-groups.md).
- Learn about [service health notifications](../../service-health/service-notifications.md).+
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 06/03/2021 Last updated : 08/15/2021 # Troubleshooting problems in Azure Monitor metric alerts
When a metric exhibits large fluctuation, Dynamic Thresholds will build a wider
2. The median values are close to zero 3. The metric exhibits an irregular behavior with high variance (there are spikes or dips in the data)
-When the lower bound has a negative value, this means that it's plausible for the metric to reach a zero value given the metric's irregular behavior. You may consider choosing a higher sensitivity or a larger *Aggregation granularity (Period)* to make the model less sensitive, or using the *Ignore data before* option to exclude a recent irregulaity from the historical data used to build the model.
+When the lower bound has a negative value, this means that it's plausible for the metric to reach a zero value given the metric's irregular behavior. You may consider choosing a higher sensitivity or a larger *Aggregation granularity (Period)* to make the model less sensitive, or using the *Ignore data before* option to exclude a recent irregularity from the historical data used to build the model.
+
+## The Dynamic Thresholds alert rule is too noisy (fires too much)
+To reduce the sensitivity of your Dynamic Thresholds alert rule, use one of the following options:
+1. Threshold sensitivity - Set the sensitivity to *Low* to make the rule more tolerant of deviations.
+2. Number of violations (under *Advanced settings*) - Configure the alert rule to trigger only if a number of deviations occur within a certain period of time. This will make the rule less susceptible to transient deviations.
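The number-of-violations option can be pictured as a sliding N-of-M check, sketched below as an illustration of the concept (not the service's actual implementation):

```python
def should_fire(violations, min_failures, window):
    """Fire only if at least `min_failures` of the most recent
    `window` evaluations were threshold violations (1 = violation)."""
    recent = violations[-window:]
    return sum(recent) >= min_failures

# A single transient spike doesn't fire with a 3-of-4 setting:
print(should_fire([0, 0, 1, 0], 3, 4))  # expect: False
print(should_fire([1, 1, 0, 1], 3, 4))  # expect: True
```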
++
+## The Dynamic Thresholds alert rule is too insensitive (doesn't fire)
+Sometimes, an alert rule won't trigger even when a high sensitivity is configured. This usually happens when the metric's distribution is highly irregular.
+Consider one of the following options:
+* Move to monitoring a complementary metric that's suitable for your scenario (if applicable). For example, check for changes in success rate, rather than failure rate.
+* Try selecting a different aggregation granularity (period).
+* Check if there was a drastic change in the metric behavior in the last 10 days (an outage). An abrupt change can impact the upper and lower thresholds calculated for the metric and make them broader. Wait for a few days until the outage is no longer taken into the thresholds calculation, or use the *Ignore data before* option (under *Advanced settings*).
+* If your data has weekly seasonality, but not enough history is available for the metric, the calculated thresholds can result in having broad upper and lower bounds. For example, the calculation can treat weekdays and weekends in the same way, and build wide borders that don't always fit the data. This should resolve itself once enough metric history is available, at which point the correct seasonality will be detected and the calculated thresholds will update accordingly.
+ ## Next steps
azure-monitor Automate Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/automate-custom-reports.md
Title: Automate custom reports with Azure Application Insights data
-description: Automate custom daily/weekly/monthly reports with Azure Application Insights data
+ Title: Automate custom reports with Application Insights data
+description: Automate custom daily/weekly/monthly reports with Azure Monitor Application Insights data
Last updated 05/20/2019
-# Automate custom reports with Azure Application Insights data
+# Automate custom reports with Application Insights data
Periodic reports help keep a team informed about how their business-critical services are doing. Developers, DevOps/SRE teams, and their managers can be productive with automated reports that reliably deliver insights without requiring everyone to sign in to the portal. Such reports can also help identify gradual increases in latencies, load, or failure rates that may not trigger any alert rules.
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/pricing.md
To [change the daily cap via Azure Resource Manager](./powershell.md), the prope
### Create alerts for the Daily Cap
-The Application Insights Daily Cap creates an event in the Azure activity log when the ingested data volumes reaches the warning level or the daily cap level. You can [create an alert based on these activity log events](../alerts/alerts-activity-log.md#create-with-the-azure-portal). The signal names for these events are:
+The Application Insights Daily Cap creates an event in the Azure activity log when the ingested data volume reaches the warning level or the daily cap level. You can [create an alert based on these activity log events](../alerts/alerts-activity-log.md#azure-portal). The signal names for these events are:
* Application Insights component daily cap warning threshold reached
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-custom-overview.md
During the public preview, the ability to publish custom metrics is available on
|Azure region |Regional endpoint prefix| ||| | All Public Cloud Regions | https://<azure_region_code>.monitoring.azure.com |
-| **Azure Government** | |
-| US Gov Arizona | https:\//usgovarizona.monitoring.azure.us |
-| **China** | |
-| China East 2 | https:\//chinaeast2.monitoring.azure.cn |
+ ## Latency and storage retention
azure-vmware Create Placement Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/create-placement-policy.md
+
+ Title: Create a placement policy
+description: Learn how to create a placement policy in Azure VMware Solution to control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal.
+ Last updated : 8/16/2021+
+#Customer intent: As an Azure service administrator, I want to control the placement of virtual machines on hosts within a cluster in my private cloud.
+++
+# Create a placement policy in Azure VMware Solution
+
+>[!IMPORTANT]
+>Azure VMware Solution placement policy (Preview) is currently in preview. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In Azure VMware Solution, clusters in a private cloud are a managed resource. As a result, the cloudadmin role can't make certain changes to the cluster from the vSphere Client, including the management of Distributed Resource Scheduler (DRS) rules.
+
+Placement policies in Azure VMware Solution let you control the placement of virtual machines (VMs) on hosts within a cluster through the Azure portal. When you create a placement policy, it includes a DRS rule in the specified vSphere cluster. It also includes additional logic for interoperability with Azure VMware Solution operations.
+
+A placement policy has at least five required components:
+
+- **Name** - Defines the name of the policy and is subject to the naming constraints of [Azure Resources](../azure-resource-manager/management/resource-name-rules.md).
+
+- **Type** - Defines the type of control you want to apply to the resources contained in the policy.
+
+- **Cluster** - Defines the cluster for the policy. The scope of a placement policy is a vSphere cluster, so only resources from the same cluster may be part of the same placement policy.
+
+- **State** - Defines if the policy is enabled or disabled. In certain scenarios, a policy might be disabled automatically when a conflicting rule gets created. For more information, see [Considerations](#considerations) below.
+
+- **Virtual machine** - Defines the VMs and hosts for the policy. Depending on the type of rule you create, your policy may require you to specify some number of VMs and hosts. For more information, see [Placement policy types](#placement-policy-types) below.
++
+## Prerequisites
+
+You must have _Contributor_ level access to the private cloud to manage placement policies.
+++
+## Placement policy types
+
+### VM-VM policies
+
+**VM-VM** policies specify whether selected VMs should run on the same host or be kept on separate hosts. In addition to choosing a name and cluster for the policy, **VM-VM** policies require that you select at least two VMs to assign. The assignment of hosts isn't required or permitted for this policy type.
+
+- **VM-VM Affinity** policies instruct DRS to try to keep the specified VMs together on the same host. This is useful, for example, for performance reasons.
+
+- **VM-VM Anti-Affinity** policies instruct DRS to try to keep the specified VMs apart from each other on separate hosts. This is useful in scenarios where a problem with one host shouldn't affect multiple VMs within the same policy.
++
+### VM-Host policies
+
+**VM-Host** policies specify if selected VMs can run on selected hosts. To avoid interference with platform-managed operations such as host maintenance mode and host replacement, **VM-Host** policies in Azure VMware Solution are always preferential (also known as "should" rules). Accordingly, **VM-Host** policies [may not be honored in certain scenarios](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.resmgmt.doc/GUID-793013E2-0976-43B7-9A00-340FA76859D0.html). For more information, see [Monitor the operation of a policy](#monitor-the-operation-of-a-policy) below.
+
+Certain platform operations dynamically update the list of hosts defined in **VM-Host** policies. For example, when you delete a host that is a member of a placement policy, the host is removed from the policy if more than one host is part of that policy. Also, if a host is part of a policy and needs to be replaced as part of a platform-managed operation, the policy is updated dynamically with the new host.
+
+In addition to choosing a name and cluster for the policy, a **VM-Host** policy requires that you select at least one VM and one host to assign to the policy.
+
+- **VM-Host Affinity** policies instruct DRS to try running the specified VMs on the hosts defined.
+
+- **VM-Host Anti-Affinity** policies instruct DRS to try running the specified VMs on hosts other than those defined.
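The per-type requirements described above (at least two VMs for a VM-VM policy; at least one VM and one host for a VM-Host policy) amount to a simple validation, sketched here for illustration only (not the actual service API):

```python
def validate_policy(policy_type, vms, hosts):
    """Check the minimum resource counts stated above for each policy type."""
    if policy_type in ("VM-VM Affinity", "VM-VM Anti-Affinity"):
        # At least two VMs; host assignment isn't permitted.
        return len(vms) >= 2 and not hosts
    if policy_type in ("VM-Host Affinity", "VM-Host Anti-Affinity"):
        # At least one VM and one host.
        return len(vms) >= 1 and len(hosts) >= 1
    return False

print(validate_policy("VM-VM Affinity", ["vm1", "vm2"], []))  # expect: True
print(validate_policy("VM-Host Affinity", ["vm1"], []))       # expect: False
```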
++++
+## Considerations
+++
+### Cluster scale in
+
+Azure VMware Solution attempts to prevent certain DRS rule violations from occurring when performing cluster scale-in operations.
+
+You can't remove the last host from a VM-Host policy. However, if you need to remove the last host from the policy, you can remediate this by adding another host to the policy before removing the host from the cluster. Alternatively, you can delete the placement policy before removing the host.
+
+You can't have a VM-VM Anti-Affinity policy with more VMs than the number of hosts in a cluster. If removing a host would leave the cluster with fewer hosts than the VMs in such a policy, you'll receive an error preventing the operation. You can remediate this by first removing VMs from the rule and then removing the host from the cluster.
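The scale-in constraint above reduces to a simple invariant: a host can be removed only if the remaining host count still covers every VM-VM anti-affinity policy. A minimal sketch, for illustration only:

```python
def can_remove_host(current_hosts, antiaffinity_vm_counts):
    """A host can leave the cluster only if every VM-VM anti-affinity
    policy still has at least as many hosts as VMs afterwards."""
    remaining = current_hosts - 1
    return all(remaining >= vms for vms in antiaffinity_vm_counts)

print(can_remove_host(4, [3, 2]))  # expect: True (3 hosts cover 3 VMs)
print(can_remove_host(3, [3]))     # expect: False (2 hosts < 3 VMs)
```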
++
+### Rule conflicts
+
+If DRS rule conflicts are detected when you create a VM-VM policy, it results in that policy being created in a disabled state following standard [VMware DRS Rule behavior](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.resmgmt.doc/GUID-69C738B6-5FC8-4189-9CB5-DD90A5A05979.html). For more information on viewing rule conflicts, see [Monitor the operation of a policy](#monitor-the-operation-of-a-policy) below.
+++
+## Create a placement policy
+
+Make sure to review the requirements for the [policy type](#placement-policy-types).
+
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Placement policies** > **+ Create**.
+
+ >[!TIP]
+ >You may also select the Cluster from the Placement Policy overview pane and then select **Create**.
+ >
+ >:::image type="content" source="media/placement-policies/create-placement-policy-cluster.png" alt-text="Screenshot showing an alternative option for creating a placement policy.":::
+++
+ :::image type="content" source="media/placement-policies/create-placement-policy.png" alt-text="Screenshot showing how to start the process to create a VM-VM placement policy." lightbox="media/placement-policies/create-placement-policy.png":::
++
+1. Provide a descriptive name, select the policy type, and select the cluster where the policy is created. Then select **Enable**.
+
+ >[!WARNING]
+ >If you disable the policy, then the policy and the underlying DRS rule are created, but the policy actions are ignored until you enable the policy.
+
+ :::image type="content" source="media/placement-policies/create-placement-policy-vm-vm-affinity-1.png" alt-text="Screenshot showing the placement policy options." lightbox="media/placement-policies/create-placement-policy-vm-vm-affinity-1.png":::
+
+1. If you selected **VM-Host affinity** or **VM-Host anti-affinity** as the type, your policy requires a host to be selected. Select **+ Add host** and the hosts to include in the policy. You can select multiple hosts.
+
+ :::image type="content" source="media/placement-policies/create-placement-policy-vm-host-affinity-2.png" alt-text="Screenshot showing the list of hosts to select.":::
+
+1. Select **+ Add virtual machine** and the VMs to include in the policy. You can select multiple VMs.
+
+ :::image type="content" source="media/placement-policies/create-placement-policy-vm-vm-affinity-2.png" alt-text="Screenshot showing the list of VMs to select.":::
+
+1. Once you've finished adding the VMs you want, select **Add virtual machine**.
+
+1. Select **Next: Review and create** to review your policy.
+
+1. Select **Create policy**. If you want to make changes, select **Back: Basics**.
+
+ :::image type="content" source="media/placement-policies/create-placement-policy-vm-vm-affinity-3.png" alt-text="Screenshot showing the placement policy settings before it's created.":::
+
+1. After the placement policy gets created, select **Refresh** to see it in the list.
+
+ :::image type="content" source="media/placement-policies/create-placement-policy-8.png" alt-text="Screenshot showing the placement policy as Enabled after it's created." lightbox="media/placement-policies/create-placement-policy-8.png":::
+++
+## Edit a placement policy
+
+You can change the state of a policy, add a new resource, or unassign an existing resource.
+
+### Change the policy state
+
+You can change the state of a policy to **Enabled** or **Disabled**.
+
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Placement policies**.
+
+1. For the policy you want to edit, select **More** (...) and then select **Edit**.
+
+ >[!TIP]
+ >You can disable a policy from the Placement policy overview by selecting **Disable** from the Settings drop-down. You can't enable a policy from the Settings drop-down.
+
+ :::image type="content" source="media/placement-policies/edit-placement-policy.png" alt-text="Screenshot showing how to edit a placement policy." lightbox="media/placement-policies/edit-placement-policy.png":::
+
+1. If the policy is enabled but you want to disable it, select **Disabled** and then select **Disabled** on the confirmation message. Otherwise, if the policy is disabled and you want to enable it, select **Enable**.
+
+1. Select **Review + update**.
+
+1. Review the changes and select **Update policy**. If you want to make changes, select **Back: Basics**.
++
+### Update the resources in a policy
+
+You can add new resources, such as a VM or a host, to a policy or remove existing ones.
+
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Placement policies**.
+
+1. For the policy you want to edit, select **More** (...) and then **Edit**.
+
+ :::image type="content" source="media/placement-policies/edit-placement-policy.png" alt-text="Screenshot showing how to edit the resources in a placement policy." lightbox="media/placement-policies/edit-placement-policy.png":::
+
+ - To remove an existing resource, select the resource or resources you want to remove. Select **Unassign**, which removes the resource or resources from the list.
+
+ :::image type="content" source="media/placement-policies/edit-placement-policy-unassign.png" alt-text="Screenshot showing how to remove an existing resource from a placement policy.":::
+
+ - To add a new resource, select **Edit virtual machine** or **Edit host**, select the resource you'd like to add, and then select **Save**.
+
+1. Select **Next : Review and update**.
+
+1. Review the changes and select **Update policy**. If you want to make changes, select **Back : Basics**.
+++
+## Delete a policy
+
+You can delete a placement policy and its corresponding DRS rule.
+
+1. In your Azure VMware Solution private cloud, under **Manage**, select **Placement policies**.
+
+1. For the policy you want to delete, select **More** (...) and then select **Delete**.
+
+ :::image type="content" source="media/placement-policies/delete-placement-policy.png" alt-text="Screenshot showing how to delete a placement policy." lightbox="media/placement-policies/delete-placement-policy.png":::
+
+1. Select **Delete** on the confirmation message.
+++
+## Monitor the operation of a policy
+
+Use the vSphere Client to monitor the operation of a placement policy's corresponding DRS rule.
+
+As a holder of the cloudadmin role, you can view, but not edit, the DRS rules created by a placement policy on the cluster's Configure tab under VM/Host Rules. This view includes additional information, such as whether the DRS rules are in a conflict state.
+
+Additionally, you can monitor various DRS rule operations, such as recommendations and faults, from the cluster's Monitor tab.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/index-policy.md
Previously updated : 05/25/2021 Last updated : 08/13/2021
Azure Cosmos DB supports two indexing modes:
- **None**: Indexing is disabled on the container. This is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete. > [!NOTE]
-> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query a Cosmos container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) (except if you are using an Azure Cosmos account in [serverless](serverless.md) mode which doesn't support lazy indexing).
+> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query a Cosmos container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmoslazyindexing@microsoft.com (except if you are using an Azure Cosmos account in [serverless](serverless.md) mode which doesn't support lazy indexing).
By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure CosmosDB to automatically index documents as they are written.
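For reference, a minimal indexing policy with `automatic` set to `true` and consistent indexing might look like the sketch below (a representative default-style policy; consult the indexing policy documentation for the full schema):

```python
import json

# Default-style indexing policy: automatic, consistent indexing of all
# paths, with the system _etag property excluded.
indexing_policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": "/\"_etag\"/?"}],
}
print(json.dumps(indexing_policy, indent=2))
```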
cosmos-db Create Mongodb Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/create-mongodb-dotnet.md
Title: Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK
+ Title: Build a web API using Azure Cosmos DB's API for MongoDB and .NET SDK
description: Presents a .NET code sample you can use to connect to and query using Azure Cosmos DB's API for MongoDB.
ms.devlang: dotnet Previously updated : 10/15/2020 Last updated : 8/13/2021
-# Quickstart: Build a .NET web app using Azure Cosmos DB's API for MongoDB
+# Quickstart: Build a .NET web API using Azure Cosmos DB's API for MongoDB
[!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)] > [!div class="op_single_selector"]
> * [Golang](create-mongodb-go.md) >
-Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can quickly create and query document, key/value and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Cosmos DB.
-
-This quickstart demonstrates how to create a Cosmos account with [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md). You'll then build and deploy a tasks list web app built using the [MongoDB .NET driver](https://docs.mongodb.com/ecosystem/drivers/csharp/).
+This quickstart demonstrates how to:
+1. Create an [Azure Cosmos DB API for MongoDB account](mongodb-introduction.md)
+2. Build a product catalog web API using the [MongoDB .NET driver](https://docs.mongodb.com/ecosystem/drivers/csharp/)
+3. Import sample data
## Prerequisites to run the sample app * [Visual Studio](https://www.visualstudio.com/downloads/)
-* An Azure Cosmos DB account.
+* [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0)
+* An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You can also [try Azure Cosmos DB](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments.
If you don't already have Visual Studio, download [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload installed with setup. - <a id="create-account"></a> ## Create a database account [!INCLUDE [cosmos-db-create-dbaccount](../includes/cosmos-db-create-dbaccount-mongodb.md)]
-The sample described in this article is compatible with MongoDB.Driver version 2.6.1.
+## Learn the object model
+
+Before you continue building the application, let's look into the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
-## Clone the sample app
+* Azure Cosmos DB API for MongoDB account
+* Databases
+* Collections
+* Documents
-Run the following commands in a GitHub enabled command windows such as [Git bash](https://git-scm.com/downloads):
+To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article.
+
+## Install the sample app template
+
+This sample is a dotnet project template, which can be installed to create a local copy. Run the following commands in a command window:
```bash
-mkdir "C:\git-samples"
-cd "C:\git-samples"
-git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-dotnet-getting-started.git
+mkdir "C:\cosmos-samples"
+cd "C:\cosmos-samples"
+dotnet new -i Microsoft.Azure.Cosmos.Templates
+dotnet new cosmosmongo-webapi
``` The preceding commands:
-1. Create the *C:\git-samples* directory for the sample. Chose a folder appropriate for your operating system.
-1. Change your current directory to the *C:\git-samples* folder.
-1. Clone the sample into the *C:\git-samples* folder.
+1. Create the *C:\cosmos-samples* directory for the sample. Choose a folder appropriate for your operating system.
+1. Change your current directory to the *C:\cosmos-samples* folder.
+1. Install the project template, making it available globally from the dotnet CLI.
+1. Create a local sample app using the project template.
-If you don't wish to use git, you can also [download the project as a ZIP file](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-dotnet-getting-started/archive/master.zip).
+If you don't wish to use the dotnet CLI, you can also [download the project templates as a ZIP file](https://github.com/Azure/azure-cosmos-dotnet-templates). This sample is in the `Templates/APIForMongoDBQuickstart-WebAPI` folder.
## Review the code
-1. In Visual Studio, right-click on the project in **Solution Explorer** and then click **Manage NuGet Packages**.
-1. In the NuGet **Browse** box, type *MongoDB.Driver*.
-1. From the results, install the **MongoDB.Driver** library. This installs the MongoDB.Driver package as well as all dependencies.
+The following steps are optional. If you're interested in learning how the database resources are created in the code, review the following snippets. Otherwise, skip ahead to [Update the application settings](#update-the-application-settings).
-The following steps are optional. If you're interested in learning how the database resources are created in the code, review the following snippets. Otherwise, skip ahead to [Update your connection string](#update-the-connection-string).
+### Setup connection
-The following snippets are from the *DAL/Dal.cs* file.
+The following snippet is from the *Services/MongoService.cs* file.
-* The following code initializes the client:
+* The following class represents the client and is injected by the .NET framework into services that consume it:
```cs
- MongoClientSettings settings = new MongoClientSettings();
- settings.Server = new MongoServerAddress(host, 10255);
- settings.UseSsl = true;
- settings.SslSettings = new SslSettings();
- settings.SslSettings.EnabledSslProtocols = SslProtocols.Tls12;
-
- MongoIdentity identity = new MongoInternalIdentity(dbName, userName);
- MongoIdentityEvidence evidence = new PasswordEvidence(password);
+ public class MongoService
+ {
+ private static MongoClient _client;
- settings.Credential = new MongoCredential("SCRAM-SHA-1", identity, evidence);
+ public MongoService(IDatabaseSettings settings)
+ {
+ _client = new MongoClient(settings.MongoConnectionString);
+ }
- MongoClient client = new MongoClient(settings);
+ public MongoClient GetClient()
+ {
+ return _client;
+ }
+ }
```
-* The following code retrieves the database and the collection:
+### Setup product catalog data service
- ```cs
- private string dbName = "Tasks";
- private string collectionName = "TasksList";
+The following snippets are from the *Services/ProductService.cs* file.
- var database = client.GetDatabase(dbName);
- var todoTaskCollection = database.GetCollection<MyTask>(collectionName);
+* The following code retrieves the database and the collection and will create them if they don't already exist:
+
+ ```csharp
+ private readonly IMongoCollection<Product> _products;
+
+ public ProductService(MongoService mongo, IDatabaseSettings settings)
+ {
+ var db = mongo.GetClient().GetDatabase(settings.DatabaseName);
+ _products = db.GetCollection<Product>(settings.ProductCollectionName);
+ }
```
-* The following code retrieves all documents:
+* The following code retrieves a document by sku, a unique product identifier:
- ```cs
- collection.Find(new BsonDocument()).ToList();
+ ```csharp
+ public Task<Product> GetBySkuAsync(string sku)
+ {
+ return _products.Find(p => p.Sku == sku).FirstOrDefaultAsync();
+ }
```
-The following code creates a task and insert it into the collection:
+* The following code creates a product and inserts it into the collection:
- ```csharp
- public void CreateTask(MyTask task)
+ ```csharp
+ public Task CreateAsync(Product product)
{
- var collection = GetTasksCollectionForEdit();
- try
- {
- collection.InsertOne(task);
- }
- catch (MongoCommandException ex)
- {
- string msg = ex.Message;
- }
+ return _products.InsertOneAsync(product);
}
- ```
- Similarly, you can update and delete documents by using the [collection.UpdateOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.updateOne/index.html) and [collection.DeleteOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.deleteOne/index.html) methods.
+ ```
-## Update the connection string
+* The following code finds and updates a product:
-From the Azure portal copy the connection string information:
+ ```csharp
+ public Task<Product> UpdateAsync(Product update)
+ {
+ return _products.FindOneAndReplaceAsync(
+ Builders<Product>.Filter.Eq(p => p.Sku, update.Sku),
+ update,
+ new FindOneAndReplaceOptions<Product> { ReturnDocument = ReturnDocument.After });
+ }
+ ```
-1. In the [Azure portal](https://portal.azure.com/), select your Cosmos account, in the left navigation click **Connection String**, and then click **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the Username, Password, and Host into the Dal.cs file in the next step.
+ Similarly, you can delete documents by using the [collection.DeleteOne()](https://docs.mongodb.com/stitch/mongodb/actions/collection.deleteOne/index.html) method.
-2. Open the *DAL/Dal.cs* file.
+## Update the application settings
-3. Copy the **username** value from the portal (using the copy button) and make it the value of the **username** in the **Dal.cs** file.
+From the Azure portal, copy the connection string information:
-4. Copy the **host** value from the portal and make it the value of the **host** in the **Dal.cs** file.
+1. In the [Azure portal](https://portal.azure.com/), select your Cosmos DB account, in the left navigation select **Connection String**, and then select **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the primary connection string into the appsettings.json file in the next step.
-5. Copy the **password** value from the portal and make it the value of the **password** in your **Dal.cs** file.
+2. Open the *appsettings.json* file.
+
+3. Copy the **primary connection string** value from the portal (using the copy button) and make it the value of the **DatabaseSettings.MongoConnectionString** property in the **appsettings.json** file.
+
+4. Review the **database name** value in the **DatabaseSettings.DatabaseName** property in the **appsettings.json** file.
+
+5. Review the **collection name** value in the **DatabaseSettings.ProductCollectionName** property in the **appsettings.json** file.
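Based on the property names referenced in the steps above, the relevant section of *appsettings.json* has roughly the following shape (a sketch; the placeholder must be replaced with your own primary connection string):

```json
{
  "DatabaseSettings": {
    "MongoConnectionString": "<primary connection string from the portal>",
    "DatabaseName": "cosmicworks",
    "ProductCollectionName": "products"
  }
}
```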
-<!-- TODO Store PW correctly-->
> [!WARNING] > Never check passwords or other sensitive data into source code. You've now updated your app with all the info it needs to communicate with Cosmos DB.
-## Run the web app
+## Load sample data
+
+[Download](https://www.mongodb.com/try/download/database-tools) [mongoimport](https://docs.mongodb.com/database-tools/mongoimport/#mongodb-binary-bin.mongoimport), a CLI tool that easily imports small amounts of JSON, CSV, or TSV data. We will use mongoimport to load the sample product data provided in the `Data` folder of this project.
+
+From the Azure portal, copy the connection information and enter it in the command below:
+
+```bash
+mongoimport --host <HOST>:<PORT> -u <USERNAME> -p <PASSWORD> --db cosmicworks --collection products --ssl --jsonArray --writeConcern="{w:0}" --file Data/products.json
+```
+
+1. In the [Azure portal](https://portal.azure.com/), select your Azure Cosmos DB API for MongoDB account, in the left navigation select **Connection String**, and then select **Read-write Keys**.
+
+1. Copy the **HOST** value from the portal (using the copy button) and enter it in place of **<HOST>**.
+
+1. Copy the **PORT** value from the portal (using the copy button) and enter it in place of **<PORT>**.
+
+1. Copy the **USERNAME** value from the portal (using the copy button) and enter it in place of **<USERNAME>**.
+
+1. Copy the **PASSWORD** value from the portal (using the copy button) and enter it in place of **<PASSWORD>**.
+
+1. Review the **database name** value and update it if you created something other than `cosmicworks`.
+
+1. Review the **collection name** value and update it if you created something other than `products`.
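Assembled with hypothetical values (an account named `mycosmosaccount`; your actual host, username, and password will differ), the finished command looks like this:

```bash
# Hypothetical account name shown; copy the real HOST, PORT, USERNAME, and
# PASSWORD values from the Connection String blade in the Azure portal.
mongoimport --host mycosmosaccount.mongo.cosmos.azure.com:10255 \
  -u mycosmosaccount -p "<PASSWORD>" \
  --db cosmicworks --collection products \
  --ssl --jsonArray --writeConcern="{w:0}" --file Data/products.json
```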
+
+> [!Note]
+> If you would like to skip this step, you can create documents with the correct schema by using the POST endpoint provided as part of this web API project.
+
+## Run the app
+
+From Visual Studio, press CTRL+F5 to run the app. The default browser is launched with the app.
+
+If you prefer the CLI, run the following command in a command window to start the sample app. This command will also install project dependencies and build the project, but will not automatically launch the browser.
+
+```bash
+dotnet run
+```
+
+After the application is running, navigate to [https://localhost:5001/swagger/index.html](https://localhost:5001/swagger/index.html) to see the [Swagger documentation](https://swagger.io/) for the web API and to submit sample requests.
-1. Click CTRL + F5 to run the app. The default browser is launched with the app.
-1. Click **Create** in the browser and create a few new tasks in your task list app.
+Select the API you would like to test and select "Try it out".
-<!--
-## Deploy the app to Azure
-1. In VS, right click .. publish
-2. This is so easy, why is this critical step missed?
>
-## Review SLAs in the Azure portal
+Enter any necessary parameters and select "Execute."
## Clean up resources
You've now updated your app with all the info it needs to communicate with Cosmo
## Next steps
-In this quickstart, you've learned how to create a Cosmos account, create a collection and run a console app. You can now import additional data to your Cosmos database.
+In this quickstart, you've learned how to create an API for MongoDB account, create a database and a collection with code, and run a web API app. You can now import additional data to your database.
> [!div class="nextstepaction"] > [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
databox-online Azure Stack Edge Gpu Manage Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-storage-accounts.md
Title: Azure Stack Edge Pro GPU storage account management | Microsoft Docs
-description: Describes how to use the Azure portal to manage storage account on your Azure Stack Edge Pro.
+description: Describes how to use the Azure portal to manage storage accounts on your Azure Stack Edge Pro GPU.
Previously updated : 03/12/2021 Last updated : 08/13/2021
-# Use the Azure portal to manage Edge storage accounts on your Azure Stack Edge Pro
+# Use the Azure portal to manage Edge storage accounts on your Azure Stack Edge Pro GPU
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes how to manage Edge storage accounts on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add or delete Edge storage accounts on your device.
+This article describes how to manage Edge storage accounts on your Azure Stack Edge Pro GPU. You can manage the Azure Stack Edge Pro GPU via the Azure portal or via the local web UI. Use the Azure portal to add or delete Edge storage accounts on your device.
## About Edge storage accounts
-You can transfer data from your Azure Stack Edge Pro device via the SMB, NFS, or REST protocols. To transfer data to Blob storage using the REST APIs, you need to create Edge storage accounts on your Azure Stack Edge Pro.
+You can transfer data from your Azure Stack Edge Pro GPU device via the SMB, NFS, or REST protocols. To transfer data to Blob storage using the REST APIs, you need to create Edge storage accounts on your device.
-The Edge storage accounts that you add on the Azure Stack Edge Pro device are mapped to Azure Storage accounts. Any data written to the Edge storage accounts is automatically pushed to the cloud.
+The Edge storage accounts that you add on the Azure Stack Edge Pro GPU device are mapped to Azure Storage accounts. Any data written to the Edge storage accounts is automatically pushed to the cloud.
A diagram detailing the two types of accounts and how the data flows from each of these accounts to Azure is shown below:
You can now select a container from this list and select **+ Delete container**
## Sync storage keys
-You can synchronize the access keys for the Edge (local) storage accounts on your device.
+Each Azure Storage account has two 512-bit storage access keys that are used for authentication when the storage account is accessed. One of these two keys must be supplied when your Azure Stack Edge device accesses your cloud storage service provider (in this case, Azure).
-To sync the storage account access key, take the following steps:
+An Azure administrator can regenerate or change the access key by directly accessing the storage account (via the Azure Storage service). The Azure Stack Edge service and the device do not see this change automatically.
+
+To inform Azure Stack Edge of the change, you will need to access the Azure Stack Edge service, access the storage account, and then synchronize the access key. The service then gets the latest key, encrypts it, and sends the encrypted key to the device. When the device gets the new key, it can continue to transfer data to the Azure Storage account.
+
+To provide the new keys to the device, access the Azure portal and synchronize storage access keys. Take the following steps:
1. In your resource, select the storage account that you want to manage. From the top command bar, select **Sync storage key**.
dedicated-hsm Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/networking.md
Before provisioning a Dedicated HSM device, customers will first need to create
### Subnets Subnets segment the virtual network into separate address spaces usable by the Azure resources you place in them. Dedicated HSMs are deployed into a subnet in the virtual network. Each Dedicated HSM device that is deployed in the customer's subnet will receive a private IP address from this subnet.
-The subnet in which the HSM device is deployed needs to be explicitly delegated to the service: Microsoft.HardwareSecurityModules/dedicatedHSMs. This grants certain permissions to the HSM service for deployment into the subnet. Delegation to Dedicated HSMs imposes certain policy restrictions on the subnet. Network Security Groups (NSGs) and User-Defined Routes (UDRs) are currently not supported on delegated subnets. As a result, once a subnet is delegated to dedicated HSMs, it can only be used to deploy HSM resources. Deployment of any other customer resources into the subnet will fail.
+The subnet in which the HSM device is deployed needs to be explicitly delegated to the service: Microsoft.HardwareSecurityModules/dedicatedHSMs. This grants certain permissions to the HSM service for deployment into the subnet. Delegation to Dedicated HSMs imposes certain policy restrictions on the subnet. Network Security Groups (NSGs) and User-Defined Routes (UDRs) are currently not supported on delegated subnets. As a result, once a subnet is delegated to dedicated HSMs, it can only be used to deploy HSM resources. Deployment of any other customer resources into the subnet will fail. There is no requirement on how large or small the subnet for Dedicated HSM should be; however, each HSM device consumes one private IP address, so make sure the subnet is large enough to accommodate as many HSM devices as are required for deployment.
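As a sketch (the resource names here are hypothetical), creating a subnet delegated to the Dedicated HSM service with the Azure CLI looks like this:

```bash
# Hypothetical resource names; creates a subnet delegated to
# Microsoft.HardwareSecurityModules/dedicatedHSMs so that HSM
# devices can be deployed into it.
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name MyHSMSubnet \
  --address-prefixes 10.2.0.0/24 \
  --delegations Microsoft.HardwareSecurityModules/dedicatedHSMs
```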
### ExpressRoute gateway
There are a couple of architectures you can use as an alternative to Global VNet
- [High availability](high-availability.md) - [Physical Security](physical-security.md) - [Monitoring](monitoring.md)-- [Deployment architecture](deployment-architecture.md)
+- [Deployment architecture](deployment-architecture.md)
defender-for-iot Quickstart Building The Defender Micro Agent From Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-building-the-defender-micro-agent-from-source.md
- Title: 'Quickstart: Build the Defender micro agent from source code (Preview)'
-description: In this quickstart, learn about the Micro Agent which includes an infrastructure that can be used to customize your distribution.
Previously updated : 05/10/2021---
-# Quickstart: Build the Defender micro agent from source code (Preview)
-
-The Micro Agent includes an infrastructure, which can be used to customize your distribution. To see a list of the available configuration parameters look at the `configs/LINUX_BASE.conf` file.
-
-For a single distribution, modify the base `.conf` file.
-
-If you require more than one distribution, you can inherit from the base configuration and override its values.
-
-To override the values:
-
-1. Create a new `.dist` file.
-
-1. Add `CONF_DEFINE_BASE(${g_plat_config_path} LINUX_BASE.conf)` to the top.
-
-1. Define new values to whatever you require, example:
-
- `set(ASC_LOW_PRIORITY_INTERVAL 60*60*24)`
-
-1. Give the `.dist` file a reference when building. For example,
-
- `cmake -DCMAKE_BUILD_TYPE=Debug -Dlog_level=DEBUG -Dlog_level_cmdline:BOOL=ON -DIOT_SECURITY_MODULE_DIST_TARGET=UBUNTU1804 ..`
-
-## Prerequisites
-
-1. Contact your account manager to ask for access to Defender for IoT source code.
-
-1. Clone, or extract the source code to a folder on the disk.
-
-1. Navigate into that directory.
-
-1. Pull the submodules using the following code:
-
- ```bash
- git submodule update --init
- ```
-
-1. Next, pull the submodules for the Azure IoT SDK with the following code:
-
- ```bash
- git -C deps/azure-iot-sdk-c/ submodule update --init
- ```
-
-
-1. Add an execution permission, and run the developer environment setup script:
-
- ```bash
- chmod +x scripts/install_development_environment.sh && ./scripts/install_development_environment.sh
- ```
-
-1. Create a directory for the build outputs:
-
- ```bash
- mkdir cmake
- ```
-
-1. (Optional) Download and install [VSCode](https://code.visualstudio.com/download )
-
-1. (Optional) Install the [C/C++ extension](https://code.visualstudio.com/docs/languages/cpp) for VSCode.
-
-## Baseline Configuration signing
-
-The agent verifies the authenticity of configuration files that are placed on the disk to mitigate tampering, by default.
-
-You can stop this process by defining the preprocessor flag `ASC_BASELINE_CONF_SIGN_CHECK_DISABLE`.
-
-We don't recommend turning off the signature check for production environments.
-
-If you require a different configuration for production scenarios, contact the Defender for IoT team.
-
-## Building the Defender IoT Micro Agent
-
-1. Open the directory with VSCode
-
-1. Navigate to the `cmake` directory.
-
-1. Run the following command:
-
- ```bash
- cmake -DCMAKE_BUILD_TYPE=Debug -Dlog_level=DEBUG -Dlog_level_cmdline:BOOL=ON -DIOT_SECURITY_MODULE_DIST_TARGET=<the appropriate distro configuration file name> ..
-
- cmake --build . -- -j${env:NPROC}
- ```
-
- For example:
-
- ```bash
- cmake -DCMAKE_BUILD_TYPE=Debug -Dlog_level=DEBUG -Dlog_level_cmdline:BOOL=ON -DIOT_SECURITY_MODULE_DIST_TARGET=UBUNTU1804 ..
-
- cmake --build . -- -j${env:NPROC}
- ```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Configure your Azure Defender for IoT solution](quickstart-configure-your-solution.md).
defender-for-iot Quickstart Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-standalone-agent-binary-installation.md
sudo apt-get install defender-iot-micro-agent=<version>
## Next steps > [!div class="nextstepaction"]
-> [Building the Defender micro agent from source code](quickstart-building-the-defender-micro-agent-from-source.md)
+> [Quickstart: Create a Defender IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md)
iot-central Architecture Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-digital-distribution-center.md
- Title: Architecture IoT Central Digital Distribution Center | Microsoft Docs
-description: An architecture of Digital Distribution Center application template for IoT Central
----- Previously updated : 10/20/2019--
-# Architecture of IoT Central digital distribution center application template
---
-Partners and customers can use the app template & following guidance to develop end to end **digital distribution center** solutions.
-
-> [!div class="mx-imgBorder"]
-> ![digital distribution center](./media/concept-ddc-architecture/digital-distribution-center-architecture.png)
-
-1. Set of IoT sensors sending telemetry data to a gateway device
-2. Gateway devices sending telemetry and aggregated insights to IoT Central
-3. Data is routed to the desired Azure service for manipulation
-4. Azure services like ASA or Azure Functions can be used to reformat data streams and send to the desired storage accounts
-5. Processed data is stored in hot storage for near real-time actions or cold storage for additional insight enhancements that is based on ML or batch analysis.
-6. Logic Apps can be used to power various business workflows in end-user business applications
-
-## Details
-Following section outlines each part of the conceptual architecture
-
-## Video cameras
-Video cameras are the primary sensors in this digitally connected enterprise-scale ecosystem. Advancements in machine learning and artificial intelligence that allow video to be turned into structured data and process it at edge before sending to cloud. We can use IP cameras to capture images, compress them on the camera, and then send the compressed data over edge compute for video analytics pipeline or use GigE vision cameras to capture images on the sensor and then send these images directly to the Azure IoT Edge, which then compresses before processing in video analytics pipeline.
-
-## Azure IoT Edge Gateway
-The "cameras-as-sensors" and edge workloads are managed locally by Azure IoT Edge and the camera stream is processed by analytics pipeline. The video analytics processing pipeline at Azure IoT Edge brings many benefits, including decreased response time, low-bandwidth consumption, which results in low latency for rapid data processing. Only the most essential metadata, insights, or actions are sent to the cloud for further action or investigation.
-
-## Device Management with IoT Central
-Azure IoT Central is a solution development platform that simplifies IoT device & Azure IoT Edge gateway connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers & partners can build an end to end enterprise solutions to achieve a digital feedback loop in distribution centers.
-
-## Business Insights and actions using data egress
-IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights that are based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. It can be achieved through webhook, Service Bus, event hub, or blob storage to build, train, and deploy machine learning models and further enrich insights.
-
-## Next steps
-* Learn how to deploy [digital distribution center template](./tutorial-iot-central-digital-distribution-center.md)
-* Learn more about [IoT Central retail templates](./overview-iot-central-retail.md)
-* Learn more about IoT Central refer to [IoT Central overview](../core/overview-iot-central.md)
iot-central Architecture Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-micro-fulfillment-center.md
- Title: Azure IoT Central micro-fulfillment center | Microsoft Docs
-description: Learn to build a micro-fulfillment center application using our Micro-fulfillment center application template in IoT Central
-- Previously updated : 10/13/2019-------
-# Micro-fulfillment center architecture
-
-Micro-fulfillment center solutions allow you to digitally connect, monitor, and manage all aspects of a fully automated fulfillment center to reduce costs by eliminating downtime while increasing security and overall efficiency. These solutions can be built by using one of the application templates within IoT Central and the architecture below as guidance.
-
-![Azure IoT Central Store Analytics](./media/architecture/micro-fulfillment-center-architecture-frame.png)
-
-1. Set of IoT sensors sending telemetry data to a gateway device
-2. Gateway devices sending telemetry and aggregated insights to IoT Central
-3. Continuous data export to the desired Azure service for manipulation
-4. Data can be structured in the desired format and sent to a storage service
-5. Business applications can query data and generate insights that power retail operations
-
-Let's take a look at key components that generally play a part in a micro-fulfillment center solution.
-
-## Robotic carriers
-
-A micro-fulfillment center solution will likely have a large set of robotic carriers generating different kinds of telemetry signals. These signals can be ingested by a gateway device, aggregated, and then sent to IoT Central as reflected by the left side of the architecture diagram.
-
-## Condition monitoring sensors
-
-An IoT solution starts with a set of sensors capturing meaningful signals from within your fulfillment center. It's reflected by different kinds of sensors on the far left of the architecture diagram above.
-
-## Gateway devices
-
-Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
-
-## IoT Central application
-
-The Azure IoT Central application ingests data from different kinds of IoT sensors, robots, as well gateway devices within the fulfillment center environment and generates a set of meaningful insights.
-
-Azure IoT Central also provides a tailored experience to the store operator enabling them to remotely monitor and manage the infrastructure devices.
-
-## Data transform
-The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a-Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
-
-## Business application
-The IoT data can be used to power different kinds of business applications deployed within a retail environment. A fulfillment center manager or employee can use these applications to visualize business insights and take meaningful actions in real time. To learn how to build a real-time Power BI dashboard for your retail team, follow the [tutorial](./tutorial-in-store-analytics-create-app.md).
-
-## Next steps
-* Get started with the [Micro-fulfillment Center](https://aka.ms/checkouttemplate) application template.
-* Take a look at the [tutorial](https://aka.ms/mfc-tutorial) that walks you through how to build a solution using the Micro-fulfillment Center app template.
iot-central Architecture Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-smart-inventory-management.md
- Title: Architecture IoT Smart Inventory Management | Microsoft Docs
-description: An architecture of IoT Smart Inventory Management template for IoT Central
----- Previously updated : 10/20/2019---
-# Architecture of IoT Central smart inventory management application template
-
-Partners and customers can use the app template and following guidance to develop end to end **smart inventory management** solutions.
-
-> [!div class="mx-imgBorder"]
-> ![smart inventory management](./media/concept-smart-inventory-mgmt-architecture/smart-inventory-management-architecture.png)
-
-1. Set of IoT sensors sending telemetry data to a gateway device
-2. Gateway devices sending telemetry and aggregated insights to IoT Central
-3. Data is routed to the desired Azure service for manipulation
-4. Azure services like Azure Stream Analytics (ASA) or Azure Functions can be used to reformat data streams and send the data to the desired storage accounts
-5. Processed data is stored in hot storage for near real-time actions, or in cold storage for additional insight enhancements based on machine learning (ML) or batch analysis.
-6. Logic Apps can be used to power various business workflows in end-user business applications
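To make steps 4 and 5 concrete, the reformatting stage (whether implemented in Azure Stream Analytics or an Azure Function) typically flattens a nested gateway message into a row before it lands in storage. The following Python sketch is purely illustrative; the field names and message shape are assumptions, not the template's actual schema:

```python
import json

def reformat_record(raw: dict) -> dict:
    """Flatten a nested gateway telemetry message into a storage-friendly row.

    The input shape here (a "device" envelope plus a "telemetry" map) is a
    hypothetical stand-in for whatever the gateway actually emits.
    """
    return {
        "deviceId": raw["device"]["id"],
        "timestamp": raw["enqueuedTime"],
        "temperature": raw["telemetry"].get("temperature"),
        "humidity": raw["telemetry"].get("humidity"),
    }

raw = {
    "device": {"id": "gateway-01"},
    "enqueuedTime": "2019-10-20T12:00:00Z",
    "telemetry": {"temperature": 21.5, "humidity": 48.0},
}
print(json.dumps(reformat_record(raw)))
```

A hot path might write such rows to a real-time store, while the cold path batches them for ML or batch analysis.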
-
-## Details
-The following sections outline each part of the conceptual architecture, starting with telemetry ingestion from radio-frequency identification (RFID) and Bluetooth Low Energy (BLE) tags.
-
-## RFID tags
-RFID tags transmit data about an item through radio waves. RFID tags typically don't have a battery unless specified; they receive energy from the radio waves generated by the reader and transmit a signal back to the RFID reader.
-
-## BLE tags
-A BLE beacon broadcasts packets of data at regular intervals. BLE readers, or services installed on smartphones, detect the beacon data and then transmit it to the cloud.
-
-## RFID & BLE readers
-An RFID reader converts the radio waves to a more usable form of data. Information collected from the tags is then stored on a local edge server or sent to the cloud using JSON-RPC 2.0 over MQTT.
-BLE readers, also known as access points (APs), are similar to RFID readers. They detect nearby Bluetooth signals and relay their messages to a local Azure IoT Edge device or to the cloud using JSON-RPC 2.0 over MQTT.
-Many readers can read both RFID and beacon signals, and provide additional sensor capabilities related to temperature, humidity, accelerometers, and gyroscopes.
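As an illustration of the JSON-RPC 2.0 over MQTT pattern, a reader might publish a tag-read notification shaped like the sketch below. The method name and parameter fields are hypothetical (each reader's firmware defines its own RPC surface); only the `"jsonrpc": "2.0"` envelope comes from the JSON-RPC 2.0 specification:

```python
import json

def tag_read_notification(reader_id: str, tag_id: str, rssi: int) -> str:
    """Build a JSON-RPC 2.0 notification for a single tag read.

    "tag_read" and the params keys are illustrative, not a real reader API.
    """
    payload = {
        "jsonrpc": "2.0",        # required version marker per the JSON-RPC 2.0 spec
        "method": "tag_read",    # hypothetical method name
        "params": {"readerId": reader_id, "tagId": tag_id, "rssi": rssi},
        # no "id" field: a notification expects no response from the server
    }
    return json.dumps(payload)

message = tag_read_notification("reader-07", "E200-341201", -52)
# An MQTT client (for example, paho-mqtt) would publish `message`
# to a topic such as "readers/reader-07/events".
```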
-
-## Azure IoT Edge gateway
-An Azure IoT Edge gateway provides a place to preprocess data locally before sending it on to the cloud. You can also deploy cloud workloads, such as artificial intelligence, Azure and third-party services, and business logic, using standard containers.
-
-## Device management with IoT Central
-Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related development. Customers and partners can build end-to-end enterprise solutions to achieve a digital feedback loop in inventory management.
-
-## Business insights & actions using data egress
-The IoT Central platform provides rich extensibility options through continuous data export (CDE) and APIs. Business insights, based on raw or processed telemetry, are typically exported to a preferred line-of-business application. You can use webhooks, Service Bus, Event Hubs, or Blob storage to build, train, and deploy machine learning models and further enrich insights.
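For example, a webhook endpoint receiving exported telemetry might unpack humidity readings like this. This is a minimal sketch under a simplified payload shape (a JSON array of messages with `deviceId` and `telemetry` keys), which is an assumption, not the exact export format:

```python
import json

def extract_humidity_readings(body: str) -> list:
    """Pull (deviceId, humidity) pairs out of an exported message batch.

    Skips messages that don't report humidity at all.
    """
    readings = []
    for message in json.loads(body):
        humidity = message.get("telemetry", {}).get("humidity")
        if humidity is not None:
            readings.append((message["deviceId"], humidity))
    return readings

body = json.dumps([
    {"deviceId": "ruuvitag-01", "telemetry": {"humidity": 48.2}},
    {"deviceId": "ruuvitag-02", "telemetry": {"temperature": 21.0}},
])
print(extract_humidity_readings(body))  # → [('ruuvitag-01', 48.2)]
```

The same unpacking logic applies whichever egress channel (webhook, Service Bus, Event Hubs, or Blob storage) delivers the batch.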
-
-## Next steps
-* Learn how to deploy [smart inventory management template](./tutorial-iot-central-smart-inventory-management.md)
-* Learn more about [IoT Central retail templates](./overview-iot-central-retail.md)
-* To learn more about IoT Central, see the [IoT Central overview](../core/overview-iot-central.md)
iot-central Architecture Video Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-video-analytics.md
- Title: Azure IoT Central video analytics object and motion detection | Microsoft Docs
-description: Learn to build an IoT Central application using the video analytics - object and motion detection application template in IoT Central. This template uses live video analytics and connected cameras.
- Previously updated: 07/27/2020
-# Video analytics - object and motion detection application architecture
-
-The **Video analytics - object and motion detection** application template lets you build IoT solutions that include live video analytics capabilities.
--
-The key components of the video analytics solution include:
-
-## Live video analytics (LVA)
-
-LVA provides a platform for you to build intelligent video applications that span the edge and the cloud. The platform offers the capability to capture, record, and analyze live video, and to publish the results, which could be video or video analytics, to Azure services running in the cloud or at the edge. The platform can be used to enhance IoT solutions with video analytics.
-
-For more information, see [Live Video Analytics](https://github.com/Azure/live-video-analytics) on GitHub.
-
-## IoT Edge LVA gateway module
-
-The IoT Edge LVA gateway module instantiates cameras as new devices and connects them directly to IoT Central using the IoT device client SDK.
-
-In this reference implementation, devices connect to the solution using symmetric keys from the edge. For more information about device connectivity, see [Get connected to Azure IoT Central](../core/concepts-get-connected.md)
-
-## Media graph
-
-Media graph lets you define where to capture the media from, how to process it, and where to deliver the results. You configure media graph by connecting components, or nodes, in the desired manner. For more information, see [Media Graph](https://github.com/Azure/live-video-analytics/tree/master/MediaGraph) on GitHub.
-
-## Next steps
-
-The suggested next step is [How to deploy an IoT Central application using the video analytics - object and motion detection application template](tutorial-video-analytics-deploy.md).
iot-central Store Analytics Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/store-analytics-architecture.md
- Title: Store Analytics Architecture
-description: Learn to build an in-store analytics application using Checkout application template in IoT Central
- Previously updated: 10/13/2019
-# In-store analytics architecture
--
-In-store analytics solutions allow you to monitor various conditions within the retail store environment. These solutions can be built by using one of the application templates within IoT Central and the architecture below as guidance.
--
-![Azure IoT Central Store Analytics](./media/architecture/store-analytics-architecture-frame.png)
-- Set of IoT sensors sending telemetry data to a gateway device
-- Gateway devices sending telemetry and aggregated insights to IoT Central
-- Continuous data export to the desired Azure service for manipulation
-- Data can be structured in the desired format and sent to a storage service
-- Business applications can query data and generate insights that power retail operations
-
-Let's take a look at key components that generally play a part in an in-store analytics solution.
-
-## Condition monitoring sensors
-
-An IoT solution starts with a set of sensors capturing meaningful signals from within a retail store environment. These are represented by the different kinds of sensors shown on the far left of the architecture diagram above.
-
-## Gateway devices
-
-Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
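To make the edge aggregation step concrete, a gateway might reduce a window of raw readings to one summary insight before forwarding it to IoT Central. This is a hypothetical sketch, not the actual firmware logic of any particular gateway:

```python
from statistics import mean

def summarize_window(readings):
    """Aggregate a window of raw sensor readings into one summary insight."""
    temps = [r["temperature"] for r in readings]
    return {
        "count": len(readings),
        "avgTemperature": round(mean(temps), 2),
        "maxTemperature": max(temps),
    }

window = [
    {"temperature": 20.0},
    {"temperature": 21.0},
    {"temperature": 22.6},
]
print(summarize_window(window))
# → {'count': 3, 'avgTemperature': 21.2, 'maxTemperature': 22.6}
```

Sending one summary per window instead of every raw reading is what keeps gateway-to-cloud traffic small.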
-
-## IoT Central application
-
-The Azure IoT Central application ingests data from different kinds of IoT sensors, as well as gateway devices, within the retail store environment and generates a set of meaningful insights.
-
-Azure IoT Central also provides a tailored experience to the store operator enabling them to remotely monitor and manage the infrastructure devices.
-
-## Data transform
-The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a-Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
-
-## Business application
-The IoT data can be used to power different kinds of business applications deployed within a retail environment. A retail store manager or staff member can use these applications to visualize business insights and take meaningful actions in real time. To learn how to build a real-time Power BI dashboard for your retail team, follow the [tutorial](./tutorial-in-store-analytics-create-app.md).
-
-## Next steps
-* Get started with the [In-Store Analytics Checkout](https://aka.ms/checkouttemplate) and [In-Store Analytics Condition Monitoring](https://aka.ms/conditiontemplate) application templates.
-* Take a look at the [end to end tutorial](https://aka.ms/storeanalytics-tutorial) that walks you through how to build a solution using one of the In-Store Analytics application templates.
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Title: 'Tutorial - Create an in-store analytics application in Azure IoT Central'
-description: This tutorial shows how to create an in-store analytics retail application in IoT Central. You'll create it, customize it, and add sensor devices.
+ Title: Tutorial - Azure IoT in-store analytics | Microsoft Docs
+description: This tutorial shows how to deploy and use the in-store analytics retail application template in IoT Central.
Last updated 11/12/2019
-# Tutorial: Create an in-store analytics application in Azure IoT Central
+# Tutorial: Deploy and walk through the in-store analytics application template
-The tutorial shows you how to create an Azure IoT Central in-store analytics application. The sample application is for a retail store. It's a solution to the common business need to monitor and adapt to occupancy and environmental conditions.
+Use the IoT Central *in-store analytics* application template and the guidance in this article to develop an end-to-end in-store analytics solution.
-The sample application that you build includes three real devices: a Rigado Cascade 500 gateway, and two RuuviTag sensors. The tutorial also shows how to use the simulated occupancy sensor included in the application template for testing purposes. The Rigado C500 gateway serves as the communication hub in your application. It communicates with sensors in your store and manages their connections to the cloud. The RuuviTag is an environmental sensor that provides telemetry including temperature, humidity, and pressure. The simulated occupancy sensor provides a way to track motion and presence in the checkout areas of a store.
-This tutorial includes directions for connecting the Rigado and RuuviTag devices to your application. If you have another gateway and sensors, you can still follow the steps to build your application. The tutorial also shows how to create simulated RuuviTag sensors. The simulated sensors enable you to build the application if you don't have real devices.
+- Set of IoT sensors sending telemetry data to a gateway device
+- Gateway devices sending telemetry and aggregated insights to IoT Central
+- Continuous data export to the desired Azure service for manipulation
+- Data can be structured in the desired format and sent to a storage service
+- Business applications can query data and generate insights that power retail operations
-You develop the checkout and condition monitoring solution in three parts:
+Let's take a look at key components that generally play a part in an in-store analytics solution.
-* Create the application and connect devices to monitor conditions
-* Customize the dashboard to enable operators to monitor and manage devices
-* Configure data export to enable store managers to run analytics and visualize insights
+## Condition monitoring sensors
-In this tutorial, you learn how to:
-> [!div class="checklist"]
-> * Use the Azure IoT Central **In-store analytics - checkout** template to create a retail store application
-> * Customize the application settings
-> * Create and customize IoT device templates
-> * Connect devices to your application
-> * Add rules and actions to monitor conditions
+An IoT solution starts with a set of sensors capturing meaningful signals from within a retail store environment. These are represented by the different kinds of sensors shown on the far left of the architecture diagram above.
-## Prerequisites
+## Gateway devices
-To complete this tutorial series, you need:
-* An Azure subscription is recommended. You can optionally use a free 7-day trial. If you don't have an Azure subscription, you can create one on the [Azure sign-up page](https://aka.ms/createazuresubscription).
-* Access to a gateway device and two environmental sensors (you can optionally use simulated devices as described in the tutorial)
-* Device templates for the devices you use (templates are provided for all devices used in the tutorial)
+Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
-## Create an application
-In this section, you create a new Azure IoT Central application from a template. You'll use this application throughout the tutorial series to build a complete solution.
+## IoT Central application
-To create a new Azure IoT Central application:
+The Azure IoT Central application ingests data from different kinds of IoT sensors, as well as gateway devices, within the retail store environment and generates a set of meaningful insights.
-1. Navigate to the [Azure IoT Central application manager](https://aka.ms/iotcentral) website.
+Azure IoT Central also provides a tailored experience to the store operator enabling them to remotely monitor and manage the infrastructure devices.
-1. If you have an Azure subscription, sign in with the credentials you use to access it, otherwise sign in using a Microsoft account:
+## Data transform
- ![Enter your organization account](./media/tutorial-in-store-analytics-create-app/sign-in.png)
+The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a-Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
-1. To start creating a new Azure IoT Central application, select **New Application**.
+## Business application
-1. Select **Retail**. The retail page displays several retail application templates.
+The IoT data can be used to power different kinds of business applications deployed within a retail environment. A retail store manager or staff member can use these applications to visualize business insights and take meaningful actions in real time. To learn how to build a real-time Power BI dashboard for your retail team, follow the [tutorial](./tutorial-in-store-analytics-create-app.md).
-To create a new in-store analytics checkout application:
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+>
+> - Use the Azure IoT Central **In-store analytics - checkout** template to create a retail store application
+> - Customize the application settings
+> - Create and customize IoT device templates
+> - Connect devices to your application
+> - Add rules and actions to monitor conditions
+
+## Prerequisites
-1. Select the **In-store analytics - checkout** application template. This template includes device templates for all devices used in the tutorial except for RuuviTag sensors. The template also provides a dashboard for monitoring checkout and environmental conditions, and device status.
+- There are no specific prerequisites required to deploy this app.
+- You can use the free pricing plan or an Azure subscription.
-1. Optionally, choose a friendly **Application name**. This application is based on a fictional retail store named Contoso. The tutorial uses the **Application name** *Contoso checkout*. The application template is based on the fictional company Northwind. In this tutorial, you use Contoso to learn how to customize the application.
+## Create in-store analytics application
- > [!NOTE]
- > If you use a friendly **Application name**, you still must use a unique value for the application **URL**.
+Create the application using the following steps:
-1. If you have an Azure subscription, enter your *Directory, Azure subscription, and Region*. If you don't have a subscription, you can enable **7-day free trial** and complete the required contact information.
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/iotc-retail-homepage.png" alt-text="Azure IoT Central Build site, Retail tab":::
-1. Select **Create**.
+1. Select **Create app** under **In-store analytics - checkout**.
- ![Azure IoT Central Create Application page](./media/tutorial-in-store-analytics-create-app/preview-application-template.png)
+To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md).
- ![Azure IoT Central Create Application billing info](./media/tutorial-in-store-analytics-create-app/preview-application-template-billinginfo.png)
+## Walk through the application
-## Customize application settings
+The following sections walk you through the key features of the application:
-As a builder, you can change several settings to customize the user experience in your application. In this section, you'll select a predefined application theme. Optionally, you'll learn how to create a custom theme, and update the application image. A custom theme enables you to set the application browser colors, browser icon, and the application logo that appears in the masthead.
+### Customize application settings
+
+You can change several settings to customize the user experience in your application. In this section, you'll select a predefined application theme. Optionally, you'll learn how to create a custom theme, and update the application image. A custom theme enables you to set the application browser colors, browser icon, and the application logo that appears in the masthead.
To select a predefined application theme:

1. Select **Settings** on the masthead.
- ![Azure IoT Central application settings](./media/tutorial-in-store-analytics-create-app/settings-icon.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/settings-icon.png" alt-text="Azure IoT Central application settings.":::
2. Select a new **Theme**.
To create a custom theme:
1. Expand the left pane, if not already expanded.
- ![Azure IoT Central left pane](./media/tutorial-in-store-analytics-create-app/dashboard-expand.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/dashboard-expand.png" alt-text="Azure IoT Central left pane.":::
1. Select **Administration > Customize your application**.
To create a custom theme:
1. Select **Save**.
- ![Azure IoT Central customized logo](./media/tutorial-in-store-analytics-create-app/select-application-logo.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/select-application-logo.png" alt-text="Azure IoT Central customized logo.":::
- After you save, the application updates the browser colors, the logo in the masthead, and the browser icon.
+ After you save, the application updates the browser colors, the logo in the masthead, and the browser icon.
- ![Azure IoT Central updated application settings](./media/tutorial-in-store-analytics-create-app/saved-application-settings.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/saved-application-settings.png" alt-text="Azure IoT Central updated application settings.":::
To update the application image:
To update the application image:
1. Optionally, navigate to the **My Apps** view on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website. The application tile displays the updated application image.
- ![Azure IoT Central customize application image](./media/tutorial-in-store-analytics-create-app/customize-application-image.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/customize-application-image.png" alt-text="Azure IoT Central customize application image.":::
+
+### Create device templates
-## Create device templates
-As a builder, you can create device templates that enable you and the application operators to configure and manage devices. You create a template by building a custom one, by importing an existing template file, or by importing a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application. Optionally, use a device template to generate simulated devices for testing.
+You can create device templates that enable you and the application operators to configure and manage devices. You create a template by building a custom one, by importing an existing template file, or by importing a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application. Optionally, use a device template to generate simulated devices for testing.
The **In-store analytics - checkout** application template has device templates for several devices. There are device templates for two of the three devices you use in the application. The RuuviTag device template isn't included in the **In-store analytics - checkout** application template. In this section, you add a device template for RuuviTag sensors to your application.
To add a RuuviTag device template to your application:
1. Select **Next: Customize**.
- ![Screenshot that highlights the Next: Customize button.](./media/tutorial-in-store-analytics-create-app/ruuvitag-device-template.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-template.png" alt-text="Screenshot that highlights the Next: Customize button.":::
1. Select **Create**. The application adds the RuuviTag device template.

1. Select **Device templates** on the left pane. The page displays all device templates included in the application template, and the RuuviTag device template you just added.
- ![Azure IoT Central RuuviTag sensor device template](./media/tutorial-in-store-analytics-create-app/device-templates-list.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/device-templates-list.png" alt-text="Azure IoT Central RuuviTag sensor device template.":::
+
+### Customize device templates
-## Customize device templates
You can customize the device templates in your application in three ways. First, you customize the native built-in interfaces in your devices by changing the device capabilities. For example, with a temperature sensor, you can change details such as the display name of the temperature interface, the data type, the units of measurement, and minimum and maximum operating ranges. Second, customize your device templates by adding cloud properties. Cloud properties aren't part of the built-in device capabilities. Cloud properties are custom data that your Azure IoT Central application creates, stores, and associates with your devices. An example of a cloud property could be a calculated value, or metadata such as a location that you want to associate with a set of devices.
To customize the built-in interfaces of the RuuviTag device template:
1. Hide the left pane. The summary view of the template displays the device capabilities.
- ![Azure IoT Central RuuviTag device template summary view](./media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png" alt-text="Azure IoT Central RuuviTag device template summary view.":::
1. Select **Customize** in the RuuviTag device template menu.
For the `humidity` telemetry type, make the following changes:
1. Select **Save** to save your changes.
- ![Screenshot that shows the Customize screen and highlights the Save button.](./media/tutorial-in-store-analytics-create-app/ruuvitag-device-template-customize.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-template-customize.png" alt-text="Screenshot that shows the Customize screen and highlights the Save button.":::
To add a cloud property to a device template in your application:
Specify the following values to create a custom property to store the location o
1. Enter the value *Location* for the **Display Name**. This value is automatically copied to the **Name** field, which is a friendly name for the property. You can use the copied value or change it.
-1. Select *String* in the **Schema** dropdown. A string type enables you to associate a location name string with any device based on the template. For instance, you could associate an area in a store with each device. Optionally, you can set the **Semantic Type** of your property to *Location*, and this automatically sets the **Schema** to *Geopoint*. It enables you to associate GPS coordinates with a device.
+1. Select *String* in the **Schema** dropdown. A string type enables you to associate a location name string with any device based on the template. For instance, you could associate an area in a store with each device. Optionally, you can set the **Semantic Type** of your property to *Location*, and it automatically sets the **Schema** to *Geopoint*. It enables you to associate GPS coordinates with a device.
1. Set **Minimum Length** to *2*.
Specify the following values to create a custom property to store the location o
1. Select **Save** to save your custom cloud property.
- ![Azure IoT Central RuuviTag device template customization](./media/tutorial-in-store-analytics-create-app/ruuvitag-device-template-cloud-property.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-template-cloud-property.png" alt-text="Azure IoT Central RuuviTag device template customization.":::
1. Select **Publish**. Publishing a device template makes it visible to application operators. After you've published a template, use it to generate simulated devices for testing, or to connect real devices to your application. If you already have devices connected to your application, publishing a customized template pushes the changes to the devices.
-## Add devices
+### Add devices
+ After you have created and customized device templates, it's time to add devices. For this tutorial, you use the following set of real and simulated devices to build the application:
+
+- A real Rigado C500 gateway
+- Two real RuuviTag sensors
+- A simulated **Occupancy** sensor. The simulated sensor is included in the application template, so you don't need to create it.
Complete the steps in the following two articles to connect a real Rigado gatewa
- To connect a Rigado gateway, see [Connect a Rigado Cascade 500 to your Azure IoT Central application](../core/howto-connect-rigado-cascade-500.md). - To connect RuuviTag sensors, see [Connect a RuuviTag sensor to your Azure IoT Central application](../core/howto-connect-ruuvi.md). You can also use these directions to create two simulated sensors, if needed.
-## Add rules and actions
+### Add rules and actions
+ As part of using sensors in your Azure IoT Central application to monitor conditions, you can create rules to run actions when certain conditions are met. A rule is associated with a device template and one or more devices, and contains conditions that must be met based on device telemetry or events. A rule also has one or more associated actions. The actions may include sending email notifications, or triggering a webhook action to send data to other services.
+
+The **In-store analytics - checkout** application template includes some predefined rules for the devices in the application. In this section, you create a new rule that checks the maximum relative humidity level based on the RuuviTag sensor telemetry. You add an action to the rule so that if the humidity exceeds the maximum, the application sends an email.
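Conceptually, the humidity rule you configure in this section behaves like the following sketch. The threshold, action shape, and function names are illustrative stand-ins for what you actually set up in the IoT Central UI:

```python
HUMIDITY_MAX = 65  # upper bound configured in the rule condition

def evaluate_rule(telemetry: dict) -> list:
    """Return the list of actions to fire for one telemetry message."""
    actions = []
    if telemetry.get("humidity", 0) > HUMIDITY_MAX:
        actions.append({
            "type": "email",
            "displayName": "High humidity notification",
            "body": f"Humidity {telemetry['humidity']}% exceeds {HUMIDITY_MAX}%",
        })
    return actions

print(evaluate_rule({"humidity": 72.5}))  # fires one email action
print(evaluate_rule({"humidity": 50.1}))  # fires nothing
```

In the real application, IoT Central evaluates the condition against incoming RuuviTag telemetry and dispatches the email action for you.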
To create a rule:
1. Enter a typical upper range indoor humidity level for your environment as the **Value**. For example, enter *65*. You've set a condition for your rule that occurs when relative humidity in any RuuviTag real or simulated sensor exceeds this value. You may need to adjust the value up or down depending on the normal humidity range in your environment.
- ![Azure IoT Central add rule conditions](./media/tutorial-in-store-analytics-create-app/rules-add-conditions.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/rules-add-conditions.png" alt-text="Azure IoT Central add rule conditions.":::
To add an action to the rule:
-1. Select **+ Email**.
+1. Select **+ Email**.
1. Enter *High humidity notification* as the friendly **Display name** for the action.
To add an action to the rule:
1. Select **Done** to complete the action.
- ![Azure IoT Central add actions to rules](./media/tutorial-in-store-analytics-create-app/rules-add-action.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/rules-add-action.png" alt-text="Azure IoT Central add actions to rules.":::
1. Select **Save** to save and activate the new rule.
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
Last updated 11/12/2019
# Tutorial: Customize the dashboard and manage devices in Azure IoT Central

In this tutorial, you learn how to customize the dashboard in your Azure IoT Central in-store analytics application. Application operators can use the customized dashboard to run the application and manage the attached devices.

In this tutorial, you learn how to:
> [!div class="checklist"]
>
> * Change the dashboard name
> * Customize image tiles on the dashboard
> * Arrange tiles to modify the layout
The builder should complete the tutorial to create the Azure IoT Central in-stor
* [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) (Required)

## Change the dashboard name

To customize the dashboard, you have to edit the default dashboard in your application. Also, you can create additional new dashboards. The first step to customize the dashboard in your application is to change the name.

1. Navigate to the [Azure IoT Central application manager](https://aka.ms/iotcentral) website.
To customize the dashboard, you have to edit the default dashboard in your appli
1. Select **Edit** on the dashboard toolbar. In edit mode, you can customize the appearance, layout, and content of the dashboard.
- ![Azure IoT Central edit dashboard](./media/tutorial-in-store-analytics-customize-dashboard/dashboard-edit.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/dashboard-edit.png" alt-text="Azure IoT Central edit dashboard.":::
1. Optionally, hide the left pane. Hiding the left pane gives you a larger working area for editing the dashboard.
To customize the dashboard, you have to edit the default dashboard in your appli
1. Select **Save**. Your changes are saved to the dashboard and edit mode is disabled.
- ![Azure IoT Central change dashboard name](./media/tutorial-in-store-analytics-customize-dashboard/dashboard-change-name.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/dashboard-change-name.png" alt-text="Azure IoT Central change dashboard name.":::
## Customize image tiles on the dashboard

An Azure IoT Central application dashboard consists of one or more tiles. A tile is a rectangular container for displaying content on a dashboard. You associate various types of content with tiles, and you drag, drop, and resize tiles to customize a dashboard layout.

There are several types of tiles for displaying content. Image tiles contain images, and you can add a URL that enables users to click the image. Label tiles display plain text. Markdown tiles contain formatted content and let you set an image, a URL, a title, and markdown code that renders as HTML. Telemetry, property, or command tiles display device-specific data.

In this section, you learn how to customize image tiles on the dashboard.
To customize the image tile that displays a brand image on the dashboard:
1. Select **Configure** on the image tile that displays the Northwind brand image.
- ![Azure IoT Central edit brand image](./media/tutorial-in-store-analytics-customize-dashboard/brand-image-edit.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-edit.png" alt-text="Azure IoT Central edit brand image.":::
1. Change the **Title**. The title appears when a user hovers over the image.
To customize the image tile that displays a brand image on the dashboard:
1. Select **Update configuration**. The **Update configuration** button saves changes to the dashboard and leaves edit mode enabled.
- ![Azure IoT Central save brand image](./media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png" alt-text="Azure IoT Central save brand image.":::
1. Optionally, select **Configure** on the tile titled **Documentation**, and specify a URL for support content.
To customize the image tile that displays a map of the sensor zones in the store
1. Select **Update configuration**.
- ![Azure IoT Central save store map](./media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png" alt-text="Azure IoT Central save store map.":::
The example Contoso store map shows four zones: two checkout zones, a zone for apparel and personal care, and a zone for groceries and deli. In this tutorial, you'll associate sensors with these zones to provide telemetry.
- ![Azure IoT Central store zones](./media/tutorial-in-store-analytics-customize-dashboard/store-zones.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/store-zones.png" alt-text="Azure IoT Central store zones.":::
-1. Select **Save**.
+1. Select **Save**.
## Arrange tiles to modify the layout

A key step in customizing a dashboard is to rearrange the tiles to create a useful view. Application operators use the dashboard to visualize device telemetry, manage devices, and monitor conditions in a store. Azure IoT Central simplifies the application builder task of creating a dashboard. The dashboard edit mode enables you to quickly add, move, resize, and delete tiles.

The **In-store analytics - checkout** application template also simplifies the task of creating a dashboard. It provides a working dashboard layout, with sensors connected, and tiles that display checkout line counts and environmental conditions.

In this section, you rearrange the dashboard in the **In-store analytics - checkout** application template to create a custom layout.
To remove tiles that you don't plan to use in your application:
1. Select **X Delete** to remove the following tiles: **Back to all zones**, **Visit store dashboard**, **Wait time**, and all three tiles associated with **Checkout 3**. The Contoso store dashboard doesn't use these tiles.
- ![Azure IoT Central delete tiles](./media/tutorial-in-store-analytics-customize-dashboard/delete-tiles.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/delete-tiles.png" alt-text="Azure IoT Central delete tiles.":::
1. Scroll to bring the remaining dashboard tiles into view.
1. Select **X Delete** to remove the following tiles: **Warm-up checkout zone**, **Cool-down checkout zone**, **Occupancy sensor settings**, **Thermostat sensor settings**, and **Environment conditions**.
- ![Azure IoT Central delete remaining tiles](./media/tutorial-in-store-analytics-customize-dashboard/delete-tiles-2.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/delete-tiles-2.png" alt-text="Azure IoT Central delete remaining tiles.":::
1. Select **Save**. Removing unused tiles frees up space in the edit page, and simplifies the dashboard view for operators.
1. View your changes to the dashboard.
- ![Azure IoT Central after deleting tiles](./media/tutorial-in-store-analytics-customize-dashboard/after-delete-tiles.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/after-delete-tiles.png" alt-text="Azure IoT Central after deleting tiles.":::
After you remove unused tiles, rearrange the remaining tiles to create an organized layout. The new layout includes space for tiles you add in a later step.
To rearrange the remaining tiles:
1. View your layout changes.
- ![Azure IoT Central firmware battery tiles](./media/tutorial-in-store-analytics-customize-dashboard/firmware-battery-tiles.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/firmware-battery-tiles.png" alt-text="Azure IoT Central firmware battery tiles.":::
## Add telemetry tiles to display conditions

After you customize the dashboard layout, you're ready to add tiles to show telemetry. To create a telemetry tile, select a device template and device instance, then select device-specific telemetry to display in the tile. The **In-store analytics - checkout** application template includes several telemetry tiles in the dashboard. The four tiles in the two checkout zones display telemetry from the simulated occupancy sensor. The **People traffic** tile shows counts in the two checkout zones.

In this section, you add two more telemetry tiles to show environmental telemetry from the RuuviTag sensors you added in the [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) tutorial.
To add tiles to display environmental data from the RuuviTag sensors:
1. Select **Combine**.
- ![Azure IoT Central add RuuviTag tile 1](./media/tutorial-in-store-analytics-customize-dashboard/add-zone1-ruuvi.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/add-zone1-ruuvi.png" alt-text="Azure IoT Central add RuuviTag tile 1.":::
A new tile appears to display combined humidity and temperature telemetry for the selected sensor.
To add tiles to display environmental data from the RuuviTag sensors:
1. Select **Save**. The dashboard displays zone telemetry in the two new tiles.
- ![Azure IoT Central all RuuviTag tiles](./media/tutorial-in-store-analytics-customize-dashboard/all-ruuvitag-tiles.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/all-ruuvitag-tiles.png" alt-text="Azure IoT Central all RuuviTag tiles.":::
To edit the **People traffic** tile to show telemetry for only two checkout zones:
To edit the **People traffic** tile to show telemetry for only two checkout zone
1. Select **Save**. The updated dashboard displays counts for only your two checkout zones, which are based on the simulated occupancy sensor.
- ![Azure IoT Central people traffic two lanes](./media/tutorial-in-store-analytics-customize-dashboard/people-traffic-two-lanes.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/people-traffic-two-lanes.png" alt-text="Azure IoT Central people traffic two lanes.":::
## Add property tiles to display device details

Application operators use the dashboard to manage devices and monitor their status. Add a tile for each RuuviTag to enable operators to view the software version.

To add a property tile for each RuuviTag:
To add a property tile for each RuuviTag:
1. Select **Save**.
- ![Azure IoT Central RuuviTag property tiles](./media/tutorial-in-store-analytics-customize-dashboard/add-ruuvi-property-tiles.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/add-ruuvi-property-tiles.png" alt-text="Azure IoT Central RuuviTag property tiles.":::
## Add command tiles to run commands

Application operators also use the dashboard to manage devices by running commands. You can add command tiles to the dashboard that will execute predefined commands on a device. In this section, you add a command tile to enable operators to reboot the Rigado gateway.

To add a command tile to reboot the gateway:
To add a command tile to reboot the gateway:
1. View your completed Contoso dashboard.
- ![Azure IoT Central complete dashboard customization](./media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png" alt-text="Azure IoT Central complete dashboard customization.":::
1. Optionally, select the **Reboot** tile to run the reboot command on your gateway.
iot-central Tutorial In Store Analytics Export Data Visualize Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md
Last updated 11/12/2019
# Tutorial: Export data from Azure IoT Central and visualize insights in Power BI

In the two previous tutorials, you created and customized an IoT Central application using the **In-store analytics - checkout** application template. In this tutorial, you configure your IoT Central application to export telemetry collected from the devices. You then use Power BI to create a custom dashboard for the store manager to visualize the insights derived from the telemetry.

In this tutorial, you will learn how to:

> [!div class="checklist"]
> * Configure an IoT Central application to export telemetry to an event hub.
> * Use Logic Apps to send data from an event hub to a Power BI streaming dataset.
> * Create a Power BI dashboard to visualize data in the streaming dataset.
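The pipeline in this checklist ends with rows being pushed into a Power BI streaming dataset. As a hedged sketch of that last hop (the push URL below is a placeholder; Power BI shows the real one in the streaming dataset's API info panel), the same row push can be built directly in Python:

```python
import json
from urllib import request

# Placeholder push URL -- copy the real one from the streaming
# dataset's API info panel in Power BI.
PUSH_URL = "https://api.powerbi.com/beta/workspace-id/datasets/dataset-id/rows?key=signing-key"

def push_rows_request(rows):
    """Build the POST request that adds rows to a Power BI
    streaming dataset (sending it requires a real push URL)."""
    body = json.dumps(rows).encode("utf-8")
    return request.Request(
        PUSH_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = push_rows_request(
    [{"Timestamp": "2021-08-16T03:05:20Z", "Temperature": 21.2, "Humidity": 41.5}]
)
```

With a real push URL, `request.urlopen(req)` would add the rows; the Logic Apps Power BI connector used later in this tutorial performs an equivalent POST for you.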
Now that you have an **Event Hubs Namespace**, you can create an **Event Hub** to use
You now have an event hub you can use when you configure data export from your IoT Central application:
-![Event hub](./media/tutorial-in-store-analytics-visualize-insights/event-hub.png)
## Configure data export
Now that you have an event hub, you can configure your **In-store analytics - checkou
The data export may take a few minutes to start sending telemetry to your event hub. You can see the status of the export on the **Data exports** page:
-![Continuous data export configuration](./media/tutorial-in-store-analytics-visualize-insights/export-configuration.png)
## Create the Power BI datasets
Your Power BI dashboard will display data from your retail monitoring applicatio
You now have two streaming datasets. The logic app will route telemetry from the two environmental sensors connected to your **In-store analytics - checkout** application to these two datasets:
-![Zone datasets](./media/tutorial-in-store-analytics-visualize-insights/dataset-1.png)
+ This solution uses one streaming dataset for each sensor because it's not possible to apply filters to streaming data in Power BI.
You also need a streaming dataset for the occupancy telemetry:
You now have a third streaming dataset that stores values from the simulated occupancy sensor. This sensor reports the queue length at the two checkouts in the store, and how long customers are waiting in these queues:
-![Occupancy dataset](./media/tutorial-in-store-analytics-visualize-insights/dataset-2.png)
## Create a logic app
Before you create the logic app, you need the device IDs of the two RuuviTag sen
1. Select **Devices** in the left pane. Then select **RuuviTag**.
1. Make a note of the **Device IDs**. In the following screenshot, the IDs are **f5dcf4ac32e8** and **e29ffc8d5326**:
- ![Device IDs](./media/tutorial-in-store-analytics-visualize-insights/device-ids.png)
The following steps show you how to create the logic app in the Azure portal:
To add the logic to your logic app design, select **Code view**:
1. Select **Save** and then select **Designer** to see the visual version of the logic you added:
- ![Logic app design](./media/tutorial-in-store-analytics-visualize-insights/logic-app.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/logic-app.png" alt-text="Logic app design.":::
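In ordinary code, the logic the designer shows amounts to a switch on the incoming device ID that routes each telemetry message to a per-zone Power BI dataset. The following Python sketch is illustrative only: the message shape is an assumption, and the device IDs are the example values from the screenshot earlier (substitute your own):

```python
import json

# Example device IDs from the tutorial's screenshot; use your own.
ZONE_DATASETS = {
    "f5dcf4ac32e8": "Zone 1 environment",
    "e29ffc8d5326": "Zone 2 environment",
}

def route_telemetry(message):
    """Mimic the logic app's 'Switch by DeviceID': choose a target
    dataset and extract the row of humidity/temperature values."""
    body = json.loads(message)
    dataset = ZONE_DATASETS.get(body["deviceId"], "unrouted")
    row = {
        "Timestamp": body["enqueuedTime"],
        "Humidity": body["telemetry"].get("humidity"),
        "Temperature": body["telemetry"].get("temperature"),
    }
    return dataset, row

# Assumed message shape, for illustration only.
msg = json.dumps({
    "deviceId": "f5dcf4ac32e8",
    "enqueuedTime": "2021-08-16T03:05:20Z",
    "telemetry": {"humidity": 41.5, "temperature": 21.2},
})
dataset, row = route_telemetry(msg)
```

In the logic app, the switch's default branch simply drops unrouted messages; the sketch labels them "unrouted" instead so you can see them.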
1. Select **Switch by DeviceID** to expand the action. Then select **Zone 1 environment**, and select **Add an action**.
1. In **Search connectors and actions**, enter **Power BI**, and then press **Enter**.
To add the logic to your logic app design, select **Code view**:
* Select the **Humidity** field, and then select **See more** next to **Parse Telemetry**. Then select **humidity**.
* Select the **Temperature** field, and then select **See more** next to **Parse Telemetry**. Then select **temperature**.
* Select **Save** to save your changes.

The **Zone 1 environment** action looks like the following screenshot:
- ![Zone 1 environment](./media/tutorial-in-store-analytics-visualize-insights/zone-1-action.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/zone-1-action.png" alt-text="Zone 1 environment.":::
1. Select the **Zone 2 environment** action, and select **Add an action**.
1. In **Search connectors and actions**, enter **Power BI**, and then press **Enter**.
1. Select the **Add rows to a dataset (preview)** action.
To add the logic to your logic app design, select **Code view**:
* Select the **Humidity** field, and then select **See more** next to **Parse Telemetry**. Then select **humidity**.
* Select the **Temperature** field, and then select **See more** next to **Parse Telemetry**. Then select **temperature**.
* Select **Save** to save your changes.

The **Zone 2 environment** action looks like the following screenshot:
- ![Zone 2 environment](./media/tutorial-in-store-analytics-visualize-insights/zone-2-action.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/zone-2-action.png" alt-text="Zone 2 environment.":::
1. Select the **Occupancy** action, and then select the **Switch by Interface ID** action.
1. Select the **Dwell Time interface** action, and select **Add an action**.
1. In **Search connectors and actions**, enter **Power BI**, and then press **Enter**.
To add the logic to your logic app design, select **Code view**:
* Select the **Dwell Time 1** field, and then select **See more** next to **Parse Telemetry**. Then select **DwellTime1**.
* Select the **Dwell Time 2** field, and then select **See more** next to **Parse Telemetry**. Then select **DwellTime2**.
* Select **Save** to save your changes.

The **Dwell Time interface** action looks like the following screenshot:
- ![Screenshot that shows the "Dwell Time interface" action.](./media/tutorial-in-store-analytics-visualize-insights/occupancy-action-1.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/occupancy-action-1.png" alt-text="Dwell Time interface.":::
1. Select the **People Count interface** action, and select **Add an action**.
1. In **Search connectors and actions**, enter **Power BI**, and then press **Enter**.
1. Select the **Add rows to a dataset (preview)** action.
To add the logic to your logic app design, select **Code view**:
* Select the **Queue Length 1** field, and then select **See more** next to **Parse Telemetry**. Then select **count1**.
* Select the **Queue Length 2** field, and then select **See more** next to **Parse Telemetry**. Then select **count2**.
* Select **Save** to save your changes.

The **People Count interface** action looks like the following screenshot:
- ![Occupancy action](./media/tutorial-in-store-analytics-visualize-insights/occupancy-action-2.png)
+ :::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/occupancy-action-2.png" alt-text="Occupancy action.":::
The logic app runs automatically. To see the status of each run, navigate to the **Overview** page for the logic app in the Azure portal:
Add four line chart tiles to show the temperature and humidity from the two envi
The following screenshot shows the settings for the first chart:
-![Line chart settings](./media/tutorial-in-store-analytics-visualize-insights/line-chart.png)
### Add cards to show environmental data
Add four card tiles to show the most recent temperature and humidity values from
The following screenshot shows the settings for the first card:
-![Card settings](./media/tutorial-in-store-analytics-visualize-insights/card-settings.png)
### Add tiles to show checkout occupancy data
Add four card tiles to show the queue length and dwell time for the two checkout
Resize and rearrange the tiles on your dashboard to look like the following screenshot:
-![Screenshot that shows the Power B I dashboard with resized and rearranged tiles.](./media/tutorial-in-store-analytics-visualize-insights/pbi-dashboard.png)
You could add some additional graphics resources to further customize the dashboard:
-![Power BI dashboard](./media/tutorial-in-store-analytics-visualize-insights/pbi-dashboard-graphics.png)
## Clean up resources
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
Title: Tutorial of IoT Digital Distribution Center | Microsoft Docs
-description: A tutorial of digital distribution center application template for IoT Central
+ Title: Tutorial - Azure IoT Digital Distribution Center | Microsoft Docs
+description: This tutorial shows you how to deploy and use the digital distribution center application template for IoT Central
Last updated 10/20/2019
-# Tutorial: Deploy and walk through a digital distribution center application template
+# Tutorial: Deploy and walk through the digital distribution center application template
-This tutorial shows you how to get started by deploying an IoT Central **digital distribution center** application template. You will learn how to deploy the template, what is included out of the box, and what you might want to do next.
+Use the IoT Central *digital distribution center* application template and the guidance in this article to develop an end-to-end digital distribution center solution.
-In this tutorial, you learn how to,
+ :::image type="content" source="media/tutorial-iot-central-ddc/digital-distribution-center-architecture.png" alt-text="Diagram of the digital distribution center application architecture.":::
+
+1. A set of IoT sensors sends telemetry data to a gateway device.
+2. Gateway devices send telemetry and aggregated insights to IoT Central.
+3. Data is routed to the desired Azure service for manipulation.
+4. Azure services such as Azure Stream Analytics (ASA) or Azure Functions can be used to reformat data streams and send them to the desired storage accounts.
+5. Processed data is stored in hot storage for near real-time actions, or in cold storage for more insight enhancement based on machine learning (ML) or batch analysis.
+6. Logic Apps can be used to power various business workflows in end-user business applications.
+
+### Video cameras
+
+Video cameras are the primary sensors in this digitally connected, enterprise-scale ecosystem. Advancements in machine learning and artificial intelligence allow video to be turned into structured data and processed at the edge before being sent to the cloud. We can use IP cameras to capture images, compress them on the camera, and then send the compressed data over edge compute to the video analytics pipeline, or use GigE Vision cameras to capture images on the sensor and then send these images directly to Azure IoT Edge, which compresses them before processing in the video analytics pipeline.
+
+### Azure IoT Edge Gateway
+
+The "cameras-as-sensors" and edge workloads are managed locally by Azure IoT Edge, and the camera stream is processed by the analytics pipeline. Running the video analytics pipeline at the Azure IoT Edge gateway brings many benefits, including decreased response time and lower bandwidth consumption, which result in low latency and rapid data processing. Only the most essential metadata, insights, or actions are sent to the cloud for further action or investigation.
+
+### Device Management with IoT Central
+
+Azure IoT Central is a solution development platform that simplifies IoT device and Azure IoT Edge gateway connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related development. Customers and partners can build end-to-end enterprise solutions to achieve a digital feedback loop in distribution centers.
+
+### Business Insights and actions using data egress
+
+The IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights that are based on telemetry data processing, or the raw telemetry itself, are typically exported to a preferred line-of-business application. Export can be done through a webhook, Service Bus, an event hub, or blob storage to build, train, and deploy machine learning models and further enrich insights.
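For example, a line-of-business consumer of exported telemetry might derive a simple insight such as an invalid-package rate. This is a hedged sketch only: the record fields (`valid`, `invalid`) are assumed for illustration and are not the documented export schema:

```python
def invalid_package_rate(records):
    """Aggregate exported package-count records (field names are
    assumed for illustration) into an invalid-package rate."""
    valid = sum(r.get("valid", 0) for r in records)
    invalid = sum(r.get("invalid", 0) for r in records)
    total = valid + invalid
    # Guard against an empty export window.
    return invalid / total if total else 0.0

# Two sample export records: 200 packages total, 15 invalid.
sample = [
    {"valid": 95, "invalid": 5},
    {"valid": 90, "invalid": 10},
]
rate = invalid_package_rate(sample)  # 15 / 200 = 0.075
```

A result like this could feed the "Too many invalid packages" style of alerting described later, or be written to a Power BI dataset for the operator dashboard.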
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Create digital distribution center application
-> * Walk through the application
+
+> * Create digital distribution center application.
+> * Walk through the application.
## Prerequisites

* No specific prerequisites are required to deploy this app.
* An Azure subscription is recommended, but you can try it without one.

## Create digital distribution center application template
-You can create application using following steps
+Create the application using the following steps:
+
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
-1. Navigate to the Azure IoT Central application manager website. Select **Build** from the left-hand navigation bar and then click the **Retail** tab.
+ :::image type="content" source="media/tutorial-iot-central-ddc/iotc-retail-home-page.png" alt-text="Screenshot showing how to create an app.":::
- :::image type="content" source="media/tutorial-iot-central-ddc/iotc-retail-homepage.png" alt-text="Digital distribution center application template":::
-1. Select **Retail** tab and select **Create app** under **digital distribution center application**
+1. Select **Create app** under **digital distribution center**.
-1. **Create app** will open New application form and fill up the requested details as show below.
- **Application name**: you can use default suggested name or enter your friendly application name.
- **URL**: you can use suggested default URL or enter your friendly unique memorable URL. Next, the default setting is recommended if you already have an Azure Subscription. You can start with 7-day free trial pricing plan and choose to convert to a standard pricing plan at any time before the free trail expires.
- **Billing Info**: The Directory, Azure Subscription, and Region details are required to provision the resources.
- **Create**: Select create at the bottom of the page to deploy your application.
+To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md).
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-create.png" alt-text="Screenshot showing how to create an app from the digital distribution center application template":::
+## Walk through the application
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-create-billinginfo.png" alt-text="Screenshot showing the billing options when you create the application":::
+The following sections walk you through the key features of the application:
-## Walk through the application dashboard
+### Dashboard
-After successfully deploying the app template, your default dashboard is a distribution center operator focused portal. Northwind Trader is a fictitious distribution center solution provider managing conveyor systems.
+The default dashboard is a distribution center operator focused portal. Northwind Trader is a fictitious distribution center solution provider managing conveyor systems.
In this dashboard, you will see one gateway and one camera acting as an IoT device. The gateway provides telemetry about packages, such as valid, invalid, unidentified, and size, along with the associated device twin properties. All downstream commands are executed on IoT devices, such as a camera. This dashboard is preconfigured to showcase the critical distribution center device operations activity. The dashboard is logically organized to show the device management capabilities of the Azure IoT gateway and IoT device.
- * You can perform gateway command & control tasks
- * Manage all cameras that are part of the solution.
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the digital distribution center dashboard](./media/tutorial-iot-central-ddc/ddc-dashboard.png)
+* You can perform gateway command & control tasks
+* Manage all cameras that are part of the solution.
-## Device Template
+ :::image type="content" source="media/tutorial-iot-central-ddc/ddc-dashboard.png" alt-text="Screenshot showing the digital distribution center dashboard.":::
+
+### Device Template
Select the **Device templates** tab to see the gateway capability model. A capability model is structured around two different interfaces: **Camera** and **Digital Distribution Gateway**.
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the digital distribution gateway device template in the application](./media/tutorial-iot-central-ddc/ddc-devicetemplate1.png)
+ :::image type="content" source="media/tutorial-iot-central-ddc/ddc-devicetemplate1.png" alt-text="Screenshot showing the digital distribution gateway device template in the application.":::
**Camera** - This interface organizes all the camera-specific command capabilities
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the camera interface in the digital distribution gateway device template](./media/tutorial-iot-central-ddc/ddc-camera.png)
+ :::image type="content" source="media/tutorial-iot-central-ddc/ddc-camera.png" alt-text="Screenshot showing the camera interface in the digital distribution gateway device template.":::
**Digital Distribution Gateway** - This interface represents all the telemetry coming from the camera, cloud-defined device twin properties, and gateway info.
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the digital distribution gateway interface in the digital distribution gateway device template](./media/tutorial-iot-central-ddc/ddc-devicetemplate1.png)
+ :::image type="content" source="media/tutorial-iot-central-ddc/ddc-devicetemplate1.png" alt-text="Screenshot showing the digital distribution gateway interface in the digital distribution gateway device template.":::
+### Gateway Commands
-## Gateway Commands
This interface organizes all the gateway command capabilities
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the gateway commands interface in the digital distribution gateway device template](./media/tutorial-iot-central-ddc/ddc-camera.png)
+ :::image type="content" source="media/tutorial-iot-central-ddc/ddc-camera.png" alt-text="Screenshot showing the gateway commands interface in the digital distribution gateway device template.":::
+
+### Rules
-## Rules
Select the **Rules** tab to see the two rules that exist in this application template. These rules are configured to send email notifications to the operators for further investigation.

**Too many invalid packages alert** - This rule is triggered when the camera detects a high number of invalid packages flowing through the conveyor system.
-
-**Large package** - This rule will trigger if the camera detects huge package that cannot be inspected for the quality.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the list of rules in the digital distribution center application](./media/tutorial-iot-central-ddc/ddc-rules.png)
-## Jobs
-Select the jobs tab to see five different jobs that exist as part of this application template:
-You can leverage jobs feature to perform solution-wide operations. Here digital distribution center jobs are using the device commands & twin capability to perform tasks such as,
- * calibrating camera before initiating the package detection
- * periodically updating camera firmware
- * modifying the telemetry interval to manage data upload
+**Large package** - This rule triggers if the camera detects a huge package that can't be inspected for quality.
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the list of jobs in the digital distribution center application](./media/tutorial-iot-central-ddc/ddc-jobs.png)
+ :::image type="content" source="media/tutorial-iot-central-ddc/ddc-rules.png" alt-text="Screenshot showing the list of rules in the digital distribution center application.":::
## Clean up resources

If you're not going to continue to use this application, delete the application template by visiting **Administration** > **Application settings** and selecting **Delete**.
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing how to delete the application when you're done with it](./media/tutorial-iot-central-ddc/ddc-cleanup.png)
+ :::image type="content" source="media/tutorial-iot-central-ddc/ddc-cleanup.png" alt-text="Screenshot showing how to delete the application when you're done with it.":::
## Next steps
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
Title: Tutorial of IoT Smart inventory management | Microsoft Docs
-description: A tutorial of smart inventory management application template for IoT Central
+ Title: Tutorial - Azure IoT Smart inventory management | Microsoft Docs
+description: This tutorial shows you how to deploy and use the smart inventory management application template for IoT Central
Last updated 10/20/2019
-# Tutorial: Deploy and walk through a smart inventory management application template
+# Tutorial: Deploy and walk through the smart inventory management application template
-This tutorial shows you how to get started by deploying an IoT Central **smart inventory management** application template. You will learn how to deploy the template, what is included out of the box, and what you might want to do next.
+Use the IoT Central *smart inventory management* application template and the guidance in this article to develop an end-to-end smart inventory management solution.
+
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-architecture.png" alt-text="Diagram of the smart inventory management application architecture.":::
+
+1. A set of IoT sensors sends telemetry data to a gateway device.
+2. Gateway devices send telemetry and aggregated insights to IoT Central.
+3. Data is routed to the desired Azure service for manipulation.
+4. Azure services such as Azure Stream Analytics (ASA) or Azure Functions can be used to reformat data streams and send them to the desired storage accounts.
+5. Processed data is stored in hot storage for near real-time actions, or in cold storage for additional insight enhancement based on machine learning (ML) or batch analysis.
+6. Logic Apps can be used to power various business workflows in end-user business applications.
+
+### Details
+
+The following sections outline each part of the conceptual architecture, starting with telemetry ingestion from radio-frequency identification (RFID) and Bluetooth low energy (BLE) tags.
+
+### RFID tags
+
+RFID tags transmit data about an item through radio waves. RFID tags typically don't have a battery unless specified. Tags receive energy from the radio waves generated by the reader and transmit a signal back toward the RFID reader.
+
+### BLE tags
+
+A BLE tag, or energy beacon, broadcasts packets of data at regular intervals. Beacon data is detected by BLE readers, or by installed services on smartphones, which then transmit it to the cloud.
+
+### RFID & BLE readers
+
+An RFID reader converts the radio waves to a more usable form of data. Information collected from the tags is then stored on a local edge server or sent to the cloud using JSON-RPC 2.0 over MQTT.
+BLE readers, also known as access points (AP), are similar to RFID readers. They detect nearby Bluetooth signals and relay their messages to a local Azure IoT Edge device or to the cloud using JSON-RPC 2.0 over MQTT.
+Many readers can read both RFID and beacon signals, and provide additional sensor capabilities related to temperature, humidity, accelerometer, and gyroscope.
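To make the transport concrete, here's a minimal Python sketch of the kind of JSON-RPC 2.0 payload a reader might hand to a gateway for publishing over MQTT. Only the JSON-RPC 2.0 envelope (`jsonrpc`/`method`/`params`) is standard; the method name, parameter shape, and topic name are hypothetical, since each reader vendor defines its own RPC vocabulary.

```python
import json

def build_tag_read_notification(reader_id, tag_ids):
    """Build a JSON-RPC 2.0 notification carrying a batch of tag reads.

    Only the JSON-RPC 2.0 envelope (jsonrpc/method/params) is standard;
    the method name and parameter shape below are hypothetical.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "tag.read",  # hypothetical method name
        "params": {"reader": reader_id, "tags": tag_ids},
    })

# A gateway would publish this payload to an MQTT topic, for example with
# paho-mqtt: client.publish("readers/dock-1/reads", payload)
payload = build_tag_read_notification("dock-1", ["E200-3412", "E200-3413"])
print(payload)
```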
+
+### Azure IoT Edge gateway
+
+The Azure IoT Edge gateway provides a place to preprocess data locally before sending it on to the cloud. You can also deploy cloud workloads, such as artificial intelligence, Azure and third-party services, and business logic, using standard containers.
+
+### Device management with IoT Central
+
+Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related development. Customers and partners can build end-to-end enterprise solutions to achieve a digital feedback loop in inventory management.
+
+### Business insights & actions using data egress
+
+The IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights, based on telemetry data processing or raw telemetry, are typically exported to a preferred line-of-business application. You can use a webhook, Service Bus, Event Hubs, or Blob Storage to build, train, and deploy machine learning models and further enrich insights.
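As a sketch of what a line-of-business consumer of exported data might do, the following Python snippet flattens a telemetry export message into a flat record suitable for a table or dashboard. The input shape here is a simplified, illustrative stand-in for an exported message, not the exact export schema.

```python
def flatten_export(message):
    """Flatten a simplified IoT Central export message into a flat record.

    The input shape is an illustrative stand-in: real export messages
    carry more envelope fields than shown here.
    """
    return {
        "device_id": message["deviceId"],
        "enqueued_at": message["enqueuedTime"],
        **message["telemetry"],  # promote telemetry fields to top level
    }

row = flatten_export({
    "deviceId": "gateway-01",
    "enqueuedTime": "2021-08-13T10:00:00Z",
    "telemetry": {"activeTags": 112, "unknownTags": 3},
})
print(row)
```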
In this tutorial, you learn how to:

> [!div class="checklist"]
> * Create a smart inventory management application
> * Walk through the application
* No specific prerequisites are required to deploy this app.
* An Azure subscription is recommended, but you can try the app without one.
-## Create smart inventory management application template
-
-You can create application using following steps
+## Create smart inventory management application
-1. Navigate to the Azure IoT Central application manager website. Select **Build** from the left-hand navigation bar and then click the **Retail** tab.
+Create the application using the following steps:
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/iotc_retail_homepage.png" alt-text="Screenshot showing how to select the smart inventory management application template":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/iotc-retail-home-page.png" alt-text="Screenshot showing how to create an app from the smart inventory management application template":::
-2. Select **Retail** tab and select **Create app** under **smart inventory management**
+1. Select **Create app** under **smart inventory management**.
-3. **Create app** will open New application form and fill up the requested details as show below.
- **Application name**: you can use default suggested name or enter your friendly application name.
- **URL**: you can use suggested default URL or enter your friendly unique memorable URL. Next, the default setting is recommended if you already have an Azure Subscription. You can start with 7-day free trial pricing plan and choose to convert to a standard pricing plan at any time before the free trail expires.
- **Billing Info**: The Directory, Azure Subscription, and Region details are required to provision the resources.
- **Create**: Select create at the bottom of the page to deploy your application.
+To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md).
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_app_create.png" alt-text="Screenshot showing how to create an app from the smart inventory management application template":::
+## Walk through the application
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-app-create-billinginfo.png" alt-text="Screenshot showing the billing options when you create the application":::
-
-## Walk through the application
+The following sections walk you through the key features of the application:
### Dashboard
This dashboard is pre-configured to showcase the critical smart inventory manage
The dashboard is logically divided between two different gateway device management operations:

* The warehouse is deployed with a fixed BLE gateway and BLE tags on pallets to track and trace inventory at a larger facility.
* The retail store is implemented with a fixed RFID gateway and RFID tags at an individual item level to track and trace the stock in a store outlet.
- * View the gateway [location](../core/howto-use-location-data.md), status & related details
+ * View the gateway [location](../core/howto-use-location-data.md), status & related details
+
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-dashboard-1.png" alt-text="Screenshot showing the top half of the smart inventory management dashboard.":::
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the top half of the smart inventory managementdashboard](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_dashboard1.png)
+ * You can easily track the total number of gateways, active, and unknown tags.
+ * You can perform device management operations such as update firmware, disable sensor, enable sensor, update sensor threshold, update telemetry intervals & update device service contracts
+ * Gateway devices can perform on-demand inventory management with a complete or incremental scan.
- * You can easily track the total number of gateways, active, and unknown tags.
- * You can perform device management operations such as update firmware, disable sensor, enable sensor, update sensor threshold, update telemetry intervals & update device service contracts
- * Gateway devices can perform on-demand inventory management with a complete or incremental scan.
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-dashboard-2.png" alt-text="Screenshot showing the bottom half of the smart inventory management dashboard.":::
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the bottom half of the smart inventory managementdashboard](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_dashboard2.png)
+### Device Template
-## Device Template
Select the **Device templates** tab to see the gateway capability model. A capability model is structured around two different interfaces: **Gateway Telemetry & Property** and **Gateway Commands**.

**Gateway Telemetry & Property** - This interface represents all the telemetry related to sensors, location, and device info, and device twin property capabilities such as gateway thresholds and update intervals.
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the inventory gateway device template in the application](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_devicetemplate1.png)
-
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-device-template-1.png" alt-text="Screenshot showing the inventory gateway device template in the application.":::
**Gateway Commands** - This interface organizes all the gateway command capabilities
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the gateway commands interface in the inventory gateway device template](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_devicetemplate2.png)
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-device-template-2.png" alt-text="Screenshot showing the gateway commands interface in the inventory gateway device template.":::
+
+### Rules
-## Rules
Select the **Rules** tab to see the two rules that exist in this application template. These rules are configured to email notifications to the operators for further investigation:

**Gateway offline**: This rule triggers if the gateway doesn't report to the cloud for a prolonged period. The gateway could be unresponsive because of low battery mode, loss of connectivity, or device health issues.

**Unknown tags**: It's critical to track every RFID and BLE tag associated with an asset. If the gateway detects too many unknown tags, it's an indication of synchronization challenges with tag sourcing applications.
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the list of rules in the smart inventory managementapplication](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_rules.png)
-
-## Jobs
-Select the jobs tab to see five different jobs that exist as part of this application template:
-You can use jobs feature to perform solution-wide operations. Here inventory management jobs are using the device commands and twin capability to perform tasks such as,
- * disabling readers across all the gateway
- * modifying the telemetry threshold between
- * perform on-demand inventory scanning across the entire solution.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the list of jobs in the smart inventory managementapplication](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_jobs.png)
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-rules.png" alt-text="Screenshot showing the list of rules in the smart inventory management application.":::
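The logic behind these two rules can be sketched in a few lines of Python. The thresholds used here are illustrative assumptions, not the values configured in the template.

```python
from datetime import datetime, timedelta, timezone

OFFLINE_AFTER = timedelta(minutes=30)  # assumed threshold, not the template's
UNKNOWN_TAG_LIMIT = 10                 # assumed threshold, not the template's

def evaluate_rules(last_seen, unknown_tag_count, now=None):
    """Return the alert names a single gateway reading would raise."""
    now = now or datetime.now(timezone.utc)
    alerts = []
    if now - last_seen > OFFLINE_AFTER:
        alerts.append("gateway-offline")
    if unknown_tag_count > UNKNOWN_TAG_LIMIT:
        alerts.append("unknown-tags")
    return alerts

now = datetime.now(timezone.utc)
print(evaluate_rules(now - timedelta(hours=2), 3, now))  # ['gateway-offline']
```

In the real application, these conditions are defined in the IoT Central rule editor and the notification is an email to the operator rather than a returned list.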
## Clean up resources

If you're not going to continue to use this application, delete the application template by visiting **Administration** > **Application settings** and selecting **Delete**.
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing how to delete the application when you're done with it](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_cleanup.png)
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-cleanup.png" alt-text="Screenshot showing how to delete the application when you're done with it.":::
## Next steps
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
Title: Micro-fulfillment center app template tutorial | Microsoft Docs
-description: A tutorial about the micro-fulfillment center application template for Azure IoT Central
+ Title: Tutorial - Azure IoT Micro-fulfillment center | Microsoft Docs
+description: This tutorial shows you how to deploy and use the micro-fulfillment center application template for Azure IoT Central
Last updated 01/09/2020
-# Tutorial: Deploy and walk through a micro-fulfillment center application template
+# Tutorial: Deploy and walk through the micro-fulfillment center application template
-In this tutorial, you use the Azure IoT Central micro-fulfillment center application template to build a retail solution.
+Use the IoT Central *micro-fulfillment center* application template and the guidance in this article to develop an end-to-end micro-fulfillment center solution.
-In this tutorial, you learn:
+![Azure IoT Central Store Analytics](./media/tutorial-micro-fulfillment-center-app/micro-fulfillment-center-architecture-frame.png)
-> [!div class="checklist"]
-> * How to deploy the application template
-> * How to use the application template
+1. Set of IoT sensors sending telemetry data to a gateway device
+2. Gateway devices sending telemetry and aggregated insights to IoT Central
+3. Continuous data export to the desired Azure service for manipulation
+4. Data can be structured in the desired format and sent to a storage service
+5. Business applications can query data and generate insights that power retail operations
-## Prerequisites
-To complete this tutorial series, you need an Azure subscription. You can optionally use a free 7-day trial. If you don't have an Azure subscription, you can create one on the [Azure sign-up page](https://aka.ms/createazuresubscription).
+### Robotic carriers
+
+A micro-fulfillment center solution will likely have a large set of robotic carriers generating different kinds of telemetry signals. These signals can be ingested by a gateway device, aggregated, and then sent to IoT Central as reflected by the left side of the architecture diagram.
-## Create an application
-In this section, you create a new Azure IoT Central application from a template. You'll use this application throughout the tutorial series to build a complete solution.
+### Condition monitoring sensors
-To create a new Azure IoT Central application:
+An IoT solution starts with a set of sensors capturing meaningful signals from within your fulfillment center. These sensors are reflected on the far left of the architecture diagram above.
-1. Go to the [Azure IoT Central application manager](https://aka.ms/iotcentral) website.
-1. If you have an Azure subscription, sign in with the credentials you use to access it. Otherwise, sign in by using a Microsoft account:
+### Gateway devices
- ![Screenshot of Microsoft account sign-in dialog box](./media/tutorial-in-store-analytics-create-app/sign-in.png)
+Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
-1. To start creating a new Azure IoT Central application, select **New Application**.
+### IoT Central application
-1. Select **Retail**. The retail page displays several retail application templates.
+The Azure IoT Central application ingests data from different kinds of IoT sensors, robots, as well as gateway devices within the fulfillment center environment, and generates a set of meaningful insights.
-To create a new micro-fulfillment center application that uses preview features:
-1. Select the **Micro-fulfillment center** application template. This template includes device templates for all devices used in the tutorial. The template also provides a dashboard for monitoring conditions within your fulfillment center, as well as the conditions for your robotic carriers.
+Azure IoT Central also provides a tailored experience for store operators, enabling them to remotely monitor and manage the infrastructure devices.
- ![Screenshot of Azure IoT Central Build your IoT application page](./media/tutorial-micro-fulfillment-center-app/iotc-retail-homepage-mfc.png)
-
-1. Optionally, choose a friendly **Application name**. The application template is based on the fictional company Northwind Traders.
+### Data transform
- >[!NOTE]
- >If you use a friendly application name, you still must use a unique value for the application URL.
+The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a-Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
-1. If you have an Azure subscription, enter your directory, Azure subscription, and region. If you don't have a subscription, you can enable 7-day free trial, and complete the required contact information.
+### Business application
-1. Select **Create**.
+The IoT data can be used to power different kinds of business applications deployed within a retail environment. A fulfillment center manager or employee can use these applications to visualize business insights and take meaningful actions in real time. To learn how to build a real-time Power BI dashboard for your retail team, follow the [tutorial](./tutorial-in-store-analytics-create-app.md).
- ![Screenshot of Azure IoT Central New application page](./media/tutorial-micro-fulfillment-center-app/iotc-retail-create-app-mfc.png)
+In this tutorial, you learn:
- ![Screenshot of Azure IoT Central billing info](./media/tutorial-micro-fulfillment-center-app/iotc-retail-create-app-mfc-billing.png)
+> [!div class="checklist"]
+
+> * How to deploy the application template
+> * How to use the application template
+
+## Prerequisites
+
+* There are no specific prerequisites required to deploy this app.
+* You can use the free pricing plan or use an Azure subscription.
+
+## Create micro-fulfillment application
+
+Create the application using the following steps:
+
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
+
+ :::image type="content" source="media/tutorial-micro-fulfillment-center-app/iotc-retail-homepage-mfc.png" alt-text="Screenshot showing how to create an app.":::
+
+1. Select **Create app** under **micro-fulfillment center**.
## Walk through the application
+The following sections walk you through the key features of the application:
+After successfully deploying the app template, you see the **Northwind Traders micro-fulfillment center dashboard**. Northwind Traders is a fictitious retailer that has a micro-fulfillment center being managed in this Azure IoT Central application. On this dashboard, you see information and telemetry about the devices in this template, along with a set of commands, jobs, and actions that you can take. The dashboard is logically split into two sections. On the left, you can monitor the environmental conditions within the fulfillment structure, and on the right, you can monitor the health of a robotic carrier within the facility.

From the dashboard, you can:
* View the floor plan and location of the robotic carriers within the fulfillment structure. * Trigger commands, such as resetting the control system, updating the carrier's firmware, and reconfiguring the network.
- ![Screenshot of the top half of the Northwind Traders micro-fulfillment center dashboard.](./media/tutorial-micro-fulfillment-center-app/mfc-dashboard-1.png)
- * See an example of the dashboard that an operator can use to monitor conditions within the fulfillment center.
- * Monitor the health of the payloads that are running on the gateway device within the fulfillment center.
+ :::image type="content" source="media/tutorial-micro-fulfillment-center-app/mfc-dashboard-1.png" alt-text="Screenshot of the top half of the Northwind Traders micro-fulfillment center dashboard.":::
+ * See an example of the dashboard that an operator can use to monitor conditions within the fulfillment center.
+ * Monitor the health of the payloads that are running on the gateway device within the fulfillment center.
+
 + :::image type="content" source="media/tutorial-micro-fulfillment-center-app/mfc-dashboard-2.png" alt-text="Screenshot of the bottom half of the Northwind Traders micro-fulfillment center dashboard.":::
+
- ![Screenshot of the botton half of the Northwind Traders micro-fulfillment center dashboard.](./media/tutorial-micro-fulfillment-center-app/mfc-dashboard-2.png)
+### Device template
-## Device template
If you select the device templates tab, you see that there are two different device types that are part of the template:

* **Robotic Carrier**: This device template represents the definition for a functioning robotic carrier that has been deployed in the fulfillment structure, and is performing appropriate storage and retrieval operations. If you select the template, you see that the robot is sending device data, such as temperature and axis position, and properties like the robotic carrier status.
* **Structure Condition Monitoring**: This device template represents a device collection that allows you to monitor environment conditions, as well as the gateway device hosting various edge workloads to power your fulfillment center. The device sends telemetry data, such as the temperature, the number of picks, and the number of orders. It also sends information about the state and health of the compute workloads running in your environment.
- ![Micro-fulfillment Center Device Templates](./media/tutorial-micro-fulfillment-center-app/device-templates.png)
+ :::image type="content" source="media/tutorial-micro-fulfillment-center-app/device-templates.png" alt-text="Micro-fulfillment Center Device Templates.":::
+If you select the device groups tab, you also see that these device templates automatically have device groups created for them.
-## Rules
+### Rules
+ On the **Rules** tab, you see a sample rule that exists in the application template to monitor the temperature conditions for the robotic carrier. You might use this rule to alert the operator if a specific robot in the facility is overheating, and needs to be taken offline for servicing. Use the sample rule as inspiration to define rules that are more appropriate for your business functions.
-![Screenshot of the Rules tab](./media/tutorial-micro-fulfillment-center-app/rules.png)
+ :::image type="content" source="media/tutorial-micro-fulfillment-center-app/rules.png" alt-text="Screenshot of the Rules tab.":::
-## Clean up resources
+### Clean up resources
If you're not going to continue to use this application, delete the application template. Go to **Administration** > **Application settings**, and select **Delete**.
-![Screenshot of Micro-fulfillment center Application settings page](./media/tutorial-micro-fulfillment-center-app/delete.png)
+ :::image type="content" source="media/tutorial-micro-fulfillment-center-app/delete.png" alt-text="Screenshot of Micro-fulfillment center Application settings page.":::
## Next steps
iot-central Tutorial Video Analytics Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-deploy.md
Title: 'Tutorial: How to deploy the video analytics - object and motion detection Azure IoT Central application template'
-description: Tutorial - This guide summarizes the steps to deploy an Azure IoT Central application using the video analytics - object and motion detection application template.
+ Title: Tutorial - Azure IoT video analytics - object and motion detection | Microsoft Docs
+description: This tutorial shows you how to deploy and use the video analytics - object and motion detection application template for IoT Central.
Last updated 07/31/2020
-# Tutorial: How to deploy an IoT Central application using the video analytics - object and motion detection application template
+# Tutorial: Deploy and walk through the video analytics - object and motion detection application template
-For an overview of the key *video analytics - object and motion detection* application components, see [object and motion detection video analytics application architecture](architecture-video-analytics.md).
+The **Video analytics - object and motion detection** application template lets you build IoT solutions that include live video analytics capabilities. For an overview of the key application components, see [object and motion detection video analytics application architecture](architecture-video-analytics.md).
++
+The key components of the video analytics solution include:
+
+### Live video analytics (LVA)
+
+LVA provides a platform for you to build intelligent video applications that span the edge and the cloud. The platform offers the capability to capture, record, and analyze live video, and to publish the results, which could be video or video analytics, to Azure services. The Azure services could be running in the cloud or on the edge. The platform can be used to enhance IoT solutions with video analytics.
+
+For more information, see [Live Video Analytics](https://github.com/Azure/live-video-analytics) on GitHub.
+
+### IoT Edge LVA gateway module
+
+The IoT Edge LVA gateway module instantiates cameras as new devices and connects them directly to IoT Central using the IoT device client SDK.
+
+In this reference implementation, devices connect to the solution using symmetric keys from the edge. For more information about device connectivity, see [Get connected to Azure IoT Central](../core/concepts-get-connected.md)
+
+### Media graph
+
+Media graph lets you define where to capture the media from, how to process it, and where to deliver the results. You configure media graph by connecting components, or nodes, in the desired manner. For more information, see [Media Graph](https://github.com/Azure/live-video-analytics/tree/master/MediaGraph) on GitHub.
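To make the node-and-connection idea concrete, here's an illustrative Python sketch of a source-to-sink chain. The node names loosely mirror LVA topology concepts (an RTSP source, a motion detection processor, an asset sink), but this structure is a simplified sketch, not the real topology schema, and the camera URL is hypothetical.

```python
# Illustrative media-graph-like structure: each node names its input,
# forming a chain from capture source to delivery sink.
topology = {
    "sources": [
        {"name": "rtspSource", "type": "RtspSource",
         "endpoint": "rtsp://camera.example/stream"},  # hypothetical URL
    ],
    "processors": [
        {"name": "motionDetection", "type": "MotionDetectionProcessor",
         "input": "rtspSource"},
    ],
    "sinks": [
        {"name": "assetSink", "type": "AssetSink",
         "input": "motionDetection"},
    ],
}

def node_chain(t):
    """Walk the graph from source to sink, returning node names in order."""
    chain = [t["sources"][0]["name"]]
    for group in ("processors", "sinks"):
        for node in t[group]:
            if node["input"] == chain[-1]:
                chain.append(node["name"])
    return chain

print(node_chain(topology))  # ['rtspSource', 'motionDetection', 'assetSink']
```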
The following video gives a walkthrough of how to use the _video analytics - object and motion detection application template_ to deploy an IoT Central solution:
In this set of tutorials, you learn how to:
## Prerequisites
-An Azure subscription is recommended. Alternatively, you can use a free, 7-day trial. If you don't have an Azure subscription, you can create one on the [Azure sign-up page](https://aka.ms/createazuresubscription).
+* There are no specific prerequisites required to deploy this app.
+* You can use the free pricing plan or use an Azure subscription.
## Deploy the application
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart-linux.md
az group list
In this quickstart, you created an IoT Edge device and used the Azure IoT Edge cloud interface to deploy code onto the device. Now, you have a test device generating raw data about its environment.
-The next step is to set up your local development environment so that you can start creating IoT Edge modules that run your business logic.
+In the next tutorial, you'll learn how to monitor the activity and health of your device from the Azure portal.
> [!div class="nextstepaction"]
-> [Start developing IoT Edge modules for Linux devices](tutorial-develop-for-linux.md)
+> [Monitor IoT Edge devices](tutorial-monitor-with-workbooks.md)
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart.md
Use the dashboard extension in Windows Admin Center to uninstall Azure IoT Edge
In this quickstart, you created an IoT Edge device and used the Azure IoT Edge cloud interface to deploy code onto the device. Now you have a test device generating raw data about its environment.
-Next, set up your local development environment so that you can start creating IoT Edge modules that run your business logic.
+In the next tutorial, you'll learn how to monitor the activity and health of your device from the Azure portal.
> [!div class="nextstepaction"]
-> [Start developing IoT Edge modules](tutorial-develop-for-linux.md)
+> [Monitor IoT Edge devices](tutorial-monitor-with-workbooks.md)
iot-edge Tutorial Monitor With Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-monitor-with-workbooks.md
+
+ Title: Tutorial - Azure Monitor workbooks for IoT Edge
+description: Learn how to monitor IoT Edge modules and devices using Azure Monitor Workbooks for IoT
+Last updated: 08/13/2021
+# Tutorial: Monitor IoT Edge devices
++
+Use Azure Monitor workbooks to monitor the health and performance of your Azure IoT Edge deployments.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Understand what metrics are shared by IoT Edge devices and how the metrics collector module handles them.
+> * Deploy the metrics collector module to an IoT Edge device.
+> * View curated visualizations of the metrics collected from the device.
+
+## Prerequisites
+
+An IoT Edge device with the simulated temperature sensor module deployed to it. If you don't have a device ready, follow the steps in [Deploy your first IoT Edge module to a virtual Linux device](quickstart-linux.md) to create one using a virtual machine.
+
+## Understand IoT Edge metrics
+
+Every IoT Edge device relies on two modules, the *runtime modules*, which manage the lifecycle and communication of all the other modules on a device. These modules are called the **IoT Edge agent** and the **IoT Edge hub**. To learn more about these modules, see [Understand the Azure IoT Edge runtime and its architecture](iot-edge-runtime.md).
+
+Both of the runtime modules create metrics that allow you to remotely monitor how an IoT Edge device or its individual modules are performing. The IoT Edge agent reports on the state of individual modules and the host device, so creates metrics like how long a module has been running correctly, or the amount of RAM and percent of CPU being used on the device. The IoT Edge hub reports on communications on the device, so creates metrics like the total number of messages sent and received, or the time it takes to resolve a direct method. For the full list of available metrics, see [Access built-in metrics](how-to-access-built-in-metrics.md).
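The runtime modules expose these metrics in the Prometheus text exposition format. As an illustration, here's a minimal Python parser for a single metric line; the sample metric name and label below are shaped like an edgeHub counter but are illustrative, not the exact series the runtime emits.

```python
def parse_prometheus_line(line):
    """Parse one Prometheus text-format metric line into (name, labels, value).

    Minimal sketch: no escape handling, no timestamps, no HELP/TYPE comments.
    """
    name_part, value = line.rsplit(" ", 1)
    if "{" in name_part:
        name, label_body = name_part.split("{", 1)
        items = label_body.rstrip("}").split(",")
        labels = {}
        for item in items:
            key, val = item.split("=", 1)
            labels[key] = val.strip('"')
    else:
        name, labels = name_part, {}
    return name, labels, float(value)

# Illustrative metric line, not the runtime's exact label set:
sample = 'edgehub_messages_received_total{route_output="temperatureOutput"} 42'
print(parse_prometheus_line(sample))
```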
+
+These metrics are exposed automatically by both modules so that you can create your own solutions to access and report on these metrics. To make this process easier, Microsoft provides the [azureiotedge-metrics-collector module](https://hub.docker.com/_/microsoft-azureiotedge-metrics-collector) that handles this process for those who don't have or want a custom solution. The metrics collector module collects metrics from the two runtime modules and any other modules you may want to monitor, and transports them off-device.
+
+The metrics collector module works one of two ways to send your metrics to the cloud. The first option, which we'll use in this tutorial, is to send the metrics directly to Log Analytics. The second option, which is only recommended if your networking policies require it, is to send the metrics through IoT Hub and then set up a route to pass the metric messages to Log Analytics. Either way, once the metrics are in your Log Analytics workspace, they are available to view through Azure Monitor workbooks.
+
+## Create a Log Analytics workspace
+
+A Log Analytics workspace is necessary to collect the metrics data and provides a query language and integration with Azure Monitor to enable you to monitor your devices.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Log Analytics workspaces**.
+
+1. Select **Create** and then follow the prompts to create a new workspace.
+
+1. Once your workspace is created, select **Go to resource**.
+
+1. From the main menu under **Settings**, select **Agents management**.
+
+1. Copy the values of **Workspace ID** and **Primary key**. You'll use these two values later in the tutorial to configure the metrics collector module to send the metrics to this workspace.
+
+## Retrieve your IoT hub resource ID
+
+When you configure the metrics collector module, you give it the Azure Resource Manager resource ID for your IoT hub. Retrieve that ID now.
+
+1. From the Azure portal, navigate to your IoT hub.
+
+1. From the menu on the left, under **Settings**, select **Properties**.
+
+1. Copy the value of **Resource ID**. It should have the format `/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Devices/IoTHubs/<iot_hub_name>`.
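As a quick sanity check, the copied value can be matched against the resource ID shape shown above before you use it. This is an illustrative sketch only; the ID and names below are placeholders, not real resources.

```shell
# Illustrative sketch: check that a copied value has the expected IoT Hub
# resource ID shape. The ID below is a placeholder, not a real resource.
RESOURCE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Devices/IoTHubs/my-hub"

if [[ "$RESOURCE_ID" =~ ^/subscriptions/[^/]+/resourceGroups/[^/]+/providers/Microsoft\.Devices/IoTHubs/[^/]+$ ]]; then
  echo "looks like an IoT Hub resource ID"
else
  echo "unexpected format"
fi
```

A check like this can help catch copy-paste mistakes before you configure the module.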
+
+## Deploy the metrics collector module
+
+Deploy the metrics collector module to every device that you want to monitor. It runs on the device like any other module, and watches its assigned endpoints for metrics to collect and send to the cloud.
+
+Follow these steps to deploy and configure the collector module:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT hub.
+
+1. From the menu on the left, under **Automatic Device Management**, select **IoT Edge**.
+
+1. Select the device ID of the target device from the list of IoT Edge devices to open the device details page.
+
+1. On the upper menu bar, select **Set Modules** to open the three-step module deployment page.
+
+1. The first step of deploying modules from the portal is to declare which **Modules** should be on a device. If you are using the same device that you created in the quickstart, you should already see **SimulatedTemperatureSensor** listed. If not, add it now:
+
+ 1. Select **Add** then choose **Marketplace Module** from the drop-down menu.
+
+ 1. Search for and select **SimulatedTemperatureSensor**.
+
+1. Add and configure the metrics collector module:
+
+ 1. Select **Add** then choose **Marketplace Module** from the drop-down menu.
+ 1. Search for and select **IoT Edge Metrics Collector**.
+ 1. Select the metrics collector module from the list of modules to open its configuration details page.
+ 1. Navigate to the **Environment Variables** tab.
+ 1. Update the following values:
+
+ | Name | Value |
+ | - | -- |
+ | **ResourceId** | Your IoT hub resource ID that you retrieved in a previous section. |
+ | **UploadTarget** | `AzureMonitor` |
+ | **LogAnalyticsWorkspaceId** | Your Log Analytics workspace ID that you retrieved in a previous section. |
+ | **LogAnalyticsSharedKey** | Your Log Analytics key that you retrieved in a previous section. |
+
+ 1. Delete the **OtherConfig** environment variable, which is a placeholder for extra configuration options you may want to add in the future. It's not necessary for this tutorial.
+ 1. Select **Update** to save your changes.
+
+1. Select **Next: Routes** to continue to the second step for deploying modules.
+
+1. The portal automatically adds a route for the metrics collector. You would use this route if you configured the collector module to send the metrics through IoT Hub, but in this tutorial we're sending the metrics directly to Log Analytics, so we don't need it. Delete the **FromMetricsCollectorToUpstream** route.
+
+1. Select **Review + create** to continue to the final step for deploying modules.
+
+1. Select **Create** to finish the deployment.
+
+After completing the module deployment, you return to the device details page where you can see four modules listed as **Specified in Deployment**. It may take a few moments for all four modules to be listed as **Reported by Device**, which means that they've been successfully started and reported their status to IoT Hub. Refresh the page to see the latest status.
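For reference, the portal steps above set plain name/value environment variables on the metrics collector module. The following sketch writes those four values out as a simple JSON fragment so you can review them together; the file name and all values are placeholders, and this is not the exact deployment manifest schema.

```shell
# Illustrative sketch: the four environment variables configured in the portal,
# written as a simple name/value JSON fragment. All values are placeholders,
# and this is not the exact deployment manifest schema.
cat > metrics-collector-env.json <<'EOF'
{
  "ResourceId": "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Devices/IoTHubs/<iot_hub_name>",
  "UploadTarget": "AzureMonitor",
  "LogAnalyticsWorkspaceId": "<workspace_id>",
  "LogAnalyticsSharedKey": "<primary_key>"
}
EOF

# Count the configured settings (expect 4).
grep -c '":' metrics-collector-env.json
```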
+
+## Monitor device health
+
+It may take up to fifteen minutes for your device monitoring workbooks to be ready to view. Once you deploy the metrics collector module, it starts sending metrics messages to Log Analytics where they're organized within a table. The IoT Hub resource ID that you provided links the metrics that are ingested to the hub that they belong to. As a result, the curated IoT Edge workbooks can retrieve metrics by querying against the metrics table using the resource ID.
+
+Azure Monitor provides three default workbook templates for IoT:
+
+* The **IoT Edge fleet view** workbook shows an overview of active devices so that you can identify any unhealthy devices and drill down into how each device is performing. This workbook also shows alerts generated from any alert rules that you may create.
+* The **IoT Edge device details** workbook provides visualizations around three categories: messaging, modules, and host. The messaging view visualizes the message routes for a device and reports on the overall health of the messaging system. The modules view shows how the individual modules on a device are performing. The host view shows information about the host device including version information for host components and resource use.
+* The **IoT Edge health snapshot** workbook measures device signals against configurable thresholds to determine whether a device is healthy or not. This workbook can only be accessed from within the fleet view workbook, which passes the parameters required to initialize the health snapshot of a particular device.
+
+### Explore the fleet view and health snapshot workbooks
+
+The fleet view workbook shows all of your devices, and lets you select specific devices to view their health snapshots. Use the following steps to explore the workbook visualizations:
+
+1. Return to your IoT hub page in the Azure portal.
+
+1. Scroll down in the main menu to find the **Monitoring** section, and select **Workbooks**.
+
+ :::image type="content" source="./media/tutorial-monitor-with-workbooks/workbooks-gallery.png" alt-text="Select workbooks to open the Azure Monitor workbooks gallery.":::
+
+1. Select the **IoT Edge fleet view** workbook.
+
+1. You should see your device that's running the metrics collector module. The device is listed as either **healthy** or **unhealthy**.
+
+1. Select the device name to open the **IoT Edge health snapshot** and view specific details about the device health.
+
+1. On any of the time charts, use the arrow icons under the X-axis or click on the chart and drag your cursor to change the time range.
+
+ :::image type="content" source="./media/tutorial-monitor-with-workbooks/health-snapshot-custom-time-range.png" alt-text="Click and drag or use the arrow icons on any chart to change the time range.":::
+
+1. Close the health snapshot workbook. Select **Workbooks** from the fleet view workbook to return to the workbooks gallery.
+
+### Explore the device details workbook
+
+The device details workbook shows performance details for an individual device. Use the following steps to explore the workbook visualizations:
+
+1. From the workbooks gallery, select the **IoT Edge device details** workbook.
+
+1. The first page you see in the device details workbook is the **messaging** view with the **routing** tab selected.
+
+ On the left, a table displays the routes on the device, organized by endpoint. For our device, we see that the **upstream** endpoint, which is the special term used for routing to IoT Hub, is receiving messages from the **temperatureOutput** output of the simulated temperature sensor module.
+
+ On the right, a graph keeps track of the number of connected clients over time. You can click and drag the graph to change the time range.
+
+ :::image type="content" source="./media/tutorial-monitor-with-workbooks/device-details-messaging-routing.png" alt-text="Select the messaging view to see the status of communications on the device.":::
+
+1. Select the **graph** tab to see a different visualization of the routes. On the graph page, you can drag and drop the different endpoints to rearrange the graph. This feature is helpful when you have many routes to visualize.
+
+ :::image type="content" source="./media/tutorial-monitor-with-workbooks/device-details-messaging-graph.png" alt-text="Select the graph view to see an interactive graph of the device routes.":::
+
+1. The **health** tab reports any issues with messaging, like dropped messages or disconnected clients.
+
+1. Select the **modules** view to see the status of all the modules deployed on the device. You can select each of the modules to see details about how much CPU and memory they use.
+
+ :::image type="content" source="./media/tutorial-monitor-with-workbooks/device-details-modules-availability.png" alt-text="Select the modules view to see the status of each module deployed to the device.":::
+
+1. Select the **host** view to see information about the host device, including its operating system, the IoT Edge daemon version, and resource use.
+
+## View module logs
+
+After viewing the metrics for a device, you might want to dive in further and inspect the individual modules. IoT Edge provides troubleshooting support in the Azure portal with a live module log feature.
+
+1. From the device details workbook, select **Troubleshoot live**.
+
+ :::image type="content" source="./media/tutorial-monitor-with-workbooks/device-details-troubleshoot-live.png" alt-text="Select the troubleshoot live button from the top-right of the device details workbook.":::
+
+1. The troubleshooting page opens to the **edgeAgent** logs from your IoT Edge device. If you selected a specific time range in the device details workbook, that setting is passed through to the troubleshooting page.
+
+1. Use the dropdown menu to switch to the logs of other modules running on the device. Use the **Restart** button to restart a module.
+
+ :::image type="content" source="./media/tutorial-monitor-with-workbooks/troubleshoot-device.png" alt-text="Use the dropdown menu to view the logs of different modules and use the restart button to restart modules.":::
+
+The troubleshooting page can also be accessed from an IoT Edge device's details page. For more information, see [Troubleshoot IoT Edge devices from the Azure portal](troubleshoot-in-portal.md).
+
+## Next steps
+
+As you continue through the rest of the tutorials, keep the metrics collector module on your devices and return to these workbooks to see how the information changes as you add more complex modules and routing.
+
+Continue to the next tutorial where you set up your developer environment to start deploying custom modules to your devices.
+
+> [!div class="nextstepaction"]
+> [Develop IoT Edge modules with Linux containers](tutorial-develop-for-linux.md)
key-vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/recovery.md
Title: Azure Key Vault Managed HSM recovery overview | Microsoft Docs
-description: Managed HSM Recovery features are designed to prevent the accidental or malicious deletion of your HSM resource and keys.
+description: Managed HSM recovery features are designed to prevent the accidental or malicious deletion of your HSM resource and keys.
Last updated 06/01/2021
-# Managed HSM soft delete and purge protection
+# Managed HSM soft-delete and purge protection
-This article covers two recovery features of Managed HSM, soft delete and purge protection. This document provides an overview of these features, and shows you how to manage them through Azure CLI and Azure PowerShell.
+This article describes two recovery features of Managed HSM: soft-delete and purge protection. It provides an overview of these features and demonstrates how to manage them by using the Azure CLI and Azure PowerShell.
-For more information about Managed HSM, see [Managed HSM overview](overview.md)
+For more information, see [Managed HSM overview](overview.md).
## Prerequisites
-* An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* [PowerShell module](/powershell/azure/install-az-ps).
-* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
-* A Managed HSM - you can create one using [Azure CLI](./quick-create-cli.md), or [Azure PowerShell](./quick-create-powershell.md)
-* The user will need the following permissions to perform operations on soft-deleted HSMs or keys:
+* An Azure subscription. [Create one for free](https://azure.microsoft.com/free/dotnet).
+* The [PowerShell module](/powershell/azure/install-az-ps).
+* Azure CLI 2.25.0 or later. Run `az --version` to determine which version you have. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* A managed HSM. You can create one by using the [Azure CLI](./quick-create-cli.md) or [Azure PowerShell](./quick-create-powershell.md).
+* Users will need the following permissions to perform operations on soft-deleted HSMs or keys:
- | Role Assignment | Description |
+ | Role assignment | Description |
|||
- |[Managed HSM Contributor](../../role-based-access-control/built-in-roles.md#managed-hsm-contributor)|To list, recover and purge soft-deleted HSMs|
+ |[Managed HSM Contributor](../../role-based-access-control/built-in-roles.md#managed-hsm-contributor)|List, recover, and purge soft-deleted HSMs|
|[Managed HSM Crypto User](./built-in-roles.md)|List soft-deleted keys|
- |[Managed HSM Crypto Officer](./built-in-roles.md)|Purge or recover soft-deleted keys|
+ |[Managed HSM Crypto Officer](./built-in-roles.md)|Purge and recover soft-deleted keys|
-## What are soft-delete and purge protection
+## What are soft-delete and purge protection?
-[Soft delete](soft-delete-overview.md) and purge protection are two different recovery features.
+[Soft-delete](soft-delete-overview.md) and purge protection are recovery features.
-**Soft delete** is designed to prevent accidental deletion of your HSM and keys. Think of soft-delete like a recycle bin. When you delete an HSM or a key, it will remain recoverable for a user configurable retention period or a default of 90 days. HSMs and keys in the soft deleted state can also be **purged** which means they are permanently deleted. This allows you to recreate HSMs and keys with the same name. Both recovering and deleting HSMs and keys require specific role assignments. **Soft delete cannot be disabled.**
+*Soft-delete* is designed to prevent accidental deletion of your HSM and keys. Soft-delete works like a recycle bin. When you delete an HSM or a key, it will remain recoverable for a configurable retention period or for a default period of 90 days. HSMs and keys in the soft-deleted state can also be *purged*, which means they're permanently deleted. Purging allows you to re-create HSMs and keys with the same name as the purged item. Both recovering and deleting HSMs and keys require specific role assignments. Soft-delete can't be disabled.
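As a concrete illustration of the retention window, the date on which a soft-deleted item becomes purgeable is just the deletion date plus the retention period. A minimal sketch using GNU `date` (the deletion date here is an arbitrary example):

```shell
# Illustrative sketch: when does a soft-deleted item reach the end of its
# retention period, assuming the default of 90 days? Requires GNU date.
DELETED_ON="2021-06-01"   # arbitrary example deletion date
RETENTION_DAYS=90         # default retention period
date -d "$DELETED_ON + $RETENTION_DAYS days" +%Y-%m-%d   # → 2021-08-30
```

Until that date, the item remains recoverable; with purge protection enabled, it also can't be purged before then.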
> [!NOTE]
-> Since the underlying resources remain allocated to your HSM, even when it is in deleted state, the HSM resource will continue to accrue hourly charges while in deleted state.
+> Because the underlying resources remain allocated to your HSM even when it's in a deleted state, the HSM resource will continue to accrue hourly charges while it's in that state.
-It is important to note that **Managed HSM names are globally unique** in every cloud environment, so you won't be able to create a Managed HSM with the same name if one exists in a soft deleted state. Similarly, the names of keys are unique within an HSM. You won't be able to create a new key if one exists in the soft deleted state.
+Managed HSM names are globally unique in every cloud environment. So you can't create a managed HSM with the same name as one that exists in a soft-deleted state. Similarly, the names of keys are unique within an HSM. You can't create a key with the same name as one that exists in the soft-deleted state.
-**Purge protection** is designed to prevent the deletion of your HSMs and keys by a malicious insider. Think of this as a recycle bin with a time based lock. You can recover items at any point during the configurable retention period. **You will not be able to permanently delete or purge an HSM or a key until the retention period elapses.** Once the retention period elapses the HSM or key will be purged automatically.
+For more information, see [Managed HSM soft-delete overview](soft-delete-overview.md).
-> [!NOTE]
-> Purge Protection is designed so that no administrator role or permission can override, disable, or circumvent purge protection. **Once purge protection is enabled, it cannot be disabled or overridden by anyone including Microsoft.** This means you must recover a deleted HSM or wait for the retention period to elapse before reusing the HSM name.
+*Purge protection* is designed to prevent the deletion of your HSMs and keys by a malicious insider. It's like a recycle bin with a time-based lock. You can recover items at any point during the configurable retention period. You won't be able to permanently delete or purge an HSM or a key until the retention period ends. When the retention period ends, the HSM or key will be purged automatically.
-For more information about soft-delete, see [Managed HSM soft-delete overview](soft-delete-overview.md)
+> [!NOTE]
+> No administrator role or permission can override, disable, or circumvent purge protection. *If purge protection is enabled, it can't be disabled or overridden by anyone, including Microsoft.* So you must recover a deleted HSM or wait for the retention period to end before you can reuse the HSM name.
+## Manage keys and managed HSMs
# [Azure CLI](#tab/azure-cli)
-## Managed HSM (CLI)
+### Managed HSMs (CLI)
-* Check status of soft-delete and purge protection for a Managed HSM
+* To check the status of soft-delete and purge protection for a managed HSM:
```azurecli az keyvault show --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} --hsm-name {HSM NAME} ```
-* Delete HSM (recoverable, since soft delete is enabled by default)
+* To delete an HSM:
```azurecli az keyvault delete --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} --hsm-name {HSM NAME} ```
+
+ This action is recoverable because soft-delete is on by default.
-* List all soft-deleted HSMs
+* To list all soft-deleted HSMs:
```azurecli az keyvault list-deleted --subscription {SUBSCRIPTION ID} --resource-type hsm ```
-* Recover soft-deleted HSM
+* To recover a soft-deleted HSM:
```azurecli az keyvault recover --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME} --location {LOCATION} ```
-* Purge soft-deleted HSM
+* To purge a soft-deleted HSM:
```azurecli az keyvault purge --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME} --location {LOCATION} ``` > [!WARNING]
- > This operation will permanently delete your HSM
+ > This operation will permanently delete your HSM.
-* Enable purge-protection on HSM
+* To enable purge protection on an HSM:
```azurecli az keyvault update-hsm --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} --hsm-name {HSM NAME} --enable-purge-protection true ```
-## Keys (CLI)
+### Keys (CLI)
-* Delete key
+* To delete a key:
```azurecli az keyvault key delete --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME} --name {KEY NAME} ```
-* List deleted keys
+* To list deleted keys:
```azurecli az keyvault key list-deleted --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME} ```
-* Recover deleted key
+* To recover a deleted key:
```azurecli az keyvault key recover --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME} --name {KEY NAME} ```
-* Purge soft-deleted key
+* To purge a soft-deleted key:
```azurecli az keyvault key purge --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME} --name {KEY NAME} ``` > [!WARNING]
- > This operation will permanently delete your key
+ > This operation will permanently delete your key.
# [Azure PowerShell](#tab/azure-powershell)
-## Managed HSM (PowerShell)
+### Managed HSMs (PowerShell)
-* Check status of soft-delete and purge protection for a Managed HSM
+* To check the status of soft-delete and purge protection for a managed HSM:
```powershell Get-AzKeyVaultManagedHsm -Name "ContosoHSM" ```
-* Delete HSM (recoverable, since soft-delete is on by default)
+* To delete an HSM:
```powershell Remove-AzKeyVaultManagedHsm -Name 'ContosoHSM' ```
-> [!NOTE]
-> Additional Managed HSM soft-delete and purge protection PowerShell commands will be enabled soon.
-
+ This action is recoverable because soft-delete is on by default.
-## Keys (PowerShell)
+### Keys (PowerShell)
-* Delete a Key
+* To delete a key:
```powershell Remove-AzKeyVaultKey -HsmName ContosoHSM -Name 'MyKey' ```
-* List all deleted keys
+* To list all deleted keys:
```powershell Get-AzKeyVaultKey -HsmName ContosoHSM -InRemovedState ```
-* To recover a soft-deleted key
+* To recover a soft-deleted key:
```powershell Undo-AzKeyVaultKeyRemoval -HsmName ContosoHSM -Name ContosoFirstKey ```
-* Purge a soft-deleted key
+* To purge a soft-deleted key:
```powershell Remove-AzKeyVaultKey -HsmName ContosoHSM -Name ContosoFirstKey -InRemovedState ``` > [!WARNING]
- > This operation will permanently delete your key
+ > This operation will permanently delete your key.
For more information about soft-delete, see [Managed HSM soft-delete overview](s
- [Managed HSM PowerShell cmdlets](/powershell/module/az.keyvault) - [Key Vault Azure CLI commands](/cli/azure/keyvault)-- [Managed HSM full backup/restore](backup-restore.md)
+- [Managed HSM full backup and restore](backup-restore.md)
- [How to enable Managed HSM logging](logging.md)
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/soft-delete-overview.md
Title: Azure Key Vault Managed HSM soft-delete | Microsoft Docs
-description: Soft-delete in Managed HSM allows you to recover deleted HSM instances and keys.
+description: Soft-delete in Managed HSM allows you to recover deleted HSM instances and keys. This article provides an overview of the feature.
Last updated 06/01/2021
# Managed HSM soft-delete overview > [!IMPORTANT]
-> Soft-delete is on by default for Managed HSM resources. It cannot be turned off.
+> Soft-delete can't be turned off for Managed HSM resources.
> [!IMPORTANT]
-> Soft-deleted Managed HSM resources will continue to be billed at their full hourly rate until they are purged.
+> Soft-deleted Managed HSM resources will continue to be billed at their full hourly rate until they're purged.
-Managed HSM's soft-delete feature allows recovery of the deleted HSMs and keys. Specifically, this safeguard offer the following protections:
+The Managed HSM soft-delete feature allows recovery of deleted HSMs and keys. Specifically, this feature provides the following safeguards:
-- Once an HSM or a key is deleted, it remains recoverable for a configurable period of 7 to 90 calendar days. Retention period can be set during HSM creation. If no value is specified, the default retention period will be set to 90 days. This provides users with sufficient time to notice an accidental key or HSM deletion and respond.-- Two operations must be performed to permanently delete a key. First a user must delete the key, which puts it into the soft-deleted state. Second, a user must purge the key in the soft-deleted state. The purge operation requires user to have a "Managed HSM Crypto Officer" role assigned. These extra protections reduce the risk of a user accidentally or maliciously deleting a key or an HSM.
+- After an HSM or key is deleted, it remains recoverable for a configurable period of 7 to 90 calendar days. You can set the retention period when you create an HSM. If you don't specify a value, the default retention period of 90 days will be used. This period gives users enough time to notice an accidental key or HSM deletion and respond.
+- To permanently delete a key, users need to take two actions. First, they must delete the key, which puts it into the soft-deleted state. Second, they must purge the key in the soft-deleted state. The purge operation requires the Managed HSM Crypto Officer role. These extra safeguards reduce the risk of a user accidentally or maliciously deleting a key or an HSM.
## Soft-delete behavior
-For Managed-HSM soft-delete is on by default. You cannot create a Managed HSM resource with soft-delete disabled.
+Soft-delete can't be turned off for Managed HSM resources.
-With soft-delete is enabled, resources marked as deleted are retained for a specified period (90 days by default). The service further provides a mechanism for recovering the deleted HSMs or keys, allowing to undo the deletion.
+Resources marked as deleted are kept for a specified period. There's also a mechanism for recovering deleted HSMs or keys, so you can undo deletions.
-The default retention period is 90 days, however, during HSM resource creation, it is possible to set the retention policy interval to a value from 7 to 90 days. The purge protection retention policy uses the same interval. Once set, the retention policy interval cannot be changed.
+The default retention period is 90 days. When you create an HSM resource, you can set the retention policy interval to a value from 7 to 90 days. The purge protection retention policy uses the same interval. After you set the retention policy, you can't change it.
-You cannot reuse the name of an HSM resource that has been soft-deleted until the retention period has passed and the HSM resource is purged.
+You can't reuse the name of an HSM resource that's been soft-deleted until the retention period ends and the HSM resource is purged (permanently deleted).
## Purge protection
-Purge protection is an optional behavior and is **not enabled by default**. It can be turned on via [CLI](./recovery.md?tabs=azure-cli) or [PowerShell](./recovery.md?tabs=azure-powershell).
+Purge protection is an optional behavior. It's not enabled by default. You can turn it on by using the [Azure CLI](./recovery.md?tabs=azure-cli) or [PowerShell](./recovery.md?tabs=azure-powershell).
-When purge protection is on, an HSM or a key in the deleted state cannot be purged until the retention period has passed. Soft-deleted HSMs and keys can still be recovered, ensuring that the retention policy will be followed.
+When purge protection is on, an HSM or key in the deleted state can't be purged until the retention period ends. Soft-deleted HSMs and keys can still be recovered, which ensures the retention policy will be in effect.
-The default retention period is 90 days, but it is possible to set the retention policy interval to a value from 7 to 90 days at the time of creating an HSM. Retention policy interval can only be set at the time of creating an HSM. It cannot be changed later.
+The default retention period is 90 days. You can set the retention policy interval to a value from 7 to 90 days when you create an HSM. The retention policy interval can be set only when you create an HSM. It can't be changed later.
-See [How to use Managed HSM soft-delete with CLI](./recovery.md?tabs=azure-cli#managed-hsm-cli) or [How to use Managed HSM soft-delete with PowerShell](./recovery.md?tabs=azure-powershell#managed-hsm-powershell).
+See [How to use Managed HSM soft-delete with CLI](./recovery.md?tabs=azure-cli#managed-hsms-cli) or [How to use Managed HSM soft-delete with PowerShell](./recovery.md?tabs=azure-powershell#managed-hsms-powershell).
## Managed HSM recovery
-Upon deleting an HSM, the service creates a proxy resource under the subscription, adding sufficient metadata for recovery. The proxy resource is a stored object, available in the same location as the deleted HSM.
+When you delete an HSM, the service creates a proxy resource in the subscription, adding enough metadata to enable recovery. The proxy resource is a stored object. It's available in the same location as the deleted HSM.
## Key recovery
-Upon deleting a key, the service will place the it in a deleted state, making it inaccessible to any operations. While in this state, the keys can be listed, recovered, or purged (permanently deleted). To view the objects, use the Azure CLI `az keyvault key list-deleted` command (as documented in [Managed HSM soft-delete and purge protection with CLI](./recovery.md?tabs=azure-cli#keys-cli)), or the Azure PowerShell `-InRemovedState` parameter (as described in [Managed HSM soft-delete and purge protection with PowerShell](./recovery.md?tabs=azure-powershell#keys-powershell)).
+When you delete a key, the service will put it in a deleted state, making it inaccessible to any operations. While in this state, keys can be listed, recovered, or purged. To view the objects, use the Azure CLI `az keyvault key list-deleted` command (described in [Managed HSM soft-delete and purge protection with CLI](./recovery.md?tabs=azure-cli#keys-cli)) or the Azure PowerShell `-InRemovedState` parameter (described in [Managed HSM soft-delete and purge protection with PowerShell](./recovery.md?tabs=azure-powershell#keys-powershell)).
-At the same time, Managed HSM will schedule the deletion of the underlying data corresponding to the deleted HSM or key for execution after a predetermined retention interval. The DNS record corresponding to the HSM is also retained during the retention interval.
+When you delete the key, Managed HSM will schedule the deletion of the underlying data that corresponds to the deleted HSM or key to occur after a predetermined retention interval. The DNS record that corresponds to the HSM is also kept during the retention interval.
## Soft-delete retention period
-Soft-deleted resources are retained for a set period of time, 90 days. During the soft-delete retention interval, the following apply:
+Soft-deleted resources are kept for a set period of time: 90 days. During the soft-delete retention interval, these conditions apply:
-- You may list all the HSMs and keys in the soft-delete state for your subscription as well as access deletion and recovery information about them.
- - Only users with Managed HSM Contributor role can list deleted HSMs. We recommend that our users create a custom role with these special permissions for handling deleted vaults.
-- A managed HSM with the same name cannot be created in the same location; correspondingly, a key cannot be created in a given HSM if it contains a key with the same name in a deleted state.-- Only users with "Managed HSM Contributor" role can list, view, recover, and purge managed HSMs.-- Only users with "Managed HSM Crypto Officer" role can list, view, recover, and purge keys.
+- You can list all the HSMs and keys in the soft-delete state for your subscription. You can also access deletion and recovery information about them.
+- Only users with the Managed HSM Contributor role can list deleted HSMs. We recommend that you create a custom role with these permissions for handling deleted vaults.
+- Managed HSM names must be unique in a given location. When you create a key, you can't use a name if the HSM contains a key with that name in a deleted state.
+- Only users with the Managed HSM Contributor role can list, view, recover, and purge managed HSMs.
+- Only users with the Managed HSM Crypto Officer role can list, view, recover, and purge keys.
-Unless a Managed HSM or key is recovered, at the end of the retention interval the service performs a purge of the soft-deleted HSM or key. Resource deletion may not be rescheduled.
+Unless a managed HSM or key is recovered, at the end of the retention interval, the service performs a purge of the soft-deleted HSM or key. You can't reschedule the deletion of resources.
### Billing implications
-Managed HSM is a single-tenant service. When you create a Managed HSM, the service reserves underlying resources allocated to your HSM. These resources remain allocated even when the HSM is in deleted state. Therefore, you will be billed for the HSM while it is in deleted state.
+Managed HSM is a single-tenant service. When you create a managed HSM, the service reserves underlying resources allocated to your HSM. These resources remain allocated even when the HSM is in a deleted state. You'll be billed for the HSM while it's in a deleted state.
## Next steps
-The following two guides offer the primary usage scenarios for using soft-delete.
+These articles describe the main scenarios for using soft-delete:
- [How to use Managed HSM soft-delete with PowerShell](./recovery.md?tabs=azure-powershell) -- [How to use Managed HSM soft-delete with CLI](./recovery.md?tabs=azure-cli)
+- [How to use Managed HSM soft-delete with the Azure CLI](./recovery.md?tabs=azure-cli)
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/cross-region-overview.md
Cross-region load balancer routes the traffic to the appropriate regional load b
* Cross-region IPv6 frontend IP configurations aren't supported.
+* UDP traffic is not supported on Cross-region Load Balancer.
+ * A health probe can't be configured currently. A default health probe automatically collects availability information about the regional load balancer every 20 seconds.
+ * Integration with Azure Kubernetes Service (AKS) is currently unavailable. Loss of connectivity will occur when you deploy a cross-region load balancer with a Standard load balancer that has an AKS cluster deployed in the backend.
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-designer.md
Title: What is the Azure Machine Learning designer?
-description: Learn about the concepts that make up the drag-and-drop Azure Machine Learning designer.
+description: Learn what the Azure Machine Learning designer is and what tasks you can use it for. The drag-and-drop UI enables model training and deployment.
Azure Machine Learning designer is a drag-and-drop interface used to train and deploy models in Azure Machine Learning. This article describes the tasks you can do in the designer.
-To get started with the designer, see [Tutorial: Train a no-code regression model](tutorial-designer-automobile-price-train-score.md).
+ - To get started with the designer, see [Tutorial: Train a no-code regression model](tutorial-designer-automobile-price-train-score.md).
+ - To learn about the components available in the designer, see the [Algorithm and component reference](/azure/machine-learning/algorithm-module-reference/module-reference).
![Azure Machine Learning designer example](./media/concept-designer/designer-drag-and-drop.gif)
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-power-bi-tenant.md
To set up authentication, create a security group and add the Purview managed id
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/allow-service-principals-power-bi-admin.png" alt-text="Image showing how to allow service principals to get read-only Power BI admin API permissions":::
+1. Select **Admin API settings** > **Enhance admin APIs responses with detailed metadata**, and then enable the toggle to allow the Purview Data Map to automatically discover the detailed metadata of Power BI datasets as part of its scans.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-sub-artifacts.png" alt-text="Image showing the Power BI admin portal config to enable sub-artifact scan":::
+ > [!Caution] > When you allow the security group you created (that has your Purview managed identity as a member) to use read-only Power BI admin APIs, you also allow it to access the metadata (for example, dashboard and report names, owners, and descriptions) for all of your Power BI artifacts in this tenant. After the metadata has been pulled into Azure Purview, Purview's permissions, not Power BI permissions, determine who can see that metadata.
Now that you've given the Purview Managed Identity permissions to connect to the
> [!Note] > * Switching the configuration of a scan to include or exclude a personal workspace will trigger a full scan of the Power BI source
- > * The scan name must be between 3-63 characters long and must contain only letters, numbers, underscores, and hyphens. Spaces aren't allowed.
- > * Schema is unavailable in the schema tab.
-5. Set up a scan trigger. Your options are **Once**, **Every 7 days**, and **Every 30 days**.
+5. Select **Test Connection** before continuing to the next steps. If **Test Connection** fails, select **View Report** to see the detailed status and troubleshoot the problem:
+ 1. Access - A Failed status means the user authentication failed. Scans that use a managed identity will always pass, because no user authentication is required. [Check whether you have provided correct credentials for delegated authentication](register-scan-power-bi-tenant.md#register-and-scan-a-cross-tenant-power-bi)
+ 1. Assets (+ lineage) - A Failed status means that authorization between Purview and Power BI has failed. Make sure the [Purview managed identity is added to the security group associated in the Power BI admin portal](register-scan-power-bi-tenant.md#create-a-security-group-for-permissions)
+ 1. Detailed metadata (Enhanced) - A Failed status means that the following setting is disabled in the Power BI admin portal: **Enhance admin APIs responses with detailed metadata**
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="test connection status report":::
+
+1. Set up a scan trigger. Your options are **Once**, **Every 7 days**, and **Every 30 days**.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Scan trigger image":::
Now that you've given the Purview Managed Identity permissions to connect to the
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan.png" alt-text="Save and run Power BI screen image":::
-## Register and scan a cross-tenant Power BI
+## Register and scan a cross-tenant Power BI
-In a cross-tenant scenario, you can use PowerShell to register and scan your Power BI tenants, however, you can view, browse and search assets of remote tenant using Azure Purview Studio through the UI experience.
+In a cross-tenant scenario, you can use PowerShell to register and scan your Power BI tenants. You can browse and search assets of the remote tenant by using Azure Purview Studio.
Consider using this guide if the Azure AD tenant where your Power BI tenant is located is different from the Azure AD tenant where your Azure Purview account is provisioned. Use the following steps to register and scan one or more Power BI tenants in Azure Purview in a cross-tenant scenario:
Use the following steps to register and scan one or more Power BI tenants in Azu
New-AzADApplication -DisplayName $AppName -Password $SecureStringPassword
```
- 2. From Azure Active Directory dashboard select newly created application and then select **App registration**. Assign the application the following delegated permissions and grant admin consent for the tenant:
+ 2. From Azure Active Directory dashboard, select newly created application and then select **App registration**. Assign the application the following delegated permissions and grant admin consent for the tenant:
- Power BI Service Tenant.Read.All - Microsoft Graph openid
- 3. From Azure Active Directory dashboard select newly created application and then select **Authentication**. Under **Supported account types** select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
+ 3. From Azure Active Directory dashboard, select newly created application and then select **Authentication**. Under **Supported account types** select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
- 4. Construct tenant specific sign-in URL for your service principal by running the following url in your web browser:
+ 4. Construct a tenant-specific sign-in URL for your service principal by opening the following URL in your web browser:
https://login.microsoftonline.com/<purview_tenant_id>/oauth2/v2.0/authorize?client_id=<client_id_to_delegate_the_pbi_admin>&scope=openid&response_type=id_token&response_mode=fragment&state=1234&nonce=67890
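As a sketch, the sign-in URL above can be assembled programmatically. The function name `build_signin_url` is hypothetical; only the tenant ID and client ID come from your environment, and the `state` and `nonce` values here mirror the sample values in the URL:

```python
from urllib.parse import urlencode

def build_signin_url(tenant_id: str, client_id: str) -> str:
    """Build the tenant-specific authorize URL used to consent the service principal."""
    params = {
        "client_id": client_id,
        "scope": "openid",
        "response_type": "id_token",
        "response_mode": "fragment",
        "state": "1234",
        "nonce": "67890",
    }
    return (
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize?"
        + urlencode(params)
    )

# Example with placeholder identifiers:
print(build_signin_url("<purview_tenant_id>", "<client_id_to_delegate_the_pbi_admin>"))
```

Opening the resulting URL in a browser signs you in against the Purview tenant and triggers the consent prompt for the multitenant application.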
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/built-in-roles.md
Previously updated : 07/26/2021 Last updated : 08/04/2021
The following table provides a brief description of each built-in role. Click th
> | [SignalR App Server (Preview)](#signalr-app-server-preview) | Lets your app server access SignalR Service with AAD auth options. | 420fcaa2-552c-430f-98ca-3264be4806c7 | > | [SignalR Contributor](#signalr-contributor) | Create, Read, Update, and Delete SignalR service resources | 8cf5e20a-e4b2-4e9d-b3a1-5ceb692c2761 | > | [SignalR Serverless Contributor (Preview)](#signalr-serverless-contributor-preview) | Lets your app access service in serverless mode with AAD auth options. | fd53cd77-2268-407a-8f46-7e7863d0f521 |
-> | [SignalR Service Owner (Preview)](#signalr-service-owner-preview) | Full access to Azure SignalR Service REST APIs | 7e4f1700-ea5a-4f59-8f37-079cfe29dce3 |
+> | [SignalR Service Owner](#signalr-service-owner) | Full access to Azure SignalR Service REST APIs | 7e4f1700-ea5a-4f59-8f37-079cfe29dce3 |
> | [SignalR Service Reader (Preview)](#signalr-service-reader-preview) | Read-only access to Azure SignalR Service REST APIs | ddde6b66-c0df-4114-a159-3618637b3035 | > | [Web Plan Contributor](#web-plan-contributor) | Lets you manage the web plans for websites, but not access to them. | 2cc479cb-7b4d-49a8-b449-8c00fd0f0a4b | > | [Website Contributor](#website-contributor) | Lets you manage websites (not web plans), but not access to them. | de139f84-1756-47ae-9be6-808fbbe84772 |
The following table provides a brief description of each built-in role. Click th
> | **Blockchain** | | | > | [Blockchain Member Node Access (Preview)](#blockchain-member-node-access-preview) | Allows for access to Blockchain Member nodes | 31a002a1-acaf-453e-8a5b-297c9ca1ea24 | > | **AI + machine learning** | | |
+> | [AzureML Data Scientist](#azureml-data-scientist) | Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself. | f6c7c914-8db3-469d-8ca1-694a8f32e121 |
> | [Cognitive Services Contributor](#cognitive-services-contributor) | Lets you create, read, update, delete and manage keys of Cognitive Services. | 25fbc0a9-bd7c-42a3-aa1a-3b75d497ee68 | > | [Cognitive Services Custom Vision Contributor](#cognitive-services-custom-vision-contributor) | Full access to the project, including the ability to view, create, edit, or delete projects. | c1ff6cc2-c111-46fe-8896-e0ef812ad9f3 | > | [Cognitive Services Custom Vision Deployment](#cognitive-services-custom-vision-deployment) | Publish, unpublish or export models. Deployment can view the project but can't update. | 5c4089e1-6d96-4d2f-b296-c1bc7137275f |
View Virtual Machines in the portal and login as administrator [Learn more](../a
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/read | Gets a load balancer definition | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/*/read | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/*/read | |
> | **NotActions** | | > | *none* | | > | **DataActions** | | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/login/action | Log in to a virtual machine as a regular user | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/loginAsAdmin/action | Log in to a virtual machine with Windows administrator or Linux root user privileges |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/login/action | Log in to an Azure Arc machine as a regular user |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/loginAsAdmin/action | Log in to an Azure Arc machine with Windows administrator or Linux root user privileges |
> | **NotDataActions** | | > | *none* | |
View Virtual Machines in the portal and login as administrator [Learn more](../a
"Microsoft.Network/virtualNetworks/read", "Microsoft.Network/loadBalancers/read", "Microsoft.Network/networkInterfaces/read",
- "Microsoft.Compute/virtualMachines/*/read"
+ "Microsoft.Compute/virtualMachines/*/read",
+ "Microsoft.HybridCompute/machines/*/read"
], "notActions": [], "dataActions": [ "Microsoft.Compute/virtualMachines/login/action",
- "Microsoft.Compute/virtualMachines/loginAsAdmin/action"
+ "Microsoft.Compute/virtualMachines/loginAsAdmin/action",
+ "Microsoft.HybridCompute/machines/login/action",
+ "Microsoft.HybridCompute/machines/loginAsAdmin/action"
], "notDataActions": [] }
View Virtual Machines in the portal and login as a regular user. [Learn more](..
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/read | Gets a load balancer definition | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/*/read | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/*/read | |
> | **NotActions** | | > | *none* | | > | **DataActions** | | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/login/action | Log in to a virtual machine as a regular user |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/login/action | Log in to an Azure Arc machine as a regular user |
> | **NotDataActions** | | > | *none* | |
View Virtual Machines in the portal and login as a regular user. [Learn more](..
"Microsoft.Network/virtualNetworks/read", "Microsoft.Network/loadBalancers/read", "Microsoft.Network/networkInterfaces/read",
- "Microsoft.Compute/virtualMachines/*/read"
+ "Microsoft.Compute/virtualMachines/*/read",
+ "Microsoft.HybridCompute/machines/*/read"
], "notActions": [], "dataActions": [
- "Microsoft.Compute/virtualMachines/login/action"
+ "Microsoft.Compute/virtualMachines/login/action",
+ "Microsoft.HybridCompute/machines/login/action"
], "notDataActions": [] }
Lets your app access service in serverless mode with AAD auth options.
} ```
-### SignalR Service Owner (Preview)
+### SignalR Service Owner
Full access to Azure SignalR Service REST APIs
Full access to Azure SignalR Service REST APIs
> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/send/action | Send messages directly to a client connection. | > | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/read | Check client connection existence. | > | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/write | Close client connection. |
+> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/serverConnection/write | Start a server connection. |
> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/send/action | Send messages to user, who may consist of multiple client connections. | > | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/read | Check user existence. | > | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/write | Modify a user. |
Full access to Azure SignalR Service REST APIs
"Microsoft.SignalRService/SignalR/clientConnection/send/action", "Microsoft.SignalRService/SignalR/clientConnection/read", "Microsoft.SignalRService/SignalR/clientConnection/write",
+ "Microsoft.SignalRService/SignalR/serverConnection/write",
"Microsoft.SignalRService/SignalR/user/send/action", "Microsoft.SignalRService/SignalR/user/read", "Microsoft.SignalRService/SignalR/user/write"
Full access to Azure SignalR Service REST APIs
"notDataActions": [] } ],
- "roleName": "SignalR Service Owner (Preview)",
+ "roleName": "SignalR Service Owner",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" }
Allows for access to Blockchain Member nodes [Learn more](../blockchain/service/
## AI + machine learning
+### AzureML Data Scientist
+
+Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/read | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/action | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/delete | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/write | |
+> | **NotActions** | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/delete | Deletes the Machine Learning Services Workspace(s) |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/write | Creates or updates a Machine Learning Services Workspace(s) |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/computes/*/write | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/computes/*/delete | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/computes/listKeys/action | List secrets for compute resources in Machine Learning Services Workspace |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/listKeys/action | List secrets for a Machine Learning Services Workspace |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/f6c7c914-8db3-469d-8ca1-694a8f32e121",
+ "name": "f6c7c914-8db3-469d-8ca1-694a8f32e121",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.MachineLearningServices/workspaces/*/read",
+ "Microsoft.MachineLearningServices/workspaces/*/action",
+ "Microsoft.MachineLearningServices/workspaces/*/delete",
+ "Microsoft.MachineLearningServices/workspaces/*/write"
+ ],
+ "notActions": [
+ "Microsoft.MachineLearningServices/workspaces/delete",
+ "Microsoft.MachineLearningServices/workspaces/write",
+ "Microsoft.MachineLearningServices/workspaces/computes/*/write",
+ "Microsoft.MachineLearningServices/workspaces/computes/*/delete",
+ "Microsoft.MachineLearningServices/workspaces/computes/listKeys/action",
+ "Microsoft.MachineLearningServices/workspaces/listKeys/action"
+ ],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "AzureML Data Scientist",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
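The role definition above grants broad wildcard `Actions` and then carves out compute and workspace management through `NotActions`. As a simplified sketch of how that evaluation works (an operation is granted when it matches at least one `Actions` pattern and no `NotActions` pattern; `NotActions` subtract from the grant, they are not deny rules), using the patterns from this role:

```python
from fnmatch import fnmatch

# Patterns copied from the AzureML Data Scientist role definition above.
ACTIONS = [
    "Microsoft.MachineLearningServices/workspaces/*/read",
    "Microsoft.MachineLearningServices/workspaces/*/action",
    "Microsoft.MachineLearningServices/workspaces/*/delete",
    "Microsoft.MachineLearningServices/workspaces/*/write",
]
NOT_ACTIONS = [
    "Microsoft.MachineLearningServices/workspaces/delete",
    "Microsoft.MachineLearningServices/workspaces/write",
    "Microsoft.MachineLearningServices/workspaces/computes/*/write",
    "Microsoft.MachineLearningServices/workspaces/computes/*/delete",
    "Microsoft.MachineLearningServices/workspaces/computes/listKeys/action",
    "Microsoft.MachineLearningServices/workspaces/listKeys/action",
]

def is_granted(operation: str) -> bool:
    """Granted if some Actions pattern matches and no NotActions pattern matches."""
    allowed = any(fnmatch(operation, p) for p in ACTIONS)
    excluded = any(fnmatch(operation, p) for p in NOT_ACTIONS)
    return allowed and not excluded

# A data scientist can read workspace experiments, but the NotActions
# carve-outs block deleting compute and modifying the workspace itself.
print(is_granted("Microsoft.MachineLearningServices/workspaces/experiments/runs/read"))  # True
print(is_granted("Microsoft.MachineLearningServices/workspaces/computes/cpu1/delete"))   # False
print(is_granted("Microsoft.MachineLearningServices/workspaces/write"))                  # False
```

This is only an illustration of the subtraction semantics; real Azure RBAC evaluation also combines all of a principal's role assignments, deny assignments, and scopes.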
+ ### Cognitive Services Contributor Lets you create, read, update, delete and manage keys of Cognitive Services. [Learn more](../cognitive-services/cognitive-services-virtual-networks.md)
Lets you create, read, update, delete and manage keys of Cognitive Services. [Le
> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/* | | > | [Microsoft.Features](resource-provider-operations.md#microsoftfeatures)/features/read | Gets the features of a subscription. | > | [Microsoft.Features](resource-provider-operations.md#microsoftfeatures)/providers/features/read | Gets the feature of a subscription in a given resource provider. |
+> | [Microsoft.Features](resource-provider-operations.md#microsoftfeatures)/providers/features/register/action | Registers the feature for a subscription in a given resource provider. |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/diagnosticSettings/* | Creates, updates, or reads the diagnostic setting for Analysis Server | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/logDefinitions/read | Read log definitions |
Lets you create, read, update, delete and manage keys of Cognitive Services. [Le
"Microsoft.CognitiveServices/*", "Microsoft.Features/features/read", "Microsoft.Features/providers/features/read",
+ "Microsoft.Features/providers/features/register/action",
"Microsoft.Insights/alertRules/*", "Microsoft.Insights/diagnosticSettings/*", "Microsoft.Insights/logDefinitions/read",
View permissions for Security Center. Can view recommendations, alerts, a securi
> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/*/read | Read security components and policies | > | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/*/read | | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/*/read | |
-> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotDefenderSettings/packageDownloads/action | |
-> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotDefenderSettings/downloadManagerActivation/action | |
-> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotSensors/downloadResetPassword/action | |
+> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotDefenderSettings/packageDownloads/action | Gets downloadable IoT Defender packages information |
+> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotDefenderSettings/downloadManagerActivation/action | Download manager activation file with subscription quota data |
+> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotSensors/downloadResetPassword/action | Downloads reset password file for IoT Sensors |
> | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/defenderSettings/packageDownloads/action | Gets downloadable IoT Defender packages information | > | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/defenderSettings/downloadManagerActivation/action | Download manager activation file | > | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/sensors/* | |
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 07/26/2021 Last updated : 08/04/2021
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/locations/read | Reads an availability check resource. | > | Microsoft.NetApp/locations/checknameavailability/action | Check if resource name is available | > | Microsoft.NetApp/locations/checkfilepathavailability/action | Check if file path is available |
+> | Microsoft.NetApp/locations/checkinventory/action | Checks ReservedCapacity inventory. |
> | Microsoft.NetApp/locations/operationresults/read | Reads an operation result resource. | > | Microsoft.NetApp/netAppAccounts/read | Reads an account resource. | > | Microsoft.NetApp/netAppAccounts/write | Writes an account resource. |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/AuthorizeReplication/action | Authorize the source volume replication | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/ResyncReplication/action | Resync the replication on the destination volume | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/DeleteReplication/action | Delete the replication on the destination volume |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/PoolChange/action | Moves volume to another pool. |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/read | Reads a backup resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/write | Writes a backup resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/delete | Deletes a backup resource. |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/read | Reads a snapshot resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/write | Writes a snapshot resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/delete | Deletes a snapshot resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/write | Write a subvolume resource. |
> | Microsoft.NetApp/netAppAccounts/ipsecPolicies/read | Reads a IPSec policy resource. | > | Microsoft.NetApp/netAppAccounts/ipsecPolicies/write | Writes an IPSec policy resource. | > | Microsoft.NetApp/netAppAccounts/ipsecPolicies/delete | Deletes a IPSec policy resource. |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/sites/backups/delete | Delete Web Apps Backups. | > | microsoft.web/sites/backups/write | Update Web Apps Backups. | > | Microsoft.Web/sites/basicPublishingCredentialsPolicies/Read | List which publishing methods are allowed for a Web App |
+> | Microsoft.Web/sites/basicPublishingCredentialsPolicies/Write | Update which publishing methods are allowed for a Web App |
> | Microsoft.Web/sites/basicPublishingCredentialsPolicies/ftp/Read | Get whether FTP publishing credentials are allowed for a Web App | > | Microsoft.Web/sites/basicPublishingCredentialsPolicies/ftp/Write | Update whether FTP publishing credentials are allowed for a Web App | > | Microsoft.Web/sites/basicPublishingCredentialsPolicies/scm/Read | Get whether SCM publishing credentials are allowed for a Web App |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/sites/slots/backups/restore/action | Restore Web Apps Slots Backups. | > | microsoft.web/sites/slots/backups/delete | Delete Web Apps Slots Backups. | > | Microsoft.Web/sites/slots/basicPublishingCredentialsPolicies/Read | List which publishing credentials are allowed for a Web App Slot |
+> | Microsoft.Web/sites/slots/basicPublishingCredentialsPolicies/Write | Update which publishing credentials are allowed for a Web App Slot |
> | Microsoft.Web/sites/slots/basicPublishingCredentialsPolicies/ftp/Read | Get whether FTP publishing credentials are allowed for a Web App Slot | > | Microsoft.Web/sites/slots/basicPublishingCredentialsPolicies/ftp/Write | Update whether FTP publishing credentials are allowed for a Web App Slot | > | Microsoft.Web/sites/slots/basicPublishingCredentialsPolicies/scm/Read | Get whether SCM publishing credentials are allowed for a Web App Slot |
Azure service: [HDInsight](../hdinsight/index.yml)
> | Microsoft.HDInsight/clusters/getGatewaySettings/action | Get gateway settings for HDInsight Cluster | > | Microsoft.HDInsight/clusters/updateGatewaySettings/action | Update gateway settings for HDInsight Cluster | > | Microsoft.HDInsight/clusters/configurations/action | Get HDInsight Cluster Configurations |
+> | Microsoft.HDInsight/clusters/executeScriptActions/action | Execute Script Actions for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/resolvePrivateLinkServiceId/action | Resolve Private Link Service ID for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/privateEndpointConnectionsApproval/action | Auto Approve Private Endpoint Connections for HDInsight Cluster |
> | Microsoft.HDInsight/clusters/applications/read | Get Application for HDInsight Cluster | > | Microsoft.HDInsight/clusters/applications/write | Create or Update Application for HDInsight Cluster | > | Microsoft.HDInsight/clusters/applications/delete | Delete Application for HDInsight Cluster | > | Microsoft.HDInsight/clusters/configurations/read | Get HDInsight Cluster Configurations |
+> | Microsoft.HDInsight/clusters/executeScriptActions/azureasyncoperations/read | Get Script Action status for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/executeScriptActions/operationresults/read | Get Script Action status for HDInsight Cluster |
> | Microsoft.HDInsight/clusters/extensions/write | Create Cluster Extension for HDInsight Cluster | > | Microsoft.HDInsight/clusters/extensions/read | Get Cluster Extension for HDInsight Cluster | > | Microsoft.HDInsight/clusters/extensions/delete | Delete Cluster Extension for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/privateLinkResources/read | Get Private Link Resources for HDInsight Cluster |
> | Microsoft.HDInsight/clusters/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the resource HDInsight Cluster | > | Microsoft.HDInsight/clusters/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource HDInsight Cluster | > | Microsoft.HDInsight/clusters/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for HDInsight Cluster | > | Microsoft.HDInsight/clusters/roles/resize/action | Resize a HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/scriptActions/read | Get persisted Script Actions for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/scriptActions/delete | Delete persisted Script Actions for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/scriptExecutionHistory/read | Get Script Actions history for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/scriptExecutionHistory/promote/action | Promote Script Action for HDInsight Cluster |
> | Microsoft.HDInsight/locations/capabilities/read | Get Subscription Capabilities | > | Microsoft.HDInsight/locations/checkNameAvailability/read | Check Name Availability |
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/openidConnectProviders/delete | Deletes specific OpenID Connect Provider of the API Management service instance. | > | Microsoft.ApiManagement/service/openidConnectProviders/listSecrets/action | Gets specific OpenID Connect Provider secrets. | > | Microsoft.ApiManagement/service/operationresults/read | Gets current status of long running operation |
+> | Microsoft.ApiManagement/service/outboundNetworkDependenciesEndpoints/read | Gets the outbound network dependency status of resources that the service depends on. |
> | Microsoft.ApiManagement/service/policies/read | Lists all the Global Policy definitions of the Api Management service. or Get the Global policy definition of the Api Management service. | > | Microsoft.ApiManagement/service/policies/write | Creates or updates the global policy configuration of the Api Management service. | > | Microsoft.ApiManagement/service/policies/delete | Deletes the global policy configuration of the Api Management Service. |
Azure service: [Security Center](../security-center/index.yml)
> | Microsoft.Security/deviceSecurityGroups/read | Gets IoT device security groups | > | Microsoft.Security/informationProtectionPolicies/read | Gets the information protection policies for the resource | > | Microsoft.Security/informationProtectionPolicies/write | Updates the information protection policies for the resource |
+> | Microsoft.Security/iotDefenderSettings/read | Gets IoT Defender Settings |
+> | Microsoft.Security/iotDefenderSettings/write | Create or updates IoT Defender Settings |
+> | Microsoft.Security/iotDefenderSettings/delete | Deletes IoT Defender Settings |
+> | Microsoft.Security/iotDefenderSettings/PackageDownloads/action | Gets downloadable IoT Defender packages information |
+> | Microsoft.Security/iotDefenderSettings/DownloadManagerActivation/action | Download manager activation file with subscription quota data |
> | Microsoft.Security/iotSecuritySolutions/write | Creates or updates IoT security solutions |
> | Microsoft.Security/iotSecuritySolutions/delete | Deletes IoT security solutions |
> | Microsoft.Security/iotSecuritySolutions/read | Gets IoT security solutions |
> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT security analytics model |
> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT alert types |
+> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT alert types |
+> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT alerts |
> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT alerts |
> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT recommendation types |
+> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT recommendation types |
+> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT recommendations |
> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets IoT recommendations |
+> | Microsoft.Security/iotSecuritySolutions/analyticsModels/read | Gets devices |
> | Microsoft.Security/iotSecuritySolutions/analyticsModels/aggregatedAlerts/read | Gets IoT aggregated alerts |
> | Microsoft.Security/iotSecuritySolutions/analyticsModels/aggregatedAlerts/dismiss/action | Dismisses IoT aggregated alerts |
> | Microsoft.Security/iotSecuritySolutions/analyticsModels/aggregatedRecommendations/read | Gets IoT aggregated recommendations |
+> | Microsoft.Security/iotSensors/read | Gets IoT Sensors |
+> | Microsoft.Security/iotSensors/write | Create or updates IoT Sensors |
+> | Microsoft.Security/iotSensors/delete | Deletes IoT Sensors |
+> | Microsoft.Security/iotSensors/DownloadActivation/action | Downloads activation file for IoT Sensors |
+> | Microsoft.Security/iotSensors/TriggerTiPackageUpdate/action | Triggers threat intelligence package update |
+> | Microsoft.Security/iotSensors/DownloadResetPassword/action | Downloads reset password file for IoT Sensors |
+> | Microsoft.Security/iotSite/read | Gets IoT site |
+> | Microsoft.Security/iotSite/write | Creates or updates IoT site |
+> | Microsoft.Security/iotSite/delete | Deletes IoT site |
> | Microsoft.Security/locations/read | Gets the security data location |
> | Microsoft.Security/locations/alerts/read | Gets all available security alerts |
> | Microsoft.Security/locations/alerts/dismiss/action | Dismiss a security alert |
> | Microsoft.Security/locations/alerts/activate/action | Activate a security alert |
+> | Microsoft.Security/locations/alerts/resolve/action | Resolve a security alert |
+> | Microsoft.Security/locations/alerts/simulate/action | Simulate a security alert |
> | Microsoft.Security/locations/jitNetworkAccessPolicies/read | Gets the just-in-time network access policies |
> | Microsoft.Security/locations/jitNetworkAccessPolicies/write | Creates a new just-in-time network access policy or updates an existing one |
> | Microsoft.Security/locations/jitNetworkAccessPolicies/delete | Deletes the just-in-time network access policy |
Azure service: [Security Center](../security-center/index.yml)
> | Microsoft.Security/pricings/read | Gets the pricing settings for the scope |
> | Microsoft.Security/pricings/write | Updates the pricing settings for the scope |
> | Microsoft.Security/pricings/delete | Deletes the pricing settings for the scope |
+> | Microsoft.Security/secureScoreControlDefinitions/read | Get secure score control definition |
+> | Microsoft.Security/secureScoreControls/read | Get calculated secure score control for your subscription |
+> | Microsoft.Security/secureScores/read | Get calculated secure score for your subscription |
+> | Microsoft.Security/secureScores/secureScoreControls/read | Get calculated secure score control for your secure score calculation |
> | Microsoft.Security/securityContacts/read | Gets the security contact |
> | Microsoft.Security/securityContacts/write | Updates the security contact |
> | Microsoft.Security/securityContacts/delete | Deletes the security contact |
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.OffAzure/MasterSites/privateEndpointConnectionProxies/validate/action | Validate a Private Endpoint Connection Proxy |
> | Microsoft.OffAzure/MasterSites/privateEndpointConnectionProxies/write | Create or Update a Private Endpoint Connection Proxy |
> | Microsoft.OffAzure/MasterSites/privateEndpointConnectionProxies/delete | Delete a Private Endpoint Connection Proxy |
+> | Microsoft.OffAzure/MasterSites/privateEndpointConnectionProxies/operationsstatus/read | Get status of a long running operation on a Private Endpoint Connection Proxy |
+> | Microsoft.OffAzure/MasterSites/privateEndpointConnectionProxies/operationsstatus/read | Get status of a long running operation on a Private Endpoint Connection Proxy |
+> | Microsoft.OffAzure/MasterSites/privateEndpointConnectionProxies/operationsstatus/read | Get status of a long running operation on a Private Endpoint Connection Proxy |
+> | Microsoft.OffAzure/MasterSites/privateEndpointConnectionProxies/operationsstatus/read | Get status of a long running operation on a Private Endpoint Connection Proxy |
+> | Microsoft.OffAzure/MasterSites/privateEndpointConnectionProxies/operationsstatus/read | Get status of a long running operation on a Private Endpoint Connection Proxy |
> | Microsoft.OffAzure/MasterSites/privateEndpointConnections/read | Get Private Endpoint Connection |
> | Microsoft.OffAzure/MasterSites/privateEndpointConnections/write | Update a Private Endpoint Connection |
> | Microsoft.OffAzure/MasterSites/privateEndpointConnections/delete | Delete a Private Endpoint Connection |
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.OffAzure/ServerSites/delete | Deletes the Server site |
> | Microsoft.OffAzure/ServerSites/refresh/action | Refreshes the objects within a Server site |
> | Microsoft.OffAzure/ServerSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/ServerSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/ServerSites/read | Gets the properties of a Server site |
> | Microsoft.OffAzure/ServerSites/write | Creates or updates the Server site |
> | Microsoft.OffAzure/ServerSites/delete | Deletes the Server site |
> | Microsoft.OffAzure/ServerSites/refresh/action | Refreshes the objects within a Server site |
> | Microsoft.OffAzure/ServerSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/ServerSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/ServerSites/read | Gets the properties of a Server site |
> | Microsoft.OffAzure/ServerSites/write | Creates or updates the Server site |
> | Microsoft.OffAzure/ServerSites/delete | Deletes the Server site |
> | Microsoft.OffAzure/ServerSites/refresh/action | Refreshes the objects within a Server site |
> | Microsoft.OffAzure/ServerSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/ServerSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/ServerSites/read | Gets the properties of a Server site |
> | Microsoft.OffAzure/ServerSites/write | Creates or updates the Server site |
> | Microsoft.OffAzure/ServerSites/delete | Deletes the Server site |
> | Microsoft.OffAzure/ServerSites/refresh/action | Refreshes the objects within a Server site |
> | Microsoft.OffAzure/ServerSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/ServerSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/ServerSites/read | Gets the properties of a Server site |
> | Microsoft.OffAzure/ServerSites/write | Creates or updates the Server site |
> | Microsoft.OffAzure/ServerSites/delete | Deletes the Server site |
> | Microsoft.OffAzure/ServerSites/refresh/action | Refreshes the objects within a Server site |
> | Microsoft.OffAzure/ServerSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/ServerSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/ServerSites/jobs/read | Gets the properties of a Server jobs |
> | Microsoft.OffAzure/ServerSites/jobs/read | Gets the properties of a Server jobs |
> | Microsoft.OffAzure/ServerSites/jobs/read | Gets the properties of a Server jobs |
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.OffAzure/ServerSites/jobs/read | Gets the properties of a Server jobs |
> | Microsoft.OffAzure/ServerSites/machines/read | Gets the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/write | Write the properties of a Server machines |
+> | Microsoft.OffAzure/ServerSites/machines/delete | Delete the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/read | Gets the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/write | Write the properties of a Server machines |
+> | Microsoft.OffAzure/ServerSites/machines/delete | Delete the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/read | Gets the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/write | Write the properties of a Server machines |
+> | Microsoft.OffAzure/ServerSites/machines/delete | Delete the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/read | Gets the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/write | Write the properties of a Server machines |
+> | Microsoft.OffAzure/ServerSites/machines/delete | Delete the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/read | Gets the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/machines/write | Write the properties of a Server machines |
+> | Microsoft.OffAzure/ServerSites/machines/delete | Delete the properties of a Server machines |
> | Microsoft.OffAzure/ServerSites/operationsstatus/read | Gets the properties of a Server operation status |
> | Microsoft.OffAzure/ServerSites/operationsstatus/read | Gets the properties of a Server operation status |
> | Microsoft.OffAzure/ServerSites/operationsstatus/read | Gets the properties of a Server operation status |
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.OffAzure/VMwareSites/refresh/action | Refreshes the objects within a VMware site |
> | Microsoft.OffAzure/VMwareSites/exportapplications/action | Exports the VMware applications and roles data into xls |
> | Microsoft.OffAzure/VMwareSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/VMwareSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/VMwareSites/generateCoarseMap/action | Generates the coarse map for the list of machines |
> | Microsoft.OffAzure/VMwareSites/generateDetailedMap/action | Generates the Detailed VMware Coarse Map |
> | Microsoft.OffAzure/VMwareSites/clientGroupMembers/action | Lists the client group members for the selected client group. |
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.OffAzure/VMwareSites/refresh/action | Refreshes the objects within a VMware site |
> | Microsoft.OffAzure/VMwareSites/exportapplications/action | Exports the VMware applications and roles data into xls |
> | Microsoft.OffAzure/VMwareSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/VMwareSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/VMwareSites/generateCoarseMap/action | Generates the coarse map for the list of machines |
> | Microsoft.OffAzure/VMwareSites/generateDetailedMap/action | Generates the Detailed VMware Coarse Map |
> | Microsoft.OffAzure/VMwareSites/clientGroupMembers/action | Lists the client group members for the selected client group. |
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.OffAzure/VMwareSites/refresh/action | Refreshes the objects within a VMware site |
> | Microsoft.OffAzure/VMwareSites/exportapplications/action | Exports the VMware applications and roles data into xls |
> | Microsoft.OffAzure/VMwareSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/VMwareSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/VMwareSites/generateCoarseMap/action | Generates the coarse map for the list of machines |
> | Microsoft.OffAzure/VMwareSites/generateDetailedMap/action | Generates the Detailed VMware Coarse Map |
> | Microsoft.OffAzure/VMwareSites/clientGroupMembers/action | Lists the client group members for the selected client group. |
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.OffAzure/VMwareSites/refresh/action | Refreshes the objects within a VMware site |
> | Microsoft.OffAzure/VMwareSites/exportapplications/action | Exports the VMware applications and roles data into xls |
> | Microsoft.OffAzure/VMwareSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/VMwareSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/VMwareSites/generateCoarseMap/action | Generates the coarse map for the list of machines |
> | Microsoft.OffAzure/VMwareSites/generateDetailedMap/action | Generates the Detailed VMware Coarse Map |
> | Microsoft.OffAzure/VMwareSites/clientGroupMembers/action | Lists the client group members for the selected client group. |
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> | Microsoft.OffAzure/VMwareSites/refresh/action | Refreshes the objects within a VMware site |
> | Microsoft.OffAzure/VMwareSites/exportapplications/action | Exports the VMware applications and roles data into xls |
> | Microsoft.OffAzure/VMwareSites/updateProperties/action | Updates the properties for machines in a site |
+> | Microsoft.OffAzure/VMwareSites/updateTags/action | Updates the tags for machines in a site |
> | Microsoft.OffAzure/VMwareSites/generateCoarseMap/action | Generates the coarse map for the list of machines |
> | Microsoft.OffAzure/VMwareSites/generateDetailedMap/action | Generates the Detailed VMware Coarse Map |
> | Microsoft.OffAzure/VMwareSites/clientGroupMembers/action | Lists the client group members for the selected client group. |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AmlDataSetEvent/read | Read data from the AmlDataSetEvent table |
> | Microsoft.OperationalInsights/workspaces/query/AmlDataStoreEvent/read | Read data from the AmlDataStoreEvent table |
> | Microsoft.OperationalInsights/workspaces/query/AmlDeploymentEvent/read | Read data from the AmlDeploymentEvent table |
+> | Microsoft.OperationalInsights/workspaces/query/AmlEnvironmentEvent/read | Read data from the AmlEnvironmentEvent table |
> | Microsoft.OperationalInsights/workspaces/query/AmlInferencingEvent/read | Read data from the AmlInferencingEvent table |
> | Microsoft.OperationalInsights/workspaces/query/AmlModelsEvent/read | Read data from the AmlModelsEvent table |
> | Microsoft.OperationalInsights/workspaces/query/AmlOnlineEndpointConsoleLog/read | Read data from the AmlOnlineEndpointConsoleLog table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/SucceededIngestion/read | Read data from the SucceededIngestion table |
> | Microsoft.OperationalInsights/workspaces/query/SynapseBigDataPoolApplicationsEnded/read | Read data from the SynapseBigDataPoolApplicationsEnded table |
> | Microsoft.OperationalInsights/workspaces/query/SynapseBuiltinSqlPoolRequestsEnded/read | Read data from the SynapseBuiltinSqlPoolRequestsEnded table |
+> | Microsoft.OperationalInsights/workspaces/query/SynapseDXCommand/read | Read data from the SynapseDXCommand table |
+> | Microsoft.OperationalInsights/workspaces/query/SynapseDXFailedIngestion/read | Read data from the SynapseDXFailedIngestion table |
+> | Microsoft.OperationalInsights/workspaces/query/SynapseDXIngestionBatching/read | Read data from the SynapseDXIngestionBatching table |
+> | Microsoft.OperationalInsights/workspaces/query/SynapseDXQuery/read | Read data from the SynapseDXQuery table |
+> | Microsoft.OperationalInsights/workspaces/query/SynapseDXSucceededIngestion/read | Read data from the SynapseDXSucceededIngestion table |
+> | Microsoft.OperationalInsights/workspaces/query/SynapseDXTableDetails/read | Read data from the SynapseDXTableDetails table |
+> | Microsoft.OperationalInsights/workspaces/query/SynapseDXTableUsageStatistics/read | Read data from the SynapseDXTableUsageStatistics table |
> | Microsoft.OperationalInsights/workspaces/query/SynapseGatewayApiRequests/read | Read data from the SynapseGatewayApiRequests table |
> | Microsoft.OperationalInsights/workspaces/query/SynapseGatewayEvents/read | Read data from the SynapseGatewayEvents table |
> | Microsoft.OperationalInsights/workspaces/query/SynapseIntegrationActivityRuns/read | Read data from the SynapseIntegrationActivityRuns table |
Azure service: [Automation](../automation/index.yml)
> | Microsoft.Automation/automationAccounts/diagnosticSettings/write | Sets the diagnostic setting for the resource |
> | Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads Hybrid Runbook Worker Resources |
> | Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes Hybrid Runbook Worker Resources |
+> | Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads Hybrid Runbook Worker Resources |
+> | Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes Hybrid Runbook Worker Resources |
> | Microsoft.Automation/automationAccounts/jobs/runbookContent/action | Gets the content of the Azure Automation runbook at the time of the job execution |
> | Microsoft.Automation/automationAccounts/jobs/read | Gets an Azure Automation job |
> | Microsoft.Automation/automationAccounts/jobs/write | Creates an Azure Automation job |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | microsoft.recoveryservices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
> | microsoft.recoveryservices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
+> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
> | microsoft.recoveryservices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
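Operation strings like those listed above follow the `{provider}/{resource}/{action}` pattern, and tables like these are the raw material for custom RBAC role definitions. As an illustrative sketch (the helper function and sample operations are examples added here, not part of the article), you can partition a candidate list into read-only and mutating operations when drafting a role's `actions` array:

```python
# Illustrative sketch: separate read-only Azure operations from mutating ones.
# By convention, operations ending in "/read" do not modify resources.
def partition_operations(operations):
    """Return (read_only, mutating) lists from a list of operation strings."""
    read_only, mutating = [], []
    for op in operations:
        (read_only if op.lower().endswith("/read") else mutating).append(op)
    return read_only, mutating

ops = [
    "Microsoft.Security/securityContacts/read",
    "Microsoft.Security/securityContacts/write",
    "Microsoft.Security/securityContacts/delete",
    "Microsoft.Security/locations/alerts/dismiss/action",
]
read_only, mutating = partition_operations(ops)
print(read_only)       # ['Microsoft.Security/securityContacts/read']
print(len(mutating))   # 3
```

To pull the authoritative operation list for a provider, `az provider operation show --namespace Microsoft.Security` returns the same data these tables are generated from.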
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-attach-cognitive-services.md
Last updated 08/12/2021
When configuring an [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure Cognitive Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable [multi-service Cognitive Services resource](../cognitive-services/cognitive-services-apis-create-account.md). A multi-service resource references "Cognitive Services" as the offering, rather than individual services, with access granted through a single API key.
-A multi-service resource key is specified in a skillset definition and allows Microsoft to charge you for using these APIs:
+A resource key is specified in a skillset and allows Microsoft to charge you for using these APIs:
+ [Computer Vision](https://azure.microsoft.com/services/cognitive-services/computer-vision/) for image analysis and optical character recognition (OCR)
+ [Text Analytics](https://azure.microsoft.com/services/cognitive-services/text-analytics/) for language detection, entity recognition, sentiment analysis, and key phrase extraction
+ [Text Translation](https://azure.microsoft.com/services/cognitive-services/translator-text-api/)
-The key is used for billing, but not connections. Internally, a search service connects to a Cognitive Services resource that's co-located in the same physical region. [Product availability](https://azure.microsoft.com/global-infrastructure/services/?products=search) shows regional availability side by side.
+The key is used for billing, but not connections. Internally, a search service connects to a Cognitive Services resource that's co-located in the [same physical region](https://azure.microsoft.com/global-infrastructure/services/?products=search).
## Key requirements
-A key is required for billable [built-in skills](cognitive-search-predefined-skills.md) that are used more than 20 times a day on the same indexer: Entity Linking, Entity Recognition, Image Analysis, Key Phrase Extraction, Language Detection, OCR, PII Detection, Sentiment, or Text Translation.
+A resource key is required for billable [built-in skills](cognitive-search-predefined-skills.md) that are used more than 20 times a day on the same indexer. Skills that make backend calls to Cognitive Services include Entity Linking, Entity Recognition, Image Analysis, Key Phrase Extraction, Language Detection, OCR, PII Detection, Sentiment, and Text Translation.
-[Custom Entity Lookup](cognitive-search-skill-custom-entity-lookup.md) requires a key to unlock transactions beyond 20 per indexer, per day. Note that adding the key unblocks the number of transactions, but is not used for billing.
+[Custom Entity Lookup](cognitive-search-skill-custom-entity-lookup.md) is metered by Azure Cognitive Search, not Cognitive Services, but it requires a Cognitive Services resource key to unlock transactions beyond 20 per indexer, per day. For this skill only, the resource key unblocks the number of transactions, but is unrelated to billing.
You can omit the key and the Cognitive Services section for skillsets that consist solely of custom skills or utility skills (Conditional, Document Extraction, Shaper, Text Merge, Text Split). You can also omit the section if your usage of billable skills is under 20 transactions per indexer per day.
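In the skillset definition, the attachment takes the form of a `cognitiveServices` section carrying the key. A minimal sketch, with a placeholder key and an arbitrary description label:

```json
"cognitiveServices": {
  "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
  "description": "my multi-service resource",
  "key": "<your-multi-service-resource-key>"
}
```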
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-annotations-syntax.md
Last updated 11/04/2019
-# How to reference annotations in an Azure Cognitive Search skillset
+# Reference annotations in an Azure Cognitive Search skillset
In this article, you learn how to reference annotations in skill definitions, using examples to illustrate various scenarios. As the content of a document flows through a set of skills, it gets enriched with annotations. Annotations can be used as inputs for further downstream enrichment, or mapped to an output field in an index.
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-defining-skillset.md
Previously updated : 08/14/2021
Last updated : 08/15/2021
# Create a skillset in Azure Cognitive Search
Some usage rules for skillsets include the following:
+ A skillset must contain at least one skill.
+ A skillset can repeat skills of the same type (for example, multiple Shaper skills).
-Recall that [indexers](search-howto-create-indexers.md) drive skillset execution, which means that you will also need a [data source](search-data-sources-gallery.md) and [index](search-what-is-an-index.md) before you can test your skillset.
+Recall that indexers drive skillset execution, which means that you will also need to create an [indexer](search-howto-create-indexers.md), [data source](search-data-sources-gallery.md), and [search index](search-what-is-an-index.md) before you can test your skillset.
> [!TIP]
-> In the early stages of skillset design, we recommend that you enable [enrichment caching](cognitive-search-incremental-indexing-conceptual.md) to lower the cost of development. You might also consider projecting skillset output to a [knowledge store](knowledge-store-concept-intro.md) so that you can view output more easily. Both caching and knowledge store require Azure Storage. You can use the same resource for both tasks.
+> Enable [enrichment caching](cognitive-search-incremental-indexing-conceptual.md) to reuse the content you've already processed and lower the cost of development.
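Enrichment caching is configured on the indexer rather than the skillset. A minimal sketch of the indexer's `cache` property (the connection string is a placeholder):

```json
"cache": {
  "storageConnectionString": "<azure-storage-connection-string>",
  "enableReprocessing": true
}
```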
## Skillset definition
-In the [REST API](/rest/api/searchservice/create-skillset), a skillset is authored in JSON and has the following sections:
+Start with the basic structure. In the [REST API](/rest/api/searchservice/create-skillset), a skillset is authored in JSON and has the following sections:
```json
{
In the [REST API](/rest/api/searchservice/create-skillset), a skillset is author
After the name and description, a skillset has four main properties:
-+ `skills` array, an unordered collection of skills, for which the search service determines the sequence of execution based on the inputs required for each skill. If skills are independent, they will execute in parallel. Skills can be utilitarian (like splitting text), transformational (based on AI from Cognitive Services), or custom skills that you provide. An example of a skills array is provided below.
++ `skills` array, an unordered [collection of skills](cognitive-search-predefined-skills.md), for which the search service determines the sequence of execution based on the inputs required for each skill. If skills are independent, they will execute in parallel. Skills can be utilitarian (like splitting text), transformational (based on AI from Cognitive Services), or custom skills that you provide. An example of a skills array is provided in the following section.
-+ `cognitiveServices` is used for [billable skills](cognitive-search-predefined-skills.md) that call Cognitive Services APIs. Remove this section if you aren't using billable skills or Custom Entity Lookup.
++ `cognitiveServices` is used for [billable skills](cognitive-search-predefined-skills.md) that call Cognitive Services APIs. Remove this section if you aren't using billable skills or Custom Entity Lookup. [Attach a resource](cognitive-search-attach-cognitive-services.md) if you are.
-+ `knowledgeStore`, (optional) specifies an Azure Storage account and settings for projecting skillset output into tables, blobs, and files in Azure Storage. Remove this section if you don't need a knowledge store.
++ `knowledgeStore`, (optional) specifies an Azure Storage account and settings for projecting skillset output into tables, blobs, and files in Azure Storage. Remove this section if you don't need it, otherwise [specify a knowledge store](knowledge-store-create-rest.md).
+ `encryptionKey`, (optional) specifies an Azure Key Vault and [customer-managed keys](search-security-manage-encryption-keys.md) used to encrypt sensitive content in a skillset definition. Remove this property if you aren't using customer-managed encryption.
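Taken together, the four properties give a skeleton like the following (the name and all placeholder values are illustrative, and the optional sections can be removed as described above):

```json
{
  "name": "my-skillset",
  "description": "Skeleton showing the four main properties",
  "skills": [ ],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
    "key": "<your-multi-service-resource-key>"
  },
  "knowledgeStore": { },
  "encryptionKey": { }
}
```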
-## Example of a skills array
+## Add a skills array
-The skills array specifies which skills to execute. This example shows two built-in skills that are unrelated. Both will iterate over the same documents, but neither consumes the output of the other.
+Within a skillset definition, the skills array specifies which skills to execute. The following example introduces you to its composition by showing you two unrelated, [built-in skills](cognitive-search-predefined-skills.md). Notice that each skill has a type, context, inputs, and outputs.
```json
"skills":[
The skills array specifies which skills to execute. This example shows two built
},
{
"@odata.type": "#Microsoft.Skills.Text.SentimentSkill",
+ "context": "/document",
"inputs": [
{
"name": "text",
The skills array specifies which skills to execute. This example shows two built
> [!NOTE]
> You can build complex skillsets with looping and branching, using the [Conditional skill](cognitive-search-skill-conditional.md) to create the expressions. The syntax is based on the [JSON Pointer](https://tools.ietf.org/html/rfc6901) path notation, with a few modifications to identify nodes in the enrichment tree. A `"/"` traverses a level lower in the tree and `"*"` acts as a for-each operator in the context. Numerous examples in this article illustrate the syntax.
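As one sketch of the path notation, a key phrase skill that runs *for each* page under `/document/pages` might look like the following (the `pages` collection is assumed to be produced by an earlier text split skill):

```json
{
  "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
  "context": "/document/pages/*",
  "inputs": [
    { "name": "text", "source": "/document/pages/*" }
  ],
  "outputs": [
    { "name": "keyPhrases", "targetName": "keyPhrases" }
  ]
}
```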
-## How built-in skills are structured
+### How built-in skills are structured
-Each built-in skill is unique in terms of the inputs and parameters it takes. The documentation for each skill describes all of the properties of a given skill. Although there are difference, most skills share a common set of parameters and are similarly patterned. To illustrate several points, the [Entity Recognition skill](cognitive-search-skill-entity-recognition-v3.md) provides an example:
+Each skill is unique in terms of its input values and the parameters it takes. The documentation for each skill describes all of the properties of a given skill. Although there are differences, most skills share a common set of parameters and are similarly patterned. To illustrate several points, the [Entity Recognition skill](cognitive-search-skill-entity-recognition-v3.md) provides an example:
```json
{
The second skill for sentiment analysis follows the same pattern as the first en
Below is an example of a [custom skill](cognitive-search-custom-skill-web-api.md). The URI points to an Azure Function, which in turn invokes the model or transformation that you provide. For more information, see [Define a custom interface](cognitive-search-custom-skill-interface.md).
-Context, inputs, and outputs read and write to an enrichment tree, just as the built-in skills do. Notice that the "context" field is set to `"/document/orgs/*"` with an asterisk, meaning the enrichment step is called *for each* organization under `"/document/orgs"`.
+Although the custom skill is executing code that is external to the pipeline, in a skills array, it's just another skill. Like the built-in skills, it has a type, context, inputs, and outputs. It also reads and writes to an enrichment tree, just as the built-in skills do. Notice that the "context" field is set to `"/document/orgs/*"` with an asterisk, meaning the enrichment step is called *for each* organization under `"/document/orgs"`.
-Output, in this case a company description, is generated for each organization identified. When referring to the description in a downstream step (for example, in key phrase extraction), you would use the path `"/document/orgs/*/description"` to do so.
+Output, in this case a company description, is generated for each organization identified. When referring to the description in a downstream step (for example, in key phrase extraction), you would use the path `"/document/orgs/*/companyDescription"` to do so.
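As a hedged sketch of such a downstream step (the skill shown is a plausible consumer, not one defined earlier in this article), a key phrase extraction skill could reference that per-organization output like this:

```json
{
  "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
  "context": "/document/orgs/*",
  "inputs": [
    { "name": "text", "source": "/document/orgs/*/companyDescription" }
  ],
  "outputs": [
    { "name": "keyPhrases", "targetName": "keyPhrases" }
  ]
}
```

Because both the context and the source use `/document/orgs/*`, the skill runs once per organization, each time receiving that organization's `companyDescription` node.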
```json {
Output, in this case a company description, is generated for each organization i
} ```
-## Skills output
-
-The skillset generates enriched documents that collect the output of each enrichment step. Consider the following example of unstructured text:
-
-*"In its fourth quarter, Microsoft logged $1.1 billion in revenue from LinkedIn, the social networking company it bought last year. The acquisition enables Microsoft to combine LinkedIn capabilities with its CRM and Office capabilities. Stockholders are excited with the progress so far."*
+## Send output to an index
-Using the sentiment analyzer and entity recognition, a likely outcome would be a generated structure similar to the following illustration:
+As each skill executes, its output is added as nodes in a document's enrichment tree. Enriched documents exist in the pipeline as temporary data structures. To create a permanent data structure, and gain full visibility into what a skill is actually producing, you will need to send the output to a search index or a [knowledge store](knowledge-store-concept-intro.md).
-![Sample output structure](media/cognitive-search-defining-skillset/enriched-doc.png "Sample output structure")
+In the early stages of skillset evaluation, you'll want to check preliminary results with minimal effort. We recommend the search index because it's simpler to set up. For each skill output, [define an output field mapping](cognitive-search-output-field-mapping.md) in the indexer, and a field in the index.
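For example, the indexer-side half of that mapping might look like the following sketch (the field names are illustrative assumptions):

```json
"outputFieldMappings": [
  {
    "sourceFieldName": "/document/orgs/*/description",
    "targetFieldName": "organizationDescriptions"
  }
]
```

Each `sourceFieldName` is a path to a node in the enrichment tree, and `targetFieldName` must match a field defined in the index schema.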
-Enriched documents exist in the enrichment pipeline as temporary data structures. To export the enrichments for consumption outside of the pipeline, follow one or more these approaches:
-+ Map skill outputs to [fields in a search index](cognitive-search-output-field-mapping.md)
-+ Map skill outputs to [data shapes](knowledge-store-projection-shape.md) for subsequent [projection into a knowledge store](knowledge-store-projections-examples.md)
-+ Send whole, enriched documents to blob storage via knowledge store
+After running the indexer, you can use [Search Explorer](search-explorer.md) to return documents from the index and check the contents of each field to determine what the skillset detected or created.
-You can also [cache enrichments](cognitive-search-incremental-indexing-conceptual.md), but the storage and format is not intended to be human-readable.
+The following example shows the results of an entity recognition skill that detected persons, locations, organizations, and other entities in a chunk of text. Viewing the results in Search Explorer can help you determine whether a skill adds value to your solution.
-<a name="next-step"></a>
## Next steps
-Context and input source fields are paths to nodes in an enrichment tree. As a next step, learn more about path syntax.
+Context and input source fields are paths to nodes in an enrichment tree. As a next step, learn more about the syntax for setting up paths to nodes in an enrichment tree.
> [!div class="nextstepaction"] > [Referencing annotations in a skillset](cognitive-search-concept-annotations-syntax.md).
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-output-field-mapping.md
Title: Map input to output fields
+ Title: Map skill output fields
-description: Extract and enrich source data fields, and map to output fields in an Azure Cognitive Search index.
+description: Export the enriched content created by a skillset by mapping its output fields to fields in a search index.
Last updated 08/10/2021
-# How to map AI-enriched fields to a searchable index
+# Map enrichment output to fields in a search index
![Indexer Stages](./media/cognitive-search-output-field-mapping/indexer-stages-output-field-mapping.png "indexer stages")
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-howto-access-private.md
Previously updated : 10/14/2020 Last updated : 08/13/2021 # Make indexer connections through a private endpoint
+> [!NOTE]
+> You can use the [trusted Microsoft service approach](../storage/common/storage-network-security.md#trusted-microsoft-services) to bypass virtual network or IP restrictions on a storage account and still enable the search service to access data in that account. To do so, see [Indexer access to Azure Storage with the trusted service exception](search-indexer-howto-access-trusted-service-exception.md).
+>
+> However, when you use this approach, communication between Azure Cognitive Search and your storage account happens via the public IP address of the storage account, over the secure Microsoft backbone network.
+ Many Azure resources, such as Azure storage accounts, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. If you're using an indexer to index data in Azure Cognitive Search, and your data source is on a private network, you can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach the data. This indexer connection method is subject to the following two requirements:
The following table lists Azure resources for which you can create outbound priv
| Azure resource | Group ID | | | |
-| Azure Storage - Blob (or) ADLS Gen 2 | `blob`|
+| Azure Storage - Blob | `blob`|
| Azure Storage - Tables | `table`|
| Azure Cosmos DB - SQL API | `Sql`|
| Azure SQL Database | `sqlServer`|
You can also query the Azure resources for which outbound private endpoint conne
In the remainder of this article, a mix of Azure portal (or the [Azure CLI](/cli/azure/) if you prefer) and [Postman](https://www.postman.com/) (or any other HTTP client like [curl](https://curl.se/) if you prefer) is used to demonstrate the REST API calls.
-> [!NOTE]
-> The examples in this article are based on the following assumptions:
-> * The name of the search service is _contoso-search_, which exists in the _contoso_ resource group of a subscription with subscription ID _00000000-0000-0000-0000-000000000000_.
-> * The resource ID of this search service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search_.
+## Set up indexer connection through private endpoint
-The rest of the examples show how the _contoso-search_ service can be configured so that its indexers can access data from the secure storage account _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Storage/storageAccounts/contoso-storage_.
+Use the following instructions to set up an indexer connection through a private endpoint to a secure Azure resource.
-## Secure your storage account
+The examples in this article are based on the following assumptions:
+* The name of the search service is _contoso-search_, which exists in the _contoso_ resource group of a subscription with subscription ID _00000000-0000-0000-0000-000000000000_.
+* The resource ID of this search service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search_.
-Configure the storage account to [allow access only from specific subnets](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). In the Azure portal, if you select this option and leave the set empty, it means that no traffic from virtual networks is allowed.
+### Step 1: Secure your Azure resource
- ![Screenshot of the "Firewalls and virtual networks" pane, showing the option to allow access to selected networks. ](media\search-indexer-howto-secure-access\storage-firewall-noaccess.png)
+The steps for restricting access vary by resource. The following scenarios show three of the more common types of resources.
-> [!NOTE]
-> You can use the [trusted Microsoft service approach](../storage/common/storage-network-security.md#trusted-microsoft-services) to bypass virtual network or IP restrictions on a storage account. You can also enable the search service to access data in the storage account. To do so, see [Indexer access to Azure Storage with the trusted service exception](search-indexer-howto-access-trusted-service-exception.md).
->
-> However, when you use this approach, communication between Azure Cognitive Search and your storage account happens via the public IP address of the storage account, over the secure Microsoft backbone network.
+- Scenario 1: Data source
+
+ The following is an example of how to configure an Azure storage account. If you select the option to allow access from selected networks and leave the list empty, no traffic from virtual networks is allowed.
+
+ ![Screenshot of the "Firewalls and virtual networks" pane for Azure storage, showing the option to allow access to selected networks.](media\search-indexer-howto-secure-access\storage-firewall-noaccess.png)
-Shared private link resources for an Azure Cognitive Search service can be managed via the Azure portal. Navigate to your search service -> Networking -> Shared Private Access to manage these resources via the portal.
+- Scenario 2: Azure Key Vault
- ![Screenshot of the "Networking" pane, showing the shared private link management blade. ](media\search-indexer-howto-secure-access\shared-private-link-portal-blade.png)
+ The following is an example of how to configure Azure Key Vault.
+
+ ![Screenshot of the "Firewalls and virtual networks" pane for Azure Key Vault, showing the option to allow access to selected networks.](media\search-indexer-howto-secure-access\key-vault-firewall-noaccess.png)
+
+- Scenario 3: Azure Functions
-### Step 1: Create a shared private link resource to the storage account
  No network setting changes are needed for Azure Functions. In a later step, when you create a shared private endpoint to the function, the function will automatically allow access only through private link.
-To request Azure Cognitive Search to create an outbound private endpoint connection to the storage account, via the Shared Private Access blade, click on "Add Shared Private Access". On the dialog that opens on the right, you can choose to "Connect to an Azure resource in my directory" or "Connect to an Azure resource by resource ID or alias".
+### Step 2: Create a shared private link resource to the Azure resource
-When using the first option (recommended), the dialog pane will help guide you to pick the appropriate storage account and will help fill in other properties such as the group ID of the resource and the resource type.
+The following section describes how to create a shared private link resource either using the Azure portal or the Azure CLI.
+
+#### Option 1: Portal
+
+> [!NOTE]
+> The portal only supports creating a shared private endpoint using group ID values that are GA. For MySQL and Azure Functions, use the Azure CLI steps described in option 2, which follows.
+
+To request that Azure Cognitive Search create an outbound private endpoint connection, go to the Shared Private Access blade and select "Add Shared Private Access". On the blade that opens on the right, you can choose to "Connect to an Azure resource in my directory" or "Connect to an Azure resource by resource ID or alias".
+
+When using the first option (recommended), the blade will guide you in picking the appropriate Azure resource and will automatically fill in other properties, such as the group ID of the resource and the resource type.
![Screenshot of the "Add Shared Private Access" pane, showing a guided experience for creating a shared private link resource. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource.png)
-When using the second option, you can enter the Azure resource ID of the target storage account manually and choose the appropriate group ID (in this case "blob")
+When using the second option, you can enter the Azure resource ID manually and choose the appropriate group ID. The group IDs are listed at the beginning of this article.
![Screenshot of the "Add Shared Private Access" pane, showing the manual experience for creating a shared private link resource. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource-manual.png)
-Alternatively, you can make the following API call with the [Azure CLI](/cli/azure/):
+#### Option 2: Azure CLI
+
+Alternatively, you can make the following API call with the [Azure CLI](/cli/azure/). Use the 2020-08-01-preview API version if you're using a group ID that is in preview. For example, the group IDs *sites* and *mysqlServer* are in preview and require you to use the preview API.
```dotnetcli
-az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe?api-version=2020-08-01 --body @create-pe.json
+az rest --method put --uri https://management.azure.com/subscriptions/<search service subscription ID>/resourceGroups/<search service resource group name>/providers/Microsoft.Search/searchServices/<search service name>/sharedPrivateLinkResources/<shared private endpoint name>?api-version=2020-08-01 --body @create-pe.json
```
-The contents of the *create-pe.json* file, which represent the request body to the API, are as follows:
+The following is an example of the contents of the *create-pe.json* file:
```json {
The contents of the *create-pe.json* file, which represent the request body to t
A `202 Accepted` response is returned on success. The process of creating an outbound private endpoint is a long-running (asynchronous) operation. It involves deploying the following resources:
-+ A private endpoint, allocated with a private IP address in a `"Pending"` state. The private IP address is obtained from the address space that's allocated to the virtual network of the execution environment for the search service-specific private indexer. Upon approval of the private endpoint, any communication from Azure Cognitive Search to the storage account originates from the private IP address and a secure private link channel.
++ A private endpoint, allocated with a private IP address in a `"Pending"` state. The private IP address is obtained from the address space that's allocated to the virtual network of the execution environment for the search service-specific private indexer. Upon approval of the private endpoint, any communication from Azure Cognitive Search to the Azure resource originates from the private IP address and a secure private link channel.
+ A private DNS zone for the type of resource, based on the `groupId`. By deploying this resource, you ensure that any DNS lookup to the private resource utilizes the IP address that's associated with the private endpoint. Be sure to specify the correct `groupId` for the type of resource for which you're creating the private endpoint. Any mismatch will result in a non-successful response message.
-As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following:
+### Step 3: Check the status of the private endpoint creation
-`"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe/operationStatuses/08586060559526078782?api-version=2020-08-01"`
+In this step you'll confirm that the provisioning state of the resource changes to "Succeeded".
-You can poll this URI periodically to obtain the status of the operation.
+#### Option 1: Portal
-If you utilize the Azure portal to create the shared private link resource, this polling will be done automatically by the portal (with the resource provisioning state marked as "Updating").
+> [!NOTE]
+> The provisioning state will be visible in the portal for both GA group IDs and group IDs that are in preview.
+
+The portal will show you the state of the shared private endpoint. In the following example the status is "Updating".
![Screenshot of the "Add Shared Private Access" pane, showing the resource creation in progress. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource-progress.png)
Once the resource is successfully created, you will receive a portal notificatio
![Screenshot of the "Add Shared Private Access" pane, showing the resource creation completed. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource-success.png)
-If you are using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
+#### Option 2: Azure CLI
+
+The `PUT` call to create the shared private endpoint returns an `Azure-AsyncOperation` header value that looks like the following:
+
+`"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe/operationStatuses/08586060559526078782?api-version=2020-08-01"`
+
+You can poll for the status by manually querying the `Azure-AsyncOperation` header value.
```dotnetcli az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe/operationStatuses/08586060559526078782?api-version=2020-08-01 ```
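The polling pattern itself is generic and can be sketched in a few lines of Python. The `poll_operation` helper and stubbed responses below are illustrative, not part of any Azure SDK; a real caller would supply a `fetch` that GETs the `Azure-AsyncOperation` URI with valid credentials.

```python
import json
import time

def poll_operation(fetch, interval_seconds=5, max_attempts=60):
    """Poll a long-running operation until it reaches a terminal state.

    `fetch` is a callable returning the JSON response body (as a string)
    from a GET on the Azure-AsyncOperation URI.
    """
    for _ in range(max_attempts):
        body = json.loads(fetch())
        # Async-operation responses carry a top-level "status"; some resource
        # GETs report "provisioningState" under "properties" instead.
        state = body.get("status") or body.get("properties", {}).get("provisioningState")
        if state in ("Succeeded", "Failed", "Canceled"):
            return state
        time.sleep(interval_seconds)
    raise TimeoutError("operation did not reach a terminal state")

# Stubbed responses simulating an in-progress operation that then succeeds.
responses = iter([
    '{"status": "InProgress"}',
    '{"status": "Succeeded"}',
])
print(poll_operation(lambda: next(responses), interval_seconds=0))  # Succeeded
```

Waiting for `"Succeeded"` before proceeding mirrors what the portal does automatically while the resource shows as "Updating".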
-Wait until the provisioning state of the resource changes to "Succeeded" before proceeding to the next steps.
-
-### Step 2a: Approve the private endpoint connection for the storage account
+### Step 4: Approve the private endpoint connection
> [!NOTE]
-> In this section, you use the Azure portal to walk through the approval flow for a private endpoint to storage. Alternately, you could use the [REST API](/rest/api/storagerp/privateendpointconnections) that's available via the storage resource provider.
+> In this section, you use the Azure portal to walk through the approval flow for a private endpoint to the Azure resource you're connecting to. Alternately, you could use the [REST API](/rest/api/storagerp/privateendpointconnections) that's available via the storage resource provider.
> > Other providers, such as Azure Cosmos DB or Azure SQL Server, offer similar storage resource provider APIs for managing private endpoint connections.
-1. In the Azure portal, select the **Networking** tab of your storage account and navigate to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+1. In the Azure portal, navigate to the Azure resource that you're connecting to and select the **Networking** tab. Then navigate to the section that lists the private endpoint connections. Following is an example for a storage account. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
![Screenshot of the Azure portal, showing the "Private endpoint connections" pane.](media\search-indexer-howto-secure-access\storage-privateendpoint-approval.png)
Wait until the provisioning state of the resource changes to "Succeeded" before
After the private endpoint connection request is approved, traffic is *capable* of flowing through the private endpoint. After the private endpoint is approved, Azure Cognitive Search creates the necessary DNS zone mappings in the DNS zone that's created for it.
-### Step 2b: Query the status of the shared private link resource
+### Step 5: Query the status of the shared private link resource
To confirm that the shared private link resource has been updated after approval, revisit the "Shared Private Access" blade of the search service on the Azure portal and check the "Connection State".
Alternatively you can also obtain the "Connection state" by using the [GET API](
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Search/searchServices/contoso-search/sharedPrivateLinkResources/blob-pe?api-version=2020-08-01 ```
-This would return a JSON, where the connection state would show up as "status" under the "properties" section.
+This returns a JSON response, in which the connection state appears as "status" under the "properties" section. Following is an example for a storage account.
```json {
This would return a JSON, where the connection state would show up as "status" u
If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and the indexer can be configured to communicate over the private endpoint.
-### Step 3: Configure the indexer to run in the private environment
+### Step 6: Configure the indexer to run in the private environment
> [!NOTE] > You can perform this step before the private endpoint connection is approved. Until the private endpoint connection is approved, any indexer that tries to communicate with a secure resource (such as the storage account) will end up in a transient failure state. New indexers will fail to be created. As soon as the private endpoint connection is approved, indexers can access the private storage account.
-1. [Create a data source](/rest/api/searchservice/create-data-source) that points to the secure storage account and an appropriate container within the storage account. The following screenshot shows this request in Postman.
-
- ![Screenshot showing the creation of a data source on the Postman user interface.](media\search-indexer-howto-secure-access\create-ds.png )
+The following steps show how to configure the indexer to run in the private environment using the REST API. You can also set the execution environment using the JSON editor in the portal.
-1. Similarly, [create an index](/rest/api/searchservice/create-index) and, optionally, [create a skillset](/rest/api/searchservice/create-skillset) by using the REST API.
+1. Create the data source definition, index, and skillset (if you're using one) as you would normally. There are no properties in any of these definitions that vary when using a shared private endpoint.
1. [Create an indexer](/rest/api/searchservice/create-indexer) that points to the data source, index, and skillset that you created in the preceding step. In addition, force the indexer to run in the private execution environment by setting the indexer `executionEnvironment` configuration property to `private`.
- ![Screenshot showing the creation of an indexer on the Postman user interface.](media\search-indexer-howto-secure-access\create-idr.png)
-
- After the indexer is created successfully, it should begin indexing content from the storage account over the private endpoint connection. You can monitor the status of the indexer by using the [Indexer Status API](/rest/api/searchservice/get-indexer-status).
+ ```json
+ {
+ "name": "indexer",
+ "dataSourceName": "blob-datasource",
+ "targetIndexName": "index",
+ "parameters": {
+ "configuration": {
+ "executionEnvironment": "private"
+ }
+ },
+ "fieldMappings": []
+ }
+ ```
+
+ Following is an example of the request in Postman.
+
+ ![Screenshot showing the creation of an indexer on the Postman user interface.](media\search-indexer-howto-secure-access\create-indexer.png)
+
+After the indexer is created successfully, it should connect to the Azure resource over the private endpoint connection. You can monitor the status of the indexer by using the [Indexer Status API](/rest/api/searchservice/get-indexer-status).
> [!NOTE]
-> If you already have existing indexers, you can update them via the [PUT API](/rest/api/searchservice/create-indexer) by setting the `executionEnvironment` to `private`.
+> If you already have existing indexers, you can update them via the [PUT API](/rest/api/searchservice/create-indexer) by setting the `executionEnvironment` to `private` or using the JSON editor in the portal.
## Troubleshooting
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/troubleshoot-shared-private-link-resources.md
In addition, the specified `groupId` needs to be valid for the specified resourc
### Azure Resource Manager deployment failures
-Once search has accepted the request to create a shared private link resource, the Azure Resource Manager deployment that it kicks off can also fail for any number of reasons. In all cases, when customers query for the status of the asynchronous operation (described [here](search-indexer-howto-access-private.md#step-1-create-a-shared-private-link-resource-to-the-storage-account)), an appropriate error message and any available details will be presented.
+A search service initiates the request to create a shared private link, but Azure Resource Manager performs the actual work. You can [check the deployment's status](search-indexer-howto-access-private.md#step-3-check-the-status-of-the-private-endpoint-creation) in the portal or by query, and address any errors that might occur.
-Shared private link resources that have failed Azure Resource Manager deployment will show up in [List](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/list-by-service) and [Get](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/get) API calls, but will have a "Provisioning State" of `Failed`. Once the reason of the Azure Resource Manager deployment failure has been ascertained, delete the `Failed` resource and re-create it after applying the appropriate resolution from the table below.
+Shared private link resources that have failed Azure Resource Manager deployment will show up in [List](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/list-by-service) and [Get](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/get) API calls, but will have a "Provisioning State" of `Failed`. Once the reason of the Azure Resource Manager deployment failure has been ascertained, delete the `Failed` resource and re-create it after applying the appropriate resolution from the following table.
| Deployment failure reason | Description | Resolution | | | | |
security-center Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
Previously updated : 07/30/2021 Last updated : 08/15/2021
At the bottom of this page, there's a table describing the Azure Security Center
| **Vulnerability scanner detected**<br>(AppServices_WpScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting WordPress applications.<br>If your App Service resource isn't hosting a WordPress site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress Azure Defender alerts, see [Suppress alerts from Azure Defender](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium | | **Web fingerprinting detected**<br>(AppServices_WebFingerprinting) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with a tool called Blind Elephant. The tool fingerprints web servers and tries to detect the installed applications and their versions.<br>Attackers often use this tool for probing the web application to find vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium | | **Website is tagged as malicious in threat intelligence feed**<br>(AppServices_SmartScreen) | Your website, as described below, is marked as a malicious site by Windows SmartScreen. If you think this is a false positive, contact Windows SmartScreen via the report feedback link provided.<br>(Applies to: App Service on Windows and App Service on Linux) | Collection | Medium |
-| **Possible loss of data detected**<br>(AppServices_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised.<br>(Applies to: App Service on Linux) | Collection, Exfiltration | Medium | |
+| **Possible loss of data detected**<br>(AppServices_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised.<br>(Applies to: App Service on Linux) | Collection, Exfiltration | Medium |
| **Detected suspicious file download**<br>(AppServices_SuspectDownloadArtifacts) | Analysis of host data has detected suspicious download of remote file.<br>(Applies to: App Service on Linux) | Persistence | Medium | | | | | |
security-center Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/export-to-siem.md
Azure Sentinel includes built-in connectors for Azure Security Center at the sub
- [Stream alerts to Azure Sentinel at the subscription level](../sentinel/connect-azure-security-center.md) - [Connect all subscriptions in your tenant to Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-security-center-auto-connect-to-sentinel/ba-p/1387539)
+When you connect Azure Defender to Azure Sentinel, the status of Azure Defender alerts that get ingested into Azure Sentinel is synchronized between the two services. So, for example, when an alert is closed in Azure Defender, that alert will display as closed in Azure Sentinel as well. Changing the status of an alert in Azure Defender *won't* affect the status of any Azure Sentinel **incidents** that contain the synchronized Azure Sentinel alert, only that of the synchronized alert itself.
+
+Enabling the preview feature, **bi-directional alert synchronization**, will automatically sync the status of the original Azure Defender alerts with Azure Sentinel incidents that contain the copies of those Azure Defender alerts. So, for example, when an Azure Sentinel incident containing an Azure Defender alert is closed, Azure Defender will automatically close the corresponding original alert.
+
+Learn more in [Connect Azure Defender alerts from Azure Security Center](../sentinel/connect-azure-security-center.md).
+
+> [!NOTE]
+> The bi-directional alert synchronization feature isn't available in the Azure Government cloud.
+ ### Configure ingestion of all audit logs into Azure Sentinel Another alternative for investigating Security Center alerts in Azure Sentinel is to stream your audit logs into Azure Sentinel:
security-center Security Center Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-services.md
Previously updated : 08/05/2021 Last updated : 08/15/2021
For information about when recommendations are generated for each of these prote
## Feature support in government and sovereign clouds
-| Feature/Service | Azure | Azure Government | Azure China 21Vianet |
-|--|-|--||
-| **Security Center free features** | | | |
-| - [Continuous export](./continuous-export.md) | GA | GA | GA |
-| - [Workflow automation](./continuous-export.md) | GA | GA | GA |
-| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available |
-| - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA |
-| - [Email notifications for security alerts](./security-center-provide-security-contact-details.md) | GA | GA | GA |
-| - [Auto provisioning for agents and extensions](./security-center-enable-data-collection.md) | GA | GA | GA |
-| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
-| - [Azure Monitor Workbooks reports in Azure Security Center's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
-| **Azure Defender plans and extensions** | | | |
+| Feature/Service | Azure | Azure Government | Azure China 21Vianet |
+|--|-|--||
+| **Security Center free features** | | | |
+| - [Continuous export](./continuous-export.md) | GA | GA | GA |
+| - [Workflow automation](./continuous-export.md) | GA | GA | GA |
+| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available |
+| - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA |
+| - [Email notifications for security alerts](./security-center-provide-security-contact-details.md) | GA | GA | GA |
+| - [Auto provisioning for agents and extensions](./security-center-enable-data-collection.md) | GA | GA | GA |
+| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
+| - [Azure Monitor Workbooks reports in Azure Security Center's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
+| **Azure Defender plans and extensions** | | | |
| - [Azure Defender for servers](./defender-for-servers-introduction.md) | GA | GA | GA | | - [Azure Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available | | - [Azure Defender for DNS](./defender-for-dns-introduction.md) | GA | Not Available | GA |
For information about when recommendations are generated for each of these prote
| - [Azure Defender for Storage](./defender-for-storage-introduction.md) <sup>[6](#footnote6)</sup> | GA | GA | Not Available | | - [Threat protection for Cosmos DB](./other-threat-protections.md#threat-protection-for-azure-cosmos-db-preview) | Public Preview | Not Available | Not Available | | - [Kubernetes workload protection](./kubernetes-workload-protections.md) | GA | GA | GA |
-| **Azure Defender for servers features** <sup>[7](#footnote7)</sup> | | | |
+| - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available |
+| **Azure Defender for servers features** <sup>[7](#footnote7)</sup> | | | |
| - [Just-in-time VM access](./security-center-just-in-time.md) | GA | GA | GA | | - [File integrity monitoring](./security-center-file-integrity-monitoring.md) | GA | GA | GA | | - [Adaptive application controls](./security-center-adaptive-application.md) | GA | GA | GA |
For information about when recommendations are generated for each of these prote
| - [Microsoft Defender for Endpoint deployment and integrated license](./security-center-wdatp.md) | GA | GA | Not Available | | - [Connect AWS account](./quickstart-onboard-aws.md) | GA | Not Available | Not Available | | - [Connect GCP account](./quickstart-onboard-gcp.md) | GA | Not Available | Not Available |
-| | | |
+| | | | |
<sup><a name="footnote1" /></a>1</sup> Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.
security-center Security Center Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-troubleshooting-guide.md
Previously updated : 09/10/2019 Last updated : 08/15/2021
This guide is for information technology (IT) professionals, information securit
Security Center uses the Log Analytics agent to collect and store data. See [Azure Security Center Platform Migration](./security-center-enable-data-collection.md) to learn more. The information in this article represents Security Center functionality after transition to the Log Analytics agent.
+> [!TIP]
+> A dedicated area of the Security Center pages in the Azure portal provides a collated, ever-growing set of self-help materials for solving common challenges with Security Center and Azure Defender.
+>
+> When you're facing an issue, or are seeking advice from our support team, **Diagnose and solve problems** is a good place to look for solutions:
+>
+> :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Security Center's 'Diagnose and solve problems' page":::
+
+ ## Troubleshooting guide This guide explains how to troubleshoot Security Center related issues.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
Previously updated : 07/15/2021 Last updated : 08/15/2021
Last updated 07/15/2021
This article describes feature availability in the Microsoft Azure and Azure Government clouds for the following security -- [Azure Sentinel](#azure-sentinel) - [Azure Security Center](#azure-security-center)
+- [Azure Sentinel](#azure-sentinel)
+- [Azure Defender for IoT](#azure-defender-for-iot)
> [!NOTE] > Additional security services will be added to this article soon.
For more information, see the [Azure Security Center product documentation](../.
The following table displays the current Security Center feature availability in Azure and Azure Government.
-| Feature/Service | Azure | Azure Government |
-|--|-|--|
-| **Security Center free features** | | |
+| Feature/Service | Azure | Azure Government |
+|-|-|--|
+| **Security Center free features** | | |
| - [Continuous export](../../security-center/continuous-export.md) | GA | GA | | - [Workflow automation](../../security-center/continuous-export.md) | GA | GA | | - [Recommendation exemption rules](../../security-center/exempt-resource.md) | Public Preview | Not Available |
The following table displays the current Security Center feature availability in
| - [Auto provisioning for agents and extensions](../../security-center/security-center-enable-data-collection.md) | GA | GA | | - [Asset inventory](../../security-center/asset-inventory.md) | GA | GA | | - [Azure Monitor Workbooks reports in Azure Security Center's workbooks gallery](../../security-center/custom-dashboards-azure-workbooks.md) | GA | GA |
-| **Azure Defender plans and extensions** | | |
+| **Azure Defender plans and extensions** | | |
| - [Azure Defender for servers](../../security-center/defender-for-servers-introduction.md) | GA | GA | | - [Azure Defender for App Service](../../security-center/defender-for-app-service-introduction.md) | GA | Not Available | | - [Azure Defender for DNS](../../security-center/defender-for-dns-introduction.md) | GA | Not Available |
The following table displays the current Security Center feature availability in
| - [Azure Defender for Storage](../../security-center/defender-for-storage-introduction.md) <sup>[6](#footnote6)</sup> | GA | GA | | - [Threat protection for Cosmos DB](../../security-center/other-threat-protections.md#threat-protection-for-azure-cosmos-db-preview) | Public Preview | Not Available | | - [Kubernetes workload protection](../../security-center/kubernetes-workload-protections.md) | GA | GA |
-| **Azure Defender for servers features** <sup>[7](#footnote7)</sup> | | |
+| - [Bi-directional alert synchronization with Sentinel](../../sentinel/connect-azure-security-center.md) | Public Preview | Not Available |
+| **Azure Defender for servers features** <sup>[7](#footnote7)</sup> | | |
| - [Just-in-time VM access](../../security-center/security-center-just-in-time.md) | GA | GA | | - [File integrity monitoring](../../security-center/security-center-file-integrity-monitoring.md) | GA | GA | | - [Adaptive application controls](../../security-center/security-center-adaptive-application.md) | GA | GA |
The following table displays the current Security Center feature availability in
| - [Microsoft Defender for Endpoint deployment and integrated license](../../security-center/security-center-wdatp.md) | GA | GA | | - [Connect AWS account](../../security-center/quickstart-onboard-aws.md) | GA | Not Available | | - [Connect GCP account](../../security-center/quickstart-onboard-gcp.md) | GA | Not Available |
-| | | |
+| | | |
<sup><a name="footnote1" /></a>1</sup> Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.
Office 365 GCC is paired with Azure Active Directory (Azure AD) in Azure. Office
| - Office 365 DoD | - | GA | | | |
-## Azure Defender for IoT
+## Azure Defender for IoT
-Azure Defender for IoT lets you accelerate IoT/OT innovation with comprehensive security across all your IoT/OT devices. For end-user organizations, Azure Defender for IoT offers agentless, network-layer security that is rapidly deployed, works with diverse industrial equipment, and interoperates with Azure Sentinel and other SOC tools. Deploy on-premises or in Azure-connected environments. For IoT device builders, the Azure Defender for IoT security agents allow you to build security directly into your new IoT devices and Azure IoT projects. The micro agent has flexible deployment options, including the ability to deploy as a binary package or modify source code. And the micro agent is available for standard IoT operating systems like Linux and Azure RTOS. For more information, see the [Azure Defender for IoT product documentation](../../defender-for-iot/index.yml).
+Azure Defender for IoT lets you accelerate IoT/OT innovation with comprehensive security across all your IoT/OT devices. For end-user organizations, Azure Defender for IoT offers agentless, network-layer security that is rapidly deployed, works with diverse industrial equipment, and interoperates with Azure Sentinel and other SOC tools. Deploy on-premises or in Azure-connected environments. For IoT device builders, the Azure Defender for IoT security agents allow you to build security directly into your new IoT devices and Azure IoT projects. The micro agent has flexible deployment options, including the ability to deploy as a binary package or modify source code. And the micro agent is available for standard IoT operating systems like Linux and Azure RTOS. For more information, see the [Azure Defender for IoT product documentation](../../defender-for-iot/index.yml).
The following table displays the current Azure Defender for IoT feature availability in Azure, and Azure Government.
-### For organizations
+### For organizations
| Feature | Azure | Azure Government | |--|--|--| | [On-premises device discovery and inventory](../../defender-for-iot/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md) | GA | GA |
-| Cloud device discovery and inventory | Private Preview | Not Available |
| [Vulnerability management](../../defender-for-iot/how-to-create-risk-assessment-reports.md) | GA | GA | | [Threats detection with IoT, and OT behavioral analytics](../../defender-for-iot/how-to-work-with-alerts-on-your-sensor.md) | GA | GA | | [Automatic Threat Intelligence Updates](../../defender-for-iot/how-to-work-with-threat-intelligence-packages.md) | GA | GA |
The following table displays the current Azure Defender for IoT feature availabi
| - [Ticketing system and CMDB (Service Now)](../../defender-for-iot/integration-servicenow.md) | GA | GA | | - [Sensor Provisioning](../../defender-for-iot/how-to-manage-sensors-on-the-cloud.md) | GA | GA |
-### For device builders
+### For device builders
| Feature | Azure | Azure Government | |--|--|--|
The following table displays the current Azure Defender for IoT feature availabi
- Understand the [shared responsibility](shared-responsibility.md) model and which security tasks are handled by the cloud provider and which tasks are handled by you. - Understand the [Azure Government Cloud](../../azure-government/documentation-government-welcome.md) capabilities and the trustworthy design and security used to support compliance applicable to federal, state, and local government organizations and their partners. - Understand the [Office 365 Government plan](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/office-365-us-government#about-office-365-government-environments).-- Understand [compliance in Azure](../../compliance/index.yml) for legal and regulatory standards.
+- Understand [compliance in Azure](../../compliance/index.yml) for legal and regulatory standards.
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-troubleshoot.md
If files fail to tier to Azure Files:
| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to tier due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. | | 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. | | 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database is not running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
+| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file cannot be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting, which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync. |
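The hex (HRESULT) and decimal (ErrorCode) columns in the table above are the same 32-bit error code written in unsigned and signed form. A small helper, shown here as an illustration and not part of any Azure File Sync tooling, converts between the two:

```python
def hresult_to_signed(hresult: int) -> int:
    """Interpret an unsigned 32-bit HRESULT as a signed 32-bit integer."""
    # If the high bit is set, the value is negative in two's complement.
    return hresult - 0x1_0000_0000 if hresult & 0x8000_0000 else hresult


def signed_to_hresult(code: int) -> int:
    """Map a signed 32-bit error code back to its unsigned HRESULT form."""
    return code & 0xFFFF_FFFF
```

For example, `hresult_to_signed(0x800705AA)` returns `-2147023446`, matching the ERROR_NO_SYSTEM_RESOURCES row above.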
virtual-machines Attach Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/attach-disk-portal.md
description: Use the portal to attach new or existing data disk to a Linux VM.
Previously updated : 08/28/2020 Last updated : 08/13/2021
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disk-encryption-overview.md
To enable Azure Disk Encryption, the VMs must meet the following network endpoin
Azure Disk Encryption uses the BitLocker external key protector for Windows VMs. For domain joined VMs, don't push any group policies that enforce TPM protectors. For information about the group policy for "Allow BitLocker without a compatible TPM," see [BitLocker Group Policy Reference](/windows/security/information-protection/bitlocker/bitlocker-group-policy-settings#bkmk-unlockpol1).
-BitLocker policy on domain joined virtual machines with custom group policy must include the following setting: [Configure user storage of BitLocker recovery information -> Allow 256-bit recovery key](/windows/security/information-protection/bitlocker/bitlocker-group-policy-settings). Azure Disk Encryption will fail when custom group policy settings for BitLocker are incompatible. On machines that didn't have the correct policy setting, apply the new policy, force the new policy to update (gpupdate.exe /force), and then restarting may be required.
+BitLocker policy on domain joined virtual machines with custom group policy must include the following setting: [Configure user storage of BitLocker recovery information -> Allow 256-bit recovery key](/windows/security/information-protection/bitlocker/bitlocker-group-policy-settings). Azure Disk Encryption will fail when custom group policy settings for BitLocker are incompatible. On machines that didn't have the correct policy setting, apply the new policy, and force the new policy to update (gpupdate.exe /force). Restarting may be required.
+
+Microsoft BitLocker Administration and Monitoring (MBAM) group policy features are not compatible with Azure Disk Encryption.
> [!WARNING] > Azure Disk Encryption **does not store recovery keys**. If the [Interactive logon: Machine account lockout threshold](/windows/security/threat-protection/security-policy-settings/interactive-logon-machine-account-lockout-threshold) security setting is enabled, machines can only be recovered by providing a recovery key via the serial console. Instructions for ensuring the appropriate recovery policies are enabled can be found in the [Bitlocker recovery guide plan](/windows/security/information-protection/bitlocker/bitlocker-recovery-guide-plan).
virtual-machines N Series Amd Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/n-series-amd-driver-setup.md
For basic specs, storage capacities, and disk details, see [GPU Windows VM sizes
| OS | Driver | | -- |- |
-| Windows 10 - Build 2009 <br/><br/>Windows 10 - Build 2004 <br/><br/>Windows 10 Enterprise multi-session - Build 1909 <br/><br/>Windows 10 - Build 1909<br/><br/>Windows Server 2016 (version 1607)<br/><br/>Windows Server 2019 (version 1909) | [21.Q2](https://download.microsoft.com/download/3/4/8/3481cf8d-1706-49b0-aa09-08c9468305ab/AMD-Azure-NVv4-Windows-Driver-21Q2.exe) (.exe) |
+| Windows 10 - Build 2009, 2004, 1909 <br/><br/>Windows 10 Enterprise multi-session - Build 2009, 2004, 1909 <br/><br/>Windows Server 2016 (version 1607)<br/><br/>Windows Server 2019 (version 1909) | [21.Q2](https://download.microsoft.com/download/3/4/8/3481cf8d-1706-49b0-aa09-08c9468305ab/AMD-Azure-NVv4-Windows-Driver-21Q2.exe) (.exe) |
Previous supported driver version for Windows builds up to 1909 is [20.Q4](https://download.microsoft.com/download/f/1/6/f16e6275-a718-40cd-a366-9382739ebd39/AMD-Azure-NVv4-Driver-20Q4.exe) (.exe)
Previous supported driver version for Windows builds up to 1909 is [20.Q4](https
> [Computer Configuration->Policies->Windows Settings->Administrative Templates->Windows Components->Remote Desktop Services->Remote Desktop Session Host->Remote Session Environment], set the Policy [Use WDDM graphics display driver for Remote Desktop Connections] to Disabled. > -
+
## Driver installation 1. Connect by Remote Desktop to each NVv4-series VM.
virtual-network Create Vm Accelerated Networking Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/create-vm-accelerated-networking-powershell.md
The benefits of accelerated networking only apply to the VM that it's enabled on
## Supported operating systems
-The following distributions are supported directly from the Azure Gallery:
+The following versions of Windows are supported:
-- **Windows Server 2019 Datacenter**-- **Windows Server 2016 Datacenter** -- **Windows Server 2012 R2 Datacenter**
+- **Windows Server 2019 Standard/Datacenter**
+- **Windows Server 2016 Standard/Datacenter**
+- **Windows Server 2012 R2 Standard/Datacenter**
## Limitations and constraints