Updates from: 08/01/2022 01:06:11
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Here are the settings defined in the Group.Unified SettingsTemplate. Unless othe
| <ul><li>AllowToAddGuests<li>Type: Boolean<li>Default: True | A boolean indicating whether it's allowed to add guests to this directory. <br>This setting may be overridden and become read-only if *EnableMIPLabels* is set to *True* and a guest policy is associated with the sensitivity label assigned to the group.<br>If the AllowToAddGuests setting is set to False at the organization level, any AllowToAddGuests setting at the group level is ignored. If you want to enable guest access for only a few groups, you must set AllowToAddGuests to be true at the organization level, and then selectively disable it for specific groups. |
| <ul><li>ClassificationList<li>Type: String<li>Default: "" | A comma-delimited list of valid classification values that can be applied to Microsoft 365 groups. <br>This setting does not apply when EnableMIPLabels == True.|
| <ul><li>EnableMIPLabels<li>Type: Boolean<li>Default: "False" |The flag indicating whether sensitivity labels published in the Microsoft Purview compliance portal can be applied to Microsoft 365 groups. For more information, see [Assign Sensitivity Labels for Microsoft 365 groups](groups-assign-sensitivity-labels.md). |
-| <ul><li>NewUnifiedGroupWritebackDefault<li>Type: Boolean<li>Default: "True" |The flag that allows an admin to create new Microsoft 365 groups without setting the groupWritebackConfiguration resource type in the request payload. This setting is applicable when group writeback is configured in Azure AD Connect. "NewUnifiedGroupWritebackDefault" is a global Microfot 365 group setting. Default value is true. Updating the setting value to false will change the default writeback behavior for newly created Microsoft 365 groups, and will not change isEnabled property value for existing Microsoft 365 groups. Group admin will need to explicitly update the group isEnabled property value to change the writeback state for existing Microsoft 365 groups. For more information, see [groupWritebackConfiguration resource type](groupwritebackconfiguration?view=graph-rest-beta.md). |
+| <ul><li>NewUnifiedGroupWritebackDefault<li>Type: Boolean<li>Default: "True" |The flag that allows an admin to create new Microsoft 365 groups without setting the groupWritebackConfiguration resource type in the request payload. This setting is applicable when group writeback is configured in Azure AD Connect. "NewUnifiedGroupWritebackDefault" is a global Microsoft 365 group setting. The default value is true. Updating the setting value to false will change the default writeback behavior for newly created Microsoft 365 groups, and won't change the isEnabled property value for existing Microsoft 365 groups. Group admins will need to explicitly update the group's isEnabled property value to change the writeback state for existing Microsoft 365 groups. |
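To check which of these values are currently set in a tenant, you can read the directory setting back with PowerShell. A minimal sketch, assuming the AzureADPreview module and that a Group.Unified setting object has already been created in the directory:

```powershell
# Connect with an account that can read directory settings.
Connect-AzureAD

# Find the Group.Unified directory setting, if it exists, and list its values.
$setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }
$setting.Values
```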
## Example: Configure Guest policy for groups at the directory level

1. Get all the setting templates:
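A sketch of the full flow this example introduces, assuming the AzureADPreview module and that no Group.Unified setting object exists yet in the directory:

```powershell
# 1. Get all the setting templates and pick Group.Unified.
$template = Get-AzureADDirectorySettingTemplate |
    Where-Object { $_.DisplayName -eq "Group.Unified" }

# 2. Create a settings object from the template and set the guest policy.
$setting = $template.CreateDirectorySetting()
$setting["AllowToAddGuests"] = "False"

# 3. Create the setting at the directory level.
New-AzureADDirectorySetting -DirectorySetting $setting
```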
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
na Previously updated : 07/11/2022 Last updated : 08/01/2022
# Tutorial: Manage access to resources in Azure AD entitlement management
-Managing access to all the resources employees need, such as groups, applications, and sites, is an important function for organizations. You want to grant employees the right level of access they need to be productive and remove their access when it is no longer needed.
+Managing access to all the resources employees need, such as groups, applications, and sites, is an important function for organizations. You want to grant employees the right level of access they need to be productive and remove their access when it's no longer needed.
-In this tutorial, you work for Woodgrove Bank as an IT administrator. You've been asked to create a package of resources for a marketing campaign that internal users can use to self-service request. Requests do not require approval and user's access expires after 30 days. For this tutorial, the marketing campaign resources are just membership in a single group, but it could be a collection of groups, applications, or SharePoint Online sites.
+In this tutorial, you work for Woodgrove Bank as an IT administrator. You've been asked to create a package of resources for a marketing campaign that internal users can request through self-service. Requests don't require approval, and a user's access expires after 30 days. For this tutorial, the marketing campaign resources are just membership in a single group, but it could be a collection of groups, applications, or SharePoint Online sites.
![Diagram that shows the scenario overview.](./media/entitlement-management-access-package-first/elm-scenario-overview.png)
A resource directory has one or more resources to share. In this step, you creat
**Prerequisite role:** Global administrator or User administrator
-![Create users and groups](./media/entitlement-management-access-package-first/elm-users-groups.png)
+![Diagram that shows the users and groups for this tutorial.](./media/entitlement-management-access-package-first/elm-users-groups.png)
1. Sign in to the [Azure portal](https://portal.azure.com) as a Global administrator or User administrator.
-1. In the left navigation, click **Azure Active Directory**.
+1. In the left navigation, select **Azure Active Directory**.
-1. Create or configure the following two users. You can use these names or different names. **Admin1** can be the user you are currently signed in as.
+1. [Create two users](../fundamentals/add-users-azure-active-directory.md). Use the following names or different names.
| Name | Directory role |
| --- | --- |
- | **Admin1** | Global administrator<br/>-or-<br/>User administrator |
+ | **Admin1** | Global administrator, or User administrator. This can be the user you're currently signed in as. |
| **Requestor1** | User |
-4. Create an Azure AD security group named **Marketing resources** with a membership type of **Assigned**.
+4. [Create an Azure AD security group](../fundamentals/active-directory-groups-create-azure-portal.md) named **Marketing resources** with a membership type of **Assigned**. This group will be the target resource for entitlement management. The group should be empty of members to start.
- This group will be the target resource for entitlement management. The group should be empty of members to start.
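If you'd rather script this test setup, here's a minimal sketch using the AzureAD PowerShell module. The UPN, domain, and password are placeholders, not values from this tutorial:

```powershell
# Create the Requestor1 test user (placeholder UPN and password).
$pp = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$pp.Password = "<strong-password>"
New-AzureADUser -DisplayName "Requestor1" `
    -UserPrincipalName "requestor1@contoso.onmicrosoft.com" `
    -MailNickName "requestor1" -AccountEnabled $true -PasswordProfile $pp

# Create the Marketing resources security group with assigned membership.
New-AzureADGroup -DisplayName "Marketing resources" `
    -SecurityEnabled $true -MailEnabled $false -MailNickName "NotSet"
```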
## Step 2: Create an access package
An *access package* is a bundle of resources that a team or project needs and is
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
-![Create an access package](./media/entitlement-management-access-package-first/elm-access-package.png)
+![Diagram that describes the relationship between the access package elements.](./media/entitlement-management-access-package-first/elm-access-package.png)
-1. In the Azure portal, in the left navigation, click **Azure Active Directory**.
+1. In the Azure portal, in the left navigation, select **Azure Active Directory**.
-2. In the left menu, click **Identity Governance**
+1. In the left menu, select **Identity Governance**.
-3. In the left menu, click **Access packages**. If you see **Access denied**, ensure that an Azure AD Premium P2 license is present in your directory.
+1. In the left menu, select **Access packages**. If you see **Access denied**, ensure that an Azure AD Premium P2 license is present in your directory.
-4. Click **New access package**.
+1. Select **New access package**.
- ![Entitlement management in the Azure portal](./media/entitlement-management-shared/access-packages-list.png)
+ ![Screenshot that shows how to create an access package.](./media/entitlement-management-access-package-first/new-access-packages.png)
-5. On the **Basics** tab, type the name **Marketing Campaign** access package and description **Access to resources for the campaign**.
+1. On the **Basics** tab, type *Marketing Campaign* for the access package name and *Access to resources for the campaign* for the description.
-6. Leave the **Catalog** drop-down list set to **General**.
+1. Leave the **Catalog** drop-down list set to **General**.
- ![New access package - Basics tab](./media/entitlement-management-access-package-first/basics.png)
+ ![Screenshot that shows how to set the basics of the access package.](./media/entitlement-management-access-package-first/new-access-package-basics.png)
-7. Click **Next** to open the **Resource roles** tab.
+1. Select **Next** to open the **Resource roles** tab. On this tab, select the resources and the resource role to include in the access package. You can choose to manage access to groups and teams, applications, and SharePoint Online sites. In this scenario, select **Groups and Teams**.
- On this tab, you select the resources and the resource role to include in the access package.
+ ![Screenshot showing how to select groups and teams.](./media/entitlement-management-access-package-first/new-access-package-select-resources.png)
-8. Click **Groups and Teams**.
-9. In the Select groups pane, find and select the **Marketing resources** group you created earlier.
+1. In the **Select groups** pane, find and select the **Marketing resources** group you created earlier.
By default, you see groups inside the General catalog. When you select a group outside of the General catalog (visible if you check the **See all** check box), it will be added to the General catalog.
- ![Screenshot that shows the "New access package - Resource roles" tab and the "Select groups" window.](./media/entitlement-management-access-package-first/resource-roles-select-groups.png)
+ ![Screenshot that shows how to select the groups"](./media/entitlement-management-access-package-first/resource-roles-select-groups.png)
-10. Click **Select** to add the group to the list.
+1. Choose **Select** to add the group to the list.
-11. In the **Role** drop-down list, select **Member**.
+1. In the **Role** drop-down list, select **Member**. Selecting the Owner role instead would allow users to add or remove other members or owners. For more information on selecting the appropriate roles for a resource, see [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
- ![New access package - Resource roles tab](./media/entitlement-management-access-package-first/resource-roles.png)
+ :::image type="content" source="./media/entitlement-management-access-package-first/resource-roles.png" alt-text="Screenshot the shows how to select the member role." lightbox="./media/entitlement-management-access-package-first/resource-roles.png":::
>[!IMPORTANT]
- >The role-assignable groups added to an access package will be indicated using the Sub Type **Assignable to roles**. Refer to [Create a role-assignable group](../roles/groups-create-eligible.md) in Azure Active Directory for more details on groups assignable to Azure AD roles. Keep in mind that once a role-assignable group is present in an access package catalog, administrative users who are able to manage in entitlement management, including global administrators, user administrators and catalog owners of the catalog, will be able to control the access packages in the catalog, allowing them to choose who can be added to those groups. If you don't see a role-assignable group that you want to add or you are unable to add it, make sure you have the required Azure AD role and entitlement management role to perform this operation. You might need to ask someone with the required roles add the resource to your catalog. For more information, see [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
+ >The [role-assignable groups](../roles/groups-concept.md) added to an access package will be indicated using the Sub Type **Assignable to roles**. For more information, check out the [Create a role-assignable group](../roles/groups-create-eligible.md) article. Keep in mind that once a role-assignable group is present in an access package catalog, administrative users who can manage in entitlement management, including global administrators, user administrators, and catalog owners of the catalog, will be able to control the access packages in the catalog, allowing them to choose who can be added to those groups. If you don't see a role-assignable group that you want to add, or you're unable to add it, make sure you have the required Azure AD role and entitlement management role to perform this operation. You might need to ask someone with the required roles to add the resource to your catalog. For more information, see [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
>[!NOTE]
> When using [dynamic groups](../enterprise-users/groups-create-rule.md) you will not see any other roles available besides owner. This is by design.
- > ![Scenario overview](./media/entitlement-management-access-package-first/dynamic-group-warning.png)
+ > ![Screenshot that shows the available roles for a dynamic group.](./media/entitlement-management-access-package-first/dynamic-group-warning.png)
-12. Click **Next** to open the **Requests** tab.
+1. Select **Next** to open the **Requests** tab. On the Requests tab, you create a request policy. A *policy* defines the rules or guardrails to access an access package. You create a policy that allows a specific user in the resource directory to request this access package.
- On this tab, you create a request policy. A *policy* defines the rules or guardrails to access an access package. You create a policy that allows a specific user in the resource directory to request this access package.
+1. In the **Users who can request access** section, select **For users in your directory** and then select **Specific users and groups**.
-13. In the **Users who can request access** section, click **For users in your directory** and then click **Specific users and groups**.
+ :::image type="content" source="./media/entitlement-management-access-package-first/new-access-package-requests.png" alt-text="Screenshot of the access package requests tab." lightbox="./media/entitlement-management-access-package-first/new-access-package-requests.png":::
- ![New access package - Requests tab](./media/entitlement-management-access-package-first/requests.png)
+1. Select **Add users and groups**.
-14. Click **Add users and groups**.
+1. In the Select users and groups pane, select the **Requestor1** user you created earlier.
-15. In the Select users and groups pane, select the **Requestor1** user you created earlier.
+ ![Screenshot of select users and groups.](./media/entitlement-management-access-package-first/requests-select-users-groups.png)
- ![New access package - Requests tab - Select users and groups](./media/entitlement-management-access-package-first/requests-select-users-groups.png)
+1. Choose **Select** to add the user to the list.
-16. Click **Select**.
+1. Scroll down to the **Approval** and **Enable requests** sections.
-17. Scroll down to the **Approval** and **Enable requests** sections.
+1. Leave **Require approval** set to **No**.
-18. Leave **Require approval** set to **No**.
+1. For **Enable requests**, select **Yes** to enable this access package to be requested as soon as it's created.
-19. For **Enable requests**, click **Yes** to enable this access package to be requested as soon as it is created.
+1. Select **Next** to open the **Requestor information** tab.
- ![New access package - Requests tab - Approval and Enable requests](./media/entitlement-management-access-package-first/requests-approval-enable.png)
+ ![Screenshot of the Requests tab's Approval and Enable requests settings.](./media/entitlement-management-access-package-first/requests-approval-enable.png)
-20. Click **Next** to open the **Lifecycle** tab.
+1. On the **Requestor information** tab, you can ask questions to collect more information from the requestor. The questions are shown on the request form and can be either required or optional. In this scenario, you haven't been asked to include requestor information for the access package, so you can leave these boxes empty. Select **Next** to open the **Lifecycle** tab.
-21. In the **Expiration** section, set **Access package assignments expire** to **Number of days**.
+1. On the **Lifecycle** tab, you specify when a user's assignment to the access package expires. You can also specify whether users can extend their assignments. In the **Expiration** section:
+ 1. Set **Access package assignments expire** to **Number of days**.
+ 1. Set **Assignments expire after** to **30** days.
+ 1. Leave **Users can request specific timeline** set to the default value, **Yes**.
+ 1. Set **Require access reviews** to **No**.
-22. Set **Assignments expire after** to **30** days.
+ ![Screenshot of the access package lifecycle tab](./media/entitlement-management-access-package-first/new-access-package-lifecycle.png)
- ![New access package - Lifecycle tab](./media/entitlement-management-access-package-first/lifecycle.png)
+1. Skip the **Custom extensions (Preview)** step.
-23. Click **Next** to open the **Review + Create** tab.
+1. Select **Next** to open the **Review + Create** tab.
- ![New access package - Review + Create tab](./media/entitlement-management-access-package-first/review-create.png)
+1. On the **Review + Create** tab, select **Create**. After a few moments, you should see a notification that the access package was successfully created.
- After a few moments, you should see a notification that the access package was successfully created.
+1. In the left menu of the Marketing Campaign access package, select **Overview**.
-24. In left menu of the Marketing Campaign access package, click **Overview**.
-
-25. Copy the **My Access portal link**.
+1. Copy the **My Access portal link**.
You'll use this link for the next step.
- ![Access package overview - My Access portal link](./media/entitlement-management-shared/my-access-portal-link.png)
+ ![Screenshot that demonstrates how to copy the link to the access policy.](./media/entitlement-management-access-package-first/my-access-portal-link.png)
## Step 3: Request access
In this step, you perform the steps as the **internal requestor** and request ac
You should see the **Marketing Campaign** access package.
-1. If necessary, in the **Description** column, click the arrow to view details about the access package.
-
- ![My Access portal - Access packages](./media/entitlement-management-shared/my-access-access-packages.png)
-
-1. Click the checkmark to select the package.
-
-1. Click **Request access** to open the Request access pane.
+1. In the **Business justification** box, type the justification *I'm working on the new marketing campaign*.
- ![My Access portal - Request access button](./media/entitlement-management-access-package-first/my-access-request-access-button.png)
+ ![Screenshot of the My Access portal listing the access packages.](./media/entitlement-management-access-package-first/my-access-access-packages.png)
-1. In the **Business justification** box, type the justification **I am working on the new marketing campaign**.
+1. Select **Submit**.
- ![My Access portal - Request access](./media/entitlement-management-shared/my-access-request-access.png)
+1. In the left menu, select **Request history** to verify that your request was delivered. For more details, select **View**.
-1. Click **Submit**.
-
-1. In the left menu, click **Request history** to verify that your request was submitted.
+ ![Screenshot of the My Access portal request history.](./media/entitlement-management-access-package-first/my-access-access-packages-history.png)
## Step 4: Validate that access has been assigned
-In this step, you confirm that the **internal requestor** was assigned the access package and that they are now a member of the **Marketing resources** group.
+In this step, you confirm that the **internal requestor** was assigned the access package and that they're now a member of the **Marketing resources** group.
**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
In this step, you confirm that the **internal requestor** was assigned the acces
1. Sign in to the [Azure portal](https://portal.azure.com) as **Admin1**.
-1. Click **Azure Active Directory** and then click **Identity Governance**.
+1. Select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages**.
+1. In the left menu, select **Access packages**.
-1. Find and click **Marketing Campaign** access package.
+1. Find and select **Marketing Campaign** access package.
-1. In the left menu, click **Requests**.
+1. In the left menu, select **Requests**.
You should see Requestor1 and the Initial policy with a status of **Delivered**.
-1. Click the request to see the request details.
+1. Select the request to see the request details.
- ![Access package - Request details](./media/entitlement-management-access-package-first/request-details.png)
+ :::image type="content" source="./media/entitlement-management-access-package-first/request-details.png" alt-text="Screenshot of the access package request details." lightbox="./media/entitlement-management-access-package-first/request-details.png":::
-1. In the left navigation, click **Azure Active Directory**.
+1. In the left navigation, select **Azure Active Directory**.
-1. Click **Groups** and open the **Marketing resources** group.
+1. Select **Groups** and open the **Marketing resources** group.
-1. Click **Members**.
+1. Select **Members**.
You should see **Requestor1** listed as a member.
- ![Marketing resources members](./media/entitlement-management-access-package-first/group-members.png)
+ ![Screenshot that shows Requestor1 has been added to the Marketing resources group.](./media/entitlement-management-access-package-first/group-members.png)
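You can also confirm the membership from PowerShell. A minimal sketch, assuming the AzureAD module and a signed-in administrator:

```powershell
# Confirm Requestor1 now appears as a member of the Marketing resources group.
$group = Get-AzureADGroup -SearchString "Marketing resources"
Get-AzureADGroupMember -ObjectId $group.ObjectId |
    Select-Object DisplayName, UserPrincipalName
```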
## Step 5: Clean up resources
In this step, you remove the changes you made and delete the **Marketing Campaig
**Prerequisite role:** Global administrator or User administrator
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
1. Open the **Marketing Campaign** access package.
-1. Click **Assignments**.
+1. Select **Assignments**.
-1. For **Requestor1**, click the ellipsis (**...**) and then click **Remove access**. In the message that appears, click **Yes**.
+1. For **Requestor1**, select the ellipsis (**...**) and then select **Remove access**. In the message that appears, select **Yes**.
After a few moments, the status will change from Delivered to Expired.
-1. Click **Resource roles**.
+1. Select **Resource roles**.
-1. For **Marketing resources**, click the ellipsis (**...**) and then click **Remove resource role**. In the message that appears, click **Yes**.
+1. For **Marketing resources**, select the ellipsis (**...**) and then select **Remove resource role**. In the message that appears, select **Yes**.
1. Open the list of access packages.
-1. For **Marketing Campaign**, click the ellipsis (**...**) and then click **Delete**. In the message that appears, click **Yes**.
+1. For **Marketing Campaign**, select the ellipsis (**...**) and then select **Delete**. In the message that appears, select **Yes**.
1. In Azure Active Directory, delete any users you created, such as **Requestor1** and **Admin1**.
1. Delete the **Marketing resources** group.

## Set up group writeback in entitlement management
-To set up group writeback for Micosoft 365 groups in access packages, you must complete the following prerequisites:
+
+To set up group writeback for Microsoft 365 groups in access packages, you must complete the following prerequisites:
+ - Set up group writeback in the Azure Active Directory admin center.
+ - The Organizational Unit (OU) that will be used to set up group writeback in Azure AD Connect Configuration.
+ - Complete the [group writeback enablement steps](../hybrid/how-to-connect-group-writeback-v2.md#enable-group-writeback-using-azure-ad-connect) for Azure AD Connect.
-Using group writeback, you can now sync M365 groups that are part of access packages to on-premises Active Directory. To do this, follow the steps below:
+Using group writeback, you can now sync Microsoft 365 groups that are part of access packages to on-premises Active Directory. To sync the groups, follow the steps below:
-1. Create an Azure Active Directory M365 group.
+1. Create an Azure Active Directory Microsoft 365 group.
1. Set the group to be written back to on-premises Active Directory. For instructions, see [Group writeback in the Azure Active Directory admin center](../enterprise-users/groups-write-back-portal.md).
Using group writeback, you can now sync M365 groups that are part of access pack
1. Assign the user to the access package. See [View, add, and remove assignments for an access package](entitlement-management-access-package-assignments.md#directly-assign-a-user) for instructions to directly assign a user.
-1. After you have assigned a user to the access package, confirm that the user is now a member of the on-premises group once AAD Connect Sync cycle completes:
+1. After you've assigned a user to the access package, confirm that the user is now a member of the on-premises group once the Azure AD Connect sync cycle completes:
    1. View the member property of the group in the on-premises OU, OR
    1. Review the memberOf attribute on the user object.
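Step 2 above can also be done programmatically. A hedged sketch using the Microsoft Graph PowerShell SDK against the beta endpoint, where the writebackConfiguration property currently lives; the group ID is a placeholder:

```powershell
# Enable on-premises writeback for an existing Microsoft 365 group (Graph beta).
Connect-MgGraph -Scopes "Group.ReadWrite.All"
$groupId = "<object-id-of-the-group>"   # placeholder
$body = @{ writebackConfiguration = @{ isEnabled = $true } } | ConvertTo-Json
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/groups/$groupId" `
    -Body $body -ContentType "application/json"
```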
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md
na Previously updated : 10/26/2021 Last updated : 08/01/2022
For more information, see [License requirements](entitlement-management-overview
You can enable access reviews when [creating a new access package](entitlement-management-access-package-create.md) or [editing an existing access package assignment policy](entitlement-management-access-package-lifecycle-policy.md). If you have multiple policies for different communities of users to request access, you can have independent access review schedules for each policy. Follow these steps to enable access reviews of an access package's assignments:
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+
+1. To create a new access policy, in the left menu, select **Access packages**, then select **New access package**.
+
+1. To edit an existing access policy, in the left menu, select **Access packages** and open the access package you want to edit. Then, in the left menu, select **Policies** and select the policy that has the lifecycle settings you want to edit.
+ 1. Open the **Lifecycle** tab for an access package assignment policy to specify when a user's assignment to the access package expires. You can also specify whether users can extend their assignments.
+ 1. In the **Expiration** section, set **Access package assignments expire** to **On date**, **Number of days**, **Number of hours**, or **Never**.
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
na Previously updated : 11/23/2020 Last updated : 08/01/2022
Azure Active Directory (Azure AD) entitlement management is an [identity governance](identity-governance-overview.md) feature that enables organizations to manage identity and access lifecycle at scale, by automating access request workflows, access assignments, reviews, and expiration.
-Employees in organizations need access to various groups, applications, and sites to perform their job. Managing this access is challenging, as requirements change - new applications are added or users need additional access rights. This scenario gets more complicated when you collaborate with outside organizations - you may not know who in the other organization needs access to your organization's resources, and they won't know what applications, groups, or sites your organization is using.
+Employees in organizations need access to various groups, applications, and SharePoint Online sites to perform their job. Managing this access is challenging, as requirements change. New applications are added or users need more access rights. This scenario gets more complicated when you collaborate with outside organizations. You may not know who in the other organization needs access to your organization's resources, and they won't know what applications, groups, or sites your organization is using.
Azure AD entitlement management can help you more efficiently manage access to groups, applications, and SharePoint Online sites for internal users, and also for users outside your organization who need access to those resources.
Azure AD entitlement management can help address these challenges. To learn mor
Here are some of the capabilities of entitlement management:

-- Control who can get access to applications, groups, Teams and SharePoint sites, with multi-stage approval, and ensure users do not retain access indefinitely through time-limited assignments and recurring access reviews.
+- Control who can get access to applications, groups, Teams and SharePoint sites, with multi-stage approval, and ensure users don't retain access indefinitely through time-limited assignments and recurring access reviews.
+- Delegate to non-administrators the ability to create access packages. These access packages contain resources that users can request, and the delegated access package managers can define policies with rules for which users can request, who must approve their access, and when access expires.
-- Select connected organizations whose users can request access. When a user who is not yet in your directory requests access, and is approved, they are automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.
+- Select connected organizations whose users can request access. When a user who isn't yet in your directory requests access, and is approved, they're automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.
>[!NOTE]
>If you are ready to try entitlement management, you can get started with our [tutorial to create your first access package](entitlement-management-access-package-first.md).
With an access package, an administrator or delegated access package manager lis
Access packages also include one or more *policies*. A policy defines the rules or guardrails for assignment to an access package. Each policy can be used to ensure that only the appropriate users are able to have access assignments, and the access is time-limited and will expire if not renewed.
-![Access package and policies](./media/entitlement-management-overview/elm-overview-access-package.png)
+![Diagram of access package and policies.](./media/entitlement-management-overview/elm-overview-access-package.png)
You can have policies for users to request access. In these kinds of policies, an administrator or access package manager defines:

-- Either the already-existing users (typically employees or already-invited guests), or the partner organizations of external users, that are eligible to request access
+- Either the already-existing users (typically employees or already-invited guests), or the partner organizations of external users that are eligible to request access
- The approval process and the users that can approve or deny access
- The duration of a user's access assignment, once approved, before the assignment expires
The following diagram shows an example of the different elements in entitlement
- **Access package 1** includes a single group as a resource. Access is defined with a policy that enables a set of users in the directory to request access.
- **Access package 2** includes a group, an application, and a SharePoint Online site as resources. Access is defined with two different policies. The first policy enables a set of users in the directory to request access. The second policy enables users in an external directory to request access.
-![Entitlement management overview](./media/entitlement-management-overview/elm-overview.png)
+![Entitlement management overview diagram](./media/entitlement-management-overview/elm-overview.png)
## When should I use access packages?
-Access packages do not replace other mechanisms for access assignment. They are most appropriate in situations such as:
+Access packages don't replace other mechanisms for access assignment. They're most appropriate in situations such as:
-- Employees need time-limited access for a particular task. For example, you might use group-based licensing and a dynamic group to ensure all employees have an Exchange Online mailbox, and then use access packages for situations in which employees need additional access, such as to read departmental resources from another department.
+- Employees need time-limited access for a particular task. For example, you might use group-based licensing and a dynamic group to ensure all employees have an Exchange Online mailbox, and then use access packages for situations in which employees need more access rights, such as rights to read departmental resources from another department.
- Access that requires the approval of an employee's manager or other designated individuals.
- Departments wish to manage their own access policies for their resources without IT involvement.
- Two or more organizations are collaborating on a project, and as a result, multiple users from one organization will need to be brought in via Azure AD B2B to access another organization's resources.
To better understand entitlement management and its documentation, you can refer
| policy | A set of rules that defines the access lifecycle, such as how users get access, who can approve, and how long users have access through an assignment. A policy is linked to an access package. For example, an access package could have two policies - one for employees to request access and a second for external users to request access. |
| resource | An asset, such as an Office group, a security group, an application, or a SharePoint Online site, with a role that a user can be granted permissions to. |
| resource directory | A directory that has one or more resources to share. |
-| resource role | A collection of permissions associated with and defined by a resource. A group has two roles - member and owner. SharePoint sites typically have 3 roles but may have additional custom roles. Applications can have custom roles. |
+| resource role | A collection of permissions associated with and defined by a resource. A group has two roles - member and owner. SharePoint sites typically have three roles but may have other custom roles. Applications can have custom roles. |
## License requirements

[!INCLUDE [Azure AD Premium P2 license](../../../includes/active-directory-p2-license.md)]
-Specialized clouds, such as Azure Germany, and Azure China 21Vianet, are not currently available for use.
+Specialized clouds, such as Azure Germany and Azure China 21Vianet, aren't currently available for use.
### How many licenses must you have?
Here are some example license scenarios to help you determine the number of lice
| Scenario | Calculation | Number of licenses |
| --- | --- | --- |
-| A Global Administrator at Woodgrove Bank creates initial catalogs and delegates administrative tasks to 6 other users. One of the policies specifies that **All employees** (2,000 employees) can request a specific set of access packages. 150 employees request the access packages. | 2,000 employees who **can** request the access packages | 2,000 |
-| A Global Administrator at Woodgrove Bank creates initial catalogs and delegates administrative tasks to 6 other users. One of the policies specifies that **All employees** (2,000 employees) can request a specific set of access packages. Another policy specifies that some users from **Users from partner Contoso** (guests) can request the same access packages subject to approval. Contoso has 30,000 users. 150 employees request the access packages and 10,500 users from Contoso request access. | 2,000 employees need licenses, guest users are billed on a monthly active user basis and no additional licenses are required for them. * | 2,000 |
+| A Global Administrator at Woodgrove Bank creates initial catalogs and delegates administrative tasks to six other users. One of the policies specifies that **All employees** (2,000 employees) can request a specific set of access packages. 150 employees request the access packages. | 2,000 employees who **can** request the access packages | 2,000 |
+| A Global Administrator at Woodgrove Bank creates initial catalogs and delegates administrative tasks to six other users. One of the policies specifies that **All employees** (2,000 employees) can request a specific set of access packages. Another policy specifies that some users from **Users from partner Contoso** (guests) can request the same access packages subject to approval. Contoso has 30,000 users. 150 employees request the access packages and 10,500 users from Contoso request access. | 2,000 employees need licenses, guest users are billed on a monthly active user basis and no additional licenses are required for them. * | 2,000 |
\* Azure AD External Identities (guest user) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model. For more information, see [Billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).

## Next steps

-- If you are interested in using the Azure portal to manage access to resources, see [Tutorial: Manage access to resources - Azure portal](entitlement-management-access-package-first.md).
-- if you are interested in using Microsoft Graph to manage access to resources, see [Tutorial: manage access to resources - Microsoft Graph](/graph/tutorial-access-package-api?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json)
+- If you're interested in using the Azure portal to manage access to resources, see [Tutorial: Manage access to resources - Azure portal](entitlement-management-access-package-first.md).
+- If you're interested in using Microsoft Graph to manage access to resources, see [Tutorial: manage access to resources - Microsoft Graph](/graph/tutorial-access-package-api?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json). A minimal Graph sketch follows this list.
- [Common scenarios](entitlement-management-scenarios.md)
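A minimal sketch of the Graph approach referenced above, assuming the Microsoft Graph PowerShell SDK; it lists the access packages in the tenant:

```powershell
# List access packages via the Microsoft Graph entitlement management API (v1.0).
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
$result = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/accessPackages"
$result.value | ForEach-Object { $_.displayName }
```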
active-directory Entitlement Management Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-process.md
na Previously updated : 5/17/2021 Last updated : 08/01/2022
The following diagrams show when these email notifications are sent to either th
### First approvers and alternate approvers

The following diagram shows the experience of first approvers and alternate approvers, and the email notifications they receive during the request process:
-![First and alternate approvers process flow](./media/entitlement-management-process/first-approvers-and-alternate-with-escalation-flow.png)
### Requestors

The following diagram shows the experience of requestors and the email notifications they receive during the request process:
-![Requestor process flow](./media/entitlement-management-process/requestor-approval-request-flow.png)
### Multi-stage approval

The following diagram shows the experience of stage-1 and stage-2 approvers and the email notifications they receive during the request process:
-![2-stage approval process flow](./media/entitlement-management-process/2stage-approval-with-request-timeout-flow.png)
### Email notifications table

The following table provides more detail about each of these email notifications. To manage these emails, you can use rules. For example, in Outlook, you can create rules to move the emails to a folder if the subject contains words from this table. Note that the words will be based on the default language settings of the tenant where the user is requesting access.
When the request reaches its configured expiration date and expires, it can no l
An email notification is sent to the requestor, notifying them that their access request has expired, and that they need to resubmit the access request. The following diagram shows the experience of the requestor and the email notifications they receive when they request to extend access:
-![Requestor extend access process flow](./media/entitlement-management-process/requestor-expiration-request-flow.png)
Here is a sample email notification that is sent to a requestor when their access request has expired:
active-directory Identity Governance Applications Define https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-define.md
na Previously updated : 6/28/2022 Last updated : 7/28/2022
Once you've identified one or more applications that you want to use Azure AD to
Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. If this application is an existing application in your environment, you may already have documented the access policies for who 'should have access' to this application. If not, you may need to consult with various stakeholders, such as compliance and risk management teams, to ensure that the policies being used to automate access decisions are appropriate for your scenario.
-1. **Collect the roles and permissions that each application provides.** Some applications may have only a single role, for example, an application that only has the role "User". More complex applications may surface multiple roles to be managed through Azure AD. These application roles typically make broad constraints on the access a user with that role would have within the app. For example, an application that has an administrator persona might have two roles, "User" and "Administrator". Other applications may also rely upon group memberships or claims for finer-grained role checks, which can be provided to the application from Azure AD in provisioning or claims issued using federation SSO protocols. Finally, there may be roles that don't surface in Azure AD - perhaps the application doesn't permit defining the administrators in Azure AD, instead relying upon its own authorization rules to identify administrators.
+1. **Collect the roles and permissions that each application provides.** Some applications may have only a single role, for example, an application that only has the role "User". More complex applications may surface multiple roles to be managed through Azure AD. These application roles typically make broad constraints on the access a user with that role would have within the app. For example, an application that has an administrator persona might have two roles, "User" and "Administrator". Other applications may also rely upon group memberships or claims for finer-grained role checks, which can be provided to the application from Azure AD in provisioning or claims issued using federation SSO protocols, or written to AD as a security group membership. Finally, there may be roles that don't surface in Azure AD - perhaps the application doesn't permit defining the administrators in Azure AD, instead relying upon its own authorization rules to identify administrators.
> [!Note]
> If you're using an application from the Azure AD application gallery that supports provisioning, then Azure AD may import roles defined in the application and automatically update the application manifest with the application's roles, once provisioning is configured.
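One way to collect the roles an application surfaces in Azure AD is to read the app roles from its service principal. A minimal sketch, assuming the AzureAD PowerShell module; the application name is a placeholder:

```powershell
# List the app roles defined on an application's service principal.
$sp = Get-AzureADServicePrincipal -SearchString "ExampleApp"   # placeholder name
$sp.AppRoles | Select-Object DisplayName, Value, IsEnabled
```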
active-directory Identity Governance Applications Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-deploy.md
Conditional access is only possible for applications that rely upon Azure AD for
1. **Upload the terms of use (TOU) document, if needed.** If you require users to accept terms of use (TOU) prior to accessing the application, then create and [upload the TOU document](../conditional-access/terms-of-use.md) so that it can be included in a conditional access policy.
1. **Verify users are ready for Azure Active Directory Multi-Factor Authentication.** We recommend requiring Azure AD Multi-Factor Authentication for business critical applications integrated via federation. For these applications, there should be a policy that requires the user to have met a multi-factor authentication requirement prior to Azure AD permitting them to sign into the application. Some organizations may also block access by locations, or [require the user to access from a registered device](../conditional-access/howto-conditional-access-policy-compliant-device.md). If there's no suitable policy already that includes the necessary conditions for authentication, location, device and TOU, then [add a policy to your conditional access deployment](../conditional-access/plan-conditional-access.md).
-1. **Bring the application into scope of the appropriate conditional access policy**. If you have an existing conditional access policy that was created for another application subject to the same governance requirements, you could update that policy to have it apply to this application as well, to avoid having a large number of policies. Once you have made the updates, check to ensure that the expected policies are being applied. You can see what policies would apply to a user with the [Conditional Access what if tool](../conditional-access/troubleshoot-conditional-access-what-if.md).
+1. **Bring the application web endpoint into scope of the appropriate conditional access policy**. If you have an existing conditional access policy that was created for another application subject to the same governance requirements, you could update that policy to have it apply to this application as well, to avoid having a large number of policies. Once you have made the updates, check to ensure that the expected policies are being applied. You can see what policies would apply to a user with the [Conditional Access what if tool](../conditional-access/troubleshoot-conditional-access-what-if.md).
1. **Create a recurring access review if any users will need temporary policy exclusions**. In some cases, it may not be possible to immediately enforce conditional access policies for every authorized user. For example, some users may not have an appropriate registered device. If it's necessary to exclude one or more users from the CA policy and allow them access, then configure an access review for the group of [users who are excluded from Conditional Access policies](../governance/conditional-access-exclusion.md).
1. **Document the token lifetime and applications' session settings.** How long a user who has been denied continued access can continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. The session lifetime for an application depends upon the application itself. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/active-directory-configurable-token-lifetimes.md).
active-directory Identity Governance Applications Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-integrate.md
na Previously updated : 6/28/2022 Last updated : 7/29/2022
Azure AD identity governance can be integrated with many applications, using [st
In order for Azure AD identity governance to be used for an application, the application must first be integrated with Azure AD. An application being integrated with Azure AD means one of two requirements must be met:

* The application relies upon Azure AD for federated SSO, and Azure AD controls authentication token issuance. If Azure AD is the only identity provider for the application, then only users who are assigned to one of the application's roles in Azure AD are able to sign into the application. Those users that lose their application role assignment can no longer get a new token to sign in to the application.
-* The application relies upon user or group lists that are provided to the application by Azure AD. This fulfillment could be done through a provisioning protocol such as SCIM or by the application querying Azure AD via Microsoft Graph.
+* The application relies upon user or group lists that are provided to the application by Azure AD. This fulfillment could be done through a provisioning protocol such as SCIM, by the application querying Azure AD via Microsoft Graph, or the application using AD Kerberos to obtain a user's group memberships.
If neither of those criteria are met for an application, for example when the application doesn't rely upon Azure AD, then identity governance can still be used. However, there may be some limitations using identity governance without meeting the criteria. For instance, users that aren't in your Azure AD, or aren't assigned to the application roles in Azure AD, won't be included in access reviews of the application, until you add them to the application roles. For more information, see [Preparing for an access review of users' access to an application](access-reviews-application-preparation.md).
Next, if the application implements a provisioning protocol, then you should con
|-|--|
| SCIM | Configure an application with SCIM [for user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md) |
- * Otherwise, if this is an on-premises or IaaS hosted application, then configure provisioning to that application, either via SCIM or to the underlying database or directory of the application.
+ * If this application uses AD, then configure group writeback, and either update the application to use the Azure AD-created groups, or nest the Azure AD-created groups into the applications' existing AD security groups.
+
+ |Application supports| Next steps|
+ |-|--|
+ | Kerberos | Configure Azure AD Connect [group writeback to AD](../hybrid/how-to-connect-group-writeback-v2.md), create groups in Azure AD and [write those groups to AD](../enterprise-users/groups-write-back-portal.md) |
+
+ * Otherwise, if this is an on-premises or IaaS hosted application, and is not integrated with AD, then configure provisioning to that application, either via SCIM or to the underlying database or directory of the application.
|Application supports| Next steps|
|-|--|
If this is a new application your organization hasn't used before, and therefore
However, if the application already existed in your environment, then it's possible that users may have gotten access in the past through manual or out-of-band processes, and those users should now be reviewed to have confirmation that their access is still needed and appropriate going forward. We recommend performing an access review of the users who already have access to the application, before enabling policies for more users to be able to request access. This review will set a baseline of all users having been reviewed at least once, to ensure that those users are authorized for continued access.

1. Follow the steps in [Preparing for an access review of users' access to an application](access-reviews-application-preparation.md).
-1. Bring in any [existing users and create application role assignments](identity-governance-applications-existing-users.md) for them.
-1. If the application wasn't integrated for provisioning, then once the review is complete, you may need to manually update the application's internal database or directory to remove those users who were denied.
+1. If the application was not using Azure AD or AD, bring in any [existing users and create application role assignments](identity-governance-applications-existing-users.md) for them. If the application was using AD security groups, then you'll need to review the membership of those security groups.
+1. If the application had its own directory or database and wasn't integrated for provisioning, then once the review is complete, you may need to manually update the application's internal database or directory to remove those users who were denied.
+1. If the application was using AD security groups, and those groups were created in AD, then once the review is complete, you'll need to manually update the AD groups to remove memberships of those users who were denied. Subsequently, to have denied access rights removed automatically, you can either update the application to use an AD group that was created in Azure AD and [written back to Azure AD](../enterprise-users/groups-write-back-portal.md), or move the membership from the AD group to the Azure AD group, and nest the written back group as the only member of the AD group.
1. Once the review has been completed and the application access updated, or if no users have access, then continue on to the next steps to deploy conditional access and entitlement management policies for the application.

Now that you have a baseline that ensures existing access has been reviewed, you can [deploy the organization's policies](identity-governance-applications-deploy.md) for ongoing access and any new access requests.
active-directory What Is Identity Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-is-identity-lifecycle-management.md
Previously updated : 10/30/2020 Last updated : 08/01/2022
# What is identity lifecycle management?
-Identity Governance helps organizations achieve a balance between productivity - How quickly can a person have access to the resources they need, such as when they join my organization? And security - How should their access change over time, such as due to changes to that person's employment status?
+Identity Governance helps organizations achieve a balance between productivity and security. How quickly can a person get access to the resources they need, such as when they join my organization? And how should their access change over time, such as due to changes to that person's employment status?
**Identity lifecycle management** is the foundation for Identity Governance, and effective governance at scale requires modernizing the identity lifecycle management infrastructure for applications. Identity Lifecycle Management aims to automate and manage the entire digital identity lifecycle process.
-![cloud provisioning](media/what-is-provisioning/cloud-1.png)
+![Diagram of cloud provisioning.](media/what-is-provisioning/cloud-1.png)
## What is a digital identity?
-A digital identity is information on an entity used by a one or more computing resources - such as operating systems or applications. These entities may represent people, organizations, applications, or devices. The identity is usually described by the attributes that are associated with it, such as the name, identifiers, as well as properties such as roles used for access management. These attributes help systems make determinations such who has access to what and who is allowed to use this or that system.
+A digital identity is information on an entity used by one or more computing resources, such as operating systems or applications. These entities may represent people, organizations, applications, or devices. The identity is usually described by the attributes that are associated with it, such as the name, identifiers, and properties such as roles used for access management. These attributes help systems make determinations such as who has access to what, and who is allowed to use that resource.
## Managing the lifecycle of digital identities
-Managing digital identities is a complex task, especially as it relates correlating real-world objects, such as a person and their relationship with an organization as an employee of that organization, with a digital representation. In small organizations, keeping the digital representation of individuals who require an identity can be a manual process - when someone is hired, or a contractor arrives, an IT specialist can create an account for them in a directory, and assign them the access they need. However, in mid-size and large organizations, automation can enable the organization to scale more effectively and keep the identities accurate.
+Managing digital identities is a complex task, especially as it relates to correlating real-world objects, such as a person and their relationship with an organization as an employee of that organization, with a digital representation. In small organizations, keeping the digital representation of individuals who require an identity can be a manual process. For example, when someone is hired, or a contractor arrives, an IT specialist can create an account for them in a directory, and assign them the access they need. However, in mid-size and large organizations, automation can enable the organization to scale more effectively and keep the identities accurate.
The typical process for establishing identity lifecycle management in an organization follows these steps:
-1. Determine whether there are already systems of record: data sources which the organization treats as authoritative. For example, the organization may have an HR system Workday, and that system is authoritative for providing the current list of employees, and some of their properties such as the employee's name or department. Or an email system such as Exchange Online may be authoritative for an employee's email address.
+1. Determine whether there are already systems of record: data sources that the organization treats as authoritative. For example, the organization may have an HR system such as Workday, and that system is authoritative for providing the current list of employees and some of their properties, such as the employee's name or department. Or an email system such as Exchange Online may be authoritative for an employee's email address.
-2. Connect those systems of record with one or more directories and databases used by applications, and resolve any inconsistencies between the directories and the systems of record. For example, a directory may have obsolete data, such as an account for a former employee, that is no longer needed.
+2. Connect those systems of record with one or more directories and databases used by applications, and resolve any inconsistencies between the directories and the systems of record. For example, a directory may have obsolete data, such as an account for a former employee that is no longer needed.
-3. Determine what processes can be used to supply authoritative information in the absence of a system of record. For example, if there are digital identities for visitors, but the organization has no database for visitors, then it may be necessary to find an alternate way to determine when an digital identity for a visitor is no longer needed.
+3. Determine what processes can be used to supply authoritative information in the absence of a system of record. For example, if there are digital identities for visitors, but the organization has no database for visitors, then it may be necessary to find an alternate way to determine when a digital identity for a visitor is no longer needed.
4. Ensure that changes from the system of record or other processes are replicated to each of the directories or databases that require an update.

## Identity lifecycle management for representing employees and other individuals with an organizational relationship
-When planning identity lifecycle management for employees, or other individuals with an organizational relationship such as a contractor or student, many organizations model the "join, move, and leave" process. These are:
-
- - Join - when an individual comes into scope of needing access, an identity is needed by those applications, so a new digital identity may need to be created if one is not already available
- - Move - when an individual moves between boundaries that require additional access authorizations to be added or removed to their digital identity
- - Leave- when an individual leaves the scope of needing access, access may need to be removed, and subsequently the identity may no longer be required by applications other than for audit or forensics purposes
+When planning identity lifecycle management for employees, or other individuals with an organizational relationship such as a contractor or student, many organizations model the "join, move, and leave" process as follows:
-So for example, if a new employee joins your organization and that employee has never been affiliated with your organization before, that employee will require a new digital identity, represented as a user account in Azure AD. The creation of this account would fall into a "Joiner" process, which could be automated if there was a system of record such as Workday that could indicate when the new employee starts work. Later, if your organization has an employee move from say, Sales to Marketing, they would fall into a "Mover" process. This would require removing the access rights they had in the Sales organization which they no longer require, and granting them rights in the Marketing organization that they new require.
+- Join - when an individual comes into scope of needing access, an identity is needed by those applications, so a new digital identity may need to be created if one isn't already available
+- Move - when an individual moves between boundaries that require additional access authorizations to be added to or removed from their digital identity
+- Leave - when an individual leaves the scope of needing access, access may need to be removed, and subsequently the identity may no longer be required by applications other than for audit or forensics purposes
+
+So for example, if a new employee joins your organization and that employee has never been affiliated with your organization before, that employee will require a new digital identity, represented as a user account in Azure AD. The creation of this account would fall into a "Joiner" process, which could be automated if there was a system of record such as Workday that could indicate when the new employee starts work. Later, if your organization has an employee move from say, Sales to Marketing, they would fall into a "Mover" process. This move would require removing the access rights they had in the Sales organization, which they no longer require, and granting them rights in the Marketing organization that they now require.
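+
+As a concrete sketch of the account-creation step in a "Joiner" process, the Azure CLI command below creates the new user account in Azure AD. The display name, user principal name, and password here are hypothetical placeholders; in practice this step would be triggered automatically from the system of record rather than run by hand.
+
+```azurecli
+# Hypothetical joiner step: create the new employee's digital identity in Azure AD.
+az ad user create \
+  --display-name "Casey Jensen" \
+  --user-principal-name casey.jensen@contoso.com \
+  --password "<initial-password>"
+```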
## Identity lifecycle management for guests
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-is-provisioning.md
Previously updated : 10/30/2020 Last updated : 08/01/2022
# What is provisioning?
-Provisioning and deprovisioning are the processes that ensure consistency of digital identities across multiple systems. These processes are typically leveraged as part of [identity lifecycle management](what-is-identity-lifecycle-management.md).
+Provisioning and deprovisioning are the processes that ensure consistency of digital identities across multiple systems. These processes are typically used as part of [identity lifecycle management](what-is-identity-lifecycle-management.md).
**Provisioning** is the process of creating an identity in a target system based on certain conditions. **De-provisioning** is the process of removing the identity from the target system when conditions are no longer met. **Synchronization** is the process of keeping the provisioned object up to date, so that the source object and target object are similar.
-For example, when a new employee joins your organization, that employee is entered in to the HR system. At that point, provisioning **from** HR **to** Azure Active Directory (Azure AD) can create a corresponding user account in Azure AD. Applications which query Azure AD can see the account for that new employee. If there are applications that do not use Azure AD, then provisioning **from** Azure AD **to** those applications' databases, ensures that the user will be able to access all of the applications that the user needs access to. This process allows the user to start work and have access to the applications and systems they need on day one. Similarly, when their properties, such as their department or employment status, change in the HR system, synchronization of those updates from the HR system to Azure AD, and furthermore to other applications and target databases, ensures consistency.
+For example, when a new employee joins your organization, that employee is entered into the HR system. At that point, provisioning **from** HR **to** Azure Active Directory (Azure AD) can create a corresponding user account in Azure AD. Applications that query Azure AD can see the account for that new employee. If there are applications that don't use Azure AD, then provisioning **from** Azure AD **to** those applications' databases ensures that the user will be able to access all of the applications that they need. This process allows the user to start work and have access to the applications and systems they need on day one. Similarly, when their properties, such as their department or employment status, change in the HR system, synchronization of those updates from the HR system to Azure AD, and furthermore to other applications and target databases, ensures consistency.
Azure AD currently provides three areas of automated provisioning. They are:

- Provisioning from HR to Azure AD, via **[HR-driven provisioning](#hr-driven-provisioning)**
- Provisioning from Azure AD to applications, via **[App provisioning](#app-provisioning)**
- Provisioning between Azure AD and Active Directory domain services, via **[inter-directory provisioning](#inter-directory-provisioning)**
-![identity lifecycle management](media/what-is-provisioning/provisioning.png)
+![Diagram of identity lifecycle management.](media/what-is-provisioning/provisioning.png)
## HR-driven provisioning
-![HR provisioning](media/what-is-provisioning/cloud-2a.png)
+![Diagram of HR provisioning.](media/what-is-provisioning/cloud-2a.png)
Provisioning from HR to Azure AD involves the creation of objects, typically user identities representing each employee, but in some cases other objects representing departments or other structures, based on the information that is in your HR system.
-The most common scenario would be, when a new employee joins your company, they are entered into the HR system. Once that occurs, they are automatically provisioned as a new user in Azure AD, without needing administrative involvement for each new hire. In general, provisioning from HR can cover the following scenarios.
+The most common scenario is that when a new employee joins your company, they're entered into the HR system. Once that occurs, they're automatically provisioned as a new user in Azure AD, without needing administrative involvement for each new hire. In general, provisioning from HR can cover the following scenarios.
-- **Hiring new employees** - When a new employee is added to a HR system, a user account is automatically created in Active Directory, Azure AD, and optionally in the directories for other applications supported by Azure AD, with write-back of the email address to the HR system.
+- **Hiring new employees** - When a new employee is added to an HR system, a user account is automatically created in Active Directory, Azure AD, and optionally in the directories for other applications supported by Azure AD, with write-back of the email address to the HR system.
- **Employee attribute and profile updates** - When an employee record is updated in that HR system (such as their name, title, or manager), their user account will be automatically updated in Active Directory, Azure AD, and optionally other applications supported by Azure AD.
- **Employee terminations** - When an employee is terminated in HR, their user account is automatically blocked from sign-in or removed in Active Directory, Azure AD, and in other applications.
-- **Employee rehires** - When an employee is rehired in cloud HR, their old account can be automatically reactivated or re-provisioned (depending on your preference).
+- **Employee rehires** - When an employee is rehired in cloud HR, their old account can be automatically reactivated or reprovisioned (depending on your preference).
There are three deployment options for HR-driven provisioning with Azure AD:
-1. For organizations with a single subscription to Workday or SuccessFactors, and do not use Active Directory
+1. For organizations with a single subscription to Workday or SuccessFactors, and don't use Active Directory
+1. For organizations with a single subscription to Workday or SuccessFactors that don't use Active Directory
1. For organizations with a single subscription to Workday or SuccessFactors that have both Active Directory and Azure AD
1. For organizations with multiple HR systems, or an on-premises HR system such as SAP, Oracle eBusiness, or PeopleSoft
For more information, see [What is HR driven provisioning?](../app-provisioning/what-is-hr-driven-provisioning.md)
## App provisioning
-![app provisioning](media/what-is-provisioning/cloud-3b.png)
+![Diagram that shows the app provisioning flow.](media/what-is-provisioning/cloud-3b.png)
In Azure AD, the term **[app provisioning](../app-provisioning/user-provisioning.md)** refers to automatically creating copies of user identities in the applications that users need access to, for applications that have their own data store, distinct from Azure AD or Active Directory. In addition to creating user identities, app provisioning includes the maintenance and removal of user identities from those apps, as the user's status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md), as each of these applications have their own user repository distinct from Azure AD.
For more information, see [What is app provisioning?](../app-provisioning/user-provisioning.md)
## Inter-directory provisioning
-![inter-directory provisioning](media/what-is-provisioning/cloud-4a.png)
+![Diagram that shows the inter-directory provisioning.](media/what-is-provisioning/cloud-4a.png)
Many organizations rely upon both Active Directory and Azure AD, and may have applications connected to Active Directory, such as on-premises file servers.
-As many organizations historically have deployed HR-driven provisioning on-premises, they may already have user identities for all their employees in Active Directory. The most common scenario for inter-directory provisioning is when a user already in Active Directory is provisioned into Azure AD. This provisioning is usually accomplished by Azure AD Connect sync or Azure AD Connect cloud provisioning.
+As many organizations historically have deployed HR-driven provisioning on-premises, they may already have user identities for all their employees in Active Directory. The most common scenario for inter-directory provisioning is when a user already in Active Directory is provisioned into Azure AD. This provisioning is usually accomplished by Azure AD Connect sync or Azure AD Connect cloud provisioning.
-In addition, organizations may wish to also provision to on-premises systems from Azure AD. For example, an organization may have brought guests into the Azure AD directory, but those guests will need access to on-premises Windows Integrated Authentication (WIA) based web applications via the app proxy. This requires the provisioning of on-premises AD accounts for those users in Azure AD.
+In addition, organizations may wish to also provision to on-premises systems from Azure AD. For example, an organization may have brought guests into the Azure AD directory, but those guests will need access to on-premises Windows Integrated Authentication (WIA) based web applications via the app proxy. This scenario requires the provisioning of on-premises AD accounts for those users in Azure AD.
For more information, see [What is inter-directory provisioning?](../hybrid/what-is-inter-directory-provisioning.md)
-## Next steps
+## Next steps
+
+- [What is identity lifecycle management?](what-is-identity-lifecycle-management.md)
+- [What is HR driven provisioning?](../app-provisioning/what-is-hr-driven-provisioning.md)
+- [What is app provisioning?](../app-provisioning/user-provisioning.md)
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
Ensure ports 22, 9000 and 1194 are open to connect to the API server. Check whet
The minimum supported TLS version in AKS is TLS 1.2.
+## I'm using an alias minor version, but I can't upgrade within the same minor version. Why?
+
+When upgrading by alias minor version, only a higher minor version is supported. For example, upgrading from 1.14.x to 1.14 will not trigger an upgrade to the latest GA 1.14 patch, but upgrading to 1.15 will trigger an upgrade to the latest GA 1.15 patch.
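+
+For example, with Azure CLI (the cluster and resource group names below are placeholders), the following command requests an upgrade by alias minor version and resolves to the latest GA 1.15 patch:
+
+```azurecli
+# Upgrade by alias minor version: moves the cluster to the latest GA 1.15 patch.
+az aks upgrade \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --kubernetes-version 1.15
+```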
+## My application is failing with `argument list too long`

You may receive an error message similar to:
azure-monitor Diagnostics Extension Stream Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-stream-event-hubs.md
The data collected from the guest operating system that can be sent to Event Hub
* Windows diagnostics extension 1.6 or higher. See [Azure Diagnostics extension configuration schema versions and history](diagnostics-extension-versions.md) for a version history and [Azure Diagnostics extension overview](diagnostics-extension-overview.md) for supported resources.
* Event Hubs namespace must always be provisioned. See [Get started with Event Hubs](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) for details.
+* The event hub must be at least the Standard tier. The Basic tier isn't supported; see the sketch below.
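+
+As a sketch, an Event Hubs namespace at the required tier could be created with Azure CLI as follows (the names and location are placeholders):
+
+```azurecli
+# Create an Event Hubs namespace at the Standard tier; the Basic tier
+# isn't supported for this scenario.
+az eventhubs namespace create \
+  --resource-group myResourceGroup \
+  --name myDiagnosticsNamespace \
+  --location eastus \
+  --sku Standard
+```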
## Configuration schema
azure-monitor Alerts Smart Detections Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smart-detections-migration.md
You can create and manage action groups for the new smart detection alert rules
After completing the migration, you can use Azure Resource Manager templates to configure settings for smart detection alert rules.

> [!NOTE]
-> After completion of migration, smart detection settings must be configured using smart detection alert rule templates, and can no longer be configured using the [Application Insights Resource Manager template](../app/proactive-arm-config.md#smart-detection-rule-configuration).
+> After completion of migration, smart detection settings must be configured using smart detection alert rule templates, and can no longer be configured using the [Application Insights Resource Manager template](./proactive-arm-config.md#smart-detection-rule-configuration).
This Azure Resource Manager template example demonstrates configuring a **Response Latency Degradation** alert rule in an **Enabled** state with a severity of 2.

* Smart detection is a global service, therefore the rule location is created in the **global** location.
## Next Steps

- [Learn more about alerts in Azure](./alerts-overview.md)
-- [Learn more about smart detection in Application Insights](../app/proactive-diagnostics.md)
+- [Learn more about smart detection in Application Insights](./proactive-diagnostics.md)
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
Smart detection works for any web app, hosted in the cloud or on your own server
## Next steps

- Get an [overview of alerts](alerts-overview.md).
- [Create an alert rule](alerts-log.md).
-- Learn more about [Smart Detection](../app/proactive-failure-diagnostics.md).
+- Learn more about [Smart Detection](proactive-failure-diagnostics.md).
azure-monitor Proactive Application Security Detection Pack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-application-security-detection-pack.md
+
+ Title: Security detection Pack with Azure Application Insights
+description: Monitor application with Azure Application Insights and smart detection for potential security issues.
+ Last updated : 12/12/2017+++
+# Application security detection pack (preview)
+
+Smart detection automatically analyzes the telemetry generated by your application and detects potential security issues. You can mitigate these issues by fixing the application, or by taking the necessary security measures.
+
+This feature requires no special setup, other than [configuring your app to send telemetry](../app/usage-overview.md).
+
+## When would I get this type of smart detection notification?
+There are three types of security issues that are detected:
+1. Insecure URL access: a URL in the application is accessible via both HTTP and HTTPS. Typically, a URL that accepts HTTPS requests shouldn't accept HTTP requests. This detection may indicate a bug or security issue in your application.
+2. Insecure form: a form (or other "POST" request) in the application uses HTTP instead of HTTPS. Using HTTP can compromise the user data that is sent by the form.
+3. Suspicious user activity: the same user accesses the application from multiple countries or regions, around the same time. For example, the same user accessed the application from Spain and the United States within the same hour. This detection indicates a potentially malicious access attempt to your application.
+
+## Does my app definitely have a security issue?
+A notification doesn't mean that your app definitely has a security issue. A detection of any of the scenarios above can, in many cases, indicate a security issue. In other cases, the detection may have a natural business justification, and can be ignored.
+
+## How do I fix the "Insecure URL access" detection?
+1. **Triage.** The notification provides the number of users who accessed insecure URLs, and the URL that was most affected by insecure access. This information can help you assign a priority to the problem.
+2. **Scope.** What percentage of the users accessed insecure URLs? How many URLs were affected? This information can be obtained from the notification.
+3. **Diagnose.** The detection provides the list of insecure requests, and the lists of URLs and users that were affected, to help you further diagnose the issue.
+
+## How do I fix the "Insecure form" detection?
+1. **Triage.** The notification provides the number of insecure forms, and number of users whose data was potentially compromised. This information can help you assign a priority to the problem.
+2. **Scope.** Which form was involved in the largest number of insecure transmissions, and what is the distribution of insecure transmissions over time? This information can be obtained from the notification.
+3. **Diagnose.** The detection provides the list of insecure forms, and a breakdown of insecure transmissions for each form, to help you further diagnose the issue.
+
+## How do I fix the "Suspicious user activity" detection?
+1. **Triage.** The notification provides the number of different users that presented the suspicious behavior. This information can help you assign a priority to the problem.
+2. **Scope.** From which countries or regions did the suspicious requests originate? Which user was the most suspicious? This information can be obtained from the notification.
+3. **Diagnose.** The detection provides the list of suspicious users and the list of countries or regions for each user, to help you further diagnose the issue.
azure-monitor Proactive Arm Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-arm-config.md
+
+ Title: Smart detection rule settings - Azure Application Insights
+description: Automate management and configuration of Azure Application Insights smart detection rules with Azure Resource Manager Templates
+ Last updated : 02/14/2021++
+# Manage Application Insights smart detection rules using Azure Resource Manager templates
+
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, enabling multiple methods of taking actions or triggering notifications on new detections.
+>
+> See [Smart Detection Alerts migration](./alerts-smart-detections-migration.md) for more details on the migration process and the behavior of smart detection after the migration.
+>
+
+Smart detection rules in Application Insights can be managed and configured using [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md).
+This method can be used when deploying new Application Insights resources with Azure Resource Manager automation, or for modifying the settings of existing resources.
+
+## Smart detection rule configuration
+
+You can configure the following settings for a smart detection rule:
+- If the rule is enabled (the default is **true**).
+- If emails should be sent to users associated with the subscription's [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) roles when a detection is found (the default is **true**).
+- Any additional email recipients who should get a notification when a detection is found.
+ - Email configuration is not available for Smart Detection rules marked as _preview_.
+
+To allow configuring the rule settings via Azure Resource Manager, the smart detection rule configuration is now available as an inner resource within the Application Insights resource, named **ProactiveDetectionConfigs**.
+For maximal flexibility, each smart detection rule can be configured with unique notification settings.
+
+## Examples
+
+Below are a few examples showing how to configure the settings of smart detection rules using Azure Resource Manager templates.
+All samples refer to an Application Insights resource named _"myApplication"_, and to the "long dependency duration" smart detection rule, which is internally named _"longdependencyduration"_.
+Make sure to replace the Application Insights resource name, and to specify the relevant smart detection rule internal name. Check the table below for a list of the corresponding internal Azure Resource Manager names for each smart detection rule.
+
+### Disable a smart detection rule
+
+```json
+{
+ "apiVersion": "2018-05-01-preview",
+ "name": "myApplication",
+ "type": "Microsoft.Insights/components",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "Application_Type": "web"
+ },
+ "resources": [
+ {
+ "apiVersion": "2018-05-01-preview",
+ "name": "longdependencyduration",
+ "type": "ProactiveDetectionConfigs",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', 'myApplication')]"
+ ],
+ "properties": {
+ "name": "longdependencyduration",
+ "sendEmailsToSubscriptionOwners": true,
+ "customEmails": [],
+ "enabled": false
+ }
+ }
+ ]
+ }
+```
+
+### Disable sending email notifications for a smart detection rule
+
+```json
+{
+ "apiVersion": "2018-05-01-preview",
+ "name": "myApplication",
+ "type": "Microsoft.Insights/components",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "Application_Type": "web"
+ },
+ "resources": [
+ {
+ "apiVersion": "2018-05-01-preview",
+ "name": "longdependencyduration",
+ "type": "ProactiveDetectionConfigs",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', 'myApplication')]"
+ ],
+ "properties": {
+ "name": "longdependencyduration",
+ "sendEmailsToSubscriptionOwners": false,
+ "customEmails": [],
+ "enabled": true
+ }
+ }
+ ]
+ }
+```
+
+### Add additional email recipients for a smart detection rule
+
+```json
+{
+ "apiVersion": "2018-05-01-preview",
+ "name": "myApplication",
+ "type": "Microsoft.Insights/components",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "Application_Type": "web"
+ },
+ "resources": [
+ {
+ "apiVersion": "2018-05-01-preview",
+ "name": "longdependencyduration",
+ "type": "ProactiveDetectionConfigs",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', 'myApplication')]"
+ ],
+ "properties": {
+ "name": "longdependencyduration",
+ "sendEmailsToSubscriptionOwners": true,
+ "customEmails": ["alice@contoso.com", "bob@contoso.com"],
+ "enabled": true
+ }
+ }
+ ]
+ }
+
+```
++
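+These resource definitions deploy like any other Azure Resource Manager template. As a minimal sketch with Azure CLI, assuming one of the snippets above has been embedded in a complete template file named smart-detection-rules.json (the file and resource group names are placeholders):
+
+```azurecli
+# Deploy a template that contains the Application Insights resource and its
+# ProactiveDetectionConfigs inner resource.
+az deployment group create \
+  --resource-group myResourceGroup \
+  --template-file smart-detection-rules.json
+```
+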
+## Smart detection rule names
+
+Below is a table of smart detection rule names as they appear in the portal, along with the internal names to use in the Azure Resource Manager template.
+
+> [!NOTE]
+> Smart detection rules marked as _preview_ don't support email notifications. Therefore, you can only set the _enabled_ property for these rules.
+
+| Azure portal rule name | Internal name |
+|:--|:--|
+| Slow page load time | slowpageloadtime |
+| Slow server response time | slowserverresponsetime |
+| Long dependency duration | longdependencyduration |
+| Degradation in server response time | degradationinserverresponsetime |
+| Degradation in dependency duration | degradationindependencyduration |
+| Degradation in trace severity ratio (preview) | extension_traceseveritydetector |
+| Abnormal rise in exception volume (preview) | extension_exceptionchangeextension |
+| Potential memory leak detected (preview) | extension_memoryleakextension |
+| Potential security issue detected (preview) | extension_securityextensionspackage |
+| Abnormal rise in daily data volume (preview) | extension_billingdatavolumedailyspikeextension |
+
+### Failure Anomalies alert rule
+
+This Azure Resource Manager template demonstrates configuring a Failure Anomalies alert rule with a severity of 2.
+
+> [!NOTE]
+> Failure Anomalies is a global service, therefore the rule location is created in the **global** location.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "microsoft.alertsmanagement/smartdetectoralertrules",
+ "apiVersion": "2019-03-01",
+ "name": "Failure Anomalies - my-app",
+ "location": "global",
+ "properties": {
+ "description": "Failure Anomalies notifies you of an unusual rise in the rate of failed HTTP requests or dependency calls.",
+ "state": "Enabled",
+ "severity": "2",
+ "frequency": "PT1M",
+ "detector": {
+ "id": "FailureAnomaliesDetector"
+ },
+ "scope": ["/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/MyResourceGroup/providers/microsoft.insights/components/my-app"],
+ "actionGroups": {
+ "groupIds": ["/subscriptions/00000000-1111-2222-3333-444444444444/resourcegroups/MyResourceGroup/providers/microsoft.insights/actiongroups/MyActionGroup"]
+ }
+ }
+ }
+ ]
+}
+```
+
+> [!NOTE]
+> This Azure Resource Manager template is unique to the Failure Anomalies alert rule and is different from the other classic Smart Detection rules described in this article. If you want to manage Failure Anomalies manually, this is done in Azure Monitor Alerts, whereas all other Smart Detection rules are managed in the Smart Detection pane of the UI.
+
+## Next Steps
+
+Learn more about automatically detecting:
+
+- [Failure anomalies](./proactive-failure-diagnostics.md)
+- [Memory Leaks](./proactive-potential-memory-leak.md)
+- [Performance anomalies](./proactive-performance-diagnostics.md)
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md
+
+ Title: Smart detection in Azure Application Insights | Microsoft Docs
+description: Application Insights performs automatic deep analysis of your app telemetry and warns you of potential problems.
+ Last updated : 02/07/2019+++
+# Smart detection in Application Insights
+
+>[!NOTE]
+>You can migrate smart detection on your Application Insights resource to be based on alerts. The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
+>
+> For more information, see [Smart Detection Alerts migration](./alerts-smart-detections-migration.md).
+
+Smart detection automatically warns you of potential performance problems and failure anomalies in your web application. It performs proactive analysis of the telemetry that your app sends to [Application Insights](../app/app-insights-overview.md). If there is a sudden rise in failure rates, or abnormal patterns in client or server performance, you get an alert. This feature needs no configuration. It operates if your application sends enough telemetry.
+
+You can access the detections issued by smart detection both from the emails you receive, and from the smart detection blade.
+
+## Review your smart detections
+You can discover detections in two ways:
+
+* **You receive an email** from Application Insights. Here's a typical example:
+
+ ![Email alert](./media/proactive-diagnostics/03.png)
+
+ Click the large button to open more detail in the portal.
+* **The smart detection blade** in Application Insights. Select **Smart detection** under the **Investigate** menu to see a list of recent detections.
+
+![View recent detections](./media/proactive-diagnostics/04.png)
+
+Select a detection to view its details.
+
+## What problems are detected?
+
+Smart detection detects and notifies about various issues, such as:
+
+* [Smart detection - Failure Anomalies](./proactive-failure-diagnostics.md). We use machine learning to set the expected rate of failed requests for your app, correlating with load and other factors. It notifies you if the failure rate goes outside the expected envelope.
+* [Smart detection - Performance Anomalies](./proactive-performance-diagnostics.md). Notifies if response time of an operation or dependency duration is slowing down, compared to historical baseline. It also notifies if we identify an anomalous pattern in response time, or page load time.
+* General degradations and issues, like [Trace degradation](./proactive-trace-severity.md), [Memory leak](./proactive-potential-memory-leak.md), [Abnormal rise in Exception volume](./proactive-exception-volume.md) and [Security anti-patterns](./proactive-application-security-detection-pack.md).
+
+(The help links in each notification take you to the relevant articles.)
+
+## Smart detection email notifications
+
+All smart detection rules, except for rules marked as _preview_, are configured by default to send email notifications when detections are found.
+
+To configure email notifications for a specific smart detection rule, open the smart detection **Settings** blade and select the rule, which opens the **Edit rule** blade.
+
+Alternatively, you can change the configuration using Azure Resource Manager templates. For more information, see [Manage Application Insights smart detection rules using Azure Resource Manager templates](./proactive-arm-config.md).
++
+## Next steps
+These diagnostic tools help you inspect the telemetry from your app:
+
+* [Metric explorer](../essentials/metrics-charts.md)
+* [Search explorer](../app/diagnostic-search.md)
+* [Analytics - powerful query language](../logs/log-analytics-tutorial.md)
+
+Smart Detection is automatic. But maybe you'd like to set up some more alerts?
+
+* [Manually configured metric alerts](./alerts-log.md)
+* [Availability web tests](../app/monitor-web-app-availability.md)
azure-monitor Proactive Email Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-email-notification.md
+
+ Title: Smart Detection notification change - Azure Application Insights
+description: Change to the default notification recipients from Smart Detection. Smart Detection lets you monitor application traces with Azure Application Insights for unusual patterns in trace telemetry.
+ Last updated : 02/14/2021++
+# Smart Detection email notification change
+
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, enabling multiple methods of taking actions or triggering notifications on new detections.
+>
+> See [Smart Detection Alerts migration](./alerts-smart-detections-migration.md) for more details on the migration process and the behavior of smart detection after the migration.
+
+Based on customer feedback, on April 1, 2019, we're changing the default roles that receive email notifications from Smart Detection.
+
+## What is changing?
+
+Currently, Smart Detection email notifications are sent by default to the _Subscription Owner_, _Subscription Contributor_, and _Subscription Reader_ roles. These roles often include users who are not actively involved in monitoring, which causes many of these users to receive notifications unnecessarily. To improve this experience, we are making a change so that email notifications only go to the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) roles by default.
+
+## Scope of this change
+
+This change will affect all Smart Detection rules, excluding the following ones:
+
+* Smart Detection rules marked as preview. These Smart Detection rules don't support email notifications today.
+
+* Failure Anomalies rule.
+
+## How to prepare for this change?
+
+To ensure that email notifications from Smart Detection are sent to relevant users, those users must be assigned to the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) or [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) roles of the subscription.
+
+To assign users to the Monitoring Reader or Monitoring Contributor roles via the Azure portal, follow the steps described in the [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md) article. Make sure to select the _Monitoring Reader_ or _Monitoring Contributor_ as the role to which users are assigned.
+
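+For the same assignment with Azure CLI, a minimal sketch (the user and subscription ID are placeholders):
+
+```azurecli
+# Grant a user the Monitoring Reader role at subscription scope so they
+# continue to receive Smart Detection email notifications.
+az role assignment create \
+  --assignee alice@contoso.com \
+  --role "Monitoring Reader" \
+  --scope /subscriptions/<subscription-id>
+```
+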
+> [!NOTE]
+> Specific recipients of Smart Detection notifications, configured using the _Additional email recipients_ option in the rule settings, will not be affected by this change. These recipients will continue receiving the email notifications.
+
+If you have any questions or concerns about this change, don't hesitate to [contact us](mailto:smart-alert-feedback@microsoft.com).
+
+## Next steps
+
+Learn more about Smart Detection:
+
+- [Failure anomalies](./proactive-failure-diagnostics.md)
+- [Memory Leaks](./proactive-potential-memory-leak.md)
+- [Performance anomalies](./proactive-performance-diagnostics.md)
+
azure-monitor Proactive Exception Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-exception-volume.md
+
+ Title: Abnormal rise in exception volume - Azure Application Insights
+description: Monitor application exceptions with smart detection in Azure Application Insights for unusual patterns in exception volume.
+ Last updated : 12/08/2017++
+# Abnormal rise in exception volume (preview)
+
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, enabling multiple methods of taking actions or triggering notifications on new detections.
+>
+> For more information, see [Smart Detection Alerts migration](./alerts-smart-detections-migration.md).
+
+Smart detection automatically analyzes the exceptions thrown in your application, and can warn you about unusual patterns in your exception telemetry.
+
+This feature requires no special setup, other than [configuring exception reporting](../app/asp-net-exceptions.md#set-up-exception-reporting) for your app. It's active when your app generates enough exception telemetry.
+
+## When would I get this type of smart detection notification?
+You get this type of notification if your app shows an abnormal rise in the number of exceptions of a specific type during a day. This number is compared to a baseline calculated over the previous seven days.
+Machine learning algorithms are used for detecting the rise in exception count, while taking into account a natural growth in your application usage.
+
+## Does my app definitely have a problem?
+No, a notification doesn't mean that your app definitely has a problem. Although an excessive number of exceptions usually indicates an application issue, these exceptions might be benign and handled correctly by your application.
+
+## How do I fix it?
+The notifications include diagnostic information to assist in the diagnostic process:
+1. **Triage.** The notification shows you how many users or how many requests are affected. This information can help you assign a priority to the problem.
+2. **Scope.** Is the problem affecting all traffic, or just some operation? This information can be obtained from the notification.
+3. **Diagnose.** The detection contains information about the method from which the exception was thrown, and the exception type. You can also use the related items and reports linking to supporting information, to help you further diagnose the issue.
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md
+
+ Title: Smart Detection of Failure Anomalies in Application Insights | Microsoft Docs
+description: Alerts you to unusual changes in the rate of failed requests to your web app, and provides diagnostic analysis. No configuration is needed.
+ Last updated : 12/18/2018+++
+# Smart Detection - Failure Anomalies
+[Application Insights](../app/app-insights-overview.md) automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests. It detects an unusual rise in the rate of HTTP requests or dependency calls that are reported as failed. Failed requests usually have response codes of 400 or higher. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature needs no setup or configuration, as it uses machine learning algorithms to predict the normal failure rate.
+
+This feature works for any web app, hosted in the cloud or on your own servers, that generates application request or dependency data. For example, if you have a worker role that calls [TrackRequest()](../app/api-custom-events-metrics.md#trackrequest) or [TrackDependency()](../app/api-custom-events-metrics.md#trackdependency).
+
+After setting up [Application Insights for your project](../app/app-insights-overview.md), and if your app generates a certain minimum amount of data, Smart Detection of Failure Anomalies takes 24 hours to learn the normal behavior of your app, before it is switched on and can send alerts.
+
+Here's a sample alert:
++
+The alert details will tell you:
+
+* The failure rate compared to normal app behavior.
+* How many users are affected - so you know how much to worry.
+* A characteristic pattern associated with the failures. In this example, there's a particular response code, request name (operation), and application version. That immediately tells you where to start looking in your code. Other possibilities could be a specific browser or client operating system.
+* The exception, log traces, and dependency failure (databases or other external components) that appear to be associated with the characterized failures.
+* Links directly to relevant searches on the data in Application Insights.
+
+## Benefits of Smart Detection
+Ordinary [metric alerts](./alerts-log.md) tell you there might be a problem. But Smart Detection starts the diagnostic work for you, performing much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, helping you to get quickly to the root of the problem.
+
+## How it works
+Smart Detection monitors the data received from your app, and in particular the failure rates. This rule counts the number of requests for which the `Successful request` property is false, and the number of dependency calls for which the `Successful call` property is false. For requests, by default, `Successful request == (resultCode < 400)` (unless you have written custom code to [filter](../app/api-filtering-sampling.md#filtering) or generate your own [TrackRequest](../app/api-custom-events-metrics.md#trackrequest) calls).
+
+Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies.
+
+As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If an abnormal rise in failure rate is observed by comparison with previous performance, an analysis is triggered.
+
+When an analysis is triggered, the service performs a cluster analysis on the failed requests, to try to identify a pattern of values that characterizes the failures.
+
+In the example above, the analysis has discovered that most failures are about a specific result code, request name, Server URL host, and role instance.
+
+When your service is instrumented with these calls, the analyzer looks for an exception and a dependency failure that are associated with requests in the cluster it has identified, together with an example of any trace logs associated with those requests.
+
+The resulting analysis is sent to you as an alert, unless you have configured it not to be sent.
+
+Like the [alerts you set manually](./alerts-log.md), you can inspect the state of the fired alert, which can be resolved if the issue is fixed. Configure the alert rules in the Alerts page of your Application Insights resource. But unlike other alerts, you don't need to set up or configure Smart Detection. If you want, you can disable it or change its target email addresses.
+
+### Alert logic details
+
+The alerts are triggered by our proprietary machine learning algorithm so we can't share the exact implementation details. With that said, we understand that you sometimes need to know more about how the underlying logic works. The primary factors that are evaluated to determine if an alert should be triggered are:
+
+* Analysis of the failure percentage of requests/dependencies in a rolling time window of 20 minutes.
+* A comparison of the failure percentage of the last 20 minutes to the rate in the last 40 minutes and the past seven days, looking for significant deviations that exceed X times the standard deviation.
+* Using an adaptive limit for the minimum failure percentage, which varies based on the app's volume of requests/dependencies.
+* There is logic that can automatically resolve the fired alert monitor condition, if the issue is no longer detected for 8-24 hours.
+ Note: in the current design, a notification or action will not be sent when a Smart Detection alert is resolved. You can check if a Smart Detection alert was resolved in the Azure portal.
+
+## Configure alerts
+
+You can disable the Smart Detection alert rule from the portal or by using Azure Resource Manager ([see template example](./proactive-arm-config.md)).
+
+This alert rule is created with an associated [Action Group](./action-groups.md) named "Application Insights Smart Detection" that contains email and webhook actions, and can be extended to trigger additional actions when the alert fires.
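+
+As a sketch, an additional action group could be created with Azure CLI and then attached to the alert rule, for example through the rule's actionGroups property in a Resource Manager template (all names here are placeholders):
+
+```azurecli
+# Create an action group with an email action that the Failure Anomalies
+# alert rule can trigger in addition to its default actions.
+az monitor action-group create \
+  --resource-group myResourceGroup \
+  --name MyActionGroup \
+  --action email oncall oncall@contoso.com
+```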
+
+> [!NOTE]
+> Email notifications sent from this alert rule are now sent by default to users associated with the subscription's Monitoring Reader and Monitoring Contributor roles. For more information, see [Smart Detection email notification change](./proactive-email-notification.md).
+> Notifications sent from this alert rule follow the [common alert schema](./alerts-common-schema.md).
+>
+
+Open the Alerts page. Failure Anomalies alert rules are included along with any alerts that you have set manually, and you can see whether the rule is currently in the alert state.
++
+Click the alert to configure it.
++
+## Delete alerts
+
+You can disable or delete a Failure Anomalies alert rule, but once deleted you can't create another one for the same Application Insights resource.
+
+Notice that if you delete an Application Insights resource, the associated Failure Anomalies alert rule doesn't get deleted automatically. You can do so manually on the Alert rules page or with the following Azure CLI command:
+
+```azurecli
+az resource delete --ids <Resource ID of Failure Anomalies alert rule>
+```
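+
+To find the resource ID of the rule, you can list the Smart Detection (smart detector) alert rules in a resource group; as a sketch, with the resource group name as a placeholder:
+
+```azurecli
+# List Failure Anomalies alert rules and print their resource IDs.
+az resource list \
+  --resource-group myResourceGroup \
+  --resource-type microsoft.alertsmanagement/smartdetectoralertrules \
+  --query "[].id" --output tsv
+```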
+
+## Example of Failure Anomalies alert webhook payload
+
+```json
+{
+ "properties": {
+ "essentials": {
+ "severity": "Sev3",
+ "signalType": "Log",
+ "alertState": "New",
+ "monitorCondition": "Resolved",
+ "monitorService": "Smart Detector",
+ "targetResource": "/subscriptions/4f9b81be-fa32-4f96-aeb3-fc5c3f678df9/resourcegroups/test-group/providers/microsoft.insights/components/test-rule",
+ "targetResourceName": "test-rule",
+ "targetResourceGroup": "test-group",
+ "targetResourceType": "microsoft.insights/components",
+ "sourceCreatedId": "1a0a5b6436a9b2a13377f5c89a3477855276f8208982e0f167697a2b45fcbb3e",
+ "alertRule": "/subscriptions/4f9b81be-fa32-4f96-aeb3-fc5c3f678df9/resourcegroups/test-group/providers/microsoft.alertsmanagement/smartdetectoralertrules/failure anomalies - test-rule",
+ "startDateTime": "2019-10-30T17:52:32.5802978Z",
+ "lastModifiedDateTime": "2019-10-30T18:25:23.1072443Z",
+ "monitorConditionResolvedDateTime": "2019-10-30T18:25:26.4440603Z",
+ "lastModifiedUserName": "System",
+ "actionStatus": {
+ "isSuppressed": false
+ },
+ "description": "Failure Anomalies notifies you of an unusual rise in the rate of failed HTTP requests or dependency calls."
+ },
+ "context": {
+ "DetectionSummary": "An abnormal rise in failed request rate",
+ "FormattedOccurenceTime": "2019-10-30T17:50:00Z",
+ "DetectedFailureRate": "50.0% (200/400 requests)",
+ "NormalFailureRate": "0.0% (over the last 30 minutes)",
+ "FailureRateChart": [
+ [
+ "2019-10-30T05:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T05:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T06:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T06:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T06:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T07:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T07:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T07:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T08:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T08:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T08:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T17:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T17:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T09:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T09:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T09:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T10:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T10:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T10:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T11:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T11:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T11:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T12:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T12:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T12:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T13:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T13:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T13:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T14:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T14:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T14:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T15:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T15:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T15:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T16:00:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T16:20:00Z",
+ 0
+ ],
+ [
+ "2019-10-30T16:40:00Z",
+ 100
+ ],
+ [
+ "2019-10-30T17:30:00Z",
+ 50
+ ]
+ ],
+ "ArmSystemEventsRequest": "/subscriptions/4f9b81be-fa32-4f96-aeb3-fc5c3f678df9/resourceGroups/test-group/providers/microsoft.insights/components/test-rule/query?query=%0d%0a++++++++++++++++systemEvents%0d%0a++++++++++++++++%7c+where+timestamp+%3e%3d+datetime(%272019-10-30T17%3a20%3a00.0000000Z%27)+%0d%0a++++++++++++++++%7c+where+itemType+%3d%3d+%27systemEvent%27+and+name+%3d%3d+%27ProactiveDetectionInsight%27+%0d%0a++++++++++++++++%7c+where+dimensions.InsightType+in+(%275%27%2c+%277%27)+%0d%0a++++++++++++++++%7c+where+dimensions.InsightDocumentId+%3d%3d+%27718fb0c3-425b-4185-be33-4311dfb4deeb%27+%0d%0a++++++++++++++++%7c+project+dimensions.InsightOneClassTable%2c+%0d%0a++++++++++++++++++++++++++dimensions.InsightExceptionCorrelationTable%2c+%0d%0a++++++++++++++++++++++++++dimensions.InsightDependencyCorrelationTable%2c+%0d%0a++++++++++++++++++++++++++dimensions.InsightRequestCorrelationTable%2c+%0d%0a++++++++++++++++++++++++++dimensions.InsightTraceCorrelationTable%0d%0a++++++++++++&api-version=2018-04-20",
+ "LinksTable": [
+ {
+ "Link": "<a href=\"https://portal.azure.com/#blade/AppInsightsExtension/ProactiveDetectionFeedBlade/ComponentId/{\"SubscriptionId\":\"4f9b81be-fa32-4f96-aeb3-fc5c3f678df9\",\"ResourceGroup\":\"test-group\",\"Name\":\"test-rule\"}/SelectedItemGroup/718fb0c3-425b-4185-be33-4311dfb4deeb/SelectedItemTime/2019-10-30T17:50:00Z/InsightType/5\" target=\"_blank\">View full details in Application Insights</a>"
+ }
+ ],
+ "SmartDetectorId": "FailureAnomaliesDetector",
+ "SmartDetectorName": "Failure Anomalies",
+ "AnalysisTimestamp": "2019-10-30T17:52:32.5802978Z"
+ },
+ "egressConfig": {
+ "displayConfig": [
+ {
+ "rootJsonNode": null,
+ "sectionName": null,
+ "displayControls": [
+ {
+ "property": "DetectionSummary",
+ "displayName": "What was detected?",
+ "type": "Text",
+ "isOptional": false,
+ "isPropertySerialized": false
+ },
+ {
+ "property": "FormattedOccurenceTime",
+ "displayName": "When did this occur?",
+ "type": "Text",
+ "isOptional": false,
+ "isPropertySerialized": false
+ },
+ {
+ "property": "DetectedFailureRate",
+ "displayName": "Detected failure rate",
+ "type": "Text",
+ "isOptional": false,
+ "isPropertySerialized": false
+ },
+ {
+ "property": "NormalFailureRate",
+ "displayName": "Normal failure rate",
+ "type": "Text",
+ "isOptional": false,
+ "isPropertySerialized": false
+ },
+ {
+ "chartType": "Line",
+ "xAxisType": "Date",
+ "yAxisType": "Percentage",
+ "xAxisName": "",
+ "yAxisName": "",
+ "property": "FailureRateChart",
+ "displayName": "Failure rate over last 12 hours",
+ "type": "Chart",
+ "isOptional": false,
+ "isPropertySerialized": false
+ },
+ {
+ "defaultLoad": true,
+ "displayConfig": [
+ {
+ "rootJsonNode": null,
+ "sectionName": null,
+ "displayControls": [
+ {
+ "showHeader": false,
+ "columns": [
+ {
+ "property": "Name",
+ "displayName": "Name"
+ },
+ {
+ "property": "Value",
+ "displayName": "Value"
+ }
+ ],
+ "property": "tables[0].rows[0][0]",
+ "displayName": "All of the failed requests had these characteristics:",
+ "type": "Table",
+ "isOptional": false,
+ "isPropertySerialized": true
+ }
+ ]
+ }
+ ],
+ "property": "ArmSystemEventsRequest",
+ "displayName": "",
+ "type": "ARMRequest",
+ "isOptional": false,
+ "isPropertySerialized": false
+ },
+ {
+ "showHeader": false,
+ "columns": [
+ {
+ "property": "Link",
+ "displayName": "Link"
+ }
+ ],
+ "property": "LinksTable",
+ "displayName": "Links",
+ "type": "Table",
+ "isOptional": false,
+ "isPropertySerialized": false
+ }
+ ]
+ }
+ ]
+ }
+ },
+ "id": "/subscriptions/4f9b81be-fa32-4f96-aeb3-fc5c3f678df9/resourcegroups/test-group/providers/microsoft.insights/components/test-rule/providers/Microsoft.AlertsManagement/alerts/7daf8739-ca8a-4562-b69a-ff28db4ba0a5",
+ "type": "Microsoft.AlertsManagement/alerts",
+ "name": "Failure Anomalies - test-rule"
+}
+```
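+
+The `ArmSystemEventsRequest` value in the payload above embeds a URL-encoded Log Analytics query. If you want to read that query, a minimal sketch using only the Python standard library (with the request string shortened here for brevity) looks like this:
+
+```python
+from urllib.parse import urlsplit, parse_qs
+
+# Shortened copy of the ArmSystemEventsRequest value from the payload above.
+arm_request = (
+    "/subscriptions/4f9b81be-fa32-4f96-aeb3-fc5c3f678df9/resourceGroups/test-group"
+    "/providers/microsoft.insights/components/test-rule/query"
+    "?query=%0d%0asystemEvents%0d%0a%7c+where+itemType+%3d%3d+%27systemEvent%27"
+    "&api-version=2018-04-20"
+)
+
+# parse_qs decodes the %xx escapes and '+' signs into readable Kusto text.
+params = parse_qs(urlsplit(arm_request).query)
+print(params["query"][0])
+```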
+
+## Triage and diagnose an alert
+
+An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there is some problem with your app or its environment.
+
+To investigate further, click 'View full details in Application Insights'. The links on that page take you straight to a [search page](../app/diagnostic-search.md) filtered to the relevant requests, exceptions, dependencies, or traces.
+
+You can also open the [Azure portal](https://portal.azure.com), navigate to the Application Insights resource for your app, and open the Failures page.
+
+Clicking on 'Diagnose failures' will help you get more details and resolve the issue.
++
+From the percentage of requests and the number of users affected, you can decide how urgent the issue is. In the example above, a failure rate of 78.5%, compared with a normal rate of 2.2%, indicates that something is seriously wrong. On the other hand, only 46 users were affected. If it were your app, you'd be able to assess how serious that is.
+
+In many cases, you will be able to diagnose the problem quickly from the request name, exception, dependency failure, and trace data provided.
+
+In this example, there was an exception from SQL Database because the request limit was reached.
++
+## Review recent alerts
+
+Click **Alerts** in the Application Insights resource page to see the most recently fired alerts:
++
+## What's the difference ...
+Smart Detection of Failure Anomalies complements other similar but distinct features of Application Insights.
+
+* [Metric alerts](./alerts-log.md) are set by you and can monitor a wide range of metrics, such as CPU occupancy, request rates, and page load times. You can use them to warn you, for example, if you need to add more resources. By contrast, Smart Detection of Failure Anomalies covers a small range of critical metrics (currently only the failed request rate), and is designed to notify you in near real time when your web app's failed request rate rises significantly above its normal behavior. Unlike metric alerts, Smart Detection automatically sets and updates thresholds in response to changes in behavior. Smart Detection also starts the diagnostic work for you, saving you time in resolving issues. A simplified sketch of this adaptive thresholding follows this list.
+
+* [Smart Detection of performance anomalies](proactive-performance-diagnostics.md) also uses machine intelligence to discover unusual patterns in your metrics, and requires no configuration by you. But unlike Smart Detection of Failure Anomalies, its purpose is to find segments of your usage that might be served badly - for example, by specific pages on a specific type of browser. The analysis is performed daily, and any result it finds is likely to be much less urgent than an alert. By contrast, the analysis for Failure Anomalies is performed continuously on incoming application data, and you're notified within minutes if server failure rates are greater than expected.
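+
+To make the contrast concrete, here's a minimal sketch - an illustration only, not the service's actual algorithm - of the kind of self-adjusting threshold that Smart Detection applies to the failed request rate. The alert bound is recomputed from recent history instead of being fixed by hand:
+
+```python
+from statistics import mean, stdev
+
+def adaptive_threshold(history, k=3.0):
+    """Alert bound derived from recent failure rates (as fractions)."""
+    return mean(history) + k * stdev(history)
+
+recent = [0.020, 0.025, 0.018, 0.022, 0.024, 0.019, 0.021]  # hypothetical baseline window
+current = 0.785  # the 78.5% failure rate from the example above
+
+if current > adaptive_threshold(recent):
+    print("Failure anomaly: rate is far above the learned baseline")
+```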
+
+## If you receive a Smart Detection alert
+*Why have I received this alert?*
+
+* We detected an abnormal rise in the failed request rate compared with the normal baseline of the preceding period. After analyzing the failures and associated application data, we think there's a problem that you should look into.
+
+*Does the notification mean I definitely have a problem?*
+
+* We try to alert on app disruption or degradation, but only you can fully understand the semantics and the impact on the app or users.
+
+*So, you are looking at my application data?*
+
+* No. The service is entirely automatic. Only you get the notifications. Your data is [private](../app/data-retention-privacy.md).
+
+*Do I have to subscribe to this alert?*
+
+* No. Every application that sends request data has the Smart Detection alert rule.
+
+*Can I unsubscribe or get the notifications sent to my colleagues instead?*
+
+* Yes. In **Alert rules**, click the Smart Detection rule to configure it. You can disable the alert or change its recipients.
+
+*I lost the email. Where can I find the notifications in the portal?*
+
+* In the Activity log. In the Azure portal, open the Application Insights resource for your app, then select **Activity log**.
+
+*Some of the alerts are about known issues and I do not want to receive them.*
+
+* You can use the suppression feature of [alert processing rules](./alerts-processing-rules.md).
+
+## Next steps
+These diagnostic tools help you inspect the data from your app:
+
+* [Metric explorer](../essentials/metrics-charts.md)
+* [Search explorer](../app/diagnostic-search.md)
+* [Analytics - powerful query language](../logs/log-analytics-tutorial.md)
+
+Smart detection is automatic. But maybe you'd like to set up some more alerts?
+
+* [Manually configured metric alerts](./alerts-log.md)
+* [Availability web tests](../app/monitor-web-app-availability.md)
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
+
+ Title: Smart detection - performance anomalies | Microsoft Docs
+description: Smart detection analyzes your app telemetry and warns you of potential problems. This feature needs no setup.
+ Last updated : 05/04/2017++
+# Smart detection - Performance Anomalies
+
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, enabling multiple methods of taking action or triggering notifications on new detections.
+>
+> For more information on the migration process, see [Smart Detection Alerts migration](./alerts-smart-detections-migration.md).
+
+[Application Insights](../app/app-insights-overview.md) automatically analyzes the performance of your web application, and can warn you about potential problems.
+
+This feature requires no special setup, other than configuring your app for Application Insights for your [supported language](../app/platforms.md). It's active when your app generates enough telemetry.
+
+## When would I get a smart detection notification?
+
+Application Insights has detected that the performance of your application has degraded in one of these ways:
+
+* **Response time degradation** - Your app has started responding to requests more slowly than it used to. The change might have been rapid, for example because there was a regression in your latest deployment. Or it might have been gradual, maybe caused by a memory leak.
+* **Dependency duration degradation** - Your app makes calls to a REST API, database, or other dependency. The dependency is responding more slowly than it used to.
+* **Slow performance pattern** - Your app appears to have a performance issue that is affecting only some requests. For example, pages are loading more slowly on one type of browser than others; or requests are being served more slowly from one particular server. Currently, our algorithms look at page load times, request response times, and dependency response times.
+
+To establish a baseline of normal performance, smart detection requires at least eight days of sufficient telemetry volume. After your application has been running for that period, significant anomalies will result in a notification.
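+
+As an illustration only (the actual detectors are more sophisticated), a response time degradation of the kind described above can be pictured as comparing the detection day's 90th-percentile response time against the baseline period:
+
+```python
+from statistics import quantiles
+
+def p90(samples_ms):
+    # quantiles(n=10) returns the nine deciles; index 8 is the 90th percentile.
+    return quantiles(samples_ms, n=10)[8]
+
+baseline_ms = [120, 135, 110, 140, 150, 125, 130, 145, 160, 115]  # hypothetical prior-week samples
+today_ms = [300, 280, 350, 310, 295, 330, 290, 305, 320, 315]     # hypothetical detection-day samples
+
+if p90(today_ms) > 1.5 * p90(baseline_ms):  # 1.5x slowdown factor, chosen arbitrarily
+    print("Possible response time degradation")
+```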
++
+## Does my app definitely have a problem?
+
+No, a notification doesn't mean that your app definitely has a problem. It's simply a suggestion about something you might want to look at more closely.
+
+## How do I fix it?
+
+The notifications include diagnostic information. Here's an example:
++
+![Here is an example of Server Response Time Degradation detection](media/proactive-performance-diagnostics/server_response_time_degradation.png)
+
+1. **Triage**. The notification shows you how many users or how many operations are affected. This information can help you assign a priority to the problem.
+2. **Scope**. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or locations? This information can be obtained from the notification.
+3. **Diagnose**. Often, the diagnostic information in the notification will suggest the nature of the problem. For example, if response time slows down when request rate is high, it may indicate that your server or dependencies are beyond their capacity.
+
+   Otherwise, open the Performance blade in Application Insights. There you'll find [Profiler](../profiler/profiler.md) data. If exceptions are thrown, you can also try the [snapshot debugger](../snapshot-debugger/snapshot-debugger.md).
+
+## Configure Email Notifications
+
+Smart detection notifications are enabled by default. They're sent to users who have [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) access to the subscription in which the Application Insights resource resides. To change the default notification, either click **Configure** in the email notification, or open **Smart detection settings** in Application Insights.
+
+ ![Smart Detection Settings](media/proactive-performance-diagnostics/smart_detection_configuration.png)
+
+ * You can disable the default notification, and replace it with a specified list of emails.
+
+Emails about smart detection performance anomalies are limited to one email per day per Application Insights resource. The email will be sent only if there is at least one new issue that was detected on that day. You won't get repeats of any message.
+
+## FAQ
+
+* *So, Microsoft staff look at my data?*
+ * No. The service is entirely automatic. Only you get the notifications. Your data is [private](../app/data-retention-privacy.md).
+* *Do you analyze all the data collected by Application Insights?*
+   * Currently, we analyze request response time, dependency response time, and page load time. Analysis of other metrics is on our backlog.
+
+* *What types of application does this detection work for?*
+   * These degradations are detected in any application that generates the appropriate telemetry. If you installed Application Insights in your web app, requests and dependencies are automatically tracked. For backend services or other apps, if you inserted calls to [TrackRequest()](../app/api-custom-events-metrics.md#trackrequest) or [TrackDependency](../app/api-custom-events-metrics.md#trackdependency), smart detection works in the same way.
+
+* *Can I create my own anomaly detection rules or customize existing rules?*
+
+ * Not yet, but you can:
+ * [Set up alerts](./alerts-log.md) that tell you when a metric crosses a threshold.
+ * [Export telemetry](../app/export-telemetry.md) to a [database](../../stream-analytics/app-insights-export-sql-stream-analytics.md) or [to Power BI](../app/export-power-bi.md), where you can analyze it yourself.
+* *How often is the analysis done?*
+
+ * We run the analysis daily on the telemetry from the previous day (full day in UTC timezone).
+* *Does this replace [metric alerts](./alerts-log.md)?*
+ * No. We don't commit to detecting every behavior that you might consider abnormal.
++
+* *If I don't do anything in response to a notification, will I get a reminder?*
+ * No, you get a message about each issue only once. If the issue persists, it will be updated in the smart detection feed blade.
+* *I lost the email. Where can I find the notifications in the portal?*
+ * In the Application Insights overview of your app, click the **Smart detection** tile. There you'll find all notifications up to 90 days back.
+
+## How can I improve performance?
+Slow and failed responses are among the biggest frustrations for website users, as you know from your own experience. So, it's important to address the issues.
+
+### Triage
+First, does it matter? If a page is always slow to load, but only 1% of your site's users ever have to look at it, maybe you have more important things to think about. However, if only 1% of users open it, but it throws exceptions every time, that might be worth investigating.
+
+Use the impact statement, such as affected users or % of traffic, as a general guide. Be aware that it may not be telling the whole story. Gather other evidence to confirm.
+
+Consider the parameters of the issue. If it's geography-dependent, set up [availability tests](../app/monitor-web-app-availability.md) including that region: there might be network issues in that area.
+
+### Diagnose slow page loads
+Where is the problem? Is the server slow to respond, is the page very large, or does the browser need to do too much work to display it?
+
+Open the Browsers metric blade. The segmented display of browser page load time shows where the time is going.
+
+* If **Send Request Time** is high, either the server is responding slowly, or the request is a POST with a large amount of data. Look at the [performance metrics](../app/performance-counters.md) to investigate response times.
+* Set up [dependency tracking](../app/asp-net-dependencies.md) to see whether the slowness is because of external services or your database.
+* If **Receiving Response** is predominant, your page and its dependent parts - JavaScript, CSS, images, and so on (but not asynchronously loaded data) - are large. Set up an [availability test](../app/monitor-web-app-availability.md), and be sure to set the option to load dependent parts. When you get some results, open the detail of a result and expand it to see the load times of different files.
+* High **Client Processing time** suggests scripts are running slowly. If the reason isn't obvious, consider adding some timing code and sending the times in trackMetric calls, as sketched after this list.
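+
+The timing code itself can be very simple. This sketch uses a stand-in `telemetry_client` object - substitute your SDK's actual trackMetric call:
+
+```python
+import time
+
+def timed(telemetry_client, name, work):
+    """Run work() and report its duration as a custom metric."""
+    start = time.perf_counter()
+    result = work()
+    elapsed_ms = (time.perf_counter() - start) * 1000.0
+    # Hypothetical stand-in: use the trackMetric equivalent in your SDK.
+    telemetry_client.track_metric(name, elapsed_ms)
+    return result
+```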
+
+### Improve slow pages
+There's a web full of advice on improving your server responses and page load times, so we won't try to repeat it all here. Here are a few tips that you probably already know about, just to get you thinking:
+
+* Slow loading because of large files: Load the scripts and other parts asynchronously. Use script bundling. Break the main page into widgets that load their data separately. Don't send plain old HTML for long tables: use a script to request the data as JSON or other compact format, then fill the table in place. There are great frameworks to help with such tasks. (They also include large scripts, of course.)
+* Slow server dependencies: Consider the geographical locations of your components. For example, if you're using Azure, make sure the web server and the database are in the same region. Do queries retrieve more information than they need? Would caching or batching help?
+* Capacity issues: Look at the server metrics of response times and request counts. If response times peak disproportionately with peaks in request counts, it's likely that your servers are stretched.
++
+## Server Response Time Degradation
+
+The response time degradation notification tells you:
+
+* The response time compared to normal response time for this operation.
+* How many users are affected.
+* Average response time and 90th percentile response time for this operation on the day of the detection and seven days before.
+* The number of requests for this operation on the day of the detection and during the seven days before.
+* Correlation between degradation in this operation and degradations in related dependencies.
+* Links to help you diagnose the problem.
+ * Profiler traces can help you view where operation time is spent. The link is available if Profiler trace examples exist for this operation.
+ * Performance reports in Metric Explorer, where you can slice and dice time range/filters for this operation.
+ * Search for this call to view specific call properties.
+ * Failure reports - If count > 1, it means that there were failures in this operation that might have contributed to performance degradation.
+
+## Dependency Duration Degradation
+
+Modern applications often adopt a microservices design approach, which in many cases relies heavily on external services. For example, your application might rely on a data platform, or on a critical service provider such as Cognitive Services.
+
+Example of dependency degradation notification:
+
+![Here is an example of Dependency Duration Degradation detection](media/proactive-performance-diagnostics/dependency_duration_degradation.png)
+
+Notice that it tells you:
+
+* The duration compared to normal response time for this operation
+* How many users are affected
+* Average duration and 90th percentile duration for this dependency on the day of the detection and seven days before
+* Number of dependency calls on the day of the detection and seven days before
+* Links to help you diagnose the problem
+ * Performance reports in Metric Explorer for this dependency
+   * Search for calls to this dependency to view call properties
+ * Failure reports - If count > 1, it means that there were failed dependency calls during the detection period that might have contributed to duration degradation.
+ * Open Analytics with queries that calculate this dependency duration and count
+
+## Smart detection of slow performing patterns
+
+Application Insights finds performance issues that might affect only some portion of your users, or affect users only in some cases. For example, pages might load more slowly on one browser type than on others, or one server might handle requests more slowly than the rest. It can also discover problems associated with combinations of properties, such as slow page loads in one geographical area for clients using a particular operating system.
+
+Anomalies like these are hard to detect just by inspecting the data, but are more common than you might think. Often they only surface when your customers complain. By that time, it's too late: the affected users are already switching to your competitors!
+
+Currently, our algorithms look at page load times, request response times at the server, and dependency response times.
+
+You don't have to set any thresholds or configure rules. Machine learning and data mining algorithms are used to detect abnormal patterns.
+
+![From the email alert, click the link to open the diagnostic report in Azure](./media/proactive-performance-diagnostics/03.png)
+
+* **When** shows the time the issue was detected.
+* **What** describes the problem that was detected, and the characteristics of the set of events we found that displayed the problem behavior.
+* The table compares the poorly performing set with the average behavior of all other events.
+
+Click the links to open Metric Explorer to view reports, filtered by the time and properties of the slow performing set.
+
+Modify the time range and filters to explore the telemetry.
+
+## Next steps
+These diagnostic tools help you inspect the telemetry from your app:
+
+* [Profiler](../profiler/profiler.md)
+* [snapshot debugger](../snapshot-debugger/snapshot-debugger.md)
+* [Analytics](../logs/log-analytics-tutorial.md)
+* [Analytics smart diagnostics](../logs/log-query-overview.md)
+
+Smart detection is automatic. But maybe you'd like to set up some more alerts?
+
+* [Manually configured metric alerts](./alerts-log.md)
+* [Availability web tests](../app/monitor-web-app-availability.md)
azure-monitor Proactive Potential Memory Leak https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-potential-memory-leak.md
+
+ Title: Detect memory leak - Azure Application Insights smart detection
+description: Monitor applications with Azure Application Insights for potential memory leaks.
+ Last updated : 12/12/2017++
+# Memory leak detection (preview)
+
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, enabling multiple methods of taking action or triggering notifications on new detections.
+>
+> For more information, see [Smart Detection Alerts migration](./alerts-smart-detections-migration.md).
+
+Smart detection automatically analyzes the memory consumption of each process in your application, and can warn you about potential memory leaks or increased memory consumption.
+
+This feature requires no special setup, other than [configuring performance counters](../app/performance-counters.md) for your app. It's active when your app generates enough memory performance counter telemetry (for example, Private Bytes).
+
+## When would I get this type of smart detection notification?
+A typical notification follows a consistent increase in memory consumption over a long period of time, in one or more processes or machines that are part of your application. Machine learning algorithms are used to detect increased memory consumption that matches the pattern of a memory leak.
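+
+As a rough illustration of the pattern being matched (not the service's actual model), a sustained upward trend in a process's private bytes can be flagged with a simple least-squares slope:
+
+```python
+def slope(samples):
+    """Least-squares slope of evenly spaced memory samples (bytes per interval)."""
+    n = len(samples)
+    x_mean, y_mean = (n - 1) / 2, sum(samples) / n
+    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples))
+    den = sum((x - x_mean) ** 2 for x in range(n))
+    return num / den
+
+private_bytes = [2.1e9, 2.2e9, 2.4e9, 2.5e9, 2.7e9, 2.9e9]  # hypothetical hourly samples
+
+if slope(private_bytes) > 50e6:  # arbitrary bound: >50 MB growth per hour, sustained
+    print("Memory consumption is trending up - possible leak")
+```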
+
+## Does my app really have a problem?
+A notification doesn't mean that your app definitely has a problem. Although memory leak patterns often indicate an application issue, they can also be typical of your specific process, or have a natural business justification. In such cases, the notification can be ignored.
+
+## How do I fix it?
+The notifications include diagnostic information to support the diagnostic process:
+1. **Triage.** The notification shows you the amount of memory increase (in GB), and the time range in which the memory has increased. This information can help you assign a priority to the problem.
+2. **Scope.** How many machines exhibited the memory leak pattern? How many exceptions were triggered during the potential memory leak? This information can be obtained from the notification.
+3. **Diagnose.** The detection contains the memory leak pattern, showing memory consumption of the process over time. You can also use the related items and reports linking to supporting information, to help you further diagnose the issue.
azure-monitor Proactive Trace Severity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-trace-severity.md
+
+ Title: Degradation in trace severity ratio - Azure Application Insights
+description: Monitor application traces with Azure Application Insights for unusual patterns in trace telemetry with smart detection.
+ Last updated : 11/27/2017++
+# Degradation in trace severity ratio (preview)
+
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, enabling multiple methods of taking action or triggering notifications on new detections.
+>
+> For more information, see [Smart Detection Alerts migration](./alerts-smart-detections-migration.md).
+>
+
+Traces are widely used in applications, and they help tell the story of what happens behind the scenes. When things go wrong, traces provide crucial visibility into the sequence of events leading to the undesired state. While traces are mostly unstructured, their severity level can still provide valuable information. In an application's steady state, we would expect the ratio between "good" traces (*Info* and *Verbose*) and "bad" traces (*Warning*, *Error*, and *Critical*) to remain stable.
+
+It's normal to expect some level of "bad" traces for any number of reasons, such as transient network issues. But when a real problem begins growing, it usually manifests as an increase in the relative proportion of "bad" traces vs. "good" traces. Smart detection automatically analyzes the trace telemetry that your application logs, and can warn you about unusual patterns in their severity.
+
+This feature requires no special setup, other than configuring trace logging for your app. See how to configure a trace log listener for [.NET](../app/asp-net-trace-logs.md) or [Java](../app/java-in-process-agent.md). It's active when your app generates enough trace telemetry.
+
+## When would I get this type of smart detection notification?
+You get this type of notification if the ratio between "good" traces (traces logged with a level of *Info* or *Verbose*) and "bad" traces (traces logged with a level of *Warning*, *Error*, or *Fatal*) is degrading on a specific day, compared to a baseline calculated over the previous seven days.
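+
+A minimal sketch of that ratio check, under the simplifying assumption that each trace is represented only by its severity level (an illustration, not the service's algorithm):
+
+```python
+def bad_ratio(levels):
+    """Fraction of bad traces (Warning/Error/Fatal) among all traces."""
+    bad = sum(1 for level in levels if level in ("Warning", "Error", "Fatal"))
+    return bad / len(levels)
+
+baseline = 0.03  # hypothetical bad-trace ratio over the previous seven days
+today = bad_ratio(["Info", "Verbose", "Error", "Info", "Warning", "Info"])  # 2/6
+
+if today > 2 * baseline:  # degradation factor chosen arbitrarily for illustration
+    print("Trace severity ratio is degrading")
+```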
+
+## Does my app definitely have a problem?
+A notification doesn't mean that your app definitely has a problem. Although a degradation in the ratio between "good" and "bad" traces might indicate an application issue, it can also be benign. For example, the increase can be caused by a new flow in the application that emits more "bad" traces than existing flows.
+
+## How do I fix it?
+The notifications include diagnostic information to support the diagnostic process:
+1. **Triage.** The notification shows you how many operations are affected. This information can help you assign a priority to the problem.
+2. **Scope.** Is the problem affecting all traffic, or just some operations? This information can be obtained from the notification.
+3. **Diagnose.** You can use the related items and reports linking to supporting information, to help you further diagnose the issue.
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Application Insights helps development teams understand app performance and usag
There are many ways to explore Application Insights telemetry. For more information, see the following articles: -- [Smart detection in Application Insights](./proactive-diagnostics.md)
+- [Smart detection in Application Insights](../alerts/proactive-diagnostics.md)
Set up automatic alerts that adapt to your app's normal telemetry patterns and trigger when something is outside the usual pattern. You can also set alerts on specified levels of custom or standard metrics. For more information, see [Create, view, and manage log alerts using Azure Monitor](../alerts/alerts-log.md).
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md
Real Madrid uses the Power BI module to view their telemetry.
![Power BI view of Application Insights telemetry](./media/devops/080.png) ## Smart detection
-[Proactive diagnostics](./proactive-diagnostics.md) is a recent feature. Without any special configuration by you, Application Insights automatically detects and alerts you about unusual rises in failure rates in your app. It's smart enough to ignore a background of occasional failures, and also rises that are simply proportionate to a rise in requests. So for example, if there's a failure in one of the services you depend on, or if the new build you just deployed isn't working so well, then you'll know about it as soon as you look at your email. (And there are webhooks so that you can trigger other apps.)
+[Proactive diagnostics](../alerts/proactive-diagnostics.md) is a recent feature. Without any special configuration by you, Application Insights automatically detects and alerts you about unusual rises in failure rates in your app. It's smart enough to ignore a background of occasional failures, and also rises that are simply proportionate to a rise in requests. So for example, if there's a failure in one of the services you depend on, or if the new build you just deployed isn't working so well, then you'll know about it as soon as you look at your email. (And there are webhooks so that you can trigger other apps.)
Another aspect of this feature performs a daily in-depth analysis of your telemetry, looking for unusual patterns of performance that are hard to discover. For example, it can find slow performance associated with a particular geographical area, or with a particular browser version.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
For more detailed information about how to use queries and logs, see [Logs in Az
### Alerts * [Availability tests](./monitor-web-app-availability.md): Create tests to make sure your site is visible on the web.
-* [Smart diagnostics](./proactive-diagnostics.md): These tests run automatically, so you don't have to do anything to set them up. They tell you if your app has an unusual rate of failed requests.
+* [Smart diagnostics](../alerts/proactive-diagnostics.md): These tests run automatically, so you don't have to do anything to set them up. They tell you if your app has an unusual rate of failed requests.
* [Metric alerts](../alerts/alerts-log.md): Set alerts to warn you if a metric crosses a threshold. You can set them on custom metrics that you code into your app.
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
Title: Autoscale in Microsoft Azure description: "Autoscale in Microsoft Azure"+++ Previously updated : 04/22/2022 Last updated : 08/01/2022 # Overview of autoscale in Microsoft Azure
-This article describes what Microsoft Azure autoscale is, its benefits, and how to get started using it.
+This article describes Microsoft Azure autoscale and its benefits.
-Azure autoscale supports a growing list of resource types. See the list of [supported resources](#supported-services-for-autoscale) for more details.
+Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](#supported-services-for-autoscale).
> [!NOTE]
-> Azure has two autoscale methods. An older version of autoscale applies to Virtual Machines (availability sets). This feature has limited support and we recommend migrating to virtual machine scale sets for faster and more reliable autoscale support. A link on how to use the older technology is included in this article.
->
+> [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [virtual machine scale sets](/azure/virtual-machine-scale-sets/overview) for faster and more reliable autoscale support.
-## What is autoscale?
-Autoscale allows you to have the right amount of resources running to handle the load on your application. It allows you to add resources to handle increases in load and also save money by removing resources that are sitting idle. You specify a minimum and maximum number of instances to run and add or remove VMs automatically based on a set of rules. Having a minimum makes sure your application is always running even under no load. Having a maximum limits your total possible hourly cost. You automatically scale between these two extremes using rules you create.
- ![Autoscale explained. Add and remove VMs](./media/autoscale-overview/AutoscaleConcept.png)
+## What is autoscale
+Autoscale is a service that allows you to automatically add and remove resources according to the load on your application.
-When rule conditions are met, one or more autoscale actions are triggered. You can add and remove VMs, or perform other actions. The following conceptual diagram shows this process.
+When your application experiences higher load, autoscale adds resources to handle the increased load. When load is low, autoscale reduces the number of resources, lowering your costs. You can scale your application based on metrics like CPU usage, queue length, and available memory, or based on a schedule. Metrics and schedules are set up in rules. The rules include a minimum level of resources that you need to run your application, and a maximum level of resources that won't be exceeded.
- ![Autoscale Flow Diagram](./media/autoscale-overview/Autoscale_Overview_v4.png)
+For example, scale out your application by adding VMs when the average CPU usage per VM is above 70%. Scale it back in by removing VMs when average CPU usage drops below 40%.
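+
+The gap between the scale-out threshold (70%) and the scale-in threshold (40%) prevents "flapping", where instances are repeatedly added and removed. A schematic of the decision - illustrative pseudologic only, not the autoscale engine:
+
+```python
+def autoscale_decision(avg_cpu, instances, minimum=2, maximum=10):
+    """Schematic scale decision with a hysteresis gap between thresholds."""
+    if avg_cpu > 70 and instances < maximum:
+        return instances + 1  # scale out
+    if avg_cpu < 40 and instances > minimum:
+        return instances - 1  # scale in
+    return instances          # between thresholds: no action
+
+print(autoscale_decision(avg_cpu=75, instances=3))  # 4 (scale out)
+print(autoscale_decision(avg_cpu=55, instances=3))  # 3 (no change)
+```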
-The following explanation applies to the pieces of the previous diagram.
+ ![Autoscale explained. Add and remove VMs](./media/autoscale-overview/AutoscaleConcept.png)
-## Resource Metrics
-Resources emit metrics, these metrics are later processed by rules. Metrics come via different methods.
-Virtual machine scale sets use telemetry data from Azure diagnostics agents whereas telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure. Some commonly used statistics include CPU Usage, memory usage, thread counts, queue length, and disk usage. For a list of what telemetry data you can use, see [Autoscale Common Metrics](autoscale-common-metrics.md).
+When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. In addition, you can perform other actions, like sending email notifications or calling webhooks to trigger processes in other systems.
+### Predictive autoscale (preview)
+[Predictive autoscale](/azure/azure-monitor/autoscale/autoscale-predictive) uses machine learning to help manage and scale Azure virtual machine scale sets with cyclical workload patterns. It forecasts the overall CPU load on your virtual machine scale set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
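+
+As a rough sketch of the idea (the service uses a more sophisticated machine learning model), a naive seasonal forecast predicts the CPU load at a given hour from the same hour in past weeks, so capacity can be added ahead of the peak:
+
+```python
+def forecast_cpu(history_by_week, hour):
+    """Naive seasonal forecast: average CPU at this hour across past weeks."""
+    return sum(week[hour] for week in history_by_week) / len(history_by_week)
+
+# Hypothetical per-hour CPU averages for the same weekday in three past weeks:
+# quiet overnight, a morning peak, then a moderate afternoon.
+weeks = [[30] * 8 + [80] * 8 + [40] * 8 for _ in range(3)]
+
+if forecast_cpu(weeks, hour=9) > 70:
+    print("Pre-provision capacity ahead of the predicted morning peak")
+```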
+## Autoscale setup
+You can set up autoscale via:
+* [Azure portal](autoscale-get-started.md)
+* [PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings)
+* [Cross-platform Command Line Interface (CLI)](../cli-samples.md#autoscale)
+* [Azure Monitor REST API](/rest/api/monitor/autoscalesettings)
-## Custom Metrics
-You can also use your own custom metrics that your application(s) may be emitting. If you've configured your application(s) to send metrics to Application Insights you can use those metrics to make decisions on whether to scale or not.
+## Architecture
+The following diagram shows the autoscale architecture.
-## Time
-Schedule-based rules are based on UTC. You must set your time zone properly when setting up your rules.
+ ![Autoscale Flow Diagram](./media/autoscale-overview/Autoscale_Overview_v4.png)
-## Rules
-The diagram shows only one autoscale rule, but you can have many of them. You can create complex overlapping rules as needed for your situation. Rule types include
+### Resource metrics
+Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual machine scale sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure.
-* **Metric-based** - For example, do this action when CPU usage is above 50%.
-* **Time-based** - For example, trigger a webhook every 8am on Saturday in a given time zone.
+Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. See [Autoscale Common Metrics](autoscale-common-metrics.md) for a list of available metrics.
-Metric-based rules measure application load and add or remove VMs based on that load. Schedule-based rules allow you to scale when you see time patterns in your load and want to scale before a possible load increase or decrease occurs.
+### Custom metrics
+Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](/azure/azure-monitor/app/app-insights-overview) so you can use those metrics to decide when to scale.
-## Actions and automation
-Rules can trigger one or more types of actions.
+### Time
+Set up schedule-based rules to trigger scale events. Use schedule-based rules when you see time patterns in your load, and want to scale before an anticipated change in load occurs.
+
-* **Scale** - Scale VMs in or out
-* **Email** - Send email to subscription admins, co-admins, and/or additional email address you specify
-* **Automate via webhooks** - Call webhooks, which can trigger multiple complex actions inside or outside Azure. Inside Azure, you can start an Azure Automation runbook, Azure Function, or Azure Logic App. Example third-party URL outside Azure include services like Slack and Twilio.
+### Rules
+Rules define the conditions needed to trigger a scale event, the direction of the scaling, and the amount to scale by. Rules can be:
+* Metric-based: trigger based on a metric value - for example, when CPU usage is above 50%.
+* Time-based: trigger based on a schedule - for example, every Saturday at 8 AM.
-## Autoscale Settings
-Autoscale use the following terminology and structure.
-- An **autoscale setting** is read by the autoscale engine to determine whether to scale up or down. It contains one or more profiles, information about the target resource, and notification settings.
+You can combine multiple rules using different metrics, for example, CPU usage and queue length. A minimal sketch of how the operators combine follows this list.
+* The OR operator is used when scaling out with multiple rules.
+* The AND operator is used when scaling in with multiple rules.
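+
+The asymmetry is deliberate - scaling out is eager (any one rule suffices), while scaling in is cautious (all rules must agree). A sketch with two hypothetical rules:
+
+```python
+cpu_pct, queue_length = 75, 20  # hypothetical current metric values
+
+scale_out = any([cpu_pct > 70, queue_length > 100])  # OR across scale-out rules
+scale_in = all([cpu_pct < 40, queue_length < 10])    # AND across scale-in rules
+
+if scale_out:
+    print("scale out")  # the CPU rule alone is enough
+elif scale_in:
+    print("scale in")   # both rules would have to hold
+```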
- - An **autoscale profile** is a combination of a:
+### Actions and automation
+Rules can trigger one or more actions. Actions include:
- - **capacity setting**, which indicates the minimum, maximum, and default values for number of instances.
- - **set of rules**, each of which includes a trigger (time or metric) and a scale action (up or down).
- - **recurrence**, which indicates when autoscale should put this profile into effect.
+- Scale - Scale resources in or out.
+- Email - Send an email to the subscription admins, co-admins, and/or any other email address.
+- Webhooks - Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:
+ + Start an [Azure Automation runbook](/azure/automation/overview).
+ + Call an [Azure Function](/azure/azure-functions/functions-overview).
+ + Trigger an [Azure Logic App](/azure/logic-apps/logic-apps-overview).
+## Autoscale settings
- You can have multiple profiles, which allow you to take care of different overlapping requirements. You can have different autoscale profiles for different times of day or days of the week, for example.
+Autoscale settings contain the autoscale configuration. A setting includes scale conditions that define rules, instance limits, and schedules, plus notifications. Define one or more scale conditions in the setting, and one notification setup.
- - A **notification setting** defines what notifications should occur when an autoscale event occurs based on satisfying the criteria of one of the autoscale settingΓÇÖs profiles. Autoscale can notify one or more email addresses or make calls to one or more webhooks.
+Autoscale uses the following terminology and structure. The UI and the JSON/CLI use different terms for the same elements, as the following table shows. A schematic of the JSON structure follows the table.
+| UI | JSON/CLI | Description |
+||--|-|
+| Scale conditions | profiles | A collection of rules, instance limits and schedules, based on a metric or time. You can define one or more scale conditions or profiles. |
+| Rules | rules | A set of time or metric-based conditions that trigger a scale action. You can define one or more rules for both scale in and scale out actions. |
+| Instance limits | capacity | Each scale condition or profile defines the default, maximum, and minimum number of instances that can run under that profile. |
+| Schedule | recurrence | Indicates when autoscale should put this scale condition or profile into effect. You can have multiple scale conditions, which allow you to handle different and overlapping requirements. For example, you can have different scale conditions for different times of day, or days of the week. |
+| Notify | notification | Defines the notifications to send when an autoscale event occurs. Autoscale can notify one or more email addresses or call one or more webhooks. You can configure multiple webhooks in the JSON but only one in the UI. |
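+
+Mapping the JSON/CLI terms above onto a settings skeleton - a schematic only; see the [Autoscale REST API](/rest/api/monitor/autoscalesettings) for the authoritative schema:
+
+```python
+autoscale_setting = {
+    "profiles": [  # "scale conditions" in the UI
+        {
+            "name": "weekday-profile",
+            "capacity": {"minimum": "2", "maximum": "10", "default": "2"},  # instance limits
+            "rules": [],       # metric- or time-based triggers and their scale actions
+            "recurrence": {},  # schedule: when this profile is in effect
+        }
+    ],
+    "notifications": [],  # emails and webhooks to invoke on scale events
+}
+```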
-![Azure autoscale setting, profile, and rule structure](./media/autoscale-overview/AzureResourceManagerRuleStructure3.png)
+![Azure autoscale setting, profile, and rule structure](./media/autoscale-overview/azure-resource-manager-rule-structure-3.png)
The full list of configurable fields and descriptions is available in the [Autoscale REST API](/rest/api/monitor/autoscalesettings). For code examples, see
-* [Advanced Autoscale configuration using Resource Manager templates for VM Scale Sets](autoscale-virtual-machine-scale-sets.md)
+* [Advanced Autoscale configuration using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md)
* [Autoscale REST API](/rest/api/monitor/autoscalesettings) ## Horizontal vs vertical scaling
-Autoscale only scales horizontally, which is an increase ("out") or decrease ("in") in the number of VM instances. Horizontal is more flexible in a cloud situation as it allows you to run potentially thousands of VMs to handle load.
+Autoscale scales horizontally, which is an increase or decrease in the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines, and scaling in means removing them. Horizontal scaling is flexible in a cloud situation because it allows you to run a large number of VMs to handle load.
-In contrast, vertical scaling is different. It keeps the same number of VMs, but makes the VMs more ("up") or less ("down") powerful. Power is measured in memory, CPU speed, disk space, etc. Vertical scaling has more limitations. It's dependent on the availability of larger hardware, which quickly hits an upper limit and can vary by region. Vertical scaling also usually requires a VM to stop and restart.
+In contrast, vertical scaling keeps the number of resources constant but gives them more capacity in terms of memory, CPU speed, disk space, and network. Adding or removing capacity in vertical scaling is known as scaling up or down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit; hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
-## Methods of access
-You can set up autoscale via
-* [Azure portal](autoscale-get-started.md)
-* [PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings)
-* [Cross-platform Command Line Interface (CLI)](../cli-samples.md#autoscale)
-* [Azure Monitor REST API](/rest/api/monitor/autoscalesettings)
## Supported services for autoscale
-| Service | Schema & Docs |
+The following services are supported by autoscale:
+
+| Service | Schema & Documentation |
| | | | Web Apps |[Scaling Web Apps](autoscale-get-started.md) | | Cloud Services |[Autoscale a Cloud Service](../../cloud-services/cloud-services-how-to-scale-portal.md) |
-| Virtual Machines: Classic |[Scaling Classic Virtual Machine Availability Sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) |
-| Virtual Machines: Windows Scale Sets |[Scaling virtual machine scale sets in Windows](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md) |
-| Virtual Machines: Linux Scale Sets |[Scaling virtual machine scale sets in Linux](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md) |
-| Virtual Machines: Windows Example |[Advanced Autoscale configuration using Resource Manager templates for VM Scale Sets](autoscale-virtual-machine-scale-sets.md) |
+| Virtual Machines: Windows scale sets |[Scaling virtual machine scale sets in Windows](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md) |
+| Virtual Machines: Linux scale sets |[Scaling virtual machine scale sets in Linux](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md) |
+| Virtual Machines: Windows Example |[Advanced Autoscale configuration using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md) |
| Azure App Service |[Scale up an app in Azure App service](../../app-service/manage-scale-up.md)| | API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) | Azure Data Explorer Clusters|[Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling)|
You can set up autoscale via
## Next steps
-To learn more about autoscale, use the Autoscale Walkthroughs listed previously or refer to the following resources:
+To learn more about autoscale, see the following resources:
* [Azure Monitor autoscale common metrics](autoscale-common-metrics.md)
+* [Scale virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)
+* [Autoscale using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md)
* [Best practices for Azure Monitor autoscale](autoscale-best-practices.md) * [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md) * [Autoscale REST API](/rest/api/monitor/autoscalesettings)
-* [Troubleshooting Virtual Machine Scale Sets Autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
+* [Troubleshooting virtual machine scale sets and autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
+* [Troubleshooting Azure Monitor autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot)
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
If your monitoring of a business application is limited to functionality provide
- Collect detailed application usage and performance data such as response time, failure rates, and request rates. - Collect browser data such as page views and load performance. - Detect exceptions and drill into stack trace and related requests.-- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing.md) and [smart detection](app/proactive-diagnostics.md).
+- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing.md) and [smart detection](alerts/proactive-diagnostics.md).
- Use [metrics explorer](essentials/metrics-getting-started.md) to interactively analyze performance data. - Use [log queries](logs/log-query-overview.md) to interactively analyze collected telemetry together with data collected for Azure services and VM insights.
azure-monitor Container Insights Enable New Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-new-cluster.md
To enable monitoring of a new AKS cluster created with Azure CLI, follow the ste
## Enable using Terraform
-If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you do not choose to specify an existing one.
-
->[!NOTE]
->If you choose to use Terraform, you must be running the Terraform Azure RM Provider version 1.17.0 or above.
-
-To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) and complete the profile by including the [**addon_profile**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specify **oms_agent**.
-
-After you've enabled monitoring and all configuration tasks are completed successfully, you can monitor the performance of your cluster in either of two ways:
-
-* Directly in the AKS cluster by selecting **Health** in the left pane.
-* By selecting the **Monitor Container insights** tile in the AKS cluster page for the selected cluster. In Azure Monitor, in the left pane, select **Health**.
-
- ![Options for selecting Container insights in AKS](./media/container-insights-onboard/kubernetes-select-monitoring-01.png)
-
-After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
+If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you do not choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) and complete the profile by including the [**addon_profile**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specify **oms_agent**.
## Verify agent and solution deployment With agent version *06072018* or later, you can verify that both the agent and the solution were deployed successfully. With earlier versions of the agent, you can verify only agent deployment.
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
The [Azure Activity log](essentials/platform-logs-overview.md) includes service
| Destination | Description | Reference | | -- | -- | | | Azure Resource Manager control plane changes | Change Analysis provides a historical record of how the Azure resources that host your application have changed over time, using Azure Resource Graph | [Resources | Get Changes](../governance/resource-graph/how-to/get-resource-changes.md) |
-| Resource configurations and settings changes | Change Analysis securely queries and computes IP Configuration rules, TLS settings, and extension versions to provide more change details in the app. | [Azure Resource Manager proxied setting changes](./change/change-analysis.md#azure-resource-manager-proxied-setting-changes) |
+| Resource configurations and settings changes | Change Analysis securely queries and computes IP Configuration rules, TLS settings, and extension versions to provide more change details in the app. | [Azure Resource Manager configuration changes](./change/change-analysis.md#azure-resource-manager-configuration-changes) |
| Web app in-guest changes | Every 30 minutes, Change Analysis captures the deployment and configuration state of an application. | [Diagnose and solve problems tool for Web App](./change/change-analysis-visualizations.md#diagnose-and-solve-problems-tool-for-web-app) | ## Azure resources
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**,
Billing for the commitment tiers is done per workspace on a daily basis. If the workspace is part of a [dedicated cluster](#dedicated-clusters), the billing is done for the cluster (see below). See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for a detailed listing of the commitment tiers and their prices.
+Azure Commitment Discounts such as those received from [Microsoft Enterprise Agreements](https://www.microsoft.com/licensing/licensing-programs/enterprise) are applied to Azure Monitor Logs Commitment Tier pricing just as they are to Pay-As-You-Go pricing (whether the usage is being billed per workspace or per dedicated cluster).
+ > [!TIP] > The **Usage and estimated costs** menu item for each Log Analytics workspace hows an estimate of your monthly charges at each commitment level. You should periodically review this information to determine if you can reduce your charges by moving to another tier. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) for information on this view.
azure-monitor Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/queries.md
The query interface is populated with the following types of queries:
**Legacy queries:** Log queries previously saved in the query explorer experience and queries Azure solutions that are installed in the workspace. These are listed in the query dialog box under **Legacy queries**. >[!TIP]
-> Legacy Quereis are only avaiable in a Log Analytics Workspace.
+> Legacy queries are only available in a Log Analytics Workspace.
## Effect of query scope The queries that are available when you open Log Analytics is determined by the current [query scope ](scope.md).
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
Beyond Application Insights Snapshot Debugger:
* [Set snappoints in your code](/visualstudio/debugger/debug-live-azure-applications) to get snapshots without waiting for an exception. * [Diagnose exceptions in your web apps](../app/asp-net-exceptions.md) explains how to make more exceptions visible to Application Insights.
-* [Smart Detection](../app/proactive-diagnostics.md) automatically discovers performance anomalies.
+* [Smart Detection](../alerts/proactive-diagnostics.md) automatically discovers performance anomalies.
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> [!IMPORTANT] > Make sure moving to new subscription doesn't exceed [subscription quotas](azure-subscription-service-limits.md#azure-monitor-limits).
+> [!WARNING]
+> When moving a workspace-based Application Insights component to a different subscription, telemetry stored in the original subscription is no longer accessible. This is because telemetry is identified by the Application Insights resource ID, which changes when you move the component to a different subscription. Note that once moved, there's no way to retrieve telemetry from the original subscription.
+ > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
zone_pivot_groups: programming-languages-speech-sdk
# Use pronunciation assessment
-In this article, you'll learn how to use pronunciation assessment through the Speech SDK.
+In this article, you'll learn how to evaluate pronunciation with the Speech-to-Text capability through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you'll apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object.
::: zone pivot="programming-language-go" > [!NOTE]
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
# Pronunciation assessment in Speech Studio
-Pronunciation assessment provides subjective and objective feedback to language learners. Practicing pronunciation and getting timely feedback are essential for improving language skills. Assessments driven by experienced teachers can take a lot of time and effort, and makes a high-quality assessment expensive for learners. Pronunciation assessment can help make the language assessment more engaging and accessible to learners of all backgrounds.
+Pronunciation assessment uses the Speech-to-Text capability to provide subjective and objective feedback for language learners. Practicing pronunciation and getting timely feedback are essential for improving language skills. Assessments driven by experienced teachers can take a lot of time and effort, and make a high-quality assessment expensive for learners. Pronunciation assessment can help make language assessment more engaging and accessible to learners of all backgrounds.
Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input. - At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many words are pronounced in the speech relative to the reference text input. An overall score aggregated from Accuracy, Fluency, and Completeness is then given to indicate the overall pronunciation quality of the given speech.
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
Title: Continuous export can send Microsoft Defender for Cloud's alerts and recommendations to Log Analytics workspaces or Azure Event Hubs
-description: Learn how to configure continuous export of security alerts and recommendations to Log Analytics workspaces or Azure Event Hubs
+ Title: Continuous export can send Microsoft Defender for Cloud's alerts and recommendations to Log Analytics or Azure Event Hubs
+description: Learn how to configure continuous export of security alerts and recommendations to Log Analytics or Azure Event Hubs
++ Previously updated : 06/19/2022- Last updated : 07/31/2022 # Continuously export Microsoft Defender for Cloud data
-Microsoft Defender for Cloud generates detailed security alerts and recommendations. You can view them in the portal or through programmatic tools. You might also need to export some or all of this information for tracking with other monitoring tools in your environment.
+Microsoft Defender for Cloud generates detailed security alerts and recommendations. To analyze the information in these alerts and recommendations, you can export them to Azure Log Analytics, Event Hubs, or to another [SIEM, SOAR, or IT Service Management solution](export-to-siem.md). You can stream the alerts and recommendations as they're generated or define a schedule to send periodic snapshots of all of the new data.
-You fully customize *what* will be exported, and *where* it will go with **continuous export**. For example, you can configure it so that:
+With **continuous export**, you fully customize *what* will be exported and *where* it will go. For example, you can configure it so that:
-- All high severity alerts are sent to an Azure Event Hub
+- All high severity alerts are sent to an Azure event hub
- All medium or higher severity findings from vulnerability assessment scans of your SQL servers are sent to a specific Log Analytics workspace-- Specific recommendations are delivered to an Event Hub or Log Analytics workspace whenever they're generated -- The secure score for a subscription is sent to a Log Analytics workspace whenever the score for a control changes by 0.01 or more -
-Even though the feature is called *continuous*, there's also an option to export weekly snapshots.
+- Specific recommendations are delivered to an event hub or Log Analytics workspace whenever they're generated
+- The secure score for a subscription is sent to a Log Analytics workspace whenever the score for a control changes by 0.01 or more
-This article describes how to configure continuous export to Log Analytics workspaces or Azure Event Hubs.
-
-> [!NOTE]
-> If you need to integrate Defender for Cloud with a SIEM, see [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
+This article describes how to configure continuous export to Log Analytics workspaces or Azure event hubs.
> [!TIP] > Defender for Cloud also offers the option to perform a one-time, manual export to CSV. Learn more in [Manual one-time export of alerts and recommendations](#manual-one-time-export-of-alerts-and-recommendations). - ## Availability |Aspect|Details| |-|:-| |Release state:|General availability (GA)| |Pricing:|Free|
-|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the Azure Policy 'DeployIfNotExist' policies described below you'll also need permissions for assigning policies</li><li>To export data to Event Hub, you'll need Write permission on the Event Hub Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](../azure-monitor/insights/solutions.md)</li></ul></li></ul>|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
--
+|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the Azure Policy 'DeployIfNotExist' policies described below, you'll also need permissions for assigning policies</li><li>To export data to Event Hubs, you'll need Write permission on the Event Hubs Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](../azure-monitor/insights/solutions.md)</li></ul></li></ul>|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
## What data types can be exported?
Continuous export can export the following data types whenever they change:
- Security alerts. - Security recommendations.-- Security findings. These can be thought of as 'sub' recommendations and belong to a 'parent' recommendation. For example:
+- Security findings. Findings can be thought of as 'sub' recommendations and belong to a 'parent' recommendation. For example:
- The recommendations [System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f) and [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27) each have one 'sub' recommendation per outstanding system update. - The recommendation [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f) has a 'sub' recommendation for every vulnerability identified by the vulnerability scanner. > [!NOTE]
 - > If you're configuring a continuous export with the REST API, always include the parent with the findings.
 + > If you're configuring a continuous export with the REST API, always include the parent with the findings.
- Secure score per subscription or per control. - Regulatory compliance data. - ## Set up a continuous export You can configure continuous export from the Microsoft Defender for Cloud pages in Azure portal, via the REST API, or at scale using the supplied Azure Policy templates. Select the appropriate tab below for details of each.
You can configure continuous export from the Microsoft Defender for Cloud pages
### Configure continuous export from the Defender for Cloud pages in Azure portal
-The steps below are necessary whether you're setting up a continuous export to Log Analytics workspace or Azure Event Hubs.
+The steps below are necessary whether you're setting up a continuous export to Log Analytics or Azure Event Hubs.
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the specific subscription for which you want to configure the data export.
-1. From the sidebar of the settings page for that subscription, select **Continuous Export**.
+1. From the sidebar of the settings page for that subscription, select **Continuous export**.
- :::image type="content" source="./media/continuous-export/continuous-export-options-page.png" alt-text="Export options in Microsoft Defender for Cloud.":::
+ :::image type="content" source="./media/continuous-export/continuous-export-options-page.png" alt-text="Export options in Microsoft Defender for Cloud." lightbox="./media/continuous-export/continuous-export-options-page.png":::
- Here you see the export options. There's a tab for each available export target.
+ Here you see the export options. There's a tab for each available export target, either Event hub or Log Analytics workspace.
1. Select the data type you'd like to export and choose from the filters on each type (for example, export only high severity alerts).
-1. Select the appropriate export frequency:
+1. Select the export frequency:
- **Streaming** – assessments will be sent when a resource's health state is updated (if no updates occur, no data will be sent). - **Snapshots** – a snapshot of the current state of the selected data types will be sent once a week per subscription. To identify snapshot data, look for the field ``IsSnapshot``.
-1. Optionally, if your selection includes one of these recommendations, you can include the vulnerability assessment findings together with them:
+ If your selection includes one of these recommendations, you can include the vulnerability assessment findings together with them:
- [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37) - [SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f97aa83c-9b63-4f9a-99f6-b22c4398f936) - [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)
The steps below are necessary whether you're setting up a continuous export to L
:::image type="content" source="./media/continuous-export/include-security-findings-toggle.png" alt-text="Include security findings toggle in continuous export configuration." ::: 1. From the "Export target" area, choose where you'd like the data saved. Data can be saved in a target on a different subscription (for example on a Central Event Hub instance or a central Log Analytics workspace).+
+ You can also send the data to an [Event hub or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hub-or-log-analytics-workspace-in-another-tenant).
+ 1. Select **Save**. > [!NOTE]
Continuous export can be configured and managed via the Microsoft Defender for C
- Azure Event Hub - Log Analytics workspace-- Azure Logic Apps
+- Azure Logic Apps
-The API provides additional functionality not available from the Azure portal, for example:
+You can also send the data to an [Event hub or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hub-or-log-analytics-workspace-in-another-tenant).
-* **Greater volume** - You can create multiple export configurations on a single subscription with the API. The **Continuous Export** page in Defender for Cloud's portal UI supports only one export configuration per subscription.
+Here are some examples of options that you can only use in the API:
-* **Additional features** - The API offers additional parameters that aren't shown in the UI. For example, you can add tags to your automation resource as well as define your export based on a wider set of alert and recommendation properties than those offered in the **Continuous Export** page in Defender for Cloud's portal UI.
+* **Greater volume** - You can create multiple export configurations on a single subscription with the API. The **Continuous Export** page in the Azure portal supports only one export configuration per subscription.
-* **More focused scope** - The API provides a more granular level for the scope of your export configurations. When defining an export with the API, you can do so at the resource group level. If you're using the **Continuous Export** page in Defender for Cloud's portal UI, you have to define it at the subscription level.
+* **Additional features** - The API offers parameters that aren't shown in the Azure portal. For example, you can add tags to your automation resource and define your export based on a wider set of alert and recommendation properties than the ones offered in the **Continuous Export** page in the Azure portal.
+
+* **More focused scope** - The API provides a more granular level for the scope of your export configurations. When defining an export with the API, you can do so at the resource group level. If you're using the **Continuous Export** page in the Azure portal, you have to define it at the subscription level.
> [!TIP]
- > If you've set up multiple export configurations using the API, or if you've used API-only parameters, those extra features will not be displayed in the Defender for Cloud UI. Instead, there'll be a banner informing you that other configurations exist.
+ > These API-only options are not shown in the Azure portal. If you use them, there'll be a banner informing you that other configurations exist.
Learn more about the automations API in the [REST API documentation](/rest/api/securitycenter/automations).
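To make the API route concrete, here's a hedged sketch that creates an automation streaming high-severity alerts to a Log Analytics workspace with `az rest`. The subscription ID, resource group, workspace, and automation name are placeholders, and the body follows the automations schema as best understood; confirm the exact properties against the REST reference linked above.

```azurecli-interactive
# Sketch: create a continuous export automation via the automations REST API.
# All IDs and names below are placeholders; verify the schema in the REST reference.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Security/automations/ExportHighSeverityAlerts?api-version=2019-01-01-preview" \
  --body '{
    "location": "westeurope",
    "properties": {
      "isEnabled": true,
      "scopes": [ { "scopePath": "/subscriptions/<sub-id>" } ],
      "sources": [ {
        "eventSource": "Alerts",
        "ruleSets": [ { "rules": [ {
          "propertyJPath": "Severity",
          "propertyType": "String",
          "expectedValue": "High",
          "operator": "Equals"
        } ] } ]
      } ],
      "actions": [ {
        "actionType": "Workspace",
        "workspaceResourceId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
      } ]
    }
  }'
```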
To deploy your continuous export configurations across your organization, use th
|Goal |Policy |Policy ID | ||||
- |Continuous export to Event Hub|[Deploy export to Event Hub for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb)|cdfcce10-4578-4ecd-9703-530938e4abcb|
+ |Continuous export to Event Hubs|[Deploy export to Event Hubs for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb)|cdfcce10-4578-4ecd-9703-530938e4abcb|
|Continuous export to Log Analytics workspace|[Deploy export to Log Analytics workspace for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fffb6f416-7bd2-4488-8828-56585fef2be9)|ffb6f416-7bd2-4488-8828-56585fef2be9| - > [!TIP] > You can also find these by searching Azure Policy: > 1. Open Azure Policy. > :::image type="content" source="./media/continuous-export/opening-azure-policy.png" alt-text="Accessing Azure Policy.":::
- > 2. From the Azure Policy menu, select **Definitions** and search for them by name.
+ > 2. From the Azure Policy menu, select **Definitions** and search for them by name.
1. From the relevant Azure Policy page, select **Assign**. :::image type="content" source="./media/continuous-export/export-policy-assign.png" alt-text="Assigning the Azure Policy."::: 1. Open each tab and set the parameters as desired:
- 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that will use continuous export configuration.
- 1. In the **Parameters** tab, set the resource group and data type details.
+ 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that will use continuous export configuration.
+ 1. In the **Parameters** tab, set the resource group and data type details.
> [!TIP] > Each parameter has a tooltip explaining the options available to you. >
To deploy your continuous export configurations across your organization, use th
1. Optionally, to apply this assignment to existing subscriptions, open the **Remediation** tab and select the option to create a remediation task. 1. Review the summary page and select **Create**.
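If you'd rather assign one of these policies from the command line than click through the portal, a sketch along these lines may help. The management group name is a placeholder, and the parameter name inside `--params` is illustrative; copy the real parameter names from the policy definition's **Parameters** tab.

```azurecli-interactive
# Sketch: assign the "export to Log Analytics workspace" policy at management group scope.
# A DeployIfNotExists policy needs a managed identity so remediation tasks can run.
# The "resourceGroupName" parameter name is illustrative; check the definition for the real names.
az policy assignment create \
  --name "export-to-workspace" \
  --policy "ffb6f416-7bd2-4488-8828-56585fef2be9" \
  --scope "/providers/Microsoft.Management/managementGroups/<mg-name>" \
  --mi-system-assigned --location westeurope \
  --params '{ "resourceGroupName": { "value": "<rg-for-export>" } }'
```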
-
+
-## Information about exporting to a Log Analytics workspace
+## Exporting to a Log Analytics workspace
If you want to analyze Microsoft Defender for Cloud data inside a Log Analytics workspace or use Azure alerts together with Defender for Cloud alerts, set up continuous export to your Log Analytics workspace. ### Log Analytics tables and schemas
-Security alerts and recommendations are stored in the *SecurityAlert* and *SecurityRecommendation* tables respectively.
+Security alerts and recommendations are stored in the *SecurityAlert* and *SecurityRecommendation* tables respectively.
-The name of the Log Analytics solution containing these tables depends on whether you have enabled the enhanced security features: Security ('Security and Audit') or SecurityCenterFree.
+The name of the Log Analytics solution containing these tables depends on whether you've enabled the enhanced security features: Security ('Security and Audit') or SecurityCenterFree.
> [!TIP] > To see the data on the destination workspace, you must enable one of these solutions: **Security and Audit** or **SecurityCenterFree**.
The name of the Log Analytics solution containing these tables depends on whethe
To view the event schemas of the exported data types, visit the [Log Analytics table schemas](https://aka.ms/ASCAutomationSchemas).
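Once exports start flowing, a quick way to spot-check the tables is a workspace query from the CLI. This is a sketch; the workspace GUID is a placeholder for your workspace's customer ID (not its name).

```azurecli-interactive
# Sketch: confirm exported alerts are landing in the SecurityAlert table.
# -w takes the workspace's customer ID (a GUID); -t limits the query to the last day.
az monitor log-analytics query \
  -w "<workspace-customer-id>" \
  --analytics-query "SecurityAlert | summarize count() by AlertSeverity" \
  -t P1D
```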
+## Export data to an Azure Event hub or Log Analytics workspace in another tenant
+
+You can export data to an Azure Event hub or Log Analytics workspace in a different tenant, which can help you gather your data for central analysis.
+
+To export data to an Azure Event hub or Log Analytics workspace in a different tenant:
+
+1. In the tenant that has the Azure Event hub or Log Analytics workspace, [invite a user](../active-directory/external-identities/what-is-b2b.md#easily-invite-guest-users-from-the-azure-ad-portal) from the tenant that hosts the continuous export configuration.
+1. For a Log Analytics workspace: After the user accepts the invitation to join the tenant, assign the user in the workspace tenant one of these roles: Owner, Contributor, Log Analytics Contributor, Sentinel Contributor, or Monitoring Contributor (a CLI sketch for this step follows).
+1. Set up the continuous export configuration and select the event hub or Log Analytics workspace to send the data to.
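Here's a minimal sketch of the role assignment in step 2, run against the workspace tenant; the guest's sign-in name and the workspace resource ID are placeholders.

```azurecli-interactive
# Sketch: grant the invited guest a role on the destination workspace.
# Run this while signed in to the workspace tenant; values are placeholders.
az role assignment create \
  --assignee "guest-user@source-tenant.com" \
  --role "Log Analytics Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```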
## View exported alerts and recommendations in Azure Monitor
-You might also choose to view exported Security Alerts and/or recommendations in [Azure Monitor](../azure-monitor/alerts/alerts-overview.md).
+You might also choose to view exported Security Alerts and/or recommendations in [Azure Monitor](../azure-monitor/alerts/alerts-overview.md).
-Azure Monitor provides a unified alerting experience for a variety of Azure alerts including Diagnostic Log, Metric alerts, and custom alerts based on Log Analytics workspace queries.
+Azure Monitor provides a unified alerting experience for various Azure alerts including Diagnostic Log, Metric alerts, and custom alerts based on Log Analytics workspace queries.
To view alerts and recommendations from Defender for Cloud in Azure Monitor, configure an Alert rule based on Log Analytics queries (Log Alert):
To view alerts and recommendations from Defender for Cloud in Azure Monitor, con
* For **Resource**, select the Log Analytics workspace to which you exported security alerts and recommendations.
- * For **Condition**, select **Custom log search**. In the page that appears, configure the query, lookback period, and frequency period. In the search query, you can type *SecurityAlert* or *SecurityRecommendation* to query the data types that Defender for Cloud continuously exports to as you enable the Continuous export to Log Analytics feature.
-
+ * For **Condition**, select **Custom log search**. In the page that appears, configure the query, lookback period, and frequency period. In the search query, you can type *SecurityAlert* or *SecurityRecommendation* to query the data types that Defender for Cloud continuously exports when you enable the continuous export to Log Analytics feature.
+
* Optionally, configure the [Action Group](../azure-monitor/alerts/action-groups.md) that you'd like to trigger. Action groups can trigger email sending, ITSM tickets, WebHooks, and more; a CLI sketch for creating one follows these steps. ![Azure Monitor alert rule.](./media/continuous-export/azure-monitor-alert-rule.png)
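If you script the action group instead of creating it in the portal, something like the following sketch applies; the resource group, group name, short name, and email address are placeholders.

```azurecli-interactive
# Sketch: create an action group that emails the security team when the alert rule fires.
az monitor action-group create \
  --resource-group <rg> \
  --name SecOpsEmail \
  --short-name secops \
  --action email SecOpsMailbox secops@contoso.com
```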
To download a CSV report for alerts or recommendations, open the **Security aler
> [!NOTE] > These reports contain alerts and recommendations for resources from the currently selected subscriptions. - ## FAQ - Continuous export ### What are the costs involved in exporting data?
-There is no cost for enabling a continuous export. Costs might be incurred for ingestion and retention of data in your Log Analytics workspace, depending on your configuration there.
+There's no cost for enabling a continuous export. Costs might be incurred for ingestion and retention of data in your Log Analytics workspace, depending on your configuration there.
-Learn more about [Log Analytics workspace pricing](https://azure.microsoft.com/pricing/details/monitor/).
+Many alerts are only provided when you've enabled Defender plans for your resources. A good way to preview the alerts you'll get in your exported data is to see the alerts shown in Defender for Cloud's pages in the Azure portal.
-Learn more about [Azure Event Hub pricing](https://azure.microsoft.com/pricing/details/event-hubs/).
+Learn more about [Log Analytics workspace pricing](https://azure.microsoft.com/pricing/details/monitor/).
+Learn more about [Azure Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/).
### Does the export include data about the current state of all resources?
No. Continuous export is built for streaming of **events**:
- **Alerts** received before you enabled export won't be exported. - **Recommendations** are sent whenever a resource's compliance state changes. For example, when a resource turns from healthy to unhealthy. Therefore, as with alerts, recommendations for resources that haven't changed state since you enabled export won't be exported.-- **Secure score** per security control or subscription is sent when a security control's score changes by 0.01 or more.
+- **Secure score** per security control or subscription is sent when a security control's score changes by 0.01 or more.
- **Regulatory compliance status** is sent when the status of the resource's compliance changes. -- ### Why are recommendations sent at different intervals?
-Different recommendations have different compliance evaluation intervals, which can vary from a few minutes to every few days. Consequently, recommendations will differ in the amount of time it takes for them to appear in your exports.
+Different recommendations have different compliance evaluation intervals, which can range from every few minutes to every few days. So, the amount of time that it takes for recommendations to appear in your exports varies.
### Does continuous export support any business continuity or disaster recovery (BCDR) scenarios?
-When preparing your environment for BCDR scenarios, where the target resource is experiencing an outage or other disaster, it's the organization's responsibility to prevent data loss by establishing backups according to the guidelines from Azure Event Hubs, Log Analytics workspace, and Logic App.
+Continuous export can be helpful when you prepare for BCDR scenarios where the target resource experiences an outage or other disaster. However, it's the organization's responsibility to prevent data loss by establishing backups according to the guidelines from Azure Event Hubs, Log Analytics workspace, and Logic App.
Learn more in [Azure Event Hubs - Geo-disaster recovery](../event-hubs/event-hubs-geo-dr.md). -
-### Is continuous export available for free?
-
-Yes! Note that many alerts are only provided when you've enabled advanced protections. A good way to preview the alerts you'll get in your exported data is to see the alerts shown in Defender for Cloud's pages in the Azure portal.
--- ## Next steps
-In this article, you learned how to configure continuous exports of your recommendations and alerts. You also learned how to download your alerts data as a CSV file.
+In this article, you learned how to configure continuous exports of your recommendations and alerts. You also learned how to download your alerts data as a CSV file.
-For related material, see the following documentation:
+For related material, see the following documentation:
- Learn more about [workflow automation templates](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation). - [Azure Event Hubs documentation](../event-hubs/index.yml)
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
The following commands trigger an on-demand scan:
- [Are there any additional charges for the Qualys license?](#are-there-any-additional-charges-for-the-qualys-license) - [What prerequisites and permissions are required to install the Qualys extension?](#what-prerequisites-and-permissions-are-required-to-install-the-qualys-extension) - [Can I remove the Defender for Cloud Qualys extension?](#can-i-remove-the-defender-for-cloud-qualys-extension)
+- [How can I check that the Qualys extension is properly installed?](#how-can-i-check-that-the-qualys-extension-is-properly-installed)
- [How does the extension get updated?](#how-does-the-extension-get-updated) - [Why does my machine show as "not applicable" in the recommendation?](#why-does-my-machine-show-as-not-applicable-in-the-recommendation) - [Can the built-in vulnerability scanner find vulnerabilities on the VMs network?](#can-the-built-in-vulnerability-scanner-find-vulnerabilities-on-the-vms-network)
You'll need the following details:
* On Linux, the extension is called "LinuxAgent.AzureSecurityCenter" and the publisher name is "Qualys". * On Windows, the extension is called "WindowsAgent.AzureSecurityCenter" and the provider name is "Qualys".
+### How can I check that the Qualys extension is properly installed?
+
+You can use the `curl` command to check the connectivity to the relevant Qualys URL. A valid response would be: `{"code":404,"message":"HTTP 404 Not Found"}`
+
+In addition, make sure that the DNS resolution for these URLs is successful and that everything is [valid with the certificate authority](https://success.qualys.com/support/s/article/000001856) that is used.
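For example, the check could look like the sketch below. The URL shown is the Qualys US data center endpoint as an assumed example; substitute the regional URL that applies to your deployment.

```bash
# Sketch: verify connectivity from the VM to the Qualys endpoint for your region.
# The URL is an assumed example; use your regional Qualys URL.
curl -s https://qagpublic.qg3.apps.qualys.com
# Expected response: {"code":404,"message":"HTTP 404 Not Found"}

# Confirm DNS resolution for the same host.
nslookup qagpublic.qg3.apps.qualys.com
```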
+ ### How does the extension get updated? Like the Microsoft Defender for Cloud agent itself and all other Azure extensions, minor updates of the Qualys scanner might automatically happen in the background. All agents and extensions are tested extensively before being automatically deployed.
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Title: Workflow automation in Microsoft Defender for Cloud description: Learn how to create and automate workflows in Microsoft Defender for Cloud Previously updated : 06/26/2022 Last updated : 07/31/2022 # Automate responses to Microsoft Defender for Cloud triggers
This article describes the workflow automation feature of Microsoft Defender for
You can also run Logic Apps manually when viewing any security alert or recommendation.
-To manually run a Logic App, open an alert or a recommendation and select **Trigger Logic App**:
+To manually run a Logic App, open an alert or a recommendation and select **Trigger Logic App**:
[![Manually trigger a Logic App.](media/workflow-automation/manually-trigger-logic-app.png)](media/workflow-automation/manually-trigger-logic-app.png#lightbox)
To implement these policies:
|Goal |Policy |Policy ID | ||||
- |Workflow automation for security alerts |[Deploy Workflow Automation for Microsoft Defender for Cloud alerts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff1525828-9a90-4fcf-be48-268cdd02361e)|f1525828-9a90-4fcf-be48-268cdd02361e|
- |Workflow automation for security recommendations |[Deploy Workflow Automation for Microsoft Defender for Cloud recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef|
- |Workflow automation for regulatory compliance changes|[Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f509122b9-ddd9-47ba-a5f1-d0dac20be63c)|509122b9-ddd9-47ba-a5f1-d0dac20be63c|
+ |Workflow automation for security alerts |[Deploy Workflow Automation for Microsoft Defender for Cloud alerts](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff1525828-9a90-4fcf-be48-268cdd02361e)|f1525828-9a90-4fcf-be48-268cdd02361e|
+ |Workflow automation for security recommendations |[Deploy Workflow Automation for Microsoft Defender for Cloud recommendations](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef|
+ |Workflow automation for regulatory compliance changes|[Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F509122b9-ddd9-47ba-a5f1-d0dac20be63c)|509122b9-ddd9-47ba-a5f1-d0dac20be63c|
+ > [!NOTE]
+ > The three workflow automation policies have recently been rebranded. Unfortunately, this change came with an unavoidable breaking change. To learn how to mitigate this breaking change, see [Mitigate breaking change](#mitigate-breaking-change).
> [!TIP] > You can also find these by searching Azure Policy:
For every active automation, we recommend you create an identical (disabled) aut
Learn more about [Business continuity and disaster recovery for Azure Logic Apps](../logic-apps/business-continuity-disaster-recovery-guidance.md).
+### Mitigate breaking change
+
+Recently, we've rebranded the following policies:
+
+- [Deploy Workflow Automation for Microsoft Defender for Cloud alerts](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff1525828-9a90-4fcf-be48-268cdd02361e)
+- [Deploy Workflow Automation for Microsoft Defender for Cloud recommendations](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F73d6ab6c-2475-4850-afd6-43795f3492ef)
+- [Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F509122b9-ddd9-47ba-a5f1-d0dac20be63c)
+
+Unfortunately, this change came with an unavoidable breaking change. The breaking change causes all of the old workflow automation policies that used the built-in connectors to show as non-compliant.
+
+**To mitigate this issue**:
+
+1. Navigate to the logic app that is connected to the policy.
+1. Select **Logic app designer**.
+1. Select the ellipsis (**...**) menu > **Rename**.
+1. Rename the Defender for Cloud connector as follows:
+
+ | Original name | New name|
+ |--|--|
+ |Deploy Workflow Automation for Microsoft Defender for Cloud alerts | When an Microsoft Defender for Clou dAlert is created or triggered <sup>[1](#footnote1)</sup>|
+ | Deploy Workflow Automation for Microsoft Defender for Cloud recommendations | When an Microsoft Defender for Cloud Recommendation is created or triggered |
+ | Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance | When a Microsoft Defender for Cloud Regulatory Compliance Assessment is created or triggered |
+
+ <sup><a name="footnote1"></a>1</sup> The typo `Clou dAlert` is intentional.
+ ## Next steps In this article, you learned about creating Logic Apps, automating their execution in Defender for Cloud, and running them manually.
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
You can display devices from saved filters in the Device map. For more informati
### Map zoom views
-Working with map views helps expedite forensics when analyzing large networks.
-
-Three device detail views can be displayed:
+Working with map views helps expedite forensics when analyzing large networks. Map views include the following options:
- [BirdΓÇÖs-eye view](#birds-eye-view)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
|--|--|--| | 22.2.4 | 07/2022 | 4/2023 | | 22.2.3 | 07/2022 | 4/2023 |
+| 22.1.7 | 07/2022 | 4/2023 |
| 22.1.6 | 06/2022 | 10/2023 | | 22.1.5 | 06/2022 | 10/2023 | | 22.1.4 | 04/2022 | 10/2022 |
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates | ||| |**Enterprise IoT networks** | - [Enterprise IoT purchase experience and Defender for Endpoint integration in GA](#enterprise-iot-purchase-experience-and-defender-for-endpoint-integration-in-ga) |
-|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>To update to version 22.2.3:<br>- From version 22.1.x, update directly to version 22.2.3<br>- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3<br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
+|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) <br><br>**To update to version 22.2.x**:<br>- **From version 22.1.x**, update directly to the latest **22.2.x** version<br>- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version <br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) | ### Enterprise IoT purchase experience and Defender for Endpoint integration in GA
Defender for IoTΓÇÖs new purchase experience and the Enterprise IoT integration
> [!NOTE] > The Enterprise IoT network sensor and all detections remain in Public Preview.
+### Same passwords for cyberx_host and cyberx users
+
+During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When you update from version 10.x.x to version 22.1.7, the **cyberx_host** user is assigned the same password as the **cyberx** user.
+
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
+ ### Device inventory enhancements Starting in OT sensor versions 22.2.4, you can now take the following actions from the sensor console's **Device inventory** page:
iot-hub Iot Hub Csharp Csharp C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-c2d.md
In this section, you create a .NET console app that sends cloud-to-device messag
1. In the current Visual Studio solution, select **File** > **New** > **Project**. In **Create a new project**, select **Console App (.NET Framework)**, and then select **Next**.
-1. Name the project *SendCloudToDevice*. Under **Solution**, select **Add to solution** and accept the most recent version of the .NET Framework. Select **Create** to create the project.
+1. Name the project *SendCloudToDevice*, then select **Next**.
![Configure a new project in Visual Studio](./media/iot-hub-csharp-csharp-c2d/sendcloudtodevice-project-configure.png)
+1. Accept the most recent version of the .NET Framework. Select **Create** to create the project.
+ 1. In Solution Explorer, right-click the new project, and then select **Manage NuGet Packages**. 1. In **Manage NuGet Packages**, select **Browse**, and then search for and select **Microsoft.Azure.Devices**. Select **Install**.
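If you work outside Visual Studio, a roughly equivalent setup with the .NET CLI is sketched below. Note this assumes a .NET SDK is installed and creates a modern .NET console project rather than the .NET Framework project described in the steps above.

```bash
# Sketch: create the console project and add the IoT Hub service SDK package.
dotnet new console -n SendCloudToDevice
cd SendCloudToDevice
dotnet add package Microsoft.Azure.Devices
```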
iot-hub Tutorial Message Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-message-enrichments.md
Previously updated : 06/08/2022 Last updated : 07/29/2022 # Customer intent: As a customer using Azure IoT Hub, I want to add information to the messages that come through my IoT hub and are sent to another endpoint. For example, I'd like to pass the IoT hub name to the application that reads the messages from the final endpoint, such as Azure Storage.
*Message enrichments* are the ability of Azure IoT Hub to stamp messages with additional information before the messages are sent to the designated endpoint. One reason to use message enrichments is to include data that can be used to simplify downstream processing. For example, enriching device messages with a device twin tag can reduce load on customers to make device twin API calls for this information. For more information, see [Overview of message enrichments](iot-hub-message-enrichments-overview.md).
-In this tutorial, you see two ways to create and configure the resources that are needed to test the message enrichments for an IoT hub. The resources include one storage account with two storage containers. One container holds the enriched messages, and another container holds the original messages. Also included is an IoT hub to receive the messages and route them to the appropriate storage container based on whether they're enriched or not.
-
-* The first method is to use the Azure CLI to create the resources and configure the message routing. Then you define the message enrichments in the Azure portal.
-
-* The second method is to use an Azure Resource Manager template to create both the resources and configure both the message routing and message enrichments.
-
-After the configurations for the message routing and message enrichments are finished, you use an application to send messages to the IoT hub. The hub then routes them to both storage containers. Only the messages sent to the endpoint for the **enriched** storage container are enriched.
+In the [first part of this tutorial](tutorial-routing.md), you saw how to create custom endpoints and route messages to other Azure services. In this tutorial, you see how to create and configure the extra resources needed to test message enrichments for an IoT hub: a second container in the storage account you created in the first part, to hold the enriched messages, and a message route to send them there. After the message routing and message enrichments are configured, you use an application to send messages to the IoT hub. The hub then routes them to both storage containers. Only the messages sent to the endpoint for the **enriched** storage container are enriched.
In this tutorial, you perform the following tasks: > [!div class="checklist"] >
-> * First method: Create resources and configure message routing using the Azure CLI. Configure the message enrichments in the Azure portal.
-> * Second method: Create resources and configure message routing and message enrichments using a Resource Manager template.
+> * Create a second container in your storage account.
+> * Create another custom endpoint and route messages to it from the IoT hub.
+> * Configure message enrichments that are routed to the new endpoint.
> * Run an app that simulates an IoT device sending messages to the hub.
-> * View the results, and verify that the message enrichments are being applied to the targeted messages.
+> * View the results and verify that the message enrichments are being applied to the targeted messages.
## Prerequisites * You must have an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* You must have completed [Tutorial: Send device data to Azure Storage using IoT Hub message routing](tutorial-routing.md) and maintained the resources you created for it.
+ * Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+# [Azure portal](#tab/portal)
-## Retrieve the IoT C# samples repository
+There are no other prerequisites for the Azure portal.
-Download or clone the [IoT C# samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) from GitHub. Follow the directions in **README.md** to set up the prerequisites for running C# samples.
+# [Azure CLI](#tab/cli)
-This repository has several applications, scripts, and Resource Manager templates in it. The ones to be used for this tutorial are as follows:
-* For the manual method, there's a CLI script that creates the cloud resources. This script is in `/azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/iothub_msgenrichment_cli.azcli`. This script creates the resources and configures the message routing. After you run this script, create the message enrichments manually by using the Azure portal.
-* For the automated method, there's an Azure Resource Manager template. The template is in `/azure-iot-samples-csharp/iot-hub/Tutorials/Routing/SimulatedDevice/resources/template_msgenrichments.json`. This template creates the resources, configures the message routing, and then configures the message enrichments.
-* The third application you use is the device simulation app, which you use to send messages to the IoT hub and test the message enrichments.
+
-## Create and configure resources using the Azure CLI
+## Create a second container in your storage account
-In addition to creating the necessary resources, the Azure CLI script also configures the two routes to the endpoints that are separate storage containers. For more information on how to configure message routing, see the [routing tutorial](tutorial-routing.md). After the resources are set up, use the [Azure portal](https://portal.azure.com) to configure message enrichments for each endpoint. Then continue on to the testing step.
+In [the first part](tutorial-routing.md#create-a-storage-account) of this tutorial, you created a storage account and container for routed messages. Now you should create a second container for enriched messages.
-> [!NOTE]
-> All messages are routed to both endpoints, but only the messages going to the endpoint with configured message enrichments will be enriched.
+# [Azure portal](#tab/portal)
-You can use the script that follows, or you can open the script in the /resources folder of the downloaded repository. The script performs the following steps:
+1. In the Azure portal, search for **Storage accounts**.
-* Create an IoT hub.
-* Create a storage account.
-* Create two containers in the storage account. One container is for the enriched messages, and another container is for messages that aren't enriched.
-* Set up routing for the two different storage containers:
- * Create an endpoint for each storage account container.
- * Create a route to each of the storage account container endpoints.
+1. Select the account you created earlier.
-There are several resource names that must be globally unique, such as the IoT hub name and the storage account name. To make running the script easier, those resource names are appended with a random alphanumeric value called *randomValue*. The random value is generated once at the top of the script. It's appended to the resource names as needed throughout the script. If you don't want the value to be random, you can set it to an empty string or to a specific value.
+1. In the storage account menu, select **Containers** from the **Data storage** section.
-If you haven't already done so, open an Azure [Cloud Shell window](https://shell.azure.com) and ensure that it's set to Bash. Open the script in the unzipped repository, select Ctrl+A to select all of it, and then select Ctrl+C to copy it. Alternatively, you can copy the following CLI script or open it directly in Cloud Shell. Paste the script in the Cloud Shell window by right-clicking the command line and selecting **Paste**. The script runs one statement at a time. After the script stops running, select **Enter** to make sure it runs the last command. The following code block shows the script that's used, with comments that explain what it's doing.
+1. Select **Container** to create the new container.
-Here are the resources created by the script. *Enriched* means that the resource is for messages with enrichments. *Original* means that the resource is for messages that aren't enriched.
+ :::image type="content" source="./media/tutorial-message-enrichments/create-storage-container.png" alt-text="Screenshot of creating a storage container.":::
-| Name | Value |
-|--|--|
-| resourceGroup | ContosoResourcesMsgEn |
-| IoT device name | Contoso-Test-Device |
-| IoT Hub name | ContosoTestHubMsgEn |
-| storage Account Name | contosostorage |
-| container name 1 | original |
-| container name 2 | enriched |
-| endpoint Name 1 | ContosoStorageEndpointOriginal |
-| endpoint Name 2 | ContosoStorageEndpointEnriched|
-| route Name 1 | ContosoStorageRouteOriginal |
-| route Name 2 | ContosoStorageRouteEnriched |
+1. Name the container *enriched* and select **Create**.
-```azurecli-interactive
-# This command retrieves the subscription id of the current Azure account.
-# This field is used when setting up the routing queries.
-subscriptionID=$(az account show --query id -o tsv)
-
-# Concatenate this number onto the resources that have to be globally unique.
-# You can set this to "" or to a specific value if you don't want it to be random.
-# This retrieves a random value.
-randomValue=$RANDOM
-
-# This command installs the IoT Extension for Azure CLI.
-# You only need to install this the first time.
-# You need it to create the device identity.
-az extension add --name azure-iot
-
-# Set the values for the resource names that
-# don't have to be globally unique.
-location=westus2
-resourceGroup=ContosoResourcesMsgEn
-containerName1=original
-containerName2=enriched
-iotDeviceName=Contoso-Test-Device
-
-# Create the resource group to be used
-# for all the resources for this tutorial.
-az group create --name $resourceGroup \
- --location $location
-
-# The IoT hub name must be globally unique,
-# so add a random value to the end.
-iotHubName=ContosoTestHubMsgEn$randomValue
-echo "IoT hub name = " $iotHubName
-
-# Create the IoT hub.
-az iot hub create --name $iotHubName \
- --resource-group $resourceGroup \
- --sku S1 --location $location
-
-# You need a storage account that will have two containers
-# -- one for the original messages and
-# one for the enriched messages.
-# The storage account name must be globally unique,
-# so add a random value to the end.
-storageAccountName=contosostorage$randomValue
-echo "Storage account name = " $storageAccountName
-
-# Create the storage account to be used as a routing destination.
-az storage account create --name $storageAccountName \
- --resource-group $resourceGroup \
- --location $location \
- --sku Standard_LRS
-
-# Get the primary storage account key.
-# You need this to create the containers.
-storageAccountKey=$(az storage account keys list \
- --resource-group $resourceGroup \
- --account-name $storageAccountName \
- --query "[0].value" | tr -d '"')
-
-# See the value of the storage account key.
-echo "storage account key = " $storageAccountKey
-
-# Create the containers in the storage account.
-az storage container create --name $containerName1 \
- --account-name $storageAccountName \
- --account-key $storageAccountKey \
- --public-access off
-
-az storage container create --name $containerName2 \
- --account-name $storageAccountName \
- --account-key $storageAccountKey \
- --public-access off
-
-# Create the IoT device identity to be used for testing.
-az iot hub device-identity create --device-id $iotDeviceName \
- --hub-name $iotHubName
-
-# Retrieve the information about the device identity, then copy the primary key to
-# Notepad. You need this to run the device simulation during the testing phase.
-# If you are using Cloud Shell, you can scroll the window back up to retrieve this value.
-az iot hub device-identity show --device-id $iotDeviceName \
- --hub-name $iotHubName
-
-##### ROUTING FOR STORAGE #####
-
-# You're going to have two routes and two endpoints.
-# One route points to the first container ("original") in the storage account
-# and includes the original messages.
-# The other points to the second container ("enriched") in the same storage account
-# and includes the enriched versions of the messages.
-
-endpointType="azurestoragecontainer"
-endpointName1="ContosoStorageEndpointOriginal"
-endpointName2="ContosoStorageEndpointEnriched"
-routeName1="ContosoStorageRouteOriginal"
-routeName2="ContosoStorageRouteEnriched"
-
-# for both endpoints, retrieve the messages going to storage
-condition='level="storage"'
-
-# Get the connection string for the storage account.
-# Adding the "-o tsv" makes it be returned without the default double quotes around it.
-storageConnectionString=$(az storage account show-connection-string \
- --name $storageAccountName --query connectionString -o tsv)
-
-# Create the routing endpoints and routes.
-# Set the encoding format to either avro or json.
-
-# This is the endpoint for the first container, for endpoint messages that are not enriched.
-az iot hub routing-endpoint create \
- --connection-string $storageConnectionString \
- --endpoint-name $endpointName1 \
- --endpoint-resource-group $resourceGroup \
- --endpoint-subscription-id $subscriptionID \
- --endpoint-type $endpointType \
- --hub-name $iotHubName \
- --container $containerName1 \
- --resource-group $resourceGroup \
- --encoding json
-
-# This is the endpoint for the second container, for endpoint messages that are enriched.
-az iot hub routing-endpoint create \
- --connection-string $storageConnectionString \
- --endpoint-name $endpointName2 \
- --endpoint-resource-group $resourceGroup \
- --endpoint-subscription-id $subscriptionID \
- --endpoint-type $endpointType \
- --hub-name $iotHubName \
- --container $containerName2 \
- --resource-group $resourceGroup \
- --encoding json
-
-# This is the route for messages that are not enriched.
-# Create the route for the first storage endpoint.
-az iot hub route create \
- --name $routeName1 \
- --hub-name $iotHubName \
- --source devicemessages \
- --resource-group $resourceGroup \
- --endpoint-name $endpointName1 \
- --enabled \
- --condition $condition
-
-# This is the route for messages that are enriched.
-# Create the route for the second storage endpoint.
-az iot hub route create \
- --name $routeName2 \
- --hub-name $iotHubName \
- --source devicemessages \
- --resource-group $resourceGroup \
- --endpoint-name $endpointName2 \
- --enabled \
- --condition $condition
-```
+# [Azure CLI](#tab/cli)
-At this point, the resources are all set up and the message routing is configured. You can view the message routing configuration in the portal and set up the message enrichments for messages going to the **enriched** storage container.
+> [!TIP]
+> Many of the CLI commands used throughout this tutorial use the same parameters. For your convenience, we have you define local variables that can be called as needed. Be sure to run all the commands in the same session, or else you will have to redefine the variables.
-### Configure the message enrichments using the Azure portal
+The values for these variables should be for the same resources you used in the first part of this tutorial.
-1. In the [Azure portal](https://portal.azure.com), go to your IoT hub by selecting **Resource groups**. Then select the resource group set up for this tutorial (**ContosoResourcesMsgEn**). Find the IoT hub in the list, and select it.
+1. Define the variables for your IoT hub, storage account, and container.
-2. Select **Message routing** for the IoT hub.
+ *GROUP_NAME*: Replace this placeholder with the name of the resource group that contains your IoT hub.
- :::image type="content" source="./media/tutorial-message-enrichments/select-iot-hub.png" alt-text="Screenshot that shows how to select message routing." border="true":::
+ *IOTHUB_NAME*: Replace this placeholder with the name of your IoT hub.
- The message routing pane has three tabs labeled **Routes**, **Custom endpoints**, and **Enrich messages**. Browse the first two tabs to see the configuration set up by the script.
+ *DEVICE_ID*: Replace this placeholder with the ID of your device.
-3. Select the **Enrich messages** tab to add three message enrichments for the messages going to the endpoint for the storage container called **enriched**.
+ *STORAGE_NAME*: Replace this placeholder with the name of your storage account.
-4. For each message enrichment, fill in the name and value, and then select the endpoint **ContosoStorageEndpointEnriched** from the drop-down list. Here's an example of how to set up an enrichment that adds the IoT hub name to the message:
+ For this tutorial, the value for the `containerName` variable should be *enriched*.
- ![Add first enrichment](./media/tutorial-message-enrichments/add-message-enrichments.png)
+ ```azurecli-interactive
+ resourceGroup=GROUP_NAME
+ hubName=IOTHUB_NAME
+ deviceId=DEVICE_ID
+ storageName=STORAGE_NAME
+ containerName=enriched
+ ```
- Add these values to the list for the ContosoStorageEndpointEnriched endpoint:
+1. Use the [az storage container create](/cli/azure/storage/container#az-storage-container-create) command to add the container to your storage account.
- | Name | Value | Endpoint |
- | - | -- | -- |
- | myIotHub | `$iothubname` | ContosoStorageEndpointEnriched |
- | DeviceLocation | `$twin.tags.location` (assumes that the device twin has a location tag) | ContosoStorageEndpointEnriched |
- |customerID | `6ce345b8-1e4a-411e-9398-d34587459a3a` | ContosoStorageEndpointEnriched |
+ ```azurecli-interactive
+ az storage container create --auth-mode login --account-name $storageName --name $containerName
+ ```
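+
+1. Optionally, confirm that the container exists by running the [az storage container show](/cli/azure/storage/container#az-storage-container-show) command. This verification step is an optional addition to the tutorial flow and uses the same variables defined above:
+
+   ```azurecli-interactive
+   az storage container show --auth-mode login --account-name $storageName --name $containerName
+   ```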
- When you're finished, your pane should look similar to this image:
++
+## Route messages to a second endpoint
+
+Create a second endpoint and route for the enriched messages.
+
+# [Azure portal](#tab/portal)
+
+1. In the Azure portal, navigate to your IoT hub.
+
+1. Select **Message Routing** from the **Hub settings** section of the menu.
+
+1. In the **Routes** tab, select **Add**.
+
+ :::image type="content" source="./media/tutorial-message-enrichments/add-route.png" alt-text="Screenshot of adding a new message route.":::
- ![Table with all enrichments added](./media/tutorial-message-enrichments/all-message-enrichments.png)
+1. Select **Add endpoint** next to the **Endpoint** field, then select **Storage** from the dropdown menu.
-5. Select **Apply** to save the changes.
+ :::image type="content" source="./media/tutorial-message-enrichments/add-storage-endpoint.png" alt-text="Screenshot of adding a new endpoint for a route.":::
-You now have message enrichments set up for all messages routed to the **enriched** endpoint. Skip to the [Test message enrichments](#test-message-enrichments) section to continue the tutorial.
+1. Provide the following information for the new storage endpoint:
-## Create and configure resources using a Resource Manager template
+ | Parameter | Value |
+ | | -- |
+ | **Endpoint name** | ContosoStorageEndpointEnriched |
+ | **Azure Storage container** | Select **Pick a container**, which takes you to a list of storage accounts. Choose the storage account that you created in the previous section, then choose the **enriched** container that you created in that account. Select **Select**.|
+ | **Encoding** | Select **JSON**. If this field is greyed out, then your storage account region doesn't support JSON. In that case, continue with the default **AVRO**. |
-You can use a Resource Manager template to create and configure the resources, message routing, and message enrichments.
+ :::image type="content" source="./media/tutorial-message-enrichments/create-storage-endpoint.png" alt-text="Screenshot showing selecting a container for an endpoint.":::
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **+ Create a Resource** to bring up a search box. Enter *template deployment*, and search for it. In the results pane, select **Template deployment (deploy using custom template)**.
+1. Accept the default values for the rest of the parameters and select **Create**.
- ![Template deployment in the Azure portal](./media/tutorial-message-enrichments/template-select-deployment.png)
+1. Continue creating the new route, now that you've added the storage endpoint. Provide the following information for the new route:
-1. Select **Create** in the **Template deployment** pane.
+ | Parameter | Value |
+ | -- | -- |
+ | **Name** | ContosoStorageRouteEnriched |
+ | **Data source** | Verify that **Device Telemetry Messages** is selected from the dropdown list. |
+ | **Enable route** | Verify that this field is set to `enabled`. |
+ | **Routing query** | Enter `level="storage"` as the query string. |
-1. In the **Custom deployment** pane, select **Build your own template in the editor**.
+ :::image type="content" source="./media/tutorial-message-enrichments/create-storage-route.png" alt-text="Screenshot showing saving routing query information.":::
-1. In the **Edit template** pane, select **Load file**. Windows Explorer appears. Locate the **template_messageenrichments.json** file in the unzipped repo file in the **/iot-hub/Tutorials/Routing/SimulatedDevice/resources** directory.
+1. Select **Save**.
- ![Select template from local machine](./media/tutorial-message-enrichments/template-select.png)
+# [Azure CLI](#tab/cli)
-1. Select **Open** to load the template file from the local machine. It loads and appears in the edit pane.
+1. Configure the variables for the endpoint and route commands to use the values *ContosoStorageEndpointEnriched* and *ContosoStorageRouteEnriched*, respectively.
- This template is set up to use a globally unique IoT hub name and storage account name by adding a random value to the end of the default names, so you can use the template without making any changes to it.
+ ```azurecli-interactive
+ endpointName=ContosoStorageEndpointEnriched
+ routeName=ContosoStorageRouteEnriched
+ ```
+
+1. Use the [az iot hub routing-endpoint create](/cli/azure/iot/hub/routing-endpoint#az-iot-hub-routing-endpoint-create) command to create a custom endpoint that points to the storage container you made in the previous section.
+
+ ```azurecli-interactive
+ az iot hub routing-endpoint create \
+ --connection-string $(az storage account show-connection-string --name $storageName --query connectionString -o tsv) \
+ --endpoint-name $endpointName \
+ --endpoint-resource-group $resourceGroup \
+ --endpoint-subscription-id $(az account show --query id -o tsv) \
+    --endpoint-type azurestoragecontainer \
+ --hub-name $hubName \
+ --container $containerName \
+ --resource-group $resourceGroup \
+ --encoding json
+ ```
+
+1. Use the [az iot hub route create](/cli/azure/iot/hub/route#az-iot-hub-route-create) command to create a route that passes any message where `level="storage"` to the storage container endpoint.
+
+ ```azurecli-interactive
+ az iot hub route create \
+ --name $routeName \
+ --hub-name $hubName \
+ --resource-group $resourceGroup \
+ --source devicemessages \
+ --endpoint-name $endpointName \
+ --enabled true \
+ --condition 'level="storage"'
+ ```
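+
+1. Optionally, confirm the new endpoint and route with the [az iot hub routing-endpoint list](/cli/azure/iot/hub/routing-endpoint#az-iot-hub-routing-endpoint-list) and [az iot hub route list](/cli/azure/iot/hub/route#az-iot-hub-route-list) commands. This verification step is an optional addition to the tutorial flow:
+
+   ```azurecli-interactive
+   az iot hub routing-endpoint list --hub-name $hubName --resource-group $resourceGroup
+   az iot hub route list --hub-name $hubName --resource-group $resourceGroup --output table
+   ```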
+++
+## Add message enrichment to the new endpoint
+
+Create three message enrichments for the messages that are routed to the **enriched** storage container.
+
+# [Azure portal](#tab/portal)
+
+1. In the Azure portal, navigate to your IoT hub.
- Here are the resources created by loading the template. **Enriched** means that the resource is for messages with enrichments. **Original** means that the resource is for messages that aren't enriched. These are the same values used in the Azure CLI script.
+1. Select **Message routing** for the IoT hub.
- | Name | Value |
- |--|--|
- | IoT Hub name | ContosoTestHubMsgEn |
- | storage Account Name | contosostorage |
- | container name 1 | original |
- | container name 2 | enriched |
- | endpoint Name 1 | ContosoStorageEndpointOriginal |
- | endpoint Name 2 | ContosoStorageEndpointEnriched|
- | route Name 1 | ContosoStorageRouteOriginal |
- | route Name 2 | ContosoStorageRouteEnriched |
+ :::image type="content" source="./media/tutorial-message-enrichments/select-iot-hub.png" alt-text="Screenshot that shows how to select message routing.":::
-1. Select **Save**. The **Custom deployment** pane appears and shows all of the parameters used by the template. The only field you need to set is **Resource group**. Either create a new one or select one from the drop-down list.
+ The message routing pane has three tabs labeled **Routes**, **Custom endpoints**, and **Enrich messages**.
- Here's the top half of the **Custom deployment** pane. You can see where you fill in the resource group.
+1. Select the **Enrich messages** tab to add three message enrichments for the messages going to the endpoint for the storage container called **enriched**.
- ![Top half of Custom deployment pane](./media/tutorial-message-enrichments/template-deployment-top.png)
+1. For each message enrichment, fill in the name and value, and then select the endpoint **ContosoStorageEndpointEnriched** from the drop-down list. Here's an example of how to set up an enrichment that adds the IoT hub name to the message:
-1. Here's the bottom half of the **Custom deployment** pane. You can see the rest of the parameters and the terms and conditions.
+ :::image type="content" source="./media/tutorial-message-enrichments/add-message-enrichments.png" alt-text="Screenshot that shows adding the first enrichment.":::
- ![Bottom half of Custom deployment pane](./media/tutorial-message-enrichments/template-deployment-bottom.png)
+ Add these values to the list for the ContosoStorageEndpointEnriched endpoint:
+
+ | Name | Value | Endpoint |
+ | - | -- | -- |
+   | myIotHub | `$iothubname` | ContosoStorageEndpointEnriched |
+ | DeviceLocation | `$twin.tags.location` (assumes that the device twin has a location tag) | ContosoStorageEndpointEnriched |
+ | customerID | `6ce345b8-1e4a-411e-9398-d34587459a3a` | ContosoStorageEndpointEnriched |
-1. Select the check box to agree to the terms and conditions. Then select **Purchase** to continue with the template deployment.
+ When you're finished, your pane should look similar to this image:
-1. Wait for the template to be fully deployed. Select the bell icon at the top of the screen to check on the progress.
+ :::image type="content" source="./media/tutorial-message-enrichments/all-message-enrichments.png" alt-text="Screenshot of table with all enrichments added.":::
-### Register a device in the portal
+1. Select **Apply** to save the changes.
-1. Once your resources are deployed, select the IoT hub in your resource group.
-1. Select **Devices** from the **Device management** section of the navigation menu.
-1. Select **Add Device** to register a new device in your hub.
-1. Provide a device ID. The sample application used later in this tutorial defaults to a device named `Contoso-Test-Device`, but you can use any ID. Select **Save**.
-1. Once the device is created in your hub, select its name from the list of devices. You may need to refresh the list.
-1. Copy the **Primary key** value and have it available to use in the testing section of this article.
+# [Azure CLI](#tab/cli)
+
+Make three calls to the [az iot hub message-enrichment create](/cli/azure/iot/hub/message-enrichment#az-iot-hub-message-enrichment-create) command to add message enrichments to the **ContosoStorageEndpointEnriched** endpoint that you created earlier.
+
+```azurecli-interactive
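+# This stamps the IoT hub name (expanded from the $hubName shell variable) onto each message.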
+az iot hub message-enrichment create \
+ --key myIotHub \
+ --value $hubName \
+ --endpoints ContosoStorageEndpointEnriched \
+ --name $hubName
+
+# This assumes that the device twin has a location tag.
+az iot hub message-enrichment create \
+ --key DeviceLocation \
+ --value '$twin.tags.location' \
+ --endpoints ContosoStorageEndpointEnriched \
+ --name $hubName
+
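+# This stamps a static customer ID onto each message.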
+az iot hub message-enrichment create \
+ --key customerID \
+ --value 6ce345b8-1e4a-411e-9398-d34587459a3a \
+ --endpoints ContosoStorageEndpointEnriched \
+ --name $hubName
+```
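+
+Optionally, verify all three enrichments with the [az iot hub message-enrichment list](/cli/azure/iot/hub/message-enrichment#az-iot-hub-message-enrichment-list) command. This verification step is an optional addition to the tutorial flow:
+
+```azurecli-interactive
+az iot hub message-enrichment list --name $hubName --output table
+```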
+++
+You now have message enrichments set up for all messages routed to the endpoint you created for enriched messages. If you don't want to add a location tag to the device twin, you can skip to the [Test message enrichments](#test-message-enrichments) section to continue the tutorial.
## Add location tag to the device twin
-One of the message enrichments configured on your IoT hub specifies a key of DeviceLocation with its value determined by the following device twin path: `$twin.tags.location`. If your device twin doesn't have a location tag, the twin path, `$twin.tags.location`, will be stamped as a string for the DeviceLocation value in the message enrichments.
+One of the message enrichments configured on your IoT hub specifies a key of **DeviceLocation** with its value determined by the following device twin path: `$twin.tags.location`. If your device twin doesn't have a location tag, the twin path, `$twin.tags.location`, will be stamped as a string for the **DeviceLocation** key in the message enrichments.
+
+Follow these steps to add a location tag to your device's twin:
-Follow these steps to add a location tag to your device's twin with the portal.
+# [Azure portal](#tab/portal)
1. Navigate to your IoT hub in the Azure portal.
Follow these steps to add a location tag to your device's twin with the portal.
    ```json
    , "tags": {"location": "Plant 43"}
    ```
-
- :::image type="content" source="./media/tutorial-message-enrichments/add-location-tag-to-device-twin.png" alt-text="Screenshot of adding location tag to device twin in Azure portal":::
-1. Wait about five minutes before continuing to the next section. It can take up to that long for updates to the device twin to be reflected in message enrichment values.
+ :::image type="content" source="./media/tutorial-message-enrichments/add-location-tag-to-device-twin.png" alt-text="Screenshot of adding location tag to device twin in Azure portal.":::
-To learn more about how device twin paths are handled with message enrichments, see [Message enrichments limitations](iot-hub-message-enrichments-overview.md#limitations). To learn more about device twins, see [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md).
+# [Azure CLI](#tab/cli)
-## Test message enrichments
+Use the [az iot hub device-twin update](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-update) command to update the device twin with a new tag key and value.
-To view the message enrichments, select **Resource groups**. Then select the resource group you're using for this tutorial. Select the IoT hub from the list of resources, and go to **Messaging**. The message routing configuration and the configured enrichments appear.
+```azurecli-interactive
+az iot hub device-twin update \
+ --hub-name $hubName \
+ --device-id $deviceId \
+ --tags '{"location": "Plant 43"}'
+```
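+
+Optionally, confirm that the tag was applied with the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command; this check is an optional addition to the tutorial flow:
+
+```azurecli-interactive
+az iot hub device-twin show --hub-name $hubName --device-id $deviceId --query tags
+```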
-Now that the message enrichments are configured for the **enriched** endpoint, run the simulated device application to send messages to the IoT hub. The hub was set up with settings that accomplish the following tasks:
+
-* Messages routed to the storage endpoint ContosoStorageEndpointOriginal won't be enriched and will be stored in the storage container **original**.
+> [!TIP]
+> Wait about five minutes before continuing to the next section. It can take up to that long for updates to the device twin to be reflected in message enrichment values.
-* Messages routed to the storage endpoint ContosoStorageEndpointEnriched will be enriched and stored in the storage container **enriched**.
+To learn more about how device twin paths are handled with message enrichments, see [Message enrichments limitations](iot-hub-message-enrichments-overview.md#limitations). To learn more about device twins, see [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md).
-The simulated device application is one of the applications in the azure-iot-samples-csharp repository. The application sends messages with a randomized value for the property `level`. Only messages that have `storage` set as the message's level property will be routed to the two endpoints.
+## Test message enrichments
-1. Open the file **Program.cs** from the **SimulatedDevice** directory in your preferred code editor.
+Now that the message enrichments are configured for the **ContosoStorageEndpointEnriched** endpoint, run the simulated device application to send messages to the IoT hub. At this point, message routing has been set up as follows:
-1. Replace the placeholder text with your own resource information. Substitute the IoT hub name for the marker `{your hub name}`. The format of the IoT hub host name is **{your hub name}.azure-devices.net**. Next, substitute the device key you saved earlier when you ran the script to create the resources for the marker `{your device key}`.
+* Messages routed to the [storage endpoint you created](tutorial-routing.md#route-to-a-storage-account) in the first part of the tutorial won't be enriched and will be stored in the storage container you created then.
- If you don't have the device key, you can retrieve it from the portal. After you sign in, go to **Resource groups**, select your resource group, and then select your IoT hub. Look under **IoT Devices** for your test device, and select your device. Select the copy icon next to **Primary key** to copy it to the clipboard.
+* Messages routed to the storage endpoint **ContosoStorageEndpointEnriched** will be enriched and stored in the storage container **enriched**.
- ```csharp
- private readonly static string s_myDeviceId = "Contoso-Test-Device";
- private readonly static string s_iotHubUri = "{your hub name}.azure-devices.net";
- // This is the primary key for the device. This is in the portal.
- // Find your IoT hub in the portal > IoT devices > select your device > copy the key.
- private readonly static string s_deviceKey = "{your device key}";
- ```
+If you aren't still running the SimulatedDevice console application from the first part of this tutorial, run it again:
-### Run and test
+> [!TIP]
+> If you're following the Azure CLI steps for this tutorial, run the sample code in a separate session. That way, you can allow the sample code to continue running while you follow the rest of the CLI steps.
-Run the console application for a few minutes.
+1. In the sample folder, navigate to the `/iot-hub/Tutorials/Routing/SimulatedDevice/` folder.
-In a command line window, you can run the sample with the following commands executed at the **SimulatedDevice** directory level:
+1. The variable definitions you updated before should still be valid. If they aren't, edit them in the `Program.cs` file:
-```console
-dotnet restore
-dotnet run
-```
+ 1. Find the variable definitions at the top of the **Program** class. Update the following variables with your own information:
+
+ * **s_myDeviceId**: The device ID that you assigned when registering the device to your IoT hub.
+ * **s_iotHubUri**: The hostname of your IoT hub, which takes the format `IOTHUB_NAME.azure-devices.net`.
+ * **s_deviceKey**: The device primary key found in the device identity information.
+
+ 1. Save and close the file.
-The app sends a new device-to-cloud message to the IoT hub every second. The messages that are being sent are displayed on the console screen of the application. The message contains a JSON-serialized object with the device ID, temperature, humidity, and message level, which defaults to `normal`. The sample program randomly changes the message level to either `critical` or `storage`. Messages labeled for storage are routed to the storage account, and the rest go to the default endpoint. The messages sent to the **enriched** container in the storage account will be enriched.
+1. Run the sample code:
-After several storage messages are sent, view the data.
+ ```console
+ dotnet run
+ ```
+
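+While the app runs, you can optionally watch telemetry arrive by running the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command in another session. This optional check only confirms that the device is sending messages; enrichments are applied per endpoint, so they won't appear in this built-in event view, and messages claimed by a custom route may not show up here at all:
+
+```azurecli-interactive
+az iot hub monitor-events --hub-name $hubName --device-id $deviceId
+```
+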
+After letting the console application run for a few minutes, view the data:
-1. Select **Resource groups**. Find your resource group, **ContosoResourcesMsgEn**, and select it.
+1. In the [Azure portal](https://portal.azure.com), navigate to your storage account.
-2. Select your storage account, which begins with **contosostorage**. Then select **Storage browser (preview)** from the navigation menu. Select **Blob containers** to see the two containers that you created.
+1. Select **Storage browser** from the navigation menu. Select **Blob containers** to see the two containers that you created over the course of these tutorials.
- :::image type="content" source="./media/tutorial-message-enrichments/show-blob-containers.png" alt-text="See the containers in the storage account.":::
+ :::image type="content" source="./media/tutorial-message-enrichments/show-blob-containers.png" alt-text="Screenshot showing the blob containers in the storage account.":::
-The messages in the container called **enriched** have the message enrichments included in the messages. The messages in the container called **original** have the raw messages with no enrichments. Drill down into one of the containers until you get to the bottom, and open the most recent message file. Then do the same for the other container to verify that the one is enriched and one isn't.
+The messages in the container called **enriched** have the message enrichments included in the messages. The messages in the container you created earlier have the raw messages with no enrichments. Drill down into the **enriched** container until you get to the bottom and then open the most recent message file. Then do the same for the other container to verify that one is enriched and one isn't.
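+
+If you're following the Azure CLI steps, you can list the blobs from the command line instead of using the Storage browser. This optional check uses the variables defined earlier:
+
+```azurecli-interactive
+az storage blob list --auth-mode login --account-name $storageName --container-name $containerName --output table
+```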
-When you look at messages that have been enriched, you should see "my IoT Hub" with the hub name and the location and the customer ID, like this:
+When you look at messages that have been enriched, you should see `"myIotHub"` with the hub name, the location, and the customer ID, like this:
```json {
When you look at messages that have been enriched, you should see "my IoT Hub" w
"Properties": { "level":"storage",
- "myIotHub":"contosotesthubmsgen3276",
+ "myIotHub":"{your hub name}",
"DeviceLocation":"Plant 43", "customerID":"6ce345b8-1e4a-411e-9398-d34587459a3a" },
When you look at messages that have been enriched, you should see "my IoT Hub" w
} ```
-Here's an unenriched message. Notice that `my IoT Hub,` `devicelocation,` and `customerID` don't show up here because these fields are added by the enrichments. This endpoint has no enrichments.
+## Clean up resources
-```json
-{
- "EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z",
- "Properties":
- {
- "level":"storage"
- },
- "SystemProperties":
- {
- "connectionDeviceId":"Contoso-Test-Device",
- "connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
- "connectionDeviceGenerationId":"636930642531278483",
- "enqueuedTime":"2019-05-10T06:06:32.7220000Z"
- },"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"
-}
-```
+To remove all of the resources you created in both parts of this tutorial, delete the resource group. This action deletes all resources contained within the group. If you don't want to delete the entire resource group, you can select individual resources within it to delete.
-## Clean up resources
+# [Azure portal](#tab/portal)
-To remove all of the resources you created in this tutorial, delete the resource group. This action deletes all resources contained within the group. In this case, it removes the IoT hub, the storage account, and the resource group itself.
+1. In the Azure portal, navigate to the resource group that contains the IoT hub and storage account for this tutorial.
+1. Review all the resources that are in the resource group to determine which ones you want to clean up.
+    * If you want to delete all the resources, select **Delete resource group**.
+    * If you only want to delete certain resources, use the check boxes next to each resource name to select the ones you want to delete. Then select **Delete**.
-### Use the Azure CLI to clean up resources
+# [Azure CLI](#tab/cli)
-To remove the resource group, use the [az group delete](/cli/azure/group#az-group-delete) command. Recall that `$resourceGroup` was set to **ContosoResourcesMsgEn** at the beginning of this tutorial.
+1. Use the [az resource list](/cli/azure/resource#az-resource-list) command to view all the resources in your resource group.
-```azurecli-interactive
-az group delete --name $resourceGroup
-```
+ ```azurecli-interactive
+ az resource list --resource-group $resourceGroup --output table
+ ```
+
+1. Review all the resources that are in the resource group to determine which ones you want to clean up.
+
+ * If you want to delete all the resources, use the [az group delete](/cli/azure/group#az-group-delete) command.
+
+ ```azurecli-interactive
+ az group delete --name $resourceGroup
+ ```
+
+ * If you only want to delete certain resources, use the [az resource delete](/cli/azure/resource#az-resource-delete) command. For example:
+
+ ```azurecli-interactive
+ az resource delete --resource-group $resourceGroup --name $storageName
+ ```
++ ## Next steps
-In this tutorial, you configured and tested message enrichments for IoT Hub messages as they are routed to an endpoint.
+In this tutorial, you configured and tested message enrichments for IoT Hub messages as they're routed to an endpoint.
For more information about message enrichments, see [Overview of message enrichments](iot-hub-message-enrichments-overview.md).
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
Register a new device in your IoT hub.
1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command in your CLI shell. This command creates the device identity. ```azurecli-interactive
- az iot hub device-identity create --device-id $deviceName --hub-name $hubName
+ az iot hub device-identity create --device-id $deviceName --hub-name $hubName
``` 1. From the device-identity output, copy the **primaryKey** value without the surrounding quotation marks and save it. You'll use this value to configure the sample code that generates simulated device telemetry messages.
Now set up the routing for the storage account. In this section you define a new
| **Routing query** | Enter `level="storage"` as the query string. | ![Save the routing query information](./media/tutorial-routing/create-storage-route.png)
-
+ 1. Select **Save**. # [Azure CLI](#tab/cli)
Verify that the messages are arriving in the storage container.
If you want to remove all of the Azure resources you used for this tutorial, delete the resource group. This action deletes all resources contained within the group. If you don't want to delete the entire resource group, use the Azure portal to locate and delete the individual resources.
+>[!TIP]
+>If you intend to complete [Tutorial: Use Azure IoT Hub message enrichments](tutorial-message-enrichments.md), be sure to keep the resources you created here.
+ # [Azure portal](#tab/portal) 1. In the Azure portal, navigate to the resource group that contains the IoT hub and storage account for this tutorial.
role-based-access-control Change History Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/change-history-report.md
AzureActivity
![Activity logs using the Advanced Analytics portal - screenshot](./media/change-history-report/azure-log-analytics.png) ## Next steps+
+* [Alert on privileged Azure role assignments](role-assignments-alert.md)
* [View activity logs to monitor actions on resources](../azure-monitor/essentials/activity-log.md) * [Monitor subscription activity with the Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md)
role-based-access-control Role Assignments Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-alert.md
+
+ Title: Alert on privileged Azure role assignments
+description: Alert on privileged Azure role assignments by creating an alert rule using Azure Monitor.
++++++ Last updated : 07/29/2022+++
+# Alert on privileged Azure role assignments
+
+Privileged Azure roles, such as Contributor, Owner, or User Access Administrator, are powerful roles and may introduce risk into your system. You might want to be notified by email or text message when these or other roles are assigned. This article describes how to get notified of privileged role assignments at a subscription scope by creating an alert rule using Azure Monitor.
+
+## Prerequisites
+
+To create an alert rule, you must have:
+
+- Access to an Azure subscription
+- Permission to create resource groups and resources within the subscription
+- [Log Analytics configured](../azure-monitor/logs/quick-create-workspace.md) so it has access to the AzureActivity table
+
+## Estimate costs before using Azure Monitor
+
+There's a cost associated with using Azure Monitor and alert rules. The cost is based on the frequency the query is executed and the notifications selected. For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Create an alert rule
+
+To get notified of privileged role assignments, you create an alert rule in Azure Monitor.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Monitor**.
+
+1. In the left navigation, click **Alerts**.
+
+1. Click **Create** > **Alert rule**. The **Create an alert rule** page opens.
+
+1. On the **Scope** tab, select your subscription.
+
+1. On the **Condition** tab, select the **Custom log search** signal name.
+
+1. In the **Log query** box, add the following Kusto query that will run on the subscription's log and trigger the alert.
+
+ This query filters for attempts to assign the [Contributor](built-in-roles.md#contributor), [Owner](built-in-roles.md#owner), or [User Access Administrator](built-in-roles.md#user-access-administrator) roles at the scope of the selected subscription.
+
+ ```kusto
+ AzureActivity
+ | where CategoryValue == "Administrative" and
+ OperationNameValue == "Microsoft.Authorization/roleAssignments/write" and
+ (ActivityStatusValue == "Start" or ActivityStatus == "Started")
+ | extend RoleDefinition = extractjson("$.Properties.RoleDefinitionId",tostring(Properties_d.requestbody),typeof(string))
+ | extend PrincipalId = extractjson("$.Properties.PrincipalId",tostring(Properties_d.requestbody),typeof(string))
+ | extend PrincipalType = extractjson("$.Properties.PrincipalType",tostring(Properties_d.requestbody),typeof(string))
+ | extend Scope = extractjson("$.Properties.Scope",tostring(Properties_d.requestbody),typeof(string))
+ | where Scope !contains "resourcegroups"
+ | extend RoleId = split(RoleDefinition,'/')[-1]
+ | extend RoleDisplayName = case(
+ RoleId == 'b24988ac-6180-42a0-ab88-20f7382dd24c', "Contributor",
+ RoleId == '8e3af657-a8ff-443c-a75c-2fe8c4bcb635', "Owner",
+ RoleId == '18d7d88d-d35e-4fb5-a5c3-7773c20a72d9', "User Access Administrator",
+ "Irrelevant")
+ | where RoleDisplayName != "Irrelevant"
+ | project TimeGenerated,Scope, PrincipalId,PrincipalType,RoleDisplayName
+ ```
+
+ :::image type="content" source="./media/role-assignments-alert/alert-rule-condition.png" alt-text="Screenshot of Create an alert rule condition tab in Azure Monitor." lightbox="./media/role-assignments-alert/alert-rule-condition.png":::
+
+1. In the **Measurement** section, set the following values:
+
+ - **Measure**: Table rows
+ - **Aggregation type**: Count
+ - **Aggregation granularity**: 5 minutes
+
+   For **Aggregation granularity**, you can change the default value to the frequency you want.
+
+1. In the **Split by dimensions** section, set **Resource ID column** to **Don't split**.
+
+1. In the **Alert logic** section, set the following values:
+
+ - **Operator**: Greater than
+ - **Threshold value**: 0
+ - **Frequency of evaluation**: 5 minutes
+
+   For **Frequency of evaluation**, you can change the default value to the frequency you want.
+
+1. On the **Actions** tab, create an action group or select an existing action group.
+
+ An action group defines the actions and notifications that are executed when the alert is triggered.
+
+ When you create an action group, you must specify the resource group to put the action group within. Then, select the notifications (Email/SMS message/Push/Voice action) to invoke when the alert rule triggers. You can skip the **Actions** and **Tag** tabs. For more information, see [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md).
+
+1. On the **Details** tab, select the resource group to save the alert rule.
+
+1. In the **Alert rule details** section, select a **Severity** and specify an **Alert rule name**.
+
+1. For **Region**, you can select any region since Azure activity logs are global.
+
+1. Skip the **Tags** tab.
+
+1. On the **Review + create** tab, click **Create** to create your alert rule.
+
+## Test the alert rule
+
+Once you've created an alert rule, you can test that it fires.
+
+1. Assign the Contributor, Owner, or User Access Administrator role at subscription scope. For more information, see [Assign Azure roles using the Azure portal](role-assignments-portal.md).
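+
+   If you prefer the command line, you can create the test role assignment with the Azure CLI instead. This is a minimal sketch; the assignee object ID and subscription ID are placeholders to replace with your own values:
+
+   ```azurecli
+   az role assignment create \
+       --assignee "<user-or-group-object-id>" \
+       --role "Contributor" \
+       --scope "/subscriptions/<subscription-id>"
+   ```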
+
+1. Wait a few minutes to receive the alert based on the aggregation granularity and the frequency of evaluation of the log query.
+
+1. On the **Alerts** page, monitor for the alert you specified in the action group.
+
+ :::image type="content" source="./media/role-assignments-alert/alert-fired.png" alt-text="Screenshot of the Alerts page showing that role assignment alert fired." lightbox="./media/role-assignments-alert/alert-fired.png":::
+
+ The following image shows an example of the email alert.
+
+ :::image type="content" source="./media/role-assignments-alert/alert-email.png" alt-text="Screenshot of an email alert for a role assignment." lightbox="./media/role-assignments-alert/alert-email.png":::
+
+## Delete the alert rule
+
+Follow these steps to delete the role assignment alert rule and avoid incurring additional costs.
+
+1. In **Monitor**, navigate to **Alerts**.
+
+1. On the command bar at the top of the page, click **Alert rules**.
+
+1. Add a checkmark next to the alert rule you want to delete.
+
+1. Click **Delete** to remove the alert.
+
+## Next steps
+
+- [Create, view, and manage activity log alerts by using Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
+- [View activity logs for Azure RBAC changes](change-history-report.md)
role-based-access-control Role Assignments Portal Subscription Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal-subscription-admin.md
The [Owner](built-in-roles.md#owner) role grants full access to manage all resour
## Next steps - [Assign Azure roles using the Azure portal](role-assignments-portal.md)-- [List Azure role assignments using the Azure portal](role-assignments-list-portal.md) - [Organize your resources with Azure management groups](../governance/management-groups/overview.md)
+- [Alert on privileged Azure role assignments](role-assignments-alert.md)
service-bus-messaging Service Bus Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-troubleshooting-guide.md
Title: Troubleshooting guide for Azure Service Bus | Microsoft Docs description: Learn about troubleshooting tips and recommendations for a few issues that you may see when using Azure Service Bus. Previously updated : 06/17/2022 Last updated : 07/29/2022
To learn how to assign permissions to roles, see [Authenticate a managed identit
## Service Bus Exception: Put token failed ### Symptoms
-When you try to send more than 1000 messages using the same Service Bus connection, you'll receive the following error message:
+You'll receive the following error message:
`Microsoft.Azure.ServiceBus.ServiceBusException: Put token failed. status-code: 403, status-description: The maximum number of '1000' tokens per connection has been reached.` ### Cause
-There's a limit on number of tokens that are used to send and receive messages using a single connection to a Service Bus namespace. It's 1000.
+The number of authentication tokens for concurrent links in a single connection to a Service Bus namespace has exceeded the limit of 1,000.
### Resolution
-Open a new connection to the Service Bus namespace to send more messages.
+Take one of the following steps:
+
+- Reduce the number of concurrent links in a single connection, or use a new connection.
+- Use the SDKs for Azure Service Bus, which ensure that you don't get into this situation (recommended).
+ ## Adding virtual network rule using PowerShell fails
service-connector Quickstart Portal App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-app-service-connection.md
Title: Quickstart - Create a service connection in App Service from the Azure portal description: Quickstart showing how to create a service connection in App Service from the Azure portal--++ Previously updated : 05/03/2022 Last updated : 07/18/2022 #Customer intent: As an app developer, I want to connect several services together so that I can ensure I have the right connectivity to access my Azure resources.
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Create a new service connection in App Service
-You'll use Service Connector to create a new service connection in App Service.
+1. To create a new service connection in App Service, select the **Search resources, services and docs (G +/)** search bar at the top of the Azure portal, type ***App Services***, and select **App Services**.
-1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use in the list.
-2. Select **Service Connector** from the left table of contents. Then select **Create**.
-3. Select or enter the following settings.
+ :::image type="content" source="./media/app-service-quickstart/select-app-services.png" alt-text="Screenshot of the Azure portal, selecting App Services.":::
- | Setting | Suggested value | Description |
- | | - | -- |
- | **Service type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
- | **Subscription** | One of your subscriptions | The subscription where your target service (the service you want to connect to) is located. The default value is the subscription that this App Service is in. |
- | **Connection name** | Generated unique name | The connection name that identifies the connection between your App Service and target service |
- | **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
- | **Client type** | The same app stack on this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
+1. Select the Azure App Services resource you want to connect to a target resource.
+1. Select **Service Connector** from the left table of contents. Then select **Create**.
-4. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use access key to connect your Blob Storage account.
+ :::image type="content" source="./media/app-service-quickstart/select-service-connector.png" alt-text="Screenshot of the Azure portal, selecting Service Connector and creating new connection.":::
-5. Then select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. It might take 1 minute to complete the operation.
+1. Select or enter the following settings.
+
+ | Setting | Example | Description |
+ ||-|-|
+ | **Service type** | Storage - Blob | The target service type. If you don't have a Microsoft Blob Storage, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
+ | **Subscription** | My subscription | The subscription for your target service (the service you want to connect to). The default value is the subscription for this App Service resource. |
+ | **Connection name** | *my_connection* | The connection name that identifies the connection between your App Service and target service. Use the connection name provided by Service Connector or choose your own connection name. |
+ | **Storage account** | *my_storage_account* | The target storage account you want to connect to. Target service instances to choose from vary according to the selected service type. |
+ | **Client type** | The same app stack on this App Service | The default value comes from the App Service runtime stack. Select the app stack that's on this App Service instance. |
+
+ :::image type="content" source="./media/app-service-quickstart/basics-tab.png" alt-text="Screenshot of the Azure portal, filling out the Basics tab.":::
+
+1. Select **Next: Authentication** to choose an authentication method.
+
+ ### [System-assigned managed identity](#tab/SMI)
+
+ System-assigned managed identity is the recommended authentication option. Select **System-assigned managed identity** to connect through an identity that's generated in Azure Active Directory and tied to the lifecycle of the service instance.
+
+ ### [User-assigned managed identity](#tab/UMI)
+
+ Select **User-assigned managed identity** to authenticate through a standalone identity assigned to one or more instances of an Azure service.
+
+ ### [Connection string](#tab/CS)
+
+ Select **Connection string** to generate or configure one or multiple key-value pairs with pure secrets or tokens.
+
+ ### [Service principal](#tab/SP)
+
+ Select **Service principal** to use a service principal that defines the access policy and permissions for the user/application in Azure Active Directory.
+
+1. Select **Next: Networking** to configure the network access to your target service and select **Configure firewall rules to enable access to your target service**.
+
+1. Select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. This operation may take a minute to complete.
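+
+Service Connector can also be driven from the Azure CLI. As a rough equivalent of the portal steps above (a sketch, not part of this quickstart; the resource names shown are placeholders), a Blob Storage connection using a system-assigned managed identity could be created with the [az webapp connection create](/cli/azure/webapp/connection/create) command:
+
+```azurecli
+az webapp connection create storage-blob \
+    --resource-group <app-resource-group> \
+    --name <app-name> \
+    --target-resource-group <storage-resource-group> \
+    --account <storage-account-name> \
+    --system-identity
+```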
## View service connections in App Service
-1. In **Service Connector**, you see an App Service connection to the target service.
+1. The **Service Connector** tab displays existing App Service connections.
-1. Select the **>** button to expand the list. You can see the environment variables required by your application code.
+1. Select the **>** button to expand the list and see the environment variables required by your application code. Select **Hidden value** to view the hidden value.
-1. Select the **...** button and select **Validate**. You can see the connection validation details in the pop-up panel on the right.
+ :::image type="content" source="./media/app-service-quickstart/show-values.png" alt-text="Screenshot of the Azure portal, viewing connection details.":::
+
+1. Select **Validate** to check your connection. You can see the connection validation details in the panel on the right.
+
+ :::image type="content" source="./media/app-service-quickstart/validation.png" alt-text="Screenshot of the Azure portal, validating the connection.":::
## Next steps Follow the tutorials listed below to start building your own application with Service Connector. > [!div class="nextstepaction"]
-> - [Tutorial: WebApp + Storage with Azure CLI](./tutorial-csharp-webapp-storage-cli.md)
-> - [Tutorial: WebApp + PostgreSQL with Azure CLI](./tutorial-django-webapp-postgres-cli.md)
+> [Tutorial: WebApp + Storage with Azure CLI](./tutorial-csharp-webapp-storage-cli.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: WebApp + PostgreSQL with Azure CLI](./tutorial-django-webapp-postgres-cli.md)
service-fabric Service Fabric Diagnostics Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-common-scenarios.md
Check these links for the full list of performance counters on Reliable [Service
* [Look Up Common Code Package Activation Errors](./service-fabric-diagnostics-code-package-errors.md) * [Set up Alerts in AI](../azure-monitor/alerts/alerts-log.md) to be notified about changes in performance or usage
-* [Smart Detection in Application Insights](../azure-monitor/app/proactive-diagnostics.md) performs a proactive analysis of the telemetry being sent to AI to warn you of potential performance problems
+* [Smart Detection in Application Insights](../azure-monitor/alerts/proactive-diagnostics.md) performs a proactive analysis of the telemetry being sent to AI to warn you of potential performance problems
* Learn more about Azure Monitor logs [alerting](../azure-monitor/alerts/alerts-overview.md) to aid in detection and diagnostics. * For on-premises clusters, Azure Monitor logs offers a gateway (HTTP Forward Proxy) that can be used to send data to Azure Monitor logs. Read more about that in [Connecting computers without Internet access to Azure Monitor logs using the Log Analytics gateway](../azure-monitor/agents/gateway.md) * Get familiarized with the [log search and querying](../azure-monitor/logs/log-query-overview.md) features offered as part of Azure Monitor logs
service-fabric Service Fabric Diagnostics Event Analysis Appinsights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-analysis-appinsights.md
Clicking **Analytics** will take you to the Application Insights Analytics porta
## Next steps * [Set up Alerts in AI](../azure-monitor/alerts/alerts-log.md) to be notified about changes in performance or usage
-* [Smart Detection in Application Insights](../azure-monitor/app/proactive-diagnostics.md) performs a proactive analysis of the telemetry being sent to Application Insights to warn you of potential performance problems
+* [Smart Detection in Application Insights](../azure-monitor/alerts/proactive-diagnostics.md) performs a proactive analysis of the telemetry being sent to Application Insights to warn you of potential performance problems
virtual-desktop App Attach File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-file-share.md
Title: Azure Virtual Desktop set up file share MSIX app attach - Azure
+ Title: Azure Virtual Desktop setup file share MSIX app attach - Azure
description: How to set up a file share for MSIX app attach for Azure Virtual Desktop.
MSIX app attach doesn't have any dependencies on the type of storage fabric the
## Performance requirements
-MSIX app attach image size limits for your system depend on the storage type you're using to store the VHD or VHDx files, as well as the size limitations of the VHD, VHDX or CIM files and the file system.
+MSIX app attach image size limits for your system depend on the storage type you're using to store the VHD or VHDX files, as well as the size limitations of the VHD, VHDX or CIM files and the file system.
The following table gives an example of how many resources a single 1 GB MSIX image with one MSIX app inside of it requires for each VM:
You'll also need to make sure your session host VMs have New Technology File Sys
## Next steps
-Here are the other things you'll need to do after you've set up the file share:
--- Learn how to set up Azure Active Directory Domain Services (AD DS) at [Create a profile container with Azure Files and AD DS](create-file-share.md).-- Learn how to set up Azure Files and Azure AD DS at [Create a profile container with Azure Files and Azure AD DS](create-profile-container-adds.md).-- Learn how to set up Azure NetApp Files for MSIX app attach at [Create a profile container with Azure NetApp Files and AD DS](create-fslogix-profile-container.md).-- Learn how to use a virtual machine-based file share at [Create a profile container for a host pool using a file share](create-host-pools-user-profile.md).- Once you're finished, here are some other resources you might find helpful: - Ask our community questions about this feature at the [Azure Virtual Desktop TechCommunity](https://techcommunity.microsoft.com/t5/Windows-Virtual-Desktop/bd-p/WindowsVirtualDesktop).
virtual-desktop Azure Files Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-files-authorization.md
- Title: How to authorize an Azure Virtual Desktop host pool for Azure Files - Azure
-description: How to authorize an Azure Virtual Desktop host pool to use Azure Files.
-- Previously updated : 08/19/2021----
-# Authorize an account for Azure Files
-
-This article will show you how to authorize an Azure Virtual Desktop host pool to use Azure Files.
-
-## Requirements
-
-Before you get started, you'll need the following things:
--- An Active Directory Domain Services (AD DS) account synced to Azure Active Directory (Azure AD)-- Permissions to create a group in AD DS-- A storage account and the permissions needed to create a new storage account, if necessary-- A virtual machine (VM) or physical machine joined to AD DS that you have permission to access-- An Azure Virtual Desktop host pool in which all session hosts have been domain joined-
-## Create a security group in Active Directory Domain Services
-
-First, you'll need to create a security group in AD DS. This security group will be used in later steps to grant share-level and New Technology File System (NTFS) file share permissions.
-
->[!NOTE]
->If you have an existing security group you'd prefer to use, select the name of that group instead of creating a new group.
-
-To create a security group:
-
-1. Open a remote session with the VM or physical machine joined to AD DS that you want to add to the security group.
-
-2. Open **Active Directory Users and Computers**.
-
-3. Under the domain node, right-click the name of your machine. In the drop-down menu, select **New** > **Group**.
-
-4. In the **New Object ΓÇô Group** window, enter the name of the new group, then select the following values:
-
- - For **Group scope**, select **Global**
- - For **Group type**, select **Security**
-
-5. Right-click on the new group and select **Properties**.
-
-6. In the **Properties** window, select the **Members** tab.
-
-7. Select **Add…**.
-
-8. In the **Select Users, Contacts, Computers, Service Accounts, or Groups** window, select **Object Types…** > **Computers**. When you're finished, select **OK**.
-
-9. In the **Enter the object names to select** window, enter the names of all session hosts you want to include in the security group.
-
-10. Select **Check Names**, then select the name of the session host you want to use from the list that appears.
-
-11. Select **OK**, then select **Apply**.
-
->[!NOTE]
->New security groups may take up to 1 hour to sync with Azure AD.
-
-## Create a storage account
-
-If you haven't created a storage account already, follow the directions in [Create a storage account](../storage/common/storage-account-create.md) first. When you create a new storage account, make sure to also create a new file share.
-
->[!NOTE]
->If you're creating a **Premium** storage account make sure **Account Kind** is set to **FileStorage**.
-
-## Get RBAC permissions
-
-To get RBAC permissions:
-
-1. Select the storage account you want to use.
-
-2. Select **Access Control (IAM)**, then select **Add**. Next, select **Add role assignments** from the drop-down menu.
-
-3. In the **Add role assignment** screen, select the following values:
-
- - For **Role**, select **Storage File Data SMB Share Contributor**.
- - For **Assign access to**, select **User, Group, or Service Principal**.
- - For **Subscription**, select **Based on your environment**.
- - For **Select**, select the name of the Active Directory group that contains your session hosts.
-
-4. Select **Save**.
-
-## Join your storage account to AD DS
-
-Next, you'll need to join storage account to AD DS. To join your account to AD DS:
-
-1. Open a remote session in a VM or physical machine joined to AD DS.
-
- >[!NOTE]
- > Run the script using an on-premises AD DS credential that is synced to your Azure AD. The on-premises AD DS credential must have either storage account owner or contributor Azure role permissions.
-
-2. Download and unzip [the latest version on AzFilesHybrid](https://github.com/Azure-Samples/azure-files-samples/releases).
-
-3. Open **PowerShell** in elevated mode.
-
-4. Run the following cmdlet to set the execution policy:
-
- ```powershell
- Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser
- ```
-
-5. Next, go to the folder where you unzipped AzfileHybrid and run this command:
-
- ```powershell
- .\\CopyToPSPath.ps1
- ```
-
-6. After that, import the AzFilesHybrid module by running this cmdlet:
-
- ```powershell
- Import-Module -Name AzFilesHybrid
- ```
-
-7. Next, run this cmdlet to connect to Azure AD:
-
- ```powershell
- Connect-AzAccount
- ```
-
-8. Set the following parameters, making sure to replace the placeholders with the values relevant to your scenario:
-
- ```powershell
- $SubscriptionId = "<your-subscription-id-here>"
-
- $ResourceGroupName = "<resource-group-name-here>"
-
- $StorageAccountName = "<storage-account-name-here>"
- ```
-
-9. Finally, run this command:
-
- ```powershell
- Join-AzStorageAccountForAuth `
-
- -ResourceGroupName $ResourceGroupName `
-
- -StorageAccountName $StorageAccountName `
-
- -DomainAccountType "ComputerAccount" `
-
- -OrganizationalUnitDistinguishedName "<ou-here>" `
-
- -EncryptionType "'RC4','AES256'"
- ```
-
-## Get NTFS-level permissions
-
-In order to authenticate with AD DS computer accounts against an Azure Files storage account, we must also assign NTFS-level permissions in addition to the RBAC permission we set up earlier.
-
-To assign NTFS permissions:
-
-1. Open the Azure portal and navigate to the storage account that we added to AD DS.
-
-2. Select **Access keys** and copy the value in the **Key1** field.
-
-3. Start a remote session in the VM or physical machine joined to AD DS.
-
-4. Open a command prompt in elevated mode.
-
-5. Run the following command, with the placeholders replaced with the values relevant to your deployment:
-
- ```cmd
- net use <desired-drive-letter>:
- \\<storage-account-name>.file.core.windows.net\<share-name>
- /user:Azure\<storage-account-name> <storage-account-key>
- ```
-
- >[!NOTE]
- >When you run this command, the output should say "The command completed successfully." If not, check your input and try again.
-
-6. Open **File Explorer** and find the drive letter you used in the command in step 5.
-
-7. Right-click the drive letter, then select **Properties** > **Security** from the drop-down menu.
-
-8. Select **Edit**, then select **Add…**.
-
- >[!NOTE]
-    >Make sure the domain name matches your AD DS domain name. If it doesn't, the storage account hasn't been domain joined. You'll need to use a domain-joined account in order to continue.
-
-9. If prompted, enter your admin credentials.
-
-10. In the **Select Users, Computers, Service Accounts, or Groups** window, enter the name of the group from [Create a security group in Active Directory Domain Services](#create-a-security-group-in-active-directory-domain-services).
-
-11. Select **OK**. After that, confirm the group has the **Read & execute** permission. If the group has permissions, the "Allow" check box should be selected, as shown in the following image:
-
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the Security window. Under a list marked "Permissions," the "Read & execute" permission has a green check mark under the "Allow" column.](media/read-and-execute.png)
-
-12. Add the Active Directory group with the computer accounts with **Read & execute** permissions to the security group.
-
-13. Select **Apply**. If you see a Windows Security prompt, select **Yes** to confirm your changes.
-
-## Next steps
-
-If you run into any issues after setup, check out our [Azure Files troubleshooting article](troubleshoot-authorization.md).
virtual-desktop Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-file-share.md
- Title: Create an Azure Files file share with a domain controller - Azure
-description: Set up an FSLogix profile container on an Azure file share in an existing Azure Virtual Desktop host pool with your Active Directory domain.
-- Previously updated : 12/08/2021----
-# Create a profile container with Azure Files and AD DS
-
-In this article, you'll learn how to create an Azure file share authenticated by a domain controller on an existing Azure Virtual Desktop host pool. You can use this file share to store storage profiles.
-
-This process uses Active Directory Domain Services (AD DS), which is an on-premises directory service. If you're looking for information about how to create an FSLogix profile container with Azure AD DS, see [Create an FSLogix profile container with Azure Files](create-profile-container-adds.md).
-
-## Prerequisites
-
-Before you get started, make sure your domain controller is synchronized to Azure and resolvable from the Azure virtual network (VNET) your session hosts are connected to.
-
-## Set up a storage account
-
-First, you'll need to set up an Azure Files storage account.
-
-To set up a storage account:
-
-1. Sign in to the Azure portal.
-
-2. Search for **storage account** in the search bar.
-
-3. Select **+Add**.
-
-4. Enter the following information into the **Create storage account** page:
-
- - Create a new resource group.
- - Enter a unique name for your storage account. This storage account name currently has a limit of 15 characters.
- - For **Location**, we recommend you choose the same location as the Azure Virtual Desktop host pool.
- For **Performance**, select **Standard**. (Your choice depends on your IOPS requirements; for more information, see [Storage options for FSLogix profile containers in Azure Virtual Desktop](store-fslogix-profile.md).)
- - For **Account type**, select **StorageV2** or **FileStorage** (only available if Performance tier is Premium).
- - For **Replication**, select **Locally-redundant storage (LRS)**.
-
-5. When you're done, select **Review + create**, then select **Create**.
-
-If you need more detailed configuration instructions, see [Regional availability](../storage/files/storage-files-identity-auth-active-directory-enable.md#regional-availability).
-
-## Create an Azure file share
-
-Next, you'll need to create an Azure file share.
-
-To create a file share:
-
-1. Select **Go to resource**.
-
-2. On the Overview page, select **File shares**.
-
-3. Select **+File shares**, create a new file share named **profiles**, then either enter an appropriate quota or leave the field blank for no quota.
-
-4. Select **Create**.
-
-## Enable Active Directory authentication
-
-Next, you'll need to enable Active Directory (AD) authentication. To do so, follow this section's instructions on a machine that's already domain-joined, in this case the VM running the domain controller:
-
-1. Remote Desktop Protocol into the domain-joined VM.
-
-2. Follow the instructions in [Enable AD DS authentication for your Azure file shares](../storage/files/storage-files-identity-ad-ds-enable.md) to install the AzFilesHybrid module and enable authentication.
-
-3. Open the Azure portal, open your storage account, select **Configuration**, then confirm **Active Directory (AD)** is set to **Enabled**.
-
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the Configuration page with Active Directory (AD) enabled.](media/active-directory-enabled.png)
-
-## Assign Azure RBAC permissions to Azure Virtual Desktop users
-
-All users that need to have FSLogix profiles stored on the storage account must be assigned the Storage File Data SMB Share Contributor role.
-
-Users signing in to the Azure Virtual Desktop session hosts need access permissions to access your file share. Granting access to an Azure File share involves configuring permissions both at the share level as well as on the NTFS level, similar to a traditional Windows share.
-
-To configure share level permissions, assign each user a role with the appropriate access permissions. Permissions can be assigned to either individual users or an Azure AD group. To learn more, see [Assign access permissions to an identity](../storage/files/storage-files-identity-ad-ds-assign-permissions.md).
-
->[!NOTE]
->The accounts or groups you assign permissions to should have been created in the domain and synchronized with Azure AD. Accounts created in Azure AD won't work.
-
-To assign Azure role-based access control (Azure RBAC) permissions:
-
-1. Open the Azure portal.
-
-1. Open the storage account you created in [Set up a storage account](#set-up-a-storage-account).
-
-1. Select **File shares**, then select the name of the file share you plan to use.
-
-1. Select **Access control (IAM)**.
-
-1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
- | Setting | Value |
- | | |
- | Role | Storage File Data SMB Share Elevated Contributor |
- | Assign access to | User, group, or service principal |
- | Members | \<Name of the administrator account> |
-
- To assign users permissions for their FSLogix profiles, select the **Storage File Data SMB Share Contributor** role instead.
-
- ![Screenshot showing Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
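If you prefer scripting to the portal, the same role assignment can be made with Az PowerShell. The following is a minimal sketch; the sign-in name and the placeholder segments in the scope are hypothetical values to replace with your own:

```powershell
# Sketch: assign the share-level RBAC role with Az PowerShell (placeholder values).
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
         "/providers/Microsoft.Storage/storageAccounts/<storage-account>" +
         "/fileServices/default/fileshares/<share-name>"

New-AzRoleAssignment -SignInName "admin@contoso.com" `
    -RoleDefinitionName "Storage File Data SMB Share Elevated Contributor" `
    -Scope $scope
```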
-
-## Assign users permissions on the Azure file share
-
-Once you've assigned Azure RBAC permissions to your users, next you'll need to configure the NTFS permissions.
-
-You'll need to know two things from the Azure portal to get started:
-- The UNC path.
-- The storage account key.
-
-### Get the UNC path
-
-Here's how to get the UNC path:
-
-1. Open the Azure portal.
-
-2. Open the storage account you created in [Set up a storage account](#set-up-a-storage-account).
-
-3. Select **Settings**, then select **Properties**.
-
-4. Copy the **Primary File Service Endpoint** URI to the text editor of your choice.
-
-5. After copying the URI, do the following things to change it into the UNC:
-
- - Remove `https://` and replace with `\\`
- - Replace the forward slash `/` with a back slash `\`.
- - Add the name of the file share you created in [Create an Azure file share](#create-an-azure-file-share) to the end of the UNC.
-
- For example: `\\customdomain.file.core.windows.net\<fileshare-name>`
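If you'd rather script this conversion, here's a minimal PowerShell sketch; the endpoint URI and share name below are hypothetical examples standing in for your own values:

```powershell
# Sketch: turn the Primary File Service Endpoint URI into a UNC path.
$endpoint  = "https://mystorageacct.file.core.windows.net/"  # copied from the portal
$shareName = "profiles"                                      # your file share's name

$unc = ($endpoint -replace '^https://', '\\').TrimEnd('/').Replace('/', '\') + "\$shareName"
$unc  # \\mystorageacct.file.core.windows.net\profiles
```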
-
-### Get the storage account key
-
-To get the storage account key:
-
-1. Open the Azure portal.
-
-2. Open the storage account you created in [Set up a storage account](#set-up-a-storage-account).
-
-3. On the **Storage account** tab, select **Access keys**.
-
-4. Copy **key1** or **key2** to a file on your local machine.
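As an alternative to copying the key from the portal, here's a small Az PowerShell sketch that retrieves it; the resource group and account names are placeholders:

```powershell
# Sketch: fetch the first storage account key (placeholder names).
$key = (Get-AzStorageAccountKey -ResourceGroupName "resource-group-name" `
    -Name "storage-account-name")[0].Value
```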
-
-### Configure NTFS permissions
-
-To configure your NTFS permissions:
-
-1. Open a command prompt on a domain-joined VM.
-
-2. Run the following command to mount the Azure file share and assign it a drive letter:
-
- ```cmd
- net use <desired-drive-letter>: <UNC-path> <SA-key> /user:Azure\<SA-name>
- ```
-
-3. Run the following command to review the access permissions to the Azure file share:
-
- ```cmd
- icacls <mounted-drive-letter>:
- ```
-
- Replace `<mounted-drive-letter>` with the letter of the drive you mapped to.
-
- Both *NT Authority\Authenticated Users* and *BUILTIN\Users* have certain permissions by default. These default permissions let these users read other users' profile containers. However, the permissions described in [Configure storage permissions for use with Profile Containers and Office Containers](/fslogix/fslogix-storage-config-ht) don't let users read each others' profile containers.
-
-4. Run the following commands to allow your Azure Virtual Desktop users to create their own profile container while blocking access to their profile containers from other users.
-
- ```cmd
- icacls <mounted-drive-letter>: /grant <user-email>:(M)
- icacls <mounted-drive-letter>: /grant "Creator Owner":(OI)(CI)(IO)(M)
- icacls <mounted-drive-letter>: /remove "Authenticated Users"
- icacls <mounted-drive-letter>: /remove "Builtin\Users"
- ```
-
- - Replace \<mounted-drive-letter\> with the letter of the drive you used to map the drive.
- - Replace \<user-email\> with the UPN of the user or Active Directory group that contains the users that will require access to the share.
-
- For example:
-
- ```cmd
- icacls <mounted-drive-letter>: /grant john.doe@contoso.com:(M)
- icacls <mounted-drive-letter>: /grant "Creator Owner":(OI)(CI)(IO)(M)
- icacls <mounted-drive-letter>: /remove "Authenticated Users"
- icacls <mounted-drive-letter>: /remove "Builtin\Users"
- ```
-
-## Configure FSLogix on session host VMs
-
-This section will show you how to configure a VM with FSLogix. You'll need to follow these instructions every time you configure a session host. Before you start configuring, follow the instructions in [Download and install FSLogix](/fslogix/install-ht). There are several options available that ensure the registry keys are set on all session hosts. You can set these options in an image or configure a group policy.
-
-To configure FSLogix on your session host VM:
-
-1. RDP to the session host VM of the Azure Virtual Desktop host pool.
-
-2. [Download and install FSLogix](/fslogix/install-ht).
-
-3. Follow the instructions in [Configure profile container registry settings](/fslogix/configure-profile-container-tutorial#configure-profile-container-registry-settings):
-
- - Navigate to **Computer** > **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **FSLogix**.
-
- - Create a **Profiles** key.
-
- - Create **Enabled, DWORD** with a value of 1.
-
- - Create **VHDLocations, MULTI_SZ**.
-
- - Set the value of **VHDLocations** to the UNC path you generated in [Get the UNC path](#get-the-unc-path).
-
-4. Restart the VM.
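If you'd rather script the registry changes from step 3 than set them by hand, the following PowerShell sketch sets the same values; the UNC path is a placeholder for the one you generated in [Get the UNC path](#get-the-unc-path):

```powershell
# Sketch: set the FSLogix Profile Container registry values (placeholder UNC path).
$regPath = "HKLM:\SOFTWARE\FSLogix\Profiles"
New-Item -Path $regPath -Force | Out-Null   # create the Profiles key if it's missing
New-ItemProperty -Path $regPath -Name Enabled -PropertyType DWORD -Value 1 -Force
New-ItemProperty -Path $regPath -Name VHDLocations -PropertyType MultiString `
    -Value "\\<storage-account-name>.file.core.windows.net\<share-name>" -Force
```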
-
-## Testing
-
-Once you've installed and configured FSLogix, you can test your deployment by signing in with a user account that's been assigned an app group or desktop on the host pool. Make sure the user account you sign in with has permission on the file share.
-
-If the user has signed in before, they'll have an existing local profile that will be used during this session. To avoid creating a local profile, either create a new user account to use for tests or use the configuration methods described in [Tutorial: Configure Profile Container to redirect User Profiles](/fslogix/configure-profile-container-tutorial/).
-
-To check your permissions on your session:
-
-1. Start a session on Azure Virtual Desktop.
-
-2. Open the Azure portal.
-
-3. Open the storage account you created in [Set up a storage account](#set-up-a-storage-account).
-
-4. Select **Create a share** on the Create an Azure file share page.
-
-5. Make sure a folder containing the user profile now exists in your files.
-
-For additional testing, follow the instructions in [Make sure your profile works](create-profile-container-adds.md#make-sure-your-profile-works).
-
-## Next steps
-
-To troubleshoot FSLogix, see [this troubleshooting guide](/fslogix/fslogix-trouble-shooting-ht).
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
This section is based on [Create a profile container for a host pool using a fil
> [!div class="mx-imgBorder"]
> ![A screenshot of the contents of the folder in the mount path. Inside is a single VHD file named "Profile_ssbb."](media/mount-path-folder.png)
-
-## Next steps
-
-You can use FSLogix profile containers to set up a user profile share. To learn how to create user profile shares with your new containers, see [Create a profile container for a host pool using a file share](create-host-pools-user-profile.md).
-
-You can also create an Azure Files file share to store your FSLogix profile in. To learn more, see [Create an Azure Files file share with a domain controller](create-file-share.md).
virtual-desktop Create Profile Container Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-adds.md
- Title: Create FSLogix profile container Azure Files Active Directory Domain Services - Azure
-description: This article describes how to create an FSLogix profile container with Azure Files and Azure Active Directory Domain Services.
-- Previously updated : 07/25/2022-----
-# Create a profile container with Azure Files and Azure AD DS
-
-This article will show you how to create an FSLogix profile container with Azure Files and Azure Active Directory Domain Services (AD DS).
-
-## Prerequisites
-
-This article assumes you've already set up an Azure AD DS instance. If you don't have one yet, follow the instructions in [Create a basic managed domain](../active-directory-domain-services/tutorial-create-instance.md) first, then return here.
-
-## Add Azure AD DS admins
-
-To add more admins, you must create a new user and grant them the necessary permissions.
-
-To add an admin:
-
-1. Select **Azure Active Directory** from the sidebar, then select **All users**, and then select **New user**.
-
-2. Enter the user details into the fields.
-
-3. In the Azure Active Directory pane on the left side of the screen, select **Groups**.
-
-4. Select the **AAD DC Administrators** group.
-
-5. In the pane on the left side of the window, select **Members**, then select **Add members** in the main pane. You will see a list of all available users in Azure AD. Select the name of the user profile you just created.
-
-## Set up an Azure Storage account
-
-Now it's time to enable Azure AD DS authentication over Server Message Block (SMB).
-
-To enable authentication:
-
-1. If you haven't already, set up and deploy a general-purpose v2 Azure Storage account by following the instructions in [Create an Azure Storage account](../storage/common/storage-account-create.md).
-
-2. Once you've finished setting up your account, select **Go to resource**.
-
-3. Select **Configuration** from the pane on the left side of the screen, then enable **Azure Active Directory authentication for Azure Files** in the main pane. When you're done, select **Save**.
-
-4. Select **Overview** in the pane on the left side of the screen, then select **Files** in the main pane.
-
-5. Select **File share** and enter the **Name** and **Quota** into the fields that appear on the right side of the screen.
-
-## Assign access permissions to an identity
-
-Other users will need access permissions to access your file share. To do this, you'll need to assign each user a role with the appropriate access permissions.
-
-To assign users access permissions:
-
-1. From the Azure portal, open the file share you created in [Set up an Azure Storage account](#set-up-an-azure-storage-account).
-
-1. Select **Access control (IAM)**.
-
-1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
- | Setting | Value |
- | | |
- | Role | Storage File Data SMB Share Contributor |
- | Assign access to | User, group, or service principal |
- | Members | \<Name or email address for the target Azure Active Directory identity> |
-
- ![Screenshot showing Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
-
-## Get the Storage Account access key
-
-Next, you'll need to get the access key for your Storage Account.
-
-To get the Storage Account access key:
-
-1. From the Azure portal sidebar, select **Storage accounts**.
-
-2. From the list of storage accounts, select the account that you enabled Azure AD DS and created the custom roles for in the previous sections.
-
-3. Under **Settings**, select **Access keys** and copy the key from **key1**.
-
-4. Go to the **Virtual Machines** tab and locate any VM that will become part of your host pool.
-
-5. Select the name of the virtual machine (VM) under **Virtual Machines (adVM)** and select **Connect**. Connecting will download an RDP file that will let you sign in to the VM with its own credentials.
-
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the RDP tab of the Connect to virtual machine window.](media/rdp-tab.png)
-
-6. When you've signed in to the VM, open a command prompt as an administrator.
-
-7. Run the following command:
-
- ```cmd
- net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> <storage-account-key> /user:Azure\<storage-account-name>
- ```
-
- - Replace `<desired-drive-letter>` with a drive letter of your choice (for example, `y:`).
- - Replace all instances of `<storage-account-name>` with the name of the storage account you specified earlier.
- - Replace `<share-name>` with the name of the share you created earlier.
- - Replace `<storage-account-key>` with the storage account key from Azure.
-
- For example:
-
- ```cmd
- net use y: \\fsprofile.file.core.windows.net\share HDZQRoFP2BBmoYQ=(truncated)= /user:Azure\fsprofile
- ```
-
-8. Run the following commands to allow your Azure Virtual Desktop users to create their own profile container while blocking access to the profile containers from other users.
-
- ```cmd
- icacls <mounted-drive-letter>: /grant <user-email>:(M)
- icacls <mounted-drive-letter>: /grant "Creator Owner":(OI)(CI)(IO)(M)
- icacls <mounted-drive-letter>: /remove "Authenticated Users"
- icacls <mounted-drive-letter>: /remove "Builtin\Users"
- ```
-
- - Replace `<mounted-drive-letter>` with the letter of the drive you used to map the drive.
- - Replace `<user-email>` with the UPN of the user or Active Directory group that contains the users that will require access to the share.
-
- For example:
-
- ```cmd
- icacls <mounted-drive-letter>: /grant john.doe@contoso.com:(M)
- icacls <mounted-drive-letter>: /grant "Creator Owner":(OI)(CI)(IO)(M)
- icacls <mounted-drive-letter>: /remove "Authenticated Users"
- icacls <mounted-drive-letter>: /remove "Builtin\Users"
- ```
-
-## Create a profile container with FSLogix
-
-In order to use profile containers, you'll need to configure FSLogix on your session host VMs. If you're using a custom image that doesn't have the FSLogix Agent already installed, follow the instructions in [Download and install FSLogix](/fslogix/install-ht). You can set the registry keys on session hosts in an image or through group policy. Unless you use group policy to apply these settings at scale to multiple session hosts, you'll need to follow these instructions every time you configure a session host.
-
-To configure FSLogix on your session host VM:
-
-1. RDP to the session host VM of the Azure Virtual Desktop host pool.
-
-2. [Download and install FSLogix](/fslogix/install-ht).
-
-3. Follow the instructions in [Configure profile container registry settings](/fslogix/configure-profile-container-tutorial#configure-profile-container-registry-settings):
-
- - Go to **Computer** > **HKEY_LOCAL_MACHINE** > **SOFTWARE** > **FSLogix**.
-
- - Create a **Profiles** key.
-
- - Create **Enabled, DWORD** with a value of 1.
-
- - Create **VHDLocations, MULTI_SZ**.
-
- - [Get the UNC path](create-file-share.md#get-the-unc-path), then set the value of **VHDLocations** to that UNC path.
-
-4. Restart the VM.
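To confirm the registry values from step 3 are in place after the restart, you can run a quick check like this sketch:

```powershell
# Sketch: display the Profile Container settings configured above.
Get-ItemProperty -Path "HKLM:\SOFTWARE\FSLogix\Profiles" |
    Select-Object Enabled, VHDLocations
```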
-
-## Make sure your profile works
-
-Once you've installed and configured FSLogix, you can test your deployment by signing in with a user account that's been assigned an app group or desktop on the host pool. Make sure the user account you sign in with has permission on the file share.
-
-If the user has signed in before, they'll have an existing local profile that they'll use during this session. To avoid creating a local profile, either create a new user account to use for tests or use the configuration methods described in [Tutorial: Configure Profile Container to redirect User Profiles](/fslogix/configure-profile-container-tutorial/).
-
-To check your permissions on your session:
-
-1. Start a session on Azure Virtual Desktop.
-
-2. Open the Azure portal.
-
-3. Open the storage account you created in [Set up a storage account](#set-up-an-azure-storage-account).
-
-4. Go to **Data storage** in your storage account, then select **File shares**.
-
-5. Open your file share and make sure the user profile folder you've created is in there.
-
-For extra testing, follow the instructions in [Make sure your profile works](create-profile-container-adds.md#make-sure-your-profile-works).
-
-## Next steps
-
-If you're looking for alternate ways to create FSLogix profile containers, check out the following articles:
-- [Create a profile container for a host pool using a file share](create-host-pools-user-profile.md)
-- [Create an FSLogix profile container for a host pool using Azure NetApp Files](create-fslogix-profile-container.md)
-
-You can find more detailed information about concepts related to FSLogix containers for Azure Files in [FSLogix profile containers and Azure Files](fslogix-containers-azure-files.md).
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
description: Set up an FSLogix profile container on an Azure file share in an ex
- Last updated 06/13/2022
If you need to disable Azure AD authentication on your storage account:
## Next steps

- To troubleshoot FSLogix, see [this troubleshooting guide](/fslogix/fslogix-trouble-shooting-ht).
-- To configure FSLogix profiles on Azure Files with Azure Active Directory Domain Services, see [Create a profile container with Azure Files and Azure AD DS](create-profile-container-adds.md).
-- To configure FSLogix profiles on Azure Files with Active Directory Domain Services, see [Create a profile container with Azure Files and AD DS](create-file-share.md).
virtual-desktop Fslogix Profile Container Configure Azure Files Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-profile-container-configure-azure-files-active-directory.md
+
+ Title: Set up FSLogix Profile Container with Azure Files and AD DS or Azure AD DS - Azure Virtual Desktop
+description: This article describes how to create a FSLogix Profile Container with Azure Files and Active Directory Domain Services or Azure Active Directory Domain Services.
++ Last updated : 07/29/2022+++++
+# Set up FSLogix Profile Container with Azure Files and Active Directory Domain Services or Azure Active Directory Domain Services
+
+This article will show you how to set up FSLogix Profile Container with Azure Files when your session host virtual machines (VMs) are joined to an Active Directory Domain Services (AD DS) domain or Azure Active Directory Domain Services (Azure AD DS) managed domain.
+
+## Prerequisites
+
+You'll need the following:
+
+- A host pool where the session hosts are joined to an AD DS domain or Azure AD DS managed domain and users are assigned.
+- A security group in your domain that contains the users who will use Profile Container. If you're using AD DS, this must be synchronized to Azure AD.
+- Permission on your Azure subscription to create a storage account and add role assignments.
+- A domain account to join computers to the domain and open an elevated PowerShell prompt.
+- The subscription ID of your Azure subscription where your storage account will be.
+- A computer joined to your domain for installing and running PowerShell modules that will join a storage account to your domain. This device will need to be running a [Supported version of Windows](/powershell/scripting/install/installing-powershell-on-windows.md#supported-versions-of-windows). Alternatively, you can use a session host.
+
+> [!IMPORTANT]
+> If users have previously signed in to the session hosts you want to use, local profiles will have been created for them and must be deleted first by an administrator for their profile to be stored in a Profile Container.
+
+## Set up a storage account for Profile Container
+
+To set up a storage account:
+
+1. Sign in to the Azure portal.
+
+1. Search for **Storage accounts** in the search bar.
+
+1. Select **+ Create**.
+
+1. Enter the following information into the **Basics** tab on the **Create storage account** page:
+
+ - Create a new resource group or select an existing one to store the storage account in.
+ - Enter a unique name for your storage account. This storage account name needs to be between 3 and 24 characters.
+ - For **Region**, we recommend you choose the same location as the Azure Virtual Desktop host pool.
+ - For **Performance**, select **Standard** as a minimum.
+ - If you select Premium performance, set the **Premium account type** to **File shares**.
+ - For **Redundancy**, select **Locally-redundant storage (LRS)** as a minimum.
+ - The defaults on the remaining tabs don't need to be changed.
+
+ > [!TIP]
+ > Your organization may have requirements to change these defaults:
+ >
+ > - Whether you should select **Premium** depends on your IOPS and latency requirements. For more information, see [Storage options for FSLogix Profile Containers in Azure Virtual Desktop](store-fslogix-profile.md).
+ > - On the **Advanced** tab, **Enable storage account key access** must be left enabled.
+ > - For more information on the remaining configuration options, see [Planning for an Azure Files deployment](../storage/files/storage-files-planning.md).
+
+1. Select **Review + create**. Review the parameters and the values that will be used, then select **Create**.
+
+1. Once the storage account has been created, select **Go to resource**.
+
+1. In the **Data storage** section, select **File shares**.
+
+1. Select **+ File share**.
+
+1. Enter a **Name**, such as *Profiles*, then for the tier select **Transaction optimized**.
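If you prefer to script the file share creation, here's a minimal Az PowerShell sketch, assuming the storage account from the previous steps already exists; all names are placeholders:

```powershell
# Sketch: create the transaction-optimized file share with Az PowerShell.
New-AzRmStorageShare `
    -ResourceGroupName "resource-group-name" `
    -StorageAccountName "storage-account-name" `
    -Name "profiles" `
    -AccessTier TransactionOptimized
```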
+
+## Join your storage account to Active Directory
+
+To use Active Directory accounts for the share permissions of your file share, you need to enable AD DS or Azure AD DS as a source. This process joins your storage account to a domain, representing it as a computer account. Select the relevant tab below for your scenario and follow the steps.
+
+# [AD DS](#tab/adds)
+
+1. Sign in to a computer that is joined to your AD DS domain. Alternatively, sign in to one of your session hosts.
+
+1. Download and extract [the latest version of AzFilesHybrid](https://github.com/Azure-Samples/azure-files-samples/releases) from the Azure Files samples GitHub repo. Make a note of the folder you extract the files to.
+
+1. Open an elevated PowerShell prompt and change to the directory where you extracted the files.
+
+1. Run the following command to add the `AzFilesHybrid` module to your user's PowerShell modules directory:
+
+ ```powershell
+ .\CopyToPSPath.ps1
+ ```
+
+1. Import the `AzFilesHybrid` module by running the following command:
+
+ ```powershell
+ Import-Module -Name AzFilesHybrid
+ ```
+
+ > [!IMPORTANT]
+    > This module requires the [PowerShell Gallery](/powershell/scripting/gallery/overview) and [Azure PowerShell](/powershell/azure/what-is-azure-powershell). You may be prompted to install these if they aren't already installed or need updating. If you're prompted, install them, then close all instances of PowerShell. Reopen an elevated PowerShell prompt and import the `AzFilesHybrid` module again before continuing.
+
+1. Sign in to Azure by running the command below. You will need to use an account that has one of the following role-based access control (RBAC) roles:
+
+ - Storage account owner
+ - Owner
+ - Contributor
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+ > [!TIP]
+    > If your Azure account has access to multiple tenants and/or subscriptions, you will need to select the correct subscription by setting your context. For more information, see [Azure PowerShell context objects](/powershell/azure/context-persistence.md).
+
+1. Join the storage account to your domain by running the commands below, replacing the values for `$subscriptionId`, `$resourceGroupName`, and `$storageAccountName` with your values. You can also add the parameter `-OrganizationalUnitDistinguishedName` to specify an Organizational Unit (OU) in which to place the computer account.
+
+ ```powershell
+ $subscriptionId = "subscription-id"
+ $resourceGroupName = "resource-group-name"
+ $storageAccountName = "storage-account-name"
+
+ Join-AzStorageAccount `
+        -ResourceGroupName $resourceGroupName `
+        -StorageAccountName $storageAccountName `
+ -DomainAccountType "ComputerAccount" `
+ -EncryptionType "'RC4','AES256'"
+ ```
+
+1. To verify the storage account has joined your domain, run the commands below and review the output, replacing the values for `$resourceGroupName` and `$storageAccountName` with your values:
+
+ ```powershell
+ $resourceGroupName = "resource-group-name"
+ $storageAccountName = "storage-account-name"
+
+ (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).AzureFilesIdentityBasedAuth.DirectoryServiceOptions; (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).AzureFilesIdentityBasedAuth.ActiveDirectoryProperties
+ ```
+
+1. From the Azure portal, open the storage account you created previously.
+
+1. In the **Data storage** section, select **File shares**.
+
+1. In the main section of the page, next to **Active Directory**, select **Not configured**.
+
+1. In the box for **Active Directory Domain Services**, select **Set up**.
+
+> [!IMPORTANT]
+> If your domain enforces password expiration, you must update the password before it expires to prevent authentication failures when accessing Azure file shares. For more information, see [Update the password of your storage account identity in AD DS](../storage/files/storage-files-identity-ad-ds-update-password.md).
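When the password does need updating, the AzFilesHybrid module you imported earlier provides a cmdlet for the rotation. A minimal sketch with placeholder names:

```powershell
# Sketch: rotate the storage account's AD computer-account password to the kerb2 key.
# Requires the AzFilesHybrid module imported in the steps above.
Update-AzStorageAccountADObjectPassword `
    -RotateToKerbKey kerb2 `
    -ResourceGroupName "resource-group-name" `
    -StorageAccountName "storage-account-name"
```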
+
+# [Azure AD DS](#tab/aadds)
+
+1. From the Azure portal, open the storage account you created previously.
+
+1. In the **Data storage** section, select **File shares**.
+
+1. In the main section of the page, next to **Active Directory**, select **Not configured**.
+
+1. In the box for **Azure Active Directory Domain Services**, select **Set up**.
+
+1. Tick the box to **Enable Azure Active Directory Domain Services (Azure AD DS) for this file share**, then select **Save**. An Organizational Unit (OU) called **AzureFilesConfig** will be created at the root of your domain and a computer account named the same as the storage account will be created in that OU.
+
+1. To verify the storage account has joined your domain, run the commands below and review the output, replacing the values for `$resourceGroupName` and `$storageAccountName` with your values:
+
+ ```powershell
+ $resourceGroupName = "resource-group-name"
+ $storageAccountName = "storage-account-name"
+
+ (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).AzureFilesIdentityBasedAuth.DirectoryServiceOptions; (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).AzureFilesIdentityBasedAuth.ActiveDirectoryProperties
+ ```
+++
+## Assign RBAC role to users
+
+Users needing to store profiles in your file share will need permission to access it. To do this, you'll need to assign each user the *Storage File Data SMB Share Contributor* role.
+
+To assign users the role:
+
+1. From the Azure portal, browse to the storage account, then to the file share you created previously.
+
+1. Select **Access control (IAM)**.
+
+1. Select **+ Add**, then select **Add role assignment** from the drop-down menu.
+
+1. Select the role **Storage File Data SMB Share Contributor** and select **Next**.
+
+1. On the **Members** tab, select **User, group, or service principal**, then select **+Select members**. In the search bar, search for and select the security group that contains the users who will use Profile Container.
+
+1. Select **Review + assign** to complete the assignment.
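You can make the same assignment with Az PowerShell if you prefer. In this sketch, the group display name and the file share's resource ID are placeholders for your own values:

```powershell
# Sketch: assign the role to the security group that contains your users.
$group = Get-AzADGroup -DisplayName "<security-group-name>"
New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" `
    -Scope "<file-share-resource-id>"
```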
+
+## Set NTFS permissions
+
+Next, you'll need to set NTFS permissions on the folder, which requires you to get the access key for your Storage account.
+
+To get the Storage account access key:
+
+1. From the Azure portal, search for and select **storage account** in the search bar.
+
+1. From the list of storage accounts, select the account that you enabled Azure AD DS and assigned the RBAC role for in the previous sections.
+
+1. Under **Security + networking**, select **Access keys**, then show and copy the key from **key1**.
+
+To set the correct NTFS permissions on the folder:
+
+1. Sign in to a session host that is part of your host pool.
+
+1. Open an elevated PowerShell prompt and run the command below to map the storage account as a drive on your session host. The mapped drive will not show in File Explorer, but can be viewed with the `net use` command. This is so you can set permissions on the share.
+
+ ```cmd
+ net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> <storage-account-key> /user:Azure\<storage-account-name>
+ ```
+
+ - Replace `<desired-drive-letter>` with a drive letter of your choice (for example, `y:`).
+ - Replace all instances of `<storage-account-name>` with the name of the storage account you specified earlier.
+ - Replace `<share-name>` with the name of the share you created earlier.
+ - Replace `<storage-account-key>` with the storage account key from Azure.
+
+ For example:
+
+ ```cmd
+ net use y: \\fsprofile.file.core.windows.net\share HDZQRoFP2BBmoYQ(truncated)== /user:Azure\fsprofile
+ ```
+
+1. Run the following commands to set permissions on the share that allow your Azure Virtual Desktop users to create their own profile while blocking access to the profiles of other users. You should use an Active Directory security group that contains the users you want to use Profile Container. In the commands below, replace `<mounted-drive-letter>` with the letter of the drive you used to map the drive and `<upn>` with the UPN name of the Active Directory group or user that will require access to the share.
+
+ ```cmd
+ icacls <mounted-drive-letter>: /grant "<upn>:(M)"
+ icacls <mounted-drive-letter>: /grant "Creator Owner:(OI)(CI)(IO)(M)"
+ icacls <mounted-drive-letter>: /remove "Authenticated Users"
+ icacls <mounted-drive-letter>: /remove "Builtin\Users"
+ ```
+
+ For example:
+
+ ```cmd
+ icacls y: /grant "avdusers@contoso.com:(M)"
+ icacls y: /grant "Creator Owner:(OI)(CI)(IO)(M)"
+ icacls y: /remove "Authenticated Users"
+ icacls y: /remove "Builtin\Users"
+ ```
+
+## Configure session hosts to use Profile Container
+
+In order to use Profile Container, you'll need to make sure FSLogix Apps is installed on your session host VMs. FSLogix Apps is preinstalled in Windows 10 Enterprise multi-session and Windows 11 Enterprise multi-session operating systems, but you should still follow the steps below as it might not have the latest version installed. If you're using a [custom image](set-up-golden-image.md), you can install FSLogix Apps in your image.
+
+To configure Profile Container, we recommend you use Group Policy Preferences to set registry keys and values at scale across all your session hosts. You can also set these in your custom image.
+
+To configure Profile Container on your session host VMs:
+
+1. Sign in to the VM used to create your custom image or a session host VM from your host pool.
+
+1. If you need to install or update FSLogix Apps, download the latest version of [FSLogix](https://aka.ms/fslogix-latest) and install it by running `FSLogixAppsSetup.exe`, then following the instructions in the setup wizard. For more details about the installation process, including customizations and unattended installation, see [Download and Install FSLogix](/fslogix/install-ht).
+
+1. Open an elevated PowerShell prompt and run the following commands, replacing `\\<storage-account-name>.file.core.windows.net\<share-name>` with the UNC path to your storage account you created earlier. These commands enable Profile Container and configure the location of the share.
+
+    ```powershell
+    $regPath = "HKLM:\SOFTWARE\FSLogix\Profiles"
+    # Create the Profiles key if it doesn't already exist, then enable Profile
+    # Container and point it at your file share.
+    New-Item -Path $regPath -Force | Out-Null
+    New-ItemProperty -Path $regPath -Name Enabled -PropertyType DWORD -Value 1 -Force
+    New-ItemProperty -Path $regPath -Name VHDLocations -PropertyType MultiString -Value "\\<storage-account-name>.file.core.windows.net\<share-name>" -Force
+    ```
+
+1. Restart the VM used to create your custom image or a session host VM. You will need to repeat these steps for any remaining session host VMs.
+
+You have now finished setting up Profile Container. If you're installing Profile Container in your custom image, you'll need to finish creating the custom image. For more information, follow the steps in [Create a custom image in Azure](set-up-golden-image.md) from the section [Take the final snapshot](set-up-golden-image.md#take-the-final-snapshot) onwards.
+
+## Validate profile creation
+
+Once you've installed and configured Profile Container, you can test your deployment by signing in with a user account that's been assigned an app group or desktop on the host pool.
+
+If the user has signed in before, they'll have an existing local profile that they'll use during this session. Either delete the local profile first, or create a new user account to use for tests.
+
+Users can check that Profile Container is set up by following the steps below:
+
+1. Sign in to Azure Virtual Desktop as the test user.
+
+1. When the user signs in, the message "Please wait for the FSLogix Apps Services" should appear as part of the sign-in process, before reaching the desktop.
+
+Administrators can check the profile folder has been created by following the steps below:
+
+1. Open the Azure portal.
+
+1. Open the storage account you created previously.
+
+1. Go to **Data storage** in your storage account, then select **File shares**.
+
+1. Open your file share and make sure the user profile folder you've created is in there.
+
+## Next steps
+
+You can find more detailed information about concepts related to FSLogix Profile Container for Azure Files in [FSLogix Profile Container for Azure Files](fslogix-containers-azure-files.md).
virtual-desktop Store Fslogix Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/store-fslogix-profile.md
To learn more about FSLogix profile containers, user profile disks, and other us
If you're ready to create your own FSLogix profile containers, get started with one of these tutorials:

-- [Create an Azure file share with Active Directory Domain Services](create-file-share.md)
-- [Create an Azure file share with Azure Active Directory](create-profile-container-azure-ad.md)
-- [Create an Azure file share with Azure Active Directory Domain Services](create-profile-container-adds.md)
-- [Create an FSLogix profile container for a host pool using Azure NetApp files](create-fslogix-profile-container.md)
-- The instructions in [Deploy a two-node Storage Spaces Direct scale-out file server for UPD storage in Azure](/windows-server/remote/remote-desktop-services/rds-storage-spaces-direct-deployment/) also apply when you use an FSLogix profile container instead of a user profile disk
+- [Set up FSLogix Profile Container with Azure Files and Active Directory](fslogix-profile-container-configure-azure-files-active-directory.md)
+- [Set up FSLogix Profile Container with Azure NetApp Files](create-fslogix-profile-container.md)
virtual-desktop Troubleshoot Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-authorization.md
Last updated 08/19/2021
-# Troubleshoot Azure Files authorization
+# Troubleshoot Azure Files authentication with Active Directory
-This article describes common issues related to Azure Files authentication with Azure Active Directory (Azure AD), and suggestions for how to fix them.
+This article describes common issues related to Azure Files authentication with an Active Directory Domain Services (AD DS) domain or Azure Active Directory Domain Services (Azure AD DS) managed domain, and suggestions for how to fix them.
## My group membership isn't working
-When you add a virtual machine (VM) to an Active Directory Domain Services (AD DS) group, you must restart that VM to activate its membership within the service.
+When you add a virtual machine (VM) to an AD DS group, you must restart that VM to activate its membership within the service.
-## I can't add my storage account to my AD DS
+## I can't add my storage account to my AD DS domain
First, check [Unable to mount Azure Files with AD credentials](../storage/files/storage-troubleshoot-windows-file-connection-problems.md#unable-to-mount-azure-files-with-ad-credentials) to see if your problem is listed there.
If your storage account doesn't automatically sync with Azure AD after 30 minute
## My storage account says it needs additional permissions
-If your storage account needs additional permissions, you may not have permission to access MSIX app attach and FSLogix. To fix this issue, make sure you've assigned one of these permissions to your account:
+If your storage account needs additional permissions, you may not have assigned the required Azure role-based access control (RBAC) role to users or NTFS permissions. To fix this issue, make sure you've assigned one of these permissions to users who need to access the share:
- The **Storage File Data SMB Share Contributor** RBAC permission.
If your storage account needs additional permissions, you may not have permissio
## Next steps
-If you need to refresh your memory about the Azure Files setup process, see [Authorize an account for Azure Files](azure-files-authorization.md).
+If you need to refresh your memory about the Azure Files setup process, see [Set up FSLogix Profile Container with Azure Files and Active Directory Domain Services or Azure Active Directory Domain Services](fslogix-profile-container-configure-azure-files-active-directory.md).
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2 (preview), Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 07/18/2022 Last updated : 08/01/2022
Premium SSD v2 offers up to 32 TiB per region per subscription by default in the
#### Premium SSD v2 IOPS
-All Premium SSD v2 disks have a baseline IOPS of 3000. After 6 GiB, the maximum IOPS a disk can have increases at a rate of 500 per GiB, up to 80,000 IOPS. So an 8 GiB disk can have up to 4,000 IOPS, and a 10 GiB can have up to 5,000 IOPS. To be able to set 80,000 IOPS on a disk, that disk must have at least 160 GiB.
+All Premium SSD v2 disks have a baseline of 3,000 IOPS that is free of charge. After 6 GiB, the maximum IOPS a disk can have increases at a rate of 500 per GiB, up to 80,000 IOPS. So an 8 GiB disk can have up to 4,000 IOPS, and a 10 GiB disk can have up to 5,000 IOPS. To be able to set 80,000 IOPS on a disk, that disk must be at least 160 GiB. Increasing your IOPS beyond 3,000 increases the price of your disk.
#### Premium SSD v2 throughput
-All Premium SSD v2 disks have a baseline throughput of 125 MB/s. After 6 GiB, the maximum throughput that can be set increases by 0.25 MB/s per set IOPS. If a disk has 3,000 IOPS, the max throughput it can set is 750 MB/s. To raise the throughput for this disk beyond 750 MB/s, its IOPS must be increased. For example, if you increased the IOPS to 4,000, then the max throughput that can be set is 1,000. 1,200 MB/s is the maximum throughput supported for disks that have 5,000 IOPS or more.
+All Premium SSD v2 disks have a baseline throughput of 125 MB/s that is free of charge. After 6 GiB, the maximum throughput that can be set increases by 0.25 MB/s per provisioned IOPS. If a disk has 3,000 IOPS, the maximum throughput it can set is 750 MB/s. To raise the throughput for this disk beyond 750 MB/s, its IOPS must be increased. For example, if you increased the IOPS to 4,000, then the maximum throughput that can be set is 1,000 MB/s. 1,200 MB/s is the maximum throughput supported for disks that have 5,000 IOPS or more. Increasing your throughput beyond 125 MB/s increases the price of your disk.
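Taken together, the IOPS and throughput rules above are simple to compute. The following PowerShell sketch (illustrative only, not part of any Azure module) mirrors them:

```powershell
# Sketch of the Premium SSD v2 scaling rules described above.
function Get-PremiumSsdV2Limit {
    param([int]$SizeGiB, [int]$ProvisionedIops)
    # Baseline 3,000 IOPS; +500 IOPS per GiB beyond 6 GiB, capped at 80,000.
    $maxIops = [Math]::Min(80000, 3000 + 500 * [Math]::Max(0, $SizeGiB - 6))
    # Maximum throughput: 0.25 MB/s per provisioned IOPS, capped at 1,200 MB/s.
    $maxMBps = [Math]::Min(1200, 0.25 * $ProvisionedIops)
    [pscustomobject]@{ MaxIops = $maxIops; MaxThroughputMBps = $maxMBps }
}

# An 8 GiB disk allows up to 4,000 IOPS; at 4,000 IOPS, up to 1,000 MB/s throughput.
Get-PremiumSsdV2Limit -SizeGiB 8 -ProvisionedIops 4000
```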
#### Premium SSD v2 Sector Sizes

Premium SSD v2 supports a 4k physical sector size by default. A 512E sector size is also supported. While most applications are compatible with 4k sector sizes, some require 512-byte sector sizes. Oracle Database, for example, requires release 12.2 or later in order to support 4k native disks. For older versions of Oracle DB, 512-byte sector size is required.
virtual-machines Connect Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/connect-ssh.md
The examples below use variables. You can set variables in your environment as f
| Bash/ZSH | myResourceGroup='resGroup10' |
| PowerShell | $myResourceGroup='resGroup10' |
-## Install SSH
+## Enable SSH
First, you will need to enable SSH on your Windows machine.
az ssh vm -g $myResourceGroup -n $myVM --local-user $myUsername -- -L 3389:loca
## Next steps
-Learn how to transfer files to an existing VM, see [Use SCP to move files to and from a Linux VM](../linux/copy-files-to-linux-vm-using-scp.md). The same steps will also work for Windows machines.
+To learn how to transfer files to an existing VM, see [Use SCP to move files to and from a Linux VM](../linux/copy-files-to-linux-vm-using-scp.md). The same steps also work for Windows machines.
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 07/18/2022 Last updated : 07/29/2022
# Use Azure to host and run SAP workload scenarios
-When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads and scenarios on a scalable, compliant, and enterprise-proven platform. You get the scalability, flexibility, and cost savings of Azure. With the expanded partnership between Microsoft and SAP, you can run SAP applications across development and test and production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP S/4HANA, SAP BI on Linux to Windows, and SAP HANA to SQL, we've got you covered.
+When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads and scenarios on a scalable, compliant, and enterprise-proven platform. You get the scalability, flexibility, and cost savings of Azure. With the expanded partnership between Microsoft and SAP, you can run SAP applications across development, test, and production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP S/4HANA, SAP BI on Linux to Windows, and SAP HANA to SQL Server, Oracle, Db2, and more, we've got you covered.
-Besides hosting SAP NetWeaver scenarios with the different DBMS on Azure, you can host other SAP workload scenarios, like SAP BI on Azure.
+Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on Azure, you can host other SAP workload scenarios, like SAP BI on Azure.
-The uniqueness of Azure for SAP HANA is an offer that sets Azure apart. To enable hosting more memory and CPU resource-demanding SAP scenarios that involve SAP HANA, Azure offers the use of customer-dedicated bare-metal hardware. Use this solution to run SAP HANA deployments that require up to 24 TB (120 TB scale-out) of memory for S/4HANA or other SAP HANA workload.
+We just announced our new services, Azure Center for SAP solutions and Azure Monitor for SAP 2.0, entering the public preview stage. These services give you the ability to deploy SAP workloads on Azure in a highly automated manner, in an optimal architecture and configuration, and to monitor your Azure infrastructure, OS, DBMS, and ABAP stack deployments in a single pane of glass.
+
+For customers and partners who are focused on deploying and operating their assets in the public cloud through Terraform and Ansible, our SAP Deployment Automation Framework (SDAF) can jump-start your SAP deployments into Azure using our public Terraform and Ansible modules on [GitHub](https://github.com/Azure/sap-automation).
Hosting SAP workload scenarios in Azure can also create requirements for identity integration and single sign-on. This situation can occur when you use Azure Active Directory (Azure AD) to connect different SAP components and SAP software-as-a-service (SaaS) or platform-as-a-service (PaaS) offers. A list of such integration and single sign-on scenarios with Azure AD and SAP entities is described and documented in the section "Azure AD SAP identity integration and single sign-on."
In the SAP workload documentation space, you can find the following areas:
## Change Log

-- Julu 18, 2022: Clarify statement around Pacemaker support on Oracle Linux in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md)
+- July 18, 2022: Clarify statement around Pacemaker support on Oracle Linux in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md)
- June 29, 2022: Add recommendation and links to Pacemaker usage for Db2 versions 11.5.6 and higher in the documents [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_ibm.md), [High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker](./dbms-guide-ha-ibm.md), and [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md) - June 08, 2022: Change in [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to adjust timeouts when using NFSv4.1 (related to NFSv4.1 lease renewal) for more resilient Pacemaker configuration - June 02, 2022: Change in the [SAP Deployment Guide](deployment-guide.md) to add a link to RHEL in-place upgrade documentation
vpn-gateway Howto Point To Site Multi Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/howto-point-to-site-multi-auth.md
Previously updated : 07/21/2021 Last updated : 07/29/2022
-# Configure a Point-to-Site VPN connection to a VNet using multiple authentication types: Azure portal
+# Configure a point-to-site VPN connection to a VNet using multiple authentication types: Azure portal
-This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location, such when you are telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. Point-to-Site connections do not require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or IKEv2. For more information about Point-to-Site VPN, see [About Point-to-Site VPN](point-to-site-about.md).
+This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. Point-to-site VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. Point-to-site connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol) or IKEv2. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md).
:::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/point-to-site-diagram.png" alt-text="Connect from a computer to an Azure VNet - point-to-site connection diagram":::
You can use the following values to create a test environment, or refer to these
* **VPN type:** Route-based * **Public IP address name:** VNet1GWpip * **Connection type:** Point-to-site
-* **Client address pool:** 172.16.201.0/24<br>VPN clients that connect to the VNet using this Point-to-Site connection receive an IP address from the client address pool.
+* **Client address pool:** 172.16.201.0/24<br>VPN clients that connect to the VNet using this point-to-site connection receive an IP address from the client address pool.
## <a name="createvnet"></a>Create a virtual network
Before beginning, verify that you have an Azure subscription. If you don't alrea
[!INCLUDE [About cross-premises addresses](../../includes/vpn-gateway-cross-premises.md)] ## <a name="creategw"></a>Virtual network gateway
You can see the deployment status on the Overview page for your gateway. A gatew
## <a name="addresspool"></a>Client address pool
-The client address pool is a range of private IP addresses that you specify. The clients that connect over a Point-to-Site VPN dynamically receive an IP address from this range. Use a private IP address range that does not overlap with the on-premises location that you connect from, or the VNet that you want to connect to. If you configure multiple protocols and SSTP is one of the protocols, then the configured address pool is split between the configured protocols equally.
+The client address pool is a range of private IP addresses that you specify. The clients that connect over a point-to-site VPN dynamically receive an IP address from this range. Use a private IP address range that doesn't overlap with the on-premises location that you connect from, or the VNet that you want to connect to. If you configure multiple protocols and SSTP is one of the protocols, the configured address pool is split equally between the configured protocols. For example, if you configure both IKEv2 and SSTP with a 172.16.201.0/24 pool, each protocol receives half of the addresses.
1. Once the virtual network gateway has been created, navigate to the **Settings** section of the virtual network gateway page. In **Settings**, select **Point-to-site configuration**. Select **Configure now** to open the configuration page. :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/configure-now.png" alt-text="Screenshot of point-to-site configuration page." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/configure-now.png"::: 1. On the **Point-to-site configuration** page, you can configure a variety of settings. In the **Address pool** box, add the private IP address range that you want to use. VPN clients dynamically receive an IP address from the range that you specify. The minimum subnet mask is 29 bit for active/passive and 28 bit for active/active configuration.
- :::image type="content" source="./media/howto-point-to-site-multi-auth/address.jpg" alt-text="Screenshot of address pool.":::
+ :::image type="content" source="./media/howto-point-to-site-multi-auth/address-pool.png" alt-text="Screenshot of client address pool.":::
1. Continue to the next section to configure authentication and tunnel types.
The client address pool is a range of private IP addresses that you specify. The
In this section, you configure authentication type and tunnel type. On the **Point-to-site configuration** page, if you don't see **Tunnel type** or **Authentication type**, your gateway is using the Basic SKU. The Basic SKU does not support IKEv2 or RADIUS authentication. If you want to use these settings, you need to delete and recreate the gateway using a different gateway SKU.
- :::image type="content" source="./media/howto-point-to-site-multi-auth/multiauth.jpg" alt-text="Screenshot of authentication type.":::
+ :::image type="content" source="./media/howto-point-to-site-multi-auth/authentication-types.png" alt-text="Screenshot of authentication types and tunnel type.":::
### <a name="tunneltype"></a>Tunnel type
For instructions to generate and install VPN client configuration files, use the
[!INCLUDE [All client articles](../../includes/vpn-gateway-vpn-client-install-articles.md)]
-## <a name="faq"></a>Point-to-Site FAQ
+## <a name="faq"></a>Point-to-site FAQ
-This section contains FAQ information that pertains to Point-to-Site configurations. You can also view the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md) for additional information about VPN Gateway.
-
+For point-to-site FAQ information, see the point-to-site sections of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).
## Next steps
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
Title: 'Configure Azure AD tenant for P2S VPN connections: Azure AD authentication-OpenVPN'
+ Title: 'Configure Azure AD tenant and settings for P2S VPN connections: Azure AD authentication: OpenVPN'
description: Learn how to set up an Azure AD tenant for P2S Azure AD authentication - OpenVPN protocol. Previously updated : 06/14/2022 Last updated : 07/29/2022
-# Configure an Azure AD tenant for P2S OpenVPN protocol connections
+# Configure an Azure AD tenant and P2S configuration for VPN Gateway P2S connections
-When you connect to your VNet using the Azure VPN Gateway point-to-site VPN, you have a choice of which protocol to use. The protocol you use determines the authentication options that are available to you. If you're using the OpenVPN protocol, Azure Active Directory authentication is one of the authentication options available for you to use. This article helps you configure your AD tenant and P2S VPN gateway for Azure AD authentication. For more information about point-to-site protocols and authentication, see [About point-to-site VPN](point-to-site-about.md).
+This article helps you configure your Azure AD tenant and P2S settings for Azure AD authentication. For more information about point-to-site protocols and authentication, see [About VPN Gateway point-to-site VPN](point-to-site-about.md). To authenticate using the Azure AD authentication type, you must include the OpenVPN tunnel type in your point-to-site configuration.
[!INCLUDE [OpenVPN note](../../includes/vpn-gateway-openvpn-auth-include.md)]
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
[!INCLUDE [Steps to enable the tenant](../../includes/vpn-gateway-vwan-azure-ad-tenant.md)]
-### Configure P2S gateway settings
+### Configure point-to-site settings
1. Locate the tenant ID of the directory that you want to use for authentication. It's listed in the properties section of the Active Directory page. For help with finding your tenant ID, see [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
> [!IMPORTANT] > The Basic SKU is not supported for OpenVPN.
-1. Enable Azure AD authentication on the VPN gateway by navigating to **Point-to-site configuration** and picking **OpenVPN (SSL)** as the **Tunnel type**. Select **Azure Active Directory** as the **Authentication type**, then fill in the information under the **Azure Active Directory** section. Replace {AzureAD TenantID} with your tenant ID.
+1. Enable Azure AD authentication on the VPN gateway by going to **Point-to-site configuration** and picking **OpenVPN (SSL)** as the **Tunnel type**. Select **Azure Active Directory** as the **Authentication type**, then fill in the information under the **Azure Active Directory** section. Replace {AzureAD TenantID} with your tenant ID.
* **Tenant:** TenantID for the Azure AD tenant
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
1. Save your changes.
-1. Create and download the profile by clicking on the **Download VPN client** link.
+1. At the top of the page, click **Download VPN client**. It takes a few minutes for the client configuration package to generate.
+
+1. Your browser indicates that a client configuration zip file is available. It has the same name as your gateway.
1. Extract the downloaded zip file.

1. Browse to the unzipped "AzureVPN" folder.
-1. Make a note of the location of the "azurevpnconfig.xml" file. The azurevpnconfig.xml contains the setting for the VPN connection and can be imported directly into the Azure VPN Client application. You can also distribute this file to all the users that need to connect via e-mail or other means. The user will need valid Azure AD credentials to connect successfully.
+1. Make a note of the location of the "azurevpnconfig.xml" file. The azurevpnconfig.xml file contains the settings for the VPN connection. You can also distribute this file to all the users that need to connect via e-mail or other means. The user will need valid Azure AD credentials to connect successfully. For more information, see [Azure VPN client profile config files for Azure AD authentication](about-vpn-profile-download.md).
## Next steps
-Create and configure a VPN client profile. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+* [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).