Updates from: 03/24/2022 02:08:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Deploy Custom Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/deploy-custom-policies-devops.md
try {
Write-Host "Uploading the" $PolicyId "policy..." $graphuri = 'https://graph.microsoft.com/beta/trustframework/policies/' + $PolicyId + '/$value'
- $response = Invoke-RestMethod -Uri $graphuri -Method Put -Body $policycontent -Headers $headers
+ $content = [System.Text.Encoding]::UTF8.GetBytes($policycontent)
+ $response = Invoke-RestMethod -Uri $graphuri -Method Put -Body $content -Headers $headers
Write-Host "Policy" $PolicyId "uploaded successfully." }
active-directory-b2c Enable Authentication In Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app.md
Title: Enable authentication in your own Node web application using Azure Active Directory B2C
-description: This article explains how to enable authentication in your own node.js web application using Azure AD B2C
+description: This article explains how to enable authentication in your own Node.js web application using Azure AD B2C
active-directory-b2c Integrate With App Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integrate-with-app-code-samples.md
Title: Azure Active Directory B2C integrate with app samples
+ Title: Azure Active Directory B2C integrate with app samples
description: Code samples for integrating Azure AD B2C to mobile, desktop, web, and single-page applications.
The following tables provide links to samples for applications including iOS, An
| [dotnetcore-webapp-openidconnect](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-5-B2C) | An ASP.NET Core web application that uses OpenID Connect to sign in users in Azure AD B2C. |
| [dotnetcore-webapp-msal-api](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C) | An ASP.NET Core web application that can sign in a user using Azure AD B2C, get an access token using MSAL.NET and call an API. |
| [auth-code-flow-nodejs](https://github.com/Azure-Samples/active-directory-b2c-msal-node-sign-in-sign-out-webapp) | A Node.js app that shows how to enable authentication (sign in, sign out and profile edit) in a Node.js web application using Azure Active Directory B2C. The web app uses MSAL-node.|
-| [javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | A small node.js Web API for Azure AD B2C that shows how to protect your web api and accept B2C access tokens using passport.js. |
+| [javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | A small Node.js Web API for Azure AD B2C that shows how to protect your web api and accept B2C access tokens using passport.js. |
| [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp/blob/master/README_B2C.md) | Demonstrates how to integrate B2C of the Microsoft identity platform with a Python web application. |

## Single page apps
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
When testing the smart lockout feature, use a distinctive pattern for each passw
When the smart lockout threshold is reached, you'll see the following message while the account is locked: **Your account is temporarily locked to prevent unauthorized use. Try again later**. The error messages can be [localized](localization-string-ids.md#sign-up-or-sign-in-error-messages).

> [!NOTE]
-> When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more than your defined lockout threshold number of attempts to cause a lockout. A user has a maximum of (threshold_limit * datacenter_count) number of bad attempts before being completely locked out.
+> When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more than your defined lockout threshold number of attempts to cause a lockout. A user has a maximum of (threshold_limit * datacenter_count) number of bad attempts before being completely locked out. For more information, see [Azure global infrastructure](https://azure.microsoft.com/global-infrastructure/).
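For illustration, a quick sketch of that worst case; the threshold and datacenter count below are made-up example values, not tenant defaults.

```powershell
# Hypothetical values for illustration only
$thresholdLimit  = 10   # smart lockout threshold configured for the tenant
$datacenterCount = 3    # datacenters that may track failed attempts independently

# Worst-case number of bad attempts before the account is locked out everywhere
$maxBadAttempts = $thresholdLimit * $datacenterCount   # 30
```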
## Viewing locked-out accounts
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
description: Learn how to use additional context in MFA notifications
Previously updated : 03/18/2022 Last updated : 03/23/2022
# How to use additional context in Microsoft Authenticator notifications (Preview) - Authentication Methods Policy
-This topic covers how to improve the security of user sign-in by adding application location based on IP address in Microsoft Authenticator push notifications.
+This topic covers how to improve the security of user sign-in by adding the application and location in Microsoft Authenticator push notifications.
## Prerequisites
Your organization will need to enable Microsoft Authenticator push notifications
## Passwordless phone sign-in and multifactor authentication
-When a user receives a Passwordless phone sign-in or MFA push notification in the Microsoft Authenticator app, they'll see the name of the application that requests the approval and the app location based on its IP address.
+When a user receives a Passwordless phone sign-in or MFA push notification in the Microsoft Authenticator app, they'll see the name of the application that requests the approval and the location based on the IP address where the sign-in originated from.
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location.png" alt-text="Screenshot of additional context in the MFA push notification.":::
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
If you no longer want to use any password functionality, complete the following
1. On the **Ready to configure** page, select **Configure** and wait for the process to finish.

1. When the configuration finishes, select **Exit**.
+> [!IMPORTANT]
+> Enabling password writeback for the first time may trigger password change events 656 and 657, even if a password change has not occurred. This is because all password hashes are re-synchronized after a password hash synchronization cycle has run.
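As a rough sketch, you could list those event IDs on the server that runs Azure AD Connect; the Application log name is an assumption here, so adjust it to wherever your sync server records these events.

```powershell
# List recent password writeback events 656 and 657 (run on the Azure AD Connect server)
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 656, 657 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message
```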
+ ## Next steps

In this tutorial, you enabled Azure AD SSPR writeback to an on-premises AD DS environment. You learned how to:
active-directory Sample V1 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/sample-v1-code.md
The following samples illustrate public client applications (desktop/mobile appl
The following samples show desktop or web applications that access the Microsoft Graph or a web API with no user (with the application identity).

Client application | Platform | Flow/Grant | Calls an ASP.NET or ASP.NET Core 2.0 Web API
- | -- | - | --
+ | -- | - | --
Daemon app (Console) | ![This image shows the .NET Framework logo](media/sample-v2-code/logo-netframework.png) | Client Credentials with app secret or certificate | [dotnet-daemon](https://github.com/azure-samples/active-directory-dotnet-daemon)</p> [dotnet-daemon-certificate-credential](https://github.com/azure-samples/active-directory-dotnet-daemon-certificate-credential)
Daemon app (Console) | ![This image shows the .NET Core logo](media/sample-v2-code/logo-netcore.png) | Client Credentials with certificate | [dotnetcore-daemon-certificate-credential](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-certificate-credential)
ASP.NET Web App | ![This image shows the .NET Framework logo](media/sample-v2-code/logo-netframework.png) | Client credentials | [dotnet-webapp-webapi-oauth2-appidentity](https://github.com/Azure-Samples/active-directory-dotnet-webapp-webapi-oauth2-appidentity)
ASP.NET Web App | ![This image shows the .NET Framework logo](media/sample-v2-c
### Web API protected by Azure Active Directory
-The following sample shows how to protect a node.js web API with Azure AD.
+The following sample shows how to protect a Node.js web API with Azure AD.
In the previous sections of this article, you can also find other samples illustrating a client application **calling** an ASP.NET or ASP.NET Core **Web API**. These samples are not mentioned again in this section, but you will find them in the last column of the tables above or below
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Previously updated : 07/27/2021 Last updated : 03/22/2022
Groups created in | Security group default behavior | Microsoft 365 group defaul
## Make a group available for user self-service
-1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with an account that's been assigned the Global Administrator or Privileged Role Administrator role for the directory.
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) with an account that's been assigned the Global Administrator or Privileged Role Administrator role for the directory.
1. Select **Groups**, and then select **General** settings.
The group settings enable you to control who can create security and Microsoft 365 g
![Azure Active Directory security groups setting change.](./media/groups-self-service-management/security-groups-setting.png)
-> [!NOTE]
-> The behavior of these settings recently changed. Make sure these settings are configured for your organization. For more information, see [Why were the group settings changed?](#why-were-the-group-settings-changed).
- The following table helps you decide which values to choose.

| Setting | Value | Effect on your tenant |
Here are some additional details about these group settings.
- If you want to enable some, but not all, of your users to create groups, you can assign those users a role that can create groups, such as [Groups Administrator](../roles/permissions-reference.md#groups-administrator).
- These settings are for users and don't impact service principals. For example, if you have a service principal with permissions to create groups, even if you set these settings to **No**, the service principal will still be able to create groups.
-### Why were the group settings changed?
-
-The previous implementation of the group settings were named **Users can create security groups in Azure portals** and **Users can create Microsoft 365 groups in Azure portals**. The previous settings only controlled group creation in Azure portals and did not apply to API or PowerShell. The new settings control group creation in Azure portals, as well as, API and PowerShell. The new settings are more secure.
-
-The default values for the new settings have been set to your previous API or PowerShell values. There is a possibility that the default values for the new settings are different than your previous values that controlled only the Azure portal behavior. Starting in May 2021, there was a transition period of a few weeks where you could select your preferred default value before the new settings took effect. Now that the new settings have taken effect, you are required to verify the new settings are configured for your organization.
- ## Next steps

These articles provide additional information on Azure Active Directory.
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
Here's an example that shows how to use PowerShell to add a user to the Guest In
Add-MsolRoleMember -RoleObjectId 95e79109-95c0-4d8e-aee3-d01accf2d47b -RoleMemberEmailAddress <RoleMemberEmailAddress>
```
+## Sign-in logs for B2B users
+
+When a B2B user signs into a resource tenant to collaborate, a sign-in log is generated in both the home tenant and the resource tenant. These logs include information such as the application being used, email addresses, tenant name, and tenant ID for both the home tenant and the resource tenant.
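For example, one way to inspect these entries is the Microsoft Graph sign-in logs endpoint; this is only a sketch, the user principal name is a placeholder, and reading sign-in logs requires the appropriate audit log permissions (AuditLog.Read.All).

```http
GET https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter=userPrincipalName eq 'guestuser@fabrikam.com'&$top=10
```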
+ ## Next steps

See the following articles on Azure AD B2B collaboration:

- [What is Azure AD B2B collaboration?](what-is-b2b.md)
- [Add B2B collaboration guest users without an invitation](add-user-without-invite.md)
-- [Adding a B2B collaboration user to a role](./add-users-administrator.md)
+- [Adding a B2B collaboration user to a role](./add-users-administrator.md)
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
na Previously updated : 02/18/2022 Last updated : 03/22/2022
Use the following instructions to create an access review on a team with shared c
1. Continue on to the **Reviews** tab. Select a reviewer to complete the review, then specify the **Duration** and **Review recurrence**.

> [!NOTE]
- > <ul> If you set **Select reviewers** to **Users review their own access** or **Managers of users**, B2B direct connect users and Teams won't be able to review their own access in your tenant. The owner of the Team under review will get an email that asks the owner to review the B2B direct connect user and Teams.</ul><ul>If you select **Managers of users**, a selected fallback reviewer will review any user without a manager in the home tenant. This includes B2B direct connect users and Teams without a manager.</ul>
+ > - If you set **Select reviewers** to **Users review their own access** or **Managers of users**, B2B direct connect users and Teams won't be able to review their own access in your tenant. The owner of the Team under review will get an email that asks the owner to review the B2B direct connect user and Teams.
+ > - If you select **Managers of users**, a selected fallback reviewer will review any user without a manager in the home tenant. This includes B2B direct connect users and Teams without a manager.
1. Go on to the **Settings** tab and configure additional settings. Then go to the **Review and Create** tab to start your access review. For more detailed information about creating a review and configuration settings, see our [Create a single-stage access review](#create-a-single-stage-access-review).
active-directory Get It Now Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/get-it-now-azure-marketplace.md
- Title: 'Add an app from the Azure Marketplace'
-description: This article acts as a landing page from the Get It Now button on the Azure Marketplace.
- Previously updated : 07/16/2020
-# Get It Now - add an app from the Azure Marketplace
-
-You are almost there!
-
-If you are trying to use Azure AD as the identity provider for an app then you are in the right place. You just need to add it to your Azure AD tenant. To learn how to do this, follow the quickstart series here: [View apps in your Azure AD tenant](view-applications-portal.md).
-
-If you are trying to install an app on your local computer or mobile device then you are in the wrong place. The Azure Marketplace is designed for organizations using apps with their Azure tenants. For personal computers and devices, check out the [Microsoft Apps store](https://www.microsoft.com/store/apps).
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 03/07/2022 Last updated : 03/22/2022
The following Azure AD roles can be assigned with administrative unit scope:
| Role | Description |
| --| -- |
| [Authentication Administrator](permissions-reference.md#authentication-administrator) | Has access to view, set, and reset authentication method information for any non-admin user in the assigned administrative unit only. |
+| [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | Limited access to manage devices in Azure AD. |
| [Groups Administrator](permissions-reference.md#groups-administrator) | Can manage all aspects of groups in the assigned administrative unit only. |
| [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | Can reset passwords for non-administrators in the assigned administrative unit only. |
| [License Administrator](permissions-reference.md#license-administrator) | Can assign, remove, and update license assignments within the administrative unit only. |
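As a sketch of how one of these roles is assigned with administrative unit scope, the Microsoft Graph scoped role members endpoint can be used; the IDs below are placeholders, and the role ID must refer to a directory role that is already activated in the tenant.

```http
POST https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/scopedRoleMembers
Content-type: application/json

{
  "roleId": "{directory-role-id}",
  "roleMemberInfo": {
    "id": "{user-id}"
  }
}
```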
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md
Previously updated : 03/03/2022 Last updated : 03/22/2022
DELETE https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-uni
## Next steps

-- [Add users or groups to an administrative unit](admin-units-members-add.md)
+- [Add users, groups, or devices to an administrative unit](admin-units-members-add.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
active-directory Admin Units Members Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-add.md
Title: Add users or groups to an administrative unit - Azure Active Directory
-description: Add users or groups to an administrative unit in Azure Active Directory
+ Title: Add users, groups, or devices to an administrative unit - Azure Active Directory
+description: Add users, groups, or devices to an administrative unit in Azure Active Directory
documentationcenter: ''
Previously updated : 01/14/2022 Last updated : 03/22/2022
-# Add users or groups to an administrative unit
+# Add users, groups, or devices to an administrative unit
-In Azure Active Directory (Azure AD), you can add users or groups to an administrative unit to restrict the scope of role permissions. For additional details on what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md).
+> [!IMPORTANT]
+> Administrative units support for devices is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In Azure Active Directory (Azure AD), you can add users, groups, or devices to an administrative unit to restrict the scope of role permissions. For additional details on what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md).
+
+This article describes how to add users, groups, or devices to administrative units manually. For information about how to add users or devices to administrative units dynamically using rules, see [Manage users or devices for an administrative unit with dynamic membership rules](admin-units-members-dynamic.md).
## Prerequisites
In Azure Active Directory (Azure AD), you can add users or groups to an administ
- Azure AD Free licenses for administrative unit members
- Privileged Role Administrator or Global Administrator
- AzureAD module when using PowerShell
+- AzureADPreview module when using PowerShell for devices
- Admin consent when using Graph explorer for Microsoft Graph API

For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).

## Azure portal
-You can add users or groups to administrative units using the Azure portal. You can also add users in a bulk operation.
+You can add users, groups, or devices to administrative units using the Azure portal. You can also add users in a bulk operation.
-### Add a single user or group to administrative units
+### Add a single user, group, or device to administrative units
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).

1. Select **Azure Active Directory**.
-1. Select **Users** or **Groups** and then select the user or group you want to add to an administrative unit.
+1. Select one of the following:
+
+ - **Users**
+ - **Groups**
+ - **Devices** > **All devices**
+
+1. Select the user, group, or device you want to add to administrative units.
1. Select **Administrative units**.
You can add users or groups to administrative units using the Azure portal. You
1. In the **Select** pane, select the administrative units and then select **Select**.
- ![Screenshot of the "Administrative units" pane for assigning a user to an administrative unit.](./media/admin-units-members-add/assign-users-individually.png)
+ ![Screenshot of the Administrative units page for adding a user to an administrative unit.](./media/admin-units-members-add/assign-users-individually.png)
-### Add users or groups to a single administrative unit
+### Add users, groups, or devices to a single administrative unit
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).

1. Select **Azure Active Directory**.
-1. Select **Administrative units** and then select the administrative unit that you want to add users or groups to.
+1. Select **Administrative units** and then select the administrative unit that you want to add users, groups, or devices to.
+
+1. Select one of the following:
-1. Select **Users** or **Groups**.
+ - **Users**
+ - **Groups**
+ - **Devices**
-1. Select **Add member** or **Add**.
+1. Select **Add member**, **Add**, or **Add device**.
-1. In the **Select** pane, select the users or groups you want to add to the administrative unit and then select **Select**.
+1. In the **Select** pane, select the users, groups, or devices you want to add to the administrative unit and then select **Select**.
- ![Screenshot of the administrative unit "Users" pane for assigning a user to an administrative unit.](./media/admin-units-members-add/assign-to-admin-unit.png)
+ ![Screenshot of adding multiple devices to an administrative unit.](./media/admin-units-members-add/admin-unit-members-add.png)
### Add users to an administrative unit in a bulk operation
You can add users or groups to administrative units using the Azure portal. You
1. Select **Users** > **Bulk operations** > **Bulk add members**.
- ![Screenshot of the "Users" pane for assigning users to an administrative unit as a bulk operation.](./media/admin-units-members-add/bulk-assign-to-admin-unit.png)
+ ![Screenshot of the Users page for assigning users to an administrative unit as a bulk operation.](./media/admin-units-members-add/bulk-assign-to-admin-unit.png)
1. In the **Bulk add members** pane, download the comma-separated values (CSV) template.
You can add users or groups to administrative units using the Azure portal. You
Use the [Add-AzureADMSAdministrativeUnitMember](/powershell/module/azuread/add-azureadmsadministrativeunitmember) command to add users or groups to an administrative unit.
+Use the [Add-AzureADMSAdministrativeUnitMember (Preview)](/powershell/module/azuread/add-azureadmsadministrativeunitmember?view=azureadps-2.0-preview&preserve-view=true) command to add devices to an administrative unit.
+ ### Add users to an administrative unit

```powershell
$groupObj = Get-AzureADGroup -Filter "displayname eq 'TestGroup'"
Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -RefObjectId $groupObj.ObjectId
```
+### Add devices to an administrative unit
+
+```powershell
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
+$deviceObj = Get-AzureADDevice -Filter "displayname eq 'TestDevice'"
+Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -RefObjectId $deviceObj.ObjectId
+```
+ ## Microsoft Graph API

Use the [Add a member](/graph/api/administrativeunit-post-members) API to add users or groups to an administrative unit.
+Use the [Add a member (Beta)](/graph/api/administrativeunit-post-members?view=graph-rest-beta&preserve-view=true) API to add devices to an administrative unit.
++ ### Add users to an administrative unit

Request
Body
```http
{
- "@odata.id":"https://graph.microsoft.com/v1.0/users/{user-id}"
+ "@odata.id":"https://graph.microsoft.com/v1.0/users/{user-id}"
}
```

Example

```http
{
- "@odata.id":"https://graph.microsoft.com/v1.0/users/john@example.com"
+ "@odata.id":"https://graph.microsoft.com/v1.0/users/john@example.com"
}
```

Body

```http
{
-"@odata.id":"https://graph.microsoft.com/v1.0/groups/{group-id}"
+ "@odata.id":"https://graph.microsoft.com/v1.0/groups/{group-id}"
}
```

Example

```http
{
-"@odata.id":"https://graph.microsoft.com/v1.0/groups/871d21ab-6b4e-4d56-b257-ba27827628f3"
+ "@odata.id":"https://graph.microsoft.com/v1.0/groups/871d21ab-6b4e-4d56-b257-ba27827628f3"
+}
+```
+
+### Add devices to an administrative unit
+
+Request
+
+```http
+POST https://graph.microsoft.com/beta/administrativeUnits/{admin-unit-id}/members/$ref
+```
+
+Body
+
+```http
+{
+ "@odata.id":"https://graph.microsoft.com/beta/devices/{device-id}"
}
```
Example
- [Administrative units in Azure Active Directory](administrative-units.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
-- [Remove users or groups from an administrative unit](admin-units-members-remove.md)
+- [Manage users or devices for an administrative unit with dynamic membership rules](admin-units-members-dynamic.md)
+- [Remove users, groups, or devices from an administrative unit](admin-units-members-remove.md)
active-directory Admin Units Members Dynamic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-dynamic.md
+
+ Title: Manage users or devices for an administrative unit with dynamic membership rules (Preview) - Azure Active Directory
+description: Manage users or devices for an administrative unit with dynamic membership rules (Preview) in Azure Active Directory
+
+documentationcenter: ''
+ Last updated : 03/22/2022
+# Manage users or devices for an administrative unit with dynamic membership rules (Preview)
+
+> [!IMPORTANT]
+> Dynamic membership rules for administrative units are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+You can add or remove users or devices for administrative units manually. With this preview, you can add or remove users or devices for administrative units dynamically using rules. This article describes how to create administrative units with dynamic membership rules using the Azure portal, PowerShell, or Microsoft Graph API.
+
+Although administrative units with members assigned manually support multiple object types, such as users, groups, and devices, it is currently not possible to create an administrative unit with dynamic membership rules that includes more than one object type. For example, you can create administrative units with dynamic membership rules for users or devices, but not both. Administrative units with dynamic membership rules for groups are currently not supported.
+
+## Prerequisites
+
+- Azure AD Premium P1 or P2 license for each administrative unit administrator
+- Azure AD Premium P1 or P2 license for each administrative unit member
+- Privileged Role Administrator or Global Administrator
+- AzureADPreview module when using PowerShell
+- Admin consent when using Graph explorer for Microsoft Graph API
+- Global Azure cloud (not available in specialized clouds, such as Azure Government or Azure China)
+
+> [!NOTE]
+> Dynamic membership rules for administrative units requires an Azure AD Premium P1 license for each unique user that is a member of one or more dynamic administrative units. You don't have to assign licenses to users for them to be members of dynamic administrative units, but you must have the minimum number of licenses in the Azure AD organization to cover all such users. For example, if you had a total of 1,000 unique users in all dynamic administrative units in your organization, you would need at least 1,000 licenses for Azure AD Premium P1 to meet the license requirement. No license is required for devices that are members of a dynamic device administrative unit.
+
+For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+## Add dynamic membership rules
+
+Follow these steps to create administrative units with dynamic membership rules for users or devices.
+
+### Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory**.
+
+1. Select **Administrative units** and then select the administrative unit that you want to add users or devices to.
+
+1. Select **Properties**.
+
+1. In the **Membership type** list, select **Dynamic User** or **Dynamic Device**, depending on the type of rule you want to add.
+
+ ![Screenshot of an administrative unit Properties page with Membership type list displayed.](./media/admin-units-members-dynamic/admin-unit-properties.png)
+
+1. Select **Add dynamic query**.
+
+1. Use the rule builder to specify the dynamic membership rule. For more information, see [Rule builder in the Azure portal](../enterprise-users/groups-dynamic-membership.md#rule-builder-in-the-azure-portal).
+
+ ![Screenshot of Dynamic membership rules page showing rule builder with property, operator, and value.](./media/admin-units-members-dynamic/dynamic-membership-rules-builder.png)
+
+1. When finished, select **Save** to save the dynamic membership rule.
+
+1. On the **Properties** page, select **Save** to save the membership type and query.
+
+ The following message is displayed:
+
+ After changing the administrative unit type, the existing membership may change based on the dynamic membership rule you provide.
+
+1. Select **Yes** to continue.
+
+For steps on how to edit your rule, see the following [Edit dynamic membership rules](#edit-dynamic-membership-rules) section.
+
+### PowerShell
+
+1. Create a dynamic membership rule. For more information, see [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
+
+1. Use the [Connect-AzureAD](/powershell/module/azuread/connect-azuread) command to connect with Azure Active Directory with a user that has been assigned the Privileged Role Administrator or Global Administrator role.
+
+ ```powershell
+ # Connect to Azure AD
+ Connect-AzureAD
+ ```
+
+1. Use the [New-AzureADMSAdministrativeUnit](/powershell/module/azuread/new-azureadmsadministrativeunit) command to create a new administrative unit with a dynamic membership rule using the following parameters:
+
+ - `MembershipType`: `Dynamic` or `Assigned`
+ - `MembershipRule`: Dynamic membership rule you created in a previous step
+ - `MembershipRuleProcessingState`: `On` or `Paused`
+
+ ```powershell
+ # Create an administrative unit for users in the United States
+ $adminUnit = New-AzureADMSAdministrativeUnit -DisplayName "Example Admin Unit" -Description "Example Dynamic Membership Admin Unit" -MembershipType "Dynamic" -MembershipRuleProcessingState "On" -MembershipRule '(user.country -eq "United States")'
+ ```
+
+### Microsoft Graph API
+
+1. Create a dynamic membership rule. For more information, see [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
+
+1. Use the [Create administrativeUnit](/graph/api/administrativeunit-post-administrativeunits?view=graph-rest-beta&preserve-view=true) API to create a new administrative unit with a dynamic membership rule.
+
+ The following shows an example of a dynamic membership rule that applies to Windows devices.
+
+ Request
+
+ ```http
+ POST https://graph.microsoft.com/beta/administrativeUnits
+ ```
+
+ Body
+
+ ```http
+ {
+ "displayName": "Windows Devices",
+ "description": "All Contoso devices running Windows",
+ "membershipType": "Dynamic",
+ "membershipRule": "(device.deviceOSType -eq \"Windows\")",
+ "membershipRuleProcessingState": "On"
+ }
+ ```
+
+## Edit dynamic membership rules
+
+When an administrative unit has been configured for dynamic membership, the usual commands to add or remove members for the administrative unit are disabled as the dynamic membership engine retains the sole ownership of adding or removing members. To make changes to the membership, you can edit the dynamic membership rules.
+
+### Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory**.
+
+1. Select **Administrative units** and then select the administrative unit that has the dynamic membership rules you want to edit.
+
+1. Select **Membership rules** to edit the dynamic membership rules using the rule builder.
+
+ ![Screenshot of an administrative unit with Membership rules and Dynamic membership rules options to open rule builder.](./media/admin-units-members-dynamic/membership-rules-options.png)
+
+ You can also open the rule builder by selecting **Dynamic membership rules** in the left navigation.
+
+1. When finished, select **Save** to save the dynamic membership rule changes.
+
+### PowerShell
+
+Use the [Set-AzureADMSAdministrativeUnit](/powershell/module/azuread/set-azureadmsadministrativeunit) command to edit the dynamic membership rule.
+
+```powershell
+# Set a new dynamic membership rule for an administrative unit
+Set-AzureADMSAdministrativeUnit -Id $adminUnit.Id -MembershipRule '(user.country -eq "Germany")'
+```
+
+### Microsoft Graph API
+
+Use the [Update administrativeUnit](/graph/api/administrativeunit-update?view=graph-rest-beta&preserve-view=true) API to edit the dynamic membership rule.
+
+Request
+
+```http
+PATCH https://graph.microsoft.com/beta/administrativeUnits/{id}
+```
+
+Body
+
+```http
+{
+ "membershipRule": "(user.country -eq "Germany")"
+}
+```
+
+## Change a dynamic administrative unit to assigned
+
+Follow these steps to change an administrative unit with dynamic membership rules to an administrative unit where members are manually assigned.
+
+### Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory**.
+
+1. Select **Administrative units** and then select the administrative unit that you want to change to assigned.
+
+1. Select **Properties**.
+
+1. In the **Membership type** list, select **Assigned**.
+
+ ![Screenshot of an administrative unit Properties page with Membership type list displayed and Assigned selected.](./media/admin-units-members-dynamic/admin-unit-properties.png)
+
+1. Select **Save** to save the membership type.
+
+ The following message is displayed:
+
+ After changing the administrative unit type, the dynamic rule will no longer be processed. Current administrative unit members will remain in the administrative unit and the administrative unit will have assigned membership.
+
+1. Select **Yes** to continue.
+
+ When the membership type setting is changed from dynamic to assigned, the current members remain intact in the administrative unit. Additionally, the ability to add groups to the administrative unit is enabled.
+
+### PowerShell
+
+Use the [Set-AzureADMSAdministrativeUnit](/powershell/module/azuread/set-azureadmsadministrativeunit) command to change the membership type setting.
+
+```powershell
+# Change an administrative unit to assigned
+Set-AzureADMSAdministrativeUnit -Id $adminUnit.Id -MembershipType "Assigned" -MembershipRuleProcessingState "Paused"
+```
+
+### Microsoft Graph API
+
+Use the [Update administrativeUnit](/graph/api/administrativeunit-update?view=graph-rest-beta&preserve-view=true) API to change the membership type setting.
+
+Request
+
+```http
+PATCH https://graph.microsoft.com/beta/administrativeUnits/{id}
+```
+
+Body
+
+```http
+{
+ "membershipType": "Assigned"
+}
+```
+
+## Next steps
+
+- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
+- [Add users or groups to an administrative unit](admin-units-members-add.md)
+- [Azure AD administrative units: Troubleshooting and FAQ](admin-units-faq-troubleshoot.yml)
+
active-directory Admin Units Members List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md
Title: List users or groups in an administrative unit - Azure Active Directory
-description: List users or groups in an administrative unit in Azure Active Directory.
+ Title: List users, groups, or devices in an administrative unit - Azure Active Directory
+description: List users, groups, or devices in an administrative unit in Azure Active Directory.
documentationcenter: ''
Previously updated : 01/12/2022 Last updated : 03/22/2022
-# List users or groups in an administrative unit
+# List users, groups, or devices in an administrative unit
-In Azure Active Directory (Azure AD), you can list the users or groups in administrative units.
+In Azure Active Directory (Azure AD), you can list the users, groups, or devices in administrative units.
## Prerequisites
In Azure Active Directory (Azure AD), you can list the users or groups in admini
- Azure AD Free licenses for administrative unit members
- Privileged Role Administrator or Global Administrator
- AzureAD module when using PowerShell
+- AzureADPreview module when using PowerShell for devices
- Admin consent when using Graph explorer for Microsoft Graph API

For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).

## Azure portal
-You can list the users or groups in administrative units using the Azure portal.
+You can list the users, groups, or devices in administrative units using the Azure portal.
-### List the administrative units for a single user or group
+### List the administrative units for a single user, group, or device
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).

1. Select **Azure Active Directory**.
-1. Select **Users** or **Groups** and then select the user or group you want to list their administrative units.
+1. Select one of the following:
-1. Select **Administrative units** to list all the administrative units where the user or group is a member.
+ - **Users**
+ - **Groups**
+ - **Devices** > **All devices**
- ![Screenshot of the "Administrative units" pane, displaying a list administrative units that a group is assigned to.](./media/admin-units-members-list/list-group-au.png)
+1. Select the user, group, or device you want to list their administrative units.
-### List the users or groups for a single administrative unit
+1. Select **Administrative units** to list all the administrative units where the user, group, or device is a member.
+
+ ![Screenshot of the Administrative units page, displaying a list administrative units that a group is assigned to.](./media/admin-units-members-list/list-group-au.png)
+
+### List the users, groups, or devices for a single administrative unit
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory**.
+
+1. Select **Administrative units** and then select the administrative unit that you want to list the users, groups, or devices for.
+
+1. Select one of the following:
+
+ - **Users**
+ - **Groups**
+ - **Devices**
+
+ ![Screenshot of the Groups page displaying a list of groups in an administrative unit.](./media/admin-units-members-list/list-groups-in-admin-units.png)
+
+### List the devices for an administrative unit by using the All devices page
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).

1. Select **Azure Active Directory**.
-1. Select **Administrative units** and then select the administrative unit that you want to list the users or groups for.
+1. Select **Devices** > **All devices**.
+
+1. Select the filter for administrative unit.
-1. Select **Users** or **Groups** to see the list of users or groups for this administrative unit.
+1. Select the administrative unit whose devices you want to list.
- ![Screenshot of the "Groups" pane displaying a list of groups in an administrative unit.](./media/admin-units-members-list/list-groups-in-admin-units.png)
+ ![Screenshot of All devices page with an administrative unit filter.](./media/admin-units-members-list/device-admin-unit-filter.png)
## PowerShell

Use the [Get-AzureADMSAdministrativeUnit](/powershell/module/azuread/get-azureadmsadministrativeunit) and [Get-AzureADMSAdministrativeUnitMember](/powershell/module/azuread/get-azureadmsadministrativeunitmember) commands to list users or groups for an administrative unit.
+Use the [Get-AzureADMSAdministrativeUnit (Preview)](/powershell/module/azuread/get-azureadmsadministrativeunit?view=azureadps-2.0-preview&preserve-view=true) and [Get-AzureADMSAdministrativeUnitMember (Preview)](/powershell/module/azuread/get-azureadmsadministrativeunitmember?view=azureadps-2.0-preview&preserve-view=true) commands to list devices for an administrative unit.
+ > [!NOTE]
> By default, [Get-AzureADMSAdministrativeUnitMember](/powershell/module/azuread/get-azureadmsadministrativeunitmember) returns only top members of an administrative unit. To retrieve all members, add the `-All $true` parameter.
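For instance, a minimal sketch that returns the complete membership, assuming `$adminUnitObj` has already been retrieved as in the examples that follow:

```powershell
# Return every member of the administrative unit, not just the top members
Get-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -All $true
```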
$groupObj = Get-AzureADGroup -Filter "displayname eq 'TestGroup'"
Get-AzureADMSAdministrativeUnit | where { Get-AzureADMSAdministrativeUnitMember -Id $_.Id | where {$_.Id -eq $groupObj.ObjectId} }
```
-### List the users and groups for an administrative unit
+### List the administrative units for a device
+
+```powershell
+Get-AzureADMSAdministrativeUnit | where { Get-AzureADMSAdministrativeUnitMember -ObjectId $_.ObjectId | where {$_.ObjectId -eq $deviceObjId} }
+```
+
+### List the users, groups, and devices for an administrative unit
```powershell
$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
foreach ($member in (Get-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id)
}
```
+### List the devices for an administrative unit
+
+```powershell
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
+foreach ($member in (Get-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id))
+{
+ if($member.ObjectType -eq "Device")
+ {
+ Get-AzureADDevice -ObjectId $member.ObjectId
+ }
+}
+```
## Microsoft Graph API

Use the [List members](/graph/api/administrativeunit-list-members) API to list users or groups for an administrative unit.
+Use the [List members (Beta)](/graph/api/administrativeunit-list-members?view=graph-rest-beta&preserve-view=true) API to list devices for an administrative unit.
+ ### List the administrative units for a user

```http
GET https://graph.microsoft.com/v1.0/users/{user-id}/memberOf/$/Microsoft.Graph.
GET https://graph.microsoft.com/v1.0/groups/{group-id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
```
+### List the administrative units for a device
+
+```http
+GET https://graph.microsoft.com/beta/devices/{device-id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
+```
### List the groups for an administrative unit

```http
GET https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/$/microsoft.graph.group
```
+### List the devices for an administrative unit
+
+```http
+GET https://graph.microsoft.com/beta/administrativeUnits/{admin-unit-id}/members/$/microsoft.graph.device
+```
+ ## Next steps

-- [Add users or groups to an administrative unit](admin-units-members-add.md)
+- [Add users, groups, or devices to an administrative unit](admin-units-members-add.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
active-directory Admin Units Members Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-remove.md
Title: Remove users or groups from an administrative unit - Azure Active Directory
-description: Remove users or groups from an administrative unit in Azure Active Directory
+ Title: Remove users, groups, or devices from an administrative unit - Azure Active Directory
+description: Remove users, groups, or devices from an administrative unit in Azure Active Directory
documentationcenter: ''
Previously updated : 12/17/2021 Last updated : 03/22/2022
-# Remove users or groups from an administrative unit
+# Remove users, groups, or devices from an administrative unit
-When users or groups no longer need access, you can remove users or groups from an administrative unit.
+When users, groups, or devices in an administrative unit no longer need access, you can remove them.
## Prerequisites
When users or groups no longer need access, you can remove users or groups from
- Azure AD Free licenses for administrative unit members
- Privileged Role Administrator or Global Administrator
- AzureAD module when using PowerShell
+- AzureADPreview module when using PowerShell for devices
- Admin consent when using Graph explorer for Microsoft Graph API

For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).

## Azure portal
-You can remove users or groups from administrative units individually using the Azure portal. You can also remove users in a bulk operation.
+You can remove users, groups, or devices from administrative units individually using the Azure portal. You can also remove users in a bulk operation.
-### Remove a single user or group from administrative units
+### Remove a single user, group, or device from administrative units
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).

1. Select **Azure Active Directory**.
-1. Select **Users** or **Groups** and then select the user or group you want to remove from an administrative unit.
+1. Select one of the following:
+
+ - **Users**
+ - **Groups**
+ - **Devices** > **All devices**
+
+1. Select the user, group, or device you want to remove from an administrative unit.
1. Select **Administrative units**.
-1. Add check marks next to the administrative units you want to remove the user or group from.
+1. Add check marks next to the administrative units you want to remove the user, group, or device from.
1. Select **Remove from administrative unit**.
- ![Screenshot showing how to remove a user from an administrative unit from the user's profile pane.](./media/admin-units-members-remove/user-remove-admin-units.png)
+ ![Screenshot of Devices and Administrative units page with Remove from administrative unit option.](./media/admin-units-members-remove/device-admin-unit-remove.png)
-### Remove users or groups from a single administrative unit
+### Remove users, groups, or devices from a single administrative unit
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).

1. Select **Azure Active Directory**.
-1. Select **Administrative units** and then select the administrative unit that you want to remove users or groups from.
+1. Select **Administrative units** and then select the administrative unit that you want to remove users, groups, or devices from.
+
+1. Select one of the following:
-1. Select **Users** or **Groups**.
+ - **Users**
+ - **Groups**
+ - **Devices**
-1. Add check marks next to the users or groups you want to remove.
+1. Add check marks next to the users, groups, or devices you want to remove.
-1. Select **Remove member** or **Remove**.
+1. Select **Remove member**, **Remove**, or **Remove device**.
- ![Screenshot showing how to remove a user at the administrative unit level.](./media/admin-units-members-remove/admin-units-remove-user.png)
+ ![Screenshot showing a list users in an administrative unit with check marks and a Remove member option.](./media/admin-units-members-remove/admin-units-remove-user.png)
### Remove users from an administrative unit in a bulk operation
You can remove users or groups from administrative units individually using the
Use the [Remove-AzureADMSAdministrativeUnitMember](/powershell/module/azuread/remove-azureadmsadministrativeunitmember) command to remove users or groups from an administrative unit.
+Use the [Remove-AzureADMSAdministrativeUnitMember (Preview)](/powershell/module/azuread/remove-azureadmsadministrativeunitmember?view=azureadps-2.0-preview&preserve-view=true) command to remove devices from an administrative unit.
+ ### Remove users from an administrative unit

```powershell
$groupObj = Get-AzureADGroup -Filter "displayname eq 'TestGroup'"
Remove-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -MemberId $groupObj.ObjectId
```
+### Remove devices from an administrative unit
+
+```powershell
+Remove-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitId -MemberId $deviceObjId
+```
## Microsoft Graph API

Use the [Remove a member](/graph/api/administrativeunit-delete-members) API to remove users or groups from an administrative unit.
+Use the [Remove a member (Beta)](/graph/api/administrativeunit-delete-members?view=graph-rest-beta&preserve-view=true) API to remove devices from an administrative unit.
+ ### Remove users from an administrative unit

```http
DELETE https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-uni
DELETE https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/{group-id}/$ref
```
+### Remove devices from an administrative unit
+
+```http
+DELETE https://graph.microsoft.com/beta/administrativeUnits/{admin-unit-id}/members/{device-id}/$ref
+```
+ ## Next steps

-- [Add users or groups to an administrative unit](admin-units-members-add.md)
+- [Add users, groups, or devices to an administrative unit](admin-units-members-add.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md
Previously updated : 01/14/2022 Last updated : 03/22/2022
# Administrative units in Azure Active Directory
-This article describes administrative units in Azure Active Directory (Azure AD). An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. An administrative unit can contain only users and groups.
+This article describes administrative units in Azure Active Directory (Azure AD). An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. An administrative unit can contain only users, groups, or devices.
Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) role to regional support specialists, so they can manage users only in the region that they support.
A central administrator could:
- Create a role with administrative permissions over only Azure AD users in the School of Business administrative unit. - Add the business school IT team to the role, along with its scope.
-Administrative units apply scope only to management permissions. They don't prevent members or administrators from using their [default user permissions](../fundamentals/users-default-permissions.md) to browse other users, groups, or resources outside the administrative unit. In the Microsoft 365 admin center, users outside a scoped admin's administrative units are filtered out. But you can browse other users in the Azure portal, PowerShell, and other Microsoft services.
+![Screenshot of Devices and Administrative units page with Remove from administrative unit option.](./media/administrative-units/admin-unit-overview.png)
## License requirements
-Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and Azure AD Free licenses for administrative unit members. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and an Azure AD Free license for each administrative unit member. If you are using dynamic membership rules for administrative units, each administrative unit member requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## Manage administrative units
-You can manage administrative units by using the Azure portal, PowerShell cmdlets and scripts, or Microsoft Graph. For more information, see:
+You can manage administrative units by using the Azure portal, PowerShell cmdlets and scripts, or Microsoft Graph API. For more information, see:
- [Create or delete administrative units](admin-units-manage.md)
-- [Add users or groups to an administrative unit](admin-units-members-add.md)
+- [Add users, groups, or devices to an administrative unit](admin-units-members-add.md)
+- [Manage users or devices for an administrative unit with dynamic membership rules (Preview)](admin-units-members-dynamic.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
- [Work with administrative units](/powershell/azure/active-directory/working-with-administrative-units): Covers how to work with administrative units by using PowerShell.
- [Administrative unit Graph support](/graph/api/resources/administrativeunit): Provides detailed documentation on Microsoft Graph for administrative units.
You can expect the creation of administrative units in the organization to go th
As a Global Administrator or a Privileged Role Administrator, you can use the Azure portal to:

- Create administrative units
-- Add users and groups members of administrative units
+- Add users, groups, or devices as members of administrative units
+- Manage users or devices for an administrative unit with dynamic membership rules (Preview)
- Assign IT staff to administrative unit-scoped administrator roles.

Administrative unit-scoped admins can use the Microsoft 365 admin center for basic management of users in their administrative units. A group administrator with administrative unit scope can manage groups by using PowerShell, Microsoft Graph, and the Microsoft 365 admin centers.
+Administrative units apply scope only to management permissions. They don't prevent members or administrators from using their [default user permissions](../fundamentals/users-default-permissions.md) to browse other users, groups, or resources outside the administrative unit. In the Microsoft 365 admin center, users outside a scoped admin's administrative units are filtered out. But you can browse other users in the Azure portal, PowerShell, and other Microsoft services.
+ >[!Note]
>Only the features described in this section are available in the Microsoft 365 admin center. No organization-level features are available for an Azure AD role with administrative unit scope.
The following sections describe current support for administrative unit scenario
### Administrative unit management
-| Permissions | Graph/PowerShell | Azure portal | Microsoft 365 admin center |
-| | | | |
-| Creating and deleting administrative units | Supported | Supported | Not supported |
-| Adding and removing administrative unit members individually | Supported | Supported | Not supported |
-| Adding and removing administrative unit members in bulk by using CSV files | Not supported | Supported | No plan to support |
-| Assigning administrative unit-scoped administrators | Supported | Supported | Not supported |
-| Adding and removing administrative unit members dynamically based on attributes | Not supported | Not supported | Not supported
+| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center |
+| --- | :---: | :---: | :---: |
+| Create or delete administrative units | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Add or remove members individually | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Add or remove members in bulk by using CSV files | :x: | :heavy_check_mark: | No plan to support |
+| Assign administrative unit-scoped administrators | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Add or remove users or devices dynamically based on rules (Preview) | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Add or remove groups dynamically based on rules | :x: | :x: | :x: |
### User management
-| Permissions | Graph/PowerShell | Azure portal | Microsoft 365 admin center |
-| | | | |
-| Administrative unit-scoped management of user properties, passwords | Supported | Supported | Supported |
-| Administrative unit-scoped management of user licenses | Supported | Not Supported | Supported |
-| Administrative unit-scoped blocking and unblocking of user sign-ins | Supported | Supported | Supported |
-| Administrative unit-scoped management of user multifactor authentication credentials | Supported | Supported | Not supported |
+| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center |
+| --- | :---: | :---: | :---: |
+| Administrative unit-scoped management of user properties, passwords | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Administrative unit-scoped management of user licenses | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| Administrative unit-scoped blocking and unblocking of user sign-ins | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Administrative unit-scoped management of user multi-factor authentication credentials | :heavy_check_mark: | :heavy_check_mark: | :x: |
### Group management
-| Permissions | Graph/PowerShell | Azure portal | Microsoft 365 admin center |
-| | | | |
-| Administrative unit-scoped management of group properties and membership | Supported | Supported | Not supported |
-| Administrative unit-scoped management of group licensing | Supported | Supported | Not supported |
+| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center |
+| --- | :---: | :---: | :---: |
+| Administrative unit-scoped management of group properties and membership | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Administrative unit-scoped management of group licensing | :heavy_check_mark: | :heavy_check_mark: | :x: |
> [!NOTE]
> Adding a group to an administrative unit does not grant scoped group administrators the ability to manage properties for individual members of that group. For example, a scoped group administrator can manage group membership, but they can't manage authentication methods of users who are members of the group added to an administrative unit. To manage authentication methods of users who are members of the group that is added to an administrative unit, the individual group members must be directly added as users of the administrative unit, and the group administrator must also be assigned a role that can manage user authentication methods.
+### Device management
+
+| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center |
+| --- | :---: | :---: | :---: |
+| Enable, disable, or delete devices | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Read BitLocker recovery keys | :heavy_check_mark: | :heavy_check_mark: | :x: |
+
+Managing devices in Intune is *not* supported at this time.
+
## Constraints

Here are some of the constraints for administrative units.
Here are some of the constraints for administrative units.
## Next steps

- [Create or delete administrative units](admin-units-manage.md)
-- [Add users or groups to an administrative unit](admin-units-members-add.md)
+- [Add users, groups, or devices to an administrative unit](admin-units-members-add.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
- [Administrative unit limits](../enterprise-users/directory-service-limits-restrictions.md?context=%2fazure%2factive-directory%2froles%2fcontext%2fugr-context)
active-directory Custom Device Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-device-permissions.md
+
+ Title: Device management permissions for Azure AD custom roles (Preview) - Azure Active Directory
+description: Device management permissions for Azure AD custom roles (Preview) in the Azure portal, PowerShell, or Microsoft Graph API.
+ Last updated : 03/22/2022
+# Device management permissions for Azure AD custom roles (Preview)
+
+> [!IMPORTANT]
+> Device management permissions for Azure AD custom roles are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Device management permissions can be used in custom role definitions in Azure Active Directory (Azure AD) to grant fine-grained access such as the following:
+
+- Enable or disable devices
+- Delete devices
+- Read BitLocker recovery keys
+- Read BitLocker metadata
+- Read device registration policies
+- Update device registration policies
+
+This article lists the permissions you can use in your custom roles for different device management scenarios. For information about how to create custom roles, see [Create and assign a custom role](custom-create.md).
+
+## Enable or disable devices
+
+The following permissions are available to toggle device states.
+
+- microsoft.directory/devices/enable
+- microsoft.directory/devices/disable
+
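For illustration only, the following is a minimal sketch that creates a custom role containing these two permissions by calling Microsoft Graph with `az rest`. The role name and description are placeholders, and it assumes you're signed in to Azure CLI with rights to create role definitions.

```azurecli-interactive
# Minimal sketch: define a custom role that can enable and disable devices.
# "Device Toggle Operator" is a placeholder name; adjust the permissions to your needs.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions" \
  --body '{
    "displayName": "Device Toggle Operator",
    "description": "Can enable and disable devices in Azure AD",
    "isEnabled": true,
    "rolePermissions": [
      {
        "allowedResourceActions": [
          "microsoft.directory/devices/enable",
          "microsoft.directory/devices/disable"
        ]
      }
    ]
  }'
```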
+## Read BitLocker recovery keys
+
+The following permission is available to read BitLocker metadata and recovery keys. Note that this single permission grants read access to both the BitLocker metadata and the recovery keys.
+
+- microsoft.directory/bitlockerKeys/key/read
+
+You can view the BitLocker recovery key by selecting a device from the **All Devices** page, and then selecting **Show Recovery Key**. For more information about reading BitLocker recovery keys, see [View or copy BitLocker keys](../devices/device-management-azure-portal.md#view-or-copy-bitlocker-keys).
+
+![Screenshot showing Bitlocker keys in Azure portal.](./media/custom-device-permissions/bitlocker-keys.png)
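Recovery keys can also be retrieved programmatically through Microsoft Graph. The sketch below is illustrative only; the key ID is a placeholder, and it assumes the signed-in account has been granted the appropriate BitLocker key read permission.

```azurecli-interactive
# Minimal sketch: list the BitLocker recovery key IDs visible to the signed-in user.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/informationProtection/bitlocker/recoveryKeys"

# Minimal sketch: read the key material for a single recovery key (placeholder key ID).
az rest --method GET \
  --url 'https://graph.microsoft.com/v1.0/informationProtection/bitlocker/recoveryKeys/{key-id}?$select=key'
```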
+
+## Read BitLocker metadata
+
+The following permission is available to read the BitLocker metadata for all devices.
+
+- microsoft.directory/bitlockerKeys/metadata/read
+
+You can read the BitLocker metadata for all devices, but you can't read the BitLocker recovery key.
+
+![Screenshot showing Bitlocker metadata in Azure portal.](./media/custom-device-permissions/bitlocker-metadata.png)
+
+## Read device registration policies
+
+The following permission is available to read tenant-wide device registration settings.
+
+- microsoft.directory/deviceRegistrationPolicy/standard/read
+
+You can read device settings in the Azure portal.
+
+![Screenshot showing Device settings page in Azure portal.](./media/custom-device-permissions/device-settings.png)
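If you prefer to script this check, the device registration policy is also exposed through Microsoft Graph. At the time of writing it appears on the beta endpoint, so treat the following as an illustrative sketch rather than a stable contract.

```azurecli-interactive
# Illustrative sketch: read the tenant-wide device registration policy (beta endpoint at the time of writing).
az rest --method GET \
  --url "https://graph.microsoft.com/beta/policies/deviceRegistrationPolicy"
```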
+
+## Update device registration policies
+
+The following permission is available to update tenant-wide device registration settings.
+
+- microsoft.directory/deviceRegistrationPolicy/basic/update
+
+## Full list of permissions
+
+#### Read
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/devices/createdFrom/read | Read createdfrom properties of devices |
+> | microsoft.directory/devices/registeredOwners/read | Read registered owners of devices |
+> | microsoft.directory/devices/registeredUsers/read | Read registered users of devices |
+> | microsoft.directory/devices/standard/read | Read basic properties on devices |
+> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
+> | microsoft.directory/bitlockerKeys/metadata/read | Read bitlocker metadata on devices |
+> | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies |
+
+#### Update
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/devices/registeredOwners/update | Update registered owners of devices |
+> | microsoft.directory/devices/registeredUsers/update | Update registered users of devices |
+> | microsoft.directory/devices/enable | Enable devices in Azure AD |
+> | microsoft.directory/devices/disable | Disable devices in Azure AD |
+> | microsoft.directory/deviceRegistrationPolicy/basic/update | Update basic properties on device registration policies |
+
+#### Delete
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/devices/delete | Delete devices from Azure AD |
+
+## Next steps
+
+- [Create and assign a custom role in Azure Active Directory](custom-create.md)
+- [List Azure AD role assignments](view-assignments.md)
active-directory Cisco Umbrella User Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 1. Plan your provisioning deployment

1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and Cisco Umbrella User Management](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Cisco Umbrella User Management](../app-provisioning/customize-application-attributes.md).
## Step 2. Import ObjectGUID attribute via Azure AD Connect (Optional)
-If you have previously provisioned user and group identities from on-premise AD to Cisco Umbrella and would now like to provision the same users and groups from Azure AD, you will need to synchronize the ObjectGUID attribute so that previously provisioned identities persist in the Umbrella policy.
+If you have previously provisioned user identities from on-premises AD to Cisco Umbrella and would now like to provision the same users from Azure AD, you will need to synchronize the ObjectGUID attribute so that previously provisioned identities persist in the Umbrella reporting. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD.
> [!NOTE]
> The on-premises Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
-When using Microsoft Azure AD Connect, the ObjectGUID attribute of users and groups is not synchronized from on-premise AD to Azure AD by default. To synchronize this attribute, enable the optional **Directory Extension attribute sync** and select the objectGUID attributes for groups and users.
+When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not synchronized from on-premises AD to Azure AD by default. To synchronize this attribute, enable the optional **Directory Extension attribute sync** and select the objectGUID attributes for users.
![Azure Active Directory Connect wizard Optional features page](./media/cisco-umbrella-user-management-provisioning-tutorial/active-directory-connect-directory-extension-attribute-sync.png)
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users and gro
1. Log in to [Cisco Umbrella dashboard](https://login.umbrella.com). Navigate to **Deployments** > **Core Identities** > **Users and Groups**.
-2. If the import mechanism is set to Manual import, click on **Import from IdP** to switch the import mechanism.
-3. Expand the Azure Active Directory card and click on the **API Keys page**.
+1. Expand the Azure Active Directory card and click on the **API Keys page**.
![Api](./media/cisco-umbrella-user-management-provisioning-tutorial/keys.png)
-4. Expand the Azure Active Directory card on the API Keys page and click on **Generate Token**.
+1. Expand the Azure Active Directory card on the API Keys page and click on **Generate Token**.
![Generate](./media/cisco-umbrella-user-management-provisioning-tutorial/token.png)
-5. The generated token will be displayed only once. Copy and save the URL and the token. These values will be entered in the **Tenant URL** and **Secret Token** fields respectively in the Provisioning tab of your Cisco Umbrella User Management application in the Azure portal.
+1. The generated token will be displayed only once. Copy and save the URL and the token. These values will be entered in the **Tenant URL** and **Secret Token** fields respectively in the Provisioning tab of your Cisco Umbrella User Management application in the Azure portal.
## Step 4. Add Cisco Umbrella User Management from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **Cisco Umbrella User Management**.
+1. In the applications list, select **Cisco Umbrella User Management**.
![The Cisco Umbrella User Management link in the Applications list](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Provisioning tab](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Provisioning tab automatic](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your Cisco Umbrella User Management Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Cisco Umbrella User Management. If the connection fails, ensure your Cisco Umbrella User Management account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your Cisco Umbrella User Management Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Cisco Umbrella User Management. If the connection fails, ensure your Cisco Umbrella User Management account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
-7. Select **Save**.
+1. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Cisco Umbrella User Management**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Cisco Umbrella User Management**.
-9. Review the user attributes that are synchronized from Azure AD to Cisco Umbrella User Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Cisco Umbrella User Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Cisco Umbrella User Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Cisco Umbrella User Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Cisco Umbrella User Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Cisco Umbrella User Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
> [!NOTE]
> If you have imported the objectGUID attribute for users via Azure AD Connect (refer Step 2), add a mapping from objectGUID to urn:ietf:params:scim:schemas:extension:ciscoumbrella:2.0:User:nativeObjectId.
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Cisco Umbrella User Management**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Cisco Umbrella User Management**.
-11. Review the group attributes that are synchronized from Azure AD to Cisco Umbrella User Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Cisco Umbrella User Management for update operations. Select the **Save** button to commit any changes.
+1. Review the group attributes that are synchronized from Azure AD to Cisco Umbrella User Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Cisco Umbrella User Management for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering|
|---|---|---|
|displayName|String|&check;|
|externalId|String|
|members|Reference|
- |urn:ietf:params:scim:schemas:extension:ciscoumbrella:2.0:Group:nativeObjectId|String|
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-> [!NOTE]
-> If you have imported the objectGUID attribute for groups via Azure AD Connect (refer Step 2), add a mapping from objectGUID to urn:ietf:params:scim:schemas:extension:ciscoumbrella:2.0:Group:nativeObjectId.
-
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-13. To enable the Azure AD provisioning service for Cisco Umbrella User Management, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Cisco Umbrella User Management, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to Cisco Umbrella User Management by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and/or groups that you would like to provision to Cisco Umbrella User Management by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-15. When you are ready to provision, click **Save**.
+1. When you are ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups
## Step 7. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Connector Limitations

* Cisco Umbrella User Management supports provisioning a maximum of 200 groups. Any groups beyond this number that are in scope may not be provisioned to Cisco Umbrella.
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both Gong and Azure Act
> * Create users in Gong.
> * Remove users in Gong when they do not require access anymore.
> * Keep user attributes synchronized between Azure AD and Gong.
+> * Provision groups and group memberships in Gong.
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A user account in Gong with **Technical Administrator** privileges.
+* A user account in Gong with **Technical Administrator** privilege.
## Step 1. Plan your provisioning deployment
This section guides you through the steps to configure the Azure AD provisioning
1. Review the user attributes that are synchronized from Azure AD to Gong in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Gong for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Gong API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for filtering|Required by Gong|
- |||||
+ |Attribute|Type|Supported for filtering|Required by Gong|
+ |---|---|---|---|
|userName|String|&check;|&check;
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
|active|Boolean||
This section guides you through the steps to configure the Azure AD provisioning
|locale|String||
|timezone|String||
|urn:ietf:params:scim:schemas:extension:Gong:2.0:User:stateOrProvince|String||
- |urn:ietf:params:scim:schemas:extension:Gong:2.0:User:country|String||
+ |urn:ietf:params:scim:schemas:extension:Gong:2.0:User:country|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Gong**.
+
+1. Review the group attributes that are synchronized from Azure AD to Gong in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Gong for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Gong|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change Log
+03/23/2022 - Added support for **Group Provisioning**.
## More resources
active-directory Informatica Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/informatica-platform-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Informatica Platform'
+description: Learn how to configure single sign-on between Azure Active Directory and Informatica Platform.
+ Last updated : 03/23/2022
+# Tutorial: Azure AD SSO integration with Informatica Platform
+
+In this tutorial, you'll learn how to integrate Informatica Platform with Azure Active Directory (Azure AD). When you integrate Informatica Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to Informatica Platform.
+* Enable your users to be automatically signed-in to Informatica Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Informatica Platform single sign-on (SSO) enabled subscription.
+
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Informatica Platform supports **SP** and **IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Informatica Platform from the gallery
+
+To configure the integration of Informatica Platform into Azure AD, you need to add Informatica Platform from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Informatica Platform** in the search box.
+1. Select **Informatica Platform** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Informatica Platform
+
+Configure and test Azure AD SSO with Informatica Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Informatica Platform.
+
+To configure and test Azure AD SSO with Informatica Platform, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Informatica Platform SSO](#configure-informatica-platform-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Informatica Platform test user](#create-informatica-platform-test-user)** - to have a counterpart of B.Simon in Informatica Platform that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Informatica Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the value:
+ `Informatica`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<host name: port number>/administrator/Login.do`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<host name: port number>/administrator/saml/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Informatica Platform Client support team](mailto:support@informatica.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Informatica Platform application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Informatica Platform application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ | --| - |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+ | Administrator | user.givenname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificate-base64-download.png)
+
+1. On the **Set up Informatica Platform** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Informatica Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Informatica Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Informatica Platform SSO
+
+To configure single sign-on on **Informatica Platform** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [Informatica Platform support team](mailto:support@informatica.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Informatica Platform test user
+
+In this section, you create a user called Britta Simon in Informatica Platform. Work with the [Informatica Platform support team](mailto:support@informatica.com) to add the users in Informatica Platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Informatica Platform Sign on URL where you can initiate the login flow.
+
+* Go to Informatica Platform Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Informatica Platform for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Informatica Platform tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Informatica Platform for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Informatica Platform you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Palo Alto Networks Scim Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/palo-alto-networks-scim-connector-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Palo Alto Networks SCIM Connector for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Palo Alto Networks SCIM Connector.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: b44885ef-fc1c-473c-9948-d7ca54d42d49
+++
+ms.devlang: na
+ Last updated : 03/23/2022
+# Tutorial: Configure Palo Alto Networks SCIM Connector for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Palo Alto Networks SCIM Connector and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Palo Alto Networks SCIM Connector](https://www.paloaltonetworks.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Palo Alto Networks SCIM Connector.
+> * Remove users in Palo Alto Networks SCIM Connector when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Palo Alto Networks SCIM Connector.
+> * Provision groups and group memberships in Palo Alto Networks SCIM Connector.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Palo Alto Networks with Admin rights.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Palo Alto Networks SCIM Connector](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Palo Alto Networks SCIM Connector to support provisioning with Azure AD
+
+Contact [Palo Alto Networks Customer Support](https://support.paloaltonetworks.com/support) to obtain the **SCIM Url** and corresponding **Token**.
+
+## Step 3. Add Palo Alto Networks SCIM Connector from the Azure AD application gallery
+
+Add Palo Alto Networks SCIM Connector from the Azure AD application gallery to start managing provisioning to Palo Alto Networks SCIM Connector. If you have previously set up Palo Alto Networks SCIM Connector for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Palo Alto Networks SCIM Connector, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Palo Alto Networks SCIM Connector
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Palo Alto Networks SCIM Connector based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Palo Alto Networks SCIM Connector in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Palo Alto Networks SCIM Connector**.
+
+ ![The Palo Alto Networks SCIM Connector link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Palo Alto Networks SCIM Connector Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Palo Alto Networks SCIM Connector. If the connection fails, ensure your Palo Alto Networks account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Palo Alto Networks SCIM Connector**.
+
+1. Review the user attributes that are synchronized from Azure AD to Palo Alto Networks SCIM Connector in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Palo Alto Networks SCIM Connector for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Palo Alto Networks SCIM Connector API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Palo Alto Networks SCIM Connector|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |displayName|String||&check;
+ |title|String||
+ |emails[type eq "work"].value|String||
+ |emails[type eq "other"].value|String||
+ |preferredLanguage|String||
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |name.formatted|String||&check;
+ |name.honorificSuffix|String||
+ |name.honorificPrefix|String||
+ |addresses[type eq "work"].formatted|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||
+ |addresses[type eq "other"].formatted|String||
+ |addresses[type eq "other"].streetAddress|String||
+ |addresses[type eq "other"].locality|String||
+ |addresses[type eq "other"].region|String||
+ |addresses[type eq "other"].postalCode|String||
+ |addresses[type eq "other"].country|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |phoneNumbers[type eq "fax"].value|String||
+ |externalId|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+
+ > [!NOTE]
+ > **Schema Discovery** is enabled on this app. Hence you might see more attributes in the application than mentioned in the table above.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Palo Alto Networks SCIM Connector**.
+
+1. Review the group attributes that are synchronized from Azure AD to Palo Alto Networks SCIM Connector in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Palo Alto Networks SCIM Connector for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Palo Alto Networks SCIM Connector|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Palo Alto Networks SCIM Connector, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Palo Alto Networks SCIM Connector by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
++
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
Title: Snapshot Azure Kubernetes Service (AKS) node pools (preview)
+ Title: Snapshot Azure Kubernetes Service (AKS) node pools
description: Learn how to snapshot AKS cluster node pools and create clusters and node pools from a snapshot.
-# Azure Kubernetes Service (AKS) node pool snapshot (preview)
+# Azure Kubernetes Service (AKS) node pool snapshot
AKS releases a new node image weekly and every new cluster, new node pool, or upgrade cluster will always receive the latest image that can make it hard to maintain your environments consistent and to have repeatable environments.
Node pool snapshots allow you to take a configuration snapshot of your node pool
The snapshot is an Azure resource that will contain the configuration information from the source node pool such as the node image version, kubernetes version, OS type, and OS SKU. You can then reference this snapshot resource and the respective values of its configuration to create any new node pool or cluster based off of it.
-
## Before you begin

This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
This article assumes that you have an existing AKS cluster. If you need an AKS c
### Limitations

- Any node pool or cluster created from a snapshot must use a VM from the same virtual machine family as the snapshot. For example, you can't create a new N-Series node pool based off a snapshot captured from a D-Series node pool because the node images in those cases are structurally different.
-- During preview, snapshots must be created and used in the same region as the source node pool.
-
-### Install aks-preview CLI extension
-
-You also need the *aks-preview* Azure CLI extension version 0.5.40 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `SnapshotPreview` preview feature
-
-To use the feature, you must also enable the `SnapshotPreview` feature flag on your subscription.
-
-Register the `SnapshotPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "SnapshotPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/SnapshotPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+- Snapshots must be created and used in the same region as the source node pool.
## Take a node pool snapshot
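As a rough illustration (the resource group, cluster, node pool, and snapshot names are placeholders, and it assumes a recent Azure CLI), taking a snapshot of an existing node pool and then creating a new node pool from it looks like this:

```azurecli-interactive
# Get the resource ID of the source node pool (placeholder names).
NODEPOOL_ID=$(az aks nodepool show \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --query id -o tsv)

# Take a snapshot of that node pool.
az aks nodepool snapshot create \
  --resource-group myResourceGroup \
  --name MySnapshot \
  --nodepool-id "$NODEPOOL_ID"

# Later, create a new node pool that reuses the snapshot's configuration.
SNAPSHOT_ID=$(az aks nodepool snapshot show \
  --resource-group myResourceGroup \
  --name MySnapshot \
  --query id -o tsv)

az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool2 \
  --snapshot-id "$SNAPSHOT_ID"
```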
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Tags : {}
When you upgrade your cluster, the following Kubernetes events may occur on each node (a sample `kubectl` command for viewing node events follows the list):

* Surge – Create surge node.
-* Drain – Pods are being evicted from the node. Each pod has a 30 minute timeout to complete the eviction.
+* Drain – Pods are being evicted from the node. Each pod has a 5 minute timeout to complete the eviction.
* Update – Update of a node has succeeded or failed.
* Delete – Deleted a surge node.
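For illustration, one way to watch these events while an upgrade is in progress is to query the events recorded for a node; the node name below is a placeholder.

```azurecli-interactive
# Illustrative only: list recent events for a specific node (replace the node name with one from 'kubectl get nodes').
kubectl get events \
  --field-selector involvedObject.kind=Node,involvedObject.name=aks-nodepool1-12345678-vmss000000 \
  --sort-by=.lastTimestamp
```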
app-service App Service Web Nodejs Best Practices And Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-nodejs-best-practices-and-troubleshoot-guide.md
The default value is false. When set to true, iisnode displays the HTTP status c
This setting controls the debugging feature. Iisnode is integrated with node-inspector. By enabling this setting, you enable debugging of your node application. Upon enabling this setting, iisnode creates node-inspector files in the 'debuggerVirtualDir' directory on the first debug request to your node application. You can load the node-inspector by sending a request to `http://yoursite/server.js/debug`. You can control the debug URL segment with the 'debuggerPathSegment' setting. By default, debuggerPathSegment='debug'. You can set `debuggerPathSegment` to a GUID, for example, so that it is more difficult to be discovered by others.
-Read [Debug node.js applications on Windows](https://tomasz.janczuk.org/2011/11/debug-nodejs-applications-on-windows.html) for more details on debugging.
+Read [Debug Node.js applications on Windows](https://tomasz.janczuk.org/2011/11/debug-nodejs-applications-on-windows.html) for more details on debugging.
## Scenarios and recommendations/troubleshooting
You can see that 95% of the time was consumed by the WriteConsoleLog function. T
If your application is consuming too much memory, you see a notice from Azure App Service on your portal about high memory consumption. You can set up monitors to watch for certain [metrics](web-sites-monitor.md). When checking the memory usage on the [Azure portal Dashboard](../azure-monitor/essentials/metrics-charts.md), be sure to check the MAX values for memory so you don't miss the peak values.
-#### Leak detection and Heap Diff for node.js
+#### Leak detection and Heap Diff for Node.js
You could use [node-memwatch](https://github.com/lloyd/node-memwatch) to help you identify memory leaks. You can install `memwatch` just like v8-profiler and edit your code to capture and diff heaps to identify the memory leaks in your application.
NODE.exe has a setting called `NODE_PENDING_PIPE_INSTANCES`. On Azure App Servic
## More resources
-Follow these links to learn more about node.js applications on Azure App Service.
+Follow these links to learn more about Node.js applications on Azure App Service.
* [Get started with Node.js web apps in Azure App Service](quickstart-nodejs.md) * [How to debug a Node.js web app in Azure App Service](/archive/blogs/azureossds/debugging-node-js-apps-on-azure-app-services)
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
To remove all identities in an ARM template:
## REST endpoint reference
-> [!NOTE]
-> An older version of this endpoint, using the "2017-09-01" API version, used the `secret` header instead of `X-IDENTITY-HEADER` and only accepted the `clientid` property for user-assigned. It also returned the `expires_on` in a timestamp format. `MSI_ENDPOINT` can be used as an alias for `IDENTITY_ENDPOINT`, and `MSI_SECRET` can be used as an alias for `IDENTITY_HEADER`. This version of the protocol is currently required for Linux Consumption hosting plans.
An app with a managed identity makes this endpoint available by defining two environment variables:

- IDENTITY_ENDPOINT - the URL to the local token service.
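For illustration only, a token request against this endpoint looks roughly like the following sketch. It assumes the 2019-08-01 protocol version, runs from inside the app (for example, in an App Service SSH session), and uses Azure Key Vault purely as an example resource.

```bash
# Illustrative sketch: request a token for Azure Key Vault from the local token service.
# IDENTITY_ENDPOINT and IDENTITY_HEADER are injected into the app by App Service.
curl "${IDENTITY_ENDPOINT}?resource=https://vault.azure.net&api-version=2019-08-01" \
  -H "X-IDENTITY-HEADER: ${IDENTITY_HEADER}"
```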
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Azure Form Recognizer prebuilt models enable you to add intelligent document pro
| **Model** | **Description** |
| --- | --- |
+|**Document analysis**||
| 🆕[Read (preview)](#read-preview) | Extract printed and handwritten text lines, words, locations, and detected languages.|
+| 🆕[General document (preview)](#general-document-preview) | Extract text, tables, structure, key-value pairs, and named entities.|
+| [Layout](#layout) | Extract text and layout information from documents.|
+|**Prebuilt**||
| 🆕[W-2 (preview)](#w-2-preview) | Extract employee, employer, wage information, etc. from US W-2 forms. |
-| 🆕[General document (preview)](#general-document-preview) | Extract text, tables, structure, key-value pairs, and named entities. |
-| [Layout](#layout) | Extracts text and layout information from documents. |
| [Invoice](#invoice) | Extract key information from English and Spanish invoices. |
| [Receipt](#receipt) | Extract key information from English receipts. |
| [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-
Form Recognizer uses the following models to easily identify, extract, and analyze document data:
-* [**W-2 form model**](concept-w2.md) | Extract text and key information from US W2 tax forms.
+**Document analysis models**
+ * [**Read model**](concept-read.md) | Extract printed and handwritten text lines, words, locations, and detected languages from documents and images.
+* [**Layout model**](concept-layout.md) | Extract text, tables, selection marks, and structure information from documents (PDF and TIFF) and images (JPG, PNG, and BMP).
* [**General document model**](concept-general-document.md) | Extract key-value pairs, selection marks, and entities from documents.
+
+**Prebuilt models**
+
+* [**W-2 form model**](concept-w2.md) | Extract text and key information from US W2 tax forms.
* [**Invoice model**](concept-invoice.md) | Extract text, selection marks, tables, key-value pairs, and key information from invoices.
* [**Receipt model**](concept-receipt.md) | Extract text and key information from receipts.
* [**ID document model**](concept-id-document.md) | Extract text and key information from driver licenses and international passports.
The following features are supported by Form Recognizer v2.1. Use the links in t
| Feature | Description | Development options |
|-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, and table structures, along with their bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>| |[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>| |[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
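As a rough sketch of calling the v2.1 Layout API through REST (the endpoint host, key, and document URL below are placeholders, and your resource's endpoint may differ):

```bash
# Submit a document for layout analysis; the operation is asynchronous.
curl -i -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v2.1/layout/analyze" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"source": "https://example.com/sample-form.pdf"}'

# The Operation-Location response header contains the URL to poll for results.
```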
automanage Automanage Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-arc.md
Previously updated : 12/10/2021 Last updated : 03/22/2022 # Azure Automanage for Machines Best Practices - Azure Arc-enabled servers
Automanage supports the following operating systems for Azure Arc-enabled server
|[Machines Insights Monitoring](../azure-monitor/vm/vminsights-overview.md) |Azure Monitor for machines monitors the performance and health of your virtual machines, including their running processes and dependencies on other resources. |Production | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. |Production, Dev/Test |
-|[Azure Guest Configuration](../governance/policy/concepts/guest-configuration.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the Guest Configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. |Production, Dev/Test |
+|[Azure Guest Configuration](../governance/policy/concepts/guest-configuration.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure security baseline using the Guest Configuration extension. For Arc machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. |Production, Dev/Test |
|[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. |Production, Dev/Test | |[Log Analytics Workspace](../azure-monitor/logs/log-analytics-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. |Production, Dev/Test |
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-windows-server.md
Previously updated : 12/10/2021 Last updated : 03/22/2022
Automanage supports the following Windows Server versions:
|[Microsoft Antimalware](../security/fundamentals/antimalware.md) |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. **Note:** Microsoft Antimalware requires that there be no other antimalware software installed, or it may fail to work. |Production, Dev/Test | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. |Production, Dev/Test |
-|[Guest configuration](../governance/policy/concepts/guest-configuration.md) | Guest configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. For Windows machines, the guest configuration service will automatically reapply the baseline settings if they are out of compliance. |Production, Dev/Test |
+|[Guest configuration](../governance/policy/concepts/guest-configuration.md) | Guest configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. For Windows machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). To modify the audit mode for Windows machines, use a custom profile to choose your audit mode setting. [Learn more](virtual-machines-custom-profile.md) |Production, Dev/Test |
|[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test | |[Windows Admin Center](/windows-server/manage/windows-admin-center/azure/manage-vm) | Use Windows Admin Center (preview) in the Azure portal to manage the Windows Server operating system inside an Azure VM. This is only supported for machines using Windows Server 2016 or higher. Automanage configures Windows Admin Center over a Private IP address. If you wish to connect with Windows Admin Center over a Public IP address, please open an inbound port rule for port 6516. Automanage onboards Windows Admin Center for the Dev/Test profile by default. Use the preferences to enable or disable Windows Admin Center for the Production and Dev/Test environments. |Production, Dev/Test | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. |Production, Dev/Test |
automanage Virtual Machines Custom Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-custom-profile.md
Previously updated : 12/10/2021 Last updated : 03/22/2022
Azure Automanage for machine best practices has default best practice profiles that cannot be edited. However, if you need more flexibility, you can pick and choose the set of services and settings by creating a custom profile.
-We support toggling services ON and OFF. We also currently support customizing settings on [Azure Backup](..\backup\backup-azure-arm-vms-prepare.md#create-a-custom-policy) and [Microsoft Antimalware](../security/fundamentals/antimalware.md#default-and-custom-antimalware-configuration).
+We support toggling services ON and OFF. We also currently support customizing settings on [Azure Backup](..\backup\backup-azure-arm-vms-prepare.md#create-a-custom-policy) and [Microsoft Antimalware](../security/fundamentals/antimalware.md#default-and-custom-antimalware-configuration). Also, for Windows machines only, you can modify the audit modes for the [Azure security baselines in Guest Configuration](../governance/policy/concepts/guest-configuration.md). Check out the [ARM template](#create-a-custom-profile-using-azure-resource-manager-templates) for modifying the **azureSecurityBaselineAssignmentType**.
-## Prerequisites
-If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/) before you begin.
+## Create a custom profile in the Azure portal
-> [!NOTE]
-> Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription.
-
-> [!IMPORTANT]
-> The following Azure RBAC permission is needed to enable Automanage for the first time on a subscription: **Owner** role, or **Contributor** along with **User Access Administrator** roles.
--
-## Sign in to Azure
+### Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com/).
-## Enable Automanage for VMs on an existing VM
+### Create a custom profile
1. In the search bar, search for and select **Automanage – Azure machine best practices**.
-2. Select the **Enable on existing VM**.
+2. Select **Configuration Profiles** in the table of contents.
-3. Under **Configuration profile**, select **Custom profile**.
+3. Select the **Create** button to create your custom profile.
-4. Select an existing custom profile from the dropdown if one exists or create a new custom profile by clicking **Create new**.
-
-5. On the **Create new profile** blade, fill out the details:
+4. On the **Create new profile** blade, fill out the details:
1. Profile Name 1. Subscription 1. Resource Group
Sign in to the [Azure portal](https://portal.azure.com/).
:::image type="content" source="media\virtual-machine-custom-profile\create-custom-profile.png" alt-text="Fill out custom profile details.":::
-6. Adjust the profile with the desired services and settings and click **Create**.
-
-7. On the **Select machines** blade:
- 1. Filter the VMs list by your **Subscription** and **Resource group**.
- 1. Check the checkbox of each virtual machine you want to onboard.
- 1. Click the **Select** button.
-
- :::image type="content" source="media\virtual-machine-custom-profile\existing-vm-select-machine.png" alt-text="Select existing VM from list of available VMs.":::
-
- > [!NOTE]
- > Click the **Show ineligible machines** to see the list of unsupported machines and the reasoning.
-
-## Disable Automanage for VMs
-
-Quickly stop using Azure Automanage for virtual machines by disabling automanagement.
--
-1. Go to the **Automanage – Azure machine best practices** page that lists all of your auto-managed VMs.
-1. Select the checkbox next to the virtual machine you want to disable.
-1. Click on the **Disable automanagement** button.
-1. Read carefully through the messaging in the resulting pop-up before agreeing to **Disable**.
--
-## Clean up resources
-
-If you created a new resource group to try Azure Automanage for virtual machines and no longer need it, you can delete the resource group. Deleting the group also deletes the VM and all of the resources in the resource group.
-
-Azure Automanage creates default resource groups to store resources in. Check resource groups that have the naming convention "DefaultResourceGroupRegionName" and "AzureBackupRGRegionName" to clean up all resources.
-
-1. Select the **Resource group**.
-1. On the page for the resource group, select **Delete**.
-1. When prompted, confirm the name of the resource group and then select **Delete**.
-
+5. Adjust the profile with the desired services and settings and click **Create**.
++
+## Create a custom profile using Azure Resource Manager Templates
+
+The following ARM template creates an Automanage custom profile. Details about the template and deployment steps are in the [ARM template deployment](#arm-template-deployment) section.
+```json
+{
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "customProfileName": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "azureSecurityBaselineAssignmentType": {
+ "type": "string",
+ "allowedValues": [
+ "ApplyAndAutoCorrect",
+ "ApplyAndMonitor",
+ "Audit"
+ ]
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Automanage/configurationProfiles",
+ "apiVersion": "2021-04-30-preview",
+ "name": "[parameters('customProfileName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "configuration": {
+ "Antimalware/Enable": "true",
+ "Antimalware/EnableRealTimeProtection": "true",
+ "Antimalware/RunScheduledScan": "true",
+ "Antimalware/ScanType": "Quick",
+ "Antimalware/ScanDay": "7",
+ "Antimalware/ScanTimeInMinutes": "120",
+ "AzureSecurityBaseline/Enable": true,
+ "AzureSecurityBaseline/AssignmentType": "[parameters('azureSecurityBaselineAssignmentType')]",
+ "AzureSecurityCenter/Enable": true,
+ "Backup/Enable": "true",
+ "Backup/PolicyName": "dailyBackupPolicy",
+ "Backup/TimeZone": "UTC",
+ "Backup/InstantRpRetentionRangeInDays": "2",
+ "Backup/SchedulePolicy/ScheduleRunFrequency": "Daily",
+ "Backup/SchedulePolicy/ScheduleRunTimes": [
+ "2017-01-26T00:00:00Z"
+ ],
+ "Backup/SchedulePolicy/SchedulePolicyType": "SimpleSchedulePolicy",
+ "Backup/RetentionPolicy/RetentionPolicyType": "LongTermRetentionPolicy",
+ "Backup/RetentionPolicy/DailySchedule/RetentionTimes": [
+ "2017-01-26T00:00:00Z"
+ ],
+ "Backup/RetentionPolicy/DailySchedule/RetentionDuration/Count": "180",
+ "Backup/RetentionPolicy/DailySchedule/RetentionDuration/DurationType": "Days",
+ "BootDiagnostics/Enable": true,
+ "ChangeTrackingAndInventory/Enable": true,
+ "LogAnalytics/Enable": true,
+ "UpdateManagement/Enable": true,
+ "VMInsights/Enable": true
+ }
+ }
+ }
+ ]
+ }
+```
+
+### ARM template deployment
+This ARM template will create a custom configuration profile that you can assign to your specified machine.
+
+The `customProfileName` value is the name of the custom configuration profile that you would like to create.
+
+The `location` value is the region where you would like to store this custom configuration profile. Note that you can assign this profile to supported machines in any region.
+
+The `azureSecurityBaselineAssignmentType` value is the audit mode that you can choose for the Azure server security baseline. Your options are:
+
+* ApplyAndAutoCorrect: This will apply the Azure security baseline through the Guest Configuration extension, and if any setting within the baseline drifts, we will auto-remediate the setting so it stays compliant.
+* ApplyAndMonitor: This will apply the Azure security baseline through the Guest Configuration extension when you first assign this profile to each machine. After it is applied, the Guest Configuration service will monitor the server baseline and report any drift from the desired state. However, it will not auto-remediate.
+* Audit: This will install the Azure security baseline using the Guest Configuration extension. You will be able to see where your machine is out of compliance with the baseline, but noncompliance won't be automatically remediated.
+
+Follow these steps to deploy the ARM template:
+1. Save this ARM template as `azuredeploy.json`.
+1. Run the deployment with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json`.
+1. Provide the values for `customProfileName`, `location`, and `azureSecurityBaselineAssignmentType` when prompted.
+1. After the deployment completes, the custom configuration profile is ready to assign to your machines.
+
+As with any ARM template, it's possible to factor out the parameters into a separate `azuredeploy.parameters.json` file and use that as an argument when deploying.
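For example, a minimal parameters file for the template above might look like the following; the profile name, region, and assignment type values are placeholders:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "customProfileName": { "value": "myCustomProfile" },
    "location": { "value": "eastus" },
    "azureSecurityBaselineAssignmentType": { "value": "Audit" }
  }
}
```

You would then reference it during deployment, for example with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json --parameters @azuredeploy.parameters.json`.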
## Next steps
azure-arc Delete Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-managed-instance.md
demo-mi 1/1 10.240.0.4:32023 Ready
## Delete Azure Arc-enabled SQL Managed Instance
-To delete a SQL Managed Instance, run the following command:
+To delete a SQL Managed Instance, run the appropriate command for your deployment type. For example:
+
+### [Indirectly connected mode](#tab/indirectly)
```azurecli
-az sql mi-arc delete -n <NAME_OF_INSTANCE> --k8s-namespace <namespace> --use-k8s
+az sql mi-arc delete -n <instance_name> --k8s-namespace <namespace> --use-k8s
``` Output should look something like this:
Output should look something like this:
Deleted demo-mi from namespace arc ```
+### [Directly connected mode](#tab/directly)
+
+```azurecli
+az sql mi-arc delete -n <instance_name> -g <resource_group>
+```
+
+Output should look something like this:
+
+```azurecli
+# az sql mi-arc delete -n demo-mi -g my-rg
+Deleted demo-mi from namespace arc
+```
+++ ## Reclaim the Kubernetes Persistent Volume Claims (PVCs) A PersistentVolumeClaim (PVC) is a request for storage by a user from Kubernetes cluster while creating and adding storage to a SQL Managed Instance. Deleting a SQL Managed Instance does not remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). This is by design. The intention is to help the user to access the database files in case the deletion of instance was accidental. Deleting PVCs is not mandatory. However it is recommended. If you don't reclaim these PVCs, you'll eventually end up with errors as your Kubernetes cluster will run out of disk space or usage of the same SQL Managed Instance name while creating new instance might cause inconsistencies. To reclaim the PVCs, take the following steps:
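A sketch of those steps with kubectl might look like the following; the namespace and PVC name are placeholders for the values shown by the first command:

```bash
# List the PVCs that remain in the namespace after deleting the instance.
kubectl get pvc --namespace arc

# Delete a specific PVC once you've confirmed its data is no longer needed.
kubectl delete pvc data-demo-mi-0 --namespace arc
```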
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
The following table describes the scenarios that are currently supported for Azu
|North Europe|Available|Available |Japan East|Available|Available |Korea Central|Available|Available
-|East Asia|Available|Available
|Southeast Asia|Available|Available |Australia East|Available|Available
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
-### March 08, 2022
+## March 08, 2022
|Component |Value |
|--|--|
This article identifies the component versions with each release of Azure Arc-en
|Arc enabled Kubernetes helm chart extension version|1.1.18911000| |Arc Data extension for Azure Data Studio|1.0|
-### February 25, 2022
+## February 25, 2022
|Component |Value |
|--|--|
This article identifies the component versions with each release of Azure Arc-en
|Arc enabled Kubernetes helm chart extension version|1.1.18791000| |Arc Data extension for Azure Data Studio|1.0|
-### January 27, 2022
+## January 27, 2022
|Component |Value |
|--|--|
This article identifies the component versions with each release of Azure Arc-en
|Arc enabled Kubernetes helm chart extension version|1.1.18501004| |Arc Data extension for Azure Data Studio|1.0|
-### December 16, 2021
+## December 16, 2021
The following table describes the components in this release.
azure-arc Conceptual Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2-ci-cd.md
Title: "CI/CD Workflow using GitOps (Flux v2) - Azure Arc-enabled Kubernetes" description: "This article provides a conceptual overview of a CI/CD workflow using GitOps"
-keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Helm, Arc, CI/CD, Azure DevOps"
+keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Helm, Arc, AKS, CI/CD, Azure DevOps"
Last updated 11/29/2021
-# CI/CD workflow using GitOps (Flux v2) - Azure Arc enabled Kubernetes
+# CI/CD workflow using GitOps (Flux v2)
Modern Kubernetes deployments contain multiple applications, clusters, and environments. With GitOps, you can manage these complex setups more easily, tracking the desired state of the Kubernetes environments declaratively with Git. Using common Git tooling to declare cluster state, you can increase accountability, facilitate fault investigation, and enable automation to manage environments.
The GitOps repository represents the current desired state of all environments a
### Kubernetes clusters
-At least one Azure Arc-enabled Kubernetes cluster serves the different environments needed by the application. For example, a single cluster can serve both a dev and QA environment through different namespaces. A second cluster can provide easier separation of environments and more fine-grained control.
+At least one Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) cluster serves the different environments needed by the application. For example, a single cluster can serve both a dev and QA environment through different namespaces. A second cluster can provide easier separation of environments and more fine-grained control.
## Example workflow
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
Title: 'Tutorial: Implement CI/CD with GitOps (Flux v2) in Azure Arc-enabled Kubernetes clusters'
-description: This tutorial walks through setting up a CI/CD solution using GitOps (Flux v2) in Azure Arc-enabled Kubernetes clusters. For a conceptual take on this workflow, see the CI/CD Workflow using GitOps - Azure Arc-enabled Kubernetes article.
-keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, ci/cd, devops"
+ Title: 'Tutorial: Implement CI/CD with GitOps (Flux v2)'
+description: This tutorial walks through setting up a CI/CD solution using GitOps (Flux v2) in Azure Arc-enabled Kubernetes or Azure Kubernetes Service clusters. For a conceptual take on this workflow, see the CI/CD Workflow using GitOps article.
+keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, ci/cd, devops"
Last updated 12/15/2021
-# Tutorial: Implement CI/CD with GitOps (Flux v2) using Azure Arc-enabled Kubernetes clusters
+# Tutorial: Implement CI/CD with GitOps (Flux v2)
-In this tutorial, you'll set up a CI/CD solution using GitOps (Flux v2) and Azure Arc-enabled Kubernetes clusters. Using the sample Azure Vote app, you'll:
+In this tutorial, you'll set up a CI/CD solution using GitOps (Flux v2) and Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters. Using the sample Azure Vote app, you'll:
> [!div class="checklist"]
-> * Create an Azure Arc-enabled Kubernetes cluster.
+> * Create an Azure Arc-enabled Kubernetes or AKS cluster.
> * Connect your application and GitOps repositories to Azure Repos or GitHub. > * Implement CI/CD flow with either Azure Pipelines or GitHub. > * Connect your Azure Container Registry to Azure DevOps and Kubernetes.
If you don't have an Azure subscription, create a [free account](https://azure.m
* Verify you have: * A [connected Azure Arc-enabled Kubernetes cluster](./quickstart-connect-cluster.md#connect-an-existing-kubernetes-cluster) named **arc-cicd-cluster**. * A connected Azure Container Registry with either [AKS integration](../../aks/cluster-container-registry-integration.md) or [non-AKS cluster authentication](../../container-registry/container-registry-auth-kubernetes.md).
-* Install the latest versions of these Azure Arc-enabled Kubernetes CLI extensions:
+* Install the latest versions of these Azure Arc-enabled Kubernetes and Kubernetes Configuration CLI extensions:
```azurecli az extension add --name connectedk8s
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
If you use a `bucket` source instead of a `git` source, here are the bucket-spec
| Parameter | Format | Notes |
| - | - | - |
-| `--url` `-u` | URL String | The URL for the `bucket`. Formats supported: http://, https://, s3://. |
+| `--url` `-u` | URL String | The URL for the `bucket`. Formats supported: http://, https://. |
| `--bucket-name` | String | Name of the `bucket` to sync. | | `--bucket-access-key` | String | Access Key ID used to authenticate with the `bucket`. | | `--bucket-secret-key` | String | Secret Key used to authenticate with the `bucket`. |
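For illustration only, creating a bucket-backed Flux configuration with the `k8s-configuration` CLI extension might look roughly like this; the cluster, bucket, and credential values are placeholders:

```azurecli
az k8s-configuration flux create \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --cluster-type connectedClusters \
  --name bucket-config \
  --namespace cluster-config \
  --url https://bucket.example.com \
  --bucket-name my-bucket \
  --bucket-access-key <access-key-id> \
  --bucket-secret-key <secret-key> \
  --kustomization name=infra path=./infrastructure prune=true
```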
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
Title: Azure Arc-enabled servers Overview description: Learn how to use Azure Arc-enabled servers to manage servers hosted outside of Azure like an Azure resource. Previously updated : 09/30/2021 Last updated : 03/22/2022 # What is Azure Arc-enabled servers?
-Azure Arc-enabled servers enables you to manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or other cloud provider. This management experience is designed to be consistent with how you manage native Azure virtual machines. When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID enabling the machine to be included in a resource group. Now you can benefit from standard Azure constructs, such as Azure Policy and applying tags. Service providers managing a customer's on-premises infrastructure can manage their hybrid machines, just like they do today with native Azure resources, across multiple customer environments using [Azure Lighthouse](../../lighthouse/how-to/manage-hybrid-infrastructure-arc.md).
+Azure Arc-enabled servers lets you manage Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or other cloud provider. This management experience is designed to be consistent with how you manage native Azure virtual machines, using standard Azure constructs such as Azure Policy and applying tags.
-To deliver this experience with your hybrid machines, you need to install the Azure Connected Machine agent on each machine. This agent does not deliver any other functionality, and it doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md). The Log Analytics agent for Windows and Linux is required when:
+When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID enabling the machine to be included in a resource group.
-* You want to proactively monitor the OS and workloads running on the machine,
-* Manage it using Automation runbooks or solutions like Update Management, or
-* Use other Azure services like [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md).
+To connect hybrid machines, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent does not deliver any other functionality, and it doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md). The Log Analytics agent for Windows and Linux is required in order to:
-## Supported cloud operations
+* Proactively monitor the OS and workloads running on the machine
+* Manage it using Automation runbooks or solutions like Update Management
+* Use other Azure services like [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
-When you connect your machine to Azure Arc-enabled servers, it enables the ability for you to perform the following operational functions as described in the following table.
+You can install the Connected Machine agent manually, or on multiple machines at scale, using the [deployment method](deployment-options.md) that works best for your scenario.
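For example, a manual, interactive connection run on the server itself looks roughly like the following; the resource group, region, and IDs are placeholders:

```bash
# Connect this machine to Azure Arc; the agent prompts for an interactive sign-in.
azcmagent connect \
  --resource-group "myResourceGroup" \
  --tenant-id "<tenant-id>" \
  --location "eastus" \
  --subscription-id "<subscription-id>"
```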
-|Operations function |Description |
-|--||
-|**Govern** ||
-| Azure Policy |Assign [Azure Policy guest configurations](../../governance/policy/concepts/guest-configuration.md) to audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/)|
-|**Protect** ||
-| Microsoft Defender for Cloud | Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint), included through [Microsoft Defender for Cloud](../../security-center/defender-for-servers-introduction.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats. Microsoft Defender for Cloud presents the alerts and remediation suggestions from the threats detected. |
-| Microsoft Sentinel | Machines connected to Arc-enabled servers can be [configured with Microsoft Sentinel](scenario-onboard-azure-sentinel.md) to collect security-related events and correlate them with other data sources. |
-|**Configure** ||
-| Azure Automation |Automate frequent and time-consuming management tasks using PowerShell and Python [runbooks](../../automation/automation-runbook-execution.md).<br> Assess configuration changes about installed software, Microsoft services, Windows registry and files, and Linux daemons using [Change Tracking and Inventory](../../automation/change-tracking/overview.md).<br> Use [Update Management](../../automation/update-management/overview.md) to manage operating system updates for your Windows and Linux servers. |
-| Azure Automanage (preview) | Automate onboarding and configuration of a set of Azure services when you use [Automanage Machine for Arc-enabled servers](../../automanage/automanage-arc.md). |
-| VM extensions | Provides post-deployment configuration and automation tasks using supported [Arc-enabled servers VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. |
-|**Monitor**|
-| Azure Monitor | Monitor the connected machine guest operating system performance, and discover application components to monitor their processes and dependencies with other resources using [VM insights](../../azure-monitor/vm/vminsights-overview.md). Collect other log data, such as performance data and events, from the operating system or workload(s) running on the machine with the [Log Analytics agent](../../azure-monitor/agents/agents-overview.md#log-analytics-agent). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/design-logs-deployment.md). |
-> [!NOTE]
-> At this time, enabling Azure Automation Update Management directly from an Azure Arc-enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and how to enable for your server.
+## Supported cloud operations
-Log data collected and stored in a Log Analytics workspace from the hybrid machine now contains properties specific to the machine, such as a Resource ID, to support [resource-context](../../azure-monitor/logs/design-logs-deployment.md#access-mode) log access.
+When you connect your machine to Azure Arc-enabled servers, you can perform many operational functions, just as you would with native Azure virtual machines. Below are some of the key supported actions for connected machines.
+* **Govern**:
+ * Assign [Azure Policy guest configurations](../../governance/policy/concepts/guest-configuration.md) to audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/).
+* **Protect**:
+ * Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint), included through [Microsoft Defender for Cloud](../../security-center/defender-for-servers-introduction.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats. Microsoft Defender for Cloud presents the alerts and remediation suggestions from the threats detected.
+ * Use [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) to collect security-related events and correlate them with other data sources.
+* **Configure**:
+ * Use Azure Automation for frequent and time-consuming management tasks using PowerShell and Python [runbooks](../../automation/automation-runbook-execution.md). Assess configuration changes for installed software, Microsoft services, Windows registry and files, and Linux daemons using [Change Tracking and Inventory](../../automation/change-tracking/overview.md).
+ * Use [Update Management](../../automation/update-management/overview.md) to manage operating system updates for your Windows and Linux servers. Automate onboarding and configuration of a set of Azure services when you use [Azure Automanage (preview)](../../automanage/automanage-arc.md).
+ * Perform post-deployment configuration and automation tasks using supported [Arc-enabled servers VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine.
+* **Monitor**:
+ * Monitor operating system performance and discover application components to monitor processes and dependencies with other resources using [VM insights](../../azure-monitor/vm/vminsights-overview.md).
+ * Collect other log data, such as performance data and events, from the operating system or workloads running on the machine with the [Log Analytics agent](../../azure-monitor/agents/agents-overview.md#log-analytics-agent). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/design-logs-deployment.md).
+
+> [!NOTE]
+> At this time, enabling Azure Automation Update Management directly from an Azure Arc-enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and [how to enable Update Management for non-Azure VMs](../../automation/update-management/enable-from-automation-account.md#enable-non-azure-vms).
+
+Log data collected and stored in a Log Analytics workspace from the hybrid machine contains properties specific to the machine, such as a Resource ID, to support [resource-context](../../azure-monitor/logs/design-logs-deployment.md#access-mode) log access.
-To learn more about how Azure Arc-enabled servers can be used to implement Azure monitoring, security, and update services across hybrid and multicloud environments, see the following video.
+Watch this video to learn more about Azure monitoring, security, and update services across hybrid and multicloud environments.
> [!VIDEO https://www.youtube.com/embed/mJnmXBrU1ao] ## Supported regions
-For a definitive list of supported regions with Azure Arc-enabled servers, see the [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc) page.
+For a list of supported regions with Azure Arc-enabled servers, see the [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc) page.
In most cases, the location you select when you create the installation script should be the Azure region geographically closest to your machine's location. Data at rest is stored within the Azure geography containing the region you specify, which may also affect your choice of region if you have data residency requirements. If the Azure region your machine connects to is affected by an outage, the connected machine is not affected, but management operations using Azure may be unable to complete. If there is a regional outage, and if you have multiple locations that support a geographically redundant service, it is best to connect the machines in each location to a different Azure region.
-The following metadata information about the connected machine is collected and stored in the region where the Azure Arc machine resource is configured:
+[Instance metadata information about the connected machine](agent-overview.md#instance-metadata) is collected and stored in the region where the Azure Arc machine resource is configured, including the following:
-- Operating system name and version-- Computer name-- Computer fully qualified domain name (FQDN)-- Connected Machine agent version
+* Operating system name and version
+* Computer name
+* Computer fully qualified domain name (FQDN)
+* Connected Machine agent version
-For example, if the machine is registered with Azure Arc in the East US region, this data is stored in the US region.
+For example, if the machine is registered with Azure Arc in the East US region, the metadata is stored in the US region.
## Supported environments
-Azure Arc-enabled servers support the management of physical servers and virtual machines hosted *outside* of Azure. For specific details of which hybrid cloud environments hosting VMs are supported, see [Connected Machine agent prerequisites](prerequisites.md#supported-environments).
+Azure Arc-enabled servers support the management of physical servers and virtual machines hosted *outside* of Azure. For specific details about supported hybrid cloud environments hosting VMs, see [Connected Machine agent prerequisites](prerequisites.md#supported-environments).
> [!NOTE] > Azure Arc-enabled servers is not designed or supported to enable management of virtual machines running in Azure. ## Agent status
-The Connected Machine agent sends a regular heartbeat message to the service every 5 minutes. If the service stops receiving these heartbeat messages from a machine, that machine is considered offline and the status will automatically be changed to **Disconnected** in the portal within 15 to 30 minutes. Upon receiving a subsequent heartbeat message from the Connected Machine agent, its status will automatically be changed to **Connected**.
+The Connected Machine agent sends a regular heartbeat message to the service every five minutes. If the service stops receiving these heartbeat messages from a machine, that machine is considered offline, and its status will automatically be changed to **Disconnected** within 15 to 30 minutes. Upon receiving a subsequent heartbeat message from the Connected Machine agent, its status will automatically be changed back to **Connected**. The status for a connected machine can be viewed in the Azure portal under **Azure Arc > Servers**.
## Service limits
-Azure Arc-enabled servers has a limit for the number of instances that can be created in each resource group. It does not have any limits at the subscription or service level. To learn about what resource type limits exist, see the [Resource instance limit](../../azure-resource-manager/management/resources-without-resource-group-limit.md#microsofthybridcompute) article.
+Azure Arc-enabled servers has a limit for the number of instances that can be created in each resource group. It does not have any limits at the subscription or service level.
-## Next steps
+To learn more about resource type limits, see the [Resource instance limit](../../azure-resource-manager/management/resources-without-resource-group-limit.md#microsofthybridcompute) article.
-* Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
+## Next steps
+* Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review the [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
azure-functions Durable Functions Code Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-code-constraints.md
The following table shows examples of APIs that you should avoid because they ar
| Network | Network calls involve external systems and are nondeterministic. | Use activity functions to make network calls. If you need to make an HTTP call from your orchestrator function, you also can use the [durable HTTP APIs](durable-functions-http-features.md#consuming-http-apis). | | Blocking APIs | Blocking APIs like `Thread.Sleep` in .NET and similar APIs can cause performance and scale problems for orchestrator functions and should be avoided. In the Azure Functions Consumption plan, they can even result in unnecessary runtime charges. | Use alternatives to blocking APIs when they're available. For example, use `CreateTimer` to introduce delays in orchestration execution. [Durable timer](durable-functions-timers.md) delays don't count towards the execution time of an orchestrator function. | | Async APIs | Orchestrator code must never start any async operation except by using the `IDurableOrchestrationContext` API, the `context.df` API in JavaScript, or the `context` API in Python. For example, you can't use `Task.Run`, `Task.Delay`, and `HttpClient.SendAsync` in .NET or `setTimeout` and `setInterval` in JavaScript. The Durable Task Framework runs orchestrator code on a single thread. It can't interact with any other threads that might be called by other async APIs. | An orchestrator function should make only durable async calls. Activity functions should make any other async API calls. |
-| Async JavaScript functions | You can't declare JavaScript orchestrator functions as `async` because the node.js runtime doesn't guarantee that asynchronous functions are deterministic. | Declare JavaScript orchestrator functions as synchronous generator functions |
+| Async JavaScript functions | You can't declare JavaScript orchestrator functions as `async` because the Node.js runtime doesn't guarantee that asynchronous functions are deterministic. | Declare JavaScript orchestrator functions as synchronous generator functions |
| Python Coroutines | You can't declare Python orchestrator functions as coroutines, i.e declare them with the `async` keyword, because coroutine semantics do not align with the Durable Functions replay model. | Declare Python orchestrator functions as generators, meaning that you should expect the `context` API to use `yield` instead of `await`. | | Threading APIs | The Durable Task Framework runs orchestrator code on a single thread and can't interact with any other threads. Introducing new threads into an orchestration's execution can result in nondeterministic execution or deadlocks. | Orchestrator functions should almost never use threading APIs. For example, in .NET, avoid using `ConfigureAwait(continueOnCapturedContext: false)`; this ensures task continuations run on the orchestrator function's original `SynchronizationContext`. If such APIs are necessary, limit their use to only activity functions. | | Static variables | Avoid using nonconstant static variables in orchestrator functions because their values can change over time, resulting in nondeterministic runtime behavior. | Use constants, or limit the use of static variables to activity functions. |
azure-functions Durable Functions Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-portal.md
If you are creating JavaScript Durable Functions, you'll need to install the [`d
1. Use an HTTP tool like Postman or cURL to send a POST request to the URL that you copied. The following example is a cURL command that sends a POST request to the durable function: ```bash
- curl -X POST https://{your-function-app-name}.azurewebsites.net/api/orchestrators/HelloSequence --header "Content-Length: 0"
+ curl -X POST https://{your-function-app-name}.azurewebsites.net/api/orchestrators/{functionName} --header "Content-Length: 0"
```
- In this example, `{your-function-app-name}` is the domain that is the name of your function app. The response message contains a set of URI endpoints that you can use to monitor and manage the execution, which looks like the following example:
+ In this example, `{your-function-app-name}` is the domain name of your function app, and `{functionName}` is the **HelloSequence** orchestrator function. The response message contains a set of URI endpoints that you can use to monitor and manage the execution, which looks like the following example:
```json {
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
From the [Java functions runtime library](/java/api/overview/azure/functions/run
+ [collectionName](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.collectionname) + [createIfNotExists](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.createifnotexists) + [dataType](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.datatype)
-+ [id](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.id)
+ [partitionKey](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.partitionkey) + [preferredLocations](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.preferredlocations) + [useMultipleWriteLocations](/java/api/com.microsoft.azure.functions.annotation.cosmosdboutput.usemultiplewritelocations)
By default, when you write to the output parameter in your function, a document
- [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md) - [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md)
-[extension version 4.x]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4
+[extension version 4.x]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
-Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages//dotnet/api/microsoft.azure.webjobs.blobattribute.Blobs/5.0.0-beta.4), version 5.x.
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage), version 5.x.
# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages//dotnet/api/microsoft.azure.webjobs.blobattribute), version 3.x.
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage), version 4.x.
# [Functions 1.x](#tab/functionsv1/isolated-process)
You can add this version of the extension from the preview extension bundle v3 b
```json {
- "version": "3.0",
+ "version": "2.0",
"extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[3.3.0, 4.0.0)"
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 03/21/2022 Last updated : 03/22/2022 # Compare Azure Government and global Azure
Azure Government services operate the same way as the corresponding services in
You can use Azure CLI or PowerShell to obtain Azure Government endpoints for services you provisioned: -- Use **Azure CLI** to run the [az cloud show](/cli/azure/cloud#az_cloud_show) comm
-provide `AzureUSGovernment` as the name of the target cloud environment. For example,
+- Use **Azure CLI** to run the [az cloud show](/cli/azure/cloud#az_cloud_show) command and provide `AzureUSGovernment` as the name of the target cloud environment. For example,
```azurecli az cloud show --name AzureUSGovernment
The calculation for recommending that you should right-size or shut down underut
If you want to be more aggressive at identifying underutilized virtual machines, you can adjust the CPU utilization rule on a per subscription basis.
-### [Azure Cost Management and Billing](../cost-management-billing/index.yml)
-
-The following Azure Cost Management + Billing **features are not currently available** in Azure Government:
--- Cost Management + Billing for cloud solution providers (CSPs)- ### [Azure Lighthouse](../lighthouse/index.yml) The following Azure Lighthouse **features are not currently available** in Azure Government:
You need to open some **outgoing ports** in your server's firewall to allow the
|-||-|--| |Telemetry|dc.applicationinsights.us|23.97.4.113|443|
+### [Cost Management and Billing](../cost-management-billing/index.yml)
+
+The following Azure Cost Management + Billing **features are not currently available** in Azure Government:
+
+- Cost Management + Billing for cloud solution providers (CSPs)
+ ## Media This section outlines variations and considerations when using Media services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=cdn,media-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
Last updated 10/12/2021
# Application Insights for ASP.NET Core applications
-This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application. When you complete the instructions in this article, Application Insights will collect requests, dependencies, exceptions, performance counters, heartbeats, and logs from your ASP.NET Core application.
+This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application.
-The example we'll use here is an [MVC application](/aspnet/core/tutorials/first-mvc-app) that targets `netcoreapp3.0`. You can apply these instructions to all ASP.NET Core applications. If you are using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md).
+Application Insights can collect the following telemetry from your ASP.NET Core application:
+
+> [!div class="checklist"]
+> * Requests
+> * Dependencies
+> * Exceptions
+> * Performance counters
+> * Heartbeats
+> * Logs
+
+We'll use an [MVC application](/aspnet/core/tutorials/first-mvc-app) example that targets `netcoreapp3.0`. You can apply these instructions to all ASP.NET Core applications. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md).
> [!NOTE] > A preview [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. [Learn more](opentelemetry-overview.md).
The example we'll use here is an [MVC application](/aspnet/core/tutorials/first-
## Supported scenarios
-The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. Support covers the following:
+The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. Support covers the following scenarios:
* **Operating system**: Windows, Linux, or Mac * **Hosting method**: In process or out of process * **Deployment method**: Framework dependent or self-contained * **Web server**: IIS (Internet Information Server) or Kestrel * **Hosting platform**: The Web Apps feature of Azure App Service, Azure VM, Docker, Azure Kubernetes Service (AKS), and so on
-* **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that are not in preview
+* **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that aren't in preview
* **IDE**: Visual Studio, Visual Studio Code, or command line > [!NOTE]
The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Micro
## Prerequisites - A functioning ASP.NET Core application. If you need to create an ASP.NET Core application, follow this [ASP.NET Core tutorial](/aspnet/core/getting-started/).-- A valid Application Insights instrumentation key. This key is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get an instrumentation key, see [Create an Application Insights resource](./create-new-resource.md).
+- A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md).
> [!IMPORTANT]
-> [Connection Strings](./sdk-connection-string.md?tabs=net) are recommended over instrumentation keys. New Azure regions **require** using connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.
-
+> [Connection Strings](./sdk-connection-string.md?tabs=net) are recommended over instrumentation keys. New Azure regions require using connection strings instead of instrumentation keys. Connection strings identify the appropriate endpoint for your Application Insights resource, which provides the fastest way to ingest your telemetry for alerting and reporting. You will need to copy the connection string and add it to your application's code or to an environment variable.
## Enable Application Insights server-side telemetry (Visual Studio)
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
2. Select **Project** > **Add Application Insights Telemetry**.
-3. Select **Get Started**. Depending on your version of Visual Studio, the name of this button might vary. In some earlier versions, it is named the **Start Free** button.
+3. Select **Get Started**. Depending on your version of Visual Studio, the name of this button might vary. In some earlier versions, it's named the **Start Free** button.
4. Select your subscription, and then select **Resource** > **Register**.
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
} ```
-3. Set up the instrumentation key.
+3. Set up the connection string.
- Although you can provide the instrumentation key as an argument to `AddApplicationInsightsTelemetry`, we recommend that you specify the instrumentation key in configuration. The following code sample shows how to specify an instrumentation key in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
+ Although you can provide the connection string as an argument to `AddApplicationInsightsTelemetry`, we recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
```json { "ApplicationInsights": {
- "InstrumentationKey": "putinstrumentationkeyhere"
+ "ConnectionString" : "Copy connection string from Application Insights Resource Overview"
}, "Logging": { "LogLevel": {
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
} ```
- Alternatively, specify the instrumentation key in either of the following environment variables:
-
- * `APPINSIGHTS_INSTRUMENTATIONKEY`
-
- * `ApplicationInsights:InstrumentationKey`
+ Alternatively, specify the connection string in the "APPLICATIONINSIGHTS_CONNECTION_STRING" environment variable or "ApplicationInsights:ConnectionString" in the JSON configuration file.
For example:
- * `SET ApplicationInsights:InstrumentationKey=putinstrumentationkeyhere`
-
- * `SET APPINSIGHTS_INSTRUMENTATIONKEY=putinstrumentationkeyhere`
+ * `SET ApplicationInsights:ConnectionString = <Copy connection string from Application Insights Resource Overview>`
- * Typically, `APPINSIGHTS_INSTRUMENTATIONKEY` is used in [Azure Web Apps](./azure-web-apps.md?tabs=net), but it can also be used in all places where this SDK is supported. (If you're doing codeless web app monitoring, this format is required if you aren't using connection strings.)
+ * `SET APPLICATIONINSIGHTS_CONNECTION_STRING = <Copy connection string from Application Insights Resource Overview>`
- In lieu of setting instrumentation keys, you can now also use [Connection Strings](./sdk-connection-string.md?tabs=net).
+ * Typically, `APPLICATIONINSIGHTS_CONNECTION_STRING` is used in [Azure Web Apps](./azure-web-apps.md?tabs=net), but it can also be used in all places where this SDK is supported.
> [!NOTE]
- > An instrumentation key specified in code wins over the environment variable `APPINSIGHTS_INSTRUMENTATIONKEY`, which wins over other options.
+ > A connection string specified in code wins over the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, which wins over other options.
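The following is a minimal sketch of providing the connection string in code. It assumes Microsoft.ApplicationInsights.AspNetCore 2.15.0 or later, where `ApplicationInsightsServiceOptions` exposes a `ConnectionString` property; the value shown is only a placeholder.

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry(options =>
    {
        // Placeholder value. A connection string set here takes precedence over
        // the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable and
        // over values read from configuration.
        options.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000";
    });
}
```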
### User secrets and other configuration providers
-If you want to store the instrumentation key in ASP.NET Core user secrets or retrieve it from another configuration provider, you can use the overload with a `Microsoft.Extensions.Configuration.IConfiguration` parameter. For example, `services.AddApplicationInsightsTelemetry(Configuration);`.
-Starting from Microsoft.ApplicationInsights.AspNetCore version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore), calling `services.AddApplicationInsightsTelemetry()` automatically reads the instrumentation key from `Microsoft.Extensions.Configuration.IConfiguration` of the application. There is no need to explicitly provide the `IConfiguration`.
+If you want to store the connection string in ASP.NET Core user secrets or retrieve it from another configuration provider, you can use the overload with a `Microsoft.Extensions.Configuration.IConfiguration` parameter. For example, `services.AddApplicationInsightsTelemetry(Configuration);`.
+In Microsoft.ApplicationInsights.AspNetCore version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) and later, calling `services.AddApplicationInsightsTelemetry()` automatically reads the connection string from `Microsoft.Extensions.Configuration.IConfiguration` of the application. There's no need to explicitly provide the `IConfiguration`.
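For SDK versions earlier than 2.15.0, a sketch of passing the configuration explicitly might look like the following. The `Startup` shape is the standard ASP.NET Core pattern, and the connection string is assumed to live under the `ApplicationInsights` configuration section.

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Reads ApplicationInsights:ConnectionString from any registered provider,
        // including user secrets. With SDK 2.15.0 and later, the parameterless
        // AddApplicationInsightsTelemetry() call reads IConfiguration automatically.
        services.AddApplicationInsightsTelemetry(Configuration);
    }
}
```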
## Run your application
The preceding steps are enough to help you start collecting server-side telemetr
</head> ```
-As an alternative to using the `FullScript`, the `ScriptBody` is available starting in Application Insights SDK for ASP.NET Core version 2.14. Use this if you need to control the `<script>` tag to set a Content Security Policy:
+As an alternative to using the `FullScript`, the `ScriptBody` is available starting in Application Insights SDK for ASP.NET Core version 2.14. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy:
```cshtml <script> // apply custom changes to this script tag.
As an alternative to using the `FullScript`, the `ScriptBody` is available start
</script> ```
-The `.cshtml` file names referenced earlier are from a default MVC application template. Ultimately, if you want to properly enable client-side monitoring for your application, the JavaScript snippet must appear in the `<head>` section of each page of your application that you want to monitor. To do this in this application template, add the JavaScript snippet to `_Layout.cshtml`.
+The `.cshtml` file names referenced earlier are from a default MVC application template. Ultimately, if you want to properly enable client-side monitoring for your application, the JavaScript snippet must appear in the `<head>` section of each page of your application that you want to monitor. Add the JavaScript snippet to `_Layout.cshtml` in an application template to enable client-side monitoring.
-If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md). To do this, add the JavaScript snippet to an equivalent file that controls the `<head>` of all pages within your app. Or you can add the snippet to multiple pages, but this solution is difficult to maintain and we generally don't recommend it.
+If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md) by adding the JavaScript snippet to an equivalent file that controls the `<head>` of all pages within your app. Alternatively, you can add the snippet to multiple pages, but we don't recommend it.
> [!NOTE]
-> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the instrumentation key, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#adding-the-javascript-sdk).
+> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#adding-the-javascript-sdk).
## Configure the Application Insights SDK
This table has the full list of `ApplicationInsightsServiceOptions` settings:
|EnableHeartbeat | Enable/Disable Heartbeats feature, which periodically (15-min default) sends a custom metric named 'HeartbeatState' with information about the runtime like .NET Version, Azure Environment information, if applicable, etc. | true |AddAutoCollectedMetricExtractor | Enable/Disable AutoCollectedMetrics extractor, which is a TelemetryProcessor that sends pre-aggregated metrics about Requests/Dependencies before sampling takes place. | true |RequestCollectionOptions.TrackExceptions | Enable/Disable reporting of unhandled Exception tracking by the Request collection module. | false in NETSTANDARD2.0 (because Exceptions are tracked with ApplicationInsightsLoggerProvider), true otherwise.
-|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling this will cause the following settings to be ignored; `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, `EnableAppServicesHeartbeatTelemetryModule` | true
+|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling it causes the following settings to be ignored: `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, `EnableAppServicesHeartbeatTelemetryModule`. | true
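As an illustrative sketch, the settings in this table can also be set in code through the `Action<ApplicationInsightsServiceOptions>` overload of `AddApplicationInsightsTelemetry`; the specific values shown here are examples, not recommendations.

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry(options =>
    {
        // Example values only.
        options.EnableHeartbeat = false;
        options.AddAutoCollectedMetricExtractor = false;
        // Disabling DiagnosticsTelemetryModule causes the heartbeat-related
        // settings to be ignored, as noted in the table.
        options.EnableDiagnosticsTelemetryModule = false;
    });
}
```

For SDK 2.15.0 and later, the configuration-based approach described in the next section is recommended.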
For the most current list, see the [configurable settings in `ApplicationInsightsServiceOptions`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs). ### Configuration recommendation for Microsoft.ApplicationInsights.AspNetCore SDK 2.15.0 and later
-In Microsoft.ApplicationInsights.AspNetCore SDK version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later, we recommend configuring every setting available in `ApplicationInsightsServiceOptions`, including **InstrumentationKey** using the application's `IConfiguration` instance. The settings must be under the section "ApplicationInsights", as shown in the following example. The following section from appsettings.json configures the instrumentation key and disables adaptive sampling and performance counter collection.
+In Microsoft.ApplicationInsights.AspNetCore SDK version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later, we recommend configuring every setting available in `ApplicationInsightsServiceOptions`, including **ConnectionString** using the application's `IConfiguration` instance. The settings must be under the section "ApplicationInsights", as shown in the following example. The following section from appsettings.json configures the connection string and disables adaptive sampling and performance counter collection.
```json { "ApplicationInsights": {
- "InstrumentationKey": "putinstrumentationkeyhere",
+ "ConnectionString": "Copy connection string from Application Insights Resource Overview",
"EnableAdaptiveSampling": false, "EnablePerformanceCounterCollectionModule": false }
public void ConfigureServices(IServiceCollection services)
### Removing TelemetryInitializers
-By default, telemetry initializers are present. To remove all or specific telemetry initializers, use the following sample code *after* you call `AddApplicationInsightsTelemetry()`.
+By default, telemetry initializers are present. To remove all or specific telemetry initializers, use the following sample code *after* calling `AddApplicationInsightsTelemetry()`.
```csharp public void ConfigureServices(IServiceCollection services)
public void ConfigureServices(IServiceCollection services)
### Configuring or removing default TelemetryModules
-Application Insights uses telemetry modules to automatically collect useful telemetry about specific workloads without requiring manual tracking by user.
+Application Insights automatically collects telemetry about specific workloads without requiring manual tracking by the user.
By default, the following automatic-collection modules are enabled. These modules are responsible for automatically collecting telemetry. You can disable or configure them to alter their default behavior.
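As a hedged sketch, the following shows configuring one of these modules in code by using the `ConfigureTelemetryModule` extension method; `DependencyTrackingTelemetryModule` and its `EnableSqlCommandTextInstrumentation` setting are used here only as an example.

```csharp
using Microsoft.ApplicationInsights.DependencyCollector;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    // Adjust a default setting of an automatically enabled module.
    services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>(
        (module, options) =>
        {
            module.EnableSqlCommandTextInstrumentation = true;
        });
}
```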
Also, if you're [enabling server-side telemetry based on Visual Studio](#enable-
Get an instance of `TelemetryClient` by using constructor injection, and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` or `TelemetryConfiguration` instances in an ASP.NET Core application. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with rest of the telemetry. Creating a new `TelemetryClient` instance is recommended only if it needs a configuration that's separate from the rest of the telemetry.
-The following example shows how to track additional telemetry from a controller.
+The following example shows how to track more telemetry from a controller.
```csharp using Microsoft.ApplicationInsights;
The following configuration allows ApplicationInsights to capture all `Informati
} ```
-It's important to note that the following doesn't cause the ApplicationInsights provider to capture `Information` logs. It doesn't capture it because the SDK adds a default logging filter that instructs `ApplicationInsights` to capture only `Warning` logs and more severe logs. ApplicationInsights requires an explicit override.
+It's important to note that the following example doesn't cause the ApplicationInsights provider to capture `Information` logs. The provider doesn't capture them because the SDK adds a default logging filter that instructs `ApplicationInsights` to capture only `Warning` and more severe logs. ApplicationInsights requires an explicit override.
```json {
The extension method `UseApplicationInsights()` is still supported, but it's mar
### I'm deploying my ASP.NET Core application to Web Apps. Should I still enable the Application Insights extension from Web Apps?
-If the SDK is installed at build time as shown in this article, you don't need to enable the [Application Insights extension](./azure-web-apps.md) from the App Service portal. Even if the extension is installed, it will back off when it detects that the SDK is already added to the application. If you enable Application Insights from the extension, you don't have to install and update the SDK. But if you enable Application Insights by following instructions in this article, you have more flexibility because:
+If the SDK is installed at build time as shown in this article, you don't need to enable the [Application Insights extension](./azure-web-apps.md) from the App Service portal. If the extension is installed, it will back off when it detects the SDK is already added. If you enable Application Insights from the extension, you don't have to install and update the SDK. But if you enable Application Insights by following instructions in this article, you have more flexibility because:
* Application Insights telemetry will continue to work in: * All operating systems, including Windows, Linux, and Mac.
If the SDK is installed at build time as shown in this article, you don't need t
* All hosting options, including Web Apps, VMs, Linux, containers, Azure Kubernetes Service, and non-Azure hosting. * All .NET Core versions including preview versions. * You can see telemetry locally when you're debugging from Visual Studio.
- * You can track additional custom telemetry by using the `TrackXXX()` API.
+ * You can track more custom telemetry by using the `TrackXXX()` API.
* You have full control over the configuration. ### Can I enable Application Insights monitoring by using tools like Azure Monitor Application Insights Agent (formerly Status Monitor v2)?
- Yes. Starting from [Application Insights Agent 2.0.0-beta1](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0-beta1), ASP.NET Core applications hosted in IIS are supported.
+ Yes. In [Application Insights Agent 2.0.0-beta1](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0-beta1) and later, ASP.NET Core applications hosted in IIS are supported.
-### If I run my application in Linux, are all features supported?
+### Are all features supported if I run my application in Linux?
Yes. Feature support for the SDK is the same in all platforms, with the following exceptions:
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
} ```
-This limitation is not applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later.
+This limitation isn't applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later.
### Is this SDK supported for the new .NET Core 3.X Worker Service template applications?
For the latest updates and bug fixes, see the [release notes](./release-notes.md
* [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown. * [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage. * Use [availability tests](./monitor-web-app-availability.md) to check your app constantly from around the world.
-* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection)
+* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection)
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
To add Application Insights to your ASP.NET website, you need to:
> [!IMPORTANT] > We recommend [connection strings](./sdk-connection-string.md?tabs=net) over instrumentation keys. New Azure regions *require* the use of connection strings instead of instrumentation keys. >
-> A connection string identifies the resource that you want to associate with your telemetry data. It also allows you to modify the endpoints that your resource will use as a destination for your telemetry. You'll need to copy the connection string and add it to your application's code or to an environment variable.
-
+> A connection string identifies the resource that you want to associate with your telemetry data. It also allows you to modify the endpoints that your resource will use as a destination for your telemetry. You'll need to copy the connection string and add it to your application's code or to the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable.
## Create a basic ASP.NET web app
This section will guide you through automatically adding Application Insights to
1. Select **Project** > **Add Application Insights Telemetry** > **Application Insights Sdk (local)** > **Next** > **Finish** > **Close**. 2. Open the *ApplicationInsights.config* file.
-3. Before the closing `</ApplicationInsights>` tag, add a line that contains the instrumentation key for your Application Insights resource. You can find your instrumentation key on the overview pane of the newly created Application Insights resource that you created as part of the prerequisites for this article.
+3. Before the closing `</ApplicationInsights>` tag, add a line that contains the connection string for your Application Insights resource. Find your connection string on the overview pane of the newly created Application Insights resource.
```xml
- <InstrumentationKey>your-instrumentation-key-goes-here</InstrumentationKey>
+ <ConnectionString>Copy connection string from Application Insights Resource Overview</ConnectionString>
``` 4. Select **Project** > **Manage NuGet Packages** > **Updates**. Then update each `Microsoft.ApplicationInsights` NuGet package to the latest stable release.
This section will guide you through manually adding Application Insights to a te
<Add Type="Microsoft.ApplicationInsights.Extensibility.AutocollectedMetricsExtractor, Microsoft.ApplicationInsights" /> <Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel"> <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
- <ExcludedTypes>Event</ExcludedTypes>
+ <ExcludedTypes>Trace</ExcludedTypes>
</Add> <Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel"> <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond> <IncludedTypes>Event</IncludedTypes> </Add>
+ <!--
+ Adjust the include and exclude examples to specify the desired semicolon-delimited types. (Dependency, Event, Exception, PageView, Request, Trace)
+ -->
</TelemetryProcessors> <TelemetryChannel Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel, Microsoft.AI.ServerTelemetryChannel" /> </Add>
This section will guide you through manually adding Application Insights to a te
Learn more about Application Insights configuration with ApplicationInsights.config here: http://go.microsoft.com/fwlink/?LinkID=513840 -->
- <InstrumentationKey>your-instrumentation-key-here</InstrumentationKey>
+ <ConnectionString>Copy connection string from Application Insights Resource Overview</ConnectionString>
</ApplicationInsights> ```
-4. Before the closing `</ApplicationInsights>` tag, add your instrumentation key for your Application Insights resource. You can find your instrumentation key on the overview pane of the newly created Application Insights resource that you created as part of the prerequisites for this article.
+4. Before the closing `</ApplicationInsights>` tag, add the connection string for your Application Insights resource. You can find your connection string on the overview pane of the newly created Application Insights resource.
```xml
- <InstrumentationKey>your-instrumentation-key-goes-here</InstrumentationKey>
+ <ConnectionString>Copy connection string from Application Insights Resource Overview</ConnectionString>
``` 5. At the same level of your project as the *ApplicationInsights.config* file, create a folder called *ErrorHandler* with a new C# file called *AiHandleErrorAttribute.cs*. The contents of the file will look like this:
For the template-based ASP.NET MVC app from this article, the file that you need
## Troubleshooting
-There's a known issue in the current version of Visual Studio 2019: storing the instrumentation key in a user secret is broken for .NET Framework-based apps. The key ultimately has to be hardcoded into the *applicationinsights.config* file to work around this bug. This article is designed to avoid this issue entirely, by not using user secrets.
+There's a known issue in the current version of Visual Studio 2019: storing the instrumentation key or connection string in a user secret is broken for .NET Framework-based apps. The key ultimately has to be hardcoded into the *applicationinsights.config* file to work around this bug. This article is designed to avoid this issue entirely, by not using user secrets.
## Open-source SDK
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
In order to enable telemetry collection with Application Insights, only the foll
| App setting name | Definition | Value | |||:| | ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` in Windows or `~3` in Linux. |
-| XDT_MicrosoftApplicationInsights_NodeJS | Flag to control if node.js agent is included. | 0 or 1 (only applicable in Windows). |
+| XDT_MicrosoftApplicationInsights_NodeJS | Flag to control if Node.js agent is included. | 0 or 1 (only applicable in Windows). |
> [!NOTE] > Profiler and snapshot debugger are not available for Node.js applications
azure-monitor Resource Manager Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-web-app.md
The following sample creates a basic Azure App Service web app with the ASP.NET
## Node.js runtime (Linux)
-The following sample creates a basic Azure App Service Linux web app with the node.js runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
+The following sample creates a basic Azure App Service Linux web app with the Node.js runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
### Template file
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
Azure Monitor's Change Analysis is a free service. Once enabled, the Change Anal
- Incur any billing cost to subscriptions. - Have any performance impact for scanning Azure Resource properties changes.
-### Data retention
+## Data retention
Change Analysis provides 14 days of data retention. ## Enable Change Analysis at scale for Web App in-guest file and environment variable changes
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Previously updated : 11/11/2021 Last updated : 03/07/2022 # Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations
This article provides details on creating and configuring diagnostic settings to
[Platform metrics](./metrics-supported.md) are sent automatically to [Azure Monitor Metrics](./data-platform-metrics.md) by default and without configuration.
-[Platform logs](./platform-logs-overview.md), including the Azure Activity log and resource logs, provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. The Activity Log exists on its own but can be routed to other locations. Resource logs are not collected until they are routed to a destination.
+[Platform logs](./platform-logs-overview.md) provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on.
+ - **Resource logs** are not collected until they are routed to a destination.
+ - The **Activity Log** exists on its own but can be routed to other locations.
Each Azure resource requires its own diagnostic setting, which defines the following criteria: -- Sources - The type of metric and log data to send to the destinations defined in the setting. The available types vary by resource type.-- Destinations - One or more destinations to send to.
+- **Sources** - The type of metric and log data to send to the destinations defined in the setting. The available types vary by resource type.
+- **Destinations** - One or more destinations to send to.
A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), then create multiple settings. Each resource can have up to 5 diagnostic settings.
-The following video walks you through routing platform logs with diagnostic settings. The video was done at an earlier time and doesn't include the following:
+The following video walks you through routing resource platform logs with diagnostic settings. The video was done at an earlier time and doesn't include the following:
- There are now 4 destinations. You can send platform metrics and logs to certain Azure Monitor partners. - A new feature called category groups was introduced in Nov 2021.
Here are the source options.
The **AllMetrics** setting routes a resource's platform metrics to additional destinations. This option may not be present for all resource providers.
-### Logs
+### Resource Logs
With logs, you can select the log categories you want to route individually or choose a category group.
Currently, there are two category groups:
- **All** - Every resource log offered by the resource. - **Audit** - All resource logs that record customer interactions with data or the settings of the service.
+### Activity Log
+See the [Activity Log settings](#activity-log-settings) section below.
+ ## Destinations Platform logs and metrics can be sent to the destinations in the following table.
Platform logs and metrics can be sent to the destinations in the following table
| [Event Hubs](../../event-hubs/index.yml) | Sending logs and metrics to Event Hubs allows you to stream data to external systems such as third-party SIEMs and other Log Analytics solutions. | | [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you are already using one of the partners. |
+## Activity Log settings
+
+The Activity Log uses a diagnostic setting, but has its own user interface because it applies to the whole subscription rather than individual resources. The destination information listed below still applies. For more information, see the [Azure Activity Log](activity-log.md).
+ ## Requirements and limitations ### Metrics as a source
There are certain limitations with exporting metrics.
To get around these limitations for specific metrics, you can manually extract them using the [Metrics REST API](/rest/api/monitor/metrics/list) and import them into Azure Monitor Logs using the [Azure Monitor Data collector API](../logs/data-collector-api.md).
-### Activity log as a source
-
-> [!IMPORTANT]
-> Before you create a diagnostic setting for the Activity log, you should first disable any legacy configuration. See [Legacy collection methods](../essentials/activity-log.md#legacy-collection-methods) for details.
-- ### Destination limitations Any destinations for the diagnostic setting must be created before creating the diagnostic settings. The destination does not have to be in the same subscription as the resource sending logs as long as the user who configures the setting has appropriate Azure RBAC access to both subscriptions. Using Azure Lighthouse, it is also possible to have diagnostic settings sent to a workspace, storage account or Event Hub in another Azure Active Directory tenant. The following table provides unique requirements for each destination including any regional restrictions.
azure-monitor Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/errors.md
Code: 403 Response:
Details: The token you have presented for authorization belongs to a user who does not have sufficient access to this privilege. Verify your workspace GUID and your token request are correct, and if necessary grant IAM privileges in your workspace to the Azure AD Application you created as Contributor.
+> [!NOTE]
+> When using Azure AD authentication, it may take up to 60 minutes for the Log Analytics REST API to recognize new
+> role-based access control (RBAC) permissions. While permissions are propagating, REST API calls may fail with error code 403.
+ ## Bad Authorization Code Code: 403 Response:
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/overview.md
After receiving a token, the process for calling the Log Analytics API is identi
To quickly explore the API without using Azure AD authentication, we provide a demonstration workspace with sample data, which allows [authenticating with an API key](authentication-authorization.md#authenticating-with-an-api-key).
+> [!NOTE]
+> When using Azure AD authentication, it may take up to 60 minutes for the Log Analytics REST API to recognize new
+> role-based access control (RBAC) permissions. While permissions are propagating, REST API calls may fail with [error code 403](./errors.md#insufficient-permissions).
+ ## Log Analytics API Query Limits See [the **Query API** section of this page](../../service-limits.md#la-query-api) for information about query limits.
To try the API without writing any code, you can use:
- Your favorite client such as [Fiddler](https://www.telerik.com/fiddler) or [Postman](https://www.getpostman.com/) to manually generate queries with a user interface. - [cURL](https://curl.haxx.se/) from the command line, and then pipe the output into [jsonlint](https://github.com/zaach/jsonlint) to get readable JSON.
-Instead of calling the REST API directly, you can also use the Azure Monitor Query SDK. The SDK contains idiomatic client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), and [Python](/python/api/overview/azure/monitor-query-readme). Each client library is a wrapper around the REST API that allows you to retrieve log data from the workspace.
+Instead of calling the REST API directly, you can also use the Azure Monitor Query SDK. The SDK contains idiomatic client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), and [Python](/python/api/overview/azure/monitor-query-readme). Each client library is a wrapper around the REST API that allows you to retrieve log data from the workspace.
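For illustration, a minimal sketch of querying a workspace with the .NET client library might look like the following. It assumes the `Azure.Monitor.Query` and `Azure.Identity` packages; the workspace ID and query are placeholders.

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var client = new LogsQueryClient(new DefaultAzureCredential());

// Placeholder workspace ID and query.
Response<LogsQueryResult> response = await client.QueryWorkspaceAsync(
    "<workspace-id>",
    "AzureActivity | summarize count() by bin(TimeGenerated, 1h)",
    new QueryTimeRange(TimeSpan.FromDays(1)));

foreach (LogsTableRow row in response.Value.Table.Rows)
{
    Console.WriteLine(string.Join(", ", row));
}
```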
azure-monitor App Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-expression.md
The `app` expression is used in an Azure Monitor query to retrieve data from a s
* You must have read access to the application. * Identifying an application by its name assumes that it is unique across all accessible subscriptions. If you have multiple applications with the specified name, the query will fail because of the ambiguity. In this case you must use one of the other identifiers. * Use the related expression [workspace](../logs/workspace-expression.md) to query across Log Analytics workspaces.
-* The app() expression is currently not supported in the log query when using the Azure portal to create a [custom log query alert rule](../alerts/alerts-log.md), unless an Application Insights application is used as the resource for the alert rule.
## Examples
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
Title: Archive data from Log Analytics workspace to Azure storage using Logic Ap
description: Describes a method to use Azure Logic Apps to query data from a Log Analytics workspace and send to Azure Storage. --- Previously updated : 03/22/2022-++ Last updated : 03/01/2022 + # Archive data from Log Analytics workspace to Azure storage using Logic App
-This article describes a method to use [Azure Logic Apps](../../logic-apps/index.yml) to query data from a Log Analytics workspace in Azure Monitor and send to Azure Storage. Use this process when you need to export your Azure Monitor Log data for auditing and compliance scenarios or to allow another service to retrieve this data.
+This article describes a method to use [Azure Logic Apps](../../logic-apps/index.yml) to query data from a Log Analytics workspace in Azure Monitor and send it to Azure Storage. Use this process when you need to export your Azure Monitor Log data for auditing and compliance scenarios or to allow another service to retrieve this data.
## Other export methods The method described in this article describes a scheduled export from a log query using a Logic App. Other options to export data for particular scenarios include the following: -- To export data from your Log Analytics workspace to an Azure storage account or event hub, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor (preview)](logs-data-export.md)
+- To export data from your Log Analytics workspace to an Azure Storage Account or Event Hubs, use the Log Analytics workspace data export feature of Azure Monitor Logs. See [Log Analytics workspace data export in Azure Monitor](logs-data-export.md)
- One time export using a Logic App. See [Azure Monitor Logs connector for Logic Apps and Power Automate](logicapp-flow-connector.md).-- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport]](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
+- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
## Overview
-This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs/) which allows you to run a log query from a logic app and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob/) is used in this procedure to send the query output to Azure storage. The other actions are described in the sections below.
+This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs/) which lets you run a log query from a Logic App and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob/) is used in this procedure to send the query output to Azure storage.
-![Logic app overview](media/logs-export-logic-app/logic-app-overview.png)
+[![Logic app overview](media/logs-export-logic-app/logic-app-overview.png)](media/logs-export-logic-app/logic-app-overview.png#lightbox)
-When you export data from a Log Analytics workspace, you should filter and aggregate your log data and optimize your query to limit the amount of data processed by your Logic App workflow to the required data. For example, if you need to archive sign-in events, you could filter for required events and project only the required fields with the following query:
+When you export data from a Log Analytics workspace, filter and aggregate your log data and optimize your query to limit the amount of data processed by your Logic App workflow to the required data. For example, if you need to archive sign-in events, filter for the required events and project only the required fields:
```json SecurityEvent
SecurityEvent
| project TimeGenerated , Account , AccountType , Computer ```
-When you export the data on a schedule, use the ingestion_time() function in your query to ensure that you donΓÇÖt miss late arriving data. If data is delayed due to network or platform issues, using the ingestion time ensures that it will be included in the next Logic App execution. See [Add Azure Monitor Logs action](#add-azure-monitor-logs-action) for an example.
+When you export the data on a schedule, use the ingestion_time() function in your query to ensure that you don't miss late arriving data. If data is delayed due to network or platform issues, using the ingestion time ensures that data is included in the next Logic App execution. See *Add Azure Monitor Logs action* under [Logic App procedure](#logic-app-procedure) for an example.
## Prerequisites
-Following are prerequisites that must be completed before completing this procedure.
+The following prerequisites must be completed before starting this procedure.
-- Log Analytics workspace. The user who creates the logic app must have at least read permission to the workspace. -- Azure storage account. The storage account doesnΓÇÖt have to be in the same subscription as your Log Analytics workspace. The user who creates the logic app must have write permission to the storage account.
+- Log Analytics workspace--The user who creates the Logic App must have at least read permission to the workspace.
+- Azure Storage Account--The Storage Account doesn't have to be in the same subscription as your Log Analytics workspace. The user who creates the Logic App must have write permission to the Storage Account.
## Connector limits
-Log Analytics workspace and log queries in Azure Monitor are multitenancy services that include limits that protect and isolate customers and maintain quality of service. When querying for a large amount of data, you should consider the following limits, which can affect how you configure the Logic App recurrence and your log query:
+Log Analytics workspace and log queries in Azure Monitor are multitenancy services that include limits to protect and isolate customers and maintain quality of service. When querying for a large amount of data, you should consider the following limits, which can affect how you configure the Logic App recurrence and your log query:
- Log queries cannot return more than 500,000 rows. - Log queries cannot return more than 64,000,000 bytes. - Log queries cannot run longer than 10 minutes by default. - Log Analytics connector is limited to 100 calls per minute.
+## Logic App procedure
-## Create container in the storage account
-Use the procedure in [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to add a container to your storage account to hold the exported data. The name used for the container in this article is **loganalytics-data**, but you can use any name.
--
-## Create Logic App
-
-Go to **Logic Apps** in the Azure portal and click **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new logic app and then give it a unique name. You can turn on **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
-
-![Create logic app](media/logs-export-logic-app/create-logic-app.png)
--
-Click **Review + create** and then **Create**. When the deployment is complete, click **Go to resource** to open the **Logic Apps Designer**.
-
-## Create a trigger for the logic app
-Under **Start with a common trigger**, select **Recurrence**. This creates a logic app that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.
-
-![Recurrence action](media/logs-export-logic-app/recurrence-action.png)
--
-### Add Azure Monitor Logs action
-Click **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.
-
-![Azure Monitor Logs action](media/logs-export-logic-app/select-azure-monitor-connector.png)
-
-Click **Azure Log Analytics ΓÇô Run query and list results**.
-
-![Screenshot of a new action being added to a step in the Logic App Designer. Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png)
-
-You will be prompted to select a tenant and grant access to the Log Analytics workspace with the account that the workflow will use to run the query.
--
-## Add Azure Monitor Logs action
-The Azure Monitor Logs action allows you to specify the query to run. The log query used in this example is optimized for hourly recurrence and collects the data ingested for the particular execution time. For example, if the workflow runs at 4:35, the time range would be 4:00 to 5:00. If you change the Logic App to run at a different frequency, you need the change the query as well. For example, if you set the recurrence to run daily, you would set startTime in the query to startofday(make_datetime(year,month,day,0,0)).
-
-Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
-
-Add the following log query to the **Query** window.
-
-```Kusto
-let dt = now();
-let year = datetime_part('year', dt);
-let month = datetime_part('month', dt);
-let day = datetime_part('day', dt);
-let hour = datetime_part('hour', dt);
-let startTime = make_datetime(year,month,day,hour,0)-1h;
-let endTime = startTime + 1h - 1tick;
-AzureActivity
-| where ingestion_time() between(startTime .. endTime)
-| project
- TimeGenerated,
- BlobTime = startTime,
- OperationName ,
- OperationNameValue ,
- Level ,
- ActivityStatus ,
- ResourceGroup ,
- SubscriptionId ,
- Category ,
- EventSubmissionTimestamp ,
- ClientIpAddress = parse_json(HTTPRequest).clientIpAddress ,
- ResourceId = _ResourceId
-```
-
-The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. This should be set to a value equal to or higher than the time range selected in the query. Since this query isn't using the **TimeGenerated** column, then **Set in query** option isn't available. See [Query scope](./scope.md) for more details about the time range.
-
-Select **Last 4 hours** for the **Time Range**. This will ensure that any records with a ingestion time larger than **TimeGenerated** will be included in the results.
+1. **Create container in the Storage Account**
-![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png)
-
+ Use the procedure in [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to add a container to your Storage Account to hold the exported data. The name used for the container in this article is **loganalytics-data**, but you can use any name.
-### Add Parse JSON activity (optional)
-The output from the **Run query and list results** action is formatted in JSON. You can parse this data and manipulate it as part of the preparation for **Compose** action.
+1. **Create Logic App**
-You can provide a JSON schema that describes the payload you expect to receive. The designer parses JSON content by using this schema and generates user-friendly tokens that represent the properties in your JSON content. You can then easily reference and use those properties throughout your logic app's workflow.
+ 1. Go to **Logic Apps** in the Azure portal and click **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new Logic App and then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.<br>
+ [![Create Logic App](media/logs-export-logic-app/create-logic-app.png)](media/logs-export-logic-app/create-logic-app.png#lightbox)
+ 1. Click **Review + create** and then **Create**. When the deployment is complete, click **Go to resource** to open the **Logic Apps Designer**.
-Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **json** and then select **Parse JSON**.
-
-![Select Parse JSON activity](media/logs-export-logic-app/select-parse-json.png)
-
-Click in the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This is the output from the log query.
-
-[![Select Body](media/logs-export-logic-app/select-body.png)](media/logs-export-logic-app/select-body.png#lightbox)
-
-5. Click **Use sample payload to generate schema**. Run the log query and copy the output to use for the sample payload. For the sample query here, you can use the following output:
--
-```json
-{
- "TimeGenerated": "2020-09-29T23:11:02.578Z",
- "BlobTime": "2020-09-29T23:00:00Z",
- "OperationName": "Returns Storage Account SAS Token",
- "OperationNameValue": "MICROSOFT.RESOURCES/DEPLOYMENTS/WRITE",
- "Level": "Informational",
- "ActivityStatus": "Started",
- "ResourceGroup": "monitoring",
- "SubscriptionId": "00000000-0000-0000-0000-000000000000",
- "Category": "Administrative",
- "EventSubmissionTimestamp": "2020-09-29T23:11:02Z",
- "ClientIpAddress": "192.168.1.100",
- "ResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/monitoring/providers/microsoft.storage/storageaccounts/my-storage-account"
-}
-```
---
-![Parse JSON payload](media/logs-export-logic-app/parse-json-payload.png)
-
-## Add the Compose action
-The **Compose** action takes the parsed JSON output and creates the object that you need to store in the blob.
-
-Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **compose** and then select the **Compose** action.
-
-![Select Compose action](media/logs-export-logic-app/select-compose.png)
--
-Click the **Inputs** box display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This is the parsed output from the log query.
+1. **Create a trigger for the Logic App**
+
+ 1. Under **Start with a common trigger**, select **Recurrence**. This creates a Logic App that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.<br>
+ [![Recurrence action](media/logs-export-logic-app/recurrence-action.png)](media/logs-export-logic-app/recurrence-action.png#lightbox)
-[![Select body for Compose action](media/logs-export-logic-app/select-body-compose.png)](media/logs-export-logic-app/select-body-compose.png#lightbox)
+2. **Add Azure Monitor Logs action**
+
+ The Azure Monitor Logs action lets you specify the query to run. The log query used in this example is optimized for hourly recurrence and collects the data ingested for the particular execution time. For example, if the workflow runs at 4:35, the time range would be 3:00 to 4:00. If you change the Logic App to run at a different frequency, you need to change the query as well. For example, if you set the recurrence to run daily, you would set startTime in the query to startofday(make_datetime(year,month,day,0,0)).
+ You will be prompted to select a tenant to grant access to the Log Analytics workspace with the account that the workflow will use to run the query.
-## Add the Create Blob action
-The Create Blob action writes the composed JSON to storage.
+ 1. Click **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.<br>
+ [![Azure Monitor Logs action](media/logs-export-logic-app/select-azure-monitor-connector.png)](media/logs-export-logic-app/select-azure-monitor-connector.png#lightbox)
-Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **blob** and then select the **Create Blob** action.
+ 2. Click **Azure Log Analytics ΓÇô Run query and list results**.<br>
+ [![Screenshot of a new action being added to a step in the Logic App Designer. Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png)](media/logs-export-logic-app/select-query-action-list.png#lightbox)
+
+ 3. Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
-![Select Create blob](media/logs-export-logic-app/select-create-blob.png)
+ 4. Add the following log query to the **Query** window.
+
+ ```Kusto
+ let dt = now();
+ let year = datetime_part('year', dt);
+ let month = datetime_part('month', dt);
+ let day = datetime_part('day', dt);
+ let hour = datetime_part('hour', dt);
+ let startTime = make_datetime(year,month,day,hour,0)-1h;
+ let endTime = startTime + 1h - 1tick;
+ AzureActivity
+ | where ingestion_time() between(startTime .. endTime)
+ | project
+ TimeGenerated,
+ BlobTime = startTime,
+ OperationName ,
+ OperationNameValue ,
+ Level ,
+ ActivityStatus ,
+ ResourceGroup ,
+ SubscriptionId ,
+ Category ,
+ EventSubmissionTimestamp ,
+ ClientIpAddress = parse_json(HTTPRequest).clientIpAddress ,
+ ResourceId = _ResourceId
+ ```
+
+ 5. The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. This should be set to a value greater than the time range selected in the query. Since this query isn't using the **TimeGenerated** column, then **Set in query** option isn't available. See [Query scope](./scope.md) for more details about the time range. Select **Last 4 hours** for the **Time Range**. This will ensure that any records with an ingestion time larger than **TimeGenerated** will be included in the results.<br>
+ [![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png)](media/logs-export-logic-app/run-query-list-action.png#lightbox)
+
+3. **Add Parse JSON activity (optional)**
+
+ The output from the **Run query and list results** action is formatted in JSON. You can parse this data and manipulate it as part of the preparation for **Compose** action.
+
+ You can provide a JSON schema that describes the payload you expect to receive. The designer parses JSON content by using this schema and generates user-friendly tokens that represent the properties in your JSON content. You can then easily reference and use those properties throughout your Logic App's workflow.
+
+ 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **json** and then select **Parse JSON**.<br>
+ [![Select Parse JSON activity](media/logs-export-logic-app/select-parse-json.png)](media/logs-export-logic-app/select-parse-json.png#lightbox)
+
+ 2. Click in the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This is the output from the log query.<br>
+ [![Select Body](media/logs-export-logic-app/select-body.png)](media/logs-export-logic-app/select-body.png#lightbox)
+
+ 3. Click **Use sample payload to generate schema**. Run the log query and copy the output to use for the sample payload. For the sample query here, you can use the following output:
+
+ ```json
+ {
+ "TimeGenerated": "2020-09-29T23:11:02.578Z",
+ "BlobTime": "2020-09-29T23:00:00Z",
+ "OperationName": "Returns Storage Account SAS Token",
+ "OperationNameValue": "MICROSOFT.RESOURCES/DEPLOYMENTS/WRITE",
+ "Level": "Informational",
+ "ActivityStatus": "Started",
+ "ResourceGroup": "monitoring",
+ "SubscriptionId": "00000000-0000-0000-0000-000000000000",
+ "Category": "Administrative",
+ "EventSubmissionTimestamp": "2020-09-29T23:11:02Z",
+ "ClientIpAddress": "192.168.1.100",
+ "ResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/monitoring/providers/microsoft.storage/storageaccounts/my-storage-account"
+ }
+ ```
+
+ [![Parse JSON payload](media/logs-export-logic-app/parse-json-payload.png)](media/logs-export-logic-app/parse-json-payload.png#lightbox)
+
+4. **Add the Compose action**
+
+ The **Compose** action takes the parsed JSON output and creates the object that you need to store in the blob.
-Type a name for the connection to your storage account in **Connection Name** and then click the folder icon in the **Folder path** box to select the container in your storage account. Click the **Blob name** to see a list of values from previous activities. Click **Expression** and enter an expression that matches your time interval. For this query which is run hourly, the following expression sets the blob name per previous hour:
+ 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **compose** and then select the **Compose** action.<br>
+ [![Select Compose action](media/logs-export-logic-app/select-compose.png)](media/logs-export-logic-app/select-compose.png#lightbox)
-```json
-subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour')
-```
+ 2. Click the **Inputs** box to display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This is the parsed output from the log query.<br>
+ [![Select body for Compose action](media/logs-export-logic-app/select-body-compose.png)](media/logs-export-logic-app/select-body-compose.png#lightbox)
-[![Blob expression](media/logs-export-logic-app/blob-expression.png)](media/logs-export-logic-app/blob-expression.png#lightbox)
+5. **Add the Create Blob action**
+
+ The Create Blob action writes the composed JSON to storage.
-Click the **Blob content** box to display a list of values from previous activities and then select **Outputs** in the **Compose** section.
+ 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **blob** and then select the **Create Blob** action.<br>
+ [![Select Create blob](media/logs-export-logic-app/select-create-blob.png)](media/logs-export-logic-app/select-create-blob.png#lightbox)
+   2. Type a name for the connection to your storage account in **Connection Name** and then click the folder icon in the **Folder path** box to select the container in your storage account. Click the **Blob name** box to see a list of values from previous activities. Click **Expression** and enter an expression that matches your time interval. For this query, which runs hourly, the following expression sets the blob name to the previous hour (a PowerShell illustration of what this expression evaluates to appears after this procedure):
-![Create blob expression](media/logs-export-logic-app/create-blob.png)
+ ```json
+ subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour')
+ ```
+ [![Blob expression](media/logs-export-logic-app/blob-expression.png)](media/logs-export-logic-app/blob-expression.png#lightbox)
-## Test the Logic App
-Test the workflow by clicking **Run**. If the workflow has errors, it will be indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md) if necessary.
+ 3. Click the **Blob content** box to display a list of values from previous activities and then select **Outputs** in the **Compose** section.<br>
+ [![Create blob expression](media/logs-export-logic-app/create-blob.png)](media/logs-export-logic-app/create-blob.png#lightbox)
-[![Runs history](media/logs-export-logic-app/runs-history.png)](media/logs-export-logic-app/runs-history.png#lightbox)
+6. **Test the Logic App**
+
+ Test the workflow by clicking **Run**. If the workflow has errors, it will be indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md) if necessary.<br>
+ [![Runs history](media/logs-export-logic-app/runs-history.png)](media/logs-export-logic-app/runs-history.png#lightbox)
-## View logs in Storage
-Go to the **Storage accounts** menu in the Azure portal and select your storage account. Click the **Blobs** tile and select the container you specified in the Create blob action. Select one of the blobs and then **Edit blob**.
-[![Blob data](media/logs-export-logic-app/blob-data.png)](media/logs-export-logic-app/blob-data.png#lightbox)
+7. **View logs in Storage**
+
+ Go to the **Storage accounts** menu in the Azure portal and select your Storage Account. Click the **Blobs** tile and select the container you specified in the Create blob action. Select one of the blobs and then **Edit blob**.<br>
+ [![Blob data](media/logs-export-logic-app/blob-data.png)](media/logs-export-logic-app/blob-data.png#lightbox)
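For reference, here's a rough PowerShell illustration (not part of the Logic App itself) of what the `subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour')` expression in step 5 evaluates to:

```powershell
# Illustration only: compute the same "previous hour" value the workflow expression produces.
$utcNow = [datetime]::UtcNow
$topOfCurrentHour = $utcNow.Date.AddHours($utcNow.Hour)   # truncate to the start of the current hour
$blobName = $topOfCurrentHour.AddHours(-1).ToString('yyyy-MM-ddTHH:00:00')
Write-Host "Blob name for the previous hour: $blobName"
```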
## Next steps
azure-monitor Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace.md
Title: Move a Log Analytics workspace in Azure Monitor | Microsoft Docs description: Learn how to move your Log Analytics workspace to another subscription or resource group. - Previously updated : 03/22/2022+++ Last updated : 03/01/2022
In this article, you'll learn the steps to move a Log Analytics workspace to another resource group or subscription in the same region. You can learn more about moving Azure resources through the Azure portal, PowerShell, the Azure CLI, or the REST API at [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). > [!IMPORTANT]
-> You can't move a workspace to a different region.
+> You can't move a workspace to a different region using this procedure. Follow the [move a Log Analytics workspace to another region](./move-workspace-region.md) article to move a workspace across regions.
## Verify Active Directory tenant The workspace source and destination subscriptions must exist within the same Azure Active Directory tenant. Use Azure PowerShell to verify that both subscriptions have the same tenant ID.
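A minimal sketch of that check, assuming the Az PowerShell module and placeholder subscription IDs (this isn't the article's own snippet):

```powershell
# Compare the Azure AD tenant IDs of the source and destination subscriptions.
$source      = Get-AzSubscription -SubscriptionId "<source-subscription-id>"
$destination = Get-AzSubscription -SubscriptionId "<destination-subscription-id>"
$source.TenantId -eq $destination.TenantId   # should return True before you attempt the move
```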
The workspace source and destination subscriptions must exist within the same Az
``` ## Workspace move considerations-- Managed solutions that are installed in the workspace will be moved with the Log Analytics workspace move operation.
+- Managed solutions that are installed in the workspace will be moved in this operation.
- Workspace keys (both primary and secondary) are re-generated with the workspace move operation. If you keep a copy of your workspace keys in Key Vault, update them with the new keys generated after the workspace move (see the sketch after these considerations). -- Connected agents will remain connected and keep send data to the workspace after the move.
+- Connected [MMA agents](../agents/log-analytics-agent.md) will remain connected and keep sending data to the workspace after the move. [AMA agents](../agents/azure-monitor-agent-overview.md) configured through DCRs are disconnected during the move and must be reconfigured afterward.
- Since the move operation requires that there are no Linked Services from the workspace, solutions that rely on that link must be removed to allow the workspace move. Solutions that must be removed before you can unlink your automation account: - Update Management - Change Tracking
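As a hypothetical sketch of the key-rotation consideration above (assuming the Az.OperationalInsights and Az.KeyVault modules; all names are placeholders), you could retrieve the regenerated keys after the move and refresh any copy kept in Key Vault:

```powershell
# Retrieve the regenerated workspace keys after the move.
$keys = Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<destination-resource-group>" -Name "<workspace-name>"
# Update the copy of the primary key stored in Key Vault.
$secret = ConvertTo-SecureString $keys.PrimarySharedKey -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "<key-vault-name>" -Name "<secret-name>" -SecretValue $secret
```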
Use the following procedure to remove the solutions using the Azure portal:
2. Select the solutions to remove. 3. Click **Delete Resources** and then confirm the resources to be removed by clicking **Delete**.
-![Delete solutions](media/move-workspace/delete-solutions.png)
+[![Delete solutions](media/move-workspace/delete-solutions.png)](media/move-workspace/delete-solutions.png#lightbox)
### Delete using PowerShell
To remove **Start/Stop VMs** solution, you also need to remove the alert rules c
- ScheduledStartStop_Parent - SequencedStartStop_Parent
- ![Delete rules](media/move-workspace/delete-rules.png)
+ [![Delete rules](media/move-workspace/delete-rules.png)](media/move-workspace/delete-rules.png#lightbox)
## Unlink Automation account Use the following procedure to unlink the Automation account from the workspace using the Azure portal:
Use the following procedure to unlink the Automation account from the workspace
2. In the **Related Resources** section of the menu, select **Linked workspace**. 3. Click **Unlink workspace** to unlink the workspace from your Automation account.
- ![Unlink workspace](media/move-workspace/unlink-workspace.png)
+ [![Unlink workspace](media/move-workspace/unlink-workspace.png)](media/move-workspace/unlink-workspace.png#lightbox)
## Move your workspace
Use the following procedure to move your workspace using the Azure portal:
4. Select a destination **Subscription** and **Resource group**. If you're moving the workspace to another resource group in the same subscription, you won't see the **Subscription** option. 5. Click **OK** to move the workspace and selected resources.
- ![Screenshot shows the Overview pane in the Log Analytics workspace with options to change the resource group and subscription name.](media/move-workspace/portal.png)
+ [![Screenshot shows the Overview pane in the Log Analytics workspace with options to change the resource group and subscription name.](media/move-workspace/portal.png)](media/move-workspace/portal.png#lightbox)
### PowerShell To move your workspace using PowerShell, use the [Move-AzResource](/powershell/module/AzureRM.Resources/Move-AzureRmResource) as in the following example:
Move-AzResource -ResourceId "/subscriptions/00000000-0000-0000-0000-000000000000
> [!IMPORTANT] > After the move operation, removed solutions and Automation account link should be reconfigured to bring the workspace back to its previous state. - ## Next steps - For a list of which resources support move, see [Move operation support for resources](../../azure-resource-manager/management/move-support-resources.md).
azure-monitor Operationalinsights Api Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/operationalinsights-api-retirement.md
Depending on the configuration method you use, you should update the new version
2. Azure Resource Manager templates use the API version in the **apiVersion** property of the resource. Replace that version with the latest version (2020-08-01) as shown in the following example. - ```json { "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
na Previously updated : 05/06/2021 Last updated : 03/22/2022 # Dynamically change the service level of a volume
-You can change the service level of an existing volume by moving the volume to another capacity pool that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not impact access to the volume.
+You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not impact access to the volume.
This functionality enables you to meet your workload needs on demand. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. For example, if the volume is currently in a capacity pool that uses the *Standard* service level and you want the volume to use the *Premium* service level, you can move the volume dynamically to a capacity pool that uses the *Premium* service level.
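As a hypothetical sketch of such a move with Azure PowerShell, assuming the Az.NetAppFiles module's `Set-AzNetAppFilesVolumePool` cmdlet and these parameter names (all resource names are placeholders):

```powershell
# Move a volume from a Standard pool to a Premium pool in the same NetApp account.
$targetPoolId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<premium-pool>"
Set-AzNetAppFilesVolumePool -ResourceGroupName "<resource-group>" -AccountName "<account>" `
    -PoolName "<standard-pool>" -Name "<volume>" -NewPoolResourceId $targetPoolId
```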
The capacity pool that you want to move the volume to must already exist. The ca
## Considerations
+* This functionality is supported within the same NetApp account. You can't move the volume to a capacity pool in a different NetApp account.
+ * After the volume is moved to another capacity pool, you will no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool. * If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*).
azure-portal Original Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/original-preferences.md
- Title: Manage Azure portal settings and preferences (older version)
-description: You can change Azure portal default settings to meet your own preferences. This document describes the older version of the settings experience.
Previously updated : 06/17/2021---
-# Manage Azure portal settings and preferences (older version)
-
-You can change the default settings of the Azure portal to meet your own preferences.
-
-> [!IMPORTANT]
-> We're in the process of moving all Azure users to a newer experience. This topic describes the older experience. For the latest information, see [Manage Azure portal settings and preferences](set-preferences.md).
-
-Most settings are available from the **Settings** menu in the global page header.
-
-![Screenshot showing global page header icons with settings highlighted](./media/original-preferences/header-settings.png)
--
-## Choose your default subscription
-
-You can change the subscription that opens by default when you sign-in to the Azure portal. This is helpful if you have a primary subscription you work with but use others occasionally.
--
-1. Select the directory and subscription filter icon in the global page header.
-
-1. Select the subscriptions you want as the default subscriptions when you launch the portal.
-
- :::image type="content" source="media/original-preferences/default-directory-subscription-filter.png" alt-text="Select the subscriptions you want as the default subscriptions when you launch the portal.":::
--
-## Choose your default view
-
-You can change the page that opens by default when you sign in to the Azure portal.
-
-![Screenshot showing Azure portal settings with default view highlighted](./media/original-preferences/default-view.png)
--- **Home** can't be customized. It displays shortcuts to popular Azure services and lists the resources you've used most recently. We also give you useful links to resources like Microsoft Learn and the Azure roadmap.--- Dashboards can be customized to create a workspace designed just for you. For example, you can build a dashboard that is project, task, or role focused. If you select **Dashboard**, your default view will go to your most recently used dashboard. For more information, see [Create and share dashboards in the Azure portal](azure-portal-dashboards.md).-
-## Choose a portal menu mode
-
-The default mode for the portal menu controls how much space the portal menu takes up on the page.
-
-![Screenshot that shows how to set the default mode for the portal menu.](./media/original-preferences/menu-mode.png)
--- When the portal menu is in **Flyout** mode, it's hidden until you need it. Select the menu icon to open or close the menu.--- If you choose **Docked mode** for the portal menu, it's always visible. You can collapse the menu to provide more working space.-
-## Choose a theme or enable high contrast
-
-The theme that you choose affects the background and font colors that appear in the Azure portal. You can select from one of four preset color themes. Select each thumbnail to find the theme that best suits you.
-
-Alternatively, you can choose one of the high-contrast themes. The high contrast themes make the Azure portal easier to read for people who have a visual impairment; they override all other theme selections.
-
-![Screenshot showing Azure portal settings with themes highlighted](./media/original-preferences/theme.png)
-
-## Enable or disable pop-up notifications
-
-Notifications are system messages related to your current session. They provide information like your current credit balance, when resources you just created become available, or confirm your last action, for example. When pop-up notifications are turned on, the messages briefly display in the top corner of your screen.
-
-To enable or disable pop-up notifications, select or clear **Enable pop-up notifications**.
-
-![Screenshot showing Azure portal settings with pop-up notifications highlighted](./media/original-preferences/pop-up-notifications.png)
-
-To read all notifications received during your current session, select **Notifications** from the global header.
-
-![Screenshot showing Azure portal global header with notifications highlighted](./media/original-preferences/read-notifications.png)
-
-If you want to read notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](../azure-monitor/essentials/activity-log.md#view-the-activity-log).
-
-## Change the inactivity timeout setting
-
-The inactivity timeout setting helps to protect resources from unauthorized access if you forget to secure your workstation. After you've been idle for a while, you're automatically signed out of your Azure portal session. As an individual, you can change the timeout setting for yourself. If you're an admin, you can set it at the directory level for all your users in the directory.
-
-### Change your individual timeout setting (user)
-
-Select the drop-down under **Sign me out when inactive**. Choose the duration after which your Azure portal session is signed out if you're idle.
-
-![Screenshot showing portal settings with inactive timeout settings highlighted](./media/original-preferences/inactive-sign-out-user.png)
-
-The change is saved automatically. If you're idle, your Azure portal session will sign out after the duration you set.
-
-If your admin has enabled an inactivity timeout policy, you can still set your own, as long as it's less than the directory-level setting. Select **Override the directory inactivity timeout policy**, then set a time interval.
-
-![Screenshot showing portal settings with override the directory inactivity timeout policy setting highlighted](./media/original-preferences/inactive-sign-out-override.png)
-
-### Change the directory timeout setting (admin)
-
-Admins in the [Global Administrator role](../active-directory/roles/permissions-reference.md#global-administrator) can enforce the maximum idle time before a session is signed out. The inactivity timeout setting applies at the directory level. The setting takes effect for new sessions. It won't apply immediately to any users who are already signed in. For more information about directories, see [Active Directory Domain Services Overview](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview).
-
-If you're a Global Administrator, and you want to enforce an idle timeout setting for all users of the Azure portal, follow these steps:
-
-1. Select the link text **Configure directory level timeout**.
-
- ![Screenshot showing portal settings with link text highlighted](./media/original-preferences/settings-admin.png)
-
-1. On the **Configure directory level inactivity timeout** page, select **Enable directory level idle timeout for the Azure portal** to turn on the setting.
-
-1. Next, enter the **Hours** and **Minutes** for the maximum time that a user can be idle before their session is automatically signed out.
-
-1. Select **Apply**.
-
- ![Screenshot showing page to set directory-level inactivity timeout](./media/original-preferences/configure.png)
-
-To confirm that the inactivity timeout policy is set correctly, select **Notifications** from the global page header. Verify that a success notification is listed.
-
-![Screenshot showing successful notification message for directory-level inactivity timeout](./media/original-preferences/confirmation.png)
-
-## Restore default settings
-
-If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings**. Any changes you've made to portal settings will be lost. This option doesn't affect dashboard customizations.
-
-![Screenshot showing restore of default settings](./media/original-preferences/useful-links-restore-defaults.png)
-
-## Export user settings
-
-Information about your custom settings is stored in Azure. You can export the following user data:
-
-* Private dashboards in the Azure portal
-* User settings like favorite subscriptions or directories, and last logged-in directory
-* Themes and other custom portal settings
-
-It's a good idea to export and review your settings if you plan to delete them. Rebuilding dashboards or redoing settings can be time-consuming.
-
-To export your portal settings, select **Export all settings**.
-
-![Screenshot showing export of settings](./media/original-preferences/useful-links-export-settings.png)
-
-Exporting settings creates a *.json* file that contains your user settings like your color theme, favorites, and private dashboards. Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file.
-
-## Delete user settings and dashboards
-
-Information about your custom settings is stored in Azure. You can delete the following user data:
-
-* Private dashboards in the Azure portal
-* User settings like favorite subscriptions or directories, and last logged-in directory
-* Themes and other custom portal settings
-
-It's a good idea to export and review your settings before you delete them. Rebuilding dashboards or redoing custom settings can be time-consuming.
--
-To delete your portal settings, select **Delete all settings and private dashboards**.
-
-![Screenshot showing delete of settings](./media/original-preferences/useful-links-delete-settings.png)
-
-## Change language and regional settings
-
-There are two settings that control how the text in the Azure portal appears:
-- The **Language** setting controls the language you see for text in the Azure portal. --- **Regional format** controls the way dates, time, numbers, and currency are shown.-
-To change the language that is used in the Azure portal, use the drop-down to select from the list of available languages.
-
-The regional format selection changes to display regional options for only the language you selected. To change that automatic selection, use the drop-down to choose the regional format you want.
-
-For example, if you select English as your language, and then select United States as the regional format, currency is shown in U.S. dollars. If you select English as the language and then select Europe as the regional format, currency is shown in euros.
-
-Select **Apply** to update your language and regional format settings.
-
- ![Screenshot showing language and regional format settings](./media/original-preferences/language.png)
-
->[!NOTE]
->These language and regional settings affect only the Azure portal. Documentation links that open in a new tab or window use your browser's language settings to determine the language to display.
->
-
-## Next steps
--- [Keyboard shortcuts in Azure portal](azure-portal-keyboard-shortcuts.md)-- [Supported browsers and devices](azure-portal-supported-browsers-devices.md)-- [Add, remove, and rearrange favorites](azure-portal-add-remove-sort-favorites.md)-- [Create and share custom dashboards](azure-portal-dashboards.md)-- [Azure portal how-to video series](azure-portal-video-series.md)
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 08/10/2021 Last updated : 03/23/2022
Most settings are available from the **Settings** menu in the top right section
:::image type="content" source="media/set-preferences/settings-top-header.png" alt-text="Screenshot showing the settings icon in the global page header.":::
-> [!NOTE]
-> We're in the process of moving all users to the newest settings experience described in this topic. For information about the older experience, see [Manage Azure portal settings and preferences (older version)](original-preferences.md).
- ## Directories + subscriptions The **Directories + subscriptions** page lets you manage directories and set subscription filters.
azure-resource-manager Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate.md
In the _convert_ phase of migrating your resources to Bicep, the goal is to capt
The convert phase consists of two steps, which you complete in sequence:
-1. **Capture a representation of your Azure resources.** If you have an existing JSON template that you're converting to Bicep, the first step is easy - you already have your source template. If you're converting Azure resources that were deployed by using the portal or another tool, you need to capture the resource definitions. You can capture a JSON representation of your resources using the Azure portal, Azure CLI, or Azure PowerShell cmdlets to *export* single resources, multiple resources, and entire resource groups. You can use the **Import Resource** command within Visual Studio Code to import a Bicep representation of your Azure resource.
+1. **Capture a representation of your Azure resources.** If you have an existing JSON template that you're converting to Bicep, the first step is easy - you already have your source template. If you're converting Azure resources that were deployed by using the portal or another tool, you need to capture the resource definitions. You can capture a JSON representation of your resources using the Azure portal, Azure CLI, or Azure PowerShell cmdlets to *export* single resources, multiple resources, and entire resource groups. You can use the **Insert Resource** command within Visual Studio Code to import a Bicep representation of your Azure resource.
1. **If required, convert the JSON representation to Bicep using the _decompile_ command.** [The Bicep tooling includes the `decompile` command to convert templates.](decompile.md) You can invoke the `decompile` command from either the Azure CLI, or from the Bicep CLI. The decompilation process is a best-effort process and doesn't guarantee a full mapping from JSON to Bicep. You may need to revise the generated Bicep file to meet your template best practices before using the file to deploy resources.
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 10/20/2021 Last updated : 03/23/2022 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.AlertsManagement
+* prometheusRuleGroups
* resourceHealthAlertRules * smartDetectorAlertRules
Some resources have a limit on the number instances per region. This limit is di
* galleries/images/versions * images * snapshots
-* virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by contacting support.
* virtualMachines
-* virtualMachines/extensions - Supports an unlimited number of VM extension instances.
+* virtualMachines/extensions
+* virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by contacting support.
## Microsoft.ContainerInstance
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.DevTestLab
+* labs/virtualMachines - By default, limited to 800 instances. That limit can be increased by contacting support.
* schedules
-## Microsoft.EnterpriseKnowledgeGraph
+## Microsoft.EdgeOrder
-* services
+* orderItems
+* orders
## Microsoft.EventHub
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.HybridCompute * machines - Supports up to 5,000 instances.
-* machines/extensions - Supports an unlimited number of VM extension instances.
+* machines/extensions
## microsoft.insights
Some resources have a limit on the number instances per region. This limit is di
* applicationGatewayWebApplicationFirewallPolicies * applicationSecurityGroups * bastionHosts
+* customIpPrefixes
* ddosProtectionPlans
+* dnsForwardingRulesets
+* dnsForwardingRulesets/forwardingRules
+* dnsForwardingRulesets/virtualNetworkLinks
+* dnsResolvers
+* dnsResolvers/inboundEndpoints
+* dnsResolvers/outboundEndpoints
* dnszones * dnszones/A * dnszones/AAAA
+* dnszones/all
* dnszones/CAA * dnszones/CNAME * dnszones/MX * dnszones/NS * dnszones/PTR
+* dnszones/recordsets
* dnszones/SOA * dnszones/SRV * dnszones/TXT
-* dnszones/all
-* dnszones/recordsets
+* expressRouteCrossConnections
* networkIntentPolicies * networkInterfaces
+* networkSecurityGroups
* privateDnsZones * privateDnsZones/A * privateDnsZones/AAAA
+* privateDnsZones/all
* privateDnsZones/CNAME * privateDnsZones/MX * privateDnsZones/PTR * privateDnsZones/SOA * privateDnsZones/SRV * privateDnsZones/TXT
-* privateDnsZones/all
* privateDnsZones/virtualNetworkLinks
+* privateEndpointRedirectMaps
* privateEndpoints * privateLinkServices * publicIPAddresses * serviceEndpointPolicies * trafficmanagerprofiles
+* virtualNetworks/privateDnsZoneLinks
* virtualNetworkTaps
-## Microsoft.PortalSdk
-
-* rootResources
- ## Microsoft.PowerBI * workspaceCollections - By default, limited to 800 instances. That limit can be increased by contacting support.
Some resources have a limit on the number instances per region. This limit is di
* namespaces
-## Microsoft.Scheduler
-
-* jobcollections
- ## Microsoft.ServiceBus * namespaces
Some resources have a limit on the number instances per region. This limit is di
* accounts/groupPolicies * accounts/jobs * accounts/models
+* accounts/networks
* accounts/storageContainers ## Microsoft.Sql
+* instancePools
+* managedInstances
+* managedInstances/databases
+* managedInstances/metricDefinitions
+* managedInstances/metrics
+* managedInstances/sqlAgent
+* servers
* servers/databases
+* servers/databases/databaseState
+* servers/elasticpools
+* servers/jobAccounts
+* servers/jobAgents
+* virtualClusters
## Microsoft.Storage * storageAccounts
+## Microsoft.StoragePool
+
+* diskPools
+* diskPools/iscsiTargets
+ ## Microsoft.StreamAnalytics * streamingjobs - By default, limited to 800 instances. That limit can be increased by contacting support.
+## Microsoft.Web
+
+* apiManagementAccounts/apis
+* sites
+ ## Next steps For a complete list of quotas and limits, see [Azure subscription and service limits, quotas, and constraints](azure-subscription-service-limits.md).
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 03/15/2022 Last updated : 03/23/2022
You apply tags to your Azure resources, resource groups, and subscriptions to lo
For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
-Resource tags support all cost-accruing services. You can use Azure Policy to ensure that cost-accruing services are provisioned with a tag by using one of the many different [tag policies](/azure/azure-resource-manager/management/tag-policies).
+Resource tags support all cost-accruing services. To ensure that cost-accruing services are provisioned with a tag, use one of the [tag policies](tag-policies.md).
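As a quick, hedged illustration of applying tags directly (not from this article; the resource ID and tag values are placeholders), you can merge tags onto an existing resource with Azure PowerShell:

```powershell
# Merge tags onto an existing resource without overwriting the tags it already has.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account>"
Update-AzTag -ResourceId $resourceId -Tag @{ CostCenter = "12345"; Environment = "Production" } -Operation Merge
```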
> [!WARNING] > Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, tag taxonomies, deployment histories, exported templates, and monitoring logs.
The following limitations apply to tags:
* Tag names can't contain these characters: `<`, `>`, `%`, `&`, `\`, `?`, `/` > [!NOTE]
- > * Azure DNS zones and Traffic Manager doesn't support the use of spaces in the tag or a tag that starts with a number.
- > * Azure DNS tag names do not support special and unicode characters. The value can contain all characters.
+ > * Azure DNS zones don't support the use of spaces in the tag or a tag that starts with a number. Azure DNS tag names don't support special or Unicode characters. The value can contain all characters.
+ >
+ > * Traffic Manager doesn't support the use of spaces, `#` or `:` in the tag name. The tag name can't start with a number.
> > * Azure Front Door doesn't support the use of `#` or `:` in the tag name. >
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/recovery-using-backups.md
The following options are available for database recovery by using [automated da
- Create a new database on the same server, recovered to a specified point in time within the retention period. - Create a database on the same server, recovered to the deletion time for a deleted database. - Create a new database on any server in the same region, recovered to the point of the most recent backups.-- Create a new database on any server in any other region, recovered to the point of the most recent replicated backups. Cross-region and cross-subscription restore for SQL Managed Instance isn't currently supported
+- Create a new database on any server in any other region, recovered to the point of the most recent replicated backups. Cross-region and cross-subscription point-in-time restore for SQL Managed Instance isn't currently supported.
If you configured [backup long-term retention](long-term-retention-overview.md), you can also create a new database from any long-term retention backup on any server.
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/log-replay-service-migrate.md
Previously updated : 03/22/2022 Last updated : 03/23/2022 # Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)
The SAS authentication is generated with the time validity that you specified. Y
:::image type="content" source="./media/log-replay-service-migrate/lrs-generated-uri-token.png" alt-text="Screenshot that shows an example of the U R I version of an S A S token.":::
+ > [!NOTE]
+ > SAS tokens with permissions defined through a [stored access policy](https://docs.microsoft.com/rest/api/storageservices/define-stored-access-policy.md) are not supported at this time. Follow the instructions in this guide to manually specify Read and List permissions for the SAS token.
+ ### Copy parameters from the SAS token Before you use the SAS token to start LRS, you need to understand its structure. The URI of the generated SAS token consists of two parts separated with a question mark (`?`), as shown in this example:
Functional limitations of LRS are:
- System-managed software patches are blocked for 36 hours once the LRS has been started. After this time window expires, the next software maintenance update will stop LRS. You will need to restart LRS from scratch. - LRS requires databases on SQL Server to be backed up with the `CHECKSUM` option enabled. - The SAS token that LRS will use must be generated for the entire Azure Blob Storage container, and it must have Read and List permissions only. For example, if you grant Read, List and Write permissions, LRS will not be able to start because of the extra Write permission.
+- SAS tokens with permissions defined through a [stored access policy](https://docs.microsoft.com/rest/api/storageservices/define-stored-access-policy.md) are not supported at this time. Follow the instructions in this guide to manually specify Read and List permissions for the SAS token (a PowerShell sketch follows this list).
- Backup files containing % and $ characters in the file name cannot be consumed by LRS. Consider renaming such file names. - Backup files for different databases must be placed in separate folders on Blob Storage in a flat-file structure. Nested folders inside individual database folders are not supported. - LRS must be started separately for each database pointing to the full URI path containing an individual database folder.
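A hypothetical PowerShell sketch of generating a container-level SAS token with only the Read and List permissions that LRS expects (storage account name, key, and container name are placeholders):

```powershell
# Create a container-scoped SAS token with Read and List permissions only.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
New-AzStorageContainerSASToken -Context $ctx -Name "<container>" -Permission rl `
    -ExpiryTime (Get-Date).AddDays(7) -FullUri
```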
azure-sql Managed Instance Link Use Scripts To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-failover-database.md
echo $uriFull
# Build API request body #
-$bodyFull = @"
-{
- "properties":{
- "ReplicationMode":"sync"
- }
-}"@
+$bodyFull = "{`"properties`":{`"ReplicationMode`":`"sync`"}}"
echo $bodyFull
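# Editorial sketch (not part of the original script): an equivalent, escape-free way to build the same body:
# $bodyFull = @{ properties = @{ ReplicationMode = "sync" } } | ConvertTo-Json -Compress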
azure-sql Managed Instance Link Use Scripts To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-replicate-database.md
$SubscriptionID = "<YourSubscriptionID>"
# Enter your Managed Instance name - example "sqlmi1" $ManagedInstanceName = "<YourManagedInstanceName>"
-# Insert the cert public key blob you got from the SQL Server
+# Enter name for the server trust certificate - example "Cert_sqlserver1_endpoint"
+$certificateName = "<YourServerTrustCertificateName>"
+
+# Insert the cert public key blob you got from the SQL Server - example "0x1234567..."
$PublicKeyEncoded = "<PublicKeyEncoded>" # ===============================================================================
Select-AzSubscription -SubscriptionName $SubscriptionID
# Build URI for the API call. # $miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
-$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/hybridCertificate?api-version=2020-11-01-preview"
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/serverTrustCertificates/" + $certificateName + "?api-version=2021-08-01-preview"
echo $uriFull # Build API request body. #
-$bodyFull = @"
-{
- "properties":{ "PublicBlob":"$PublicKeyEncoded" }
-}"@
+$bodyFull = "{ `"properties`":{ `"PublicBlob`":`"$PublicKeyEncoded`" } }"
echo $bodyFull
$headers.Add("Authorization", "Bearer "+"$authToken")
# Invoke API call #
-Invoke-WebRequest -Method POST -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
+Invoke-WebRequest -Method PUT -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
``` The result of this operation will be time stamp of the successful upload of the SQL Server certificate private key to Managed Instance.
GO
> If you get the Error 1475, you'll have to create a full backup without the COPY ONLY option, which will start a new backup chain. > As a best practice, it's highly recommended that the collation on SQL Server and SQL Managed Instance is the same, because depending on collation settings, AG and DAG names might or might not be case sensitive. If there's a mismatch, there could be issues connecting SQL Server to Managed Instance.
+Replace `<DAGName>` with the name of your distributed availability group. When replicating several databases, one availability group and one distributed availability group are needed for each database, so consider naming each item accordingly - for example, `DAG_<db_name>`. Replace `<AGName>` with the name of the availability group created in the previous step. Replace `<SQLServerIP>` with the IP address of the SQL Server from the previous step. Alternatively, a resolvable SQL Server host machine name can be used, but you need to make sure that the name is resolvable from the SQL Managed Instance virtual network. Replace `<ManagedInstanceName>` with the short name of your SQL Managed Instance. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name of your SQL Managed Instance.
+
+```sql
+-- Execute on SQL Server
+-- Create DAG for AG and database
+-- ManagedInstanceName example 'sqlmi1'
+-- ManagedInstanceFQDN example 'sqlmi1.73d19f36a420a.database.windows.net'
+USE MASTER
+CREATE AVAILABILITY GROUP [<DAGName>]
+ WITH (DISTRIBUTED)
+ AVAILABILITY GROUP ON
+ '<AGName>' WITH
+ (
+ LISTENER_URL = 'TCP://<SQLServerIP>:5022',
+ AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
+ FAILOVER_MODE = MANUAL,
+ SEEDING_MODE = AUTOMATIC,
+ SESSION_TIMEOUT = 20
+ ),
+ '<ManagedInstanceName>' WITH
+ (
+ LISTENER_URL = 'tcp://<ManagedInstanceFQDN>:5022;Server=[<ManagedInstanceName>]',
+ AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
+ FAILOVER_MODE = MANUAL,
+ SEEDING_MODE = AUTOMATIC
+ );
+GO
+```
+ ### Verify AG and distributed AG Use the following script to list all available Availability Groups and Distributed Availability Groups on the SQL Server. Availability Group state needs to be connected, and Distributed Availability Group state disconnected at this point. Distributed Availability Group state will move to `connected` only when it has been joined with SQL Managed Instance. This will be explained in one of the next steps.
Use the following script to list all available Availability Groups and Distribut
```sql -- Execute on SQL Server -- This will show that Availability Group and Distributed Availability Group have been created on SQL Server.
-SELECT
- name, is_distributed, cluster_type_desc,
- sequence_number, is_contained
-FROM
- sys.availability_groups
+SELECT * FROM sys.availability_groups
``` Alternatively, in SSMS object explorer, expand the "Always On High Availability", then "Availability Groups" folder to show available Availability Groups and Distributed Availability Groups.
$DAGName = "<DAGName>"
# Enter database name that was placed in Availability Group for replication $DatabaseName = "<DatabaseName>" # Enter SQL Server address
-$ SQLServerAddress = "<SQLServerAddress>"
+$SQLServerAddress = "<SQLServerAddress>"
# ============================================================================= # INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-build-chat.md
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
# [JavaScript](#tab/javascript)
-We'll use [express.js](https://expressjs.com/), a popular web framework for node.js to achieve this job.
+We'll use [express.js](https://expressjs.com/), a popular web framework for Node.js, to do this.
First create an empty express app.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 03/14/2022 Last updated : 03/23/2022
Currently, secondary region [RPO](azure-backup-glossary.md#rpo-recovery-point-ob
>- The Azure roles needed to restore in the secondary region are the same as those in the primary region. >- While restoring an Azure VM, Azure Backup configures the virtual network settings in the secondary region automatically. If you are [restoring disks](#restore-disks) while deploying the template, ensure to provide the virtual network settings, corresponding to the secondary region. >- If VNet/Subnet is not available in the primary region or is not configured in the secondary region, Azure portal doesn't auto-populate any default values during restore operation.
+>- For Cross Region Restores, the **Staging Location** (that is, the storage account location) must be in the region that the Recovery Services vault treats as the *secondary* region. For example, a Recovery Services vault is located in the East US 2 region (with Geo-Redundancy and Cross Region Restore enabled). This means that the *secondary* region is *Central US*. Therefore, you need to create a storage account in *Central US* to perform a Cross Region Restore of the VM (see the sketch after this note). <br> Learn more about [Azure cross-region replication pairings for all geographies](../availability-zones/cross-region-replication-azure.md).
++ [Azure zone pinned VMs](../virtual-machines/windows/create-portal-availability-zone.md) can be restored in any [availability zones](../availability-zones/az-overview.md) of the same region.
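A minimal sketch of creating that staging storage account in the secondary region, assuming the Az.Storage module and placeholder names:

```powershell
# Create the staging storage account in the vault's secondary region (Central US in the example above).
New-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<stagingstorageacct>" `
    -Location "centralus" -SkuName "Standard_LRS"
```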
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
+
+ Title: 'Quickstart: Create a profile and endpoint - Bicep'
+
+description: In this quickstart, learn how to create an Azure Content Delivery Network profile and endpoint by using a Bicep file
+++
+ na
++ Last updated : 03/14/2022+++
+# Quickstart: Create an Azure CDN profile and endpoint - Bicep
+
+Get started with Azure Content Delivery Network (CDN) by using a Bicep file. The Bicep file deploys a profile and an endpoint.
++
+## Prerequisites
++
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cdn-with-custom-origin/).
+
+This Bicep file is configured to create a:
+
+* Profile
+* Endpoint
++
+One Azure resource is defined in the Bicep file:
+
+* **[Microsoft.Cdn/profiles](/azure/templates/microsoft.cdn/profiles)**
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters profileName=<profile-name> endpointName=<endpoint-name> originURL=<origin-url>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -profileName "<profile-name>" -endpointName "<endpoint-name>" -originURL "<origin-url>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<profile-name\>** with the name of the CDN profile. Replace **\<endpoint-name\>** with a unique CDN Endpoint name. Replace **\<origin-url\>** with the URL of the origin.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group. Verify that an Endpoint and CDN profile were created in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+### Azure CLI
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and all of the resources it contains.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a:
+
+* CDN Profile
+* Endpoint
+
+To learn more about Azure CDN, continue to the article below.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Use CDN to serve static content from a web app](cdn-add-to-web-app.md)
cognitive-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md
If you've created a custom voice font, use the endpoint that you've created. You
| Brazil South | `https://brazilsouth.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` | | Canada Central | `https://canadacentral.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` | | Central US | `https://centralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| China East 2 | `https://chinaeast2.voice.speech.azure.cn/cognitiveservices/v1?deploymentId={deploymentId}` |
| East Asia | `https://eastasia.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` | | East US | `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` | | East US 2 | `https://eastus2.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
confidential-ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/overview.md
The data to the Ledger is sent through TLS 1.2 connection and the TLS 1.2 connec
### Ledger storage
-Confidential ledgers are created as blocks in blob storage containers belonging to an Azure Storage account. Transaction data can either be stored encrypted or in plaintext depending on your needs. When you create a Ledger, you will associate a Storage Account using the steps described in [Register a confidential ledger Service Principal](register-ledger-service-principal.md).
+Confidential ledgers are created as blocks in blob storage containers belonging to an Azure Storage account. Transaction data can either be stored encrypted or in plaintext depending on your needs.
The confidential ledger can be managed by administrators utilizing Administrative APIs (Control Plane), and can be called directly by your application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete.
confidential-ledger Register Ledger Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/register-ledger-resource-provider.md
Before using Azure confidential ledger, you must first register the Azure confid
## Next Steps -- [Overview of Microsoft Azure confidential ledger](overview.md)-- [Register a confidential ledger service principal](register-ledger-service-principal.md)
+- [Overview of Microsoft Azure confidential ledger](overview.md)
confidential-ledger Register Ledger Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/register-ledger-service-principal.md
- Title: Register a Ledger Service Principal with Microsoft Azure confidential ledger
-description: Register a Ledger Service Principal with Microsoft Azure confidential ledger
---- Previously updated : 04/15/2021---
-# Register a confidential ledger service principal
-
-To associate a storage account with your confidential ledger, you must first register confidential ledger service principal.
-
-## Create a service principal
-
-To create a service principal, run the Azure CLI [az ad sp create](/cli/azure/ad/sp#az_ad_sp_create) command or the Azure PowerShell [Connect-AzureAD](/powershell/module/azuread/connect-azuread) and [New-AzureADServicePrincipal](/powershell/module/azuread/new-azureadserviceprincipal) cmdlets.
-
-# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
-az ad sp create --id 4353526e-1c33-4fcf-9e82-9683edf52848
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell-interactive
-Connect-AzureAD -TenantId "<tenant-id-of-customer>"
-New-AzureADServicePrincipal -AppId 4353526e-1c33-4fcf-9e82-9683edf52848 -DisplayName ConfidentialLedger
-```
--
-## Assign roles
-
-Set the IAM "[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)" for the confidential ledger service principal for the Storage Account. You can do so with the Azure CLI [az role assignment create](/cli/azure/role/assignment) command or the Azure PowerShell [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) cmdlet.
-
-# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
-az role assignment create --role "Storage Blob Data Contributor" --assignee "4353526e-1c33-4fcf-9e82-9683edf52848" --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell-interactive
-New-AzRoleAssignment -ApplicationId 4353526e-1c33-4fcf-9e82-9683edf52848 -RoleDefinitionName "Storage Blob Data Contributor" -Scope "/subscriptions/<subscription-id>/resourceGroups/sample-resource-group/providers/Microsoft.Storage/storageAccounts/<storage-account>"
-```
--
-## Next steps
--- [Overview of Microsoft Azure confidential ledger](overview.md)
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
NSG rules are used to limit connectivity to and from a subnet with virtual netwo
### Are service endpoints available for all VNets? No. Only Azure Resource Manager virtual networks can have service endpoints enabled. Classic virtual networks don't support service endpoints.
-### Can I "Accept connections from within public Azure datacenters" when service endpoint access is enabled for Azure Cosmos DB?
-This is required only when you want your Azure Cosmos DB account to be accessed by other Azure first party services like Azure Data factory, Azure Cognitive Search or any service that is deployed in given Azure region.
+### When should I "Accept connections from within public Azure datacenters" for an Azure Cosmos DB account?
+This setting should only be enabled when you want your Azure Cosmos DB account to be accessible to any Azure service in any Azure region. Other Azure first party services such as Azure Data Factory and Azure Cognitive Search provide documentation for how to secure access to data sources including Azure Cosmos DB accounts, for example:
+
+* [Azure Data Factory Managed Virtual Network](../data-factory/managed-virtual-network-private-endpoint.md)
+* [Azure Cognitive Search Indexer access to protected resources](../search/search-indexer-securing-resources.md)
## Next steps
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
The API for MongoDB [supports a variety of indexes](mongodb-indexing.md) to enab
## Client-side field level encryption
-Client-level field encryption is a driver feature and is compatible with the API for MongoDB. Explicit encryption - were the driver explicitly encrypts each field when written is supported. Explicit decryption and automatic decryption is supported.
+Client-side field level encryption is a driver feature and is compatible with the API for MongoDB. Explicit encryption, where the driver explicitly encrypts each field when it's written, is supported. Automatic encryption is not supported. Explicit decryption and automatic decryption are supported.
The mongocryptd process should not be run, since it isn't needed to perform any of the supported operations.
cosmos-db Sql Api Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-get-started.md
ms.devlang: csharp Previously updated : 08/26/2021 Last updated : 03/23/2022
Welcome to the Azure Cosmos DB SQL API get started tutorial. After following this tutorial, you'll have a console application that creates and queries Azure Cosmos DB resources.
-This tutorial uses version 3.0 or later of the [Azure Cosmos DB .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos). You can work with [.NET Framework or .NET Core](https://dotnet.microsoft.com/download).
+This tutorial uses version 3.0 or later of the [Azure Cosmos DB .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) and [.NET 6](https://dotnet.microsoft.com/download).
This tutorial covers:
Let's create an Azure Cosmos DB account. If you already have an account you want
## <a id="SetupVS"></a>Step 2: Set up your Visual Studio project 1. Open Visual Studio and select **Create a new project**.
-1. In **Create a new project**, choose **Console App (.NET Framework)** for C#, then select **Next**.
+1. In **Create a new project**, choose **Console App** for C#, then select **Next**.
1. Name your project *CosmosGettingStartedTutorial*, and then select **Create**.-
- :::image type="content" source="./media/sql-api-get-started/configure-cosmos-getting-started-2019.png" alt-text="Configure your project":::
- 1. In the **Solution Explorer**, right-click your new console application, which is under your Visual Studio solution, and select **Manage NuGet Packages**. 1. In the **NuGet Package Manager**, select **Browse** and search for *Microsoft.Azure.Cosmos*. Choose **Microsoft.Azure.Cosmos** and select **Install**.
A database is the logical container of items partitioned across containers. Eith
[!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=CreateDatabaseAsync&highlight=7)]
- `CreateDatabaseAsync` creates a new database with ID `FamilyDatabase` if it doesn't already exist, that has the ID specified from the `databaseId` field.
+   `CreateDatabaseAsync` creates a new database with the ID `FamilyDatabase`, taken from the `databaseId` field, if it doesn't already exist. For the purposes of this demo the database is created as part of the exercise, but in production applications it's [not recommended to do this as part of the normal flow](troubleshoot-dot-net-sdk-slow-request.md#metadata-operations).
1. Copy and paste the code below where you instantiate the CosmosClient to call the **CreateDatabaseAsync** method you just added.
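For reference only, a minimal, self-contained sketch of the database-creation call, assuming the Microsoft.Azure.Cosmos v3 SDK and placeholder endpoint, key, and database ID values, might look like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class Program
{
    // Placeholder values; replace with your Azure Cosmos DB account endpoint and primary key.
    private const string EndpointUri = "https://<your-account>.documents.azure.com:443/";
    private const string PrimaryKey = "<your-primary-key>";
    private const string DatabaseId = "FamilyDatabase";

    static async Task Main()
    {
        using CosmosClient cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);

        // Creates the database if it doesn't already exist; otherwise returns the existing one.
        Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync(DatabaseId);
        Console.WriteLine("Created database: {0}", database.Id);
    }
}
```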
A database is the logical container of items partitioned across containers. Eith
Program p = new Program(); await p.GetStartedDemoAsync(); }
- catch (CosmosException de)
+ catch (CosmosException cosmosException)
{
- Exception baseException = de.GetBaseException();
- Console.WriteLine("{0} error occurred: {1}\n", de.StatusCode, de);
+ Console.WriteLine("Cosmos Exception with Status {0} : {1}\n", cosmosException.StatusCode, cosmosException);
} catch (Exception e) {
Congratulations! You've successfully created an Azure Cosmos database.
A container can be created by using either the [**CreateContainerIfNotExistsAsync**](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync#Microsoft_Azure_Cosmos_Database_CreateContainerIfNotExistsAsync_Microsoft_Azure_Cosmos_ContainerProperties_System_Nullable_System_Int32__Microsoft_Azure_Cosmos_RequestOptions_System_Threading_CancellationToken_) or [**CreateContainerAsync**](/dotnet/api/microsoft.azure.cosmos.database.createcontainerasync#Microsoft_Azure_Cosmos_Database_CreateContainerAsync_Microsoft_Azure_Cosmos_ContainerProperties_System_Nullable_System_Int32__Microsoft_Azure_Cosmos_RequestOptions_System_Threading_CancellationToken_) method in the `CosmosDatabase` class. A container consists of items (JSON documents if SQL API) and associated server-side application logic in JavaScript, for example, stored procedures, user-defined functions, and triggers.
-1. Copy and paste the `CreateContainerAsync` method below your `CreateDatabaseAsync` method. `CreateContainerAsync` creates a new container with the ID `FamilyContainer` if it doesn't already exist, by using the ID specified from the `containerId` field partitioned by `LastName` property.
+1. Copy and paste the `CreateContainerAsync` method below your `CreateDatabaseAsync` method. `CreateContainerAsync` creates a new container with the ID `FamilyContainer` (the value of the `containerId` field), partitioned by the `LastName` property, if it doesn't already exist. For the purposes of this demo the container is created as part of the exercise, but in production applications it's [not recommended to do this as part of the normal flow](troubleshoot-dot-net-sdk-slow-request.md#metadata-operations). A minimal sketch of the call appears after the code reference below.
[!code-csharp[](~/cosmos-dotnet-getting-started/CosmosGettingStartedTutorial/Program.cs?name=CreateContainerAsync&highlight=9)]
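Similarly, a minimal sketch of the container-creation call, again assuming the v3 SDK and placeholder account values, might look like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class Program
{
    // Placeholder values; replace with your Azure Cosmos DB account endpoint and primary key.
    private const string EndpointUri = "https://<your-account>.documents.azure.com:443/";
    private const string PrimaryKey = "<your-primary-key>";

    static async Task Main()
    {
        using CosmosClient cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
        Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync("FamilyDatabase");

        // Creates the container partitioned on the LastName property if it doesn't
        // already exist; otherwise returns the existing container.
        Container container = await database.CreateContainerIfNotExistsAsync("FamilyContainer", "/LastName");
        Console.WriteLine("Created container: {0}", container.Id);
    }
}
```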
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Previously updated : 02/23/2022 Last updated : 03/22/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
Appending data is the default behavior of this Azure SQL Database sink connector
### Upsert data
-Copy activity now supports natively loading data into a database temporary table and then update the data in sink table if key exists and otherwise insert new data.
+Copy activity now natively supports loading data into a database temporary table and then updating the data in the sink table if a key exists, or otherwise inserting new data. To learn more about upsert settings in copy activities, see [Azure SQL Database as the sink](#azure-sql-database-as-the-sink).
+ ### Overwrite the entire table
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 02/08/2022 Last updated : 03/22/2022 # Copy and transform data in Azure SQL Managed Instance using Azure Data Factory or Synapse Analytics
Appending data is the default behavior of the SQL Managed Instance sink connecto
### Upsert data
-Copy activity now supports natively loading data into a database temporary table and then update the data in sink table if key exists and otherwise insert new data.
+Copy activity now natively supports loading data into a database temporary table and then updating the data in the sink table if a key exists, or otherwise inserting new data. To learn more about upsert settings in copy activities, see [SQL Managed Instance as a sink](#sql-managed-instance-as-a-sink).
### Overwrite the entire table
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
Previously updated : 03/10/2022 Last updated : 03/22/2022 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
Appending data is the default behavior of this SQL Server sink connector. the se
### Upsert data
-Copy activity now supports natively loading data into a database temporary table and then update the data in sink table if key exists and otherwise insert new data.
+Copy activity now natively supports loading data into a database temporary table and then updating the data in the sink table if a key exists, or otherwise inserting new data. To learn more about upsert settings in copy activities, see [SQL Server as a sink](#sql-server-as-a-sink).
### Overwrite the entire table
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
Use the Data Flow activity to transform and move data via mapping data flows. If
To use a Data Flow activity in a pipeline, complete the following steps: 1. Search for _Data Flow_ in the pipeline Activities pane, and drag a Data Flow activity to the pipeline canvas.
-1. Select the new Data Flow activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details.
+1. Select the new Data Flow activity on the canvas if it is not already selected, and then select its **Settings** tab to edit its details.
:::image type="content" source="media/control-flow-execute-data-flow-activity/data-flow-activity.png" alt-text="Shows the UI for a Data Flow activity.":::
+1. The checkpoint key is used to set the checkpoint when the data flow is used for changed data capture. You can overwrite it. Data flow activities use a GUID value as the checkpoint key instead of "pipeline name + activity name" so that the customer's change data capture state is always tracked, even after renaming actions. All existing data flow activities will use the old pattern key for backward compatibility. The checkpoint key option that appears after you publish a new data flow activity with a change data capture enabled data flow resource is shown below.
-1. Select an existing data flow or create a new one using the New button. Select other options as required to complete your configuration.
+ :::image type="content" source="media/control-flow-execute-data-flow-activity/data-flow-activity-checkpoint.png" alt-text="Shows the UI for a Data Flow activity with checkpoint key.":::
+3. Select an existing data flow or create a new one using the New button. Select other options as required to complete your configuration.
## Syntax
databox-online Azure Stack Edge Gpu Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-checklist.md
Previously updated : 02/23/2022 Last updated : 03/22/2022 zone_pivot_groups: azure-stack-edge-device-deployment
Use the following checklist to ensure you have this information after you've p
|--|-|-| | Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> | | Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |
-| | <ul><li>At least one 1-GbE RJ-45 network cable for Port 1 </li><li>At least one 25/10-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| | <ul><li>At least one 1-GbE RJ-45 network cable for Port 1 </li><li>At least one 25/10-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
| Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. |[Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. |
-| First-time device connection | Laptop whose IPv4 settings can be changed. <!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. |
+| First-time device connection | Laptop whose IPv4 settings can be changed. <!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| If connecting Port 1 directly to a laptop (without a switch), use an Ethernet crossover cable or a USB to Ethernet adaptor. |
| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. | | Network settings | Device comes with 2 x 1-GbE, 4 x 25-GbE network ports. <ul><li>Port 1 is used for initial configuration only. One or more data ports can be connected and configured. </li><li>At least one data network interface from among Port 2 - Port 6 needs to be connected to the Internet (with connectivity to Azure).</li><li>DHCP and static IPv4 configuration supported.</li></ul> | Static IPv4 configuration requires IP, DNS server, and default gateway. | | Advanced networking settings | <ul><li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li></ul>| Only static IPv4 configuration is supported.|
Use the following checklist to ensure you have this information after you've p
|--|-|-| | Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> | | Device installation | Four power cables for the two device nodes in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |
-| | <ul><li>At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes </li><li>You would need two 1-GbE RJ-45 network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you also need SFP+ copper cables to connect Port 3 and Port 4 across the device nodes and also from device nodes to the switches. See the [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies).</li></ul> | Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| | <ul><li>At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes </li><li>You would need two 1-GbE RJ-45 network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you also need SFP+ copper cables to connect Port 3 and Port 4 across the device nodes and also from device nodes to the switches. See the [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies).</li></ul> | Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
| First-time device connection | Laptop whose IPv4 settings can be changed.<!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->|This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. | | Network settings | Each device node has 2 x 1-GbE, 4 x 25-GbE network ports. <ul><li>Port 1 is used for initial configuration only.</li><li>Port 2 must be connected to the Internet (with connectivity to Azure). Port 3 and Port 4 must be configured and connected across the two device nodes in accordance with the network topology you intend to deploy. You can choose from one of the three [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies).</li><li>DHCP and static IPv4 configuration supported.</li></ul> | Static IPv4 configuration requires IP, DNS server, and default gateway. |
databox-online Azure Stack Edge Gpu Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-connect.md
Previously updated : 02/22/2022 Last updated : 03/21/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect to Azure Stack Edge Pro GPU so I can use it to transfer data to Azure.
Before you configure and set up your Azure Stack Edge Pro GPU device, make sure
1. Configure the Ethernet adapter on your computer to connect to the Azure Stack Edge Pro device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
-2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use a crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
+2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
![Backplane of a cabled device](./media/azure-stack-edge-gpu-deploy-install/two-pci-slots.png)
You're now at the **Overview** page of your device. The next step is to configur
1. Configure the Ethernet adapter on your computer to connect to the first node of your Azure Stack Edge device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
-1. Connect the computer to PORT 1 on the first node of your 2-node device. If connecting the computer to the device directly (without a switch), use a crossover cable or a USB Ethernet adapter.
+1. Connect the computer to PORT 1 on the first node of your 2-node device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter.
1. Open a browser window and access the local web UI of the device at `https://192.168.100.10`. This action may take a few minutes after you've turned on the device.
databox-online Azure Stack Edge Mini R Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-checklist.md
Previously updated : 02/23/2022 Last updated : 03/22/2022 # Deployment checklist for your Azure Stack Edge Mini R device
Use the following checklist to ensure you have this information after you have p
|--|-|-| | Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge Mini R/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> | | Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |
-| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li>At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li>At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
| Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. | [Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. | | First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor.<!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
databox-online Azure Stack Edge Mini R Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-connect.md
Previously updated : 02/22/2022 Last updated : 03/21/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Mini R so I can use it to transfer data to Azure.
Before you configure and set up your Azure Stack Edge device, make sure that:
1. Configure the Ethernet adapter on your computer to connect to the Azure Stack Edge Pro device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
-2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use a crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
+2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
![Cabling for Wi-Fi](./media/azure-stack-edge-mini-r-deploy-install/wireless-cabled.png)
databox-online Azure Stack Edge Mini R Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-install.md
Previously updated : 03/18/2021 Last updated : 03/22/2021 # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Mini R device in datacenter so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Pro 2 Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-checklist.md
Previously updated : 03/04/2022 Last updated : 03/22/2022 zone_pivot_groups: azure-stack-edge-device-deployment
Use the following checklist to ensure you have this information after you've p
|--|-|--| | Device management | - Azure subscription <br> - Resource providers registered <br> - Azure Storage account|Enabled for Azure Stack Edge, owner or contributor access. <br> - In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads. <br> - Need access credentials</li> | | Device installation | One power cable in the package per device node. <!--<br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped.--> | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
-| | <br> - At least two 1-GbE RJ-45 network cable for Port 1 on the two device nodes <br> - You would need two 1-GbE network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you may also need at least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) to connect Port 3 and Port 4 across the device nodes. <br> - You would also need at least one 10/1-GbE network switch to connect Port 1 and Port 2. You would need a 100/10-GbE switch to connect Port 3 or Port 4 network interface to the Internet for data.| Customer needs to procure these cables and switches. Exact number of cables and switches would depend on the network topology that you deploy. <br><br> For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).|
+| | <br> - At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes <br> - You would need two 1-GbE network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you may also need at least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) to connect Port 3 and Port 4 across the device nodes. <br> - You would also need at least one 10/1-GbE network switch to connect Port 1 and Port 2. You would need a 100/10-GbE switch to connect Port 3 or Port 4 network interface to the Internet for data.| Customer needs to procure these cables and switches. Exact number of cables and switches would depend on the network topology that you deploy. <br><br> For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).|
| First-time device connection | Via a laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. | | Network settings | Device comes with 2 x 10/1-GbE network ports, Port 1 and Port 2. Device also has 2 x 100-GbE network ports, Port 3 and Port 4. <br> - Port 1 is used for initial configuration. Port 2, Port 3, and Port 4 are also connected and configured. <br> - At least one data network interface from among Port 2 - Port 4 needs to be connected to the Internet (with connectivity to Azure). <br> - DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
databox-online Azure Stack Edge Pro 2 Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-connect.md
Previously updated : 02/25/2022 Last updated : 03/21/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Before you configure and set up your device, make sure that:
1. Configure the Ethernet adapter on your computer to connect to your device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
-2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use a crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
+2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
![Back plane of a cabled device](./media/azure-stack-edge-pro-2-deploy-install/cabled-backplane-1.png)
You're now at the **Overview** page of your device. The next step is to configur
1. Configure the Ethernet adapter on your computer to connect to your device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
-2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use a crossover cable or a USB Ethernet adapter.
+2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use an Ethernet crossover cable or a USB Ethernet adapter.
3. Open a browser window and access the local web UI of the device at `https://192.168.100.10`. This action may take a few minutes after you've turned on the device.
databox-online Azure Stack Edge Pro 2 Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md
Previously updated : 03/04/2022 Last updated : 03/22/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro 2 in datacenter so I can use it to transfer data to Azure.
Before you start cabling your device, you need the following things:
- Your two Azure Stack Edge Pro 2 physical devices, unpacked, and rack mounted. - One power cable for each device node (included in the device package). - Access to one power distribution unit for each device node.-- At least two 1-GbE RJ-45 network cable per device to connect to Port 1 and Port2. These are the two 10/1-GbE network interfaces on your device.
+- At least two 1-GbE RJ-45 network cables per device to connect to Port 1 and Port 2. These are the two 10/1-GbE network interfaces on your device.
- A 100-GbE QSFP28 passive direct attached cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured on each device. The total number needed would depend on the network topology you will deploy. Here is an example QSFP28 DAC connector: ![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png)
Cable your device as shown in the following diagram:
![Diagram showing cabling scheme for Switchless network topology.](./media/azure-stack-edge-pro-2-deploy-install/switchless-initial-1.png)
-1. Connect Port 1 on each node to a computer using a crossover cable or a USB Ethernet adapter for the initial configuration of the device.
+1. Connect Port 1 on each node to a computer using an Ethernet crossover cable or a USB Ethernet adapter for the initial configuration of the device.
1. Connect Port 2 on each node to a 1-GbE switch via a 1-GbE RJ-45 network cable. If available, a 10-GbE switch can also be used. 1. Connect Port 3 on one device directly (without a switch) to the Port 3 on the other device node. Use a QSFP28 passive direct attached cable (tested in-house) for the connection. 1. Connect Port 4 on one device directly (without a switch) to the Port 4 on the other device node. Use a QSFP28 passive direct attached cable (tested in-house) for the connection.
databox-online Azure Stack Edge Pro R Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-checklist.md
Previously updated : 02/23/2022 Last updated : 03/22/2022 # Deployment checklist for your Azure Stack Edge Pro R device
Use the following checklist to ensure you have this information after you have p
|--|-|-| | Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge Pro/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> | | Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |
-| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4</li><ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4</li><ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
| Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. | [Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. | | First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. <!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
databox-online Azure Stack Edge Pro R Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-install.md
Previously updated : 10/18/2020 Last updated : 3/22/2022 # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro R in datacenter so I can use it to transfer data to Azure.
databox Data Box Deploy Copy Data Via Copy Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-copy-service.md
Previously updated : 11/08/2021 Last updated : 03/11/2021 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
To copy data by using the data copy service, you need to create a job:
|**Username** |Username in `\\<DomainName><UserName>` format to access the data source. If a local administrator is connecting, they will need explicit security permissions. Right-click the folder, select **Properties** and then select **Security**. This should add the local administrator in the **Security** tab. | |**Password** |Password to access the data source. | |**Destination storage account** |Select the target storage account to upload data to from the list. |
- |**Destination type** |Select the target storage type from the list: **Block Blob**, **Page Blob**, or **Azure Files**. |
+ |**Destination type** |Select the target storage type from the list: **Block Blob**, **Page Blob**, **Azure Files**, or **Block Blob (Archive)**. |
|**Destination container/share** |Enter the name of the container or share that you want to upload data to in your destination storage account. The name can be a share name or a container name. For example, use `myshare` or `mycontainer`. You can also enter the name in the format `sharename\directory_name` or `containername\virtual_directory_name`. | |**Copy files matching pattern** | You can enter the file-name matching pattern in the following two ways:<ul><li>**Use wildcard expressions:** Only `*` and `?` are supported in wildcard expressions. For example, the expression `*.vhd` matches all the files that have the `.vhd` extension. Similarly, `*.dl?` matches all the files with either the extension `.dl` or that start with `.dl`, such as `.dll`. Likewise, `*foo` matches all the files whose names end with `foo`.<br>You can directly enter the wildcard expression in the field. By default, the value you enter in the field is treated as a wildcard expression.</li><li>**Use regular expressions:** POSIX-based regular expressions are supported. For example, the regular expression `.*\.vhd` will match all the files that have the `.vhd` extension. For regular expressions, provide the `<pattern>` directly as `regex(<pattern>)`. For more information about regular expressions, go to [Regular expression language - a quick reference](/dotnet/standard/base-types/regular-expression-language-quick-reference).</li><ul>| |**File optimization** |When this feature is enabled, files smaller than 1 MB are packed during ingestion. This packing speeds up the data copy for small files. It also saves a significant amount of time when the number of files far exceeds the number of directories.</br>If you use file optimization:<ul><li>After you run prepare to ship, you can [download a BOM file](data-box-logs.md#inspect-bom-during-prepare-to-ship), which lists the original file names, to help you ensure that all the right files are copied.</li><li>Don't delete the packed files, whose file names begin with "ADB_PACK_". If you delete a packed file, the original file isn't uploaded during future data copies.</li><li>Don't copy the same files that you copy with the Copy Service via other protocols such as SMB, NFS, or REST API. Using different protocols can result in conflicts and failure during data uploads. </li></ul> |
databox Data Box Deploy Copy Data Via Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-nfs.md
Previously updated : 11/10/2021 Last updated : 03/11/2022 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
The following table shows the UNC path to the shares on your Data Box and Azure
| Azure Block blobs | <li>UNC path to shares: `//<DeviceIPAddress>/<StorageAccountName_BlockBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> | | Azure Page blobs | <li>UNC path to shares: `//<DeviceIPAddres>/<StorageAccountName_PageBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> | | Azure Files |<li>UNC path to shares: `//<DeviceIPAddres>/<StorageAccountName_AzFile>/<ShareName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
+| Azure Block blobs (Archive) | <li>UNC path to shares: `//<DeviceIPAddres>/<StorageAccountName_BlockBlobArchive>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
If you are using a Linux host computer, perform the following steps to configure Data Box to allow access to NFS clients.
databox Data Box Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data.md
Previously updated : 01/20/2022 Last updated : 03/17/2022 # Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
The following table shows the UNC path to the shares on your Data Box and Azure
|-|--| | Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<StorageAccountName_BlockBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> | | Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_PageBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Files |<li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_AzFile>\<ShareName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
+| Azure Files |<li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_AzFile>\<ShareName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
+| Azure Block blobs (Archive) | <li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_BlockBlobArchive>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
If using a Windows Server host computer, follow these steps to connect to the Data Box.
If using a Windows Server host computer, follow these steps to connect to the Da
- Azure Block blob - `\\10.126.76.138\utSAC1_202006051000_BlockBlob` - Azure Page blob - `\\10.126.76.138\utSAC1_202006051000_PageBlob` - Azure Files - `\\10.126.76.138\utSAC1_202006051000_AzFile`
+ - Azure Block blob (Archive) - `\\10.126.76.138\utSAC0_202202241054_BlockBlobArchive`
4. Enter the password for the share when prompted. If the password has special characters, add double quotation marks before and after it. The following sample shows connecting to a share via the preceding command.
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-ordered.md
Previously updated : 01/11/2022 Last updated : 03/22/2022 #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure.
databox Data Box Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-picked-up.md
Previously updated : 01/25/2022 Last updated : 03/11/2022 # Customer intent: As an IT admin, I need to be able to return a Data Box to upload on-premises data from my server onto Azure.
databox Data Box Disk Troubleshoot Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-troubleshoot-upload.md
Previously updated : 12/22/2021 Last updated : 02/24/2022
The errors found in the 2018-10-01 copy log are described below.
| Error category | Description | |-|-| | `UploadErrorWin32` |File system error. |
-| `UploadErrorCloudHttp` |Unsupported blob type. For more information about errors in this category, see [Summary of non-retryable upload errors](../databox/data-box-troubleshoot-data-upload.md#summary-of-non-retryable-upload-errors).|
+| `UploadErrorCloudHttp` |Unsupported blob type. For more information about errors in this category, see [Summary of upload errors](../databox/data-box-troubleshoot-data-upload.md#summary-of-upload-errors).|
| `UploadErrorDataValidationError` |CRC computed during data ingestion doesn't match the CRC computed during upload. |
-| `UploadErrorManagedConversionError` |The size of the blob being imported is invalid. The blob size is <*blob-size*> bytes. Supported sizes are between 20971520 Bytes and 8192 GiB. For more information, see [Summary of non-retryable upload errors](../databox/data-box-troubleshoot-data-upload.md#summary-of-non-retryable-upload-errors). |
+| `UploadErrorManagedConversionError` |The size of the blob being imported is invalid. The blob size is <*blob-size*> bytes. Supported sizes are between 20971520 Bytes and 8192 GiB. For more information, see [Summary of upload errors](../databox/data-box-troubleshoot-data-upload.md#summary-of-upload-errors). |
| `UploadErrorUnknownType` |Unknown error. | | `ContainerRenamed` |Renamed the container because the original container name doesn't follow [Azure naming conventions](data-box-disk-limits.md#azure-block-blob-page-blob-and-file-naming-conventions). The original container has been renamed to DataBox-<*GUID*> from <*original container name*>. | | `ShareRenamed` |Renamed the share because the original share name doesn't follow [Azure naming conventions](data-box-disk-limits.md#azure-block-blob-page-blob-and-file-naming-conventions). The original share has been renamed to DataBox-<*GUID*> from <*original folder name*>. |
databox Data Box Troubleshoot Data Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot-data-upload.md
Title: Review copy errors in uploads from Azure Data Box, Azure Data Box Heavy devices
-description: Describes review and follow-up for non-retryable errors that prevent files from uploading from an Azure Data Box or Azure Data Box Heavy device.
+description: Describes review and follow-up for errors during uploads from an Azure Data Box or Azure Data Box Heavy device to the Azure cloud.
Previously updated : 10/21/2021 Last updated : 03/22/2022 # Review copy errors in uploads from Azure Data Box and Azure Data Box Heavy devices
-This article describes review and follow-up for non-retryable errors that occasionally prevent files from uploading to the cloud from an Azure Data Box or Azure Data Box Heavy device.
+This article describes review and follow-up for errors that occasionally prevent files from uploading to the Azure cloud from an Azure Data Box or Azure Data Box Heavy device.
+
+The error notification and options vary depending on whether you can fix the error in the current upload:
+
+- **Retryable errors** - You can fix many types of copy errors and resume the upload. The data is then successfully uploaded in your current order.
+
+
+ An example of a retryable error is when large file shares aren't enabled for a storage account that requires shares larger than 5 TiB. To resolve this error, enable the setting and then confirm to resume the data copy. This type of error is referred to as a *retryable error* in the discussion that follows.
+
+- **Non-retryable errors** - These are errors that can't be fixed in the current order. The upload pauses to give you a chance to review the errors, but the order completes without the data that failed to upload, and that data is secure erased from the device. You'll need to create a new order after you resolve the issues in your data.
+
+ An example of a non-retryable error is a blob storage container that's configured as Write Once, Read Many (WORM). Uploads of any blobs that are already stored in the container will fail. This type of error is referred to as a *non-retryable error* in the discussion that follows.
> [!NOTE] > The information in this article applies to import orders only. + ## Upload errors notification
-When data is uploaded to Azure from your device, some file uploads might occasionally fail because of configuration errors that can't be resolved through a retry. In that case, you receive a notification to give you a chance to review and fix the errors for a later upload.
+When a file upload fails because of an error, you'll receive a notification in the Azure portal. You can tell from the status and options in the order overview whether the error can be fixed.
-You'll see the following notification in the Azure portal. The errors are listed in the data copy log, which you can open using the **DATA COPY PATH**. For guidance on resolving the errors, see [Summary of non-retryable upload errors](#summary-of-non-retryable-upload-errors).
+**Retryable errors**: If you can fix the error in the current order, the notification looks similar to the following one. The current order status is **Data copy halted**. You can either choose to resolve the error or proceed with data erasure without making any change. If you select **Resolve error**, a **Resolve error** screen will tell you how to resolve each error. For step-by-step instructions, see [Review errors and proceed](#review-errors-and-proceed).
-![Notification of errors during upload](media/data-box-troubleshoot-data-upload/copy-completed-with-errors-notification-01.png)
+![Screenshot of a Data Box order with retryable upload errors. The Data Copy Halted status and notification are highlighted.](media/data-box-troubleshoot-data-upload/data-box-retryable-errors-01.png)
+
+**Non-retryable errors:** If the error can't be fixed in the current order, the notification looks similar to the following one. The current order status is **Data copy completed with errors. Device pending data erasure**. The errors are listed in the data copy log, which you can open using the **Copy Log Path**. For guidance on resolving the errors, see [Summary of upload errors](#summary-of-upload-errors).
+
+![Screenshot of a Data Box order with non-retryable upload errors.](media/data-box-troubleshoot-data-upload/copy-completed-with-errors-notification-01.png)
You can't fix these errors. The upload has completed with errors. The notification lets you know about any configuration issues you need to fix before you try another upload via network transfer or a new import order.
-After you review the errors and confirm that you're ready to proceed, the data will be secure erased from the device. If you don't respond to the notification, the order is completed automatically after 14 days. For step-by-step instructions, see [Review errors and proceed](#review-errors-and-proceed).
+After you review the errors and confirm you're ready to proceed, the data is secure erased from the device. If you don't respond to the notification, the order is completed automatically after 14 days. For step-by-step instructions, see [Review errors and proceed](#review-errors-and-proceed).
## Review errors and proceed
-The order will be completed automatically after 14 days. By acting on the notification, you can move things along more quickly.
+How you proceed with an upload depends on whether the errors can be fixed so that the current upload can resume (see the **Retryable errors** tab), or the errors can't be fixed in the current order (see the **Non-retryable errors** tab).
+
+# [Retryable errors](#tab/retryable-errors)
+
+When a retryable error occurs during an upload, you receive a notification with instructions for fixing the error. If you can't fix the error, or prefer not to, you can proceed with the order without fixing the errors.
+
+To resolve retryable copy errors during an upload, follow these steps:
+
+1. Open your order in the Azure portal.
+
+ If any retryable copy errors prevented files from uploading, you'll see the following notification. The current order status will be **Data copy halted**.
+
+ ![Screenshot of a Data Box order with data upload halted by retryable copy errors. The Data Copy Halted status and notification are highlighted.](media/data-box-troubleshoot-data-upload/data-box-retryable-errors-01.png)
+
+1. Select **Resolve error** to view help for the errors.
+
+ Your screen will look similar to the one below. In the example, the **Enable large file share** error can be resolved by toggling **Not enabled** for each storage account.
+
+ The screen tells how to recover from two other copy errors: a missing storage account and a missing access key.
+
+ For each error, there's a **Learn more** link to get more information.
+
+ ![Screenshot of the Resolve Errors pane for multiple retryable errors from a Data Box upload. The Not Enabled buttons, confirmation prompt, and Proceed button are highlighted.](media/data-box-troubleshoot-data-upload/data-box-retryable-errors-02.png)
+
+1. After you resolve the errors, select the check box by **I confirm that the errors have been resolved**. Then select **Proceed**.
+
+ The order status changes to **Data copy error resolved**. The data copy will proceed within 24 hours.
+
+ ![Screenshot of a Data Box order with Data Copy Resolved status. The order status and schedule for proceeding are highlighted.](media/data-box-troubleshoot-data-upload/data-box-retryable-errors-03.png)
+
+ > [!NOTE]
+ > If you don't resolve all of the retryable errors, this process will repeat after the data copy proceeds. To proceed without resolving any of the retryable errors, select **Skip and proceed with data erasure** on the **Overview** screen.
++
+# [Non-retryable errors](#tab/non-retryable-errors)
+
+The following errors can't be resolved in the current order. The order will be completed automatically after 14 days. By acting on the notification, you can move things along more quickly.
[!INCLUDE [data-box-review-nonretryable-errors](../../includes/data-box-review-nonretryable-errors.md)] ++
+## Summary of upload errors
+
+Review the summary tables on the **Retryable errors** tab or the **Non-retryable errors** tab to find out how to resolve or follow up on data copy errors that occurred during your upload.
+
+# [Retryable errors](#tab/retryable-errors)
-## Summary of non-retryable upload errors
+When the following errors occur, you can resolve the errors and include the files in the current data upload.
++
+|Error message |Error description |Error resolution |
+|||--|
+|Large file share not enabled on account |Large file shares aren’t enabled on one or more storage accounts. Resolve the error and resume data copy, or skip to data erasure and complete the order. | Large file shares are not enabled on the indicated storage accounts. Select the highlighted option to enable a quota of up to 100 TiB per share, and then confirm to resume data copy.|
+|Storage account deleted or moved |One or more storage accounts were moved or deleted. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts deleted or moved**<br>Storage accounts: &lt;*storage accounts list*&gt; were either deleted, or moved to a different subscription or resource group. Recover or re-create the storage accounts with the original set of properties, and then confirm to resume data copy.<br>[Learn more on how to recover a storage account](../storage/common/storage-account-recover.md). |
+|Storage account location changed |One or more storage accounts were moved to a different region. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts location changed**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved to a different region. Restore the account to the original destination region and then confirm to resume data copy.<br>[Learn more on how to move storage accounts](../storage/common/storage-account-move.md). |
+|Virtual network restriction on storage account |One or more storage accounts are behind a virtual network and have restricted access. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts behind virtual network**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved behind a virtual network. Add Data Box to the list of trusted services to allow access and then confirm to resume data copy.<br>[Learn more about trusted first party access](../storage/common/storage-network-security.md#exceptions). |
+|Storage account owned by a different tenant |One or more storage accounts were moved under a different tenant. Resolve the error and resume data copy, or skip to data erasure and complete the order.|**Storage accounts moved to a different tenant**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved to a different tenant. Restore the account to the original tenant and then confirm to resume data copy.<br>[Learn more on how to move storage accounts](../storage/common/storage-account-recover.md#recover-a-deleted-account-via-a-support-ticket). |
+|Kek user identity not found |The user identity that has access to the customer-managed key wasn’t found in the active directory. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**User identity not found**<br>Applied a customer-managed key but the user assigned identity that has access to the key was not found in the active directory.<br>This error may occur if a user identity is deleted from Azure.<br>Try adding another user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](data-box-customer-managed-encryption-key-portal.md#enable-key).<br>Confirm to resume data copy after the error is resolved. |
+|Cross tenant identity access not allowed |Managed identity couldn’t access the customer-managed key. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Cross tenant identity access not allowed**<br>Managed identity couldn’t access the customer-managed key.<br>This error may occur if a subscription is moved to a different tenant. To resolve this error, manually move the identity to the new tenant.<br>Try adding another user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](data-box-customer-managed-encryption-key-portal.md#enable-key).<br>Confirm to resume data copy after the error is resolved. |
+|Key details not found |Couldn’t fetch the passkey as the customer-managed key wasn’t found. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Key details not found**<br>If you deleted the key vault, you can't recover the customer-managed key. If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault and it is still in the purge-protection duration, use the steps at [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).<br>If the key vault was migrated to a different tenant, use one of the following steps to recover the vault:<ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity` = `None` and then set the value back to `Identity` = `SystemAssigned`. This deletes and recreates the identity after the new identity is created. Enable `Get`, `WrapKey`, and `UnwrapKey` permissions for the new identity in the key vault's access policy.</li></ol> |
+|Key vault details not found |Couldn’t fetch the passkey as the associated key vault for the customer-managed key wasn’t found. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Key vault details not found**<br>If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault and it is in the purge-protection duration, use the steps in [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).<br>If the key vault was migrated to a different tenant, use one of the following steps to recover the vault: <ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity` = `None` and then set the value back to `Identity` = `SystemAssigned`. This deletes and recreates the identity once the new identity has been created. Enable `Get`, `WrapKey`, and `UnwrapKey` permissions for the new identity in the key vault's access policy.</li></ol>Confirm to resume data copy after the error is resolved. |
+|Key vault bad request exception |Applied a customer-managed key, but either the key access wasn’t granted or was revoked, or the key vault was behind a firewall. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Key vault bad request exception**<br>Add the identity selected for your key vault to enable access to the customer-managed key. If the key vault is behind a firewall, switch to a system-assigned identity and then add a customer-managed key. For more information, see how to [Enable the key](data-box-customer-managed-encryption-key-portal.md#enable-key).<br>Confirm to resume data copy after the error is resolved.<br>[Configure Azure Key Vault firewalls and virtual networks](../key-vault/general/network-security.md) |
+|Encryption key expired |Couldn’t fetch the passkey as the customer-managed key has expired. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Encryption key expired**<br>Enable the key version and then confirm to resume data copy. |
+|Encryption key disabled |Couldn’t fetch the passkey as the customer-managed key is disabled. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Encryption key disabled**<br>Enable the key version and then confirm to resume data copy. |
+|User assigned identity not valid |Couldn't fetch the passkey as the user assigned identity used was not valid. Resolve the error and resume data copy, or skip to data erasure and complete the order.|**User assigned identity not valid**<br>Applied a customer-managed key but the user assigned identity that has access to the key is not valid.<br>Try adding a different user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](data-box-customer-managed-encryption-key-portal.md#enable-key).<br>Confirm to resume data copy after the error is resolved. |
+|User assigned identity not found |Couldn't fetch the passkey as the user assigned identity used was not found. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**User assigned identity not found**<br>Applied a customer-managed key but the user assigned identity that has access to the key wasn't found. To resolve the error, check if:<ol><li>Key vault still has the MSI in the access policy.</li><li>Identity is of type `System assigned`.</li><li>`Get`, `WrapKey`, and `UnwrapKey` permissions are enabled for the identity in the key vault's access policy. These permissions must remain for the lifetime of the customer-managed key.</li></ol>Confirm to resume data copy after the error is resolved. |
+|Unknown user error |An error has halted the data copy. Contact Support for details on how to resolve the error. Alternatively, you may skip to data erasure and review copy and error logs for the order for the list of files that weren’t copied. |**Error during data copy**<br>Data copy is halted due to an error. [Contact Support](data-box-disk-contact-microsoft-support.md) for details on how to resolve the error. After the error is resolved, confirm to resume data copy. |
+
+For more information about the data copy log's contents, see [Tracking and event logging for your Azure Data Box and Azure Data Box Heavy import order](data-box-logs.md).
+
+Other REST API errors might occur during data uploads. For more information, see [Common REST API error codes](/rest/api/storageservices/common-rest-api-error-codes). <!--Final two paragraphs should be shared, after (or before) tabs?-->
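For example, for the **Virtual network restriction on storage account** error above, one way to allow trusted Azure services (which include Data Box) through the storage account firewall is with Azure CLI. This is a minimal sketch; the account and resource group names are placeholders.

```azurecli
# Allow trusted Azure services (including Data Box) through the storage account firewall
# while keeping other public traffic denied. Placeholder names: mystorageacct, myresourcegroup.
az storage account update \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --bypass AzureServices \
    --default-action Deny
```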
++
+# [Non-retryable errors](#tab/non-retryable-errors)
The following non-retryable errors result in a notification:
|UploadErrorCloudHttp |409 |The total provisioned capacity of the shares cannot exceed the account maximum size limit. [Learn more](#the-total-provisioned-capacity-of-the-shares-cannot-exceed-the-account-maximum-size-limit).| |UploadErrorCloudHttp |409 |The blob type is invalid for this operation. [Learn more](#the-blob-type-is-invalid-for-this-operation).| |UploadErrorCloudHttp |409 |There is currently a lease on the blob and no lease ID was specified in the request. [Learn more](#there-is-currently-a-lease-on-the-blob-and-no-lease-id-was-specified-in-the-request).|
-|UploadErrorManagedConversionError |409 |The size of the blob being imported is invalid. The blob size is `<blob-size>` bytes. Supported sizes are between 20971520 Bytes and 8192 GiB. [Learn more](#the-size-of-the-blob-being-imported-is-invalid-the-blob-size-is-blob-size-bytes-supported-sizes-are-between-20971520-bytes-and-8192-gib)|
-<!--Temporarily removed from table: Bad Request (file property failure for Azure Files)-->
+|UploadErrorManagedConversionError |409 |The size of the blob being imported is invalid. The blob size is `<blob-size>` bytes. Supported sizes are between 20,971,520 Bytes and 8,192 GiB. [Learn more](#the-size-of-the-blob-being-imported-is-invalid-the-blob-size-is-blob-size-bytes-supported-sizes-are-between-20971520-bytes-and-8192-gib)|
For more information about the data copy log's contents, see [Tracking and event logging for your Azure Data Box and Azure Data Box Heavy import order](data-box-logs.md).
Other REST API errors might occur during data uploads. For more information, see
**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new import order, ensure that the listed blobs do not have an active lease. For more information, see [Pessimistic concurrency for blobs](../storage/blobs/concurrency-manage.md?tabs=dotnet#pessimistic-concurrency-for-blobs).
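For example, you could break an active lease on a listed blob with Azure CLI before retrying the transfer. This is a sketch with placeholder names; confirm that the lease isn't held intentionally by another process first.

```azurecli
# Break the active lease on a blob so a new upload or copy can acquire it.
# Placeholders: mystorageacct, mycontainer, myblob.vhd.
az storage blob lease break \
    --account-name mystorageacct \
    --container-name mycontainer \
    --blob-name myblob.vhd
```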
-### The size of the blob being imported is invalid. The blob size is `<blob-size>` Bytes. Supported sizes are between 20971520 Bytes and 8192 GiB.
+### The size of the blob being imported is invalid. The blob size is `<blob-size>` Bytes. Supported sizes are between 20,971,520 Bytes and 8,192 GiB.
**Error category:** UploadErrorManagedConversionError
Other REST API errors might occur during data uploads. For more information, see
**Follow-up:** You can't fix this error in the current upload. The upload has completed with errors. Before you do a network transfer or start a new import order, make sure each listed blob is from 20 MiB (20,971,520 bytes) to 8,192 GiB in size. + ## Next steps
databox Data Box Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot.md
Previously updated : 01/04/2022 Last updated : 03/22/2022
These errors are related to data exceeding the size of data allowed in a contain
### ERROR_CONTAINER_OR_SHARE_CAPACITY_EXCEEDED
-**Error description:** Azure file share limits a share to 5 TiB of data, and large file shares are not enabled on the storage account. This limit was exceeded for some shares.
+**Error description:** Large file shares are not enabled on your storage account(s).
-**Suggested resolution:** On the **Connect and copy** page of the local web UI, download, and review the error files.
+**Suggested resolution:** To disregard this error, follow these steps:
+
+1. In the Data Box local UI, go to the **Connect and Copy** page and go to **Settings**.
-- Identify the folders that have this issue from the error logs and make sure that the files in that folder are under 5 TiB.-- The 5-TiB limit does not apply to a storage account that allows large file shares. However, you must have large file shares configured when you place your order.
- - Contact [Microsoft Support](data-box-disk-contact-microsoft-support.md) and request a new shipping label.
- - [Enable large file shares on the storage account](../storage/files/storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account)
- - [Expand the file shares in the storage account](../storage/files/storage-how-to-create-file-share.md#expand-existing-file-shares) and set the quota to 100 TiB.
-
+ :::image type="content" source="media/data-box-troubleshoot/icon-connect-copy.png" alt-text="Connect and copy":::
+
+1. Enable and apply **Disregard Large File Share Errors**.
+
+ :::image type="content" source="media/data-box-troubleshoot/icon-connect-copy-settings-2.png" alt-text="Connect and copy settings":::
+
+1. **Enable large file shares** on your storage account(s) in the Azure portal.
+
+> [!NOTE]
+> If large file shares are not enabled for the indicated storage accounts on the Azure portal, the data upload to these storage accounts will fail.
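If you prefer scripting over the portal, large file shares can also be enabled on an existing storage account with Azure CLI. A minimal sketch with placeholder names:

```azurecli
# Enable large file shares (up to 100 TiB per share) on an existing storage account.
# Placeholders: mystorageacct, myresourcegroup.
az storage account update \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --enable-large-file-share
```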
## Object or file size limit errors
For more information, see the Azure naming conventions for blob names and file n
For more information, see [Copy to managed disks](data-box-deploy-copy-data-from-vhds.md#connect-to-data-box).
+## Non-critical container or share errors
+
+### ERROR_CONTAINER_OR_SHARE_CAPACITY_EXCEEDED
+**Error description:** Large file share errors were disregarded for Data Box. Remember to **enable large file shares** on your storage account(s) in the Azure portal. If you don't enable large file shares on these storage accounts in the portal, the data upload to these accounts will fail.
+
+**Suggested resolution:** Enable Large File Shares on your storage account(s) in the Azure portal. If you don't enable large file shares on these storage accounts in the portal, the data upload to these accounts will fail.
+ ## Next steps - Learn about the [Data Box Blob storage system requirements](data-box-system-requirements-rest.md).
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Secure score in Microsoft Defender for Cloud description: Description of Microsoft Defender for Cloud's secure score and its security controls ++ Previously updated : 11/09/2021 Last updated : 03/23/2022 # Secure score in Microsoft Defender for Cloud
The central feature in Defender for Cloud that enables you to achieve those goal
Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level.
-The secure score is shown in the Azure portal pages as a percentage value, but the underlying values are also clearly presented:
+- In the Azure portal pages, the secure score is shown as a percentage value and the underlying values are also clearly presented:
+ :::image type="content" source="./media/secure-score-security-controls/single-secure-score-via-ui.png" alt-text="Overall secure score as shown in the portal.":::
-To increase your security, review Defender for Cloud's recommendations page for the outstanding actions necessary to raise your score. Each recommendation includes instructions to help you remediate the specific issue.
+- In the Azure mobile app, the secure score is shown as a percentage value and you can tap the secure score to see the details that explain the score:
-Recommendations are grouped into **security controls**. Each control is a logical group of related security recommendations, and reflects your vulnerable attack surfaces. Your score only improves when you remediate *all* of the recommendations for a single resource within a control. To see how well your organization is securing each individual attack surface, review the scores for each security control.
+ :::image type="content" source="./media/secure-score-security-controls/single-secure-score-via-mobile.png" alt-text="Overall secure score as shown in the Azure mobile app.":::
+
+To increase your security, review Defender for Cloud's recommendations page and remediate the recommendation by implementing the remediation instructions for each issue. Recommendations are grouped into **security controls**. Each control is a logical group of related security recommendations, and reflects your vulnerable attack surfaces. Your score only improves when you remediate *all* of the recommendations for a single resource within a control. To see how well your organization is securing each individual attack surface, review the scores for each security control.
For more information, see [How your secure score is calculated](secure-score-security-controls.md#how-your-secure-score-is-calculated) below. ## How your secure score is calculated
-The contribution of each security control towards the overall secure score is shown clearly on the recommendations page.
+The contribution of each security control towards the overall secure score is shown on the recommendations page.
:::image type="content" source="./media/secure-score-security-controls/security-controls.png" alt-text="Microsoft Defender for Cloud's security controls and their impact on your secure score" lightbox="./media/secure-score-security-controls/security-controls.png":::
-To get all the possible points for a security control, all your resources must comply with all of the security recommendations within the security control. For example, Defender for Cloud has multiple recommendations regarding how to secure your management ports. You'll need to remediate them all to make a difference to your secure score.
+To get all the possible points for a security control, all of your resources must comply with all of the security recommendations within the security control. For example, Defender for Cloud has multiple recommendations regarding how to secure your management ports. You'll need to remediate them all to make a difference to your secure score.
### Example scores for a control
To get all the possible points for a security control, all your resources must c
In this example:
-| # | Name | Description |
-|:-:||--|
-| 1 | **Remediate vulnerabilities security control** | This control groups multiple recommendations related to discovering and resolving known vulnerabilities. |
-| 2 | **Max score** | The maximum number of points you can gain by completing all recommendations within a control. The maximum score for a control indicates the relative significance of that control and is fixed for every environment. Use the max score values to triage the issues to work on first.<br>For a list of all controls and their max scores, see [Security controls and their recommendations](#security-controls-and-their-recommendations). |
-| 3 | **Number of resources** | There are 35 resources affected by this control.<br>To understand the possible contribution of every resource, divide the max score by the number of resources.<br>For this example, 6/35=0.1714<br>**Every resource contributes 0.1714 points.** |
-| 4 | **Current score** | The current score for this control.<br>Current score=[Score per resource]*[Number of healthy resources]<br> 0.1714 x 5 healthy resources = 0.86<br>Each control contributes towards the total score. In this example, the control is contributing 0.86 points to current total secure score. |
-| 5 | **Potential score increase** | The remaining points available to you within the control. If you remediate all the recommendations in this control, your score will increase by 9%.<br>Potential score increase=[Score per resource]*[Number of unhealthy resources]<br> 0.1714 x 30 unhealthy resources = 5.14<br> |
-
+| # | Name | Description |
+| :-: | --- | --- |
+| 1 | **Remediate vulnerabilities security control** | This control contains multiple recommendations related to discovering and resolving known vulnerabilities. |
+| 2 | **Max score** | The maximum number of points you can get by fulfilling all recommendations within a control. The maximum score for a control indicates the relative significance of that control and is fixed for every environment. Use the max score values to triage the issues to work on first.<br>For a list of all controls and their max scores, see [Security controls and their recommendations](#security-controls-and-their-recommendations). |
+| 3 | **Number of resources** | There are 35 resources affected by this control.<br>To understand the possible contribution of every resource, divide the max score by the number of resources.<br>For this example, 6/35=0.1714<br>**Every resource contributes 0.1714 points.** |
+| 4 | **Current score** | The current score for this control.<br>Current score=[Score per resource]*[Number of healthy resources]<br> 0.1714 x 5 healthy resources = 0.86<br>Each control contributes towards the total score. In this example, the control is contributing 0.86 points to current total secure score. |
+| 5 | **Potential score increase** | The remaining points available to you within the control. If you remediate all the recommendations in this control, your score will increase by 9%.<br>Potential score increase=[Score per resource]*[Number of unhealthy resources]<br> 0.1714 x 30 unhealthy resources = 5.14<br> |
### Calculations - understanding your score
-|Metric|Formula and example|
-|-|-|
-|**Security control's current score**|<br>![Equation for calculating a security control's score.](media/secure-score-security-controls/secure-score-equation-single-control.png)<br><br>Each individual security control contributes towards the Security Score. Each resource affected by a recommendation within the control, contributes towards the control's current score. The current score for each control is a measure of the status of the resources *within* the control.<br>![Tooltips showing the values used when calculating the security control's current score](media/secure-score-security-controls/security-control-scoring-tooltips.png)<br>In this example, the max score of 6 would be divided by 78 because that's the sum of the healthy and unhealthy resources.<br>6 / 78 = 0.0769<br>Multiplying that by the number of healthy resources (4) results in the current score:<br>0.0769 * 4 = **0.31**<br><br>|
-|**Secure score**<br>Single subscription|<br>![Equation for calculating a subscription's secure score](media/secure-score-security-controls/secure-score-equation-single-sub.png)<br><br>![Single subscription secure score with all controls enabled](media/secure-score-security-controls/secure-score-example-single-sub.png)<br>In this example, there is a single subscription with all security controls available (a potential maximum score of 60 points). The score shows 28 points out of a possible 60 and the remaining 32 points are reflected in the "Potential score increase" figures of the security controls.<br>![List of controls and the potential score increase](media/secure-score-security-controls/secure-score-example-single-sub-recs.png)|
-|**Secure score**<br>Multiple subscriptions|<br>![Equation for calculating the secure score for multiple subscriptions.](media/secure-score-security-controls/secure-score-equation-multiple-subs.png)<br><br>When calculating the combined score for multiple subscriptions, Defender for Cloud includes a *weight* for each subscription. The relative weights for your subscriptions are determined by Defender for Cloud based on factors such as the number of resources.<br>The current score for each subscription is calculated in the same way as for a single subscription, but then the weight is applied as shown in the equation.<br>When viewing multiple subscriptions, secure score evaluates all resources within all enabled policies and groups their combined impact on each security control's maximum score.<br>![Secure score for multiple subscriptions with all controls enabled](media/secure-score-security-controls/secure-score-example-multiple-subs.png)<br>The combined score is **not** an average; rather it's the evaluated posture of the status of all resources across all subscriptions.<br>Here too, if you go to the recommendations page and add up the potential points available, you will find that it's the difference between the current score (24) and the maximum score available (60).|
+| Metric | Formula and example |
+| --- | --- |
+| **Security control's current score** | <br>![Equation for calculating a security control's score.](media/secure-score-security-controls/secure-score-equation-single-control.png)<br><br>Each individual security control contributes towards the Security Score. Each resource affected by a recommendation within the control, contributes towards the control's current score. The current score for each control is a measure of the status of the resources *within* the control.<br>![Tooltips showing the values used when calculating the security control's current score](media/secure-score-security-controls/security-control-scoring-tooltips.png)<br>In this example, the max score of 6 would be divided by 78 because that's the sum of the healthy and unhealthy resources.<br>6 / 78 = 0.0769<br>Multiplying that by the number of healthy resources (4) results in the current score:<br>0.0769 * 4 = **0.31**<br><br> |
+| **Secure score**<br>Single subscription | <br>![Equation for calculating a subscription's secure score](media/secure-score-security-controls/secure-score-equation-single-sub.png)<br><br>![Single subscription secure score with all controls enabled](media/secure-score-security-controls/secure-score-example-single-sub.png)<br>In this example, there is a single subscription with all security controls available (a potential maximum score of 60 points). The score shows 28 points out of a possible 60 and the remaining 32 points are reflected in the "Potential score increase" figures of the security controls.<br>![List of controls and the potential score increase](media/secure-score-security-controls/secure-score-example-single-sub-recs.png) |
+| **Secure score**<br>Multiple subscriptions | <br>![Equation for calculating the secure score for multiple subscriptions.](media/secure-score-security-controls/secure-score-equation-multiple-subs.png)<br><br>When calculating the combined score for multiple subscriptions, Defender for Cloud includes a *weight* for each subscription. The relative weights for your subscriptions are determined by Defender for Cloud based on factors such as the number of resources.<br>The current score for each subscription is calculated in the same way as for a single subscription, but then the weight is applied as shown in the equation.<br>When viewing multiple subscriptions, secure score evaluates all resources within all enabled policies and groups their combined impact on each security control's maximum score.<br>![Secure score for multiple subscriptions with all controls enabled](media/secure-score-security-controls/secure-score-example-multiple-subs.png)<br>The combined score is **not** an average; rather it's the evaluated posture of the status of all resources across all subscriptions.<br>Here too, if you go to the recommendations page and add up the potential points available, you will find that it's the difference between the current score (24) and the maximum score available (60). |
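Restated in plain notation (a summary of the single-control equation pictured above, using the example numbers from the table):

$$
\text{current score} = \frac{\text{max score}}{\text{healthy resources} + \text{unhealthy resources}} \times \text{healthy resources} = \frac{6}{78} \times 4 \approx 0.31
$$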
### Which recommendations are included in the secure score calculations? Only built-in recommendations have an impact on the secure score.
-Recommendations flagged as **Preview** aren't included in the calculations of your secure score. They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score.
+Recommendations flagged as **Preview** aren't included in the calculations of your secure score. We recommend that you remediate preview recommendations so that they contribute towards your score when the preview period ends.
An example of a preview recommendation:
An example of a preview recommendation:
## Improve your secure score
-To improve your secure score, remediate security recommendations from your recommendations list. You can remediate each recommendation manually for each resource, or by using the **Fix** option (when available) to resolve an issue on multiple resources quickly. For more information, see [Remediate recommendations](implement-security-recommendations.md).
+To improve your secure score, remediate security recommendations from your recommendations list. You can remediate each recommendation manually for each resource, or use the **Fix** option (when available) to resolve an issue on multiple resources quickly. For more information, see [Remediate recommendations](implement-security-recommendations.md).
-Another way to improve your score and ensure your users don't create resources that negatively impact your score is to configure the Enforce and Deny options on the relevant recommendations. Learn more in [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).
+You can also configure the Enforce and Deny options on the relevant recommendations to improve your score and ensure your users don't create resources that negatively impact your score. Learn more in [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).
## Security controls and their recommendations The table below lists the security controls in Microsoft Defender for Cloud. For each control, you can see the maximum number of points you can add to your secure score if you remediate *all* of the recommendations listed in the control, for *all* of your resources.
-The set of security recommendations provided with Defender for Cloud is tailored to the available resources in each organization's environment. The recommendations can be further customized by [disabling policies](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations) and [exempting specific resources from a recommendation](exempt-resource.md).
+The set of security recommendations provided with Defender for Cloud is tailored to the available resources in each organization's environment. You can [disable policies](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations) and [exempt specific resources from a recommendation](exempt-resource.md) to further customize the recommendations.
-We recommend every organization carefully review their assigned Azure Policy initiatives.
+We recommend that every organization carefully review their assigned Azure Policy initiatives.
> [!TIP]
-> For details of reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md).
+> For details about reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md).
-Even though Defender for Cloud's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. Consequently, it'll sometimes be necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks you're obligated to meet.<br><br>
+Even though Defender for Cloud's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. Consequently, it is sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br>
[!INCLUDE [security-center-controls-and-recommendations](../../includes/asc/security-control-recommendations.md)]
Even though Defender for Cloud's default security initiative is based on industr
## FAQ - Secure score ### If I address only three out of four recommendations in a security control, will my secure score change?
-No. It won't change until you remediate all of the recommendations for a single resource. To get the maximum score for a control, you must remediate all recommendations, for all resources.
+No. It won't change until you remediate all of the recommendations for a single resource. To get the maximum score for a control, you must remediate all recommendations for all resources.
### If a recommendation isn't applicable to me, and I disable it in the policy, will my security control be fulfilled and my secure score updated? Yes. We recommend disabling recommendations when they're inapplicable in your environment. For instructions on how to disable a specific recommendation, see [Disable security policies](./tutorial-security-policy.md#disable-security-policies-and-disable-recommendations). ### If a security control offers me zero points towards my secure score, should I ignore it?
-In some cases, you'll see a control max score greater than zero, but the impact is zero. When the incremental score for fixing resources is negligible, it's rounded to zero. Don't ignore these recommendations as they still bring security improvements. The only exception is the "Additional Best Practice" control. Remediating these recommendations won't increase your score, but it will enhance your overall security.
+In some cases, you'll see a control max score greater than zero, but the impact is zero. When the incremental score for fixing resources is negligible, it's rounded to zero. Don't ignore these recommendations because they still bring security improvements. The only exception is the "Additional Best Practice" control. Remediating these recommendations won't increase your score, but it will enhance your overall security.
## Next steps
devtest-labs Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/network-isolation.md
Title: Network isolation
-description: Learn about network isolation in Azure DevTest Labs.
+description: Learn how to enable and configure network isolation for labs in Azure DevTest Labs.
Previously updated : 03/25/2022 Last updated : 03/21/2022
-# Network isolation in DevTest Labs
+# Network isolation in Azure DevTest Labs
-An [Azure virtual network](../virtual-network/virtual-networks-overview.md) acts as a security boundary, isolating your Azure resources from the public internet. You can also join an Azure virtual network to your on-premises network to securely connect to your on-premises resources. In DevTest Labs, you can choose to [isolate all lab virtual machines](devtest-lab-configure-vnet.md) and [environments to your network](connect-environment-lab-virtual-network.md) to ensure lab resources follow organizational networking policies.
+This article walks you through creating a network-isolated lab in Azure DevTest Labs.
-As a lab owner, you can also choose to completely isolate the lab. You isolate virtual machines and environments to the selected network. You can also isolate lab storage account and key vaults you created in your subscription. This article walks you through creating a network isolated lab.
+By default, Azure DevTest Labs creates a new [Azure virtual network](/azure/virtual-network/virtual-networks-overview) for each lab. The virtual network acts as a security boundary to isolate lab resources from the public internet. To ensure lab resources follow organizational networking policies, you can use several other networking options:
-Also review the following articles:
+- Isolate all lab [virtual machines (VMs)](devtest-lab-configure-vnet.md) and [environments](connect-environment-lab-virtual-network.md) in a pre-existing virtual network that you select.
+- Join an Azure virtual network to an on-premises network, to securely connect to on-premises resources. For more information, see [DevTest Labs enterprise reference architecture: Connectivity components](devtest-lab-reference-architecture.md#connectivity-components).
+- Completely isolate the lab, including VMs, environments, the lab storage account, and key vaults, to a selected virtual network. This article describes how to configure network isolation.
-- [How DevTest Labs uses lab storage account](encrypt-storage.md)-- [How DevTest Labs uses key vaults](devtest-lab-store-secrets-in-key-vault.md)
-
-> [!NOTE]
-> Network isolation is currently supported for new labs creations only.
+## Enable network isolation
-## Steps to enable network isolation during lab creation
+You can enable network isolation in the Azure portal only during lab creation. To convert an existing lab and associated lab resources to isolated network mode, use the PowerShell script [Convert-DtlLabToIsolatedNetwork.ps1](https://github.com/Azure/azure-devtestlab/blob/master/Tools/ConvertDtlLabToIsolatedNetwork/Convert-DtlLabToIsolatedNetwork.ps1).
-1. During lab creation, go to the **Networking** tab.
-1. You can either select a **Default** network that the lab will create for you or select an existing network from the drop-down. You'll only be able to select networks that are in the same region and subscription as the lab.
+During lab creation, you can enable network isolation for the default lab virtual network, or choose another, pre-existing virtual network to use for the lab.
+
+### Use the default virtual network and subnet
+
+To enable network isolation for the **Default** virtual network and subnet that DevTest Labs creates for the lab:
+
+1. During [lab creation](devtest-lab-create-lab.md), on the **Create DevTest Lab** screen, select the **Networking** tab.
+1. Next to **Isolate lab resources**, select **Yes**.
+1. Finish creating the lab.
+
+![Screenshot that shows enabling network isolation for the default network.](./media/network-isolation/isolate-lab-resources.png)
+
+After you create the lab, no further action is needed. The lab handles isolating resources from now on.
+
+### Use a different virtual network and subnet
+
+To use a different, existing virtual network for the lab, and enable network isolation for that network:
+
+1. During [lab creation](devtest-lab-create-lab.md), on the **Networking** tab of the **Create DevTest Lab** screen, select a network from the dropdown list. The list only shows networks in the same region and subscription as the lab.
+
+ ![Screenshot that shows selecting a virtual network.](./media/network-isolation/create-lab.png)
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows creating a lab.](./media/network-isolation/create-lab.png)
1. Select a subnet.
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows creating a subnet.](./media/network-isolation/create-lab-subnet.png)
-1. If you choose to isolate the lab storage account and key vault to the default network, no further action is needed. The lab handles isolating resources from now on.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows network isolation.](./media/network-isolation/isolate-lab-resources.png)
-1. If you choose to isolate the lab storage account and key vault to an existing network you selected, complete the following steps after lab creation. These steps ensure that the lab continues to function in the isolated mode.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows isolating resources.](./media/network-isolation/isolate-my-vnet.png)
+ ![Screenshot that shows selecting a subnet.](./media/network-isolation/create-lab-subnet.png)
+
+1. Next to **Isolate lab resources**, select **Yes**.
+
+ ![Screenshot that shows enabling network isolation for a selected network.](./media/network-isolation/isolate-my-vnet.png)
+
+1. Finish creating the lab.
+
+<a name="steps-to-follow-post-lab-creation"></a>
+## Configure service endpoints
+
+If you enabled network isolation for a virtual network other than the default, complete the following steps to isolate the lab storage account and key vault to the network you selected. Do these steps after you create the lab, but before you do any other lab configuration or create any lab resources.
+
+### Configure the endpoint for the lab storage account
+
+1. On the lab's **Overview** page, select the **resource group**.
+
+ ![Screenshot that shows selecting the resource group for a lab.](./media/network-isolation/contoso-lab.png)
+
+1. On the resource group **Overview** page, select the lab's storage account. The naming convention for the lab storage account is `a<labName><4-digit number>`. For example, if the lab name is `contosolab`, the storage account name could be `acontosolab1234`.
+
+ ![Screenshot that shows selecting the lab storage account.](./media/network-isolation/contoso-test.png)
+
+1. On the storage account page, select **Networking** from the left navigation. On the **Firewalls and virtual networks** tab, ensure that **Allow Azure services on the trusted services list to access this storage account.** is selected.
+
+ DevTest Labs is a [trusted Microsoft service](../storage/common/storage-network-security.md#trusted-microsoft-services), so selecting this option lets the lab operate normally in a network isolated mode.
+
+1. Select **Add existing virtual network**.
+
+ ![Screenshot that shows allowing trusted Azure services on the Firewalls and virtual networks tab.](./media/network-isolation/contoso-lab-firewalls-vnets.png)
+
+1. On the **Add networks** pane, select the virtual network and subnet you chose when you created the lab, and then select **Enable**.
+
+ ![Screenshot that shows adding the lab virtual network and subnet to the storage account.](./media/network-isolation/contoso-lab-my-vnet.png)
+
+1. Once the service endpoint is successfully enabled, select **Add**.
+
+1. On the **Networking** page, select **Save**.
+
+ ![Screenshot that shows selecting Add and Save after the service endpoint is enabled.](./media/network-isolation/contoso-firewall-add.png)
+
+Azure Storage now allows inbound connections from the added virtual network, which enables the lab to operate successfully in a network isolated mode.
+
+You can automate these steps with PowerShell or Azure CLI to configure network isolation for multiple labs. For more information, see [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security).
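For example, a minimal Azure CLI sketch for one lab follows. All names are placeholders; substitute your own storage account, resource group, virtual network, and subnet.

```azurecli
# Enable the Microsoft.Storage service endpoint on the lab subnet (placeholder names).
az network vnet subnet update \
    --resource-group contosolab-rg \
    --vnet-name MyVirtualNetwork \
    --name MySubnet \
    --service-endpoints Microsoft.Storage

# Allow the lab subnet through the storage account firewall.
az storage account network-rule add \
    --account-name acontosolab1234 \
    --resource-group contosolab-rg \
    --vnet-name MyVirtualNetwork \
    --subnet MySubnet

# Keep trusted Azure services (including DevTest Labs) allowed while denying other public access.
az storage account update \
    --name acontosolab1234 \
    --resource-group contosolab-rg \
    --bypass AzureServices \
    --default-action Deny
```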
+
+### Configure the endpoint for the lab key vault
+
+1. On the lab's **Overview** page, select the **resource group**.
+
+1. On the resource group **Overview** page, select the lab's key vault.
- > [!IMPORTANT]
- > The lab owner needs to complete these steps prior to configuring or creating any resources in the lab.
+ ![Screenshot that shows selecting the lab's key vault.](./media/network-isolation/key-vault.png)
-### Steps to follow post lab creation
+1. On the key vault page, select **Networking** from the left navigation. On the **Firewalls and virtual networks** tab, ensure that **Allow trusted Microsoft services to bypass this firewall** is set to **Yes**.
-1. On the home page for the lab, select the **resource group** on the **Overview** page. You should see the **Resource group** page for the resource group that contains the lab.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows Contoso lab.](./media/network-isolation/contoso-lab.png)
-1. Select the Azure storage account of the lab. The naming convention for the lab storage account is: a\<*labNameWithoutInvalidCharacters*>\<*4-digit number*>. For example, if the lab name is contosolab, the storage account name could be acontosolab1234.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows Contoso test.](./media/network-isolation/contoso-test.png)
-1. On the storage account, go to Firewalls and virtual networks and ensure the 'Allow trusted Microsoft Services to access this storage account' check box is selected. As [DevTest Labs is a trusted Microsoft service](../storage/common/storage-network-security.md#trusted-microsoft-services), this option will enable the lab to operate normally in a network isolated mode.
+1. Select **Add existing virtual networks**.
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows Contoso lab firewalls.](./media/network-isolation/contoso-lab-firewalls-vnets.png)
-1. Next, click on **+Add existing virtual network**, select the virtual network and subnet you picked while creating the lab and click on **Enable**.
+ ![Screenshot that shows allowing trusted Microsoft services on the Firewalls and virtual networks tab.](./media/network-isolation/networking-key-vault.png)
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows Contoso my vnet.](./media/network-isolation/contoso-lab-my-vnet.png)
-5. Once the service endpoint is successfully enabled for the selected virtual network, click on **Add**.
+1. On the **Add networks** pane, select the virtual network and subnet you chose when you created the lab, and then select **Enable**.
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows Add.](./media/network-isolation/contoso-firewall-add.png)
-
-With this, Azure storage will allow inbound connections from the added virtual network and enable the lab to operate successfully in a network isolated mode.
+1. Once the service endpoint is successfully enabled, select **Add**.
-You can also choose to automate these steps to configure this setting for multiple labs.
+1. On the **Networking** page, select **Save**.
-[Learn more on managing default network access rules for Azure Storage using PowerShell and CLI](../storage/common/storage-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json#powershell)
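As with the storage account, these key vault steps can be scripted. A minimal Azure CLI sketch with placeholder names (substitute your key vault, resource group, virtual network, and subnet):

```azurecli
# Allow the lab subnet through the key vault firewall (placeholder names throughout).
az keyvault network-rule add \
    --name contosolab-kv \
    --resource-group contosolab-rg \
    --vnet-name MyVirtualNetwork \
    --subnet MySubnet

# Keep trusted Microsoft services allowed while denying other public access.
az keyvault update \
    --name contosolab-kv \
    --resource-group contosolab-rg \
    --bypass AzureServices \
    --default-action Deny
```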
+## Considerations
-## Network isolation for an existing lab
+Here are some things to remember when using a lab in a network isolated mode:
-As a lab owner, you can choose to isolate the network for an existing lab. [This sample script](https://github.com/Azure/azure-devtestlab/blob/master/Tools/ConvertDtlLabToIsolatedNetwork/Convert-DtlLabToIsolatedNetwork.ps1) demonstrates how to convert an existing lab and associated lab resources to an isolated network mode.
+### Enable access to the storage account from outside the lab
-## Things to remember while using a lab in a network isolated mode
+The lab owner must explicitly enable access to a network isolated lab's storage account from an allowed endpoint. Actions like uploading a VHD to the storage account for creating custom images require this access. You can enable access by creating a lab VM, and securely accessing the lab's storage account from that VM.
-### Accessing lab's storage account outside the lab
+For more information, see [Connect to a storage account using an Azure Private Endpoint](/azure/private-link/tutorial-private-endpoint-storage-portal).
-The lab owner must explicitly enable accessing a network isolated lab's storage account from an allowed endpoint. This access is needed for actions like uploading a VHD to the storage account for creating custom images. You can enable access by creating a virtual machine and securely accessing the lab's storage account from that virtual machine.
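As a hedged sketch of the approach described in the linked tutorial, the following Azure CLI commands create a private endpoint for the lab storage account's blob service inside the lab's virtual network. All names are placeholders, and the full tutorial also covers the private DNS setup that's omitted here.

```azurecli
# Look up the lab storage account's resource ID (placeholder names).
storageId=$(az storage account show \
    --name acontosolab1234 \
    --resource-group contosolab-rg \
    --query id --output tsv)

# Create a private endpoint for the blob service in the lab's virtual network.
az network private-endpoint create \
    --name contosolab-storage-pe \
    --resource-group contosolab-rg \
    --vnet-name MyVirtualNetwork \
    --subnet MySubnet \
    --private-connection-resource-id $storageId \
    --group-id blob \
    --connection-name contosolab-storage-connection
```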
+### Provide storage account to export lab usage data
-[Learn more on accessing a storage account privately from a virtual machine](../private-link/tutorial-private-endpoint-storage-portal.md)
+To [export usage data](personal-data-delete-export.md) for a network isolated lab, the lab owner must explicitly provide a storage account and generate a blob within the account to store the data. Exporting usage data fails in network isolated mode if the user doesn't explicitly provide the storage account to use.
-### Exporting usage data from the lab
+For more information, see [Export or delete personal data from Azure DevTest Labs](personal-data-delete-export.md).
-To [export personal usage data for a network isolated lab](personal-data-delete-export.md), the lab owner must explicitly provide a storage account and generate a blob within the account to store the data.
+### Set key vault access policies
-If no lab storage account is provided, this operation will fail in the network isolated mode. The lab's storage account isn't accessible for the lab to use if the customer provides no storage account.
+Enabling the key vault service endpoint affects only the firewall. Make sure to configure the appropriate key vault access permissions in the key vault **Access policies** section.
-[Learn more on exporting lab usage data in a specified storage account](personal-data-delete-export.md#azure-powershell)
+For more information, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy).
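For example, a minimal Azure CLI sketch that grants secret permissions to a principal. The vault name, object ID, and permission set are placeholders; choose permissions that match your scenario.

```azurecli
# Grant secret permissions to a user or managed identity on the lab's key vault.
# Placeholders: contosolab-kv and the principal's object ID.
az keyvault set-policy \
    --name contosolab-kv \
    --object-id 00000000-0000-0000-0000-000000000000 \
    --secret-permissions get list set
```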
## Next steps
-[Create or modify labs automatically using Azure Resource Manager templates and PowerShell](devtest-lab-use-arm-and-powershell-for-lab-resources.md)
+- [Azure Resource Manager (ARM) templates in Azure DevTest Labs](devtest-lab-use-arm-and-powershell-for-lab-resources.md)
+- [Manage Azure DevTest Labs storage accounts](encrypt-storage.md)
+- [Store secrets in a key vault in Azure DevTest Labs](devtest-lab-store-secrets-in-key-vault.md)
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-portal.md
This version of this article goes through these steps manually, one by one, usin
Here are the additional options you can configure during setup, using the other tabs in the **Create Resource** process.
-* **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [Enable private access with Private Link (preview)](./how-to-enable-private-link.md?tabs=portal#add-a-private-endpoint-during-instance-creation).
+* **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [Enable private access with Private Link](./how-to-enable-private-link.md?tabs=portal#add-a-private-endpoint-during-instance-creation).
* **Advanced**: In this tab, you can enable a system-managed identity for your instance that can be used when forwarding events along [event routes](concepts-route-events.md). For more information about using system-managed identities with Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#managed-identity-for-accessing-other-resources). * **Tags**: In this tab, you can add tags to your instance to help you organize it among your Azure resources. For more about Azure resource tags, see [Tag resources, resource groups, and subscriptions for logical organization](../azure-resource-manager/management/tag-resources.md).
dns Dns Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-bicep.md
+
+ Title: 'Quickstart: Create an Azure DNS zone and record - Bicep'
+
+description: Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step quickstart to create and manage your first DNS zone and record using Bicep.
+++ Last updated : 03/21/2022+++
+#Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using Bicep so I can use Azure DNS for my name resolution.
++
+# Quickstart: Create an Azure DNS zone and record using Bicep
+
+This quickstart describes how to use Bicep to create a DNS zone with an `A` record in it.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-dns-new-zone).
+
+In this quickstart, you'll create a unique DNS zone with a suffix of `azurequickstart.org`. An `A` record pointing to two IP addresses will also be placed in the zone.
++
+Two Azure resources have been defined in the Bicep file:
+
+- [**Microsoft.Network/dnsZones**](/azure/templates/microsoft.network/dnsZones)
+- [**Microsoft.Network/dnsZones/A**](/azure/templates/microsoft.network/dnsZones/A): Used to create an `A` record in the zone.
+
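For reference, the two resource types above could also be created imperatively with Azure CLI. This is only an illustrative sketch with placeholder names and IP addresses; in this quickstart you deploy them with the Bicep file instead.

```azurecli
# Create a DNS zone and an A record set with two IP addresses (placeholder values).
az network dns zone create --resource-group exampleRG --name myzone.azurequickstart.org

az network dns record-set a add-record --resource-group exampleRG \
    --zone-name myzone.azurequickstart.org --record-set-name www --ipv4-address 10.10.10.10
az network dns record-set a add-record --resource-group exampleRG \
    --zone-name myzone.azurequickstart.org --record-set-name www --ipv4-address 10.10.10.11
```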
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and all of its resources, including the DNS zone.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a:
+
+- DNS zone
+- `A` record
event-grid Event Schema Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-service-bus.md
Last updated 09/15/2021
This article provides the properties and schema for Service Bus events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+>[!NOTE]
+> Only Premium tier Service Bus namespaces support event integration. Basic and Standard tiers do not support integration with Event Grid.
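For example, a Premium namespace can be created with Azure CLI (a minimal sketch with placeholder names):

```azurecli
# Create a Premium tier Service Bus namespace, which supports Event Grid integration.
# Placeholders: my-premium-namespace, myresourcegroup.
az servicebus namespace create \
    --name my-premium-namespace \
    --resource-group myresourcegroup \
    --location eastus \
    --sku Premium
```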
+ [!INCLUDE [event-grid-service-bus.md](../service-bus-messaging/includes/event-grid-service-bus.md)] ## Tutorials and how-tos
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/resize-images-on-storage-blob-upload-event.md
Title: 'Tutorial: Use Azure Event Grid to automate resizing uploaded images' description: 'Tutorial: Azure Event Grid can trigger on blob uploads in Azure Storage. You can use this to send image files uploaded to Azure Storage to other services, such as Azure Functions, for resizing and other improvements.' Previously updated : 09/28/2021 Last updated : 03/21/2022 ms.devlang: csharp, javascript
Create a function app by using the [az functionapp create](/cli/azure/functionap
2. Create a function app. ```azurecli-interactive
- az functionapp create --name $functionapp --storage-account $functionstorage --resource-group $resourceGroupName --consumption-plan-location $location --functions-version 2
+ az functionapp create --name $functionapp --storage-account $functionstorage --resource-group $resourceGroupName --consumption-plan-location $location --functions-version 3
```
blobStorageAccountKey=$(az storage account keys list -g $resourceGroupName -n $b
storageConnectionString=$(az storage account show-connection-string --resource-group $resourceGroupName --name $blobStorageAccount --query connectionString --output tsv)
-az functionapp config appsettings set --name $functionapp --resource-group $resourceGroupName --settings FUNCTIONS_EXTENSION_VERSION=~2 BLOB_CONTAINER_NAME=thumbnails AZURE_STORAGE_ACCOUNT_NAME=$blobStorageAccount AZURE_STORAGE_ACCOUNT_ACCESS_KEY=$blobStorageAccountKey AZURE_STORAGE_CONNECTION_STRING=$storageConnectionString FUNCTIONS_WORKER_RUNTIME=node
+outputFileName="resized-image.png"
+
+az functionapp config appsettings set --name $functionapp --resource-group $resourceGroupName --settings FUNCTIONS_EXTENSION_VERSION=~2 BLOB_CONTAINER_NAME=thumbnails AZURE_STORAGE_ACCOUNT_NAME=$blobStorageAccount AZURE_STORAGE_ACCOUNT_ACCESS_KEY=$blobStorageAccountKey AZURE_STORAGE_CONNECTION_STRING=$storageConnectionString OUT_BLOB_NAME=$outputFileName FUNCTIONS_WORKER_RUNTIME=node WEBSITE_NODE_DEFAULT_VERSION=~10
```
az functionapp deployment source config --name $functionapp --resource-group $re
# [Node.js v10 SDK](#tab/nodejsv10)
-The sample Node.js resize function is available on [GitHub](https://github.com/Azure-Samples/storage-blob-resize-function-node-v10). Deploy this Functions code project to the function app by using the [az functionapp deployment source config](/cli/azure/functionapp/deployment/source) command.
+The sample Node.js resize function is available on [GitHub](https://github.com/Azure-Samples/storage-blob-resize-function-node). Deploy this Functions code project to the function app by using the [az functionapp deployment source config](/cli/azure/functionapp/deployment/source) command.
```azurecli-interactive az functionapp deployment source config --name $functionapp \ --resource-group $resourceGroupName --branch master --manual-integration \
- --repo-url https://github.com/Azure-Samples/storage-blob-resize-function-node-v10
+ --repo-url https://github.com/Azure-Samples/storage-blob-resize-function-node
```
To learn more about this function, see the [function.json and run.csx files](htt
# [Node.js v10 SDK](#tab/nodejsv10)
-To learn more about this function, see the [function.json and index.js files](https://github.com/Azure-Samples/storage-blob-resize-function-node-v10/tree/master/Thumbnail).
+To learn more about this function, see the [function.json and index.js files](https://github.com/Azure-Samples/storage-blob-resize-function-node/tree/master/Thumbnail).
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
With Front Door, you can control how files are cached for a web request that con
::: zone pivot="front-door-standard-premium"
-* You can also use Rule Set to specify **cache key query string** behavior, to include, or exclude specified parameters when cache key gets generated. For example, the default cache key is: /foo/image/asset.html, and the sample request is `https://contoso.com//foo/image/asset.html?language=EN&userid=100&sessionid=200`. There's a rule set rule to exclude query string 'userid'. Then the query string cache-key would be `/foo/image/asset.html?language=EN&sessionid=200`.
+* **Specify cache key query string** behavior to include or exclude specified parameters when the cache key gets generated. For example, the default cache key is `/foo/image/asset.html`, and the sample request is `https://contoso.com/foo/image/asset.html?language=EN&userid=100&sessionid=200`. There's a rule set rule to exclude the query string 'userid'. Then the query string cache key would be `/foo/image/asset.html?language=EN&sessionid=200`.
::: zone-end
The following request headers won't be forwarded to a backend when using caching
## Cache behavior and duration
-Cache behavior and duration can be configured in both the Front Door designer routing rule and in Rules Engine. Rules Engine caching configuration will always override the Front Door designer routing rule configuration.
+Cache behavior and duration can be configured in Rules Engine. Rules Engine caching configuration will always override the route configuration.
-* When *caching* is **disabled**, Front Door doesnΓÇÖt cache the response contents, irrespective of origin response directives.
+* When *caching* is **disabled**, Azure Front Door doesn't cache the response contents, irrespective of origin response directives.
-* When *caching* is **enabled**, the cache behavior is different for different values of *Use cache default duration*.
- * When *Use cache default duration* is set to **Yes**, Front Door will always honor origin response header directive. If the origin directive is missing, Front Door will cache contents anywhere from 1 to 3 days.
- * When *Use cache default duration* is set to **No**, Front Door will always override with the *cache duration* (required fields), meaning that it will cache the contents for the cache duration ignoring the values from origin response directives.
+* When *caching* is **enabled**, the cache behavior is different based on the cache behavior value selected.
+ * **Honor origin**: Azure Front Door will always honor the origin response header directive. If the origin directive is missing, Azure Front Door will cache contents anywhere from 1 to 3 days.
+ * **Override always**: Azure Front Door will always override with the cache duration, meaning that it will cache the contents for the cache duration, ignoring the values from origin response directives.
+ * **Override if origin missing**: If the origin doesn't return caching TTL values, Azure Front Door will use the specified cache duration. This behavior will only be applied if the response is cacheable.
> [!NOTE]
-> * The *cache duration* set in the Front Door designer routing rule is the **minimum cache duration**. This override won't work if the cache control header from the backend has a greater TTL than the override value.
> * Azure Front Door makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Front Door might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your backends are offline.
->
+> * Origins may specify not to cache specific responses by using the Cache-Control header with a value of no-cache, private, or no-store. In these circumstances, Front Door won't cache the content and the caching configuration will have no effect.
## Next steps ::: zone pivot="front-door-classic" -- Learn how to [create a Front Door](quickstart-create-front-door.md).-- Learn [how Front Door works](front-door-routing-architecture.md).
+- Learn how to [create an Azure Front Door (classic)](quickstart-create-front-door.md).
+- Learn [how Azure Front Door (classic) works](front-door-routing-architecture.md).
::: zone-end ::: zone pivot="front-door-standard-premium"
-* Learn more about [Rule Set Match Conditions](standard-premium/concept-rule-set-match-conditions.md)
-* Learn more about [Rule Set Actions](front-door-rules-engine-actions.md)
+* Learn more about [Rule set match conditions](standard-premium/concept-rule-set-match-conditions.md)
+* Learn more about [Rule set actions](front-door-rules-engine-actions.md)
::: zone-end
governance Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/scope.md
In addition to the properties on the policy assignment, there is the
[policy exemption](./exemption-structure.md) object. Exemptions enhance the scope story by providing a method to identify a portion of an assignment to not be evaluated. -- Exemption (**free in preview** feature) - A resource hierarchy or individual resource should be
+- Exemption - A resource hierarchy or individual resource should be
evaluated for compliance by the definition, but won't be evaluated for a reason such as having a waiver or being mitigated through another method. Resources in this state show as **Exempted** in compliance reports so that they can be tracked. The exemption object is created on the resource
The following table is a comparison of the scope options:
- Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+ [Organize your resources with Azure management groups](../../management-groups/overview.md).
hdinsight Enterprise Security Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enterprise-security-package.md
Enterprise Security is an optional package that you can add on your HDInsight cl
Currently, only the following cluster types support the Enterprise Security Package:
-* Hadoop (HDInsight 3.6 only)
+* Hadoop
* Spark * Kafka * HBase
For information on pricing and SLA for the Enterprise Security Package, see [HDI
* [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md) * [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md) * [Hortonworks release notes associated with Azure HDInsight versions](./hortonworks-release-notes.md)
-* [Apache components on HDInsight](./hdinsight-component-versioning.md)
+* [Apache components on HDInsight](./hdinsight-component-versioning.md)
import-export Storage Import Export Tool Reviewing Job Status V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-reviewing-job-status-v1.md
You'll find the following errors in the copy logs for import jobs and/or export
| Error category | Error message | Imports | Exports | |-|-||| | `UploadErrorWin32` |File system error. | Yes | Yes |
-| `UploadErrorCloudHttp` |Unsupported blob type. For more information about errors in this category, see [Summary of non-retryable upload errors](../databox/data-box-troubleshoot-data-upload.md#summary-of-non-retryable-upload-errors).|Yes |Yes |
+| `UploadErrorCloudHttp` |Unsupported blob type. For more information about errors in this category, see [Summary of upload errors](../databox/data-box-troubleshoot-data-upload.md#summary-of-upload-errors).|Yes |Yes |
| `UploadErrorDataValidationError` |CRC computed during data ingestion doesn't match the CRC computed during upload. |Yes |Yes |
-| `UploadErrorManagedConversionError` |The size of the blob being imported is invalid. The blob size is <*blob-size*> bytes. Supported sizes are between 20971520 Bytes and 8192 GiB. For more information, see [Summary of non-retryable upload errors](../databox/data-box-troubleshoot-data-upload.md#summary-of-non-retryable-upload-errors). |Yes |Yes |
+| `UploadErrorManagedConversionError` |The size of the blob being imported is invalid. The blob size is <*blob-size*> bytes. Supported sizes are between 20971520 Bytes and 8192 GiB. For more information, see [Summary of upload errors](../databox/data-box-troubleshoot-data-upload.md#summary-of-upload-errors). |Yes |Yes |
| `UploadErrorUnknownType` |Unknown error. |Yes |Yes | | `ContainerRenamed` |Renamed the container because the original container name doesn't follow [Azure naming conventions](../databox/data-box-disk-limits.md#azure-block-blob-page-blob-and-file-naming-conventions). The original container has been renamed to DataBox-<*GUID*> from <*original container name*>. |No |Yes | | `ShareRenamed` |Renamed the share because the original share name doesn't follow [Azure naming conventions](../databox/data-box-disk-limits.md#azure-block-blob-page-blob-and-file-naming-conventions). The original share has been renamed to DataBox-<*GUID*> from <*original folder name*>. |No |Yes |
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md
If you are seeing issues related to your authentication flow:
| 401 IoTHubUnauthorized | DEVICE_DISABLED | The device is disabled in this IoT hub and has moved to another IoT hub. Re-provision the device. | | 401 IoTHubUnauthorized | DEVICE_BLOCKED | An operator has blocked this device. | -- ### File upload error codes Here is a list of common error codes you might see when a device tries to upload a file to the cloud. Remember that before your device can upload a file, you must configure [device file uploads](howto-configure-file-uploads.md) in your application.
Here is a list of common error codes you might see when a device tries to upload
| - | - | - | | 403006 | You've exceeded the number of concurrent file upload operations. Each device client is limited to 10 concurrent file uploads. | Ensure the device promptly notifies IoT Central that the file upload operation has completed. If that doesn't work, try reducing the request timeout. |
-## Payload shape issues
+## Unmodeled data issues
When you've established that your device is sending data to IoT Central, the next step is to ensure that your device is sending data in a valid format.
-There are two main categories of common issues that cause device data to not appear in IoT Central:
--- Device template to device data mismatch:
- - Mismatch in naming such as typos or case-matching issues.
- - Unmodeled properties where the schema isn't defined in the device template.
- - Schema mismatch such as a type defined in the template as `boolean`, but the data is a string.
- - The same telemetry name is defined in multiple interfaces, but the device isn't IoT Plug and Play compliant.
-- Data shape is invalid JSON. To learn more, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).- To detect which categories your issue is in, run the most appropriate command for your scenario: - To validate telemetry, use the preview command:
To detect which categories your issue is in, run the most appropriate command fo
You may be prompted to install the `uamqp` library the first time you run a `validate` command.
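For example, to validate telemetry and property data from a specific device, you could run the preview diagnostics commands from the Azure CLI. This is a sketch; it assumes the `azure-iot` CLI extension is installed, and `<app-id>` and `<device-id>` are placeholders for your IoT Central application and device.

```azurecli
# Validate telemetry sent by a specific device (preview command).
az iot central diagnostics validate-messages \
  --app-id <app-id> \
  --device-id <device-id>

# Validate reported property updates from the same device (preview command).
az iot central diagnostics validate-properties \
  --app-id <app-id> \
  --device-id <device-id>
```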
-The following output shows example error and warning messages from the validate command:
+There are two main categories of common issues that cause device data to not appear in IoT Central:
+
+- Device template to device data mismatch.
+- Data shape is invalid JSON.
+
+### Device template to device data mismatch
+
+A mismatch in naming, such as a typo or a case-matching issue, is a common cause.
+
+The following output shows an example warning message where the device is sending a telemetry value called `Temperature` when it should be `temperature`:
```output Validating telemetry.
-Filtering on device: v22upeoqx6.
+Filtering on device: sample-device-01.
Exiting after 300 second(s), or 10 message(s) have been parsed (whichever happens first).
-[WARNING] [DeviceId: v22upeoqx6] No encoding found. Expected encoding 'utf-8' to be present in message header.
+[WARNING] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Device is sending data that has not been defined in the device template. Following capabilities have NOT been defined in the device template '['Temperature']'. Following capabilities have been defined in the device template (grouped by components) '{'thermostat1': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport'], 'thermostat2': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport'], 'deviceInformation': ['manufacturer', 'model', 'swVersion', 'osName', 'processorArchitecture', 'processorManufacturer', 'totalStorage', 'totalMemory']}'.
+```
+
+Unmodeled properties occur when the schema isn't defined in the device template.
-[WARNING] [DeviceId: v22upeoqx6] Content type '' is not supported. Expected Content type is 'application/json'.
+The following output shows an example warning message where the `osVersion` property isn't defined in the device template:
-[ERROR] [DeviceId: v22upeoqx6] [TemplateId: urn:krhsi_k0u:modelDefinition:w53jukkazs] Datatype of field 'humid' does not match the da
-tatype 'double'. Data '56'. All dates/times/datetimes/durations must be ISO 8601 compliant.
+```output
+Command group 'iot central diagnostics' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+[WARNING] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Device is sending data that has not been defined in the device template. Following capabilities have NOT been defined in the device template '['osVersion']'. Following capabilities have been defined in the device template (grouped by components) '{'thermostat1': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport', 'rundiagnostics'], 'thermostat2': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport', 'rundiagnostics'], 'deviceInformation': ['manufacturer', 'model', 'swVersion', 'osName', 'processorArchitecture', 'processorManufacturer', 'totalStorage', 'totalMemory']}'.
+```
++
+A schema mismatch occurs when, for example, a type is defined in the template as `boolean` but the device sends a string.
+
+The following output shows an example error message where the device is sending a string value for a telemetry field that's defined as a double:
+
+```output
+Command group 'iot central diagnostics' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+Validating telemetry.
+Filtering on device: sample-device-01.
+Exiting after 300 second(s), or 10 message(s) have been parsed (whichever happens first).
+[ERROR] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Datatype of telemetry field 'temperature' does not match the datatype double. Data sent by the device : curr_temp. For more information, see: https://aka.ms/iotcentral-payloads
```
+A mismatch can also occur when the same telemetry name is defined in multiple interfaces, but the device isn't IoT Plug and Play compliant.
+
+### Invalid JSON
+
+If no errors are reported but a value isn't appearing in the application, the payload is probably malformed JSON. To learn more, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
+ If you prefer to use a GUI, use the IoT Central **Raw data** view to see if something isn't being modeled. The **Raw data** view doesn't detect if the device is sending malformed JSON. :::image type="content" source="media/troubleshoot-connection/raw-data-view.png" alt-text="Screenshot of Raw Data view":::
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-reprovision.md
When designing your solution and defining a reprovisioning logic there are a few
* Retry capability implemented on your client code, as described on the [Retry general guidance](/azure/architecture/best-practices/transient-faults) at the Azure Architecture Center >[!TIP]
-> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
+> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to use the [Device Registration Status Lookup](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup) API and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device so that it can connect directly to IoT Hub after the first-time provisioning through DPS. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from IoT Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios: > * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors. > * For 429 errors, only retry after the time indicated in the Retry-After header.
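As a rough illustration of that guidance, the following bash sketch retries a connection attempt with exponential back-off and randomization. The `try_connect_iot_hub` function is a hypothetical placeholder for your device's connect logic, for example connecting with saved IoT Hub information before falling back to reprovisioning through DPS.

```bash
# Hypothetical placeholder: replace with your device's connect logic.
try_connect_iot_hub() { return 1; }

max_attempts=5
delay=2
for attempt in $(seq 1 "$max_attempts"); do
  if try_connect_iot_hub; then
    echo "Connected on attempt $attempt."
    break
  fi
  # Add randomized jitter so a fleet of devices doesn't retry in lockstep.
  jitter=$((RANDOM % delay))
  wait_time=$((delay + jitter))
  echo "Attempt $attempt failed. Retrying in $wait_time seconds."
  sleep "$wait_time"
  delay=$((delay * 2))   # exponential back-off
done
```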
iot-dps How To Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-reprovision.md
How often a device submits a provisioning request depends on the scenario. When
* Retry capability implemented on your client code, as described on the [Retry general guidance](/azure/architecture/best-practices/transient-faults) at the Azure Architecture Center >[!TIP]
-> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
+> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to use the [Device Registration Status Lookup](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup) API and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device so that it can connect directly to IoT Hub after the first-time provisioning through DPS. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from IoT Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios: > * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors. > * For 429 errors, only retry after the time indicated in the Retry-After header.
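To illustrate the 429 guidance, the following bash sketch reads the Retry-After header from a throttled response before retrying. The `<request-url>` placeholder and the 30-second fallback value are assumptions for illustration only.

```bash
# Capture the HTTP status code and response headers from a single request.
status=$(curl -s -o /dev/null -D headers.txt -w '%{http_code}' "<request-url>")

if [ "$status" = "429" ]; then
  # Honor the Retry-After header; fall back to 30 seconds if it's missing.
  retry_after=$(grep -i '^Retry-After:' headers.txt | tr -d '\r' | awk '{print $2}')
  echo "Request throttled. Retrying after ${retry_after:-30} seconds."
  sleep "${retry_after:-30}"
fi
```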
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Learn more about the [Defender for IoT micro agent](../defender-for-iot/device-b
1. Open a terminal on the leaf device.
-1. Use the following command to place the connection string encoded in utf-8 in the Defender for Cloud agent directory into the file `connection_string.txt` in the following path: `/var/defender_iot_micro_agent/connection_string.txt`:
+1. Use the following command to place the connection string, encoded in UTF-8, into the file `connection_string.txt` in the Defender for Cloud agent directory at the following path: `/etc/defender_iot_micro_agent/connection_string.txt`:
```bash
- sudo bash -c 'echo "<connection string>" > /var/defender_iot_micro_agent/connection_string.txt'
+ sudo bash -c 'echo "<connection string>" > /etc/defender_iot_micro_agent/connection_string.txt'
```
- The `connection_string.txt` should now be located in the following path location `/var/defender_iot_micro_agent/connection_string.txt`.
+    The `connection_string.txt` file should now be located at the following path: `/etc/defender_iot_micro_agent/connection_string.txt`.
1. Restart the service using this command:
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
If you are using IoT Edge for Linux on Windows, you need to use the SSH key loca
trusted_ca_certs: "file:///<path>/<root CA cert>" ```
-1. Make sure that the user **iotedge** has read permissions for the directory holding the certificates.
+1. Make sure that the user **iotedge** has read/write permissions for the directory holding the certificates.
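   A hedged sketch of granting that access, where `<certs-directory>` is a placeholder for the certificate directory you referenced from the IoT Edge configuration:

   ```bash
   # Give the iotedge user ownership of the certificate directory and
   # read/write access to its contents. <certs-directory> is a placeholder.
   sudo chown -R iotedge <certs-directory>
   sudo chmod -R u+rw <certs-directory>
   ```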
1. If you've used any other certificates for IoT Edge on the device before, delete the files in the following two directories before starting or restarting IoT Edge:
load-testing How To Find Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-find-download-logs.md
Title: Troubleshoot load test errors
-description: Learn how you can troubleshoot errors during your load test by downloading and analyzing the Apache JMeter logs in the Azure portal.
+description: Learn how you can diagnose and troubleshoot errors in Azure Load Testing. Download and analyze the Apache JMeter worker logs in the Azure portal.
Previously updated : 01/14/2022 Last updated : 03/23/2022
-# Troubleshoot load test errors by downloading Apache JMeter logs in Azure Load Testing Preview
+# Troubleshoot load test errors by downloading Apache JMeter logs
-In this article, you'll learn how to download the Apache JMeter logs for Azure Load Testing Preview in the Azure portal. You can use the logging information to troubleshoot problems while the Apache JMeter script runs.
+Learn how to diagnose and troubleshoot errors while running a load test with Azure Load Testing Preview. Download the Apache JMeter worker logs or load test results for detailed logging information.
-The Apache JMeter log can help you identify problems in your JMX file, or run-time issues that occur while the test is running. For example, the application endpoint might be unavailable, or the JMX file might contain invalid credentials.
+When you start a load test, the Azure Load Testing test engines run your Apache JMeter script. Errors can occur at different levels: for example, during the execution of the JMeter script, while connecting to the application endpoint, or in the test engine instance.
-When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. While your load test is running, Apache JMeter stores detailed logging information in the worker node logs. You can download the JMeter worker node log for your load test run from the Azure portal to help you diagnose load test errors.
+You can use different sources of information to diagnose these errors:
+
+- [Download the Apache JMeter worker logs](#download-apache-jmeter-worker-logs) to investigate issues with JMeter and the test script execution.
+- [Export the load test result](./how-to-export-test-results.md) and analyze the response code and response message of each HTTP request.
+
+There might also be problems with the application endpoint itself. If you host the application on Azure, you can [configure server-side monitoring](./how-to-monitor-server-side-metrics.md) to get detailed insights about the application components.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+## Load test error indicators
+
+After running a load test, there are multiple error indicators available:
+
+- The test run **Status** information is **Failed**.
+
+ :::image type="content" source="media/how-to-find-download-logs/dashboard-test-failed.png" alt-text="Screenshot that shows the load test dashboard, highlighting status information for a failed test.":::
+
+- The test run statistics shows a non-zero **Error percentage** value.
+- The **Errors** graph in the client-side metrics shows errors.
+
+ :::image type="content" source="media/how-to-find-download-logs/dashboard-errors.png" alt-text="Screenshot that shows the load test dashboard, highlighting the error information.":::
+ ## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - An Azure load testing resource that has a completed test run. If you need to create an Azure load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
-## Access and download logs for your load test
+## Download Apache JMeter worker logs
+
+When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. During the load test, Apache JMeter stores detailed logging in the worker node logs. You can download these JMeter worker logs for each test run in the Azure portal.
-In this section, you retrieve and download the Azure Load Testing logs from the Azure portal.
+For example, if there's a problem with your JMeter script, the load test status will be **Failed**. In the worker logs you might find additional information about the cause of the problem.
+
+To download the worker logs for an Azure Load Testing test run, follow these steps:
1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
In this section, you retrieve and download the Azure Load Testing logs from the
>[!TIP] > To limit the number of tests, use the search box and the **Time range** filter.
-1. In the list of tests, select the test run you're working with to view its details.
+1. Select a test run from the list to view the test run dashboard.
:::image type="content" source="media/how-to-find-download-logs/test-run.png" alt-text="Screenshot that shows a list of test runs for the selected load test.":::
-1. On the dashboard, select **Download**, and then select **Logs**.
+1. On the dashboard, select **Download**, and then select **Logs**.
- :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the load test logs from the test run details page.":::
+ :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the test log files from the test run details page.":::
The browser should now start downloading the JMeter worker node log file *worker.log*.
In this section, you retrieve and download the Azure Load Testing logs from the
## Next steps
+- Learn how to [Export the load test result](./how-to-export-test-results.md).
- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).- - Learn how to [Get detailed insights for Azure App Service based applications](./how-to-appservice-insights.md).- - Learn how to [Compare multiple load test runs](./how-to-compare-multiple-test-runs.md).
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 03/07/2022 Last updated : 03/22/2022 # Edit host and app settings for logic apps in single-tenant Azure Logic Apps
-In *single-tenant* Azure Logic Apps, the *app settings* for a logic app specify the global configuration options that affect *all the workflows* in that logic app. However, these settings apply *only* when these workflows run in your *local development environment*. Locally-running workflows can access these app settings as *local environment variables*, which are used by local development tools for values that can often change between environments. For example, these values can contain connection strings. When you deploy to Azure, app settings are ignored and aren't included with your deployment.
+In *single-tenant* Azure Logic Apps, the *app settings* for a logic app specify the global configuration options that affect *all the workflows* in that logic app. However, these settings apply *only* when these workflows run in your *local development environment*. Locally running workflows can access these app settings as *local environment variables*, which are used by local development tools for values that can often change between environments. For example, these values can contain connection strings. When you deploy to Azure, app settings are ignored and aren't included with your deployment.
Your logic app also has *host settings*, which specify the runtime configuration settings and values that apply to *all the workflows* in that logic app, for example, default values for throughput, capacity, data size, and so on, *whether they run locally or in Azure*.
App settings in Azure Logic Apps work similarly to app settings in Azure Functio
| `Workflows.Connection.AuthenticationAudience` | None | Sets the audience for authenticating an Azure-hosted connection. | | `Workflows.WebhookRedirectHostUri` | None | Sets the host name to use for webhook callback URLs. | | `WEBSITE_LOAD_ROOT_CERTIFICATES` | None | Sets the thumbprints for the root certificates to be trusted. |
-| `ServiceProviders.Sql.QueryExecutionTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
+| `ServiceProviders.Sql.QueryTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
|||| <a name="manage-app-settings"></a>
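For example, you can review or change one of these app settings from the Azure CLI. This is a sketch that assumes a Standard logic app resource; `<logic-app-name>` and `<resource-group>` are placeholders, and the `ServiceProviders.Sql.QueryTimeout` setting comes from the table above.

```azurecli
# List the current app settings for the logic app.
az logicapp config appsettings list \
  --name <logic-app-name> \
  --resource-group <resource-group>

# Set the SQL service provider request timeout to 5 minutes.
az logicapp config appsettings set \
  --name <logic-app-name> \
  --resource-group <resource-group> \
  --settings "ServiceProviders.Sql.QueryTimeout=00:05:00"
```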
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description | |||-| | `Runtime.FlowRunRetryableActionJobCallback.MaximumContentLengthInBytesForPartialContent` | `1073741824` bytes | When chunking is supported and enabled on an operation, sets the maximum size in bytes for downloaded or uploaded content. |
-| `Runtime.FlowRunRetryableActionJobCallback.MaxChunkSizeInBytes` | `52428800` bytes | When chunking is supported and enabled on an operation , sets the maximum size in bytes for each content chunk. |
+| `Runtime.FlowRunRetryableActionJobCallback.MaxChunkSizeInBytes` | `52428800` bytes | When chunking is supported and enabled on an operation, sets the maximum size in bytes for each content chunk. |
| `Runtime.FlowRunRetryableActionJobCallback.MaximumRequestCountForPartialContent` | `1000` requests | When chunking is supported and enabled on an operation, sets the maximum number of requests that an action execution can make to download content. | ||||
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.Backend.HttpWebhookOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for HTTP webhook triggers and actions. | | `Runtime.Backend.HttpWebhookOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for HTTP webhook triggers and actions. | | `Runtime.Backend.HttpWebhookOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for HTTP webhook triggers and actions. |
-| `Runtime.Backend.HttpWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 hour) | Sets the default wake up interval for HTTP webhook trigger and action jobs. |
+| `Runtime.Backend.HttpWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 hour) | Sets the default wake-up interval for HTTP webhook trigger and action jobs. |
|||| <a name="built-in-azure-functions"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.Backend.ApiConnectionOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for managed API connector triggers and actions. | | `Runtime.Backend.ApiWebhookOperation.DefaultRetryMaximumInterval` | `1.00:00:00` <br>(1 day) | Sets the maximum retry interval for managed API connector webhook triggers and actions. | | `Runtime.Backend.ApiConnectionOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for managed API connector triggers and actions. |
-| `Runtime.Backend.ApiWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 day) | Sets the default wake up interval for managed API connector webhook trigger and action jobs. |
+| `Runtime.Backend.ApiWebhookOperation.DefaultWakeUpInterval` | `1.00:00:00` <br>(1 day) | Sets the default wake-up interval for managed API connector webhook trigger and action jobs. |
|||| <a name="blob-storage"></a>
machine-learning How To Attach Arc Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-arc-kubernetes.md
Use the `identity_type` parameter to enable `SystemAssigned` or `UserAssigned` m
You can attach an AKS or Azure Arc enabled Kubernetes cluster using the Azure Machine Learning 2.0 CLI (preview).
-Use the Azure Machine Learning CLI [`attach`](/cli/azure/ml/compute) command and set the `--type` argument to `kubernetes` to attach your Kubernetes cluster using the Azure Machine Learning 2.0 CLI.
+Use the Azure Machine Learning CLI [`attach`](/cli/azure/ml/compute) command and set the `--type` argument to `Kubernetes` to attach your Kubernetes cluster using the Azure Machine Learning 2.0 CLI.
> [!NOTE] > Compute attach support for AKS or Azure Arc enabled Kubernetes clusters requires a version of the Azure CLI `ml` extension >= 2.0.1a4. For more information, see [Install and set up the CLI (v2)](how-to-configure-cli.md).
The following commands show how to attach an Azure Arc-enabled Kubernetes cluste
**AKS** ```azurecli
-az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --name amlarc-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/managedclusters/<cluster-name>" --type kubernetes --identity-type UserAssigned --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
+az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --name amlarc-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/managedclusters/<cluster-name>" --type Kubernetes --identity-type UserAssigned --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
``` **Azure Arc enabled Kubernetes**
machine-learning How To Deploy Update Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-update-web-service.md
In this article, you learn how to update a web service that was deployed with Az
## Prerequisites
-This tutorial assumes you have already deployed a web service with Azure Machine Learning. If you need to learn how to deploy a web service, [follow these steps](how-to-deploy-and-where.md).
+- This article assumes you have already deployed a web service with Azure Machine Learning. If you need to learn how to deploy a web service, [follow these steps](how-to-deploy-and-where.md).
+- The code snippets in this article assume that the `ws` variable has already been initialized to your workspace by using the [Workspace()](/python/api/azureml-core/azureml.core.workspace.workspace#constructor) constructor or by loading a saved configuration with [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#azureml-core-workspace-workspace-from-config). The following snippet demonstrates how to use the constructor:
+
+ ```python
+ from azureml.core import Workspace
+ ws = Workspace(subscription_id="mysubscriptionid",
+ resource_group="myresourcegroup",
+ workspace_name="myworkspace")
+ ```
## Update web service
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
In this article, you learn how to:
* When the storage account is in the VNet, there are extra validation requirements when using studio: * If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet.
- * If the storage account uses a __private endpoint__, the workspace private endpoint and storage service endpoint must be in the same VNet. In this case, they can be in different subnets.
+ * If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets.
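One way to check this is to list the private endpoints in the resource group and compare the subnet each one uses. This is a sketch; `<resource-group>` is a placeholder for the resource group that holds the workspace and storage private endpoints.

```azurecli
# List private endpoints and the subnet each one is attached to.
az network private-endpoint list \
  --resource-group <resource-group> \
  --query "[].{name:name, subnet:subnet.id}" \
  --output table
```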
### Designer sample pipeline
marketplace Azure App Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-offer-listing.md
For each contact, you'll provide a name, phone number, and email address (these
## Add marketplace media
+You'll next add your logo. You can also add screenshots and videos to help your offer stand out.
+
+> [!IMPORTANT]
+> When creating media for your offer, make sure that the assets you create are welcoming and inclusive for all. To learn more about how to create accessible media, see [Create accessible media](https://www.microsoft.com/accessibility/supplier-toolkit-resources).
+ ### Store logos Under **Logos**, upload a **Large** logo in PNG format between 216 x 216 and 350 x 350 pixels. Partner Center will automatically create **Small** (48 x 48) and **Medium** (90 x 90) logos, which you can replace later if you want.
marketplace Private Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/private-plans.md
Private plans let publishers offer private, customized solutions to targeted cus
Private plans let publishers take advantage of the scale and global availability of a public marketplace, with the flexibility and control needed to negotiate and deliver custom deals and configurations. Enterprises can now buy and sell in ways they expect.
+>[!Note]
+>Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. For details, see [ISV to CSP partner private offers](/azure/marketplace/isv-csp-reseller).
+ ## Create private plans For *new or existing offers with plans*, publishers can easily create new, private variations by creating new plans (formerly known as SKUs) and marking them as private. Each offer can have up to 45 private plans.
Private plans will also appear in search results and can be deployed via command
[![[Private offers appearing in search results.]](media/marketplace-publishers-guide/private-product.png)](media/marketplace-publishers-guide/private-product.png#lightbox)
->[!Note]
->Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
- <! ## Next steps
migrate Create Manage Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md
Set up a new project in an Azure subscription.
> [!Note]
- > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](discover-and-assess-using-private-endpoints.md#create-a-project-with-private-endpoint-connectivity)
+ > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](how-to-use-azure-migrate-with-private-endpoints.md#create-a-project-with-private-endpoint-connectivity)
7. Select **Create**.
migrate Discover And Assess Using Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md
- Title: Discover and assess using Azure Private Link
-description: Create an Azure Migrate project, set up the Azure Migrate appliance, and use it to discover and assess servers for migration.
--
-ms.
- Previously updated : 12/29/2021-
-
-# Discover and assess servers for migration using Private Link
-
-This article describes how to create an Azure Migrate project, set up the Azure Migrate appliance, and use it to discover and assess servers for migration using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
-
-## Create a project with private endpoint connectivity
-
-To set up a new Azure Migrate project, see [Create and manage projects](./create-manage-projects.md#create-a-project-for-the-first-time).
-
-> [!Note]
-> You can't change the connectivity method to private endpoint connectivity for existing Azure Migrate projects.
-
-In the **Advanced** configuration section, provide the following details to create a private endpoint for your Azure Migrate project.
-1. In **Connectivity method**, choose **Private endpoint**.
-1. In **Disable public endpoint access**, keep the default setting **No**. Some migration tools might not be able to upload usage data to the Azure Migrate project if public network access is disabled. Learn more about [other integrated tools](how-to-use-azure-migrate-with-private-endpoints.md#other-integrated-tools).
-1. In **Virtual network subscription**, select the subscription for the private endpoint virtual network.
-1. In **Virtual network**, select the virtual network for the private endpoint. The Azure Migrate appliance and other software components that need to connect to the Azure Migrate project must be on this network or a connected virtual network.
-1. In **Subnet**, select the subnet for the private endpoint.
-
- ![Screenshot that shows the Advanced section on the Create project page.](./media/how-to-use-azure-migrate-with-private-endpoints/create-project.png)
-
-1. Select **Create** to create a migration project and attach a private endpoint to it. Wait a few minutes for the Azure Migrate project to deploy. Don't close this page while the project creation is in progress.
-
-> [!Note]
-> If you've already created a project, you can use that project to register more appliances to discover and assess more servers. Learn how to [manage projects](create-manage-projects.md#find-a-project).
-
-## Set up the Azure Migrate appliance
-
-1. In **Discover machines** > **Are your machines virtualized?**, select the virtualization server type.
-1. In **Generate Azure Migrate project key**, provide a name for the Azure Migrate appliance.
-1. Select **Generate key** to create the required Azure resources.
-
- > [!Important]
- > Don't close the **Discover machines** page during the creation of resources.
- - At this step, Azure Migrate creates a key vault, a storage account, a Recovery Services vault (only for agentless VMware migrations), and a few internal resources. Azure Migrate attaches a private endpoint to each resource. The private endpoints are created in the virtual network selected during the project creation.
- - After the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix *privatelink*. By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type and inserts DNS A records for the associated private endpoints. This action enables the Azure Migrate appliance and other software components that reside in the source network to reach the Azure Migrate resource endpoints on private IP addresses.
- - Azure Migrate also enables a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the migrate project and the Recovery Services vault and grants permissions to the managed identity to securely access the storage account.
-
-1. After the key is successfully generated, copy the key details to configure and register the appliance.
-
-### Download the appliance installer file
-
-Azure Migrate: Discovery and assessment use a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate.
-
-> [!Note]
-> If you have deployed an appliance using a template (OVA for servers on a VMware environment and VHD for a Hyper-V environment), you can use the same appliance and register it with an Azure Migrate project with private endpoint connectivity.
-
-To set up the appliance:
- 1. Download the zipped file that contains the installer script from the portal.
- 1. Copy the zipped file on the server that will host the appliance.
- 1. After you download the zipped file, verify the file security.
- 1. Run the installer script to deploy the appliance.
-
-### Verify security
-
-Check that the zipped file is secure, before you deploy it.
-
-1. On the server to which you downloaded the file, open an administrator command window.
-2. Run the following command to generate the hash for the zipped file:
- - ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
-3. Verify the latest appliance version and hash value:
-
- **Download** | **Hash value**
- |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
-
-> [!NOTE]
-> The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical or other to deploy an appliance with the desired configuration.
-
-Make sure the server meets the [hardware requirements](./migrate-appliance.md) for the chosen scenario, such as VMware, Hyper-V, physical or other, and can connect to the [required URLs](./migrate-appliance.md#public-cloud-urls-for-private-link-connectivity).
-
-### Run the Azure Migrate installer script
-
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
-
-2. Launch PowerShell on the above server with administrative (elevated) privilege.
-
-3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
-
-4. Run the script named `AzureMigrateInstaller.ps1` by running the following command:
-
- `PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1`
-
-5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **servers running in your VMware environment** to an Azure Migrate project with **private endpoint connectivity** on **Azure public cloud**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration for private endpoint." lightbox="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-expanded.png":::
-
-After the script has executed successfully, the appliance configuration manager will be launched automatically.
-
-> [!NOTE]
-> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
-
-## Configure the appliance and start continuous discovery
-
-Open a browser on any machine that can connect to the appliance server. Open the URL of the appliance configuration manager, `https://appliance name or IP address: 44368`. Or, you can open the configuration manager from the appliance server desktop by selecting the shortcut for the configuration manager.
-
-### Set up prerequisites
-
-1. Read the third-party information, and accept the **license terms**.
-
-1. In the configuration manager under **Set up prerequisites**, do the following:
- - **Connectivity**: The appliance checks for access to the required URLs. If the server uses a proxy:
- - Select **Set up proxy** to specify the proxy address `http://ProxyIPAddress` or `http://ProxyFQDN` and listening port.
- - Specify credentials if the proxy needs authentication. Only HTTP proxy is supported.
- - You can add a list of URLs or IP addresses that should bypass the proxy server.
- ![Adding to bypass proxy list](./media/how-to-use-azure-migrate-with-private-endpoints/bypass-proxy-list.png)
- - Select **Save** to register the configuration if you've updated the proxy server details or added URLs or IP addresses to bypass proxy.
-
- > [!Note]
- > If you get an error with the aka.ms/* link during the connectivity check and you don't want the appliance to access this URL over the internet, disable the auto-update service on the appliance. Follow the steps in [Turn off auto-update](./migrate-appliance.md#turn-off-auto-update). After you've disabled auto-update, the aka.ms/* URL connectivity check will be skipped.
-
- - **Time sync**: The time on the appliance should be in sync with internet time for discovery to work properly.
- - **Install updates**: The appliance ensures that the latest updates are installed. After the check completes, select **View appliance services** to see the status and versions of the services running on the appliance server.
- > [!Note]
- > If you disabled auto-update on the appliance, you can update the appliance services manually to get the latest versions of the services. Follow the steps in [Manually update an older version](./migrate-appliance.md#manually-update-an-older-version).
- - **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions.
-
-### Register the appliance and start continuous discovery
-
-After the prerequisites check has completed, follow the steps to register the appliance and start continuous discovery for the respective scenarios:
-- [VMware VMs](./tutorial-discover-vmware.md#register-the-appliance-with-azure-migrate)-- [Hyper-V VMs](./tutorial-discover-hyper-v.md#register-the-appliance-with-azure-migrate)-- [Physical servers](./tutorial-discover-physical.md#register-the-appliance-with-azure-migrate)-- [AWS VMs](./tutorial-discover-aws.md#register-the-appliance-with-azure-migrate)-- [GCP VMs](./tutorial-discover-gcp.md#register-the-appliance-with-azure-migrate)-
->[!Note]
-> If you get DNS resolution issues during appliance registration or at the time of starting discovery, ensure that Azure Migrate resources created during the **Generate key** step in the portal are reachable from the on-premises server that hosts the Azure Migrate appliance. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
-
-## Assess your servers for migration to Azure
-After the discovery is complete, assess your servers, such as [VMware VMs](./tutorial-assess-vmware-azure-vm.md), [Hyper-V VMs](./tutorial-assess-hyper-v.md), [physical servers](./tutorial-assess-vmware-azure-vm.md), [AWS VMs](./tutorial-assess-aws.md), and [GCP VMs](./tutorial-assess-gcp.md), for migration to Azure VMs or Azure VMware Solution by using the Azure Migrate: Discovery and assessment tool.
-
-You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and assessment tool by using an imported CSV file.
-
-## Next steps
--- [Migrate servers to Azure using Private Link](migrate-servers-to-azure-using-private-link.md).
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
ms. Previously updated : 12/29/2021 Last updated : 05/10/2020
-# Support requirements and considerations
+# Use Azure Migrate with private endpoints
-The article series describes how to use Azure Migrate to discover, assess, and migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
+This article describes how to use Azure Migrate to discover, assess, and migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md).
-We recommend the private endpoint connectivity method when there is an organizational requirement to access Azure Migrate and other Azure resources without traversing public networks. By using Private Link, you can use your existing ExpressRoute private peering circuits for better bandwidth or latency requirements.
+You can use the [Azure Migrate: Discovery and assessment](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](./migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
-Before you get started, review the required permissions and the supported scenarios and tools.
+We recommend the private endpoint connectivity method when there's an organizational requirement to access Azure Migrate and other Azure resources without traversing public networks. By using Private Link, you can use your existing ExpressRoute private peering circuits for better bandwidth or latency requirements.
## Support requirements
You must have Contributor + User Access Administrator or Owner permissions on th
**Discovery and assessment** | Perform an agentless, at-scale discovery and assessment of your servers running on any platform. Examples include hypervisor platforms such as [VMware vSphere](./tutorial-discover-vmware.md) or [Microsoft Hyper-V](./tutorial-discover-hyper-v.md), public clouds such as [AWS](./tutorial-discover-aws.md) or [GCP](./tutorial-discover-gcp.md), or even [bare metal servers](./tutorial-discover-physical.md). | Azure Migrate: Discovery and assessment <br/> **Software inventory** | Discover apps, roles, and features running on VMware VMs. | Azure Migrate: Discovery and assessment **Dependency visualization** | Use the dependency analysis capability to identify and understand dependencies across servers. <br/> [Agentless dependency visualization](./how-to-create-group-machine-dependencies-agentless.md) is supported natively with Azure Migrate private link support. <br/>[Agent-based dependency visualization](./how-to-create-group-machine-dependencies.md) requires internet connectivity. Learn how to use [private endpoints for agent-based dependency visualization](../azure-monitor/logs/private-link-security.md). | Azure Migrate: Discovery and assessment |
-**Migration** | Perform [agentless VMware migrations](./tutorial-migrate-vmware.md), [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md), or use the agent-based approach to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider. | Azure Migrate: Server Migration
+**Migration** | Perform [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md) or use the agent-based approach to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider. | Azure Migrate: Server Migration
+
+>[!Note]
+> [Agentless migration of VMware VMs](./tutorial-migrate-vmware.md) currently supports replication data transfer over a private network. Other traffic (orchestration, non-voluminous traffic) will require internet access or connectivity via ExpressRoute Microsoft peering. [Learn more.](./replicate-using-expressroute.md)
#### Other integrated tools
To enable public network access for the Azure Migrate project, sign in to the Azure portal.
**Pricing** | For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
**Virtual network requirements** | The ExpressRoute/VPN gateway endpoint should reside in the selected virtual network or a virtual network connected to it. You might need about 15 IP addresses in the virtual network.
+## Create a project with private endpoint connectivity
+
+To set up a new Azure Migrate project, see [Create and manage projects](./create-manage-projects.md#create-a-project-for-the-first-time).
+
+> [!Note]
+> You can't change the connectivity method to private endpoint connectivity for existing Azure Migrate projects.
+
+In the **Advanced** configuration section, provide the following details to create a private endpoint for your Azure Migrate project.
+1. In **Connectivity method**, choose **Private endpoint**.
+1. In **Disable public endpoint access**, keep the default setting **No**. Some migration tools might not be able to upload usage data to the Azure Migrate project if public network access is disabled. Learn more about [other integrated tools](#other-integrated-tools).
+1. In **Virtual network subscription**, select the subscription for the private endpoint virtual network.
+1. In **Virtual network**, select the virtual network for the private endpoint. The Azure Migrate appliance and other software components that need to connect to the Azure Migrate project must be on this network or a connected virtual network.
+1. In **Subnet**, select the subnet for the private endpoint.
+
+ ![Screenshot that shows the Advanced section on the Create project page.](./media/how-to-use-azure-migrate-with-private-endpoints/create-project.png)
+
+1. Select **Create** to create a migration project and attach a private endpoint to it. Wait a few minutes for the Azure Migrate project to deploy. Don't close this page while the project creation is in progress.
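
Optionally, after the project is deployed, you can confirm that its private endpoint was provisioned before you continue. The following Azure PowerShell sketch is only a convenience check; the resource group name is a placeholder, and the same information appears in the portal on the private endpoint resource.

```powershell
# Requires the Az.Network module and a signed-in session (Connect-AzAccount).
# "MigrateProject-rg" is a placeholder for the resource group that contains the Azure Migrate project.
$endpoints = Get-AzPrivateEndpoint -ResourceGroupName "MigrateProject-rg"

foreach ($pe in $endpoints) {
    # A healthy endpoint reports a ProvisioningState of "Succeeded".
    "{0}: {1}" -f $pe.Name, $pe.ProvisioningState
}
```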
+
+## Discover and assess servers for migration by using Private Link
+
+This section describes how to set up the Azure Migrate appliance. Then you'll use it to discover and assess servers for migration.
+
+### Set up the Azure Migrate appliance
+
+1. In **Discover machines** > **Are your machines virtualized?**, select the server type.
+1. In **Generate Azure Migrate project key**, provide a name for the Azure Migrate appliance.
+1. Select **Generate key** to create the required Azure resources.
+
+ > [!Important]
+ > Don't close the **Discover machines** page during the creation of resources.
+ - At this step, Azure Migrate creates a key vault, a storage account, a Recovery Services vault (only for agentless VMware migrations), and a few internal resources. Azure Migrate attaches a private endpoint to each resource. The private endpoints are created in the virtual network selected during the project creation.
+ - After the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix *privatelink*. By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type and inserts DNS A records for the associated private endpoints. This action enables the Azure Migrate appliance and other software components that reside in the source network to reach the Azure Migrate resource endpoints on private IP addresses.
+ - Azure Migrate also enables a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the migrate project and grants permissions to the managed identity to securely access the storage account.
+1. After the key is successfully generated, copy the key details to configure and register the appliance.
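
If you want to verify that the *privatelink* DNS zones and A records described above were created, you can inspect them with Azure PowerShell. This is an optional check; the resource group and zone names below are illustrative placeholders, and the zones created for your project depend on the resource types involved.

```powershell
# Requires the Az.PrivateDns module and a signed-in session.
$resourceGroup = "MigrateProject-rg"                      # placeholder resource group
$zoneName      = "privatelink.blob.core.windows.net"      # example zone for the storage account

# List the A records that map the privatelink FQDNs to private IP addresses.
Get-AzPrivateDnsRecordSet -ResourceGroupName $resourceGroup -ZoneName $zoneName -RecordType A |
    Select-Object Name, @{ Name = "IPv4"; Expression = { $_.Records.Ipv4Address -join ", " } }
```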
+
+#### Download the appliance installer file
+
+Azure Migrate: Discovery and assessment uses a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate.
+
+> [!Note]
+> If you have deployed an appliance using a template (OVA for servers on a VMware environment and VHD for a Hyper-V environment), you can use the same appliance and register it with an Azure Migrate project with private endpoint connectivity.
+
+To set up the appliance:
+ 1. Download the zipped file that contains the installer script from the portal.
+ 1. Copy the zipped file to the server that will host the appliance.
+ 1. After you download the zipped file, verify the file security.
+ 1. Run the installer script to deploy the appliance.
+
+#### Verify security
+
+Check that the zipped file is secure before you deploy it.
+
+1. On the server to which you downloaded the file, open an administrator command window.
+2. Run the following command to generate the hash for the zipped file:
+ - ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
+
+ **Download** | **Hash value**
+    --- | ---
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
+
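If you prefer PowerShell over CertUtil, `Get-FileHash` produces the same SHA256 value. The download path below is only an example; compare the output against the published hash in the table above.

```powershell
# Compute the SHA256 hash of the downloaded installer and compare it with the published value.
$zipPath  = "C:\Users\administrator\Desktop\AzureMigrateInstaller.zip"   # example path
$expected = "7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c"

$actual = (Get-FileHash -Path $zipPath -Algorithm SHA256).Hash

if ($actual -eq $expected) {
    "Hash matches the published value."
} else {
    "Hash mismatch - do not use this file."
}
```
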
+> [!NOTE]
+> The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical, or other, with the desired configuration.
+
+Make sure the server meets the [hardware requirements](./migrate-appliance.md) for the chosen scenario, such as VMware, Hyper-V, physical or other, and can connect to the [required URLs](./migrate-appliance.md#public-cloud-urls-for-private-link-connectivity).
+
+#### Run the Azure Migrate installer script
+
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+
+2. Launch PowerShell on that server with administrative (elevated) privileges.
+
+3. Change the PowerShell directory to the folder where you extracted the contents of the downloaded zipped file.
+
+4. Run the script named `AzureMigrateInstaller.ps1` by running the following command:
+
+ `PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1`
+
+5. Select from the scenario, cloud, and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **servers running in your VMware environment** for an Azure Migrate project with **private endpoint connectivity** on the **Azure public cloud**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration for private endpoint." lightbox="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-expanded.png":::
+
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
+
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
+### Configure the appliance and start continuous discovery
+
+Open a browser on any machine that can connect to the appliance server. Open the URL of the appliance configuration manager: `https://<appliance name or IP address>:44368`. Or, open the configuration manager from the appliance server desktop by selecting the shortcut for the configuration manager.
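
If the configuration manager page doesn't load, a quick reachability check from the same machine can help narrow down the problem. A minimal sketch, assuming the appliance name below is a placeholder for your appliance's name or IP address:

```powershell
# Verify that the appliance configuration manager port (44368) is reachable.
$result = Test-NetConnection -ComputerName "migrate-appliance" -Port 44368

if ($result.TcpTestSucceeded) { "Port 44368 is reachable." } else { "Port 44368 is not reachable." }
```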
+
+#### Set up prerequisites
+
+1. Read the third-party information, and accept the **license terms**.
+
+1. In the configuration manager under **Set up prerequisites**, do the following:
+ - **Connectivity**: The appliance checks for access to the required URLs. If the server uses a proxy:
+ - Select **Set up proxy** to specify the proxy address `http://ProxyIPAddress` or `http://ProxyFQDN` and listening port.
+ - Specify credentials if the proxy needs authentication. Only HTTP proxy is supported.
+ - You can add a list of URLs or IP addresses that should bypass the proxy server.
+ ![Adding to bypass proxy list](./media/how-to-use-azure-migrate-with-private-endpoints/bypass-proxy-list.png)
+ - Select **Save** to register the configuration if you've updated the proxy server details or added URLs or IP addresses to bypass proxy.
+
+ > [!Note]
+ > If you get an error with the aka.ms/* link during the connectivity check and you don't want the appliance to access this URL over the internet, disable the auto-update service on the appliance. Follow the steps in [Turn off auto-update](./migrate-appliance.md#turn-off-auto-update). After you've disabled auto-update, the aka.ms/* URL connectivity check will be skipped.
+ - **Time sync**: The time on the appliance should be in sync with internet time for discovery to work properly.
+ - **Install updates**: The appliance ensures that the latest updates are installed. After the check completes, select **View appliance services** to see the status and versions of the services running on the appliance server.
+ > [!Note]
+ > If you disabled auto-update on the appliance, you can update the appliance services manually to get the latest versions of the services. Follow the steps in [Manually update an older version](./migrate-appliance.md#manually-update-an-older-version).
+ - **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions.
+#### Register the appliance and start continuous discovery
+
+After the prerequisites check has completed, follow the steps to register the appliance and start continuous discovery for the respective scenarios:
+- [VMware VMs](./tutorial-discover-vmware.md#register-the-appliance-with-azure-migrate)
+- [Hyper-V VMs](./tutorial-discover-hyper-v.md#register-the-appliance-with-azure-migrate)
+- [Physical servers](./tutorial-discover-physical.md#register-the-appliance-with-azure-migrate)
+- [AWS VMs](./tutorial-discover-aws.md#register-the-appliance-with-azure-migrate)
+- [GCP VMs](./tutorial-discover-gcp.md#register-the-appliance-with-azure-migrate)
+
+>[!Note]
+> If you get DNS resolution issues during appliance registration or at the time of starting discovery, ensure that Azure Migrate resources created during the **Generate key** step in the portal are reachable from the on-premises server that hosts the Azure Migrate appliance. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
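
For a quick check from the appliance server, you can resolve one of those resource FQDNs and confirm that it returns a private IP address. A small sketch; the storage account name is a placeholder, so substitute one of the resources created during the **Generate key** step.

```powershell
# Run this on the on-premises server that hosts the Azure Migrate appliance.
$fqdn = "migratestorage1234.blob.core.windows.net"   # placeholder resource FQDN

# The returned A record should hold a private IP address from your virtual network's range
# (for example, 10.x.x.x). A public IP usually means the privatelink DNS zone isn't
# resolvable from the on-premises network.
Resolve-DnsName -Name $fqdn -Type A
```
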
+### Assess your servers for migration to Azure
+After the discovery is complete, assess your servers, such as [VMware VMs](./tutorial-assess-vmware-azure-vm.md), [Hyper-V VMs](./tutorial-assess-hyper-v.md), [physical servers](./tutorial-assess-physical.md), [AWS VMs](./tutorial-assess-aws.md), and [GCP VMs](./tutorial-assess-gcp.md), for migration to Azure VMs or Azure VMware Solution by using the Azure Migrate: Discovery and assessment tool.
+
+You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and assessment tool by using an imported CSV file.
+
+## Migrate servers to Azure by using Private Link
+
+The following sections describe the steps required to use Azure Migrate with [private endpoints](../private-link/private-endpoint-overview.md) for migrations by using ExpressRoute private peering or VPN connections.
+
+This article shows a proof-of-concept deployment path for agent-based replications to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider by using Azure private endpoints. You can use a similar approach for performing [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md) by using Private Link.
+
+>[!Note]
+>[Agentless VMware migrations](./tutorial-migrate-vmware.md) require internet access or connectivity via ExpressRoute Microsoft peering.
+### Set up a replication appliance for migration
+
+The following diagram illustrates the agent-based replication workflow with private endpoints by using the Azure Migrate: Server Migration tool.
+
+![Diagram that shows replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
+
+The tool uses a replication appliance to replicate your servers to Azure. Learn more about how to [prepare and set up a machine for the replication appliance](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance).
+
+After you set up the replication appliance, follow these steps to create the required resources for migration.
+
+1. In **Discover machines** > **Are your machines virtualized?**, select **Not virtualized/Other**.
+1. In **Target region**, select and confirm the Azure region to which you want to migrate the machines.
+1. Select **Create resources** to create the required Azure resources. Don't close the page during the creation of resources.
+ - This step creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
+ - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
+ - The five domain names are formatted in this pattern: <br/> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
+ - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS is then linked to the private endpoint virtual network.
+
+1. Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication appliance. Additional DNS configuration may be required for the on-premises replication appliance to resolve the private link FQDNs to their private IP addresses (a temporary hosts-file workaround is sketched after these steps). Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
+
+1. After you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Learn more about how to [set up the replication appliance](./tutorial-migrate-physical-virtual-machines.md#set-up-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
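
If your on-premises DNS can't be updated right away, one temporary workaround (referenced in the steps above) is to map the vault's private link FQDNs to their private IP addresses in the hosts file of the machine that hosts the replication appliance. This is only a stopgap sketch; the FQDN and IP address below are placeholders taken from the private endpoint's DNS configuration in the portal.

```powershell
# Run in an elevated PowerShell session on the replication appliance machine.
# Replace the placeholder FQDN and private IP with the values shown on the
# Recovery Services vault private endpoint's DNS configuration page.
$entries = @(
    "10.0.0.5  abcd1234-asr-pod01-id-eus.privatelink.siterecovery.windowsazure.com"
)

Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value $entries
```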
+
+### Replicate servers to Azure by using Private Link
+
+Follow [these steps](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) to select servers for replication.
+
+In **Replicate** > **Target settings** > **Cache/Replication storage account**, use the dropdown list to select a storage account to replicate over a private link.
+
+If your Azure Migrate project has private endpoint connectivity, you must [grant permissions to the Recovery Services vault managed identity](#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate.
+
+To enable replications over a private link, [create a private endpoint for the storage account](#create-a-private-endpoint-for-the-storage-account-optional).
+
+#### Grant access permissions to the Recovery Services vault
+
+You must grant the permissions to the Recovery Services vault for authenticated access to the cache/replication storage account.
+
+To identify the Recovery Services vault created by Azure Migrate and grant the required permissions, follow these steps.
+
+**Identify the Recovery Services vault and the managed identity object ID**
+
+You can find the details of the Recovery Services vault on the Azure Migrate: Server Migration **Properties** page.
+
+1. Go to the **Azure Migrate** hub, and on the **Azure Migrate: Server Migration** tile, select **Overview**.
+
+ ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
+
+1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
+
+ ![Screenshot that shows the Azure Migrate: Server Migration Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
+
+**Permissions to access the storage account**
+
+You must grant the managed identity of the vault the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance.
+
+>[!Note]
+> When you migrate Hyper-V VMs to Azure by using Private Link, you must grant access to both the replication storage account and the cache storage account.
+
+The role permissions for the Azure Resource Manager vary depending on the type of storage account. If you'd rather script the role assignments than use the portal steps that follow, a PowerShell sketch appears after those steps.
+
+|**Storage account type** | **Role permissions**|
+| | |
+|Standard type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|
+|Premium type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
+
+1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**.
+
+1. Select **+ Add**, and select **Add role assignment**.
+
+ ![Screenshot that shows Add role assignment.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png)
+
+1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously, and select **Save**.
+
+ ![Screenshot that shows the Add role assignment page.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png)
+
+1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
+
+ ![Screenshot that shows the Allow trusted Microsoft services to access this storage account option.](./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png)
+
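If you'd rather script these role assignments than use the portal, the following Azure PowerShell sketch covers a Standard storage account. The identity object ID, resource group, and storage account names are placeholders; the managed identity ID is the value you noted on the Server Migration **Properties** page.

```powershell
# Requires the Az.Resources and Az.Storage modules and a signed-in session.
$identityObjectId = "00000000-0000-0000-0000-000000000000"   # managed identity ID of the vault (placeholder)
$storage = Get-AzStorageAccount -ResourceGroupName "MigrateProject-rg" -Name "replicationcache1234"

# Standard storage account: Contributor + Storage Blob Data Contributor.
# (A Premium storage account needs Storage Blob Data Owner instead, per the table above.)
foreach ($role in @("Contributor", "Storage Blob Data Contributor")) {
    New-AzRoleAssignment -ObjectId $identityObjectId -RoleDefinitionName $role -Scope $storage.Id
}
```
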
+### Create a private endpoint for the storage account (optional)
+
+To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: _blob_).
+
+>[!Note]
+> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+
+Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
+
+Select **Yes**, and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
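
The private endpoint can also be created with Azure PowerShell instead of the portal. A hedged sketch with placeholder names, assuming the storage account and the virtual network used by the Azure Migrate project already exist; unlike the portal flow, you would still need to handle the private DNS zone integration separately.

```powershell
# Requires the Az.Network and Az.Storage modules and a signed-in session.
$storage = Get-AzStorageAccount -ResourceGroupName "MigrateProject-rg" -Name "replicationcache1234"
$vnet    = Get-AzVirtualNetwork -ResourceGroupName "MigrateProject-rg" -Name "migrate-vnet"
$subnet  = $vnet.Subnets | Where-Object Name -eq "endpoints-subnet"

# Private link connection to the storage account's blob subresource.
$connection = New-AzPrivateLinkServiceConnection -Name "storage-blob-connection" `
    -PrivateLinkServiceId $storage.Id -GroupId "blob"

New-AzPrivateEndpoint -ResourceGroupName "MigrateProject-rg" -Name "replicationcache-pe" `
    -Location $storage.Location -Subnet $subnet -PrivateLinkServiceConnection $connection
```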
+
+If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto-approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
+
+Review the status of the private endpoint connection state before you continue.
+
+![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
+
+After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link.
+
+Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
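
A quick way to spot-check this from the replication appliance combines the DNS and TCP checks; the storage account name below is a placeholder.

```powershell
# Run on the on-premises replication appliance.
$blobHost = "replicationcache1234.blob.core.windows.net"   # placeholder cache/replication storage account

# The name should resolve to a private IP address, and TCP 443 should be reachable.
Resolve-DnsName -Name $blobHost -Type A
Test-NetConnection -ComputerName $blobHost -Port 443
```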
+
+>[!Note]
+> For Hyper-V VM migrations to Azure, if the replication storage account is of _Premium_ type, you must select another storage account of _Standard_ type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
+
+Next, follow the instructions to [review and start replication](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) and [perform migrations](./tutorial-migrate-physical-virtual-machines.md#run-a-test-migration).
-This three-part article series illustrates how to:
+## Next steps
-- [Discover and assess servers for migration using Private Link](discover-and-assess-using-private-endpoints.md)
-- [Migrate servers to Azure using Private Link](migrate-servers-to-azure-using-private-link.md)
-- [Troubleshoot common issues with private endpoint connectivity](troubleshoot-network-connectivity.md)
+- Complete the [migration process](./tutorial-migrate-physical-virtual-machines.md#complete-the-migration).
+- Review the [post-migration best practices](./tutorial-migrate-physical-virtual-machines.md#post-migration-best-practices).
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
management.azure.com | Used for resource deployments and management operations
*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring.
aka.ms/* (optional) | Allow access to these links; used to download and install the latest updates for appliance services.
download.microsoft.com/download | Allow downloads from Microsoft download center.
-*.blob.core.windows.net (optional) | This is optional and is not required if the storage account has a private endpoint attached.
-
-### Government cloud URLs for private link connectivity
-
-**URL** | **Details**
- | |
-*.portal.azure.us | Navigate to the Azure portal.
-graph.windows.net | Sign in to your Azure subscription.
-login.microsoftonline.us | Used for access control and identity management by Azure Active Directory.
-management.usgovcloudapi.net | Used for resource deployments and management operations.
-*.services.visualstudio.com (optional)| Upload appliance logs used for internal monitoring.
-aka.ms/* (optional)| Allow access to these links; used to download and install the latest updates for appliance services.
-download.microsoft.com/download | Allow downloads from Microsoft download center.
-*.blob.core.usgovcloudapi.net (optional)| This is optional and is not required if the storage account has a private endpoint attached.
-*.applicationinsights.us (optional)| Upload appliance logs used for internal monitoring.
-
+*.servicebus.windows.net | **Used for VMware agentless migration**<br><br> Communication between the appliance and the Azure Migrate service.
+*.hypervrecoverymanager.windowsazure.com | **Used for VMware agentless migration**<br><br> Connect to Azure Migrate service URLs.
+*.blob.core.windows.net | **Used for VMware agentless migration**<br><br>Upload data to storage for migration. <br>This is optional and is not required if the storage accounts (both cache storage account and gateway storage account) have a private endpoint attached.
### Azure China 21Vianet (Azure China) URLs
migrate Migrate Servers To Azure Using Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md
- Title: Migrate servers to Azure by using Private Link
-description: Use Azure Migrate with private endpoints for migrations by using ExpressRoute private peering or VPN connections.
--
-ms.
-zone_pivot_groups: migrate-agentlessvmware-hyperv-agentbased
- Previously updated : 12/29/2021--
-# Migrate servers to Azure using Private Link
-
-This article describes how to use Azure Migrate to migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
---
-This article shows how to migrate on-premises VMware VMs to Azure, using the [Azure Migrate: Server Migration tool](migrate-services-overview.md#azure-migrate-server-migration-tool), with agentless migration.
-
-## Set up the Azure Migrate appliance
-
-Azure Migrate: Server Migration runs a lightweight VMware VM appliance to enable the discovery, assessment, and agentless migration of VMware VMs. If you have followed the [Discovery and assessment tutorial](discover-and-assess-using-private-endpoints.md), you've already set the appliance up. If you didn't, [set up and configure the appliance](./discover-and-assess-using-private-endpoints.md#set-up-the-azure-migrate-appliance) before you proceed.
-
-## Replicate VMs
-
-After setting up the appliance and completing discovery, you can begin replicating VMware VMs to Azure.
-
-The following diagram illustrates the agentless replication workflow with private endpoints by using the Azure Migrate: Server Migration tool.
-
-![Diagram that shows agentless replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/agentless-replication-architecture.png)
-
-Enable replication as follows:
-1. In the Azure Migrate project > **Servers** > **Migration tools** > Azure Migrate: Server Migration, click **Replicate**.
-
- ![Diagram that shows how to replicate servers.](./media/how-to-use-azure-migrate-with-private-endpoints/replicate-servers.png)
-
-1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Yes, with VMware vSphere**.
-1. In **On-premises appliance**, select the name of the Azure Migrate appliance. Select **OK**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/source-settings-vmware.png" alt-text="Diagram that shows how to complete source settings.":::
-
-1. In **Virtual machines**, select the machines you want to replicate. To apply VM sizing and disk type from an assessment, in **Import migration settings from an Azure Migrate assessment?**,
- - Select **Yes**, and select the VM group and assessment name.
- - Select **No** if you aren't using assessment settings.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/migrate-machines-vmware.png" alt-text="Diagram that shows how to select the VMs.":::
-
-1. In **Virtual machines**, select VMs you want to migrate. Then click **Next**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm-vmware.png" alt-text="Screenshot of selected VMs to be replicated.":::
-
-1. In **Target settings**, select the **target region** in which the Azure VMs will reside after migration.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings.png" alt-text="Screenshot of the Target settings screen.":::
-
-1. In **Replication storage account**, use the dropdown list to select a storage account to replicate over a private link.
- >[!NOTE]
- > Only the storage accounts in the selected target region and Azure Migrate project subscription are listed.
-
-1. Next, [**create a private endpoint for the storage account**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account) to enable replications over a private link. Ensure that the Azure Migrate appliance has network connectivity to the storage account on its private endpoint. Learn how to [verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
- >[!NOTE]
- > - The storage account cannot be changed after you enable replication.
- > - To orchestrate replications, Azure Migrate will grant the trusted Microsoft services and the Recovery Services vault managed identity access to the selected storage account.
-
- >[!Tip]
- > You can manually update the DNS records by editing the DNS hosts file on the Azure Migrate appliance with the private link FQDNs and private IP address of the storage account.
-
-1. Select the **Subscription** and **Resource group** in which the Azure VMs reside after migration.
-1. In **Virtual network**, select the Azure VNet/subnet for the migrated Azure VMs.
-1. In **Availability options**, select:
-
-    - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
-
- - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
-
- - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
-1. In **Disk encryption type**, select:
-
- - Encryption-at-rest with platform-managed key
-
- - Encryption-at-rest with customer-managed key
-
- - Double encryption with platform-managed and customer-managed keys
-
- >[!Note]
- > To replicate VMs with CMK, you'll need to [create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE.
-1. In **Azure Hybrid Benefit**:
-
- - Select **No** if you don't want to apply Azure Hybrid Benefit and click **Next**.
-
- - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating and click **Next**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/azure-hybrid-benefit.png" alt-text="Screenshot shows the options in Azure Hybrid Benefit.":::
-
-1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements).
-
- - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
-
- - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
-
- - **Availability Zone**: Specify the Availability Zone to use.
-
- - **Availability Set**: Specify the Availability Set to use.
- >[!Note]
- > If you want to select a different availability option for a set of virtual machines, go to step 1 and repeat the steps by selecting different availability options after starting replication for one set of virtual machines.
-1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium-managed disks) in Azure. Then click **Next**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks-agentless-vmware.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
-
-1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
-
-1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
- Next, follow the instructions to [perform migrations](tutorial-migrate-vmware.md#run-a-test-migration).
-
-#### Provisioning for the first time
-
-Azure Migrate does not create any additional resources for replications using Azure Private Link (Service Bus, Key Vault, and storage accounts are not created). Azure Migrate will make use of the selected storage account for uploading replication data, state data, and orchestration messages.
-
-## Create a private endpoint for the storage account
-
-To replicate by using ExpressRoute with private peering, [**create a private endpoint**](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage account (target subresource: *blob*).
-
->[!Note]
-> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-
-Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
-
-Select **Yes** and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
-
-If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
-
-Review the status of the private endpoint connection state before you continue.
--
-Ensure that the on-premises appliance has network connectivity to the storage account via its private endpoint. To validate the private link connection, perform a DNS resolution of the storage account endpoint (private link resource FQDN) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address. Learn how to verify [network connectivity.](./troubleshoot-network-connectivity.md#verify-dns-resolution)
-
-## Next steps
-------
-This article shows you how to [migrate on-premises Hyper-V VMs to Azure](tutorial-migrate-hyper-v.md), using the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool, with agentless migration. You can also migrate using agent-based migration.
-
-## Set up the replication provider for migration
-
-The following diagram illustrates the agentless migration workflow with private endpoints by using the Azure Migrate: Server Migration tool.
-
- ![Diagram that shows replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
-
-For migrating Hyper-V VMs, Azure Migrate: Server Migration installs software providers (Microsoft Azure Site Recovery provider and Microsoft Azure Recovery Service agent) on Hyper-V Hosts or cluster nodes.
-1. In the Azure Migrate project > **Servers**, in **Azure Migrate: Server Migration**, click **Discover**.
-1. In **Discover machines** > **Are your machines virtualized?**, select **Yes, with Hyper-V**.
-1. In **Target region**, select the Azure region to which you want to migrate the machines.
-1. Select **Confirm that the target region for migration is region-name**.
-1. Select **Create resources**. This creates an Azure Site Recovery vault in the background. Don't close the page during the creation of resources. If you have already set up migration with Azure Migrate: Server Migration, this option won't appear since resources were set up previously.
- - This step creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
- - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
- - The five domain names are formatted in this pattern: <br> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
- - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS is then linked to the private endpoint virtual network.
-1. In **Prepare Hyper-V host servers**, download the Hyper-V Replication provider, and the registration key file.
-
- - The registration key is needed to register the Hyper-V host with Azure Migrate Server Migration.
-
- - The key is valid for five days after you generate it.
-
- ![Screenshot of discover machines screen.](./media/how-to-use-azure-migrate-with-private-endpoints/discover-machines-hyperv.png)
-1. Copy the provider setup file and registration key file to each Hyper-V host (or cluster node) running VMs you want to replicate.
-> [!Note]
->Before you register the replication provider, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication provider. Additional DNS configuration may be required for the on-premises replication appliance to resolve the private link FQDNs to their private IP addresses. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution)
-
-Next, follow these instructions to [install and register the replication provider](tutorial-migrate-hyper-v.md#install-and-register-the-provider).
-
-## Replicate Hyper-V VMs
-
-With discovery completed, you can begin replication of Hyper-V VMs to Azure.
-
-> [!Note]
-> You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10.
-
-1. In the Azure Migrate project > **Servers** > **Migration tools** > Azure Migrate: Server Migration, click **Replicate**.
-1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Yes, with Hyper-V**. Then click **Next: Virtual machines**.
-1. In **Virtual machines**, select the machines you want to replicate.
- - If you've run an assessment for the VMs, you can apply VM sizing and disk type (premium/standard) recommendations from the assessment results. To do this, in **Import migration settings from an Azure Migrate assessment?**, select the **Yes** option.
- - If you didn't run an assessment, or you don't want to use the assessment settings, select the **No** option.
- - If you selected to use the assessment, select the VM group, and assessment name.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/migrate-machines-vmware.png" alt-text="Screenshot of migrate machines screen.":::
-
-1. In **Virtual machines**, search for VMs as needed, and select each VM you want to migrate. Then click **Next:Target settings**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm.png" alt-text="Screenshot of selected VMs.":::
-
-1. In **Target settings**, select the target region to which you'll migrate, the subscription, and the resource group in which the Azure VMs will reside after migration.
-
- :::image type="content" source="./media/tutorial-migrate-hyper-v/target-settings.png" alt-text="Screenshot of target settings.":::
-
-1. In **Replication storage account**, select the Azure storage account in which replicated data will be stored in Azure.
-
-1. Next, [**create a private endpoint for the storage account**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1) and [**grant permissions to the Recovery Services vault managed identity**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate. This is mandatory before you proceed.
-
- - For Hyper-V VM migrations to Azure, if the replication storage account is of *Premium* type, you must select another storage account of *Standard* type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
-
- - Ensure that the server hosting the replication provider has network connectivity to the storage accounts via the private endpoints before you proceed. Learn how to [verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
- >[!Tip]
- > You can manually update the DNS records by editing the DNS hosts file on the Azure Migrate appliance with the private link FQDNs and private IP addresses of the storage account.
-
-1. In **Virtual network**, select the Azure VNet/subnet for the migrated Azure VMs.
-
-1. In **Availability options**, select:
-
-    - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
-
- - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
-
- - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
-
-1. In **Azure Hybrid Benefit**:
-
- - Select **No** if you don't want to apply Azure Hybrid Benefit. Then, click **Next**.
-
- - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/azure-hybrid-benefit.png" alt-text="Screenshot of Azure Hybrid benefit selection.":::
-
-1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-hyper-v-migration.md#azure-vm-requirements).
-
- - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
-
- - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
-
- - **Availability Set**: If the VM should be in an Azure availability set after migration, specify the set. The set must be in the target resource group you specify for the migration.
-
-1. In **Disks**, specify the VM disks that need to be replicated to Azure. Then click **Next**.
- - You can exclude disks from replication.
- - If you exclude disks, they won't be present on the Azure VM after migration.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
-
-1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
-
-1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
-
- > [!Note]
- > You can update replication settings any time before replication starts, **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
-
- Next, follow the instructions to [perform migrations](tutorial-migrate-hyper-v.md#migrate-vms).
-]
-### Grant access permissions to the Recovery Services vault
-
-You must grant the permissions to the Recovery Services vault for authenticated access to the cache/replication storage account.
-
-To identify the Recovery Services vault created by Azure Migrate and grant the required permissions, follow these steps.
-
-**Identify the Recovery Services vault and the managed identity object ID**
-
-You can find the details of the Recovery Services vault on the Azure Migrate: Server Migration **Properties** page.
-
-1. Go to the **Azure Migrate** hub, and on the **Azure Migrate: Server Migration** tile, select **Overview**.
-
- ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
-
-1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
-
- ![Screenshot that shows the Azure Migrate: Server Migration Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
-
-**Permissions to access the storage account**
-
- To the managed identity of the vault, you must grant the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance.
-
-The role permissions for the Azure Resource Manager vary depending on the type of storage account.
-
-|**Storage account type** | **Role permissions**|
-| | |
-|Standard type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|
-|Premium type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
-
-1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**.
-
-1. Select **+ Add**, and select **Add role assignment**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png" alt-text="Screenshot that shows Add role assignment.":::
-
-1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously and select **Save**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png" alt-text="Screenshot that shows the Add role assignment page.":::
-
-1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png" alt-text="Screenshot that shows the Allow trusted Microsoft services to access this storage account option.":::
-
-## Create a private endpoint for the storage account
-
-To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: _blob_).
-
->[!Note]
-> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-
-Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
-
-Select **Yes** and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
-
-If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
-
-Review the status of the private endpoint connection state before you continue.
-
-![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
-
-After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link.
-
-Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
-
-Ensure that the replication provider has network connectivity to the storage account via its private endpoint. To validate the private link connection, perform a DNS resolution of the storage account endpoint (private link resource FQDN) from the on-premises server hosting the replication provider and ensure that it resolves to a private IP address. Learn how to verify [network connectivity.](./troubleshoot-network-connectivity.md#verify-dns-resolution)
--
->[!Note]
-> For Hyper-V VM migrations to Azure, if the replication storage account is of _Premium_ type, you must select another storage account of _Standard_ type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
--
-## Next steps
------
-This article shows a proof-of-concept deployment path for agent-based replications to migrate your [VMware VMs](tutorial-migrate-vmware-agent.md), [Hyper-V VMs](tutorial-migrate-physical-virtual-machines.md), [physical servers](tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider by using Azure private endpoints.
-
-## Set up a replication appliance for migration
-
-The following diagram illustrates the agent-based replication workflow with private endpoints by using the Azure Migrate: Server Migration tool.
-
-![Diagram that shows replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
-
-The tool uses a replication appliance to replicate your servers to Azure. Follow these steps to create the required resources for migration.
-
-1. In **Discover machines** > **Are your machines virtualized?**, select **Not virtualized/Other**.
-1. In **Target region**, select and confirm the Azure region to which you want to migrate the machines.
-1. Select **Create resources** to create the required Azure resources. Don't close the page during the creation of resources.
- - This step creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
- - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
- - The five domain names are formatted in this pattern: <br> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
- - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS is then linked to the private endpoint virtual network.
-
->[!Note]
-> Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication appliance. Additional DNS configuration may be required for the on-premises replication appliance to resolve the private link FQDNs to their private IP addresses. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
-
-After you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Learn more about how to [set up the replication appliance](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
-
-## Replicate servers
-
-Now, select machines for replication and migration.
-
->[!Note]
-> You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10.
-
-1. In the Azure Migrate project > **Servers** > **Migration tools** > Azure Migrate: Server Migration, click **Replicate**.
-
- ![Diagram that shows how to replicate servers.](./media/how-to-use-azure-migrate-with-private-endpoints/replicate-servers.png)
-
-1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Not virtualized/Other**.
-1. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up.
-1. In **Process Server**, select the name of the replication appliance.
-1. In **Guest credentials**, select the dummy account created previously during the [replication installer setup](tutorial-migrate-physical-virtual-machines.md#download-the-replication-appliance-installer) to install the Mobility service manually (push install is not supported). Then click **Next: Virtual machines**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/source-settings-vmware.png" alt-text="Diagram that shows how to complete source settings.":::
-
-1. In **Virtual machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**.
-1. Select each VM you want to migrate. Then click **Next: Target settings**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm.png" alt-text="Screenshot of selected VMs to be replicated.":::
-
-1. In **Target settings**, select the subscription, the target region to which you'll migrate, and the resource group in which the Azure VMs will reside after migration.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings-agent-inline.png" alt-text="Screenshot displays the options in Overview." lightbox="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings-agent-expanded.png":::
-
-1. In **Virtual network**, select the Azure VNet/subnet for the migrated Azure VMs.
-1. In **Cache storage account**, use the dropdown list to select a storage account to replicate over a private link.
-
-1. Next, [**create a private endpoint for the storage account**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1) and [**grant permissions to the Recovery Services vault managed identity**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate. This is mandatory before you proceed.
-
- - Ensure that the server hosting the replication appliance has network connectivity to the storage accounts via the private endpoints before you proceed. Learn how to [verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
-
- >[!Tip]
- > You can manually update the DNS records by editing the DNS hosts file on the Azure Migrate appliance with the private link FQDNs and private IP addresses of the storage account.
-
-1. In **Availability options**, select:
-
- - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
-
- - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
-
- - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
-1. In **Disk encryption type**, select:
-
- - Encryption-at-rest with platform-managed key
- - Encryption-at-rest with customer-managed key
- - Double encryption with platform-managed and customer-managed keys
- > [!Note]
- > To replicate VMs with CMK, you'll need to [create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE.
-1. In **Azure Hybrid Benefit**:
- - Select **No** if you don't want to apply Azure Hybrid Benefit. Then, click **Next**.
- - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.
-1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-physical-migration.md#azure-vm-requirements).
- - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
-
- - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
-
- - **Availability Zone**: Specify the Availability Zone to use.
-
- - **Availability Set**: Specify the Availability Set to use.
-
-1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**.
- - You can exclude disks from replication.
- - If you exclude disks, they won't be present on the Azure VM after migration.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
--
-1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
-
-1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
-
- > [!Note]
    > You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
-
- Next, follow the instructions to [perform migrations](tutorial-migrate-physical-virtual-machines.md#run-a-test-migration).
-
-### Grant access permissions to the Recovery Services vault
-
-You must grant permissions to the Recovery Services vault for authenticated access to the cache/replication storage account.
-
-To identify the Recovery Services vault created by Azure Migrate and grant the required permissions, follow these steps.
-
-**Identify the Recovery Services vault and the managed identity object ID**
-
-You can find the details of the Recovery Services vault on the **Azure Migrate: Server Migration Properties** page.
-
-1. Go to the **Azure Migrate** hub, and on the **Azure Migrate: Server Migration** tile, select **Overview**.
-
- ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
-
-1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
-
- ![Screenshot that shows the Azure Migrate: Server Migration Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
-
-**Permissions to access the storage account**
-
- You must grant the managed identity of the vault the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance. (A CLI sketch follows the portal steps below.)
-
-The role permissions for the Azure Resource Manager vary depending on the type of storage account.
-
-|**Storage account type** | **Role permissions**|
-| | |
-|Standard type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|
-|Premium type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner)|
-
-1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**.
-1. Select **+ Add**, and select **Add role assignment**.
-
- ![Screenshot that shows Add role assignment.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png)
-
-1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously and select **Save**.
-
- ![Screenshot that shows the Add role assignment page.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png)
-
-1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
-
- ![Screenshot that shows the Allow trusted Microsoft services to access this storage account option.](./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png)
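If you prefer to script the role assignments above instead of using the portal, a minimal Azure CLI sketch could look like the following. It assumes a Standard-type storage account; the object ID, subscription, resource group, and storage account name are placeholders you must replace with the values noted earlier.

```bash
# Illustrative only: grant the vault's managed identity the roles listed in the table above.
SCOPE="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"

az role assignment create --assignee "<vault-managed-identity-object-id>" --role "Contributor" --scope "$SCOPE"
az role assignment create --assignee "<vault-managed-identity-object-id>" --role "Storage Blob Data Contributor" --scope "$SCOPE"
```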
-
-## Create a private endpoint for the storage account
-
-To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: _blob_).
-
->[!Note]
-> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-
-Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
-
-Select **Yes**, and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
-
-If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto-approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
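As an alternative to the portal, the pending connection can also be approved with the Azure CLI. This is a hedged sketch; the connection resource ID is a placeholder you can copy from the **Private endpoint connections** page.

```bash
# Illustrative only: approve a pending private endpoint connection by its full resource ID.
az network private-endpoint-connection approve \
  --id "<private-endpoint-connection-resource-id>" \
  --description "Approved for Azure Migrate replication"
```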
-
-Review the status of the private endpoint connection state before you continue.
-
-![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
-
-After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link.
-
-Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. To validate the private link connection, perform a DNS resolution of the storage account endpoint (private link resource FQDN) from the on-premises server hosting the replication appliance and ensure that it resolves to a private IP address. Learn how to verify [network connectivity.](./troubleshoot-network-connectivity.md#verify-dns-resolution)
-
-## Next steps
-- [Migrate VMs](tutorial-migrate-physical-virtual-machines.md#migrate-vms)
-- Complete the [migration process](tutorial-migrate-physical-virtual-machines.md#complete-the-migration).
-- Review the [post-migration best practices](tutorial-migrate-physical-virtual-machines.md#post-migration-best-practices).
migrate Server Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/server-migrate-overview.md
Use these selected comparisons to help you decide which method to use. You can a
**Disk limits** | OS disk: 2 TB<br/><br/> Data disk: 32 TB<br/><br/> Maximum disks: 60 | OS disk: 2 TB<br/><br/> Data disk: 32 TB<br/><br/> Maximum disks: 63
**Passthrough disks** | Not supported | Supported
**UEFI boot** | Supported. | Supported.
-**Connectivity** | Public internet <br/> ExpressRoute with Private peering <br/> ExpressRoute with Microsoft peering <br/> Site-to-site VPN |Public internet <br/> ExpressRoute with Private peering <br/> ExpressRoute with Microsoft peering <br/> Site-to-site VPN
+**Connectivity** | Public internet <br/> ExpressRoute with Microsoft peering <br/> <br/> [Learn how](./replicate-using-expressroute.md) to use private endpoints for replication over an ExpressRoute private peering or an S2S VPN connection. |Public internet <br/> ExpressRoute with Private peering <br/> ExpressRoute with Microsoft peering <br/> Site-to-site VPN
## Compare deployment steps
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md
Make sure the private endpoint is in an approved state.
b. If the connection is in a Pending state, you need to get it approved.

c. You may also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network.
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png" alt-text="Screenshot of View Private Endpoint connection.":::
-
+ ![View Private Endpoint connection](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png)
## Validate the data flow through the private endpoints

Review the data flow metrics to verify the traffic flow through private endpoints. Select the private endpoint in the Azure Migrate: Server Assessment and Server Migration Properties page. This will redirect to the private endpoint overview section in Azure Private Link Center. In the left menu, select **Metrics** to view the _Data Bytes In_ and _Data Bytes Out_ information to view the traffic flow.
Review the data flow metrics to verify the traffic flow through private endpoint
The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You may require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [See this article](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
-To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address.
-
-**To obtain the private endpoint details to verify DNS resolution:**
-
-1. The private endpoint details and private link resource FQDNs' information is available in the Discovery and Assessment and Server Migration properties pages. Select **Download DNS settings** to view the list. Note, only the private endpoints that were automatically created by Azure Migrate are listed below.
-
- ![Azure Migrate: Discovery and Assessment Properties](./media/how-to-use-azure-migrate-with-private-endpoints/server-assessment-properties.png)
-
- [![Azure Migrate: Server Migration Properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-expanded.png#lightbox)
+To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address.
+The private endpoint details and private link resource FQDNs' information is available in the Discovery and Assessment and Server Migration properties pages. Select **Download DNS settings** to view the list.
-2. If you have created a private endpoint for the storage account(s) for replicating over a private network, you can obtain the private link FQDN and IP address as illustrated below.
+ ![Azure Migrate: Discovery and Assessment Properties](./media/how-to-use-azure-migrate-with-private-endpoints/server-assessment-properties.png)
- - Go to the **Storage account** > **Networking** > **Private endpoint connections** and select the private endpoint created.
-
- :::image type="content" source="./media/troubleshoot-network-connectivity/private-endpoint.png" alt-text="Screenshot of the Private Endpoint connections.":::
-
- - Go to **Settings** > **DNS configuration** to obtain the storage account FQDN and private IP address.
-
- :::image type="content" source="./media/troubleshoot-network-connectivity/private-link-info.png" alt-text="Screenshot showing the Private Link FQDN information.":::
+ [![Azure Migrate: Server Migration Properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-expanded.png#lightbox)
An illustrative example for DNS resolution of the storage account private link FQDN.

-- Enter `nslookup <storage-account-name>.blob.core.windows.net`. Replace `<storage-account-name>` with the name of the storage account used for Azure Migrate.
+- Enter `nslookup <storage-account-name>.blob.core.windows.net`. Replace `<storage-account-name>` with the name of the storage account used for Azure Migrate.
You'll receive a message like this:
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/dns-resolution-example.png" alt-text="Screenshot showing a DNS resolution example.":::
+ ![DNS resolution example](./media/how-to-use-azure-migrate-with-private-endpoints/dns-resolution-example.png)
- A private IP address of 10.1.0.5 is returned for the storage account. This address belongs to the private endpoint virtual network subnet.
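For reference, a successful private resolution has roughly the following shape; the storage account name and the addresses shown are illustrative placeholders, not values from your environment.

```bash
# Illustrative only: run from the on-premises server hosting the appliance.
nslookup mystorageaccount.blob.core.windows.net

# Expected shape of the output when the private endpoint resolves correctly:
#   mystorageaccount.blob.core.windows.net  canonical name = mystorageaccount.privatelink.blob.core.windows.net
#   Name:    mystorageaccount.privatelink.blob.core.windows.net
#   Address: 10.1.0.5
```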
You can verify the DNS resolution for other Azure Migrate artifacts using a simi
If the DNS resolution is incorrect, follow these steps:
-**Recommended**: Manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses.
-- If you use a custom DNS, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration).
-- If you use Azure-provided DNS servers, refer to the below section for further troubleshooting.
+**Recommended** for testing: You can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses.
> [!Tip]
-> For testing, you can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses. <br/>
-
+> You can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses. This option is recommended only for testing. <br/>
## Validate the Private DNS Zone

If the DNS resolution is not working as described in the previous section, there might be an issue with your Private DNS Zone.
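One quick way to inspect the zone is the Azure CLI; this is a hedged sketch in which the resource group is a placeholder and the zone name depends on the service, for example `privatelink.blob.core.windows.net` for a storage account.

```bash
# Illustrative only: list the A records and virtual network links in the private DNS zone.
az network private-dns record-set a list \
  --resource-group <resource-group> \
  --zone-name privatelink.blob.core.windows.net \
  --output table

az network private-dns link vnet list \
  --resource-group <resource-group> \
  --zone-name privatelink.blob.core.windows.net \
  --output table
```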
If the DNS resolution is incorrect, follow these steps:
1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you may need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected.
- - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./discover-and-assess-using-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
+ - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./how-to-use-azure-migrate-with-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
 - Alternatively, if the proxy server is for all outbound traffic, make sure the proxy server can resolve the private link FQDNs to their respective private IP addresses. For a quick workaround, you can manually update the DNS records on the proxy server with the DNS mappings and the associated private IP addresses, as shown above. This option is recommended for testing.
1. If the issue still persists, [refer to this section](#validate-the-private-dns-zone) for further troubleshooting.
In addition to the URLs above, the appliance needs access to the following URLs
|*.windows.net <br/> *.msftauth.net <br/> *.msauth.net <br/> *.microsoft.com <br/> *.live.com <br/> *.office.com <br/> *.microsoftonline.com <br/> *.microsoftonline-p.com <br/> | Used for access control and identity management by Azure Active Directory
|management.azure.com | For triggering Azure Resource Manager deployments
|*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring.
-|aka.ms/* (optional) | Allow access to *also know as* links; used to download and install the latest updates for appliance services
+|aka.ms/* (optional) | Allow access to aka links; used to download and install the latest updates for appliance services
|download.microsoft.com/download | Allow downloads from Microsoft download center

- Open the command line and run the following nslookup command to verify privatelink connectivity to the URLs listed in the DNS settings file. Repeat this step for all URLs in the DNS settings file.
If the DNS resolution is incorrect, follow these steps:
1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you may need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected.
- - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./discover-and-assess-using-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
+ - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./how-to-use-azure-migrate-with-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
 - Alternatively, if the proxy server is for all outbound traffic, make sure the proxy server can resolve the private link FQDNs to their respective private IP addresses. For a quick workaround, you can manually update the DNS records on the proxy server with the DNS mappings and the associated private IP addresses, as shown above. This option is recommended for testing.
1. If the issue still persists, [refer to this section](#validate-the-private-dns-zone) for further troubleshooting.
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
Set up a new Azure Migrate project if you don't have one.
![Boxes for project name and region](./media/tutorial-discover-import/new-project.png)

> [!Note]
- > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](discover-and-assess-using-private-endpoints.md#create-a-project-with-private-endpoint-connectivity)
+ > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](how-to-use-azure-migrate-with-private-endpoints.md#create-a-project-with-private-endpoint-connectivity)
7. Select **Create**.
8. Wait a few minutes for the Azure Migrate project to deploy.
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
A Mobility service agent must be installed on the source AWS VMs to be migrated.
10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the drop down if you'd like to specify a different storage account to use as the cache storage account for replication. <br/>
    > [!NOTE]
    >
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault)
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#grant-access-permissions-to-the-recovery-services-vault)
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#create-a-private-endpoint-for-the-storage-account-optional)
11. In **Availability options**, select:
    - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
    - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
migrate Tutorial Migrate Gcp Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md
A Mobility service agent must be installed on the source GCP VMs to be migrated.
10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the dropdown if you'd like to specify a different storage account to use as the cache storage account for replication. <br/>
    > [!NOTE]
    >
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault)
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#grant-access-permissions-to-the-recovery-services-vault)
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#create-a-private-endpoint-for-the-storage-account-optional)
11. In **Availability options**, select:
    - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
    - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
Now, select machines for migration.
8. In **Target settings**, select the subscription, and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration. 9. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration.
-10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the drop down if you'd like to specify a different storage account to use as the cache storage account for replication. <br/>
- >[!NOTE]
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault)
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
+10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the drop down if you'd like to specify a different storage account to use as the cache storage account for replication.
+
+ > [!NOTE]
+ >
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#grant-access-permissions-to-the-recovery-services-vault)
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#create-a-private-endpoint-for-the-storage-account-optional)
11. In **Availability options**, select:
    - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
Select VMs for migration.
12. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the drop down if you'd like to specify a different storage account to use as the cache storage account for replication. <br/>
    > [!NOTE]
    >
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault )
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#grant-access-permissions-to-the-recovery-services-vault)
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#create-a-private-endpoint-for-the-storage-account-optional)
13. In **Availability options**, select:
    - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
    - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
-## Update (March 2022)
-- Perform agentless VMware VM discovery, assessments, and migrations over a private network using Azure Private Link. [Learn more.](how-to-use-azure-migrate-with-private-endpoints.md)
-
## Update (February 2022)
- General Availability: Migrate Windows and Linux Hyper-V virtual machines with large data disks (up to 32 TB in size).
- Azure Migrate is now supported in Azure China. [Learn more](/azure/china/overview-operations#azure-operations-in-china).
openshift Built In Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/built-in-container-registry.md
Now that you've set up the authentication methods to the ARO cluster, let's enab
1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console. The window will look different after having enabled OIDC.

    :::image type="content" source="media/built-in-container-registry/oidc-enabled-login-window.png" alt-text="OpenID Connect enabled sign in window.":::
- 1. Select **openid**
+ 1. Select **AAD**
> [!NOTE]
> Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this and other articles.
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
Complete the following prerequisites to successfully walk through this guide.
1. Verify you can sign in to the OpenShift CLI with the token for user `kubeadmin`.
-### Enable the built-in container registry for OpenShift
+### Configure Azure Active Directory authentication
-The steps in this tutorial create a Docker image which must be pushed to a container registry accessible to OpenShift. The simplest option is to use the built-in registry provided by OpenShift. To enable the built-in container registry, follow the steps in [Configure built-in container registry for Azure Red Hat OpenShift 4](built-in-container-registry.md). Three items from those steps are used in this article.
+Azure Active Directory (Azure AD) implements OpenID Connect (OIDC). OIDC lets you use Azure AD to sign in to the ARO cluster. Follow the steps in [Configure Azure Active Directory authentication](configure-azure-ad-cli.md) to set up your cluster.
-* The username and password of the Azure AD user for signing in to the OpenShift web console.
-* The output of `oc whoami` after following the steps for signing in to the OpenShift CLI. This value is called **aad-user** for discussion.
-* The container registry URL.
+After you complete the setup, return to this document and sign in to the cluster with an Azure AD user.
-Note these items down as you complete the steps to enable the built-in container registry.
+1. Sign in to the OpenShift web console from your browser using the credentials of an Azure AD user. We'll leverage the OpenShift OpenID authentication against Azure Active Directory to use OpenID to define the administrator.
+
+ 1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console. The window will look different after having enabled OIDC.
+
+ :::image type="content" source="media/built-in-container-registry/oidc-enabled-login-window.png" alt-text="OpenID Connect enabled sign in window.":::
+ 1. Select **AAD**
+
+ > [!NOTE]
+ > Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this article.
+1. Sign in with the OpenShift CLI by using the following steps. For discussion, this process is known as `oc login`.
+ 1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**.
+ 1. Sign in to a new tab window with the same user if necessary.
+ 1. Select **Display Token**.
+ 1. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.
+
+ ```bash
+ oc login --token=<login-token> --server=<server-url>
+ ```
+
+1. Run `oc whoami` in the console and note the output as **\<aad-user>**. We'll use this value later in the article.
+1. Sign out of the OpenShift web console. Select the button in the top right of the browser window labeled as the **\<aad-user>** and choose **Log Out**.
### Create an OpenShift namespace for the Java app
Besides image management, the **aad-user** will also be granted administrative p
# Switch to project "open-liberty-demo" oc project open-liberty-demo Now using project "open-liberty-demo" on server "https://api.x8xl3f4y.eastus.aroapp.io:6443".
- # Note: replace "<aad-user>" with the one noted by executing the steps in
- # Configure built-in container registry for Azure Red Hat OpenShift 4
+ oc adm policy add-role-to-user admin <aad-user>
 clusterrole.rbac.authorization.k8s.io/admin added: "kaaIjx75vFWovvKF7c02M0ya5qzwcSJ074RZBfXUc34"
 ```
Follow the instructions below to set up an Azure Database for MySQL for use with
3. Open **your SQL database** > **Connection strings** > Select **JDBC**. Write down the **Port number** following the SQL server address. For example, **3306** is the port number in the example below.

   ```text
- String url ="jdbc:mysql://<Database name>.mysql.database.azure.com:3306/{your_database}?useSSL=true&requireSSL=false"; myDbConn = DriverManager.getConnection(url, "<Server admin login>", {your_password});
+ String url ="jdbc:mysql://<Server name>.mysql.database.azure.com:3306/{your_database}?useSSL=true&requireSSL=false"; myDbConn = DriverManager.getConnection(url, "<Server admin login>", {your_password});
   ```
4. If you didn't create a database in the above steps, follow the steps in [Quickstart: Create an Azure Database for MySQL server by using the Azure portal#connect-to-the-server-by-using-mysqlexe](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md#connect-to-the-server-by-using-mysqlexe) to create one. Return to this document after creating the database.
The directory `2-simple` of your local clone shows the Maven project with the ab
To deploy and run your Liberty application on an ARO 4 cluster, containerize your application as a Docker image using [Open Liberty container images](https://github.com/OpenLiberty/ci.docker) or [WebSphere Liberty container images](https://github.com/WASdev/ci.docker).
-### Build application image
 Complete the following steps to build the application image:

# [with DB connection](#tab/with-mysql-image)
-After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
-
-```bash
-cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql
-
-# Fetch maven artifactId as image name, maven build version as image version
-IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
-IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
-cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql/target
-
-# If you are build with Open Liberty base image
-docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile .
-# If you are build with WebSphere Liberty base image
-docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
-```
-
-### Push the image to the container image registry
-
-When you're satisfied with the state of the application, push it to the built-in container image registry by following the instructions below.
+### Log in to the OpenShift CLI as the Azure AD user
-#### Log in to the OpenShift CLI as the Azure AD user
+Since you have already successfully run the app in the Liberty Docker container, sign in to the OpenShift CLI as the Azure AD user in order to build the image remotely on the cluster.
1. Sign in to the OpenShift web console from your browser using the credentials of an Azure AD user.
    1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console.
- 1. Select **openid**
+ 1. Select **AAD**
> [!NOTE]
> Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this and other articles.
When you're satisfied with the state of the application, push it to the built-in
1. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.

    ```bash
- oc login --token=XOdASlzeT7BHT0JZW6Fd4dl5EwHpeBlN27TAdWHseob --server=https://api.aqlm62xm.rnfghf.aroapp.io:6443
- Logged into "https://api.aqlm62xm.rnfghf.aroapp.io:6443" as "kube:admin" using the token provided.
+ oc login --token=<login-token> --server=<server-url>
+ ```
- You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'
+### Build the application and push to the image stream
- Using project "open-liberty-demo".
- ```
+Next, you're going to build the image remotely on the cluster by executing the following commands.
-#### Push the container image to the container registry for OpenShift
+1. Identify the source directory and Dockerfile.
-Execute these commands to push the image to the container registry for OpenShift.
+ ```bash
+ cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql
+
+ # Fetch maven artifactId as image name, maven build version as image version
+ IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
+ IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
+ cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql/target
-```bash
-# Note: replace "<Container_Registry_URL>" with the fully qualified name of the registry
-Container_Registry_URL=<Container_Registry_URL>
+ # If you are building with Open Liberty base image, the existing Dockerfile is ready for you
-# Create a new tag with registry info that refers to source image
-docker tag ${IMAGE_NAME}:${IMAGE_VERSION} ${Container_Registry_URL}/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_VERSION}
+ # If you are building with WebSphere Liberty base image, uncomment and execute the following two commands to rename Dockerfile-wlp to Dockerfile
+ # mv Dockerfile Dockerfile.backup
+ # mv Dockerfile-wlp Dockerfile
+ ```
-# Sign in to the built-in container image registry
-docker login -u $(oc whoami) -p $(oc whoami -t) ${Container_Registry_URL}
-```
+1. Create an image stream.
-Successful output will look similar to the following.
+ ```bash
+ oc create imagestream ${IMAGE_NAME}
+ ```
-```bash
-WARNING! Using --password via the CLI is insecure. Use --password-stdin.
-Login Succeeded
-```
+1. Create a build configuration which specifies the image stream tag of the build output.
+
+ ```bash
+ oc new-build --name ${IMAGE_NAME}-config --binary --strategy docker --to ${IMAGE_NAME}:${IMAGE_VERSION}
+ ```
-Push image to the built-in container image registry with the following command.
+1. Start the build to upload local contents, containerize, and output to the image stream tag specified before.
-```bash
-docker push ${Container_Registry_URL}/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_VERSION}
-```
+ ```bash
+ oc start-build ${IMAGE_NAME}-config --from-dir . --follow
+ ```
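As an optional check that isn't part of the original steps, you can confirm the build populated the image stream tag before moving on; it reuses the `IMAGE_NAME` and `IMAGE_VERSION` variables set above.

```bash
# Illustrative sanity check: the tag should exist once the build completes successfully.
oc get imagestreamtag ${IMAGE_NAME}:${IMAGE_VERSION}
```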
# [without DB connection](#tab/without-mysql-mage)
+### Build and run the application locally with Docker
+
+Before deploying the containerized application to a remote cluster, build and run with your local Docker to verify whether it works:
+ 1. Change directory to `2-simple` of your local clone.
-2. Run `mvn clean package` to package the application.
-3. Run one of the following commands to build the application image.
+1. Run `mvn clean package` to package the application.
+1. Run one of the following commands to build the application image.
* Build with Open Liberty base image: ```bash
docker push ${Container_Registry_URL}/${NAMESPACE}/${IMAGE_NAME}:${IMAGE_VERSION
docker build -t javaee-cafe-simple:1.0.0 --pull --file=Dockerfile-wlp . ```
-### Run the application locally with Docker
-
-Before deploying the containerized application to a remote cluster, run with your local Docker to verify whether it works:
- 1. Run `docker run -it --rm -p 9080:9080 javaee-cafe-simple:1.0.0` in your console.
-2. Wait for Liberty server to start and the application to deploy successfully.
-3. Open `http://localhost:9080/` in your browser to visit the application home page.
-4. Press **Control-C** to stop the application and Liberty server.
-
-### Push the image to the container image registry
+1. Wait for Liberty server to start and the application to deploy successfully.
+1. Open `http://localhost:9080/` in your browser to visit the application home page.
+1. Press **Control-C** to stop the application and Liberty server.
-When you're satisfied with the state of the application, push it to the built-in container image registry by following the instructions below.
+### Log in to the OpenShift CLI as the Azure AD user
-#### Log in to the OpenShift CLI as the Azure AD user
+When you're satisfied with the state of the application, sign in to the OpenShift CLI as the Azure AD user in order to build the image remotely on the cluster.
1. Sign in to the OpenShift web console from your browser using the credentials of an Azure AD user.
    1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console.
- 1. Select **openid**
+ 1. Select **AAD**
> [!NOTE]
> Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this and other articles.
When you're satisfied with the state of the application, push it to the built-in
1. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.

    ```bash
- oc login --token=XOdASlzeT7BHT0JZW6Fd4dl5EwHpeBlN27TAdWHseob --server=https://api.aqlm62xm.rnfghf.aroapp.io:6443
- Logged into "https://api.aqlm62xm.rnfghf.aroapp.io:6443" as "kube:admin" using the token provided.
-
- You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'
-
- Using project "default".
+ oc login --token=<login-token> --server=<server-url>
```
-#### Push the container image to the container registry for OpenShift
+### Build the application and push to the image stream
-Execute these commands to push the image to the container registry for OpenShift.
+Next, you're going to build the image remotely on the cluster by executing the following commands.
-```bash
-# Note: replace "<Container_Registry_URL>" with the fully qualified name of the registry
-Container_Registry_URL=<Container_Registry_URL>
+1. Identify the source directory and the Dockerfile.
-# Create a new tag with registry info that refers to source image
-docker tag javaee-cafe-simple:1.0.0 ${Container_Registry_URL}/open-liberty-demo/javaee-cafe-simple:1.0.0
+ ```bash
+ cd <path-to-your-repo>/open-liberty-on-aro/2-simple
-# Sign in to the built-in container image registry
-docker login -u $(oc whoami) -p $(oc whoami -t) ${Container_Registry_URL}
-```
+ # If you are building with Open Liberty base image, the existing Dockerfile is ready for you
-Successful output will look similar to the following.
+ # If you are building with WebSphere Liberty base image, uncomment and execute the following two commands to rename Dockerfile-wlp to Dockerfile
+ # mv Dockerfile Dockerfile.backup
+ # mv Dockerfile-wlp Dockerfile
+ ```
-```bash
-WARNING! Using --password via the CLI is insecure. Use --password-stdin.
-Login Succeeded
-```
+1. Create an image stream.
-Push image to the built-in container image registry with the following command.
+ ```bash
+ oc create imagestream javaee-cafe-simple
+ ```
-```bash
+1. Create a build configuration which specifies the image stream tag of the build output.
-docker push ${Container_Registry_URL}/open-liberty-demo/javaee-cafe-simple:1.0.0
-```
+ ```bash
+ oc new-build --name javaee-cafe-simple-config --binary --strategy docker --to javaee-cafe-simple:1.0.0
+ ```
+1. Start the build to upload local contents, containerize, and output to the image stream tag specified before.
+
+ ```bash
+ oc start-build javaee-cafe-simple-config --from-dir . --follow
+ ```
Now you can deploy the sample Liberty application to the ARO 4 cluster with the
1. [Log in to the OpenShift CLI with the token for the Azure AD user](https://github.com/Azure-Samples/open-liberty-on-aro/blob/master/guides/howto-deploy-java-liberty-app.md#log-in-to-the-openshift-cli-with-the-token).
1. Run the following commands to deploy the application.

    ```bash
- # Change directory to "<path-to-repo>/3-integration/connect-db/mysql"
- cd <path-to-repo>/3-integration/connect-db/mysql
+ # Change directory to "<path-to-repo>/3-integration/connect-db/mysql/target"
+ cd <path-to-repo>/3-integration/connect-db/mysql/target
# Change project to "open-liberty-demo" oc project open-liberty-demo
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
Previously updated : 01/25/2022 Last updated : 03/23/2022
For Azure services, use the recommended zone names as described in the following
| Azure Data Factory (Microsoft.DataFactory/factories) / portal | privatelink.adf.azure.com | adf.azure.com |
| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net |
| Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) / redisEnterprise | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net |
-| Azure Purview (Microsoft.Purview) / portal | privatelink.purview.azure.com | purview.azure.com |
+| Azure Purview (Microsoft.Purview) / account | privatelink.purview.azure.com | purview.azure.com |
| Azure Purview (Microsoft.Purview) / portal | privatelink.purviewstudio.azure.com | purview.azure.com |
| Azure Digital Twins (Microsoft.DigitalTwins) / digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
| Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.net | azurehdinsight.net |
purview Concept Data Owner Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-data-owner-policies.md
Then let's assume that user 'user1', who is part of two groups:
## Policy publishing
-A newly created policy exists in the draft mode state, only visible in Azure Purview. The act of publishing initiates enforcement of a policy in the specified data systems. It's an asynchronous action that can take up to 2 minutes to be effective on the underlying data sources.
+A newly created policy exists in the draft mode state and is only visible in Azure Purview. The act of publishing initiates enforcement of a policy in the specified data systems. It's an asynchronous action that can take between 5 minutes and 2 hours to be effective, depending on the enforcement code in the underlying data sources. For more information, consult the tutorials related to each data source.
A policy published to a data source could contain references to an asset belonging to a different data source. Such references will be ignored since the asset in question does not exist in the data source where the policy is applied.
security Ocsp Sha 1 Sunset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ocsp-sha-1-sunset.md
+
 Title: Sunset for SHA-1 Online Certificate Status Protocol signing
+description: Important information regarding changes to the OCSP service.
+++++ Last updated : 03/17/2022++++++
+# Sunset for SHA-1 Online Certificate Status Protocol signing
+
+Microsoft is updating the Online Certificate Status Protocol (OCSP) service to comply with a recent change to the [Certificate Authority / Browser Forum (CA/B Forum)](https://cabforum.org/) Baseline Requirements. This change requires that all publicly-trusted Public Key Infrastructures (PKIs) end usage of the SHA-1 hash algorithms for OCSP responses by May 31, 2022.
+
+Microsoft leverages certificates from multiple PKIs to secure its services. Many of those certificates already use OCSP responses that use the SHA-256 hash algorithm. This change brings all remaining PKIs used by Microsoft into compliance with this new requirement.
+
+## When will this change happen?
+
+Starting on March 28, 2022, Microsoft will begin updating its remaining OCSP Responders that use the SHA-1 hash algorithm to use the SHA-256 hash algorithm. By May 30, 2022, all OCSP responses for certificates used by Microsoft services will use the SHA-256 hash algorithm.
+
+## What is the scope of the change?
+
+This change impacts OCSP-based revocation for the Microsoft operated PKIs that were using SHA-1 hashing algorithms. All OCSP responses will use the SHA-256 hashing algorithm. The change only impacts OCSP responses, not the certificates themselves.
+
+## Why is this change happening?
+
+The [Certificate Authority / Browser Forum (CA/B Forum)](https://cabforum.org/) created this requirement from [ballot measure SC53](https://cabforum.org/2022/01/26/ballot-sc53-sunset-for-sha-1-ocsp-signing/). Microsoft is updating its configuration to remain in line with the updated [Baseline Requirement](https://cabforum.org/baseline-requirements-documents/).
+
+## Will this change affect me?
+
+Most customers won't be impacted. However, some older client configurations that don't support SHA-256 could experience a certificate validation error.
+
+After May 31, 2022, clients that don't support SHA-256 hashes will be unable to validate the revocation status of a certificate, which could result in a failure in the client, depending on the configuration.
+
+If you're unable to update your legacy client to one that supports SHA-256, you can disable revocation checking to bypass OCSP until you update your client. If your Transport Layer Security (TLS) stack dates from before 2015, review your configuration for potential incompatibilities.
+
+## Next steps
+
+If you have questions, contact us through [support](https://azure.microsoft.com/support/options/).
service-fabric Service Fabric Cluster Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-introduction.md
The Cluster Resource Manager is the system component that handles orchestration
2. Optimizing Your Environment 3. Helping with Other Processes
+For a training video that explains how the Cluster Resource Manager works, see [Cluster Resource Manager](/shows/building-microservices-applications-on-azure-service-fabric/cluster-resource-manager.md).
### What it isn't In traditional N tier applications, there's always a [Load Balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing)). Usually this was a Network Load Balancer (NLB) or an Application Load Balancer (ALB) depending on where it sat in the networking stack. Some load balancers are Hardware-based like F5's BigIP offering, others are software-based such as Microsoft's NLB. In other environments, you might see something like HAProxy, nginx, Istio, or Envoy in this role. In these architectures, the job of load balancing is to ensure stateless workloads receive (roughly) the same amount of work. Strategies for balancing load varied. Some balancers would send each different call to a different server. Others provided session pinning/stickiness. More advanced balancers use actual load estimation or reporting to route a call based on its expected cost and current machine load.
service-fabric Service Fabric Cross Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cross-availability-zones.md
description: Learn how to create an Azure Service Fabric cluster across Availabi
Previously updated : 05/24/2021- Last updated : 03/16/2022+ # Deploy an Azure Service Fabric cluster across Availability Zones
The Service Fabric node type must be enabled to support multiple Availability Zo
* If this value is omitted or set to `Hierarchical`: VMs are grouped to reflect the zonal distribution in up to 15 UDs. Each of the three zones has five UDs. This ensures that the zones are updated one at a time, moving to next zone only after completing five UDs within the first zone. This update process is safer for the cluster and the user application. This property only defines the upgrade behavior for Service Fabric application and code upgrades. The underlying virtual machine scale set upgrades are still parallel in all Availability Zones. This property doesn't affect the UD distribution for node types that don't have multiple zones enabled.
-* The third value is `vmssZonalUpgradeMode = Parallel`. This property is mandatory if a node type with multiple Availability Zones is added. This property defines the upgrade mode for the virtual machine scale set updates that happen in all Availability Zones at once.
+* The third value is `vmssZonalUpgradeMode`, which is optional and can be updated at any time. This property defines whether virtual machine scale set upgrades happen in parallel or sequentially across Availability Zones.
- Currently, this property can only be set to parallel.
+ * If this value is set to `Parallel`: All scale set updates happen in parallel in all zones. This deployment mode is faster for upgrades, but we don't recommend it because it goes against the SDP guidelines, which state that updates should be applied to one zone at a time.
+ * If this value is omitted or set to `Hierarchical`: Zones are updated one at a time, moving to the next zone only after completing five UDs within the first zone. This update process is safer for the cluster and the user application.
>[!IMPORTANT] >The Service Fabric cluster resource API version should be 2020-12-01-preview or later.
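As an illustrative sketch only, a cluster resource fragment in a Resource Manager template might set this property as follows (other required cluster properties are omitted, and the parameterized name is hypothetical):

```JSON
{
  "apiVersion": "2020-12-01-preview",
  "type": "Microsoft.ServiceFabric/clusters",
  "name": "[parameters('clusterName')]",
  "properties": {
    "vmssZonalUpgradeMode": "Hierarchical"
  }
}
```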
service-fabric Service Fabric Stateless Node Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-stateless-node-types.md
description: Learn how to create and deploy stateless node types in Azure Servic
Previously updated : 10/19/2021- Last updated : 03/16/2022+ # Deploy an Azure Service Fabric cluster with stateless-only node types Service Fabric node types come with inherent assumption that at some point of time, stateful services might be placed on the nodes. Stateless node types change this assumption for a node type, thus allowing the node type to use other features such as faster scale out operations, support for Automatic OS Upgrades on Bronze durability and scaling out to more than 100 nodes in a single virtual machine scale set.
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
Title: Enable Zone to Zone Disaster Recovery for Azure Virtual Machines description: This article describes when and how to use Zone to Zone Disaster Recovery for Azure virtual machines.--++ Previously updated : 04/28/2020- Last updated : 03/23/2022+
This article describes how to replicate, failover, and failback Azure virtual ma
>[!NOTE] >
->- Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, East Asia, Japan East, Korea Central, Australia East, India Central, UK South, West Europe, North Europe, Norway East, France Central, Sweden Central (Managed Access), Canada Central, Central US, South Central US, East US, East US 2, West US 2, Brazil South and West US 3.
+>- Support for Zone to Zone disaster recovery is currently limited to the following regions: Southeast Asia, East Asia, Japan East, Korea Central, Australia East, India Central, China North 3, UK South, West Europe, North Europe, Norway East, France Central, Switzerland North, Sweden Central (Managed Access), South Africa North, Canada Central, US Gov Virginia, Central US, South Central US, East US, East US 2, West US 2, Brazil South and West US 3.
>- Site Recovery does not move or store customer data out of the region in which it is deployed when the customer is using Zone to Zone Disaster Recovery. Customers may select a Recovery Services Vault from a different region if they so choose. The Recovery Services Vault contains metadata but no actual customer data. >- Zone to Zone disaster recovery is not supported for VMs having ZRS managed disks.
spring-cloud Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/overview.md
The following quickstarts apply to Basic/Standard tier only. For Enterprise tier
Based on our learnings from customer engagements, we built Azure Spring Cloud Enterprise tier with commercially supported Spring runtime components to help enterprise customers to ship faster and unlock SpringΓÇÖs full potential.
+The following video introduces Azure Spring Cloud Enterprise tier.
+
+<br>
+
+> [!VIDEO https://www.youtube.com/embed/RoUtUv5CQSc]
+ ### Deploy and manage Spring and polyglot applications The fully managed VMware Tanzu® Build Service™ in Azure Spring Cloud Enterprise tier automates container creation, management and governance at enterprise scale using open-source [Cloud Native Buildpacks](https://buildpacks.io/) and commercial [VMware Tanzu® Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/). Tanzu Build Service offers a higher-level abstraction for building apps and provides a balance of control that reduces the operational burden on developers and supports enterprise IT operators who manage applications at scale. You can configure what Buildpacks to apply and build Spring applications and polyglot applications that run alongside Spring applications on Azure Spring Cloud.
spring-cloud Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quotas.md
All Azure services set default limits and quotas for resources and features. A
|--|--|--|| | vCPU | per app instance | 1 | 4 | | Memory | per app instance | 2 GB | 8 GB |
-| Azure Spring Cloud service instances | per region per subscription | 1 | 1 |
+| Azure Spring Cloud service instances | per region per subscription | 10 | 10 |
| Total app instances | per Azure Spring Cloud service instance | 25 | 500 | | Custom Domains | per Azure Spring Cloud service instance | 0 | 25 | | Persistent volumes | per Azure Spring Cloud service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps |
storage Storage Files Quick Create Use Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md
Title: Tutorial - Create and use an Azure file shares on Windows VMs
-description: This tutorial covers how to create and use an Azure files shares in the Azure portal. Connect it to a Windows VM, connect to the file share, and upload a file to the file share.
+ Title: Tutorial - Create an SMB Azure file share and connect it to a Windows virtual machine using the Azure portal
+description: This tutorial covers how to create an SMB Azure file share using the Azure portal, connect it to a Windows VM, upload a file to the file share, create a snapshot, and restore the share from the snapshot.
Previously updated : 02/14/2022 Last updated : 03/23/2022
-#Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share so I can determine whether I want to subscribe to the service.
+#Customer intent: As an IT admin new to Azure Files, I want to try out Azure file shares so I can determine whether I want to subscribe to the service.
-# Tutorial: Create and manage Azure file shares with Windows virtual machines via the Azure portal
+# Tutorial: Create an SMB Azure file share and connect it to a Windows VM using the Azure portal
-Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). In this tutorial, you will learn a few ways you can use an Azure file share in a Windows virtual machine (VM).
+Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). In this tutorial, you will learn a few ways you can use an SMB Azure file share in a Windows virtual machine (VM).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
> * Create a storage account > * Create a file share > * Deploy a VM
-> * Connect to a VM
+> * Connect to the VM
> * Mount an Azure file share to your VM > * Create and delete a share snapshot
The following image shows the settings on the **Basics** tab for a new storage a
### Create an Azure file share
-Next, create a file share.
+Next, create an SMB Azure file share.
1. When the Azure storage account deployment is complete, select **Go to resource**. 1. Select **File shares** from the storage account pane.
Next, create a file share.
### Deploy a VM
-So far, you've created an Azure storage account and a file share with one file in it. Next, create an Azure VM with Windows Server 2016 Datacenter to represent the on-premises server.
+So far, you've created an Azure storage account and a file share with one file in it. Next, create an Azure VM with Windows Server 2019 Datacenter to represent the on-premises server.
1. Expand the menu on the left side of the portal and select **Create a resource** in the upper left-hand corner of the Azure portal. 1. Under **Popular services** select **Virtual machine**.
So far, you've created an Azure storage account and a file share with one file i
![Screenshot of Basic tab, basic VM information filled out.](./media/storage-files-quick-create-use-windows/vm-resource-group-and-subscription.png) 1. Under **Instance details**, name the VM *qsVM*.
-1. For **Image** select **Windows Server 2016 Datacenter - Gen2**.
+1. For **Image** select **Windows Server 2019 Datacenter - Gen2**.
1. Leave the default settings for **Region**, **Availability options**, and **Size**. 1. Under **Administrator account**, add a **Username** and enter a **Password** for the VM. 1. Under **Inbound port rules**, choose **Allow selected ports** and then select **RDP (3389)** and **HTTP** from the drop-down.
stream-analytics Machine Learning Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/machine-learning-udf.md
Previously updated : 12/21/2020 Last updated : 03/23/2022
-# Integrate Azure Stream Analytics with Azure Machine Learning (Preview)
+# Integrate Azure Stream Analytics with Azure Machine Learning
You can implement machine learning models as a user-defined function (UDF) in your Azure Stream Analytics jobs to do real-time scoring and predictions on your streaming input data. [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) allows you to use any popular open-source tool, such as TensorFlow, scikit-learn, or PyTorch, to prep, train, and deploy models.
Complete the following steps before you add a machine learning model as a functi
1. Use Azure Machine Learning to [deploy your model as a web service](../machine-learning/how-to-deploy-and-where.md).
-2. Your scoring script should have [sample inputs and outputs](../machine-learning/how-to-deploy-and-where.md) which is used by Azure Machine Learning to generate a schema specification. Stream Analytics uses the schema to understand the function signature of your web service. You can use this [sample swagger definition](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/AzureML/swagger-example.json) as a reference to ensure you have set it up correctly.
+2. Your machine learning endpoint must have an associated [Swagger definition](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-advanced-entry-script) that helps Stream Analytics understand the schema of its input and output. You can use this [sample swagger definition](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/AzureML/asa-mlswagger.json) as a reference to ensure you have set it up correctly.
3. Make sure your web service accepts and returns JSON serialized data.
-4. Deploy your model on [Azure Kubernetes Service](../machine-learning/how-to-deploy-and-where.md#choose-a-compute-target) for high-scale production deployments. If the web service is not able to handle the number of requests coming from your job, the performance of your Stream Analytics job will be degraded, which impacts latency. Models deployed on Azure Container Instances are supported only when you use the Azure portal. Models built using [Azure Machine Learning Designer](../machine-learning/concept-designer.md) are not yet supported in Stream Analytics.
+4. Deploy your model on [Azure Kubernetes Service](../machine-learning/how-to-deploy-and-where.md#choose-a-compute-target) for high-scale production deployments. If the web service is not able to handle the number of requests coming from your job, the performance of your Stream Analytics job will be degraded, which impacts latency. Models deployed on Azure Container Instances are supported only when you use the Azure portal.
## Add a machine learning model to your job
The following table describes each property of Azure Machine Learning Service fu
|Function alias|Enter a name to invoke the function in your query.| |Subscription|Your Azure subscription..| |Azure Machine Learning workspace|The Azure Machine Learning workspace you used to deploy your model as a web service.|
-|Deployments|The web service hosting your model.|
+|Endpoint|The web service hosting your model.|
|Function signature|The signature of your web service inferred from the API's schema specification. If your signature fails to load, check that you have provided sample input and output in your scoring script to automatically generate the schema.|
-|Number of parallel requests per partition|This is an advanced configuration to optimize high-scale throughput. This number represents the concurrent requests sent from each partition of your job to the web service. Jobs with six streaming units (SU) and lower have one partition. Jobs with 12 SUs have two partitions, 18 SUs have three partitions and so on.<br><br> For example, if your job has two partitions and you set this parameter to four, there will be eight concurrent requests from your job to your web service. At this time of public preview, this value defaults to 20 and cannot be updated.|
+|Number of parallel requests per partition|This is an advanced configuration to optimize high-scale throughput. This number represents the concurrent requests sent from each partition of your job to the web service. Jobs with six streaming units (SU) and lower have one partition. Jobs with 12 SUs have two partitions, 18 SUs have three partitions and so on.<br><br> For example, if your job has two partitions and you set this parameter to four, there will be eight concurrent requests from your job to your web service.|
|Max batch count|This is an advanced configuration for optimizing high-scale throughput. This number represents the maximum number of events be batched together in a single request sent to your web service.|
-## Supported input parameters
+## Calling the machine learning endpoint from your query
-When your Stream Analytics query invokes an Azure Machine Learning UDF, the job creates a JSON serialized request to the web service. The request is based on a model-specific schema. You have to provide a sample input and output in your scoring script to [automatically generate a schema](../machine-learning/how-to-deploy-and-where.md). The schema allows Stream Analytics to construct the JSON serialized request for any of the supported data types such as numpy, pandas and PySpark. Multiple input events can be batched together in a single request.
+When your Stream Analytics query invokes an Azure Machine Learning UDF, the job creates a JSON serialized request to the web service. The request is based on a model-specific schema that Stream Analytics infers from the endpoint's swagger.
The following Stream Analytics query is an example of how to invoke an Azure Machine Learning UDF:
The following Stream Analytics query is an example of how to invoke an Azure Mac
SELECT udf.score(<model-specific-data-structure>) INTO output FROM input
+WHERE <model-specific-data-structure> is not null
```
-Stream Analytics only supports passing one parameter for Azure Machine Learning functions. You may need to prepare your data before passing it as an input to machine learning UDF. You must ensure the input to ML UDF is not null as null inputs will cause the job to fail.
+If the input data sent to the ML UDF is inconsistent with the schema that the endpoint expects, the endpoint returns a response with error code 400, which causes your Stream Analytics job to go to a failed state. It is recommended that you [enable resource logs](https://docs.microsoft.com/azure/stream-analytics/stream-analytics-job-diagnostic-logs#send-diagnostics-to-azure-monitor-logs) for your job so you can easily debug and troubleshoot such problems. To avoid these failures, it is strongly recommended that you do the following (see the sketch after this list):
+
+- Validate that the input to your ML UDF is not null
+- Validate the type of every field that is an input to your ML UDF to ensure it matches what the endpoint expects
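The following query is a rough sketch, not part of the original guidance, of how both validations can be done before invoking the UDF. The field names reuse the taxi example from the request samples below, the target types are illustrative assumptions about what the endpoint expects, and `TRY_CAST` returns NULL when a value can't be converted:

```SQL
-- Illustrative validation sketch: cast fields to the expected types and drop events with missing or invalid values.
WITH Dataframe AS (
    SELECT
        TRY_CAST(vendorid AS nvarchar(max)) AS vendorid,
        TRY_CAST(weekday AS nvarchar(max)) AS weekday,
        TRY_CAST(pickuphour AS bigint) AS pickuphour,
        TRY_CAST(passenger AS bigint) AS passenger,
        TRY_CAST(distance AS float) AS distance
    FROM input
)
SELECT udf.score(Dataframe) INTO output
FROM Dataframe
WHERE vendorid IS NOT NULL
    AND weekday IS NOT NULL
    AND pickuphour IS NOT NULL
    AND passenger IS NOT NULL
    AND distance IS NOT NULL
```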
+ ## Pass multiple input parameters to the UDF
The following JSON is an example request:
```JSON {
- "data": [
+ "Inputs": {
+ "WebServiceInput0": [
["1","Mon","12","1","5.8"], ["2","Wed","10","2","10"]
- ]
+ ]
+ }
} ```
FROM input
SELECT udf.score(Dataframe) INTO output FROM Dataframe
+WHERE Dataframe is not null
``` The following JSON is an example request from the previous query: ```JSON {
- "data": [{
+ "Inputs": {
+ "WebServiceInput0": [
+ {
"vendorid": "1", "weekday": "Mon", "pickuphour": "12", "passenger": "1", "distance": "5.8"
- }, {
+ },
+ {
"vendorid": "2", "weekday": "Tue", "pickuphour": "10", "passenger": "2", "distance": "10"
- }
- ]
+ }]
+ }
} ```
When you deploy your model to Azure Kubernetes Service, you can [profile your mo
If you have a scenario with high event throughput, you may need to change the following parameters in Stream Analytics to achieve optimal performance with low end-to-end latencies:
-1. Max batch count.
-2. Number of parallel requests per partition.
+- Maximum batch count.
+- Number of parallel requests per partition.
### Determine the right batch size
After you have deployed your web service, you send sample request with varying b
At optimal scaling, your Stream Analytics job should be able to send multiple parallel requests to your web service and get a response within few milliseconds. The latency of the web service's response can directly impact the latency and performance of your Stream Analytics job. If the call from your job to the web service takes a long time, you will likely see an increase in watermark delay and may also see an increase in the number of backlogged input events.
-To prevent such latency, ensure that your Azure Kubernetes Service (AKS) cluster has been provisioned with the [right number of nodes and replicas](../machine-learning/how-to-deploy-azure-kubernetes-service.md#using-the-cli). It's critical that your web service is highly available and returns successful responses. If your job receives a service unavailable response (503) from your web service, it will continuously retry with exponential back off. Any response other than success (200) and service unavailable (503) will cause your job to go to a failed state.
+You can achieve low latency by ensuring that your Azure Kubernetes Service (AKS) cluster has been provisioned with the [right number of nodes and replicas](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-azure-kubernetes-service?tabs=python#autoscaling). It's critical that your web service is highly available and returns successful responses. If your job receives a retriable error, such as a service unavailable response (503), it will automatically retry with exponential backoff. If your job receives one of the following errors as a response from the endpoint, the job will go to a failed state:
+* Bad Request (400)
+* Conflict (409)
+* Not Found (404)
+* Unauthorized (401)
## Next steps * [Tutorial: Azure Stream Analytics JavaScript user-defined functions](stream-analytics-javascript-user-defined-functions.md)
-* [Scale your Stream Analytics job with Azure Machine Learning Studio (classic) function](stream-analytics-scale-with-machine-learning-functions.md)
stream-analytics Stream Analytics Get Started With Azure Stream Analytics To Process Data From Iot Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-get-started-with-azure-stream-analytics-to-process-data-from-iot-devices.md
Previously updated : 11/26/2019 Last updated : 03/23/2022 # Process real-time IoT data streams with Azure Stream Analytics
In this example, the data is generated from a Texas Instruments sensor tag devic
} ```
-In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hubs](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md).
+In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hubs](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or IoT Hubs and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md).
For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you will learn how to connect your job to inputs and outputs and deploy them to the Azure service.
Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **
## Conclusion
-The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this is just to get you started. Stream Analytics supports a variety of inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md).
+The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this is just to get you started. Stream Analytics supports a variety of inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md).
stream-analytics Stream Analytics Managed Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-managed-identities-overview.md
+
+ Title: Managed identities for Azure Stream Analytics
+description: This article describes managed identities for Azure Stream Analytics.
++++ Last updated : 03/02/2022++
+# Managed identities for Azure Stream Analytics
+
+Azure Stream Analytics currently allows you to authenticate to other Azure resources using managed identities.
+A common challenge when building cloud applications is managing the credentials your code uses to authenticate to cloud services. Keeping those credentials secure is an important task: they shouldn't be stored on developer workstations or checked into source control.
+
+The Azure Active Directory (Azure AD) managed identities for Azure resources feature solves this problem. The feature provides Azure services with an automatically managed identity in Azure AD. This allows you to assign an identity to your Stream Analytics job, which can then authenticate to any input or output that supports Azure AD authentication, without any credentials. See the [managed identities for Azure resources overview page](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) for more information about this service.
+++
+## Managed identity types
+
+Stream Analytics supports two types of managed identities:
+
+* System-assigned identity: When you enable a system-assigned managed identity for your job, you create an identity in Azure AD that is tied to the lifecycle of that job. So when you delete the resource, Azure automatically deletes the identity for you.
+* User-assigned identity: You may also create a managed identity as a standalone Azure resource and assign it to your Stream Analytics job. In the case of user-assigned managed identities, the identity is managed separately from the resources that use it.
++++
+> [!IMPORTANT]
+> Regardless of the type of identity chosen, a managed identity is a service principal of a special type that may only be used with Azure resources. The corresponding service principal is automatically removed when the managed identity is deleted.
+
+## Connecting your job to other Azure resources using managed identity
+
+Below is a table that shows Azure Stream Analytics inputs and outputs that support system-assigned managed identity or user-assigned managed identity:
+
+| Type |  Adapter | User-assigned managed identity | System-assigned managed identity |
+|--|-|||
+| Storage Account | Blob/ADLS Gen 2 | Yes | Yes |
+| Inputs | Event Hubs | Yes | Yes |
+| | IoT Hubs | No (available with a workaround: users can route events to Event Hubs) | No |
+| | Blob/ADLS Gen 2 | Yes | Yes |
+| Reference Data | Blob/ADLS Gen 2 | Yes | Yes |
+| | SQL | Yes (preview) | Yes |
+| Outputs | Event Hubs | Yes | Yes |
+| | SQL Database | Yes | Yes |
+| | Blob/ADLS Gen 2 | Yes | Yes |
+| | Table Storage | No | No |
+| | Service Bus Topic | No | No |
+| | Service Bus Queue | No | No |
+| | Cosmos DB | No | No |
+| | Power BI | Yes | No |
+| | Data Lake Storage Gen1 | Yes | Yes |
+| | Azure Functions | No | No |
+| | Azure Database for PostgreSQL | No | No |
+| | Azure Data Explorer | Yes | Yes |
+| | Azure Synapse Analytics | Yes | Yes |
+++
+## Next steps
+
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
stream-analytics Stream Analytics Previews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-previews.md
Previously updated : 03/16/2021 Last updated : 03/23/2022 # Azure Stream Analytics preview features
This article summarizes all the features currently in preview for Azure Stream A
Azure Stream Analytics supports [Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for Azure SQL Database output sinks. Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate due to password changes.
-## Real-time high performance scoring with custom ML models managed by Azure Machine Learning
-
-Azure Stream Analytics supports high-performance, real-time scoring by leveraging custom pre-trained Machine Learning models managed by Azure Machine Learning, and hosted in Azure Kubernetes Service (AKS) or Azure Container Instances (ACI), using a workflow that does not require you to write code. [Sign up](https://aka.ms/asapreview1) for preview
- ## C# custom de-serializers Developers can leverage the power of Azure Stream Analytics to process data in Protobuf, XML, or any custom format. You can implement [custom de-serializers](custom-deserializer-examples.md) in C#, which can then be used to de-serialize events received by Azure Stream Analytics.
stream-analytics Stream Analytics User Assigned Managed Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-user-assigned-managed-identity-overview.md
+
+ Title: User-assigned managed identities for Azure Stream Analytics (preview)
+description: This article describes configuring user-assigned managed identities for Azure Stream Analytics.
++++ Last updated : 03/22/2022++
+# User-assigned managed identities for Azure Stream Analytics (preview)
+
+Azure Stream Analytics currently allows you to use user-assigned managed identities to authenticate to your job's inputs and outputs.
+
+In this article, you learn how to create a user-assigned managed identity for your Azure Stream Analytics job by using the Azure portal.
+
+> [!IMPORTANT]
+> Regardless of the type of identity chosen, a managed identity is a service principal of a special type that may only be used with Azure resources. The corresponding service principal is automatically removed when the managed identity is deleted.
++
+## Create a user-assigned managed identity
+
+To create a user-assigned managed identity, your account needs the [Managed Identity Contributor role](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#managed-identity-contributor) assignment.
+
+> [!NOTE]
+> Only alphanumeric characters (0-9, a-z, and A-Z) and the hyphen (-) are supported when you create user-assigned managed identities. For the assignment to a virtual machine or virtual machine scale set to work correctly, the name is limited to 24 characters. For more information, see [**FAQs and known issues**](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/known-issues)
+
+ ![Create managed identity](./media/common/create-managed-identity.png)
+
+1. Sign in to the Azure portal by using an account associated with the Azure subscription to create the user-assigned managed identity.
+2. In the search box, enter **Managed Identities**. Under **Services**, select **Managed Identities**.
+3. Select **Add**, and enter values in the following boxes in the **Create User Assigned Managed Identity** pane:
+ * **Subscription**: Choose the subscription to create the user-assigned managed identity under.
+ * **Resource group**: Choose a resource group to create the user-assigned managed identity in, or select **Create new** to create a new resource group.
+ * **Region**: Choose a region to deploy the user-assigned managed identity, for example, **West US**.
+ * **Name**: Enter the name for your user-assigned managed identity, for example, UAI1.
+4. Select **Review + create** to review your changes.
+5. Select **Create**.
+
+For more information on how to manage user-assigned managed identities, see [Manage user-assigned managed identities](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp).
++
+## Switching to user-assigned managed identity
+If you have an existing job, you can switch to a user-assigned identity by following the instructions below:
+
+After creating your user-assigned identity and configuring your input and output, you can switch to user-assigned identity by navigating to the **Managed Identity** tab on the left side under **Configure**.
+
+ ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity-new.png)
+
+1. Select the **Managed Identity** tab under **Configure**.
+2. Select **Switch Identity** and select the identity to use with the job.
+3. Select the subscription where your user-assigned identity is located and select the name of your identity.
+4. Review and select **Save**.
++
+## Endpoint management
+> [!NOTE]
+> After switching the job to a user-assigned identity, you may have to re-grant access on the inputs and outputs associated with the Stream Analytics job so that they accept the user-assigned identity and your job can run.
+
+1. Select **Endpoint management** and grant access to each input and output under **Connection**.
+2. Under **Connection status**, select **Try regranting access** to switch from the system-assigned to the user-assigned identity.
+3. Wait a few minutes for access to be granted on the input/output.
+
+You can select each input and output in **Endpoint management** to manually configure an adapter for the job.
+++
+## Next steps
+
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
synapse-analytics What Is Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/what-is-machine-learning.md
Most machine learning projects involve well-established steps, and one of these
### Data source and pipelines
-Thanks to [Azure Data Factory](../../data-factory/introduction.md), a natively integrated part of Azure Synapse, there is a powerful set of tools available for data ingestion and data orchestration pipelines. This allows you to easily build data pipelines to access and transform the data into a format that can be consumed for machine learning. [Learn more about data pipelines](../../data-factory/concepts-pipelines-activities.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) in Synapse.
+Thanks to [Azure Data Factory](../../data-factory/introduction.md), a natively integrated part of Azure Synapse, there is a powerful set of tools available for data ingestion and data orchestration pipelines. This allows you to easily build data pipelines to access and transform the data into a format that can be consumed for machine learning. [Learn more about data pipelines](../../data-factory/concepts-pipelines-activities.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json) in Synapse.
### Data preparation and exploration/visualization
synapse-analytics Quickstart Copy Activity Load Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-copy-activity-load-sql-pool.md
In this section, you manually trigger the pipeline published in the previous ste
Advance to the following article to learn about Azure Synapse Analytics support: > [!div class="nextstepaction"]
-> [Pipeline and activities](../data-factory/concepts-pipelines-activities.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
-> [Connector overview](../data-factory/connector-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
-> [Copy activity](../data-factory/copy-activity-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Pipeline and activities](../data-factory/concepts-pipelines-activities.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Connector overview](../data-factory/connector-overview.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Copy activity](../data-factory/copy-activity-overview.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
synapse-analytics Quickstart Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-data-flow.md
A pipeline contains the logical flow for an execution of a set of activities. In
Once you create your Data Flow, you'll be automatically sent to the data flow canvas. In this step, you'll build a data flow that takes the MoviesDB.csv in ADLS storage and aggregates the average rating of comedies from 1910 to 2000. You'll then write this file back to the ADLS storage.
-1. Above the data flow canvas, slide the **Data flow debug** slider on. Debug mode allows for interactive testing of transformation logic against a live Spark cluster. Data Flow clusters take 5-7 minutes to warm up and users are recommended to turn on debug first if they plan to do Data Flow development. For more information, see [Debug Mode](../data-factory/concepts-data-flow-debug-mode.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json).
+1. Above the data flow canvas, slide the **Data flow debug** slider on. Debug mode allows for interactive testing of transformation logic against a live Spark cluster. Data Flow clusters take 5-7 minutes to warm up and users are recommended to turn on debug first if they plan to do Data Flow development. For more information, see [Debug Mode](../data-factory/concepts-data-flow-debug-mode.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json).
![Slide the debug on](media/quickstart-data-flow/debug-on.png)
Once you create your Data Flow, you'll be automatically sent to the data flow ca
1. Name your filter transformation **FilterYears**. Click on the expression box next to **Filter on** to open the expression builder. Here you'll specify your filtering condition.
-1. The data flow expression builder lets you interactively build expressions to use in various transformations. Expressions can include built-in functions, columns from the input schema, and user-defined parameters. For more information on how to build expressions, see [Data Flow expression builder](../data-factory/concepts-data-flow-expression-builder.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json).
+1. The data flow expression builder lets you interactively build expressions to use in various transformations. Expressions can include built-in functions, columns from the input schema, and user-defined parameters. For more information on how to build expressions, see [Data Flow expression builder](../data-factory/concepts-data-flow-expression-builder.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json).
    In this quickstart, you want to filter movies of genre comedy that came out between the years 1910 and 2000. As year is currently a string, you need to convert it to an integer using the ```toInteger()``` function. Use the greater than or equal to (>=) and less than or equal to (<=) operators to compare against the literal year values 1910 and 2000. Union these expressions together with the `&&` (and) operator. The expression comes out as:
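    A filter expression along these lines satisfies that description (a sketch only; the exact genre check depends on how the `genres` column is formatted in MoviesDB.csv, and a pattern match may be needed if a movie lists multiple genres):

    ```
    toInteger(year) >= 1910 && toInteger(year) <= 2000 && genres == 'Comedy'
    ```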
If you followed this quickstart correctly, you should have written 83 rows and 2
Advance to the following articles to learn about Azure Synapse Analytics support: > [!div class="nextstepaction"]
-> [Pipeline and activities](../data-factory/concepts-pipelines-activities.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
-> [Mapping data flow overview](../data-factory/concepts-data-flow-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
-> [Data flow expression language](../data-factory/data-flow-expression-functions.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Pipeline and activities](../data-factory/concepts-pipelines-activities.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Mapping data flow overview](../data-factory/concepts-data-flow-overview.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Data flow expression language](../data-factory/data-flow-expression-functions.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
synapse-analytics Quickstart Transform Data Using Spark Job Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md
You can add properties for Apache Spark job definition activity in this panel.
Advance to the following articles to learn about Azure Synapse Analytics support: > [!div class="nextstepaction"]
-> [Pipeline and activities](../data-factory/concepts-pipelines-activities.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
-> [Mapping data flow overview](../data-factory/concepts-data-flow-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
-> [Data flow expression language](../data-factory/data-flow-expression-functions.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Pipeline and activities](../data-factory/concepts-pipelines-activities.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Mapping data flow overview](../data-factory/concepts-data-flow-overview.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Data flow expression language](../data-factory/data-flow-expression-functions.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
At a high-level, the connector provides the following capabilities:
* Comprehensive predicate push down support, where filters on DataFrame get mapped to corresponding SQL predicate push down. * Support for column pruning.
+> [!NOTE]
+> The latest release of the Connector introduced certain default behavior changes for the write path. Please refer to the section [Common Issues](#common-issues) for scenario description and relevant mitigation steps.
+ ## Orchestration Approach ### Write
Benefits of this approach over printing the end state result to console (partial
## Connector API Documentation
-Azure Synapse Dedicated SQL Pool Connector for Apache Spark - [API Documentation](https://synapsesql.blob.core.windows.net/docs/2.0.0/scaladocs/com/microsoft/spark/sqlanalytics/utils/https://docsupdatetracker.net/index.html).
+Azure Synapse Dedicated SQL Pool Connector for Apache Spark - [API Documentation](https://synapsesql.blob.core.windows.net/docs/2.0.0/scaladocs/com/microsoft/spark/sqlanalytics/https://docsupdatetracker.net/index.html).
## Code Templates
This section presents reference code templates to describe how to use and invoke
### Write Scenario
-#### `synapsesql` Write Method Signature
+#### Write Request - `synapsesql` Method Signature
The method signature for the Connector version built for Spark 2.4.8 has one fewer argument than the version built for Spark 3.1.2. Following are the two method signatures:
The method signature for the Connector version built for Spark 2.4.8 has one les
```Scala synapsesql(tableName:String, tableType:String = Constants.INTERNAL,
- location:Option[String] = None)
+ location:Option[String] = None):Unit
``` * Spark Pool Version 3.1.2
synapsesql(tableName:String,
synapsesql(tableName:String, tableType:String = Constants.INTERNAL, location:Option[String] = None,
- callBackHandle=Option[(Map[String, Any], Option[Throwable])=>Unit])
+ callBackHandle=Option[(Map[String, Any], Option[Throwable])=>Unit]):Unit
``` #### Write Code Template
readDF.
if(errorDuringWrite.isDefined) throw errorDuringWrite.get ```
-#### SaveModes
+#### DataFrame SaveMode Support
Following is a brief description of how the SaveMode setting by the User would translate into actions taken by the Connector:
Following is a sample JSON string with post-write metrics:
### Read Scenario
-#### `synapsesql` Read Method Signature
+#### Read Request - `synapsesql` Method Signature
Following is the signature to leverage `synapsesql` (applies to both Spark 2.4.8 and Spark 3.1.2 Connector versions):
import org.apache.spark.sql.SqlAnalyticsConnector._
```
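As a brief usage sketch (the three-part table name is a placeholder, and this assumes the job runs in the workspace attached to the target Dedicated SQL Pool), a read typically looks like the following once the import above is in place:

```Scala
// Illustrative read: load a Dedicated SQL Pool table into a DataFrame using a three-part name.
val readDF = spark.read.
    synapsesql("<db_name>.<schema_name>.<table_name>")

// Materialize a small sample to confirm the read succeeded.
readDF.show(10)
```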
-### Things to Note
+## Common Issues
+
+The latest release of the Connector introduced certain default behavior changes for the write path. Following is the list of such common behaviors and necessary mitigation steps:
+
+* Error Handling (i.e., throwing exceptions from cells) when writing to Synapse Dedicated SQL Pool.
+ * Context
+ * Typically, when the code in a notebook cell contains an error, an error will be surfaced and notebook execution will stop.
+ * The current implementation of this connector is different, in that any errors will be written to the Driver Logs, but notebook cell execution will continue.
+ * Mitigation
+ * Handle and surface the error explicitly so that the cell execution fails. Subsequent cell executions will then not be attempted (that is, they are cancelled).
+ * See the Write [Code Template](#write-code-template) section for a sample code reference.
+
+* A write request returns a validation error message, as described below
+ * Detailed Error Message
    `java.lang.IllegalArgumentException: Valid SQL Server option - logical_server and a valid three-part table name are required to succesfully setup SQL Server connections`.
+ * Mitigation - Specify the write option parameter `Constants.SERVER` as shown below (also included in the [Write Code Template](#write-code-template)).
+
+ ```Scala
+ df.write.
+ option(Constants.SERVER, "<sql_server_name-supporting-dedicated-pool>.sql.azuresynapse.net"). //required; can be fetched from Portal - Azure Synapse workspace Overview pane - Dedicated SQL endpoint config.
+ option(Constants.TEMP_FOLDER, "abfss://<storage-container-name>@<storage-account-name>.dfs.core.windows.net/temp-tables"). //Defaults to workspace attached primary storage.
+ mode(SaveMode.Overwrite). //Defaults to ErrorIfExists SaveMode option.
+ synapsesql("<db_name>.<schema_name>.<table_name>", Constants.INTERNAL, None, Option(callBackHandle))
+ ```
+
+* Deprecation Warning
+ * Context - When using the `synapsesql` method to write to a Synapse Dedicated SQL Pool table, the following warning message is displayed below the respective cell:
+ * "warning: there was one deprecation warning; for details, enable `:setting -deprecation' or`:replay -deprecation'"
+ * Mitigation
+ * This is related to the deprecated `sqlanalytics` signature.
+ * End users can safely ignore this warning. It does not affect use of the `synapsesql` method.
+
+## Things to Note
The Connector leverages the capabilities of dependent resources (Azure Storage and Synapse Dedicated SQL Pool) to achieve efficient data transfers. Following are a few important aspects that must be taken into consideration when tuning for optimized performance (note that optimized doesn't necessarily mean fast; it also relates to predictable outcomes):
synapse-analytics Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/cheat-sheet.md
Knowing the types of operations in advance helps you optimize the design of your
## Data migration
-First, load your data into [Azure Data Lake Storage](../../data-factory/connector-azure-data-lake-store.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) or Azure Blob Storage. Next, use the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to load your data into staging tables. Use the following configuration:
+First, load your data into [Azure Data Lake Storage](../../data-factory/connector-azure-data-lake-store.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json) or Azure Blob Storage. Next, use the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true) to load your data into staging tables. Use the following configuration:
| Design | Recommendation | |: |: |
synapse-analytics Design Elt Data Loading https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md
Traditional SMP dedicated SQL pools use an Extract, Transform, and Load (ETL) pr
Using an Extract, Load, and Transform (ELT) process leverages built-in distributed query processing capabilities and eliminates the resources needed for data transformation prior to loading.
-While dedicated SQL pools support many loading methods, including popular SQL Server options such as [bcp](/sql/tools/bcp-utility?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) and the [SqlBulkCopy API](/dotnet/api/system.data.sqlclient.sqlbulkcopy?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json), the fastest and most scalable way to load data is through PolyBase external tables and the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
+While dedicated SQL pools support many loading methods, including popular SQL Server options such as [bcp](/sql/tools/bcp-utility?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true) and the [SqlBulkCopy API](/dotnet/api/system.data.sqlclient.sqlbulkcopy?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json), the fastest and most scalable way to load data is through PolyBase external tables and the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true).
With PolyBase and the COPY statement, you can access external data stored in Azure Blob storage or Azure Data Lake Store via the T-SQL language. For the most flexibility when loading, we recommend using the COPY statement.
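As a minimal sketch (the table name, storage path, and options are placeholders, and a CREDENTIAL clause may be required for storage that isn't publicly accessible), a COPY load into a staging table looks roughly like this:

```SQL
-- Illustrative COPY load of delimited text from Azure Storage into a staging table.
COPY INTO [dbo].[StagingSales]
FROM 'https://<account>.blob.core.windows.net/<container>/sales/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    FIRSTROW = 2
);
```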
Getting data out of your source system depends on the storage location. The goal
With PolyBase and the COPY statement, you can load data from UTF-8 and UTF-16 encoded delimited text or CSV files. In addition to delimited text or CSV files, it loads from the Hadoop file formats such as ORC and Parquet. PolyBase and the COPY statement can also load data from Gzip and Snappy compressed files.
-Extended ASCII, fixed-width format, and nested formats such as WinZip or XML aren't supported. If you're exporting from SQL Server, you can use the [bcp command-line tool](/sql/tools/bcp-utility?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to export the data into delimited text files.
+Extended ASCII, fixed-width format, and nested formats such as WinZip or XML aren't supported. If you're exporting from SQL Server, you can use the [bcp command-line tool](/sql/tools/bcp-utility?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true) to export the data into delimited text files.
## 2. Land the data into Azure Blob storage or Azure Data Lake Store
-To land the data in Azure storage, you can move it to [Azure Blob storage](../../storage/blobs/storage-blobs-introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) or [Azure Data Lake Store Gen2](../../data-lake-store/data-lake-store-overview.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). In either location, the data should be stored in text files. PolyBase and the COPY statement can load from either location.
+To land the data in Azure storage, you can move it to [Azure Blob storage](../../storage/blobs/storage-blobs-introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json) or [Azure Data Lake Store Gen2](../../data-lake-store/data-lake-store-overview.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json). In either location, the data should be stored in text files. PolyBase and the COPY statement can load from either location.
Tools and services you can use to move data to Azure Storage: -- [Azure ExpressRoute](../../expressroute/expressroute-introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) service enhances network throughput, performance, and predictability. ExpressRoute is a service that routes your data through a dedicated private connection to Azure. ExpressRoute connections do not route data through the public internet. The connections offer more reliability, faster speeds, lower latencies, and higher security than typical connections over the public internet.-- [AzCopy utility](../../storage/common/storage-choose-data-transfer-solution.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) moves data to Azure Storage over the public internet. This works if your data sizes are less than 10 TB. To perform loads on a regular basis with AzCopy, test the network speed to see if it is acceptable.-- [Azure Data Factory (ADF)](../../data-factory/introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) has a gateway that you can install on your local server. Then you can create a pipeline to move data from your local server up to Azure Storage. To use Data Factory with dedicated SQL pools, see [Loading data for dedicated SQL pools](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
+- [Azure ExpressRoute](../../expressroute/expressroute-introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json) service enhances network throughput, performance, and predictability. ExpressRoute is a service that routes your data through a dedicated private connection to Azure. ExpressRoute connections do not route data through the public internet. The connections offer more reliability, faster speeds, lower latencies, and higher security than typical connections over the public internet.
+- [AzCopy utility](../../storage/common/storage-choose-data-transfer-solution.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json) moves data to Azure Storage over the public internet. This works if your data sizes are less than 10 TB. To perform loads on a regular basis with AzCopy, test the network speed to see if it is acceptable.
+- [Azure Data Factory (ADF)](../../data-factory/introduction.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json) has a gateway that you can install on your local server. Then you can create a pipeline to move data from your local server up to Azure Storage. To use Data Factory with dedicated SQL pools, see [Loading data for dedicated SQL pools](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json).
## 3. Prepare the data for loading
If you are using PolyBase, you need to define external tables in your dedicated
Defining external tables involves specifying the data source, the format of the text files, and the table definitions. T-SQL syntax reference articles that you will need are: -- [CREATE EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)-- [CREATE EXTERNAL FILE FORMAT](/sql/t-sql/statements/create-external-file-format-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)-- [CREATE EXTERNAL TABLE](/sql/t-sql/statements/create-external-table-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
+- [CREATE EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true)
+- [CREATE EXTERNAL FILE FORMAT](/sql/t-sql/statements/create-external-file-format-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true)
+- [CREATE EXTERNAL TABLE](/sql/t-sql/statements/create-external-table-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true)
Use the following SQL data type mapping when loading Parquet files:
To format the text files:
- If your data is coming from a non-relational source, you need to transform it into rows and columns. Whether the data is from a relational or non-relational source, the data must be transformed to align with the column definitions for the table into which you plan to load the data. - Format data in the text file to align with the columns and data types in the destination table. Misalignment between data types in the external text files and the dedicated SQL pool table causes rows to be rejected during the load.-- Separate fields in the text file with a terminator. Be sure to use a character or a character sequence that isn't found in your source data. Use the terminator you specified with [CREATE EXTERNAL FILE FORMAT](/sql/t-sql/statements/create-external-file-format-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
+- Separate fields in the text file with a terminator. Be sure to use a character or a character sequence that isn't found in your source data. Use the terminator you specified with [CREATE EXTERNAL FILE FORMAT](/sql/t-sql/statements/create-external-file-format-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true).
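As a minimal illustration of the formatting guidance above, the following Python sketch writes rows to a pipe-delimited text file whose column order matches a hypothetical destination table. The column values, file name, and the `|` terminator are assumptions for illustration only, not values from this article.

```python
import csv

# Hypothetical rows already transformed to match the destination table's
# column order and data types (illustrative data only).
rows = [
    (1, "Alice", "2021-06-01"),
    (2, "Bob", "2021-06-02"),
]

# Use a field terminator that never appears in the source data; '|' is assumed here.
with open("dim_customer.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="|", lineterminator="\n")
    for row in rows:
        writer.writerow(row)
```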
## 4. Load the data using PolyBase or the COPY statement
To load data, you can use any of these loading options:
- The [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) is the recommended loading utility as it enables you to seamlessly and flexibly load data. The statement has many additional loading capabilities that PolyBase does not provide. See the [NY taxi cab COPY tutorial](./load-data-from-azure-blob-storage-using-copy.md) to run through a sample tutorial. - [PolyBase with T-SQL](./sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md) requires you to define external data objects.-- [PolyBase and COPY statement with Azure Data Factory (ADF)](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) is another orchestration tool. It defines a pipeline and schedules jobs.-- [PolyBase with SSIS](/sql/integration-services/load-data-to-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) works well when your source data is in SQL Server. SSIS defines the source to destination table mappings, and also orchestrates the load. If you already have SSIS packages, you can modify the packages to work with the new data warehouse destination.
+- [PolyBase and COPY statement with Azure Data Factory (ADF)](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json) is another orchestration tool. It defines a pipeline and schedules jobs.
+- [PolyBase with SSIS](/sql/integration-services/load-data-to-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true) works well when your source data is in SQL Server. SSIS defines the source to destination table mappings, and also orchestrates the load. If you already have SSIS packages, you can modify the packages to work with the new data warehouse destination.
- [PolyBase with Azure Databricks](/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse?bc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2fsql-data-warehouse%2ftoc.json) transfers data from a table to a Databricks dataframe and/or writes data from a Databricks dataframe to a table using PolyBase. ### Other loading options
-In addition to PolyBase and the COPY statement, you can use [bcp](/sql/tools/bcp-utility?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) or the [SqlBulkCopy API](/dotnet/api/system.data.sqlclient.sqlbulkcopy?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). bcp loads directly to the database without going through Azure Blob storage, and is intended only for small loads.
+In addition to PolyBase and the COPY statement, you can use [bcp](/sql/tools/bcp-utility?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&view=azure-sqldw-latest&preserve-view=true) or the [SqlBulkCopy API](/dotnet/api/system.data.sqlclient.sqlbulkcopy?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json). bcp loads directly to the database without going through Azure Blob storage, and is intended only for small loads.
> [!NOTE] > The load performance of these options is slower than PolyBase and the COPY statement.
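As a sketch of the COPY statement option described earlier in this section, the following Python snippet submits a COPY INTO command to a dedicated SQL pool using pyodbc. The connection string, table name, storage URL, and file settings are placeholders for illustration, not values from this article, and authentication details will differ in a real deployment.

```python
import pyodbc

# Placeholder connection details; replace with your dedicated SQL pool endpoint.
conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<your-server>.sql.azuresynapse.net,1433;"
    "Database=<your-database>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)

# Illustrative COPY INTO statement loading pipe-delimited text from Blob storage.
copy_sql = """
COPY INTO dbo.DimCustomer
FROM 'https://<account>.blob.core.windows.net/<container>/dim_customer/*.txt'
WITH (
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = '|',
    FIRSTROW = 1
)
"""

with pyodbc.connect(conn_str, autocommit=True) as conn:
    conn.execute(copy_sql)
```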
synapse-analytics Sql Data Warehouse Overview Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-integrate.md
Azure Data Factory gives users a managed platform to create complex extract and
* **Stored Procedures**: Orchestrate the execution of stored procedures. * **Copy**: Use ADF to move data into dedicated SQL pool (formerly SQL DW). This operation can use ADF's standard data movement mechanism or PolyBase under the covers.
-For more information, see [Integrate with Azure Data Factory](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
+For more information, see [Integrate with Azure Data Factory](../../data-factory/load-azure-sql-data-warehouse.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json).
## Azure Machine Learning
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md
Previously updated : 11/02/2021 Last updated : 03/23/2022
The `OPENROWSET(BULK...)` function allows you to access files in Azure Storage.
The `OPENROWSET` function can be referenced in the `FROM` clause of a query as if it were a table name `OPENROWSET`. It supports bulk operations through a built-in BULK provider that enables data from a file to be read and returned as a rowset.
+> [!NOTE]
+> The OPENROWSET function is not supported in dedicated SQL pool.
+ ## Data source OPENROWSET function in Synapse SQL reads the content of the file(s) from a data source. The data source is an Azure storage account and it can be explicitly referenced in the `OPENROWSET` function or can be dynamically inferred from URL of the files that you want to read.
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-features.md
Synapse SQL pools enable you to use built-in security features to secure your da
| **Storage Azure Active Directory (Azure AD) passthrough authentication** | Yes | Yes, [Azure AD passthrough authentication](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types) is applicable to Azure AD logins. The identity of the Azure AD user is passed to the storage if a credential is not specified. Azure AD passthrough authentication is not available for the SQL users. | | **Storage shared access signature (SAS) token authentication** | No | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) with [shared access signature token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) or instance-level [CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) with [shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential). | | **Storage Access Key authentication** | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No, [use SAS token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) instead of storage access key. |
-| **Storage [Managed Identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) authentication** | Yes, using [Managed Service Identity Credential](../../azure-sql/database/vnet-service-endpoint-rule-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&preserve-view=true&toc=%2fazure%2fsynapse-analytics%2ftoc.json&view=azure-sqldw-latest&preserve-view=true) | Yes, The query can access the storage using the workspace [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) credential. |
+| **Storage [Managed Identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) authentication** | Yes, using [Managed Service Identity Credential](../../azure-sql/database/vnet-service-endpoint-rule-overview.md?preserve-view=true&toc=%2fazure%2fsynapse-analytics%2ftoc.json&view=azure-sqldw-latest&preserve-view=true) | Yes, The query can access the storage using the workspace [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) credential. |
| **Storage Application identity/Service principal (SPN) authentication** | [Yes](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can create a [credential](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential) with a [service principal application ID](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types) that will be used to authenticate on the storage. | | **Server roles** | No | Yes, sysadmin, public, and other server-roles are supported. | | **SERVER SCOPED CREDENTIAL** | No | Yes, the [server scoped credentials](develop-storage-files-storage-access-control.md?tabs=user-identity#server-scoped-credential) are used by the `OPENROWSET` function that do not uses explicit data source. |
Synapse SQL pools enable you to use built-in security features to secure your da
| **Permissions - [Database-level](/sql/relational-databases/security/authentication-access/database-level-roles?view=azure-sqldw-latest&preserve-view=true)** | Yes | Yes, you can grant, deny, or revoke permissions on the database objects. | | **Permissions - Schema-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema | Yes, you can specify schema-level permissions including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema. | | **Permissions - Object-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users | Yes, you can GRANT, DENY, and REVOKE permissions to users/logins on the system objects that are supported. |
-| **Permissions - [Column-level security](../sql-data-warehouse/column-level-security.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)** | Yes | Yes, column-level security is supported in serverless SQL pools. |
+| **Permissions - [Column-level security](../sql-data-warehouse/column-level-security.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json)** | Yes | Yes, column-level security is supported in serverless SQL pools. |
| **Row-level security** | [Yes](/sql/relational-databases/security/row-level-security?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) | No, there is no built-in support for the row-level security. Use custom views as a [workaround](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759). | | **Data masking** | [Yes](../guidance/security-white-paper-access-control.md#dynamic-data-masking) | No, built-in data masking is not supported in the serverless SQL pools. Use wrapper SQL views that explicitly mask some columns as a workaround. | | **Built-in/system security &amp; identity functions** | Some Transact-SQL security functions and operators: `CURRENT_USER`, `HAS_DBACCESS`, `IS_MEMBER`, `IS_ROLEMEMBER`, `SESSION_USER`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, `OPEN/CLOSE MASTER KEY` | Some Transact-SQL security functions and operators are supported: `CURRENT_USER`, `HAS_DBACCESS`, `HAS_PERMS_BY_NAME`, `IS_MEMBER`, `IS_ROLEMEMBER`, `IS_SRVROLEMEMBER`, `SESSION_USER`, `SESSION_CONTEXT`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, and `REVERT`. Security functions cannot be used to query external data (store the result in variable that can be used in the query). |
Data that is analyzed can be stored on various storage types. The following tabl
| **Azure SQL/SQL Server (remote)** | No | No, serverless SQL pool cannot reference Azure SQL database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance). | | **Dataverse** | No, you can [load CosmosDB data into a dedicated pool using Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168) or Spark. | Yes, you can read Dataverse tables using [Synapse link](/powerapps/maker/data-platform/azure-synapse-link-data-lake). | | **Azure Cosmos DB transactional storage** | No | No, you cannot access Cosmos DB containers to update data or read data from the Cosmos DB transactional storage. Use [Spark pools to update the Cosmos DB](../synapse-link/how-to-query-analytical-store-spark.md) transactional storage. |
-| **Azure Cosmos DB analytical storage** | No, you can [load CosmosDB data into a dedicated pool using Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168), ADF, Spark or some other load tool. | Yes, you can [query Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Synapse Link](../../cosmos-db/synapse-link.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
+| **Azure Cosmos DB analytical storage** | No, you can [load CosmosDB data into a dedicated pool using Synapse Link in serverless SQL pool (via ADLS)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/loading-cosmosdb-and-dataverse-data-into-dedicated-sql-pool-dw/ba-p/3104168), ADF, Spark or some other load tool. | Yes, you can [query Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Synapse Link](../../cosmos-db/synapse-link.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
| **Apache Spark tables (in workspace)** | No | Yes, serverless pool can read PARQUET and CSV tables using [metadata synchronization](develop-storage-files-spark-tables.md). | | **Apache Spark tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). However, you can manually create an external table that reference external Spark table location. | | **Databricks tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). However, you can manually create an external table that reference Databricks table location. |
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Title: Azure Virtual Desktop authentication - Azure
-description: Authentication methods for Azure Virtual Desktop.
+ Title: Azure Virtual Desktop identities and authentication - Azure
+description: Identities and authentication methods for Azure Virtual Desktop.
- Last updated 12/07/2021
-# Supported authentication methods
+# Supported identities and authentication methods
-In this article, we'll give you a brief overview of what kinds of authentication you can use in Azure Virtual Desktop.
+In this article, we'll give you a brief overview of what kinds of identities and authentication methods you can use in Azure Virtual Desktop.
## Identities
virtual-desktop Autoscale Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-diagnostics.md
The following JSON file is an example of what you'll see when you open a report:
- Review how to create a scaling plan at [Autoscale for Azure Virtual Desktop session hosts](autoscale-scaling-plan.md). - [Assign your scaling plan to new or existing host pools](autoscale-new-existing-host-pool.md). - Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md).
+- For examples of how the autoscale feature works, see [Autoscale example scenarios](autoscale-scenarios.md).
- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.
virtual-desktop Autoscale Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-glossary.md
# Autoscale (preview) glossary
+> [!IMPORTANT]
+> The autoscale feature is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ This article is a list of definitions for key terms and concepts related to the autoscale (preview) feature for Azure Virtual Desktop. ## Autoscale
A scaling plan is an Azure Virtual Desktop Azure Resource Manager object that de
Schedules are sub-resources of [scaling plans](#scaling-plan) that specify the start time, capacity threshold, minimum percentage of hosts, load-balancing algorithm, and other configuration settings for the different phases of the day.
-## Ramp up
+## Ramp-up
The ramp-up phase of a [scaling plan](#scaling-plan) [schedule](#schedule) is usually at the beginning of the work day, when users start to sign in and start their sessions. In this phase, the number of [active user sessions](#active-user-session) usually increases at a rapid pace without reaching the maximum number of active sessions for the day yet.
The ramp-up phase of a [scaling plan](#scaling-plan) [schedule](#schedule) is us
The peak phase of a [scaling plan](#scaling-plan) [schedule](#schedule) is when your host pool reaches the maximum number of [active user sessions](#active-user-session) for the day. In this phase, the number of active sessions usually holds steady until the peak phase ends. New active user sessions can be established during this phase, but usually at a slower rate than the ramp-up phase.
-## Ramp down
+## Ramp-down
The ramp-down phase of a [scaling plan](#scaling-plan) [schedule](#schedule) is usually at the end of the work day, when users start to sign out and end their sessions for the evening. In this phase, the number of [active user sessions](#active-user-session) usually decreases rapidly.
An exclusion tag is a property of a [scaling plan](#scaling-plan) that's a tag n
## Next steps - For more information about the autoscale feature, see the [autoscale feature document](autoscale-scaling-plan.md).
+- For examples of how the autoscale feature works, see [Autoscale example scenarios](autoscale-scenarios.md).
- For more information about the scaling script, see the [scaling script document](set-up-scaling-script.md).
virtual-desktop Autoscale New Existing Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-new-existing-host-pool.md
To edit an existing scaling plan:
- Review how to create a scaling plan at [Autoscale (preview) for Azure Virtual Desktop session hosts](autoscale-new-existing-host-pool.md). - Learn how to troubleshoot your scaling plan at [Enable diagnostics for your scaling plan](autoscale-diagnostics.md). - Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md).
+- For examples of how the autoscale feature works, see [Autoscale example scenarios](autoscale-scenarios.md).
- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
Now that you've created your scaling plan, here are some things you can do:
- [Assign your scaling plan to new and existing host pools](autoscale-new-existing-host-pool.md) - [Enable diagnostics for your scaling plan](autoscale-diagnostics.md)
-If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). You can also look at our [autoscale FAQ](autoscale-faq.yml) if you have additional questions.
+If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). For examples of how the autoscale feature works, see [Autoscale example scenarios](autoscale-scenarios.md). You can also look at our [autoscale FAQ](autoscale-faq.yml) if you have additional questions.
virtual-desktop Autoscale Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scenarios.md
+
+ Title: Azure Virtual Desktop autoscale example scenarios preview
+description: A collection of four example scenarios that illustrate how various parts of the autoscale feature for Azure Virtual Desktop work.
++ Last updated : 03/23/2022+++
+# Autoscale (preview) example scenarios
+
+> [!IMPORTANT]
+> The autoscale feature is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this article, we're going to walk you through four scenarios that show how different parts of the autoscale (preview) feature work. In each section, we'll have tables that show the example host pool's settings and animated visual demonstrations.
+
+## Scenario 1: When does the autoscale feature turn virtual machines on?
+
+In this scenario, we'll demonstrate that the autoscale feature can turn on virtual machines (VMs) in any phase of the scaling plan schedule when the used host pool capacity exceeds the capacity threshold.
+
+For example, let's look at the following host pool setup as described in this table:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-up |
+|Total session hosts | 6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 30% |
+|Minimum percentage of hosts | 30% |
+|Available session hosts | 2 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 10 |
+|User sessions | 0 |
+|Used host pool capacity | 0% |
+
+>[!NOTE]
+>To learn more about what the parameter terms mean, see [our autoscale glossary](autoscale-glossary.md).
+
+At the beginning of this phase, the autoscale feature has turned on two session hosts to match the minimum percentage of hosts. Although 30% of six isn't a whole number, the autoscale feature rounds up to the nearest whole number. Having two available session hosts and a maximum session limit of five sessions per host means that this host pool has an available host pool capacity of 10. Since there aren't currently any user sessions, the used host pool capacity is 0%.
+
+When the day begins, let's say three users sign in and start user sessions. Their user sessions get evenly distributed across the two available session hosts since the load balancing algorithm is breadth first. The available host pool capacity is still 10, but with the three new user sessions, the used host pool capacity is now 30%. However, the autoscale feature won't turn on virtual machines (VMs) until the used host pool capacity is greater than the capacity threshold. In this example, the capacity threshold is 30%, so the autoscale feature won't turn on any VMs yet.
+
+At this point, the host pool's parameters look like this:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-up |
+|Total session hosts |6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 30% |
+|Minimum percentage of hosts | 30% |
+|Available session hosts | 2 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 10 |
+|User sessions | 3 |
+|Used host pool capacity | 30% |
+
+When another user signs in and starts a session, there are now four total user sessions distributed across two session hosts. The used host pool capacity is now 40%, which is greater than the capacity threshold. As a result, the autoscale feature will turn on another session host to bring the used host pool capacity to less than or equal to the capacity threshold (30%).
+
+In summary, here are the parameters when the used host pool capacity exceeds the capacity threshold:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-up |
+|Total session hosts | 6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 30% |
+|Minimum percentage of hosts | 30% |
+|Available session hosts | 2 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 10 |
+|User sessions | 4 |
+|Used host pool capacity | 40% |
+
+Here are the parameters after autoscale turns on another session host:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-up |
+|Total session hosts | 6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 30% |
+|Minimum percentage of hosts | 30% |
+|Available session hosts | 3 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 15 |
+|User sessions | 4 |
+|Used host pool capacity | 27% |
+
+Turning on another session host means there are now three available session hosts in the host pool. With the maximum session limit still being five, the available host pool capacity has gone up to 15. Because the available host pool capacity increased, the used host pool capacity has gone down to 27%, which is below the 30% capacity threshold.
+
+When another user signs in, there are now five user sessions spread across three available session hosts. The used host pool capacity is now 33%, which is over the 30% capacity threshold. Exceeding the capacity threshold activates the autoscale feature to turn on another session host.
+
+Since our example is in the ramp-up phase, new users are likely to keep signing in. As more users arrive, the pattern becomes clearer:
+
+| Total user sessions | Number of available session hosts | Available host pool capacity |Capacity threshold | Used host pool capacity | Does autoscale turn on another session host? |
+|-||||||
+|5|3|15|30%|33%|Yes|
+|5|4|20|30%|25%|No|
+|6|4|20|30%|30%|No|
+|7|4|20|30%|35%|Yes|
+|7|5|25|30%|28%|No|
+
+As this table shows, the autoscale feature only turns on new session hosts when the used host pool capacity goes over the capacity threshold. If the used host pool capacity is at or below the capacity threshold, the autoscale feature won't turn on new session hosts.
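The scale-out rule this scenario walks through amounts to a small calculation. The following Python sketch is an illustration based on the numbers above, not the service's actual implementation; it reproduces the decision for a few rows of the table:

```python
import math

MAX_SESSION_LIMIT = 5      # sessions per session host
CAPACITY_THRESHOLD = 0.30  # 30%
MIN_HOSTS_PERCENT = 0.30   # 30%
TOTAL_SESSION_HOSTS = 6

# The minimum number of available hosts is rounded up to the nearest whole number.
min_available_hosts = math.ceil(MIN_HOSTS_PERCENT * TOTAL_SESSION_HOSTS)  # 2

def should_turn_on_host(user_sessions: int, available_hosts: int) -> bool:
    """Turn on another host only when used capacity exceeds the threshold."""
    available_capacity = available_hosts * MAX_SESSION_LIMIT
    used_capacity = user_sessions / available_capacity
    return used_capacity > CAPACITY_THRESHOLD

# Reproduce a few rows from the table above.
print(should_turn_on_host(5, 3))  # True  (33% > 30%)
print(should_turn_on_host(6, 4))  # False (30% is not above the threshold)
print(should_turn_on_host(7, 4))  # True  (35% > 30%)
```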
+
+The following animation is a visual recap of what we just went over in Scenario 1.
++
+## Scenario 2: When does the autoscale feature turn virtual machines off?
+
+In this scenario, we'll show that the autoscale feature turns off session hosts when all of the following things are true:
+
+- The used host pool capacity is below the capacity threshold.
+- The autoscale feature can turn off session hosts without exceeding the capacity threshold.
+- The autoscale feature only turns off session hosts with no user sessions on them (unless the scaling plan is in ramp-down phase and you've enabled the force logoff setting).
+
+For this scenario, the host pool starts off looking like this:
+
+|Parameter | Value|
+|||
+|Phase | Peak |
+|Total session hosts | 6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 30% |
+|Minimum percentage of hosts | 30% |
+|Available session hosts | 5 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 25 |
+|User sessions | 7 |
+|Used host pool capacity | 28% |
+
+Because we're in the peak phase, we can expect the number of users to remain relatively stable. However, to keep the amount of resources used stable while also remaining efficient, the autoscale feature will turn session hosts on and off as needed.
+
+So, let's say that there are seven users signed in during peak hours. If the total number of user sessions is seven, that would make the used host pool capacity 28%. Because autoscale can't turn off a session host without the used host pool capacity exceeding the capacity threshold, the autoscale feature won't turn off any session hosts yet.
+
+If two of the seven users sign out during their lunch break, that leaves five user sessions across five session hosts. Since the maximum session limit is still five, the available host pool capacity is 25. Having only five users means that the used host pool capacity is now 20%. The autoscale feature must now check if it can turn off a session host without making the used host pool capacity go above the capacity threshold.
+
+If the autoscale feature turned off a session host, the available host pool capacity would be 20. With five users, the used host pool capacity would then be 25%. Because 25% is less than the capacity threshold of 30%, the autoscale feature will select a session host without user sessions on it, put it in drain mode, and turn it off.
+
+Once the autoscale feature turns off one of the session hosts without user sessions, there are four available session hosts left. The host pool maximum session limit is still five, so the available host pool capacity is 20. Since there are five user sessions, the used host pool capacity is 25%, which is still below the capacity threshold.
+
+However, if another user signs out and heads out for lunch, there are now four user sessions spread across the four session hosts in the host pool. Since the maximum session limit is still five, the available host pool capacity is 20, and the used host pool capacity is 20%. Turning off another session host would leave three session hosts and an available host pool capacity of 15, which would cause the used host pool capacity to jump up to around 27%. Even though 27% is below the capacity threshold, there are no session hosts with zero user sessions on it. The autoscale feature will select the session host with the least number of user sessions, put it in drain mode, and wait for all user sessions to sign out before turning it off. If at any point the used host pool capacity gets to a point where the autoscale feature can no longer turn off the session host, it will take the session host out of drain mode.
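The scale-in check described in this scenario can be sketched the same way. This Python snippet is an illustration under the same assumptions as the scenario above, not the actual service logic; it tests whether turning off one session host would keep the used host pool capacity at or below the capacity threshold:

```python
MAX_SESSION_LIMIT = 5
CAPACITY_THRESHOLD = 0.30

def can_turn_off_host(user_sessions: int, available_hosts: int) -> bool:
    """A host can be turned off only if the remaining capacity keeps the
    used host pool capacity at or below the capacity threshold."""
    if available_hosts <= 1:
        return False
    remaining_capacity = (available_hosts - 1) * MAX_SESSION_LIMIT
    return user_sessions / remaining_capacity <= CAPACITY_THRESHOLD

print(can_turn_off_host(7, 5))  # False (7 / 20 = 35% would exceed 30%)
print(can_turn_off_host(5, 5))  # True  (5 / 20 = 25% stays at or below 30%)
print(can_turn_off_host(4, 4))  # True  (4 / 15 is about 27%; a host with sessions is drained first)
```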
+
+The following animation is a visual recap of what we just went over in Scenario 2.
++
+## Scenario 3: When does the autoscale feature force users to sign out?
+
+The autoscale feature only forces users to sign out if you've enabled the **force logoff** setting during the ramp-down phase of your scaling plan schedule. The force logoff setting won't sign out users during any other phase of the scaling plan schedule.
+
+For example, let's look at a host pool with the following parameters:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-down |
+|Total session hosts | 6 |
+|Load balancing algorithm | Depth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 4 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 20 |
+|User sessions | 4 |
+|Used host pool capacity | 20% |
+
+During the ramp-down phase, the host pool admin has set the capacity threshold to 75% and the minimum percentage of hosts to 10%. Having a high capacity threshold and a low minimum percentage of hosts in this phase decreases the need to turn on new session hosts at the end of the workday.
+
+For this scenario, let's say that there are currently four users on the four available session hosts in this host pool. Since the available host pool capacity is 20, that means the used host pool capacity is 20%. Based on this information, the autoscale feature detects that it can turn off two session hosts without going over the capacity threshold of 75%. However, since there are user sessions on all the session hosts in the host pool, in order to turn off two session hosts, the autoscale feature will need to force users to sign out.
+
+When you've enabled the force logoff setting, the autoscale feature will select the session hosts with the fewest user sessions, then put the session hosts in drain mode. The autoscale feature then sends users in the selected session hosts notifications that they're going to be forcibly signed out of their sessions after a certain time. Once that time has passed, if the users haven't already ended their sessions, the autoscale feature will forcibly end their sessions for them. In this scenario, since there are equal numbers of user sessions on each of the session hosts in the host pool, the autoscale feature will choose two session hosts at random to forcibly sign out all their users and will then turn off the session hosts.
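As a rough illustration of the selection order just described, and only under assumed host names and session counts rather than anything from the service itself, picking which hosts to drain could look like this:

```python
# Hypothetical map of session host name -> current user session count.
session_counts = {"host-1": 1, "host-2": 1, "host-3": 1, "host-4": 1}

def pick_hosts_to_drain(session_counts: dict, hosts_to_turn_off: int) -> list:
    """Select the session hosts with the fewest user sessions to drain first."""
    by_fewest_sessions = sorted(session_counts, key=session_counts.get)
    return by_fewest_sessions[:hosts_to_turn_off]

print(pick_hosts_to_drain(session_counts, 2))  # for equal counts, effectively a random pair
```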
+
+Once the autoscale feature turns off the two session hosts, the available host pool capacity is now 10. Now that there are only two user sessions left, the used host pool capacity is 20%, as shown in the following table.
+
+|Parameter | Value|
+|||
+|Phase | Ramp-down |
+|Total session hosts | 6 |
+|Load balancing algorithm | Depth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 2 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 10 |
+|User sessions | 2 |
+|Used host pool capacity | 20% |
+
+Now, let's say that the two users who were forced to sign out want to continue doing work and sign back in. Since the available host pool capacity is still 10, the used host pool capacity is now 40%, which is below the capacity threshold of 75%. However, the autoscale feature can't turn off more session hosts, because that would leave only one available session host and an available host pool capacity of five. With four users, that would make the used host pool capacity 80%, which is above the capacity threshold.
+
+So now the parameters look like this:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-down |
+|Total session hosts | 6 |
+|Load balancing algorithm | Depth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 2 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 10 |
+|User sessions | 4 |
+|Used host pool capacity | 40% |
+
+If at this point another user signs out, that leaves only three user sessions distributed across the two available session hosts. In other words, the host pool now looks like this:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-down |
+|Total session hosts | 6 |
+|Load balancing algorithm | Depth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 2 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 10 |
+|User sessions | 3 |
+|Used host pool capacity | 30% |
+
+Because the maximum session limit is still five and the available host pool capacity is 10, the used host pool capacity is now 30%. The autoscale feature can now turn off one session host without exceeding the capacity threshold. The autoscale feature turns off a session host by choosing the session host with the fewest number of user sessions on it. The autoscale feature then puts the session host in drain mode, sends users a notification that says the session host will be turned off, then after a set amount of time, forcibly signs any remaining users out and turns it off. After doing so, there's now one remaining available session host in the host pool with a maximum session limit of five, making the available host pool capacity five.
+
+Since autoscale forced a user to sign out when turning off the chosen session host, there are now only two user sessions left, which makes the used host pool capacity 40%.
+
+To recap, here's what the host pool looks like now:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-down |
+|Total session hosts | 6 |
+|Maximum session limit | 5 |
+|Load balancing algorithm | Depth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available host pool capacity | 5 |
+|User sessions | 2 |
+|Available session hosts | 1 |
+|Used host pool capacity | 40% |
+
+After that, let's imagine that the user who was forced to sign out signs back in, making the host pool look like this:
+
+|Parameter | Value|
+|||
+|Phase | Ramp-down |
+|Total session hosts | 6 |
+|Load balancing algorithm | Depth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 1 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 5 |
+|User sessions | 3 |
+|Used host pool capacity | 60% |
+
+Now there are three user sessions in the host pool. However, the host pool capacity is still five, which means the used host pool capacity is 60% and below the capacity threshold. Because turning off the remaining session host would make the available host pool capacity zero, which is below the 10% minimum percentage of hosts, the autoscale feature will ensure that there's always at least one available session host during the ramp-down phase.
+
+The following animation is a visual recap of what we just went over in Scenario 3.
++
+## Scenario 4: How do exclusion tags work?
+
+When a virtual machine has a tag name that matches the scaling plan exclusion tag, the autoscale feature won't turn it on, off, or change its drain mode setting. Exclusion tags are applicable in all phases of your scaling plan schedule.
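The effect of an exclusion tag can be pictured as a filter applied before any scaling decision. The following Python sketch is illustrative only; the tag name and VM names are assumptions, and it simply shows tagged VMs being removed from consideration:

```python
EXCLUSION_TAG = "excludeFromScaling"  # assumed scaling plan exclusion tag name

# Hypothetical session host VMs and their tags.
vms = [
    {"name": "host-1", "tags": {}},
    {"name": "host-2", "tags": {EXCLUSION_TAG: "true"}},
    {"name": "host-3", "tags": {EXCLUSION_TAG: "true"}},
]

# Autoscale only ever turns on, turns off, or changes drain mode on untagged VMs.
eligible = [vm["name"] for vm in vms if EXCLUSION_TAG not in vm["tags"]]
print(eligible)  # ['host-1']
```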
+
+Here's the example host pool we're starting with:
+
+|Parameter | Value|
+|||
+|Phase | Off-peak |
+|Total session hosts | 6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 1 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 5 |
+|User sessions | 3 |
+|Used host pool capacity | 60% |
+
+In this example scenario, the host pool admin applies the scaling plan exclusion tag to five out of the six session hosts. When a new user signs in, that brings the total number of user sessions up to four. There's only one available session host and the host pool's maximum session limit is still five, so the available host pool capacity is five. The used host pool capacity is 80%. However, even though the used host pool capacity is greater than the capacity threshold, the autoscale feature won't turn on any other session hosts because all of the session hosts except for the one currently running have been tagged with the exclusion tag.
+
+So, now the host pool looks like this:
+
+|Parameter | Value|
+|||
+|Phase | Off-peak |
+|Total session hosts | 6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 1|
+|Maximum session limit | 5 |
+|Available host pool capacity | 5 |
+|User sessions | 4 |
+|Used host pool capacity | 80% |
+
+Next, let's say all four users have signed out, leaving no user sessions left on the available session host. Because there are no user sessions in the host pool, the used host pool capacity is 0. The autoscale feature will keep this single session host on despite it having no users, because during the off-peak phase, autoscale's minimum percentage of hosts setting dictates that it needs to keep at least one session host available during this phase.
+
+To summarize, the host pool now looks like this:
+
+|Parameter | Value|
+|||
+|Phase | Off-peak |
+|Total session hosts | 6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 1 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 5 |
+|User sessions | 0 |
+|Used host pool capacity | 0% |
+
+If the admin applies the exclusion tag name to the last untagged session host virtual machine and turns it off, then that means even if other users try to sign in, autoscale won't be able to turn on a VM to accommodate their user session. That user will see a "No resources available" error.
+
+However, being unable to turn VMs back on means that the host pool won't be able to meet its minimum percentage of hosts. To fix any potential problems this causes, the admin removes the exclusion tags from two of the VMs. Autoscale only turns on one of the VMs, because it only needs one VM to meet the 10% minimum requirement.
+
+So, finally, the host pool will look like this:
+
+|Parameter | Value|
+|||
+|Phase | Off-peak |
+|Total session hosts | 6 |
+|Load balancing algorithm | Breadth-first |
+|Capacity threshold | 75% |
+|Minimum percentage of hosts | 10% |
+|Available session hosts | 1 |
+|Maximum session limit | 5 |
+|Available host pool capacity | 5 |
+|User sessions | 0 |
+|Used host pool capacity | 0% |
+
+The following animation is a visual recap of what we just went over in Scenario 4.
++
+## Next steps
+
+- To review what the autoscale feature is and how it works, see [Autoscale (preview) for Azure Virtual Desktop host pools](autoscale-scaling-plan.md).
+- To learn how to enable scaling plans for the autoscale feature, see [Enable scaling plans for existing and new host pools (preview)](autoscale-new-existing-host-pool.md).
+- To review terms associated with the autoscale feature, see [the autoscale glossary](autoscale-glossary.md).
+- For answers to commonly asked questions about the autoscale feature, see [the autoscale FAQ](autoscale-faq.yml).
virtual-desktop Azure Monitor Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor-glossary.md
The most urgent items that you need to take care of right away. If you don't add
## Time to connect
-Time to connect is the time between when a user starts their session and when they're counted as being signed in to the service. Establishing new connections tends to take longer than reestablishing existing connections.
+Time to connect is the time between when a user clicks a resource to start their session and when their desktop has loaded and is ready to use. For RemoteApp use cases, this is the time to launch the application. For new sessions, this time encompasses two primary stages: connection, the Azure service time taken to route the user to a session host, and logon, the time taken to perform personalization and other tasks that establish a session on the session host. When monitoring time to connect, keep in mind the following things:
+
+* Time to connect is measured with the following checkpoints from AVD's service data:
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Begins: [WVDConnection](https://docs.microsoft.com/azure/azure-monitor/reference/tables/wvdconnections) state = started
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Ends: [WVDCheckpoints](https://docs.microsoft.com/azure/azure-monitor/reference/tables/wvdcheckpoints) Name = ShellReady (desktops); Name = first app launch for RemoteApp (RdpShellAppExecuted)
+
+As an example, the time for a desktop experience to launch would be measured up to the launch of Windows Explorer (explorer.exe).
+
+- Establishing new sessions tends to take longer than reestablishing connections to existing sessions due to Logon stages required in new session setup.
+
+- The time it takes the user to provide credentials is subtracted from their time to connect to help avoid signaling long connection times where a user may have had a long delay to enter credentials or use alternative authentication methods.
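Put together, the measurement amounts to simple timestamp arithmetic. The following Python sketch uses illustrative timestamps only, not real service data, to compute a time to connect from the start checkpoint, the shell-ready checkpoint, and the credential prompt duration that gets subtracted:

```python
from datetime import datetime, timedelta

# Illustrative checkpoint timestamps for one connection (assumed values).
connection_started = datetime(2022, 3, 23, 9, 0, 0)   # WVDConnections state = started
shell_ready = datetime(2022, 3, 23, 9, 0, 42)         # WVDCheckpoints name = ShellReady
credential_prompt = timedelta(seconds=12)              # time the user spent entering credentials

time_to_connect = (shell_ready - connection_started) - credential_prompt
print(time_to_connect.total_seconds())  # 30.0 seconds
```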
+
+When troubleshooting a high time to connect, you can break down the total connection time into a few components to help identify actionable ways to reduce logon time.
+
+>[!NOTE]
+>Only primary connection steps are surfaced in the stages, and these components can run in parallel, meaning they will not add up to the total time to connect.
+
+Connection stages:
+
+ ```mermaid
+flowchart LR
+ id0{{User Initiates Connection}}
+ id1[User Route]
+ id2[Stack Connect]
+ id3[Logon]
+ id4[Shell Start]
+ id5{{Session is Ready}}
+ id0 --> id1
+ id1 --> id2
+ id2 --> id3
+ id2 --> id4
+ id3 --> id5
+ id4 --> id5
+```
+- User route: Time from when the user clicks the icon to launch a session to when the service identifies a host to connect to. Network load, service load, or unique network traffic routing could lead to high routing times. Troubleshooting may require more detailed network path investigation.
+
+- Stack connected: Time from when the service has resolved a target session host for the user to when the connection is established from the session host to the user's remote client. Like user routing, network load, server load, or unique network traffic routing could lead to high connection times. An additional consideration for network routing is ensuring proxy settings on both the client and session host side are appropriately configured and that routing to the service is optimal.
+
+- Logon: Time from when the connection to a host is established to when the shell starts to load. Logon time includes several processes that can contribute to high logon time; you can use Logon stages in Insights to identify peaks. More details on the logon stages are provided in the next section.
+
+- Shell start to shell ready: Time from when the shell starts to load to when it is fully loaded and ready for use. The most likely sources of delays in this phase include session host overload (high CPU, memory, or disk activity) or configuration issues.
+
+Logon stages:
+
+- Profiles: The time it takes to load a user's profile for new sessions. This time will largely relate to profile sizes or user profile solutions in use (for example, User Experience Virtualization). For solutions making use of a network-stored profile, excess latency may also lead to longer profile loading times.
+
+- Group Policy (GPOs): Time it takes to apply group policies to new sessions. A spike in this time bucket indicates that you have too many group policies, the policies take too long to apply, or the session host is experiencing resource issues. As a further note, the Domain Controller (DC) needs to be close to session hosts for optimal GPO processing times.
+
+- Shell Start: The time it takes to launch the shell (usually explorer.exe).
+
+- FSLogix (Frxsvc): Time it takes to launch FSLogix in new sessions. If this time is slow, it may indicate issues with the shares used to host the FSLogix user profiles; ensure the shares are co-located with the session hosts and appropriately scaled for the volume of users signing in to the hosts. Additionally, larger profile sizes could contribute to slowness.
## User report
virtual-desktop Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rbac.md
Title: Built-in roles Azure Virtual Desktop - Azure
-description: An overview of built-in roles for Azure Virtual Desktop available for Azure RBAC.
+ Title: Built-in Azure RBAC roles Azure Virtual Desktop
+description: An overview of the built-in Azure RBAC roles available for Azure Virtual Desktop.
- Previously updated : 12/15/2020 Last updated : 03/22/2022
-# Built-in roles for Azure Virtual Desktop
+# Built-in Azure RBAC roles for Azure Virtual Desktop
-Azure Virtual Desktop uses Azure role-based access controls (RBAC) to assign roles to users and admins. These roles give admins permission to carry out certain tasks. To learn more about built-in roles for Azure RBAC, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
+Azure Virtual Desktop uses Azure role-based access control (RBAC) to control access to resources. There are a number of built-in roles for use with Azure Virtual Desktop, each of which is a collection of permissions. You assign these roles to users and admins, and the roles grant permission to carry out certain tasks. To learn more about Azure RBAC, see [What is Azure RBAC?](../role-based-access-control/overview.md).
-The standard built-in roles for Azure are Owner, Contributor, and Reader. However, Azure Virtual Desktop has additional roles that let you separate management roles for host pools, app groups, and workspaces. This separation lets you have more granular control over administrative tasks. These roles are named in compliance with Azure's standard roles and least-privilege methodology.
+The standard built-in roles for Azure are Owner, Contributor, and Reader. However, Azure Virtual Desktop has additional roles that let you separate management roles for host pools, application groups, and workspaces. This separation lets you have more granular control over administrative tasks. These roles are named in compliance with Azure's standard roles and least-privilege methodology.
-Azure Virtual Desktop doesn't have a specific Owner role. However, you can use a standard Owner role for the service objects.
+Azure Virtual Desktop doesn't have a specific Owner role. However, you can use the general Owner role for the service objects.
-## Desktop Virtualization Contributor
+The built-in roles for Azure Virtual Desktop and the permissions for each one are detailed below. The assignable scope for all built-in roles is set to the root scope ("/"). The root scope indicates that the role is available for assignment in all scopes, for example management groups, subscriptions, or resource groups. For more information, see [Understand Azure role definitions](../role-based-access-control/role-definitions.md).
-The Desktop Virtualization Contributor role lets you manage all aspects of the deployment. However, it doesn't grant you access to compute resources. You'll also need the User Access Administrator role to publish app groups to users or user groups.
+## Desktop Virtualization Contributor
+The Desktop Virtualization Contributor role allows users to manage all aspects of the deployment. However, it doesn't grant users access to compute resources. You'll also need the *User Access Administrator* role to publish application groups to users or user groups.
-- Microsoft.DesktopVirtualization/\* -- Microsoft.Resources/subscriptions/resourceGroups/read-- Microsoft.Resources/deployments/\*-- Microsoft.Authorization/\*/read-- Microsoft.Insights/alertRules/\*-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/\*</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
## Desktop Virtualization Reader
-The Desktop Virtualization Reader role lets you view everything in the deployment but doesn't let you make any changes.
+The Desktop Virtualization Reader role allows users to view everything in the deployment, but doesn't let them make any changes.
-- Microsoft.DesktopVirtualization/\*/read-- Microsoft.Resources/subscriptions/resourceGroups/read-- Microsoft.Resources/deployments/read-- Microsoft.Authorization/\*/read-- Microsoft.Insights/alertRules/\*-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/\*/read</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/read</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/read</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
-## Desktop Virtualization Host Pool Contributor
+## Desktop Virtualization User
-The Host Pool Contributor role lets you manage all aspects of host pools, including access to resources. You'll need an extra contributor role, Virtual Machine Contributor, to create virtual machines. You will need AppGroup and Workspace contributor roles to create host pool using the portal or you can use Desktop Virtualization Contributor role.
+The Desktop Virtualization User role allows users to use the applications in an application group.
+
+| Action type | Permissions |
+|--|--|
+| actions | None |
+| notActions | None |
+| dataActions | <ul><li>Microsoft.DesktopVirtualization/applicationGroups/useApplications/action</li></ul> |
+| notDataActions | None |
+
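To illustrate how this role is typically assigned, scoped to a single application group, here's a minimal Azure CLI sketch; the group object ID, subscription, resource group, and application group names are all placeholders.

```azurecli
# Placeholders: supply your own Azure AD group object ID and resource names.
az role assignment create \
    --assignee "<azure-ad-group-object-id>" \
    --role "Desktop Virtualization User" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DesktopVirtualization/applicationGroups/<application-group-name>"
```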
+## Desktop Virtualization Host Pool Contributor
-The following list describes which permissions this role can access:
+The Desktop Virtualization Host Pool Contributor role allows users to manage all aspects of host pools, including access to resources. Users also need the *Virtual Machine Contributor* role to create virtual machines, and either the *Desktop Virtualization Application Group Contributor* and *Desktop Virtualization Workspace Contributor* roles or the *Desktop Virtualization Contributor* role to create host pools by using the portal.
-- Microsoft.DesktopVirtualization/hostpools/\*
-- Microsoft.Resources/subscriptions/resourceGroups/read
-- Microsoft.Resources/deployments/\*
-- Microsoft.Authorization/\*/read
-- Microsoft.Insights/alertRules/\*
-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/hostpools/\*</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
## Desktop Virtualization Host Pool Reader
-The Host Pool Reader role lets you view everything in the host pool, but won't allow you to make any changes.
+The Desktop Virtualization Host Pool Reader role allows users to view everything in the host pool, but won't allow them to make any changes.
-- Microsoft.DesktopVirtualization/hostpools/\*/read
-- Microsoft.Resources/subscriptions/resourceGroups/read
-- Microsoft.Resources/deployments/read
-- Microsoft.Authorization/\*/read
-- Microsoft.Insights/alertRules/\*
-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/hostpools/\*/read</li><li>Microsoft.DesktopVirtualization/hostpools/read</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/read</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/read</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
## Desktop Virtualization Application Group Contributor
-The Application Group Contributor role lets you manage all aspects of app groups. If you want to publish app groups to users or user groups, you'll need the User Access Administrator role.
+The Desktop Virtualization Application Group Contributor role allows users to manage all aspects of application groups. If you want users to publish application groups to users or user groups, they'll also need the *User Access Administrator* role.
-The following list describes which permissions this role can access:
-
-- Microsoft.DesktopVirtualization/applicationgroups/\*
-- Microsoft.DesktopVirtualization/hostpools/read
-- Microsoft.DesktopVirtualization/hostpools/sessionhosts/read
-- Microsoft.Resources/subscriptions/resourceGroups/read
-- Microsoft.Resources/deployments/\*
-- Microsoft.Authorization/\*/read
-- Microsoft.Insights/alertRules/\*
-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/applicationgroups/\*</li><li>Microsoft.DesktopVirtualization/hostpools/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/read</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
## Desktop Virtualization Application Group Reader
-The Application Group Reader role lets you view everything in the app group and will not allow you to make any changes.
-
-The following list describes which permissions this role can access:
+The Desktop Virtualization Application Group Reader role allows users to view everything in the application group, but doesn't allow them to make any changes.
-- Microsoft.DesktopVirtualization/applicationgroups/\*/read
-- Microsoft.DesktopVirtualization/applicationgroups/read
-- Microsoft.DesktopVirtualization/hostpools/read
-- Microsoft.DesktopVirtualization/hostpools/sessionhosts/read
-- Microsoft.Resources/subscriptions/resourceGroups/read
-- Microsoft.Resources/deployments/read
-- Microsoft.Authorization/\*/read
-- Microsoft.Insights/alertRules/\*
-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/applicationgroups/\*/read</li><li>Microsoft.DesktopVirtualization/applicationgroups/read</li><li>Microsoft.DesktopVirtualization/hostpools/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/read</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/read</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/read</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
## Desktop Virtualization Workspace Contributor
-The Workspace Contributor role lets you manage all aspects of workspaces. To get information on applications added to the app groups, you'll also need to be assigned the Application Group Reader role.
-
-The following list describes which permissions this role can access:
+The Desktop Virtualization Workspace Contributor role allows users to manage all aspects of workspaces. To get information on applications added to the application groups, they'll also need the *Desktop Virtualization Application Group Reader* role.
-- Microsoft.DesktopVirtualization/workspaces/\*
-- Microsoft.DesktopVirtualization/applicationgroups/read
-- Microsoft.Resources/subscriptions/resourceGroups/read
-- Microsoft.Resources/deployments/\*
-- Microsoft.Authorization/\*/read
-- Microsoft.Insights/alertRules/\*
-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/workspaces/\*</li><li>Microsoft.DesktopVirtualization/applicationgroups/read</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
## Desktop Virtualization Workspace Reader
-The Workspace Reader role lets you view everything in the workspace, but won't allow you to make any changes.
+The Desktop Virtualization Workspace Reader role allows users to view everything in the workspace, but won't allow them to make any changes.
-The following list describes which permissions this role can access:
-
-- Microsoft.DesktopVirtualization/workspaces/read
-- Microsoft.DesktopVirtualization/applicationgroups/read
-- Microsoft.Resources/subscriptions/resourceGroups/read
-- Microsoft.Resources/deployments/read
-- Microsoft.Authorization/\*/read
-- Microsoft.Insights/alertRules/\*
-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/workspaces/read</li><li>Microsoft.DesktopVirtualization/applicationgroups/read</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/read</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/read</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
## Desktop Virtualization User Session Operator
-The User Session Operator role lets you send messages, disconnect sessions, and use the "logoff" function to sign sessions out of the session host. However, this role doesn't let you perform session host management like removing session host, changing drain mode, and so on. This role can see assignments, but can't modify admins. We recommend you assign this role to specific host pools. If you give this permission at a resource group level, the admin will have read permission on all host pools under a resource group.
-
-The following list describes which permissions this role can access:
+The Desktop Virtualization User Session Operator role allows users to send messages, disconnect sessions, and use the "logoff" function to sign sessions out of the session host. However, this role doesn't let users perform session host management, such as removing a session host or changing drain mode. This role can see assignments, but can't modify admins. We recommend you assign this role to specific host pools. If you grant this permission at the resource group level, the admin will have read permission on all host pools under that resource group.
-- Microsoft.DesktopVirtualization/hostpools/read
-- Microsoft.DesktopVirtualization/hostpools/sessionhosts/read
-- Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/\*
-- Microsoft.Resources/subscriptions/resourceGroups/read
-- Microsoft.Resources/deployments/read
-- Microsoft.Authorization/\*/read
-- Microsoft.Insights/alertRules/\*
-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/hostpools/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/\*</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
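For example, following the recommendation above to scope the assignment to a specific host pool rather than a resource group, an assignment could look like the sketch below; the object ID and resource names are placeholders.

```azurecli
# Scope the role to a single host pool so the operator can't read other host pools in the resource group.
az role assignment create \
    --assignee "<helpdesk-group-object-id>" \
    --role "Desktop Virtualization User Session Operator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DesktopVirtualization/hostPools/<host-pool-name>"
```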
## Desktop Virtualization Session Host Operator
-The Session Host Operator role lets you view and remove session hosts, as well as change drain mode. They can't add session hosts using the Azure portal because they don't have write permission for host pool objects. If the registration token is valid (generated and not expired), you can use this role to add session hosts to the host pool outside of Azure portal if the admin has compute permissions through the Virtual Machine Contributor role.
-
-The following list describes which permissions this role can access:
+The Desktop Virtualization Session Host Operator role allows users to view and remove session hosts, as well as change drain mode. Users can't add session hosts using the Azure portal because they don't have write permission for host pool objects. If the registration token is valid (generated and not expired), users assigned this role can add session hosts to the host pool outside of the Azure portal if they also have the *Virtual Machine Contributor* role.
-- Microsoft.DesktopVirtualization/hostpools/read
-- Microsoft.DesktopVirtualization/hostpools/sessionhosts/\*
-- Microsoft.Resources/subscriptions/resourceGroups/read
-- Microsoft.Resources/deployments/read
-- Microsoft.Authorization/\*/read
-- Microsoft.Insights/alertRules/\*
-- Microsoft.Support/\*
+| Action type | Permissions |
+|--|--|
+| actions | <ul><li>Microsoft.DesktopVirtualization/hostpools/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/\*</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Support/\*</li></ul> |
+| notActions | None |
+| dataActions | None |
+| notDataActions | None |
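To review which of these roles are already assigned at a given host pool, one quick check with the Azure CLI is shown below; the subscription, resource group, and host pool names are placeholders.

```azurecli
# List existing role assignments scoped to a host pool.
az role assignment list \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DesktopVirtualization/hostPools/<host-pool-name>" \
    --output table
```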
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName -Location
```azurecli
# Set your Azure virtual machine scale set diagnostic variables.
-$my_resource_group=<your_azure_resource_group_name_containing_your_azure_linux_vm>
-$my_linux_vmss=<your_azure_linux_vmss_name>
-$my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnostic_data>
+my_resource_group=<your_azure_resource_group_name_containing_your_azure_linux_vm>
+my_linux_vmss=<your_azure_linux_vmss_name>
+my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnostic_data>
# Login to Azure before you do anything else.
az login
az vmss identity assign -g $my_resource_group -n $my_linux_vmss
wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json

# Build the virtual machine scale set resource ID. Replace the storage account name and resource ID in the public settings.
-$my_vmss_resource_id=$(az vmss show -g $my_resource_group -n $my_linux_vmss --query "id" -o tsv)
+my_vmss_resource_id=$(az vmss show -g $my_resource_group -n $my_linux_vmss --query "id" -o tsv)
sed -i "s#__DIAGNOSTIC_STORAGE_ACCOUNT__#$my_diagnostic_storage_account#g" portal_public_settings.json sed -i "s#__VM_RESOURCE_ID__#$my_vmss_resource_id#g" portal_public_settings.json # Build the protected settings (storage account SAS token).
-$my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv)
-$my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"
+my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv)
+my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"
# Finally, tell Azure to install and enable the extension.
az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 4.0 --resource-group $my_resource_group --vmss-name $my_linux_vmss --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
```
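As an optional check, and assuming the same shell variables defined earlier in this script, you can confirm the Linux diagnostic extension was installed on the scale set:

```azurecli
# Confirm the LinuxDiagnostic extension shows up on the scale set (uses the variables set above).
az vmss extension list --resource-group $my_resource_group --vmss-name $my_linux_vmss --output table
```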
virtual-network-manager Concept Network Manager Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-manager-scope.md
In this article, you'll learn about how Azure Virtual Network Manager uses the c
A *scope* within Azure Virtual Network Manager is the set of resources that its features can be applied to. When you specify a scope, you limit which resources Network Manager can manage. The scope can be set at the management group level or at the subscription level. See [Azure management groups](../governance/management-groups/overview.md) to learn how to manage your resource hierarchy. When you select a management group as the scope, all child resources are included within the scope.

> [!NOTE]
-> You can't create multiple Azure Virtual Network Manager with an overlapping scope of the same hierarchy and the same features selected.
+> You can't create multiple Azure Virtual Network Manager instances with an overlapping scope of the same hierarchy and the same features selected.
>

## Features