Updates from: 09/10/2022 01:09:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Active Directory Certificate Based Authentication Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-get-started.md
-+ # Get started with certificate-based authentication in Azure Active Directory with federation
This topic:
To configure CBA with federation, the following statements must be true: -- CBA with federation is only supported for federated environments for browser applications, native clients using modern authentication (ADAL), or MSAL libraries. The one exception is Exchange ActiveSync (EAS) for Exchange Online (EXO), which can be used for federated and managed accounts. To configure Azure AD CBA without needing federation, see [How to configure Azure AD certificate-based authentication](how-to-certificate-based-authentication.md).
+- CBA with federation is only supported for federated environments for browser applications, native clients using modern authentication, or MSAL libraries. The one exception is Exchange ActiveSync (EAS) for Exchange Online (EXO), which can be used for federated and managed accounts. To configure Azure AD CBA without needing federation, see [How to configure Azure AD certificate-based authentication](how-to-certificate-based-authentication.md).
- The root certificate authority and any intermediate certificate authorities must be configured in Azure Active Directory.
- Each certificate authority must have a certificate revocation list (CRL) that can be referenced via an internet-facing URL.
- You must have at least one certificate authority configured in Azure Active Directory. You can find related steps in the [Configure the certificate authorities](#step-2-configure-the-certificate-authorities) section.
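One way to configure a certificate authority is with the AzureAD PowerShell module; the following is a minimal sketch only (the certificate file path and CRL URL are placeholders):

```powershell
# Add a trusted root CA and its CRL distribution point to Azure AD
# (replace the .cer path and CRL URL with your own values)
Connect-AzureAD
$ca = New-Object -TypeName Microsoft.Open.AzureAD.Model.CertificateAuthorityInformation
$ca.AuthorityType = 0    # 0 = root CA, 1 = intermediate CA
$ca.TrustedCertificate = [System.IO.File]::ReadAllBytes("C:\certs\root-ca.cer")
$ca.crlDistributionPoint = "http://crl.contoso.com/root-ca.crl"
New-AzureADTrustedCertificateAuthority -CertificateAuthorityInformation $ca
```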
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
description: Learn how to use additional context in MFA notifications
Previously updated : 09/01/2022 Last updated : 09/09/2022 # Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use additional context in Microsoft Authenticator notifications (Preview) - Authentication methods policy
+# How to use additional context in Microsoft Authenticator app notifications (Preview) - Authentication Methods Policy
-This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator push and passwordless notifications.
+This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator push and passwordless notifications. The schema for the API to enable application name and geographic location is currently being updated. **While the API is updated over the next two weeks, you should only use the Azure AD portal to enable application name and geographic location.**
## Prerequisites
The additional context can be combined with [number matching](how-to-mfa-number-
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location-with-number-match.png" alt-text="Screenshot of additional context with number matching in the MFA push notification.":::
-## Enable additional context using Graph API
+## Enable additional context
->[!NOTE]
->In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
-
-You can enable and disable application name and geographic location separately. Under featureSettings, use the following mapping:
-- Application name: displayAppInformationRequiredState
-- Geographic location: displayLocationInformationRequiredState
-
-Identify your single target group for each of the features. Then use the following API endpoint to change the displayAppInformationRequiredState or displayLocationInformationRequiredState properties under featureSettings to **enabled** and include or exclude the groups you want:
-
-`https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator`
-
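For example, a minimal sketch of the GET-then-PATCH flow against that endpoint (the full request body shape appears in the examples later in this article):

```http
GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator

PATCH https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
Content-Type: application/json
```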
->[!NOTE]
->For Passwordless phone sign-in, the Authenticator app does not retrieve policy information just in time for each sign-in request. Instead, the Authenticator app does a best effort retrieval of the policy once every 7 days. We understand this limitation is less than ideal and are working to optimize the behavior. In the meantime, if you want to force a policy update to test using additional context with Passwordless phone sign-in, you can remove and re-add the account in the Authenticator app.
-
-### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|||-|
-| id | String | The authentication method policy identifier. |
-| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
-
-**RELATIONSHIPS**
-
-| Relationship | Type | Description |
-|--||-|
-| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of users or groups who are enabled to use the authentication method. |
-| featureSettings | [microsoftAuthenticatorFeatureSettings](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of Microsoft Authenticator features. |
-
-### MicrosoftAuthenticator includeTarget properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| id | String | Object ID of an Azure AD user or group. |
-| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.|
-
-### MicrosoftAuthenticator featureSettings properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| numberMatchingRequiredState | authenticationMethodFeatureConfiguration | Require number matching for MFA notifications. Value is ignored for phone sign-in notifications. |
-| displayAppInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown application name in Microsoft Authenticator notification. |
-| displayLocationInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown geographic location context in Microsoft Authenticator notification. |
-
-### Authentication Method Feature Configuration properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group for each feature.|
-| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can only include one group for each feature.|
-| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
-
-### Feature Target properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| id | String | ID of the entity targeted. |
-| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. |
-
-### Example of how to enable additional context for all users
-
-In **featureSettings**, change **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled**.
-
-The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you do not want to allow passwordless, use **push**.
-
-You might need to PATCH the entire schema to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example shows how to update **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
-
-```json
-//Retrieve your existing policy via a GET.
-//Use the response body to create the request body, then update the relevant fields as shown below.
-//Change the query to PATCH and run the query.
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "displayAppInformationRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "all_users"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- },
- "displayLocationInformationRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "all_users"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any",
- }
- ]
-}
-```
-
-
-### Example of how to enable application name and geographic location for separate groups
-
-In **featureSettings**, change **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled**.
-Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-
-You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, update only the relevant fields, and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "displayAppInformationRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- },
- "displayLocationInformationRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "a229e768-961a-4401-aadb-11d836885c11"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any",
- }
- ]
-}
-```
-
-To verify, run the GET request again and check the ObjectID:
-
-```http
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-```
-
-### Example of how to disable application name and only enable geographic location
-
-In **featureSettings**, change the state of **displayAppInformationRequiredState** to **default** or **disabled** and **displayLocationInformationRequiredState** to **enabled**.
-Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-
-You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, update only the relevant fields, and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "displayAppInformationRequiredState": {
- "state": "disabled",
- "includeTarget": {
- "targetType": "group",
- "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- },
- "displayLocationInformationRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "a229e768-961a-4401-aadb-11d836885c11"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any",
- }
- ]
-}
-```
-
-### Example of how to exclude a group from application name and geographic location
-
-In **featureSettings**, change the states of **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled**.
-Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-
-In addition, for each of the features, change the **id** of the **excludeTarget** to the ObjectID of the group from the Azure AD portal. This excludes that group from seeing the application name or geographic location.
-
-You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, update only the relevant fields, and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "displayAppInformationRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "5af8a0da-5420-4d69-bf3c-8b129f3449ce"
- }
- },
- "displayLocationInformationRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "a229e768-961a-4401-aadb-11d836885c11"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "b6bab067-5f28-4dac-ab30-7169311d69e8"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any",
- }
- ]
-}
-```
-
-### Example of removing the excluded group
-
-In **featureSettings**, change the state of **displayAppInformationRequiredState** from **default** to **enabled**.
-You need to change the **id** of the **excludeTarget** to `00000000-0000-0000-0000-000000000000`.
-
-You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, update only the relevant fields, and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- " displayAppInformationRequiredState ": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": " 00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any"
- }
- ]
-}
-```
-
-## Turn off additional context
-
-To turn off additional context, you'll need to PATCH **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **enabled** to **disabled** or **default**. You can also turn off just one of the features.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "displayAppInformationRequiredState": {
- "state": "disabled",
- "includeTarget": {
- "targetType": "group",
- "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- },
- "displayLocationInformationRequiredState": {
- "state": "disabled",
- "includeTarget": {
- "targetType": "group",
- "id": "a229e768-961a-4401-aadb-11d836885c11"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any",
- }
- ]
-}
-```
-
-## Enable additional context in the portal
-
-To enable application name or geographic location in the Azure AD portal, complete the following steps:
+To enable application name or geographic location, complete the following steps:
1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Any**.
Additional context is not supported for Network Policy Server (NPS) or Active Di
## Next steps
-[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 09/01/2022 Last updated : 09/09/2022
# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
-This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. Number matching can be enabled by using the Azure portal or Microsoft Graph API.
+This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. The schema for the API to enable number match is currently being updated. **While the API is updated over the next two weeks, you should only use the Azure AD portal to enable number match.**
>[!NOTE] >Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator that will be enabled by default for all tenants a few months after general availability (GA).<br>
Your organization will need to enable Authenticator (traditional second factor)
## Number matching
+ Number matching can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication Method Policy. Number matching is available for the following scenarios. When enabled, all scenarios support number matching.
To create the registry key that overrides push notifications:
## Enable number matching
-
->[!NOTE]
->In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
-
-Identify your single target group for the schema configuration. Then use the following API endpoint to change the numberMatchingRequiredState property under featureSettings to **enabled** and include or exclude groups:
-
-```http
-https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-```
--
-### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|||-|
-| id | String | The authentication method policy identifier. |
-| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
-
-**RELATIONSHIPS**
-
-| Relationship | Type | Description |
-|--||-|
-| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget?view=graph-rest-beta&preserve-view=true) collection | A collection of users or groups who are enabled to use the authentication method |
-| featureSettings | [microsoftAuthenticatorFeatureSettings](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of Microsoft Authenticator features. |
-
-### MicrosoftAuthenticator includeTarget properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| id | String | Object ID of an Azure AD user or group. |
-| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.|
---
-### MicrosoftAuthenticator featureSettings properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| numberMatchingRequiredState | authenticationMethodFeatureConfiguration | Require number matching for MFA notifications. Value is ignored for phone sign-in notifications. |
-| displayAppInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown application name in Microsoft Authenticator notification. |
-| displayLocationInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown geographic location context in Microsoft Authenticator notification. |
-
-### Authentication Method Feature Configuration properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br> Note: You can only exclude one group for number matching. |
-| includeTarget | featureTarget | A single entity that is included in this feature. <br> Note: You can only set one group for number matching. |
-| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
-
-### Feature Target properties
-
-**PROPERTIES**
-
-| Property | Type | Description |
-|-||-|
-| id | String | ID of the entity targeted. |
-| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. |
-
->[!NOTE]
->Number matching can be enabled only for a single group.
-
-### Example of how to enable number matching for all users
-
-In **featureSettings**, you will need to change the **numberMatchingRequiredState** from **default** to **enabled**.
-
-Note that the value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we will use **any**, but if you do not want to allow passwordless, use **push**.
-
->[!NOTE]
->For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-
-You might need to PATCH the entire schema to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **numberMatchingRequiredState** under **featureSettings**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
-
-```json
-//Retrieve your existing policy via a GET.
-//Use the response body to create the request body, then update the relevant fields as shown below.
-//Change the query to PATCH and run the query.
-
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "numberMatchingRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "all_users"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any",
- }
- ]
-}
-
-```
-
-To confirm the change has been applied, run the following GET request:
-
-```http
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-```
-
-### Example of how to enable number matching for a single group
-
-In **featureSettings**, you will need to change the **numberMatchingRequiredState** value from **default** to **enabled**.
-Inside the **includeTarget**, you will need to change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
-
-You need to PATCH the entire configuration to prevent overwriting any previous configuration. We recommend that you do a GET first, update only the relevant fields, and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "numberMatchingRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": "00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any"
- }
- ]
-}
-```
-
-To verify, run the GET request again and check the ObjectID:
-
-```http
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
-```
-
-### Example of removing the excluded group from number matching
-
-In **featureSettings**, you will need to change the **numberMatchingRequiredState** value from **default** to **enabled**.
-You need to change the **id** of the **excludeTarget** to `00000000-0000-0000-0000-000000000000`.
-
-You need to PATCH the entire configuration to prevent overwriting any previous configuration. We recommend that you do a GET first, update only the relevant fields, and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
-
-Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "numberMatchingRequiredState": {
- "state": "enabled",
- "includeTarget": {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": " 00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any"
- }
- ]
-}
-```
-
-## Turn off number matching
-
-To turn off number matching, PATCH **numberMatchingRequiredState** from **enabled** to **disabled** or **default**.
-
-```json
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
- "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
- "id": "MicrosoftAuthenticator",
- "state": "enabled",
- "featureSettings": {
- "numberMatchingRequiredState": {
- "state": "default",
- "includeTarget": {
- "targetType": "group",
- "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
- },
- "excludeTarget": {
- "targetType": "group",
- "id": " 00000000-0000-0000-0000-000000000000"
- }
- }
- },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
- "includeTargets": [
- {
- "targetType": "group",
- "id": "all_users",
- "isRegistrationRequired": false,
- "authenticationMode": "any"
- }
- ]
-}
-```
-
-## Enable number matching in the portal
-
-To enable number matching in the Azure AD portal, complete the following steps:
+To enable number matching, complete the following steps:
1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Push**.
To enable number matching in the Azure AD portal, complete the following steps:
## Next steps
-[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
Run the following PowerShell cmdlet:
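Assuming the AD FS PowerShell module on the federation server, retrieving the current rules is a minimal sketch like the following:

```powershell
# Returns the additional authentication rules currently configured for the farm
Get-AdfsAdditionalAuthenticationRule
```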
The command returns your current additional authentication rules for your relying party trust. Append the following rules to your current claim rules:

```console
-c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
-"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders",
Value = "AzureMfaAuthentication");
-not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+not exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
Value=="YourGroupSid"]) => issue(Type =
-"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"http://schemas.microsoft.com/claims/authnmethodsproviders", Value =
"AzureMfaServerAuthentication");' ```
The following example assumes your current claim rules are configured to prompt
```PowerShell Set-AdfsAdditionalAuthenticationRule -AdditionalAuthenticationRules 'c:[type ==
-"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
-"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
-"https://schemas.microsoft.com/claims/multipleauthn" );
- c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
-"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+"http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"http://schemas.microsoft.com/claims/multipleauthn" );
+ c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders",
Value = "AzureMfaAuthentication");
-not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+not exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
Value=="YourGroupSid"]) => issue(Type =
-"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"http://schemas.microsoft.com/claims/authnmethodsproviders", Value =
"AzureMfaServerAuthentication");' ```
This example modifies claim rules on a specific relying party trust (application
```PowerShell Set-AdfsRelyingPartyTrust -TargetName AppA -AdditionalAuthenticationRules 'c:[type ==
-"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
-"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
-"https://schemas.microsoft.com/claims/multipleauthn" );
-c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
-"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+"http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"http://schemas.microsoft.com/claims/multipleauthn" );
+c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders",
Value = "AzureMfaAuthentication");
-not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+not exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
Value=="YourGroupSid"]) => issue(Type =
-"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"http://schemas.microsoft.com/claims/authnmethodsproviders", Value =
"AzureMfaServerAuthentication");' ```
For example, remove the following from the rule(s):
```console
-c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
-"**YourGroupSID**"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"**YourGroupSID**"] => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders",
Value = "AzureMfaAuthentication");
-not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+not exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
Value=="YourGroupSid"]) => issue(Type =
-"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"http://schemas.microsoft.com/claims/authnmethodsproviders", Value =
"AzureMfaServerAuthentication");' ```
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
This method can also be used for easy recovery when the user has lost or forgott
### Technical considerations
-**Active Directory Federation Services (AD FS) Integration** - When a user enables the Authenticator passwordless credential, authentication for that user defaults to sending a notification for approval. Users in a hybrid tenant are prevented from being directed to AD FS for sign-in unless they select "Use your password instead." This process also bypasses any on-premises Conditional Access policies, and pass-through authentication (PTA) flows. However, if a login_hint is specified, the user is forwarded to AD FS and bypasses the option to use the passwordless credential.
+**Active Directory Federation Services (AD FS) Integration** - When a user enables the Authenticator passwordless credential, authentication for that user defaults to sending a notification for approval. Users in a hybrid tenant are prevented from being directed to AD FS for sign-in unless they select "Use your password instead." This process also bypasses any on-premises Conditional Access policies, and pass-through authentication (PTA) flows. However, if a login_hint is specified, the user is forwarded to AD FS and bypasses the option to use the passwordless credential. For non-Microsoft 365 applications which use AD FS for authentication, Azure AD Conditional Access policies will not be applied and you will need to set up access control policies within AD FS.
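As a minimal sketch of such an AD FS access control policy (assuming AD FS on Windows Server 2016 or later; the relying party trust name "AppA" is a placeholder):

```powershell
# Assign a built-in access control policy that requires MFA
# to a specific relying party trust ("AppA" is a placeholder)
Set-AdfsRelyingPartyTrust -TargetName "AppA" -AccessControlPolicyName "Permit everyone and require MFA"
```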
**MFA server** - End users enabled for multi-factor authentication through an organization's on-premises MFA server can create and use a single passwordless phone sign-in credential. If the user attempts to upgrade multiple installations (5 or more) of the Authenticator app with the credential, this change may result in an error.
active-directory Tutorial Enable Cloud Sync Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
Previously updated : 08/22/2022 Last updated : 09/08/2022
# Customer intent: As an Azure AD Administrator, I want to learn how to enable and use password writeback so that when end-users reset their password through a web browser their updated password is synchronized back to my on-premises AD environment.
-# Tutorial: Enable cloud sync self-service password reset writeback to an on-premises environment (preview)
+# Tutorial: Enable cloud sync self-service password reset writeback to an on-premises environment
-Azure Active Directory Connect cloud sync can synchronize Azure AD password changes in real time between users in disconnected on-premises Active Directory Domain Services (AD DS) domains. The public preview of Azure AD Connect cloud sync can run side-by-side with [Azure Active Directory Connect](tutorial-enable-sspr-writeback.md) at the domain level to simplify password writeback for additional scenarios, such as users who are in disconnected domains because of a company split or merge. You can configure each service in different domains to target different sets of users depending on their needs. Azure Active Directory Connect cloud sync uses the lightweight Azure AD cloud provisioning agent to simplify the setup for self-service password reset (SSPR) writeback and provide a secure way to send password changes in the cloud back to an on-premises directory.
+Azure Active Directory Connect cloud sync can synchronize Azure AD password changes in real time between users in disconnected on-premises Active Directory Domain Services (AD DS) domains. Azure AD Connect cloud sync can run side-by-side with [Azure Active Directory Connect](tutorial-enable-sspr-writeback.md) at the domain level to simplify password writeback for additional scenarios, such as users who are in disconnected domains because of a company split or merge. You can configure each service in different domains to target different sets of users depending on their needs. Azure Active Directory Connect cloud sync uses the lightweight Azure AD cloud provisioning agent to simplify the setup for self-service password reset (SSPR) writeback and provide a secure way to send password changes in the cloud back to an on-premises directory.
-Azure Active Directory Connect cloud sync self-service password reset writeback is supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites - An Azure AD tenant with at least an Azure AD Premium P1 or trial license enabled. If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An account with either:
- - [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) and [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) roles
+- An account with:
- [Global Administrator](../roles/permissions-reference.md#global-administrator) role - Azure AD configured for self-service password reset. If needed, complete this tutorial to enable Azure AD SSPR. -- An on-premises AD DS environment configured with Azure AD Connect cloud sync version 1.1.587 or later. Learn how to [identify the agent's current version](../cloud-sync/how-to-automatic-upgrade.md). If needed, configure Azure AD Connect cloud sync using [this tutorial](tutorial-enable-sspr.md). -- Enabling password writeback in Azure AD Connect cloud sync requires executing signed PowerShell scripts.
- - Ensure that the PowerShell execution policy will allow running of scripts.
- - The recommended execution policy during installation is "RemoteSigned".
- - For more information about setting the PowerShell execution policy, see [Set-ExecutionPolicy](/powershell/module/microsoft.powershell.security/set-executionpolicy).
+- An on-premises AD DS environment configured with [Azure AD Connect cloud sync version 1.1.972.0 or later](../app-provisioning/provisioning-agent-release-version-history.md). Learn how to [identify the agent's current version](../cloud-sync/how-to-automatic-upgrade.md). If needed, configure Azure AD Connect cloud sync using [this tutorial](tutorial-enable-sspr.md).
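One way to check the installed agent version on a server running the provisioning agent is to read the version stamped on the agent's PowerShell module (a sketch; the path is the default install location used later in this tutorial):

```powershell
# Reads the file version of the cloud sync module that ships with the agent
(Get-Item 'C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll').VersionInfo.FileVersion
```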
## Deployment steps 1. [Configure Azure AD Connect cloud sync service account permissions](#configure-azure-ad-connect-cloud-sync-service-account-permissions)
-1. [Enable password writeback in Azure AD Connect cloud sync](#enable-password-writeback-in-azure-ad-connect-cloud-sync)
-1. [Enable password writeback for SSPR](#enable-password-writeback-for-sspr)
+1. [Enable password writeback in Azure AD Connect cloud sync](#enable-password-writeback-in-sspr)
+1. [Enable password writeback for SSPR](#enable-password-writeback-in-sspr)
### Configure Azure AD Connect cloud sync service account permissions Permissions for cloud sync are configured by default. If permissions need to be reset, see [Troubleshooting](#troubleshooting) for more details about the specific permissions required for password writeback and how to set them by using PowerShell.
-### Enable password writeback in Azure AD Connect cloud sync
+### Enable password writeback in SSPR
+You can enable password writeback for Azure AD Connect cloud sync directly in the Azure portal or through PowerShell.
-For public preview, you need to enable password writeback in Azure AD Connect cloud sync by running `Set-AADCloudSyncPasswordWritebackConfiguration` on any server with the provisioning agent. You will need global administrator credentials:
-
-```powershell
-Import-Module 'C:\\Program Files\\Microsoft Azure AD Connect Provisioning Agent\\Microsoft.CloudSync.Powershell.dll'
-Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $(Get-Credential)
-```
-
-### Enable password writeback for SSPR
+#### Enable password writeback in Azure portal
With password writeback enabled in Azure AD Connect cloud sync, now verify and configure Azure AD self-service password reset (SSPR) for password writeback. When you enable SSPR to use password writeback, users who change or reset their password have that updated password synchronized back to the on-premises AD DS environment as well. To verify and enable password writeback in SSPR, complete the following steps:
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global Administrator account.
+1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
+1. Check the option for **Write back passwords to your on-premises directory**.
+1. (Optional) If Azure AD Connect provisioning agents are detected, you can additionally check the option for **Write back passwords with Azure AD Connect cloud sync**.
+1. Check the option for **Allow users to unlock accounts without resetting their password**.
-1. Sign into the Azure portal using a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) account.
-1. Navigate to Azure Active Directory, select **Password reset**, then choose **On-premises integration**.
-1. Verify the Azure AD Connect cloud sync agent set up is complete.
-1. Set **Write back passwords to your on-premises directory?** to **Yes**.
-1. Set **Allow users to unlock accounts without resetting their password?** to **Yes**.
-
- ![Screenshot showing how to enable writeback.](media/tutorial-enable-sspr-cloud-sync-writeback/writeback.png)
+ ![Enable Azure AD self-service password reset for password writeback](media/tutorial-enable-sspr-writeback/enable-sspr-writeback-cloudsync.png)
-1. When ready, select **Save**.
+1. When ready, select **Save**.
+
+#### Enable password writeback with PowerShell
+With PowerShell, you can enable password writeback for Azure AD Connect cloud sync by using the `Set-AADCloudSyncPasswordWritebackConfiguration` cmdlet on the servers that run the provisioning agents. You will need Global Administrator credentials:
+
+```powershell
+Import-Module 'C:\\Program Files\\Microsoft Azure AD Connect Provisioning Agent\\Microsoft.CloudSync.Powershell.dll'
+Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $(Get-Credential)
+```
## Clean up resources
+If you no longer want to use the SSPR writeback functionality you have configured as part of this tutorial, complete the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
+1. Uncheck the option for **Write back passwords to your on-premises directory**.
+1. Uncheck the option for **Write back passwords with Azure AD Connect cloud sync**.
+1. Uncheck the option for **Allow users to unlock accounts without resetting their password**.
+1. When ready, select **Save**.
-If you no longer want to use the SSPR password writeback functionality you have configured as part of this document, complete the following steps:
+If you no longer want to use Azure AD Connect cloud sync for SSPR writeback functionality but want to continue using the Azure AD Connect sync agent for writeback, complete the following steps:
-1. Sign into the Azure portal using a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) account.
-1. Search for and select Azure Active Directory, select **Password reset**, then choose **On-premises integration**.
-1. Set **Write back passwords to your on-premises directory?** to **No**.
-1. Set **Allow users to unlock accounts without resetting their password?** to **No**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
+1. Uncheck the option for **Write back passwords with Azure AD Connect cloud sync**.
+1. When ready, select **Save**.
-From your Azure AD Connect cloud sync server, run `Set-AADCloudSyncPasswordWritebackConfiguration` using Hybrid Identity Administrator credentials to disable password writeback with Azure AD Connect cloud sync.
+You can also use PowerShell to disable Azure AD Connect cloud sync for SSPR writeback. From your Azure AD Connect cloud sync server, run `Set-AADCloudSyncPasswordWritebackConfiguration` with Hybrid Identity Administrator credentials:
```powershell
Import-Module 'C:\\Program Files\\Microsoft Azure AD Connect Provisioning Agent\\Microsoft.CloudSync.Powershell.dll'
# Disable password writeback for Azure AD Connect cloud sync
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $false -Credential $(Get-Credential)
```
Try the following operations to validate scenarios using password writeback. All
## Troubleshooting
-The Azure AD Connect cloud sync group Managed Service Account should have the following permissions set to writeback the passwords by default:
-- Reset password
-- Write permissions on lockoutTime
-- Write permissions on pwdLastSet
-- Extended rights for "Unexpire Password" on the root object of each domain in that forest, if not already set.
-
-If these permissions are not set, you can set the PasswordWriteBack permission on the service account by using the Set-AADCloudSyncPermissions cmdlet and on-premises enterprise administrator credentials:
-
-```powershell
-Import-Module 'C:\\Program Files\\Microsoft Azure AD Connect Provisioning Agent\\Microsoft.CloudSync.Powershell.dll'
-Set-AADCloudSyncPermissions -PermissionType PasswordWriteBack -EACredential $(Get-Credential)
-```
-
-After you have updated the permissions, it may take up to an hour or more for these permissions to replicate to all the objects in your directory.
+- The Azure AD Connect cloud sync group Managed Service Account should have the following permissions set to writeback the passwords by default:
+ - Reset password
+ - Write permissions on lockoutTime
+ - Write permissions on pwdLastSet
+ - Extended rights for "Unexpire Password" on the root object of each domain in that forest, if not already set.
+
+ If these permissions are not set, you can set the PasswordWriteBack permission on the service account by using the Set-AADCloudSyncPermissions cmdlet and on-premises enterprise administrator credentials:
-If you don't assign these permissions, writeback may appear to be configured correctly, but users may encounter errors when they update their on-premises passwords from the cloud. Permissions must be applied to "This object and all descendant objects" for "Unexpire Password" to appear.
+ ```powershell
+   Import-Module 'C:\\Program Files\\Microsoft Azure AD Connect Provisioning Agent\\Microsoft.CloudSync.Powershell.dll'
+ Set-AADCloudSyncPermissions -PermissionType PasswordWriteBack -EACredential $(Get-Credential)
+ ```
-If passwords for some user accounts aren't written back to the on-premises directory, make sure that inheritance isn't disabled for the account in the on-prem AD DS environment. Write permissions for passwords must be applied to descendant objects for the feature to work correctly.
+ After you have updated the permissions, it may take up to an hour or more for these permissions to replicate to all the objects in your directory.
+
+- If passwords for some user accounts aren't written back to the on-premises directory, make sure that inheritance isn't disabled for the account in the on-premises AD DS environment. Write permissions for passwords must be applied to descendant objects for the feature to work correctly.
-Password policies in the on-premises AD DS environment may prevent password resets from being correctly processed. If you are testing this feature and want to reset password for users more than once per day, the group policy for Minimum password age must be set to 0. This setting can be found under Computer Configuration > Policies > Windows Settings > Security Settings > Account Policies within gpmc.msc.
+- Password policies in the on-premises AD DS environment may prevent password resets from being correctly processed. If you are testing this feature and want to reset passwords for users more than once per day, the group policy for Minimum password age must be set to 0. This setting can be found under Computer Configuration > Policies > Windows Settings > Security Settings > Account Policies within gpmc.msc (see the sketch after this list).
-If you update the group policy, wait for the updated policy to replicate, or use the gpupdate /force command.
+- If you update the group policy, wait for the updated policy to replicate, or use the `gpupdate /force` command.
-For passwords to be changed immediately, Minimum password age must be set to 0. However, if users adhere to the on-premises policies, and the Minimum password age is set to a value greater than zero, password writeback will not work after the on-premises policies are evaluated.
+- For passwords to be changed immediately, Minimum password age must be set to 0. However, if users adhere to the on-premises policies, and the Minimum password age is set to a value greater than zero, password writeback will not work after the on-premises policies are evaluated.
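To verify the effective Minimum password age without opening gpmc.msc, the domain policy can be read with the ActiveDirectory module and, in a test environment, zeroed out. This is a minimal sketch; the domain name is a placeholder, and production changes should normally go through the group policy itself:

```powershell
# Minimal sketch: inspect the effective Minimum password age for the domain.
Import-Module ActiveDirectory
Get-ADDefaultDomainPasswordPolicy | Select-Object MinPasswordAge

# Test environments only: allow immediate re-resets by zeroing the minimum age.
Set-ADDefaultDomainPasswordPolicy -Identity contoso.com -MinPasswordAge (New-TimeSpan -Days 0)
```

After changing the value, wait for replication or run `gpupdate /force` as noted above.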
For more information about how to validate or set up the appropriate permissions, see [Configure account permissions for Azure AD Connect](tutorial-enable-sspr-writeback.md#configure-account-permissions-for-azure-ad-connect).
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
Previously updated : 05/31/2022 Last updated : 09/08/2022
To enable SSPR writeback, first enable the writeback option in Azure AD Connect.
## Enable password writeback for SSPR
-With password writeback enabled in Azure AD Connect, now configure Azure AD SSPR for writeback. When you enable SSPR to use password writeback, users who change or reset their password have that updated password synchronized back to the on-premises AD DS environment as well.
+With password writeback enabled in Azure AD Connect, now configure Azure AD SSPR for writeback. SSPR can be configured to write back through Azure AD Connect sync agents and Azure AD Connect provisioning agents (cloud sync). When you enable SSPR to use password writeback, users who change or reset their password have that updated password synchronized back to the on-premises AD DS environment as well.
To enable password writeback in SSPR, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com) using a Hybrid Identity Administrator account. 1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
-1. Set the option for **Write back passwords to your on-premises directory?** to *Yes*.
-1. Set the option for **Allow users to unlock accounts without resetting their password?** to *Yes*.
+1. Check the option for **Write back passwords to your on-premises directory**.
+1. (Optional) If Azure AD Connect provisioning agents are detected, you can additionally check the option for **Write back passwords with Azure AD Connect cloud sync**.
+1. Check the option for **Allow users to unlock accounts without resetting their password**.
- ![Enable Azure AD self-service password reset for password writeback](media/tutorial-enable-sspr-writeback/enable-sspr-writeback.png)
+ ![Configure Azure AD Connect for password writeback](media/tutorial-enable-sspr-writeback/enable-password-writeback.png)
1. When ready, select **Save**.
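After saving, you can sanity-check the server-side state. This is a minimal sketch, assuming the ADSync module on the Azure AD Connect server exposes a company-feature cmdlet whose output includes a password writeback field; names can vary by Azure AD Connect version:

```powershell
# Minimal sketch: run on the Azure AD Connect server to inspect tenant features.
Import-Module ADSync
Get-ADSyncAADCompanyFeature   # look for a PasswordWriteBack value of True
```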
If you no longer want to use the SSPR writeback functionality you have configure
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
-1. Set the option for **Write back passwords to your on-premises directory?** to *No*.
-1. Set the option for **Allow users to unlock accounts without resetting their password?** to *No*.
+1. Uncheck the option for **Write back passwords to your on-premises directory**.
+1. Uncheck the option for **Write back passwords with Azure AD Connect cloud sync**.
+1. Uncheck the option for **Allow users to unlock accounts without resetting their password**.
+1. When ready, select **Save**.
+
+If you no longer want to use Azure AD Connect cloud sync for SSPR writeback functionality but want to continue using the Azure AD Connect sync agent for writeback, complete the following steps:
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
+1. Uncheck the option for **Write back passwords with Azure AD Connect cloud sync**.
+1. When ready, select **Save**.
If you no longer want to use any password functionality, complete the following steps from your Azure AD Connect server:
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
When creating Conditional Access policies, administrators have asked for the abi
There are multiple scenarios that organizations can now enable using the filter for devices condition. Below are some core scenarios with examples of how to use this new condition. - **Restrict access to privileged resources**. For this example, let's say you want to allow access to Microsoft Azure Management from a user who is assigned the privileged Global Administrator role, has satisfied multifactor authentication, and is accessing from a device that is a [privileged or secure admin workstation](/security/compass/privileged-access-devices) and attested as compliant. For this scenario, organizations would create two Conditional Access policies:
- - Policy 1: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
- - Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block. Learn how to [update extensionAttributes on an Azure AD device object](/graph/api/device-update?view=graph-rest-1.0&tabs=http&preserve-view=true).
+ - Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
+ - Policy 2: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block. Learn how to [update extensionAttributes on an Azure AD device object](/graph/api/device-update?view=graph-rest-1.0&tabs=http&preserve-view=true). A hedged sketch for tagging a device this way follows the scenario list below.
- **Block access to organization resources from devices running an unsupported Operating System**. For this example, let's say you want to block access to resources from any Windows OS version older than Windows 10. For this scenario, organizations would create the following Conditional Access policy: - All users, accessing all cloud apps, excluding a filter for devices using rule expression device.operatingSystem equals Windows and device.operatingSystemVersion startsWith "10.0" and for Access controls, Block. - **Do not require multifactor authentication for specific accounts on specific devices**. For this example, let's say you don't want to require multifactor authentication when using service accounts on specific devices like Teams phones or Surface Hub devices. For this scenario, organizations would create the following two Conditional Access policies:
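For the SAW scenario above, the filter can only match once each admin workstation has `extensionAttribute1` populated. The following is a minimal sketch using Microsoft Graph PowerShell; the device object ID and the delegated scope shown are assumptions, and the device-update Graph reference linked above is authoritative:

```powershell
# Minimal sketch: tag a device so a filter rule on device.extensionAttribute1 matches it.
Connect-MgGraph -Scopes 'Directory.AccessAsUser.All'   # assumed delegated scope

# Placeholder object ID of the Azure AD device to tag.
Update-MgDevice -DeviceId '00000000-0000-0000-0000-000000000000' -BodyParameter @{
    extensionAttributes = @{ extensionAttribute1 = 'SAW' }
}
```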
Filter for devices is an option when creating a Conditional Access policy in the
The following steps will help create two Conditional Access policies to support the first scenario under [Common scenarios](#common-scenarios).
-Policy 1: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
+Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
- 1. Under **Include**, select **Directory roles** and choose **Global administrator**.
+ 1. Under **Include**, select **Directory roles** and choose **Global Administrator**.
> [!WARNING] > Conditional Access policies support built-in roles. Conditional Access policies are not enforced for other role types including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md).
Policy 1: All users with the directory role of Global administrator, accessing t
1. Confirm your settings and set **Enable policy** to **On**. 1. Select **Create** to create to enable your policy.
-Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block.
+Policy 2: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block.
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**.
- 1. Under **Include**, select **Directory roles** and choose **Global administrator**.
+ 1. Under **Include**, select **Directory roles** and choose **Global Administrator**.
> [!WARNING] > Conditional Access policies support built-in roles. Conditional Access policies are not enforced for other role types including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md).
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps are confirmed to support this setting:
- Microsoft Teams - Microsoft To Do - Microsoft Word
+- Microsoft Power Apps
+- Microsoft Field Service (Dynamics 365)
- MultiLine for Intune - Nine Mail - Email and Calendar - Notate for Intune
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policies.md
All policies are enforced in two phases:
- Use the session details gathered in phase 1 to identify any requirements that haven't been met. - If there's a policy that is configured to block access, with the block grant control, enforcement will stop here and the user will be blocked. - The user will be prompted to complete more grant control requirements that weren't satisfied during phase 1 in the following order, until policy is satisfied:
- - [Multi-factor authentication](concept-conditional-access-grant.md#require-multi-factor-authentication)
- - [Device to be marked as compliant](./concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant)
- - [Hybrid Azure AD joined device](./concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)
- - [Approved client app](./concept-conditional-access-grant.md#require-approved-client-app)
- - [App protection policy](./concept-conditional-access-grant.md#require-app-protection-policy)
- - [Password change](./concept-conditional-access-grant.md#require-password-change)
- - [Terms of use](concept-conditional-access-grant.md#terms-of-use)
- - [Custom controls](./concept-conditional-access-grant.md#custom-controls-preview)
+ 1. [Multi-factor authentication](concept-conditional-access-grant.md#require-multi-factor-authentication)
+ 2. [Device to be marked as compliant](./concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant)
+ 3. [Hybrid Azure AD joined device](./concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)
+ 4. [Approved client app](./concept-conditional-access-grant.md#require-approved-client-app)
+ 5. [App protection policy](./concept-conditional-access-grant.md#require-app-protection-policy)
+ 6. [Password change](./concept-conditional-access-grant.md#require-password-change)
+ 7. [Terms of use](concept-conditional-access-grant.md#terms-of-use)
+ 8. [Custom controls](./concept-conditional-access-grant.md#custom-controls-preview)
- Once all grant controls have been satisfied, apply session controls (App Enforced, Microsoft Defender for Cloud Apps, and token Lifetime) - Phase 2 of policy evaluation occurs for all enabled policies.
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
The following options are available to include when creating a Conditional Acces
- All guest and external users - This selection includes any [B2B guests and external users](../external-identities/external-identities-overview.md) including any user with the `user type` attribute set to `guest`. This selection also applies to any external user signed-in from a different organization like a Cloud Solution Provider (CSP). - Directory roles
- - Allows administrators to select specific [built-in Azure AD directory roles](../roles/permissions-reference.md) used to determine policy assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role. Other role types aren't supported, including administrative unit-scoped roles and custom roles.
+ - Allows administrators to select specific [built-in Azure AD directory roles](../roles/permissions-reference.md) used to determine policy assignment. For example, organizations may create a more restrictive policy on users assigned the Global Administrator role. Other role types aren't supported, including administrative unit-scoped roles and custom roles.
- Users and groups - Allows targeting of specific sets of users. For example, organizations can select a group that contains all members of the HR department when an HR app is selected as the cloud app. A group can be any type of user group in Azure AD, including dynamic or assigned security and distribution groups. Policy will be applied to nested users and groups.
The following options are available to exclude when creating a Conditional Acces
- All guest and external users - This selection includes any B2B guests and external users including any user with the `user type` attribute set to `guest`. This selection also applies to any external user signed-in from a different organization like a Cloud Solution Provider (CSP). - Directory roles
- - Allows administrators to select specific Azure AD directory roles used to determine assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role.
+ - Allows administrators to select specific Azure AD directory roles used to determine assignment. For example, organizations may create a more restrictive policy on users assigned the Global Administrator role.
- Users and groups - Allows targeting of specific sets of users. For example, organizations can select a group that contains all members of the HR department when an HR app is selected as the cloud app. A group can be any type of group in Azure AD, including dynamic or assigned security and distribution groups. Policy will be applied to nested users and groups.
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
If you aren't using CAE-capable clients, your default access token lifetime will
### User condition change flow
-In the following example, a Conditional Access administrator has configured a location based Conditional Access policy to only allow access from specific IP ranges:
+In the following example, a Conditional Access Administrator has configured a location-based Conditional Access policy to only allow access from specific IP ranges:
![User condition event flow](./media/concept-continuous-access-evaluation/user-condition-change-flow.png)
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/faqs.md
-+ # Azure Active Directory Conditional Access FAQs
Microsoft Teams relies heavily on Exchange Online and SharePoint Online for core
Microsoft Teams also is supported separately as a cloud app in Conditional Access policies. Conditional Access policies that are set for a cloud app apply to Microsoft Teams when a user signs in. However, without the correct policies on other apps like Exchange Online and SharePoint Online users may still be able to access those resources directly.
-Microsoft Teams desktop clients for Windows and Mac support modern authentication. Modern authentication brings sign-in based on the Azure Active Directory Authentication Library (ADAL) to Microsoft Office client applications across platforms.
+Microsoft Teams desktop clients for Windows and Mac support modern authentication. Modern authentication brings sign-in to Microsoft Office client applications across platforms.
For more information, see the article, [Conditional Access service dependencies](service-dependencies.md) and consider targeting policies to the [Office 365 app](concept-conditional-access-cloud-apps.md#office-365) instead.
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
To enable the insights and reporting workbook, your tenant must have a Log Analy
The following roles can access insights and reporting: -- Conditional Access administrator
+- Conditional Access Administrator
- Security reader - Security administrator -- Global reader -- Global administrator
+- Global Reader
+- Global Administrator
Users also need one of the following Log Analytics workspace roles:
You can also investigate the sign-ins of a specific user by searching for sign-i
To configure a Conditional Access policy in report-only mode:
-1. Sign into the **Azure portal** as a Conditional Access administrator, security administrator, or global administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select an existing policy or create a new policy. 1. Under **Enable policy** set the toggle to **Report-only** mode.
active-directory Howto Conditional Access Policy Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
Organizations can choose to deploy this policy using the steps outlined below or
The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multifactor authentication.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. 1. Under **Include**, select **Directory roles** and choose built-in roles like:
- - Global administrator
+ - Global Administrator
- Application administrator - Authentication Administrator - Billing administrator - Cloud application administrator
- - Conditional Access administrator
+ - Conditional Access Administrator
- Exchange administrator - Helpdesk administrator - Password administrator
active-directory Howto Conditional Access Policy All Users Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md
Organizations can choose to deploy this policy using the steps outlined below or
The following steps will help create a Conditional Access policy to require all users do multifactor authentication.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
The following steps will help create a Conditional Access policy to require user
> [!CAUTION] > Make sure you understand how Conditional Access works before setting up a policy to manage access to Microsoft Azure Management. Make sure you don't create conditions that could block your own access to the portal.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Block Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-access.md
The following steps will help create Conditional Access policies to block access
The first policy blocks access to all apps except for Microsoft 365 applications if not on a trusted location.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Block Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md
Organizations can choose to deploy this policy using the steps outlined below or
The following steps will help create a Conditional Access policy to block legacy authentication requests. This policy is put in to [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Compliant Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md
Organizations can choose to deploy this policy using the steps outlined below or
The following steps will help create a Conditional Access policy to require devices accessing resources be marked as compliant with your organization's Intune compliance policies.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
With the location condition in Conditional Access, you can control access to you
## Define locations
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. 1. Choose **New location**. 1. Give your location a name.
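As an alternative to the portal steps, a named location can also be created through Microsoft Graph PowerShell. This is a sketch under assumptions: the display name and CIDR range are placeholders, and the scope shown is the documented Conditional Access write permission:

```powershell
# Minimal sketch: create a trusted IP-based named location via Microsoft Graph.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$body = @{
    '@odata.type' = '#microsoft.graph.ipNamedLocation'
    displayName   = 'Corporate egress IPs'      # placeholder name
    isTrusted     = $true
    ipRanges      = @(
        @{ '@odata.type' = '#microsoft.graph.iPv4CidrRange'; cidrAddress = '203.0.113.0/24' }
    )
}
New-MgIdentityConditionalAccessNamedLocation -BodyParameter $body
```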
More information about the location condition in Conditional Access can be found
## Create a Conditional Access policy
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
The following policy applies to the selected users, who attempt to register usin
1. Under **Exclude**. 1. Select **All guest and external users**.
- 1. Select **Directory roles** and choose **Global administrator**
+ 1. Select **Directory roles** and choose **Global Administrator**
> [!NOTE] > Temporary Access Pass does not work for guest users.
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Organizations can choose to deploy this policy using the steps outlined below or
## Enable with Conditional Access policy
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Organizations can choose to deploy this policy using the steps outlined below or
## Enable with Conditional Access policy
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
To make sure that your policy works as expected, the recommended best practice i
### Policy 1: Sign-in frequency control
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
To make sure that your policy works as expected, the recommended best practice i
### Policy 2: Persistent browser session
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
To make sure that your policy works as expected, the recommended best practice i
### Policy 3: Sign-in frequency control every time risky user
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
The following steps will help create a Conditional Access policy requiring an ap
Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
After confirming your settings using [report-only mode](howto-conditional-access
This policy will block all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
Microsoft provides [security defaults](../fundamentals/concept-fundamentals-secu
### Prerequisites * A working Azure AD tenant with Azure AD Premium or trial license enabled. If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An account with Conditional Access administrator privileges.
+* An account with Conditional Access Administrator privileges.
* A test user (non-administrator) that allows you to verify policies work as expected before you impact real users. If you need to create a user, see [Quickstart: Add new users to Azure Active Directory](../fundamentals/add-users-azure-active-directory.md). * A group that the non-administrator user is a member of. If you need to create a group, see [Create a group and add members in Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md).
active-directory Require Tou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/require-tou.md
This section provides you with the steps to create a sample ToU. When you create
1. In Microsoft Word, create a new document. 1. Type **My terms of use**, and then save the document on your computer as **mytou.pdf**.
-1. Sign in to your [Azure portal](https://portal.azure.com) as global administrator, security administrator, or a Conditional Access administrator.
+1. Sign in to your [Azure portal](https://portal.azure.com) as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Search for and select **Azure Active Directory**. From the menu on the left-hand side select **Security**. ![Azure Active Directory](./media/require-tou/02.png)
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Azure AD terms of use policies use the PDF format to present content. The PDF fi
Once you've completed your terms of use policy document, use the following procedure to add it.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select **New terms**.
If you want to view more activity, Azure AD terms of use policies include audit
To get started with Azure AD audit logs, use the following procedure:
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select a terms of use policy. 1. Select **View audit logs**.
Users can review and see the terms of use policies that they've accepted by usin
You can edit some details of terms of use policies, but you can't modify an existing document. The following procedure describes how to edit the details.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit terms**.
You can edit some details of terms of use policies, but you can't modify an exis
## Update the version or pdf of an existing terms of use
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit terms**.
You can edit some details of terms of use policies, but you can't modify an exis
## View previous versions of a ToU
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy for which you want to view a version history. 1. Select **Languages and version history**
You can edit some details of terms of use policies, but you can't modify an exis
## See who has accepted each version
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. To see who has currently accepted the ToU, select the number under the **Accepted** column for the ToU you want. 1. By default, the next page will show you the current state of each user's acceptance to the ToU
You can edit some details of terms of use policies, but you can't modify an exis
The following procedure describes how to add a ToU language.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit Terms**
If a user is using a browser that isn't supported, they'll be asked to use a diffe
You can delete old terms of use policies using the following procedure.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to remove. 1. Select **Delete terms**.
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
More information can be found about the problem by clicking **More Details** in
To find out which Conditional Access policy or policies applied and why, do the following:
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or global reader.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Global Reader.
1. Browse to **Azure Active Directory** > **Sign-ins**. 1. Find the event for the sign-in to review. Add or remove filters and columns to filter out unnecessary information. 1. Add filters to narrow the scope:
active-directory Troubleshoot Policy Changes Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md
Find these options in the **Azure portal** > **Azure Active Directory**, **Diagn
## Use the audit log
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Audit logs**. 1. Select the **Date** range you want to query in. 1. Select **Activity** and choose one of the following
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
This preview enables blocking service principals from outside of trusted public
Create a location-based Conditional Access policy that applies to service principals.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
Create a location based Conditional Access policy that applies to service princi
:::image type="content" source="media/workload-identity/conditional-access-workload-identity-risk-policy.png" alt-text="Creating a Conditional Access policy with a workload identity and risk as a condition." lightbox="media/workload-identity/conditional-access-workload-identity-risk-policy.png":::
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Quickstart Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-create-new-tenant.md
Previously updated : 02/15/2021 Last updated : 09/08/2022
This quickstart addresses two scenarios for the type of app you want to build:
## Work and school accounts, or personal Microsoft accounts
-To build an environment for either work and school accounts or personal Microsoft accounts, you can use an existing Azure AD tenant or create a new one.
+To build an environment for either work and school accounts or personal Microsoft accounts (MSA), you can use an existing Azure AD tenant or create a new one.
### Use an existing Azure AD tenant Many developers already have tenants through services or subscriptions that are tied to Azure AD tenants, such as Microsoft 365 or Azure subscriptions.
If you don't already have an Azure AD tenant or if you want to create a new one
You'll provide the following information to create your new tenant:
+- **Tenant type** - Choose between an Azure AD and Azure AD B2C tenant
- **Organization name** - **Initial domain** - Initial domain `<domainname>.onmicrosoft.com` can't be edited or deleted. You can add a customized domain name later. - **Country or region**
You'll provide the following information to create your new tenant:
## Social and local accounts
-To begin building apps that sign in social and local accounts, create an Azure AD B2C tenant. To begin, see [Create an Azure AD B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md).
+To begin building external facing applications that sign in social and local accounts, create an Azure AD B2C tenant. To begin, see [Create an Azure AD B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md).
## Next steps
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50072 | UserStrongAuthEnrollmentRequiredInterrupt - User needs to enroll for second factor authentication (interactive). | | AADSTS50074 | UserStrongAuthClientAuthNRequiredInterrupt - Strong authentication is required and the user did not pass the MFA challenge. | | AADSTS50076 | UserStrongAuthClientAuthNRequired - Due to a configuration change made by the admin, or because you moved to a new location, the user must use multi-factor authentication to access the resource. Retry with a new authorize request for the resource. |
+| AADSTS50078 | UserStrongAuthExpired - Presented multi-factor authentication has expired due to policies configured by your administrator; you must refresh your multi-factor authentication to access '{resource}'. |
| AADSTS50079 | UserStrongAuthEnrollmentRequired - Due to a configuration change made by the administrator, or because the user moved to a new location, the user is required to use multi-factor authentication. | | AADSTS50085 | Refresh token needs social IDP login. Have user try signing-in again with username -password | | AADSTS50086 | SasNonRetryableError |
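Client code usually receives these codes embedded in the `error_description` field of a token endpoint response, so a simple pattern match is enough to branch on them. The following is a minimal sketch; the JSON below is a hand-made stand-in for a real error response, not actual service output:

```powershell
# Minimal sketch: pull the AADSTS code out of a token endpoint error payload.
$json = '{"error":"interaction_required","error_description":"AADSTS50076: Due to a configuration change made by the admin, you must use multi-factor authentication."}'
$response = $json | ConvertFrom-Json

if ($response.error_description -match 'AADSTS(\d+)') {
    switch ($Matches[1]) {
        '50076' { 'User must complete an interactive MFA challenge.' }
        '50078' { 'Previous MFA has expired; refresh it interactively.' }
        default { "Unhandled error code AADSTS$($Matches[1])." }
    }
}
```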
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
This article explains how the local administrators membership update works and h
When you connect a Windows device with Azure AD using an Azure AD join, Azure AD adds the following security principals to the local administrators group on the device: -- The Azure AD global administrator role
+- The Azure AD Global Administrator role
- The Azure AD joined device local administrator role - The user performing the Azure AD join
By adding Azure AD roles to the local administrators group, you can update the u
## Manage the global administrators role
-To view and update the membership of the global administrator role, see:
+To view and update the membership of the Global Administrator role, see:
- [View all members of an administrator role in Azure Active Directory](../roles/manage-roles-portal.md) - [Assign a user to administrator roles in Azure Active Directory](../fundamentals/active-directory-users-assign-role-azure-portal.md)
To view and update the membership of the global administrator role, see:
In the Azure portal, you can manage the device administrator role from **Device settings**.
-1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
1. Browse to **Azure Active Directory** > **Devices** > **Device settings**. 1. Select **Manage Additional local administrators on all Azure AD joined devices**. 1. Select **Add assignments** then choose the other administrators you want to add and select **Add**.
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
The following diagrams illustrate the underlying details in issuing, renewing, a
| :: | | | A | User enters their password in the sign in UI. LogonUI passes the credentials in an auth buffer to LSA, which in turn passes it internally to CloudAP. CloudAP forwards this request to the CloudAP plugin. | | B | CloudAP plugin initiates a realm discovery request to identify the identity provider for the user. If the user's tenant has a federation provider setup, Azure AD returns the federation provider's Metadata Exchange (MEX) endpoint. If not, Azure AD returns that the user is managed, indicating that the user can authenticate with Azure AD. |
-| C | If the user is managed, CloudAP will get the nonce from Azure AD. If the user is federated, CloudAP plugin requests a SAML token from the federation provider with the user's credentials. Once it receives, the SAML token, it requests a nonce from Azure AD. |
+| C | If the user is managed, CloudAP will get the nonce from Azure AD. If the user is federated, CloudAP plugin requests a SAML token from the federation provider with the user's credentials. The nonce is requested before the SAML token is sent to Azure AD. |
| D | CloudAP plugin constructs the authentication request with the user's credentials, nonce, and a broker scope, signs the request with the Device key (dkpriv) and sends it to Azure AD. In a federated environment, CloudAP plugin uses the SAML token returned by the federation provider instead of the user's credentials. | | E | Azure AD validates the user credentials, the nonce, and device signature, verifies that the device is valid in the tenant and issues the encrypted PRT. Along with the PRT, Azure AD also issues a symmetric key, called the Session key, encrypted by Azure AD using the Transport key (tkpub). In addition, the Session key is also embedded in the PRT. This Session key acts as the Proof-of-possession (PoP) key for subsequent requests with the PRT. | | F | CloudAP plugin passes the encrypted PRT and Session key to CloudAP. CloudAP requests the TPM to decrypt the Session key using the Transport key (tkpriv) and re-encrypt it using the TPM's own key. CloudAP stores the encrypted Session key in its cache along with the PRT. |
The following diagrams illustrate the underlying details in issuing, renewing, a
| A | User enters their password in the sign in UI. LogonUI passes the credentials in an auth buffer to LSA, which in turn passes it internally to CloudAP. CloudAP forwards this request to the CloudAP plugin. | | B | If the user has previously logged on to the device, Windows initiates cached sign in and validates credentials to log the user in. Every 4 hours, the CloudAP plugin initiates PRT renewal asynchronously. | | C | CloudAP plugin initiates a realm discovery request to identify the identity provider for the user. If the user's tenant has a federation provider setup, Azure AD returns the federation provider's Metadata Exchange (MEX) endpoint. If not, Azure AD returns that the user is managed, indicating that the user can authenticate with Azure AD. |
-| D | If the user is federated, CloudAP plugin requests a SAML token from the federation provider with the user's credentials. Once it receives, the SAML token, it requests a nonce from Azure AD. If the user is managed, CloudAP will directly get the nonce from Azure AD. |
+| D | If the user is federated, CloudAP plugin requests a SAML token from the federation provider with the user's credentials. The nonce is requested before the SAML token is sent to Azure AD. If the user is managed, CloudAP will directly get the nonce from Azure AD. |
| E | CloudAP plugin constructs the authentication request with the user's credentials, nonce, and the existing PRT, signs the request with the Session key and sends it to Azure AD. In a federated environment, CloudAP plugin uses the SAML token returned by the federation provider instead of the user's credentials. | | F | Azure AD validates the Session key signature by comparing it against the Session key embedded in the PRT, validates the nonce and verifies that the device is valid in the tenant, and issues a new PRT. As seen before, the PRT is again accompanied with the Session key encrypted by the Transport key (tkpub). | | G | CloudAP plugin passes the encrypted PRT and Session key to CloudAP. CloudAP requests the TPM to decrypt the Session key using the Transport key (tkpriv) and re-encrypt it using the TPM's own key. CloudAP stores the encrypted Session key in its cache along with the PRT. |
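On a device, the end result of the flows above can be confirmed with the built-in `dsregcmd` tool; this is a minimal check, and the exact field names in its output can vary by Windows build:

```powershell
# Minimal sketch: confirm a PRT was issued to the signed-in user on this device.
dsregcmd /status | Select-String -Pattern 'AzureAdPrt'
# Expected output resembles:  AzureAdPrt : YES
```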
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
Bringing your devices to Azure AD maximizes user productivity through single sig
- [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) version 1.1.819.0 or later. - Don't exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10). - If the computer objects of the devices you want to be hybrid Azure AD joined belong to specific organizational units (OUs), configure the correct OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Organizational unitΓÇôbased filtering](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering).-- Global administrator credentials for your Azure AD tenant.
+- Global Administrator credentials for your Azure AD tenant.
- Enterprise administrator credentials for each of the on-premises Active Directory Domain Services forests. - (**For federated domains**) At least Windows Server 2012 R2 with Active Directory Federation Services installed. - Users can register their devices with Azure AD. More information about this setting can be found under the heading **Configure device settings**, in the article, [Configure device settings](device-management-azure-portal.md#configure-device-settings).
Configure hybrid Azure AD join by using Azure AD Connect for a managed domain:
1. Start Azure AD Connect, and then select **Configure**. 1. In **Additional tasks**, select **Configure device options**, and then select **Next**. 1. In **Overview**, select **Next**.
-1. In **Connect to Azure AD**, enter the credentials of a global administrator for your Azure AD tenant.
+1. In **Connect to Azure AD**, enter the credentials of a Global Administrator for your Azure AD tenant.
1. In **Device options**, select **Configure Hybrid Azure AD join**, and then select **Next**. 1. In **Device operating systems**, select the operating systems that devices in your Active Directory environment use, and then select **Next**. 1. In **SCP configuration**, for each forest where you want Azure AD Connect to configure the SCP, complete the following steps, and then select **Next**.
Configure hybrid Azure AD join by using Azure AD Connect for a federated environ
1. Start Azure AD Connect, and then select **Configure**. 1. On the **Additional tasks** page, select **Configure device options**, and then select **Next**. 1. On the **Overview** page, select **Next**.
-1. On the **Connect to Azure AD** page, enter the credentials of a global administrator for your Azure AD tenant, and then select **Next**.
+1. On the **Connect to Azure AD** page, enter the credentials of a Global Administrator for your Azure AD tenant, and then select **Next**.
1. On the **Device options** page, select **Configure Hybrid Azure AD join**, and then select **Next**. 1. On the **SCP** page, complete the following steps, and then select **Next**: 1. Select the forest.
active-directory Hybrid Azuread Join Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-manual.md
This article covers the manual configuration of requirements for hybrid Azure AD
- [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) version 1.1.819.0 or later. - To get device registration sync join to succeed, as part of the device registration configuration, don't exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10). - If the computer objects of the devices you want to be hybrid Azure AD joined belong to specific organizational units (OUs), configure the correct OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Organizational unitΓÇôbased filtering](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering).-- Global administrator credentials for your Azure AD tenant.
+- Global Administrator credentials for your Azure AD tenant.
- Enterprise administrator credentials for each of the on-premises Active Directory Domain Services forests.
- (**For federated domains**) Windows Server 2012 R2 with Active Directory Federation Services installed.
- Users can register their devices with Azure AD. More information about this setting can be found under the heading **Configure device settings**, in the article, [Configure device settings](device-management-azure-portal.md#configure-device-settings).
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
The following diagram provides a high-level overview of how Azure AD Application
You can manage the on-premises B2B user objects through lifecycle management policies. For example:
- You can set up multi-factor authentication (MFA) policies for the Guest user so that MFA is used during Application Proxy authentication. For more information, see [Conditional Access for B2B collaboration users](authentication-conditional-access.md).
-- Any sponsorships, access reviews, account verifications, etc. that are performed on the cloud B2B user applies to the on-premises users. For example, if the cloud user is deleted through your lifecycle management policies, the on-premises user is also deleted by MIM Sync or through Azure AD Connect sync. For more information, see [Manage guest access with Azure AD access reviews](../governance/manage-guest-access-with-access-reviews.md).
+- Any sponsorships, access reviews, account verifications, etc. that are performed on the cloud B2B user apply to the on-premises users. For example, if the cloud user is deleted through your lifecycle management policies, the on-premises user is also deleted by MIM Sync or through the Azure AD B2B script. For more information, see [Manage guest access with Azure AD access reviews](../governance/manage-guest-access-with-access-reviews.md).
### Create B2B guest user objects through an Azure AD B2B script
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
You can usually leave an organization on your own without having to contact an a
1. To view the organizations you belong to, first open your **My Account** page by doing one of the following:
   - If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
- - If you're using a personal account, go to https://myapps.microsoft.com and sign in, and then select your account icon in the upper right and select **View account**. Or, use a My Account URL that includes your tenant information to go directly to your My Account page (examples are shown in the following note).
-
- > [!NOTE]
- > If you use the email one-time passcode feature when signing in, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: `https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com` or `https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789`.
+ - If you're using a personal account or email one-time passcode, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com or https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789.
1. Select **Organizations** from the left navigation pane or select the **Manage organizations** link from the **Organizations** block.
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
You can block external users from accessing specific sets of resources with Cond
To create a policy that blocks access for external users to a set of applications:
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_FinanceApps.
After confirming your settings using [report-only mode](../conditional-access/ho
There may be times you want to block external users except a specific group. For example, you may want to block all external users except those working for the finance team from the finance applications. To do this, [create a security group](active-directory-groups-create-azure-portal.md) to contain the external users who should access the finance applications:
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_AllButFinance.
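The same kind of policy can also be created programmatically through the Microsoft Graph conditional access API. The following Python sketch is illustrative only: it assumes an app registration that has been granted the Policy.ReadWrite.ConditionalAccess application permission, and the tenant ID, client ID, client secret, and application ID values are placeholders. The policy is created in report-only mode, consistent with the guidance above.

```python
import msal
import requests

# Placeholders: supply your own tenant, app registration, and secret.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
access_token = result["access_token"]  # raises KeyError if auth failed

# Report-only policy that blocks guests/external users for a specific app.
policy = {
    "displayName": "ExternalAccess_Block_FinanceApps",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "users": {"includeUsers": ["GuestsOrExternalUsers"]},
        "applications": {"includeApplications": ["<finance-app-id>"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {access_token}"},
    json=policy,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the new Conditional Access policy
```

Creating the policy in the `enabledForReportingButNotEnforced` state lets you evaluate its impact before you switch `state` to `enabled`.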
active-directory Add Users Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-users-azure-active-directory.md
If you have an environment with both Azure Active Directory (cloud) and Windows
You can delete an existing user by using the Azure Active Directory portal.

>[!Note]
->You must have a Global administrator or User administrator role assignment to delete users in your organization. Global admins can delete any users including other admins. User administrators can delete any non-admin users, Helpdesk administrators and other User administrators. For more information, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
+>You must have a Global administrator, Privileged authentication administrator, or User administrator role assignment to delete users in your organization. Global admins and Privileged authentication admins can delete any users, including other admins. User administrators can delete any non-admin users, Helpdesk administrators, and other User administrators. For more information, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
To delete a user, follow these steps:
After you've added your users, you can do the following basic processes:
- [Work with dynamic groups and users](../enterprise-users/groups-create-rule.md)
-Or you can do other user management tasks, such as [adding guest users from another directory](../external-identities/what-is-b2b.md) or [restoring a deleted user](active-directory-users-restore.md). For more information about other available actions, see [Azure Active Directory user management documentation](../enterprise-users/index.yml).
+Or you can do other user management tasks, such as [adding guest users from another directory](../external-identities/what-is-b2b.md) or [restoring a deleted user](active-directory-users-restore.md). For more information about other available actions, see [Azure Active Directory user management documentation](../enterprise-users/index.yml).
active-directory Road To The Cloud Establish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-establish.md
Title: Road to the cloud - Establish a footprint for moving identity and access management from AD to Azure AD
-description: Establish an Azure AD footprint as part of planning your migration of IAM from AD to Azure AD.
+ Title: Road to the cloud - Establish a footprint for moving identity and access management from Active Directory to Azure AD
+description: Establish an Azure AD footprint as part of planning your migration of IAM from Active Directory to Azure AD.
documentationCenter: ''
# Establish an Azure AD footprint
+Before you migrate identity and access management (IAM) from Active Directory to Azure Active Directory (Azure AD), you need to set up Azure AD.
+ ## Required tasks
-If you're using Microsoft Office 365, Exchange Online, or Teams then you are already using Azure AD. If you do, your next step is to establish more Azure AD capabilities.
+If you're using Microsoft Office 365, Exchange Online, or Teams, then you're already using Azure AD. Your next step is to establish more Azure AD capabilities:
-* Establish hybrid identity synchronization between AD and Azure AD using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) or [Azure AD Connect Cloud Sync](../cloud-sync/what-is-cloud-sync.md).
+* Establish hybrid identity synchronization between Active Directory and Azure AD by using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) or [Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md).
-* [Select authentication methods](../hybrid/choose-ad-authn.md). We strongly recommend password hash synchronization (PHS).
+* [Select authentication methods](../hybrid/choose-ad-authn.md). We strongly recommend password hash synchronization.
-* Secure your hybrid identity infrastructure by following [Secure your Azure AD identity infrastructure - Azure Active Directory](../../security/fundamentals/steps-secure-identity.md)
+* Secure your hybrid identity infrastructure by following [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md).
## Optional tasks
-The following aren't specific or mandatory to transforming from AD to Azure AD but are recommended functions to incorporate into your environment. These are also items recommended in the [Zero Trust](/security/zero-trust/) guidance.
+The following functions aren't specific or mandatory to move from Active Directory to Azure AD, but we recommend incorporating them into your environment. These items are also recommended in the [Zero Trust](/security/zero-trust/) guidance.
-### Deploy Passwordless authentication
+### Deploy passwordless authentication
-In addition to the security benefits of [passwordless credentials](../authentication/concept-authentication-passwordless.md), this simplifies your environment because the management and registration experience is already native to the cloud. Azure AD provides different passwordless credentials that align with different use cases. Use the information in this document to plan your deployment: [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md)
+In addition to the security benefits of [passwordless credentials](../authentication/concept-authentication-passwordless.md), passwordless authentication simplifies your environment because the management and registration experience is already native to the cloud. Azure AD provides passwordless credentials that align with various use cases. Use the information in this article to plan your deployment: [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md).
-Once you roll out passwordless credentials to your users, consider reducing the use of password credentials. You can use the [reporting and Insights dashboard](../authentication/howto-authentication-methods-activity.md) to continue to drive use of passwordless credentials and reduce use of passwords in Azure AD.
+After you roll out passwordless credentials to your users, consider reducing the use of password credentials. You can use the [reporting and insights dashboard](../authentication/howto-authentication-methods-activity.md) to continue to drive the use of passwordless credentials and reduce the use of passwords in Azure AD.
>[!IMPORTANT]
>During your application discovery, you might find applications that have a dependency or assumptions around passwords. Users of these applications need to have access to their passwords until those applications are updated or migrated.

### Configure hybrid Azure AD join for existing Windows clients
-You can configure hybrid Azure AD join for existing AD joined Windows clients to benefit from cloud-based security features such as [co-management](/mem/configmgr/comanage/overview), conditional access, and Windows Hello for Business. New devices should be Azure AD joined and not hybrid Azure AD joined.
+You can configure hybrid Azure AD join for existing Active Directory-joined Windows clients to benefit from cloud-based security features such as [co-management](/mem/configmgr/comanage/overview), conditional access, and Windows Hello for Business. New devices should be Azure AD joined and not hybrid Azure AD joined.
-To learn more, check: [Plan your hybrid Azure Active Directory join deployment](../devices/hybrid-azuread-join-plan.md)
+To learn more, check [Plan your hybrid Azure Active Directory join implementation](../devices/hybrid-azuread-join-plan.md).
## Next steps
active-directory Road To The Cloud Implement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-implement.md
Title: Road to the cloud - Implementing a cloud-first approach when moving identity and access management from AD to Azure AD
-description: Implement a cloud-first approach as part of planning your migration if IAM from AD to Azure AD.
+ Title: Road to the cloud - Implement a cloud-first approach when moving identity and access management from Active Directory to Azure AD
+description: Implement a cloud-first approach as part of planning your migration of IAM from Active Directory to Azure AD.
documentationCenter: ''
-# Implement cloud first approach
+# Implement a cloud-first approach
-This is mainly a process and policy driven phase to stop, or limit as much as possible, adding new dependencies to AD and implement a cloud-first approach for new demand of IT solutions.
+This phase is mainly process and policy driven. The goal is to stop, or limit as much as possible, adding new dependencies to Active Directory and to implement a cloud-first approach for new IT solutions.
-It's key at this point to identify the internal processes that would lead to adding new dependencies on AD. For example, most organizations would have a change management process that has to be followed before new scenarios/features/solutions are implemented. We strongly recommend making sure that these change approval processes are updated to include a step to evaluate whether the proposed change would add new dependencies on AD and request the evaluation of Azure AD alternatives when possible.
+It's key at this point to identify the internal processes that would lead to adding new dependencies on Active Directory. For example, most organizations would have a change management process that has to be followed before the implementation of new scenarios, features, and solutions. We strongly recommend making sure that these change approval processes are updated to:
+
+- Include a step to evaluate whether the proposed change would add new dependencies on Active Directory.
+- Request the evaluation of Azure Active Directory (Azure AD) alternatives when possible.
## Users and groups

You can enrich user attributes in Azure AD to make more user attributes available for inclusion. Examples of common scenarios that require rich user attributes include:
-* App provisioning - The data source of app provisioning is Azure AD and necessary user attributes must be in there.
+* App provisioning: The data source of app provisioning is Azure AD, and necessary user attributes must be in there.
-* Application authorization - Token issued by Azure AD can include claims generated from user attributes so that applications can make authorization decision based on the claims in token.
+* Application authorization: A token that Azure AD issues can include claims generated from user attributes so that applications can make authorization decisions based on the claims in the token.
-* Group membership population and maintenance - Dynamic groups enables dynamic population of group membership based on user attributes such as department information.
+* Group membership population and maintenance: Dynamic groups enable dynamic population of group membership based on user attributes, such as department information.
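As a concrete illustration of attribute-driven membership, the following Python sketch creates a dynamic security group through Microsoft Graph. It's a minimal example under stated assumptions: the access token placeholder must carry the Group.ReadWrite.All permission, and the department-based membership rule is only an example.

```python
import requests

# Assumption: ACCESS_TOKEN is a Microsoft Graph token with Group.ReadWrite.All.
ACCESS_TOKEN = "<access-token>"

# Security group whose membership is computed from the user.department attribute.
group = {
    "displayName": "Sales team (dynamic)",
    "mailEnabled": False,
    "mailNickname": "salesdynamic",
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": '(user.department -eq "Sales")',
    "membershipRuleProcessingState": "On",  # evaluate the rule immediately
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=group,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the new dynamic group
```

Because membership is recalculated from user attributes, users whose `department` changes to or from `Sales` are added or removed automatically.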
These two links provide guidance on making schema changes:
* [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md)
-These links provide additional information on this topic but are not specific to changing the schema:
+These links provide more information on this topic but are not specific to changing the schema:
* [Use Azure AD schema extension attributes in claims - Microsoft identity platform](../develop/active-directory-schema-extensions.md)
-* [What are custom security attributes in Azure AD? (Preview) - Azure Active Directory](../fundamentals/custom-security-attributes-overview.md)
+* [What are custom security attributes in Azure AD (preview)?](../fundamentals/custom-security-attributes-overview.md)
-* [Tutorial - Customize Azure Active Directory attribute mappings in Application Provisioning](../app-provisioning/customize-application-attributes.md)
+* [Customize Azure Active Directory attribute mappings in application provisioning](../app-provisioning/customize-application-attributes.md)
* [Provide optional claims to Azure AD apps - Microsoft identity platform](../develop/active-directory-optional-claims.md)
-These links provide additional information relevant to groups:
-
-* [Create or edit a dynamic group and get status - Azure AD](../enterprise-users/groups-create-rule.md)
+These links provide more information about groups:
-* Use dynamic groups for automated group management
+* [Create or edit a dynamic group and get status in Azure AD](../enterprise-users/groups-create-rule.md)
-* Use self-service groups for user-initiated group management
+* [Use self-service groups for user-initiated group management](../enterprise-users/groups-self-service-management.md)
-* For application access, consider using [scope provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) or [entitlement management](../governance/entitlement-management-overview.md)
+* [Attribute-based application provisioning with scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) or [What is Azure AD entitlement management?](../governance/entitlement-management-overview.md) (for application access)
-For more information on group types, see [Compare groups](/microsoft-365/admin/create-groups/compare-groups).
+* [Compare groups](/microsoft-365/admin/create-groups/compare-groups)
-* Use external identities for collaboration with other organizations - stop creating accounts of external users in on-premises directories
+* [Restrict guest access permissions in Azure Active Directory](../enterprise-users/users-restrict-guest-permissions.md)
-You and your team might feel compelled to change your current employee provisioning to use cloud-only accounts at this stage. The effort is non-trivial and doesn't provide enough business value to warrant the effort. We recommend you plan this transition at a different phase of your transformation.
+You and your team might feel compelled to change your current employee provisioning to use cloud-only accounts at this stage. The effort is non-trivial but doesn't provide enough business value. We recommend that you plan this transition at a different phase of your transformation.
## Devices
-Client workstations are traditionally joined to AD and managed via group policy (GPO) and/or device management solutions such as Microsoft Endpoint Configuration Manager (MECM). Your teams will establish a new policy and process to prevent newly deployed workstations from being domain-joined going forward. Key points include:
+Client workstations are traditionally joined to Active Directory and managed via Group Policy objects (GPOs) or device management solutions such as Microsoft Endpoint Configuration Manager. Your teams will establish a new policy and process to prevent newly deployed workstations from being domain joined. Key points include:
-* Mandate [Azure AD join](../devices/concept-azure-ad-join.md) for new Windows client workstations to achieve "No more domain join"
+* Mandate [Azure AD join](../devices/concept-azure-ad-join.md) for new Windows client workstations to achieve "no more domain join."
-* Manage workstations from cloud by using Unified Endpoint Management (UEM) solutions such as [Intune](/mem/intune/fundamentals/what-is-intune)
+* Manage workstations from the cloud by using unified endpoint management (UEM) solutions such as [Intune](/mem/intune/fundamentals/what-is-intune).
-[Windows Autopilot](/mem/autopilot/windows-autopilot) is highly recommended to establish a streamlined onboarding and device provisioning, which can enforce these directives.
+[Windows Autopilot](/mem/autopilot/windows-autopilot) can help you establish streamlined onboarding and device provisioning, which can enforce these directives.
-For more information, see [Learn more about cloud-native endpoints](/mem/cloud-native-endpoints-overview)
+For more information, see [Learn more about cloud-native endpoints](/mem/cloud-native-endpoints-overview).
## Applications
-Traditionally, application servers are often joined to an on-premises Active Directory domain so that they can utilize Windows Integrated Authentication (Kerberos or NTLM), directory queries using LDAP and server management using Group Policy or Microsoft Endpoint Configuration Manager (MECM).
+Traditionally, application servers are often joined to an on-premises Active Directory domain so that they can use Windows Integrated Authentication (Kerberos or NTLM), directory queries through LDAP, and server management through GPO or Microsoft Endpoint Configuration Manager.
-The organization has a process to evaluate Azure AD alternatives when considering new services/apps/infrastructure. Directives for a cloud-first approach to applications should be as follows (new on-premises/legacy applications should be a rare exception when no modern alternative exists):
+The organization has a process to evaluate Azure AD alternatives when it's considering new services, apps, or infrastructure. Directives for a cloud-first approach to applications should be as follows. (New on-premises applications or legacy applications should be a rare exception when no modern alternative exists.)
-* Provide recommendation to change procurement policy and application development policy to require modern protocols (OIDC/OAuth2 and SAML) and authenticate using Azure AD. New apps should also support [Azure AD App Provisioning](../app-provisioning/what-is-hr-driven-provisioning.md) and have no dependency on LDAP queries. Exceptions require explicit review and approval.
+* Provide a recommendation to change the procurement policy and application development policy to require modern protocols (OIDC/OAuth2 and SAML) and authenticate by using Azure AD. New apps should also support [Azure AD app provisioning](../app-provisioning/what-is-hr-driven-provisioning.md) and have no dependency on LDAP queries. Exceptions require explicit review and approval.
-> [!IMPORTANT]
-> Depending on anticipated demand of application that require legacy protocols, when more current alternatives are not feasible you can choose to deploy [Azure AD Domain Services](../../active-directory-domain-services/overview.md).
+ > [!IMPORTANT]
+ > Depending on the anticipated demands of applications that require legacy protocols, you can choose to deploy [Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md) when more current alternatives won't work.
-* Provide a recommendation to create a policy to prioritize use of cloud native alternatives. The policy should limit deployment of new application servers to the domain. Common cloud native scenarios to replace AD joined servers include:
+* Provide a recommendation to create a policy to prioritize use of cloud-native alternatives. The policy should limit deployment of new application servers to the domain. Common cloud-native scenarios to replace Active Directory-joined servers include:
- * File servers
+ * File servers:
- * SharePoint / OneDrive - Collaboration support across Microsoft 365 solutions and built-in governance, risk, security, and compliance.
+ * SharePoint or OneDrive provides collaboration support across Microsoft 365 solutions and built-in governance, risk, security, and compliance.
- * [Azure Files](../../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry standard SMB or NFS protocol. Customers can use native [Azure AD authentication to Azure Files](../../virtual-desktop/create-profile-container-azure-ad.md) over the internet without line of sight to a DC.
+ * [Azure Files](../../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry-standard SMB or NFS protocol. Customers can use native [Azure AD authentication to Azure Files](../../virtual-desktop/create-profile-container-azure-ad.md) over the internet without line of sight to a domain controller.
- * Azure AD also works with third party applications in our [Application Gallery](/microsoft-365/enterprise/integrated-apps-and-azure-ads)
+ * Azure AD works with third-party applications in the Microsoft [application gallery](/microsoft-365/enterprise/integrated-apps-and-azure-ads).
- * Print Servers
+ * Print servers:
- * Mandate to procure [Universal Print](/universal-print/) compatible printers - [Partner Integrations](/universal-print/fundamentals/universal-print-partner-integrations)
+ * If your organization has a mandate to procure [Universal Print](/universal-print/)-compatible printers, see [Partner integrations](/universal-print/fundamentals/universal-print-partner-integrations).
- * Bridge with [Universal Print connector](/universal-print/fundamentals/universal-print-connector-overview) for non-compatible printers
+ * Bridge with the [Universal Print connector](/universal-print/fundamentals/universal-print-connector-overview) for incompatible printers.
## Next steps
active-directory Road To The Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-introduction.md
Title: Road to the cloud - Introduction to moving identity and access management from AD to Azure AD
-description: Introduction to planning your migration if IAM from AD to Azure AD.
+description: Learn how to plan a migration of IAM from Active Directory to Azure AD.
documentationCenter: ''
-# Introduction
+# Road to the cloud: Introduction
-Some organizations set goals to remove AD, and their on-premises IT footprint. Others take advantage of some cloud-based capabilities to reduce the AD footprint, but not to completely remove their on-premises environments. This content provides guidance to move:
+Some organizations set goals to remove Active Directory and their on-premises IT footprint. Others take advantage of some cloud-based capabilities to reduce the Active Directory footprint, but not to completely remove their on-premises environments.
- * **From** - Active Directory (AD) and other non-cloud based services, either hosted on-premises or Infrastructure-as-a-Service (IaaS), that provide identity management (IDM), identity and access management (IAM) and device management.
+This content provides guidance to move:
-* **To** - Azure Active Directory (Azure AD) and other Microsoft cloud native solutions for identity management (IDM), identity and access management (IAM), and device management.
+* *From* Active Directory and other non-cloud-based services, either on-premises or infrastructure as a service (IaaS), that provide identity management (IDM), identity and access management (IAM), and device management.
+
+* *To* Azure Active Directory (Azure AD) and other Microsoft cloud-native solutions for IDM, IAM, and device management.
>[!NOTE]
-> In this content, when we refer to AD, we are referring to Windows Server Active Directory Domain Services.
+> In this content, *Active Directory* refers to Windows Server Active Directory Domain Services.
-Transformation must be aligned with and achieve business objectives including increased productivity, reduced costs and complexity, and improved security posture. To better understand the costs vs. value of moving to the cloud, see [Forrester TEI for Microsoft Azure Active Directory](https://www.microsoft.com/security/business/forrester-tei-study) and other TEI reports and [Cloud economics](https://azure.microsoft.com/overview/cloud-economics/).
+Transformation must be aligned with and achieve business objectives, including increased productivity, reduced costs and complexity, and improved security posture. To better understand the costs versus value of moving to the cloud, see [Forrester TEI for Microsoft Azure Active Directory](https://www.microsoft.com/security/business/forrester-tei-study) and [Cloud economics](https://azure.microsoft.com/overview/cloud-economics/).
## Next steps
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md
Title: Road to the cloud - Moving identity and access management from AD to Azure AD migration workstream
-description: Learn to plan your migration workstream of IAM from AD to Azure AD.
+ Title: Road to the cloud - Move identity and access management from Active Directory to an Azure AD migration workstream
+description: Learn to plan your migration workstream of IAM from Active Directory to Azure AD.
documentationCenter: ''
# Transition to the cloud
-After aligning the organization towards halting growth of the AD footprint, you can focus on moving the existing on-premises workloads to Azure AD. This section describes the various migration workstreams. You can execute the workstreams in this section based on your priorities and resources.
+After you align your organization toward halting growth of the Active Directory footprint, you can focus on moving the existing on-premises workloads to Azure Active Directory (Azure AD). This article describes the various migration workstreams. You can execute the workstreams in this article based on your priorities and resources.
A typical migration workstream has the following stages:
-* **Discover**: find out what you currently have in your environment
+* **Discover**: Find out what you currently have in your environment.
-* **Pilot**: deploy new cloud capabilities to a small subset (of users, applications, or devices, depending on the workstream)
+* **Pilot**: Deploy new cloud capabilities to a small subset of users, applications, or devices, depending on the workstream.
-* **Scale Out**: expand the pilot out to complete the transition of a capability to the cloud
+* **Scale out**: Expand the pilot to complete the transition of a capability to the cloud.
-* **Cut-over (when applicable)**: stop using the old on-premises workload
+* **Cut over (when applicable)**: Stop using the old on-premises workload.
-## Users and Groups
+## Users and groups
### Enable password self-service

We recommend a [passwordless environment](../authentication/concept-authentication-passwordless.md). Until then, you can migrate password self-service workflows from on-premises systems to Azure AD to simplify your environment. Azure AD [self-service password reset (SSPR)](../authentication/concept-sspr-howitworks.md) gives users the ability to change or reset their password, with no administrator or help desk involvement.
-To enable self-service capabilities, choose the appropriate [authentication methods](../authentication/concept-authentication-methods.md) for your organization. Once the authentication methods are updated, you can enable user self-service password capability for your Azure AD authentication environment. For deployment guidance, see [Deployment considerations for Azure Active Directory self-service password reset](../authentication/howto-sspr-deployment.md)
+To enable self-service capabilities, choose the appropriate [authentication methods](../authentication/concept-authentication-methods.md) for your organization. After the authentication methods are updated, you can enable user self-service password capability for your Azure AD authentication environment. For deployment guidance, see [Deployment considerations for Azure Active Directory self-service password reset](../authentication/howto-sspr-deployment.md).
-**Additional considerations include**:
+Additional considerations include:
-* Deploy [Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md) in a subset of DCs with *Audit Mode* to gather information about impact of modern policies. For more guidance, see [Enable on-premises Azure Active Directory Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md).
-* Gradually register and enable [Combined registration for SSPR and Azure AD Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md). This enables both MFA and SSPR. For example, roll out by region, subsidiary, department, etc. for all users.
-* Go through a cycle of password change for all users to flush out weak passwords.
-* Once the cycle is complete, implement the policy expiration time.
+* Deploy [Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md) in a subset of domain controllers with **Audit** mode to gather information about the impact of modern policies.
+* Gradually enable [combined registration for SSPR and Azure AD Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md). For example, roll out by region, subsidiary, or department for all users.
+* Go through a cycle of password change for all users to flush out weak passwords. After the cycle is complete, implement the policy expiration time.
+* Switch the Password Protection configuration in the domain controllers from **Audit** mode to **Enforced** mode. For more information, see [Enable on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md).
-* Switch the "Password Protection" configuration in the DCs that have "Audit Mode" set to "Enforced mode" ([Enable on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md))
>[!NOTE]
->* End user communications and evangelizing are recommended for a smooth deployment. See [Sample SSPR rollout materials](https://www.microsoft.com/download/details.aspx?id=56768) to assist with required end-user communications and evangelizing.
->* For customers with Azure AD Identity Protection, enable [password reset as a control in Conditional Access policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) for risky users (users marked as risky through Identity Protection).
+>* We recommend user communications and evangelizing for a smooth deployment. See [Sample SSPR rollout materials](https://www.microsoft.com/download/details.aspx?id=56768).
+>* If you use Azure AD Identity Protection, enable [password reset as a control in Conditional Access policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) for users marked as risky.
-### Move groups management
+### Move management of groups
To transform groups and distribution lists:
-* For security groups, use your existing business logic that assigns users to security groups, migrate the logic and capability to Azure AD and dynamic groups.
+* For security groups, use your existing business logic that assigns users to security groups. Migrate the logic and capability to Azure AD and dynamic groups.
-* For self-managed group capabilities provided by Microsoft Identity Manager (MIM), replace the capability with self-service group management.
+* For self-managed group capabilities provided by Microsoft Identity Manager, replace the capability with self-service group management.
-* [Conversion of legacy distribution lists to Microsoft 365 groups](/microsoft-365/admin/manage/upgrade-distribution-lists) - You can upgrade distribution lists to Microsoft 365 groups in Outlook. This is a great way to give your organization's distribution lists all the features and functionality of Microsoft 365 groups.
+* You can [convert distribution lists to Microsoft 365 groups](/microsoft-365/admin/manage/upgrade-distribution-lists) in Outlook. This is a great way to give your organization's distribution lists all the features and functionality of Microsoft 365 groups.
* Upgrade your [distribution lists to Microsoft 365 groups in Outlook](https://support.microsoft.com/office/7fb3d880-593b-4909-aafa-950dd50ce188) and [decommission your on-premises Exchange server](/exchange/decommission-on-premises-exchange).

### Move provisioning of users and groups to applications
-This workstream will help you to simplify your environment by removing application provisioning flows from on-premises IDM systems such as Microsoft Identity Manager. Based on your application discovery, categorize your application based on the following:
+You can simplify your environment by removing application provisioning flows from on-premises identity management (IDM) systems such as Microsoft Identity Manager. Based on your application discovery, categorize your applications by the following characteristics:
-* Applications in your environment that have a provisioning integration with the [Azure AD Application Gallery](https://www.microsoft.com/security/business/identity-access-management/integrated-apps-azure-ad)
+* Applications in your environment that have a provisioning integration with the [Azure AD application gallery](https://www.microsoft.com/security/business/identity-access-management/integrated-apps-azure-ad).
-* Applications that aren't in the gallery but support the SCIM 2.0 protocol are natively compatible with Azure AD cloud provisioning service.
+* Applications that aren't in the gallery but support the SCIM 2.0 protocol. These applications are natively compatible with the Azure AD cloud provisioning service.
-* On-Premises applications that have an ECMA connector available, can be integrated with [Azure AD on-premises application provisioning](../app-provisioning/on-premises-application-provisioning-architecture.md)
+* On-premises applications that have an ECMA connector available. These applications can be integrated with [Azure AD on-premises application provisioning](../app-provisioning/on-premises-application-provisioning-architecture.md).
-For more information check: [Plan an automatic user provisioning deployment for Azure Active Directory](../app-provisioning/plan-auto-user-provisioning.md)
+For more information, see [Plan an automatic user-provisioning deployment for Azure Active Directory](../app-provisioning/plan-auto-user-provisioning.md).
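For context on what SCIM 2.0 compatibility means in practice, here's a sketch of the kind of user-creation request a SCIM-based provisioning service issues against an application's SCIM endpoint. The endpoint URL and bearer token are placeholders; the payload shape follows the SCIM 2.0 core user schema (RFC 7643).

```python
import requests

# Placeholders: the target app's SCIM 2.0 base URL and its bearer token.
SCIM_BASE_URL = "https://app.example.com/scim/v2"
SCIM_TOKEN = "<bearer-token>"

# Standard SCIM 2.0 core user payload (RFC 7643).
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "adelev@contoso.com",
    "name": {"givenName": "Adele", "familyName": "Vance"},
    "emails": [{"value": "adelev@contoso.com", "type": "work", "primary": True}],
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE_URL}/Users",
    headers={
        "Authorization": f"Bearer {SCIM_TOKEN}",
        "Content-Type": "application/scim+json",
    },
    json=new_user,
)
resp.raise_for_status()
print(resp.json()["id"])  # SCIM resource ID assigned by the app
```

An app that accepts requests of this shape can be wired to the Azure AD provisioning service without custom connectors.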
-### Move to Cloud HR provisioning
+### Move to cloud HR provisioning
-This workstream will reduce your on-premises footprint by moving the HR provisioning workflows from on-premises identity management (IDM) systems such as Microsoft Identity Manager (MIM) to Azure AD. Azure AD cloud HR provisioning can provision hybrid accounts or cloud-only accounts.
+You can reduce your on-premises footprint by moving the HR provisioning workflows from on-premises IDM systems, such as Microsoft Identity Manager, to Azure AD. Two account types are available for Azure AD cloud HR provisioning:
-* For new employees who are exclusively using applications that use Azure AD, you can choose to provision cloud-only accounts, which in turn helps you to contain the footprint of AD.
+* For new employees who are exclusively using applications that use Azure AD, you can choose to provision *cloud-only accounts*. This provisioning helps you contain the footprint of Active Directory.
-* For new employees who need access to applications that have dependency on AD, you can provision hybrid accounts
+* For new employees who need access to applications that have dependency on Active Directory, you can provision *hybrid accounts*.
-Azure AD Cloud HR provisioning can also manage AD accounts for existing employees. For more information, see: [Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md) and, specifically, [Plan the deployment project](../app-provisioning/plan-auto-user-provisioning.md).
+Azure AD cloud HR provisioning can also manage Active Directory accounts for existing employees. For more information, see [Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md) and [Plan the deployment project](../app-provisioning/plan-auto-user-provisioning.md).
### Move external identity management
-If your organization provisions accounts in AD or other on-premises directories for external identities such as vendors, contractors, consultants, etc. You can simplify your environment by managing those third parties (3P) user objects natively in the cloud.
+If your organization provisions accounts in Active Directory or other on-premises directories for external identities such as vendors, contractors, or consultants, you can simplify your environment by managing those third-party user objects natively in the cloud. Here are some possibilities:
-* For new external users, use [Azure AD External Identities](../external-identities/external-identities-overview.md), which will stop the AD footprint of users.
+* For new external users, use [Azure AD External Identities](../external-identities/external-identities-overview.md), which will stop the Active Directory footprint of users.
-* For existing AD accounts that you provision for external identities, you can remove the overhead of managing local credentials (for example, passwords) by configuring them for B2B collaboration using the steps here: [Invite internal users to B2B collaboration](../external-identities/invite-internal-users.md).
+* For existing Active Directory accounts that you provision for external identities, you can remove the overhead of managing local credentials (for example, passwords) by configuring them for business-to-business (B2B) collaboration. Follow the steps in [Invite internal users to B2B collaboration](../external-identities/invite-internal-users.md).
-* Use [Azure AD Entitlement Management](../governance/entitlement-management-overview.md) to grant access to applications and resources. Most companies have dedicated systems and workflows for this purpose that you can now move out on-premises tools.
+* Use [Azure AD entitlement management](../governance/entitlement-management-overview.md) to grant access to applications and resources. Most companies have dedicated systems and workflows for this purpose that you can now move out of on-premises tools.
-* Use [Access Reviews](../governance/access-reviews-external-users.md) to remove access rights and/or external identities that are no longer needed.
+* Use [access reviews](../governance/access-reviews-external-users.md) to remove access rights or external identities that are no longer needed.
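To illustrate the External Identities approach for new external users, the sketch below invites a guest through the Microsoft Graph invitation API. The email address, redirect URL, and token are placeholders, and the caller needs the User.Invite.All permission.

```python
import requests

# Assumption: ACCESS_TOKEN is a Microsoft Graph token with User.Invite.All.
ACCESS_TOKEN = "<access-token>"

invitation = {
    "invitedUserEmailAddress": "vendor@fabrikam.com",
    "inviteRedirectUrl": "https://myapps.microsoft.com",
    "sendInvitationMessage": True,  # email the redemption link to the guest
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=invitation,
)
resp.raise_for_status()
# The guest user object created in the directory:
print(resp.json()["invitedUser"]["id"])
```

The invited user authenticates with their home identity, so no local credentials are created or managed in your directory.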
## Devices
-### Move Non-Windows OS workstations
+### Move non-Windows workstations
-Non-Windows workstations can be integrated with Azure AD to enhance user experience and benefit from cloud-based security features such as conditional access.
+You can integrate non-Windows workstations with Azure AD to enhance the user experience and to benefit from cloud-based security features such as conditional access.
-* macOS
+* For macOS:
- * Register macOS to Azure AD and [enroll/manage them with MDM solution](/mem/intune/enrollment/macos-enroll)
+ * Register macOS to Azure AD and [enroll/manage them by using a mobile device management solution](/mem/intune/enrollment/macos-enroll).
- * Deploy [Microsoft Enterprise SSO plug-in for Apple devices](../develop/apple-sso-plugin.md)
+ * Deploy the [Microsoft Enterprise SSO (single sign-on) plug-in for Apple devices](../develop/apple-sso-plugin.md).
- * Plan to deploy [Platform SSO for macOS 13](https://techcommunity.microsoft.com/t5/microsoft-endpoint-manager-blog/microsoft-simplifies-endpoint-manager-enrollment-for-apple/ba-p/3570319)
+ * Plan to deploy [Platform SSO for macOS 13](https://techcommunity.microsoft.com/t5/microsoft-endpoint-manager-blog/microsoft-simplifies-endpoint-manager-enrollment-for-apple/ba-p/3570319).
-* Linux
+* For Linux, you can [sign in to a Linux virtual machine (VM) by using Azure Active Directory credentials](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
- * [Sign in to a Linux VM with Azure Active Directory credentials](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md) is available on Linux on Azure VM
+### Replace other Windows versions for workstations
-### Replace Other Windows versions as Workstation use
-
-If you have below versions of Windows, consider replacing to latest Windows client version to benefit from cloud native management (Azure AD join and UEM):
+If you have the following operating systems on workstations, consider upgrading to the latest versions to benefit from cloud-native management (Azure AD join and unified endpoint management):
* Windows 7 or 8.x
-* Windows Server OS as workstation use
+* Windows Server
-### Virtual desktop infrastructure (VDI) solution
+### VDI solution
-This project has two primary initiatives. The first is to plan and implement a VDI environment for new deployments that isn't AD-dependent. The second is to plan a transition path for existing deployments that have AD-dependency.
+This project has two primary initiatives:
-* **New deployments** - Deploy a cloud managed VDI solution such as Windows 365 and or Azure Virtual Desktop (AVD) that doesn't require on-premises AD.
+* **New deployments**: Deploy a cloud-managed virtual desktop infrastructure (VDI) solution, such as Windows 365 or Azure Virtual Desktop, that doesn't require on-premises Active Directory.
-* **Existing deployments** - If your existing VDI deployment is dependent on AD, use business objectives and goals to determine whether you maintain the solution or migrate it to Azure AD.
+* **Existing deployments**: If your existing VDI deployment is dependent on Active Directory, use business objectives and goals to determine whether you maintain the solution or migrate it to Azure AD.
For more information, see:
-* [Deploy Azure AD joined VMs in Azure Virtual Desktop - Azure](../../virtual-desktop/deploy-azure-ad-joined-vm.md)
+* [Deploy Azure AD-joined VMs in Azure Virtual Desktop](../../virtual-desktop/deploy-azure-ad-joined-vm.md)
* [Windows 365 planning guide](/windows-365/enterprise/planning-guide)

## Applications
-To help maintain a secure environment, Azure AD supports modern authentication protocols. To transition application authentication from AD to Azure AD, you must:
-
-* Determine which applications can migrate to Azure AD with no modification
-
-* Determine which applications have an upgrade path that enables you to migrate with an upgrade
+To help maintain a secure environment, Azure AD supports modern authentication protocols. To transition application authentication from Active Directory to Azure AD, you must:
-* Determine which applications require replacement or significant code changes to migrate
+* Determine which applications can migrate to Azure AD with no modification.
-The outcome of your application discovery initiative is to create a prioritized list used to migrate your application portfolio. The list also contains applications that:
+* Determine which applications have an upgrade path that enables you to migrate with an upgrade.
-* Require an upgrade or update to the software - there's an upgrade path available
+* Determine which applications require replacement or significant code changes to migrate.
-* Require an upgrade or update to the software - there isn't an upgrade path available
+The outcome of your application discovery initiative is to create a prioritized list for migrating your application portfolio. The list contains applications that:
-Using the list, you can further evaluate the applications that don't have an existing upgrade path.
+* Require an upgrade or update to the software, and an upgrade path is available.
-* Determine whether business value warrants updating the software or if it should be retired.
+* Require an upgrade or update to the software, but an upgrade path isn't available.
-* If retired, is a replacement needed or is the application no longer needed?
+By using the list, you can further evaluate the applications that don't have an existing upgrade path. Determine whether business value warrants updating the software or if it should be retired. If the software should be retired, decide whether you need a replacement.
-Based on the results, you might redesign various aspects of your transformation from AD to Azure AD. While there are approaches you can use to extend on-premises AD to Azure IaaS (lift-and-shift) for applications with non-supported authentication protocols, we recommend you set a policy that requires a policy exception to use this approach.
+Based on the results, you might redesign aspects of your transformation from Active Directory to Azure AD. There are approaches that you can use to extend on-premises Active Directory to Azure infrastructure as a service (IaaS) (lift and shift) for applications with unsupported authentication protocols. We recommend that you set a policy that requires an exception to use this approach.
### Application discovery
-Once you've segmented your app portfolio, then you can prioritize migration based on business value and business priority. The following are types of applications you might use to categorize your portfolio, and some tools you can use to discover certain apps in your environment.
+After you've segmented your app portfolio, you can prioritize migration based on business value and business priority. You can use tools to create or refresh your app inventory.
-When you think about application types, there are three main ways to categorize your apps:
+There are three main ways to categorize your apps:
-* **Modern Authentication Apps**: These are applications that use modern authentication protocols such as OIDC, OAuth2, SAML, WS-Federation, using a Federation Service such as AD FS.
+* **Modern authentication apps**: These applications use modern authentication protocols (such as OIDC, OAuth2, SAML, or WS-Federation) or use a federation service such as Active Directory Federation Services (AD FS).
-* **Web Access Management (WAM) tools**: These applications use headers, cookies, and similar techniques for SSO. These apps typically require a WAM Identity Provider such as Symantec Site Minder.
+* **Web access management (WAM) tools**: These applications use headers, cookies, and similar techniques for SSO. These apps typically require a WAM identity provider, such as Symantec SiteMinder.
-* **Legacy Apps**: These applications use legacy protocols such as Kerberos, LDAP, Radius, Remote Desktop, NTLM (not recommended) etc.
+* **Legacy apps**: These applications use legacy protocols such as Kerberos, LDAP, RADIUS, Remote Desktop, and NTLM (not recommended).
-Azure AD can be used with each type of application providing different levels of functionality that will result in different migration strategies, complexity, and trade-offs. Some organizations have an application inventory, that can be used as a discovery baseline (It's common that this inventory isn't complete or updated). Below are some tools that can be used to create or refresh your inventory:
+Azure AD can be used with each type of application to provide levels of functionality that will result in different migration strategies, complexity, and trade-offs. Some organizations have an application inventory that can be used as a discovery baseline. (It's common that this inventory isn't complete or updated.)
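For the first category, the defining trait is that the application obtains tokens from the identity provider instead of handling credentials itself. As a minimal, illustrative sketch (the client ID and tenant ID are placeholders), this is what interactive sign-in against Azure AD looks like with the MSAL library for Python:

```python
import msal

# Placeholders: a public-client app registration in your Azure AD tenant.
app = msal.PublicClientApplication(
    "<client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# Interactive sign-in via the system browser; Azure AD returns an OAuth2/OIDC
# token, so the app never sees or stores the user's password.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("Signed in; token acquired for Microsoft Graph.")
else:
    print(result.get("error_description"))
```

Apps built this way migrate to Azure AD with little or no code change, which is why they sit first in the prioritized list.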
To discover modern authentication apps:
-* If you're using AD FS, use the [AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md)
+* If you're using AD FS, use the [AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md).
-* If you're using a different identity provider, you can use the logs and configuration.
+* If you're using a different identity provider, use the logs and configuration.
-The following tools can help you to discover applications that use LDAP.
+The following tools can help you discover applications that use LDAP:
-* [Event1644Reader](/troubleshoot/windows-server/identity/event1644reader-analyze-ldap-query-performance) : Sample tool for collecting data on LDAP Queries made to Domain Controllers using Field Engineering Logs.
+* [Event1644Reader](/troubleshoot/windows-server/identity/event1644reader-analyze-ldap-query-performance): Sample tool for collecting data on LDAP queries made to domain controllers by using field engineering logs.
-* [Microsoft Microsoft 365 Defender for Identity](/defender-for-identity/monitored-activities): Utilize the sign in Operations monitoring capability (note captures binds using LDAP, but not Secure LDAP.
+* [Microsoft Defender for Identity](/defender-for-identity/monitored-activities): Security solution that uses a sign-in operations monitoring capability. (Note that it captures binds by using LDAP, not Secure LDAP.)
-* [PSLDAPQueryLogging](https://github.com/RamblingCookieMonster/PSLDAPQueryLogging) : GitHub tool for reporting on LDAP queries.
+* [PSLDAPQueryLogging](https://github.com/RamblingCookieMonster/PSLDAPQueryLogging): GitHub tool for reporting on LDAP queries.
-### Migrate AD FS / federation services
+### Migrate AD FS or other federation services
-When you plan your migration to Azure AD, consider migrating the apps that use modern authentication protocols (such as SAML and Open ID Connect) first. These apps can be reconfigured to authenticate with Azure AD either via a built-in connector from the Azure App Gallery, or by registering the application in Azure AD. Once you have moved SaaS applications that were federated to Azure AD, there are a few steps to decommission the on-premises federation system. Verify you've completed the following:
+When you plan your migration to Azure AD, consider migrating the apps that use modern authentication protocols (such as SAML and OpenID Connect) first. You can reconfigure these apps to authenticate with Azure AD either via a built-in connector from the Azure App Gallery or via registration in Azure AD.
-* [Move application authentication to Azure Active Directory](../manage-apps/migrate-adfs-apps-to-azure.md)
+After you move SaaS applications that were federated to Azure AD, there are a few steps to decommission the on-premises federation system:
-Once you have moved SaaS applications that were federated to Azure AD, there are a few steps to decommission the on-premises federation system. Verify you have completed migration of:
+* [Move application authentication to Azure Active Directory](../manage-apps/migrate-adfs-apps-to-azure.md)
-* [Migrate from Azure AD Multi-Factor Authentication Server to Azure multi-factor authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md)
+* [Migrate from Azure AD Multi-Factor Authentication Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md)
* [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md)
-* If you're using Web Application Proxy, [Move Remote Access to internal applications](#move-remote-access-to-internal-applications)
+* [Move remote access to internal applications](#move-remote-access-to-internal-applications), if you're using Azure AD Application Proxy
>[!IMPORTANT]
->If you are using other features, such as remote access, verify those services are relocated prior to decommissioning AD federated services.
-### Move WAM Authentication apps
+>If you're using other features, verify that those services are relocated before you decommission Active Directory Federation Services.
+
+### Move WAM authentication apps
-This project focuses on migrating SSO capability from Web Access Management systems (such as Symantec SiteMinder) to Azure AD. To learn more, see [Migrating applications from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/resources/migrating-applications-from-symantec-siteminder-to-azure-active-directory/)
+This project focuses on migrating SSO capability from WAM systems to Azure AD. To learn more, see [Migrate applications from Symantec SiteMinder to Azure AD](https://azure.microsoft.com/resources/migrating-applications-from-symantec-siteminder-to-azure-active-directory/).
-### Define Application Server Management strategy
+### Define an application server management strategy
-In terms of infrastructure management, on-premises (using AD) environments often use a combination of group policy objects (GPOs) and Microsoft Endpoint Configuration Manager (MECM) features to segment management duties. For example, security policy management, update management, config management, and monitoring.
+In terms of infrastructure management, on-premises environments often use a combination of Group Policy objects (GPOs) and Microsoft Endpoint Configuration Manager features to segment management duties. For example, duties can be segmented into security policy management, update management, configuration management, and monitoring.
-Since AD was designed and built for on-premises IT environments and Azure AD was built for cloud-based IT environments, one-to-one parity of features isn't present here. Therefore, application servers can be managed in several different ways. For example, Azure Arc helps bring many of these features that exist in AD together into a single view when Azure AD is used for IAM. Azure AD DS can also be used to domain join servers in Azure AD, especially those where it's desirable to use GPOs for specific business or technical reasons.
+Active Directory is for on-premises IT environments, and Azure AD is for cloud-based IT environments. One-to-one parity of features isn't present here, so you can manage application servers in several ways.
-Use the following table to determine what Azure-based tools you use to replace the on-premises or AD-based environment:
+For example, Azure Arc helps bring many of the features that exist in Active Directory together into a single view when you use Azure AD for identity and access management (IAM). You can also use Azure Active Directory Domain Services (Azure AD DS) to domain-join servers in Azure AD, especially when you want those servers to use GPOs for specific business or technical reasons.
-| Management area | On-premises (AD) feature | Equivalent Azure AD feature |
+Use the following table to determine what Azure-based tools you can use to replace the on-premises environment:
+
+| Management area | On-premises (Active Directory) feature | Equivalent Azure AD feature |
| - | - | -|
-| Security policy management| GPO, MECM| [Microsoft Microsoft 365 Defender for Cloud](https://azure.microsoft.com/services/security-center/) |
-| Update management| MECM, WSUS| [Azure Automation Update Management](../../automation/update-management/overview.md) |
-| Configuration management| GPO, MECM| [Azure Automation State Configuration](../../automation/automation-dsc-overview.md) |
+| Security policy management| GPO, Microsoft Endpoint Configuration Manager| [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/) |
+| Update management| Microsoft Endpoint Configuration Manager, Windows Server Update Services| [Azure Automation Update Management](../../automation/update-management/overview.md) |
+| Configuration management| GPO, Microsoft Endpoint Configuration Manager| [Azure Automation State Configuration](../../automation/automation-dsc-overview.md) |
| Monitoring| System Center Operations Manager| [Azure Monitor Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) |
-More tools and notes:
+Here's more information that you can use for application server management:
-* [Azure Arc](https://azure.microsoft.com/services/azure-arc/) enables above Azure features to non-Azure VMs. For example, Windows Server when used on-premises or on AWS.
+* [Azure Arc](https://azure.microsoft.com/services/azure-arc/) enables Azure features for non-Azure VMs. For example, you can use it to get Azure features for Windows Server when it's used on-premises or on Amazon Web Services.
* [Manage and secure your Azure VM environment](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/).
-* If you must wait to migrate or perform a partial migration, GPO can be used with [Azure AD Domain Services (Azure AD DS)](https://azure.microsoft.com/services/active-directory-ds/)
+* If you must wait to migrate or perform a partial migration, you can use GPOs with [Azure AD DS](https://azure.microsoft.com/services/active-directory-ds/).
-If you require management of application servers with Microsoft Endpoint Configuration Manager (MECM), you can't achieve this using Azure AD DS. MECM isn't supported to run in an Azure AD DS environment. Instead, you'll need to extend your on-premises AD to a Domain Controller (DC) running on an Azure VM or deploy a new Active Directory (AD) to an Azure IaaS vNet.
+If you require management of application servers with Microsoft Endpoint Configuration Manager, you can't achieve this by using Azure AD DS. Microsoft Endpoint Configuration Manager isn't supported to run in an Azure AD DS environment. Instead, you'll need to extend your on-premises Active Directory instance to a domain controller running on an Azure VM. Or, you'll need to deploy a new Active Directory instance to an Azure IaaS virtual network.
-### Define Legacy Application migration strategy
+### Define the migration strategy for legacy applications
-Legacy applications have different areas of dependencies to AD:
+Legacy applications have dependencies like these to Active Directory:
-* User Authentication and Authorization: Kerberos, NTLM, LDAP Bind, ACLs
+* User authentication and authorization: Kerberos, NTLM, LDAP bind, ACLs. (A sketch of a typical LDAP bind appears after this list.)
-* Access to Directory Data: LDAP queries, schema extensions, read/write of directory objects
+* Access to directory data: LDAP queries, schema extensions, read/write of directory objects.
-* Server Management: As determined by the [server management strategy](#define-application-server-management-strategy)
+* Server management: As determined by the [server management strategy](#define-an-application-server-management-strategy).
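To make the first dependency concrete, here's a minimal sketch of the kind of hard-coded LDAP bind that legacy apps often carry. It uses the third-party `ldap3` Python package, and the host, service account, and search base are hypothetical; this illustrates the dependency and isn't code from the article.

```python
# Hedged sketch of a legacy-app LDAP dependency: an NTLM bind plus a
# directory query against an on-premises domain controller (names hypothetical).
from ldap3 import ALL, NTLM, Connection, Server

server = Server("dc01.contoso.local", get_info=ALL)
conn = Connection(
    server,
    user="CONTOSO\\svc-legacyapp",  # domain service account (placeholder)
    password="<service-account-password>",
    authentication=NTLM,
)

if conn.bind():
    # A typical directory-data dependency: reading user objects by department.
    conn.search(
        search_base="DC=contoso,DC=local",
        search_filter="(&(objectClass=user)(department=Sales))",
        attributes=["sAMAccountName", "mail"],
    )
    for entry in conn.entries:
        print(entry.sAMAccountName, entry.mail)
    conn.unbind()
```

Inventorying binds and queries like these tells you whether an app can move to Azure AD DS (which supports LDAP bind) or needs the code changes described in the following approaches.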
-To reduce or eliminate the dependencies above, there are three main approaches, listed below in order of preference:
+To reduce or eliminate those dependencies, you have three main approaches.
-* **Approach 1** Replace with SaaS alternatives that use modern authentication. In this approach, undertake projects to migrate from legacy applications to SaaS alternatives that use modern authentication. Have the SaaS alternatives authenticate to Azure AD directly.
+#### Approach 1
-* **Approach 2** Replatform (for example, adopt serverless/PaaS) to support modern hosting without servers and/or update the code to support modern authentication. In this approach, undertake projects to update authentication code for applications that will be modernized or replatform on serverless/PaaS to eliminate the need for underlying server management. Enable the app to use modern authentication and integrate to Azure AD directly. [Learn about MSAL - Microsoft identity platform](../develop/msal-overview.md).
+In the most preferred approach, you undertake projects to migrate from legacy applications to SaaS alternatives that use modern authentication. Have the SaaS alternatives authenticate to Azure AD directly:
-* **Approach 3** Leave the applications as legacy applications for the foreseeable future or sunset the applications and opportunity arises. We recommend that this is considered as a last resort.
+1. Deploy Azure AD DS into an Azure virtual network.
-Based on the app dependencies, you have three migration options:
+2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to Azure AD DS.
-#### Implement approach #1
+3. Publish legacy apps to the cloud by using Azure AD Application Proxy or a [secure hybrid access](../manage-apps/secure-hybrid-access.md) partner.
-1. Deploy Azure AD Domain Services into an Azure virtual network
+4. As legacy apps retire through attrition, eventually decommission Azure AD DS running in the Azure virtual network.
-2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to Azure AD Domain Services
+>[!NOTE]
+>* Use Azure AD DS if the dependencies are aligned with [common deployment scenarios for Azure AD DS](../../active-directory-domain-services/scenarios.md).
+>* To validate if Azure AD DS is a good fit, you might use tools like [Service Map in Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) and [automatic dependency mapping with Service Map and Live Maps](https://techcommunity.microsoft.com/t5/system-center-blog/automatic-dependency-mapping-with-service-map-and-live-maps/ba-p/351867).
+>* Validate that your SQL Server instantiations can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service is running in virtual machines, [use this guidance](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
-3. Publish legacy apps to the cloud using Azure AD App Proxy or a [Secure Hybrid Access](../manage-apps/secure-hybrid-access.md) partner
+#### Approach 2
-4. As legacy apps retire through attrition, eventually decommission Azure AD Domain Services running in the Azure virtual network
+If the first approach isn't possible and an application has a strong dependency on Active Directory, you can extend on-premises Active Directory to Azure IaaS.
->[!NOTE]
->* Utilize Azure AD Domain Services if the dependencies are aligned with [Common deployment scenarios for Azure AD Domain Services](../../active-directory-domain-services/scenarios.md).
->* To validate if Azure AD DS is a good fit, you might use tools like Service Map [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) and [Automatic Dependency Mapping with Service Map and Live Maps](https://techcommunity.microsoft.com/t5/system-center-blog/automatic-dependency-mapping-with-service-map-and-live-maps/ba-p/351867).
->* Validate your SQL server instantiations can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service is running in virtual machines, [use this guidance](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
+You can replatform to support modern serverless hosting, for example by adopting platform as a service (PaaS). Or, you can update the code to support modern authentication. You can also enable the app to integrate with Azure AD directly. [Learn about Microsoft Authentication Library in the Microsoft identity platform](../develop/msal-overview.md).
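As a hedged sketch of that direct Azure AD integration, the following Python example uses the Microsoft Authentication Library (`msal` package) client-credentials flow to acquire a token for Microsoft Graph. The tenant ID, client ID, and secret are placeholders; a replatformed app would use its own registration and target API.

```python
# Hedged sketch: an app updated for modern authentication acquires a token
# from Azure AD through MSAL's client-credentials flow (all IDs are placeholders).
import msal

app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# Check the local token cache first, then fall back to a call to Azure AD.
scopes = ["https://graph.microsoft.com/.default"]
result = app.acquire_token_silent(scopes, account=None)
if not result:
    result = app.acquire_token_for_client(scopes=scopes)

if "access_token" in result:
    print("Token acquired; send it in the Authorization: Bearer header.")
else:
    print("Token request failed:", result.get("error"), result.get("error_description"))
```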
-#### Implement approach #2
+1. Connect an Azure virtual network to the on-premises network via virtual private network (VPN) or Azure ExpressRoute.
-Extend on-premises AD to Azure IaaS. If #1 isn't possible and an application has a strong dependency on AD
+2. Deploy new domain controllers for the on-premises Active Directory instance as virtual machines into the Azure virtual network.
-1. Connect an Azure virtual network to the on-premises network via VPN or ExpressRoute
+3. Lift and shift legacy apps to VMs on the Azure virtual network that are domain joined.
-2. Deploy new Domain Controllers for the on-premises AD as virtual machines into the Azure virtual network
+4. Publish legacy apps to the cloud by using Azure AD Application Proxy or a [secure hybrid access](../manage-apps/secure-hybrid-access.md) partner.
-3. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined
+5. Eventually, decommission the on-premises Active Directory infrastructure and run Active Directory in the Azure virtual network entirely.
-4. Publish legacy apps to the cloud using Azure AD App Proxy or a [Secure Hybrid Access](../manage-apps/secure-hybrid-access.md) partner
+6. As legacy apps retire through attrition, eventually decommission the Active Directory instance running in the Azure virtual network.
-5. Eventually, decommission the on-premises AD infrastructure and run the Active Directory in the Azure virtual network entirely
+#### Approach 3
-6. As legacy apps retire through attrition, eventually decommission the Active Directory running in the Azure virtual network
+If the first migration isn't possible and an application has a strong dependency on Active Directory, you can deploy a new Active Directory instance to Azure IaaS. Leave the applications as legacy applications for the foreseeable future, or sunset them when the opportunity arises.
-#### Implement approach #3
+This approach enables you to decouple the app from the existing Active Directory instance to reduce surface area. We recommend that you consider it only as a last resort.
-Deploy a new AD to Azure IaaS. If migration option #1 isn't possible and an application has a strong dependency on AD. This approach enables you to decouple the app from the existing AD to reduce surface area.
+1. Deploy a new Active Directory instance as virtual machines in an Azure virtual network.
-1. Deploy a new Active Directory as virtual machines into an Azure virtual network
+2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to the new Active Directory instance.
-2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to the new Active Directory
+3. Publish legacy apps to the cloud by using Azure AD Application Proxy or a [secure hybrid access](../manage-apps/secure-hybrid-access.md) partner.
-3. Publish legacy apps to the cloud using Azure AD App Proxy or a [Secure Hybrid Access](../manage-apps/secure-hybrid-access.md) partner
-
-4. As legacy apps retire through attrition, eventually decommission the Active Directory running in the Azure virtual network
+4. As legacy apps retire through attrition, eventually decommission the Active Directory instance running in the Azure virtual network.
#### Comparison of strategies
-| Strategy | A-Azure AD Domain Services | B-Extend AD to IaaS | C-Independent AD in IaaS |
+| Strategy | Azure AD DS | Extend Active Directory to IaaS | Independent Active Directory instance in IaaS |
| - | - | - | - |
-| De-coupled from on-premises AD| Yes| No| Yes |
-| Allows Schema Extensions| No| Yes| Yes |
+| Decoupling from on-premises Active Directory| Yes| No| Yes |
+| Allowing schema extensions| No| Yes| Yes |
| Full administrative control| No| Yes| Yes |
-| Potential reconfiguration of apps required (ACLs, authorization etc.)| Yes| No| Yes |
+| Potential reconfiguration of apps required (for example, ACLs or authorization)| Yes| No| Yes |
+
+### Move VPN authentication
-### Move Virtual private network (VPN) authentication
+This project focuses on moving your VPN authentication to Azure AD. It's important to know that different configurations are available for VPN gateway connections. You need to determine which configuration best fits your needs. For more information on designing a solution, see [VPN gateway design](../../vpn-gateway/design.md).
-This project focuses on moving your VPN authentication to Azure AD. It's important to know that there are different configurations available for VPN gateway connections. You need to determine which configuration best fits your needs. For more information on designing a solution, see [VPN Gateway design](../../vpn-gateway/design.md). Some key points about usage of Azure AD for VPN authentication:
+Here are key points about usage of Azure AD for VPN authentication:
* Check if your VPN providers support modern authentication. For example:
-* [Tutorial: Azure Active Directory single sign-on (SSO) integration with Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)
+ * [Tutorial: Azure Active Directory SSO integration with Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)
-* [Tutorial: Azure Active Directory single sign-on (SSO) integration with Palo Alto Networks](../saas-apps/palo-alto-networks-globalprotect-tutorial.md) [GlobalProtect](../saas-apps/palo-alto-networks-globalprotect-tutorial.md)
+ * [Tutorial: Azure Active Directory SSO integration with Palo Alto Networks GlobalProtect](../saas-apps/palo-alto-networks-globalprotect-tutorial.md)
-* For windows 10 devices, consider integrating [Azure AD support to the built-in VPN client](/windows-server/remote/remote-access/vpn/ad-ca-vpn-connectivity-windows10)
+* For Windows 10 devices, consider integrating [Azure AD support into the built-in VPN client](/windows-server/remote/remote-access/vpn/ad-ca-vpn-connectivity-windows10).
-* After evaluating this scenario, you can implement a solution to remove your dependency with on-premises to authenticate to VPN
+* After you evaluate this scenario, you can implement a solution to remove your dependency on on-premises infrastructure for VPN authentication.
-### Move Remote Access to internal applications
+### Move remote access to internal applications
-To simplify your environment, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) or [Secure hybrid access partners](../manage-apps/secure-hybrid-access.md) to provide remote access. This will allow you to remove the dependency on on-premises reverse proxy solutions.
+To simplify your environment, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) or [secure hybrid access](../manage-apps/secure-hybrid-access.md) partners to provide remote access. This will allow you to remove the dependency on on-premises reverse proxy solutions.
-It's important to call out that enabling remote access to an application using the aforementioned technologies is an interim step and more work is needed to completely decouple the application from AD.
+It's important to mention that enabling remote access to an application by using the preceding technologies is an interim step. You'll need to do more work to completely decouple the application from Active Directory.
-Azure AD Domain Services allows you to migrate application servers to the cloud IaaS and decouple from AD, while using Azure AD App Proxy to enable remote access. To learn more about this scenario, check [Deploy Azure AD Application Proxy for Azure AD Domain Services](../../active-directory-domain-services/deploy-azure-app-proxy.md)
+Azure AD DS allows you to migrate application servers to the cloud IaaS and decouple from Active Directory, while using Azure AD Application Proxy to enable remote access. To learn more about this scenario, check [Deploy Azure AD Application Proxy for Azure Active Directory Domain Services](../../active-directory-domain-services/deploy-azure-app-proxy.md).
## Next steps
Azure AD Domain Services allows you to migrate application servers to the cloud
[Establish an Azure AD footprint](road-to-the-cloud-establish.md)
-[Implement a cloud-first approach](road-to-the-cloud-implement.md)
+[Implement a cloud-first approach](road-to-the-cloud-implement.md)
active-directory Road To The Cloud Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md
Title: Road to the cloud - Determine cloud transformation posture when moving identity and access management from AD to Azure AD
-description: Determine cloud transformation posture when planning your migration of IAM from AD to Azure AD.
+ Title: Road to the cloud - Determine cloud transformation posture when moving identity and access management from Active Directory to Azure AD
+description: Determine your cloud transformation posture when planning your migration of IAM from Active Directory to Azure AD.
documentationCenter: ''
# Cloud transformation posture
-Active Directory (AD) and Azure Active Directory (Azure AD) and other Microsoft tools are at the core of identity and access management. For example, device management in AD is provided by Active Directory Domain Services (AD DS) and Microsoft Endpoint Configuration Manager (MECM). In Azure AD, the same capability is provided using Azure AD and Intune.
+Active Directory, Azure Active Directory (Azure AD), and other Microsoft tools are at the core of identity and access management (IAM). For example, Active Directory Domain Services (AD DS) and Microsoft Endpoint Configuration Manager provide device management in Active Directory. In Azure AD, Intune provides the same capability.
-As part of most modernization, migration, or zero-trust initiatives, identity and access management (IAM) activities are shifted from using on-premises or Infrastructure-as-a-Service (IaaS) solutions to using built-for-the-cloud solutions. For an IT environment with Microsoft products and services, Active Directory (AD) and Azure Active Directory (Azure AD) play a role.
+As part of most modernization, migration, or Zero Trust initiatives, organizations shift IAM activities from using on-premises or infrastructure-as-a-service (IaaS) solutions to using built-for-the-cloud solutions. For an IT environment that uses Microsoft products and services, Active Directory and Azure AD play a role.
-Many companies migrating from Active Directory (AD) to Azure Active Directory (Azure AD) start with an environment similar to the following diagram. The diagram also overlays three pillars:
+Many companies that migrate from Active Directory to Azure AD start with an environment that's similar to the following diagram. The diagram overlays three pillars:
-* **Applications**: This pillar includes the applications, resources, and their underlying domain-joined servers.
+* **Applications**: Includes applications, resources, and their underlying domain-joined servers.
-* **Devices**: This pillar focuses on domain-joined client devices.
+* **Devices**: Focuses on domain-joined client devices.
-* **Users and Groups**: Represent the human and non-human identities and attributes that access resources from different devices as specified.
+* **Users**: Represents the human and non-human identities and attributes that access resources from devices.
-[ ![Architectural diagram depicting applications, devices, and users and groups layers, each containing common technologies found within each layer.](media/road-to-cloud-posture/road-to-the-cloud-start.png) ](media/road-to-cloud-posture/road-to-the-cloud-start.png#lightbox)
+[![Architectural diagram that depicts the common technologies contained in the pillars of applications, devices, and users.](media/road-to-cloud-posture/road-to-the-cloud-start.png)](media/road-to-cloud-posture/road-to-the-cloud-start.png#lightbox)
-Microsoft has modeled five states of transformation that commonly align with the business goals of our customers. As the goals of customers mature, it's typical for them to shift from one state to the next at a pace that suits their resourcing and culture. This approach closely follows [Active Directory in Transition: Gartner Survey| Results and Analysis](https://www.gartner.com/en/documents/4006741).
+Microsoft has modeled five states of transformation that commonly align with the business goals of customers. As the goals of customers mature, it's typical for them to shift from one state to the next at a pace that suits their resources and culture. This approach closely follows [Active Directory in Transition: Gartner Survey Results and Analysis](https://www.gartner.com/en/documents/4006741).
-The five states have exit criteria to help you determine where your environment resides today. Some projects, such as application migration span all five states, while others span a single state.
+The five states have exit criteria to help you determine where your environment resides today. Some projects, such as application migration, span all five states. Other projects span a single state.
-The content then provides more detailed guidance organized to help with intentional changes to people, process, and technology to:
+The content then provides more detailed guidance that's organized to help with intentional changes to people, process, and technology. The guidance can help you:
-* Establish Azure AD footprint
+* Establish an Azure AD footprint.
-* Implement a cloud-first approach
+* Implement a cloud-first approach.
-* Start to migrate out of your AD environment
+* Start to migrate out of your Active Directory environment.
-Guidance is provided organized by user management, device management, and application management per the pillars above.
+Guidance is organized by user management, device management, and application management according to the preceding pillars.
-Organizations that are formed in Azure AD rather than in AD don't have the legacy on-premises environment that more established organizations must contend with. For them, or customers that are completely recreating their IT environment in the cloud, becoming 100% cloud-centric can be accomplished as the new IT environment is established.
+Organizations that are formed in Azure AD rather than in Active Directory don't have the legacy on-premises environment that more established organizations must contend with. For them, or for customers who are completely re-creating their IT environment in the cloud, becoming 100 percent cloud-centric can happen as the new IT environment is established.
-For customers with established on-premises IT capability, the transformation process introduces complexity that requires careful planning. Additionally, since AD and Azure AD are separate products targeted at different IT environments, there aren't like-by-like features. For example, Azure AD does not have the notion of AD domain and forest trusts.
+For customers who have an established on-premises IT capability, the transformation process introduces complexity that requires careful planning. Also, because Active Directory and Azure AD are separate products targeted at different IT environments, they don't have like-for-like features. For example, Azure AD doesn't have the notion of Active Directory domain and forest trusts.
-## Five States of transformation
+## Five states of transformation
-In enterprise-sized organizations, IAM transformation, or even transformation from AD to Azure AD is typically a multi-year effort with multiple states. You analyze your environment to determine your current state, and then set a goal for your next state. Your goal might remove the need for AD entirely, or you might decide not to migrate some capability to Azure AD and leave it in place. The states are meant to logically group initiatives into projects towards completing a transformation. During the state transitions, interim solutions are put in place. The interim solutions enable the IT environment to support IAM operations in both AD and Azure AD. The interim solutions must also enable the two environments to interoperate. The following diagram shows the five states:
+In enterprise-sized organizations, IAM transformation, or even transformation from Active Directory to Azure AD, is typically a multi-year effort with multiple states. You analyze your environment to determine your current state, and then set a goal for your next state. Your goal might remove the need for Active Directory entirely, or you might decide not to migrate some capability to Azure AD and leave it in place.
-[ ![Diagram that shows five elements, each depicting a possible network architecture. Options include cloud attached, hybrid, cloud first, AD minimized, and 100% cloud.](media/road-to-cloud-posture/road-to-the-cloud-five-states.png) ](media/road-to-cloud-posture/road-to-the-cloud-five-states.png#lightbox)
+The states logically group initiatives into projects toward completing a transformation. During the state transitions, you put interim solutions in place. The interim solutions enable the IT environment to support IAM operations in both Active Directory and Azure AD. The interim solutions must also enable the two environments to interoperate.
+
+The following diagram shows the five states:
+
+[![Diagram that shows five network architectures: cloud attached, hybrid, cloud first, Active Directory minimized, and 100% cloud.](media/road-to-cloud-posture/road-to-the-cloud-five-states.png)](media/road-to-cloud-posture/road-to-the-cloud-five-states.png#lightbox)
>[!NOTE]
-> The states in this diagram represent a logical progression of cloud transformation. Your ability to move from one state to the next is dependent on the functionality that you have implemented and the capabilities within that functionality to move to the cloud.
+> The states in this diagram represent a logical progression of cloud transformation. Your ability to move from one state to the next depends on the functionality that you've implemented and the capabilities within that functionality to move to the cloud.
+
+### State 1: Cloud attached
+
+In the cloud-attached state, organizations have created an Azure AD tenant to enable user productivity and collaboration tools. The tenant is fully operational.
+
+Most companies that use Microsoft products and services in their IT environment are already in or beyond this state. In this state, operational costs might be higher because there are both an on-premises environment and a cloud environment to maintain and keep interoperable. People must have expertise in both environments to support their users and the organization.
-**State 1 Cloud attached** - In this state, organizations have created an Azure AD tenant to enable user productivity and collaboration tools and the tenant is fully operational. Most companies that use Microsoft products and services in their IT environment are already in or beyond this state. In this state operational costs may be higher because there's an on-premises environment and cloud environment to maintain and make interactive. Also, people must have expertise in both environments to support their users and the organization. In this state:
+In this state:
-* Devices are joined to AD and managed using group policy and or on-premises device management tools.
-* Users are managed in AD, provisioned via on-premises IDM systems, and synchronized to Azure AD with Azure AD Connect.
-* Apps are authenticated to AD, federation servers like AD FS, or Web Access Manager (WAM), Microsoft 365 or other tools such as SiteMinder and Oracle Access Manager (OAM).
+* Devices are joined to Active Directory and managed through Group Policy or on-premises device management tools.
+* Users are managed in Active Directory, provisioned via on-premises identity management (IDM) systems, and synchronized to Azure AD through Azure AD Connect.
+* Apps are authenticated through Active Directory, through federation servers like Active Directory Federation Services (AD FS), through Microsoft 365, or through web access management (WAM) tools such as SiteMinder and Oracle Access Manager.
-**State 2 Hybrid** - In this state, the organizations start to enhance their on-premises environment through cloud capabilities. The solutions can be planned to reduce complexity, increase security posture, and reduce the footprint of the on-premises environment. During transition and operating in this state, organizations grow the skills and expertise using Azure AD for IAM solutions. Since user accounts and device attachments are relatively easy and a common part of day-to-day IT operations, this is the approach most organizations have used. In this state:
+### State 2: Hybrid
+
+In the hybrid state, organizations start to enhance their on-premises environment through cloud capabilities. The solutions can be planned to reduce complexity, increase security posture, and reduce the footprint of the on-premises environment.
+
+During the transition and while operating in this state, organizations grow the skills and expertise for using Azure AD for IAM solutions. Because user accounts and device attachments are relatively easy and a common part of day-to-day IT operations, this is the approach that most organizations have used.
+
+In this state:
* Windows clients are hybrid Azure AD joined.
-* Non-Microsoft SaaS-based start being integrated with Azure AD, for example Salesforce and ServiceNow.
+* Non-Microsoft platforms based on software as a service (SaaS) start being integrated with Azure AD. Examples are Salesforce and ServiceNow.
-* Legacy apps are authenticating to Azure AD via App Proxy or secure hybrid access partner solutions.
+* Legacy apps are authenticating to Azure AD via Application Proxy or partner solutions that offer secure hybrid access.
* Self-service password reset (SSPR) and password protection for users are enabled.
-* Some legacy apps are authenticated in the cloud using Azure AD Directory Service (Azure AD DS) and App Proxy.
+* Some legacy apps are authenticated in the cloud through Azure AD DS and Application Proxy.
+
+### State 3: Cloud first
+
+In the cloud-first state, the teams across the organization build a track record of success and start planning to move more challenging workloads to Azure AD. Organizations typically spend the most time in this state of transformation. As complexity, the number of workloads, and the use of Active Directory grow over time, an organization needs to increase its effort and its number of initiatives to shift to the cloud.
-**State 3 Cloud first** - In this state, the teams across the organization build a track record of success and start planning to move more challenging workloads to Azure AD. Organizations typically spend the most time in this state of transformation. As complexity and number of workloads grow over time, the longer an organization has used Active Directory (AD) the greater the effort and number of initiatives needed to shift to the cloud. In this state:
+In this state:
-* New Windows clients are joined to Azure AD and are managed with Intune.
-* ECMA connectors are used for provision users and groups for on-premises apps.
-* All apps that were previously using an AD DS-integrated federated identity provider such as Active Directory Federation Services (ADFS), are updated to use Azure AD for authentication. And, if you were using password-based authentication via that identity provider for Azure AD, that is migrated to Password Hash Synchronization (PHS).
+* New Windows clients are joined to Azure AD and are managed through Intune.
+* ECMA connectors are used to provision users and groups for on-premises apps.
+* All apps that previously used an AD DS-integrated federated identity provider, such as AD FS, are updated to use Azure AD for authentication. If you were using password-based authentication through that identity provider for Azure AD, it's migrated to password hash synchronization.
* Plans to shift file and print services to Azure AD are being developed.
-* Collaboration capability is provided by Azure AD B2B.
+* Azure AD provides a business-to-business (B2B) collaboration capability.
* New groups are created and managed in Azure AD.
-**State 4 AD minimized** - Most IAM capability is provided by Azure AD while edge cases and exceptions continue to utilize on-premises AD. This state is more difficult to achieve, especially for larger organizations with significant on-premises technical debt. Azure AD continues to evolve as your organization's transformation matures, bringing new features and tools that you can utilize. Organizations are required to deprecate capability or build new capability to provide replacement. In this state:
+### State 4: Active Directory minimized
-* New users provisioned using the HR provisioning capability are created directly in Azure AD.
+Azure AD provides most IAM capability, whereas edge cases and exceptions continue to use on-premises Active Directory. A state of minimizing Active Directory is more difficult to achieve, especially for larger organizations that have significant on-premises technical debt.
-* A plan to move apps that depend on AD and are part of the vision for the future state Azure AD environment is being executed. A plan to replace services that won't move (file, print, fax services) is in place.
+Azure AD continues to evolve as your organization's transformation matures, bringing new features and tools that you can use. Organizations are required to deprecate capabilities or build new capabilities to provide replacement.
-* On-premises workloads have been replaced with cloud alternatives such as Windows Virtual Desktop, Azure Files, Cloud Print. SQL is replaced by Azure SQL Managed Instance.
+In this state:
-**State 5 100% cloud** - In this state, IAM capability is all provided by Azure AD and other Azure tools. This is the long-term aspiration for many organizations. In this state:
+* New users provisioned through the HR provisioning capability are created directly in Azure AD.
-* No on-premises IAM footprint required.
+* A plan to move apps that depend on Active Directory and are part of the vision for the future-state Azure AD environment is being executed. A plan to replace services that won't move (file, print, or fax services) is in place.
-* All devices are managed in Azure AD and cloud solution such as Intune.
+* On-premises workloads have been replaced with cloud alternatives such as Windows Virtual Desktop, Azure Files, or Universal Print. Azure SQL Managed Instance replaces SQL Server.
-* User identity lifecycle is managed using Azure AD.
+### State 5: 100% cloud
+
+In the 100%-cloud state, Azure AD and other Azure tools provide all IAM capability. This is the long-term aspiration for many organizations.
+
+In this state:
+
+* No on-premises IAM footprint is required.
+
+* All devices are managed in Azure AD and cloud solutions such as Intune.
+
+* The user identity lifecycle is managed through Azure AD.
* All users and groups are cloud native.
-* Network services that rely on AD are relocated.
+* Network services that rely on Active Directory are relocated.
+
+## Transformation analogy
The transformation between the states is similar to moving locations:
-* **Establish new location** - You purchase your destination and establish connectivity between the current location and the new location. This enables you to maintain your productivity and ability to operate. In this content, the activities are described in **[Establish Azure AD footprint](road-to-the-cloud-establish.md)**. The results transition you to State 2.
+1. **Establish a new location**: You purchase your destination and establish connectivity between the current location and the new location. These activities enable you to maintain your productivity and ability to operate. For more information, see [Establish an Azure AD footprint](road-to-the-cloud-establish.md). The results transition you to state 2.
+
+1. **Limit new items in the old location**: You stop investing in the old location and set a policy to stage new items in the new location. For more information, see [Implement a cloud-first approach](road-to-the-cloud-implement.md). These activities set the foundation to migrate at scale and reach state 3.
-* **Limit new items in old location** - You stop investing in the old location and set policy to stage new items in new location. In this content, the activities are described in **[Implement cloud-first approach](road-to-the-cloud-implement.md)**. The activities set the foundation to migrate at scale and reach State 3.
+1. **Move existing items to the new location**: You move items from the old location to the new location. You assess the business value of the items to determine if you'll move them as is, upgrade them, replace them, or deprecate them. For more information, see [Transition to the cloud](road-to-the-cloud-migrate.md).
-* **Move existing items to new location** - You move items from the old location to the new location. You assess the business value of the items to determine if you'll move them as-is, upgrade them, replace them, or deprecate them. In this content, the activities are described in **[Transition to the cloud](road-to-the-cloud-migrate.md)**. These activities enable you to complete State 3 and reach State 4 and State 5. Based on your business objectives, you decide what end state you want to target.
+ These activities enable you to complete state 3 and reach states 4 and 5. Based on your business objectives, you decide what end state you want to target.
-Transformation to the cloud isn't only the identity team's responsibility. Coordination across teams to define policies beyond technology that include people and process change are required. Using a coordinated approach helps to ensure consistent progress and reduces the risk of regressing to on-premises solutions. Involve teams that manage:
+Transformation to the cloud isn't only the identity team's responsibility. The organization needs coordination across teams to define policies that include people and process change, along with technology. Using a coordinated approach helps ensure consistent progress and reduces the risk of regressing to on-premises solutions. Involve teams that manage:
-* Device/endpoint
-* Networking
+* Devices/endpoints
+* Networks
* Security/risk
* Application owners
* Human resources
Transformation to the cloud isn't only the identity team's responsibility. Coord
* Procurement
* Operations
-As a migration of IAM to Azure AD is started, organizations must determine the prioritization of efforts based on their specific needs. Teams of operational staff and support staff must be trained to perform their jobs in the new environment. The following shows the high-level journey for AD to Azure AD migration:
+### High-level journey
+
+As organizations start a migration of IAM to Azure AD, they must determine the prioritization of efforts based on their specific needs. Operational staff and support staff must be trained to perform their jobs in the new environment. The following chart shows the high-level journey for migration from Active Directory to Azure AD:
+
+* **Establish an Azure AD footprint**: Initialize your new Azure AD tenant to support the vision for your end-state deployment. Adopt a [Zero Trust](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/) approach and a security model that [helps protect your tenant from on-premises compromise](../fundamentals/protect-m365-from-on-premises-attacks.md) early in your journey.
-* **Establish Azure AD footprint**: Initialize your new Azure AD tenant to supports the vision for your end-state deployment. Adopt a [Zero Trust](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/) approach and a security model that [protects your tenant from on-premises compromise](../fundamentals/protect-m365-from-on-premises-attacks.md) early in your journey.
+* **Implement a cloud-first approach**: Establish a policy that all new devices, apps, and services should be cloud-first. New applications and services that use legacy protocols (for example, NTLM, Kerberos, or LDAP) should be by exception only.
-* **Implement cloud-first approach**: Establish a policy that mandates all new devices, apps and services should be cloud-first. New applications and services using legacy protocols (NTLM, Kerberos, LDAP etc.) should be by exception only.
+* **Transition to the cloud**: Shift the management and integration of users, apps, and devices away from on-premises and over to cloud-first alternatives. Optimize user provisioning by taking advantage of [cloud-first provisioning capabilities](../governance/what-is-provisioning.md) that integrate with Azure AD.
-* **Transition to the cloud**: Shift the management and integration of users, apps and devices away from on-premises and over to cloud-first alternatives. Optimize user provisioning by taking advantage of [cloud-first provisioning capabilities](../governance/what-is-provisioning.md) that integrate with Azure AD.
+The transformation changes how users accomplish tasks and how support teams provide user support. The organization should design and implement initiatives or projects in a way that minimizes the impact on user productivity.
-The transformation changes how users accomplish tasks and how support teams provide end-user support. Initiatives or projects should be designed and implemented in a manner that minimizes the impact on user productivity. As part of the transformation, self-service IAM capabilities are introduced. Some portions of the workforce more easily adapt to the self-service user environment prevalent in cloud-based businesses.
+As part of the transformation, the organization introduces self-service IAM capabilities. Some parts of the workforce more easily adapt to the self-service user environment that's prevalent in cloud-based businesses.
-Aging applications might require updating or replacing to operate well in cloud-based IT environments. Application updates or replacements can be costly and time-consuming. The planning and Stages must also take the age and capability of the applications an organization uses.
+Aging applications might need to be updated or replaced to operate well in cloud-based IT environments. Application updates or replacements can be costly and time-consuming. The planning and other stages must also take the age and capability of the organization's applications into account.
## Next steps
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
1. On the workflows screen, select the workflow template that you want to use.

   :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflows templates.":::
-1. Enter a display name and description for the workflow. The display name must be unique and not match the name of any other workflow you've created.
+1. Enter a unique display name and description for the workflow and select **Next**.
:::image type="content" source="media/create-lifecycle-workflow/template-basics.png" alt-text="Screenshot of workflow template basic information.":::
-1. Select the **Trigger type** to be used for this workflow.
+1. On the **Configure scope** page, select the **Trigger type** and execution conditions to be used for this workflow. For more information on what can be configured, see: [Configure scope](understanding-lifecycle-workflows.md#configure-scope).
-1. On **Days from event**, you enter a value of days when you want the workflow to go into effect. The valid values are 0 to 60.
-
-1. **Event timing** allows you to choose if the days from event are either before or after.
-
-1. **Event user attribute** is the event being used to trigger the workflow. For example, with the pre hire workflow template, an event user attribute is the employee hire date.
--
-1. Select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department.
+1. Under rules, select the **Property** and **Operator**, and enter a **value**. The following picture gives an example of a rule being set up for a sales department.
:::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options.":::
-1. To view your rule syntax, select the **View rule syntax** button.
+1. To view your rule syntax, select the **View rule syntax** button. You can copy and paste multiple user property rules on this screen. For more detailed information on which properties can be included, see: [User Properties](/graph/aad-advanced-queries?tabs=http#user-properties). When you're finished adding rules, select **Next**. (A hedged Microsoft Graph sketch that creates a workflow with such a rule appears after these steps.)
:::image type="content" source="media/create-lifecycle-workflow/template-syntax.png" alt-text="Screenshot of workflow rule syntax.":::
-1. You can copy and paste multiple user property rules on this screen. For more detailed information on which properties that can be included see: [User Properties](/graph/aad-advanced-queries?tabs=http#user-properties)
-
-1. To Add a task to the template, select **Add task**.
+1. On the **Review tasks** page, you can add a task to the template by selecting **Add task**. To enable an existing task on the list, select **enable**. You're also able to disable a task by selecting **disable**. To remove a task from the template, select **Remove** on the selected task. When you're finished with tasks for your workflow, select **Next**.
:::image type="content" source="media/create-lifecycle-workflow/template-tasks.png" alt-text="Screenshot of adding tasks to templates.":::
-1. To enable an existing task on the list, select **enable**. You're also able to disable a task by selecting **disable**.
-
-1. To remove a task from the template, select **Remove** on the selected task.
-
-1. Review the workflow's settings.
+1. On the **Review+create** page, you can review the workflow's settings. You can also choose whether or not to enable the schedule for the workflow. Select **Create** to create the workflow.
:::image type="content" source="media/create-lifecycle-workflow/template-review.png" alt-text="Screenshot of reviewing and creating a template.":::
-1. Select **Create** to create the workflow.
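If you prefer automation over the portal, the following Python sketch creates a comparable workflow through the Microsoft Graph beta endpoint by using the `requests` package. The payload mirrors the sales-department rule shown earlier; the field names reflect the beta `workflow` schema as I understand it, and the access token and task definition GUID are placeholders to verify against your tenant.

```python
# Hedged sketch: create a lifecycle workflow via Microsoft Graph (beta).
# Verify the schema and required permissions against the current Graph docs.
import requests

token = "<access-token-with-LifecycleWorkflows.ReadWrite.All>"  # placeholder

workflow = {
    "category": "joiner",
    "displayName": "Onboard sales pre-hires",
    "description": "Example workflow scoped to the sales department",
    "isEnabled": True,
    "executionConditions": {
        "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
        "scope": {
            "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
            "rule": "(department eq 'Sales')",  # same rule syntax the portal shows
        },
        "trigger": {
            "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
            "timeBasedAttribute": "employeeHireDate",
            "offsetInDays": -2,  # assumption: negative offsets run before the event
        },
    },
    "tasks": [
        {
            "isEnabled": True,
            "displayName": "Example task",
            "taskDefinitionId": "<task-definition-guid>",  # placeholder GUID
            "arguments": [],
        }
    ],
}

resp = requests.post(
    "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows",
    headers={"Authorization": f"Bearer {token}"},
    json=workflow,
)
resp.raise_for_status()
print("Created workflow:", resp.json().get("id"))
```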
> [!IMPORTANT]
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md
To add an external Azure AD directory or domain as a connected organization, fol
The **Select directories + domains** pane opens.
-1. In the search box, enter a domain name to search for the Azure AD directory or domain. Be sure to enter the entire domain name.
+1. In the search box, enter a domain name to search for the Azure AD directory or domain. You can also add domains that are not in Azure AD. Be sure to enter the entire domain name.
-1. Confirm that the organization name and authentication type are correct. User sign in, prior to being able to access the myaccess portal, depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, then all users with an account in any verified domain of that Azure AD directory will sign into their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, this allows users with email addresses from just that domain to visit the myaccess portal. Then, after they authenticate with the passcode, the user can make a request.
+1. Confirm that the organization names and authentication types are correct. How users sign in before they can access the MyAccess portal depends on the authentication type for their organization. If the authentication type for a connected organization is Azure AD, all users with an account in any verified domain of that Azure AD directory will sign in to their directory, and then can request access to access packages that allow that connected organization. If the authentication type is One-time passcode, only users with email addresses from that domain can visit the MyAccess portal. After they authenticate with the passcode, the user can make a request. (A hedged Microsoft Graph sketch of creating a connected organization appears after these steps.)
![The "Select directories + domains" pane](./media/entitlement-management-organization/organization-select-directories-domains.png) > [!NOTE] > Access from some domains could be blocked by the Azure AD business to business (B2B) allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
-1. Select **Add** to add the Azure AD directory or domain. Currently, you can add only one Azure AD directory or domain per connected organization.
+1. Select **Add** to add the Azure AD directory or domain. **You can add multiple Azure AD directories and domains**.
-1. After you've added the Azure AD directory or domain, select **Select**.
+1. After you've added the Azure AD directories or domains, select **Select**.
- The organization appears in the list.
+ The organizations that you added appear in the list.
![The "Directory + domain" pane](./media/entitlement-management-organization/organization-directory-domain.png)
active-directory Lifecycle Workflow Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md
For a guide on supplying this information to a custom task extension via Microso
## Next steps
+- [customTaskExtension resource type](/graph/api/resources/identitygovernance-customtaskextension?view=graph-rest-beta)
- [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md)
- [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md)
active-directory Lifecycle Workflow History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md
Separating processing of the workflow from the tasks is important because, in a
## Next steps
+- [userProcessingResult resource type](/graph/api/resources/identitygovernance-userprocessingresult?view=graph-rest-beta)
+- [taskReport resource type](/graph/api/resources/identitygovernance-taskreport?view=graph-rest-beta)
+- [run resource type](/graph/api/resources/identitygovernance-run?view=graph-rest-beta)
+- [taskProcessingResult resource type](/graph/api/resources/identitygovernance-taskprocessingresult?view=graph-rest-beta)
- [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md)
- [Lifecycle Workflow templates](lifecycle-workflow-templates.md)
active-directory Lifecycle Workflow Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md
The default specific parameters for the **Pre-Offboarding of an employee** templ
The **Offboard an employee** template is designed to configure tasks that will be completed on an employee's last day of work. The default specific parameters for the **Offboard an employee** template are as follows:
The default specific parameters for the **Offboard an employee** template are as
The **Post-Offboarding of an employee** template is designed to configure tasks that will be completed after an employee's last day of work. The default specific parameters for the **Post-Offboarding of an employee** template are as follows:
The default specific parameters for the **Post-Offboarding of an employee** temp
## Next steps
+- [workflowTemplate resource type](/graph/api/resources/identitygovernance-workflowtemplate?view=graph-rest-beta)
- [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md)
- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory Lifecycle Workflow Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md
Detailed **Version information** is as follows:
## Next steps
+- [workflowVersion resource type](/graph/api/resources/identitygovernance-workflowversion?view=graph-rest-beta)
- [Manage workflow Properties (Preview)](manage-workflow-properties.md)
- [Manage workflow versions (Preview)](manage-workflow-tasks.md)
active-directory Manage Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md
Tasks within workflows can be added, edited, reordered, and removed at will. To
1. You can enable and disable tasks as needed by using the **Enable** and **Disable** buttons.
-1. You can reorder the order in which tasks are executed in the workflow by selecting the **Reorder** button.
+1. You can change the order in which tasks are executed in the workflow by selecting the **Reorder** button. You can also remove a task from a workflow by using the **Remove** button.
:::image type="content" source="media/manage-workflow-tasks/manage-tasks-reorder.png" alt-text="Screenshot of reordering tasks in a workflow.":::-
-1. You can remove a task from a workflow by using the **Remove** button.
-
+
1. After making changes, select **Save** to capture your changes to the tasks.
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
To make use of workload identity risk, including the new **Risky workload identi
- Azure AD Premium P2 licensing - One of the following administrator roles assigned
- - Global administrator
- - Security administrator
- - Security operator
- - Security reader
+ - Global Administrator
+ - Security Administrator
+ - Security Operator
+ - Security Reader
Users assigned the Conditional Access administrator role can create policies that use risk as a condition.
active-directory Howto Identity Protection Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-notifications.md
If your organization has enabled self-remediation as described in the article, [
As an administrator, you can set: - **The user risk level that triggers the generation of this email** - By default, the risk level is set to "High" risk.-- **The recipients of this email** - Users in the Global administrator, Security administrator, or Security reader roles are automatically added to this list. We attempt to send emails to the first 20 members of each role. If a user is enrolled in PIM to elevate to one of these roles on demand, then **they will only receive emails if they are elevated at the time the email is sent**.
+- **The recipients of this email** - Users in the Global Administrator, Security Administrator, or Security Reader roles are automatically added to this list. We attempt to send emails to the first 20 members of each role. If a user is enrolled in PIM to elevate to one of these roles on demand, then **they will only receive emails if they are elevated at the time the email is sent**.
- Optionally you can **Add custom email here**; users defined must have the appropriate permissions to view the linked reports in the Azure portal. Configure the users at risk email in the **Azure portal** under **Azure Active Directory** > **Security** > **Identity Protection** > **Users at risk detected alerts**.
It includes:
![Weekly digest email](./media/howto-identity-protection-configure-notifications/weekly-digest-email.png)
-Users in the Global administrator, Security administrator, or Security reader roles are automatically added to this list. We attempt to send emails to the first 20 members of each role. If a user is enrolled in PIM to elevate to one of these roles on demand, then **they will only receive emails if they are elevated at the time the email is sent**
+Users in the Global Administrator, Security Administrator, or Security Reader roles are automatically added to this list. We attempt to send emails to the first 20 members of each role. If a user is enrolled in PIM to elevate to one of these roles on demand, then **they will only receive emails if they are elevated at the time the email is sent**
### Configure weekly digest email
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Before organizations enable remediation policies, they may want to [investigate]
### User risk with Conditional Access
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
Before organizations enable remediation policies, they may want to [investigate]
### Sign in risk with Conditional Access
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Overview Identity Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/overview-identity-protection.md
Identity Protection requires users be a Security Reader, Security Operator, Secu
| Role | Can do | Can't do | | | | |
-| Global administrator | Full access to Identity Protection | |
-| Security administrator | Full access to Identity Protection | Reset password for a user |
-| Security operator | View all Identity Protection reports and Overview <br><br> Dismiss user risk, confirm safe sign-in, confirm compromise | Configure or change policies <br><br> Reset password for a user <br><br> Configure alerts |
-| Security reader | View all Identity Protection reports and Overview | Configure or change policies <br><br> Reset password for a user <br><br> Configure alerts <br><br> Give feedback on detections |
-| Global reader | Read-only access to Identity Protection | |
+| Global Administrator | Full access to Identity Protection | |
+| Security Administrator | Full access to Identity Protection | Reset password for a user |
+| Security Operator | View all Identity Protection reports and Overview <br><br> Dismiss user risk, confirm safe sign-in, confirm compromise | Configure or change policies <br><br> Reset password for a user <br><br> Configure alerts |
+| Security Reader | View all Identity Protection reports and Overview | Configure or change policies <br><br> Reset password for a user <br><br> Configure alerts <br><br> Give feedback on detections |
+| Global Reader | Read-only access to Identity Protection | |
-Currently, the security operator role can't access the Risky sign-ins report.
+Currently, the Security Operator role can't access the Risky sign-ins report.
Conditional Access administrators can create policies that factor in user or sign-in risk as a condition. Find more information in the article [Conditional Access: Conditions](../conditional-access/concept-conditional-access-conditions.md#sign-in-risk).
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
The following is a sample HTTP request to activate an eligible assignment for an
### Request

````HTTP
-PUT https://management.azure.com/providers/Microsoft.Subscription/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleEligibilityScheduleRequests/64caffb6-55c0-4deb-a585-68e948ea1ad6?api-version=2020-10-01-preview
+PUT https://management.azure.com/providers/Microsoft.Subscription/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleAssignmentScheduleRequests/fea7a502-9a96-4806-a26f-eee560e52045?api-version=2020-10-01
````

### Request body

````JSON
-{
- "properties": {
- "principalId": "a3bb8764-cb92-4276-9d2a-ca1e895e55ea",
- "roleDefinitionId": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleDefinitions/c8d4ff99-41c3-41a8-9f60-21dfdad59608",
- "requestType": "SelfActivate",
- "linkedRoleEligibilityScheduleId": "b1477448-2cc6-4ceb-93b4-54a202a89413",
- "scheduleInfo": {
- "startDateTime": "2020-09-09T21:35:27.91Z",
- "expiration": {
- "type": "AfterDuration",
- "endDateTime": null,
- "duration": "PT8H"
- }
- },
- "condition": "@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:ContainerName] StringEqualsIgnoreCase 'foo_storage_container'",
- "conditionVersion": "1.0"
- }
-}
+{
+"properties": {
+ "principalId": "a3bb8764-cb92-4276-9d2a-ca1e895e55ea",
+ "roleDefinitionId": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleDefinitions/c8d4ff99-41c3-41a8-9f60-21dfdad59608",
+ "requestType": "SelfActivate",
+ "linkedRoleEligibilityScheduleId": "b1477448-2cc6-4ceb-93b4-54a202a89413",
+ "scheduleInfo": {
+ "startDateTime": "2020-09-09T21:35:27.91Z",
+ "expiration": {
+ "type": "AfterDuration",
+ "endDateTime": null,
+ "duration": "PT8H"
+ }
+ },
+ "condition": "@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:ContainerName] StringEqualsIgnoreCase 'foo_storage_container'",
+ "conditionVersion": "1.0"
+ }
+}
````

### Response
PUT https://management.azure.com/providers/Microsoft.Subscription/subscriptions/
Status code: 201

````HTTP
-{
- "properties": {
- "targetRoleAssignmentScheduleId": "c9e264ff-3133-4776-a81a-ebc7c33c8ec6",
- "targetRoleAssignmentScheduleInstanceId": null,
- "scope": "/providers/Microsoft.Subscription/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f",
- "roleDefinitionId": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleDefinitions/c8d4ff99-41c3-41a8-9f60-21dfdad59608",
- "principalId": "a3bb8764-cb92-4276-9d2a-ca1e895e55ea",
- "principalType": "User",
- "requestType": "SelfActivate",
- "status": "Provisioned",
- "approvalId": null,
- "scheduleInfo": {
- "startDateTime": "2020-09-09T21:35:27.91Z",
- "expiration": {
- "type": "AfterDuration",
- "endDateTime": null,
- "duration": "PT8H"
- }
- },
- "ticketInfo": {
- "ticketNumber": null,
- "ticketSystem": null
- },
- "justification": null,
- "requestorId": "a3bb8764-cb92-4276-9d2a-ca1e895e55ea",
- "createdOn": "2020-09-09T21:35:27.91Z",
- "condition": "@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:ContainerName] StringEqualsIgnoreCase 'foo_storage_container'",
- "conditionVersion": "1.0",
- "expandedProperties": {
- "scope": {
- "id": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f",
- "displayName": "Pay-As-You-Go",
- "type": "subscription"
- },
- "roleDefinition": {
- "id": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleDefinitions/c8d4ff99-41c3-41a8-9f60-21dfdad59608",
- "displayName": "Contributor",
- "type": "BuiltInRole"
- },
- "principal": {
- "id": "a3bb8764-cb92-4276-9d2a-ca1e895e55ea",
- "displayName": "User Account",
- "email": "user@my-tenant.com",
- "type": "User"
- }
- }
- },
- "name": "fea7a502-9a96-4806-a26f-eee560e52045",
- "id": "/providers/Microsoft.Subscription/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/RoleAssignmentScheduleRequests/fea7a502-9a96-4806-a26f-eee560e52045",
- "type": "Microsoft.Authorization/RoleAssignmentScheduleRequests"
-}
+{
+ "properties": {
+ "targetRoleAssignmentScheduleId": "c9e264ff-3133-4776-a81a-ebc7c33c8ec6",
+ "targetRoleAssignmentScheduleInstanceId": null,
+ "scope": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f",
+ "roleDefinitionId": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleDefinitions/c8d4ff99-41c3-41a8-9f60-21dfdad59608",
+ "principalId": "a3bb8764-cb92-4276-9d2a-ca1e895e55ea",
+ "principalType": "User",
+ "requestType": "SelfActivate",
+ "status": "Provisioned",
+ "approvalId": null,
+ "scheduleInfo": {
+ "startDateTime": "2020-09-09T21:35:27.91Z",
+ "expiration": {
+ "type": "AfterDuration",
+ "endDateTime": null,
+ "duration": "PT8H"
+ }
+ },
+ "ticketInfo": {
+ "ticketNumber": null,
+ "ticketSystem": null
+ },
+ "justification": null,
+ "requestorId": "a3bb8764-cb92-4276-9d2a-ca1e895e55ea",
+ "createdOn": "2020-09-09T21:35:27.91Z",
+ "condition": "@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:ContainerName] StringEqualsIgnoreCase 'foo_storage_container'",
+ "conditionVersion": "1.0",
+ "expandedProperties": {
+ "scope": {
+ "id": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f",
+ "displayName": "Pay-As-You-Go",
+ "type": "subscription"
+ },
+ "roleDefinition": {
+ "id": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/roleDefinitions/c8d4ff99-41c3-41a8-9f60-21dfdad59608",
+ "displayName": "Contributor",
+ "type": "BuiltInRole"
+ },
+ "principal": {
+ "id": "a3bb8764-cb92-4276-9d2a-ca1e895e55ea",
+ "displayName": "User Account",
+ "email": "user@my-tenant.com",
+ "type": "User"
+ }
+ }
+ },
+ "name": "fea7a502-9a96-4806-a26f-eee560e52045",
+ "id": "/subscriptions/dfa2a084-766f-4003-8ae1-c4aeb893a99f/providers/Microsoft.Authorization/RoleAssignmentScheduleRequests/fea7a502-9a96-4806-a26f-eee560e52045",
+ "type": "Microsoft.Authorization/RoleAssignmentScheduleRequests"
+}
````

## View the status of your requests
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
For each month, we truncate the SLA attainment at three places after the decimal
| May | 99.999% | 99.999% |
| June | 99.999% | 99.999% |
| July | 99.999% | 99.999% |
-| August | 99.999% | |
+| August | 99.999% | 99.999% |
| September | 99.999% | |
| October | 99.999% | |
| November | 99.998% | |
active-directory Airtable Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airtable-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Airtable SSO
-Follow the instructions given in the [link](https://support.airtable.com/hc/articles/360037475234) to configure single sign-on on **Airtable** side.
+Follow the instructions given in the [link](https://support.airtable.com/docs/configuring-sso-with-azure-ad) to configure single sign-on on **Airtable** side.
### Create Airtable test user
active-directory Apple Business Manager Provision Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/apple-business-manager-provision-tutorial.md
This tutorial describes the steps you need to perform in both Apple Business Man
> * Create users in Apple Business Manager > * Remove users in Apple Business Manager when they do not require access anymore > * Keep user attributes synchronized between Azure AD and Apple Business Manager
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Apple Business Manager (recommended).
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
2. Click Settings at the bottom of the sidebar, click Data Source below Organization Settings, then click Connect to Data Source.
3. Click Connect next to SCIM, carefully read the warning, click Copy, then click Close. [The Connect to SCIM window, which provides a token and a Copy button under it.]
-Leave this window open to copy the Tenant URL from Apple Business Manager to Azure AD, which is: 'https://federation.apple.com/feeds/business/scim'
+Leave this window open to copy the Tenant URL from Apple Business Manager to Azure AD, which is: `https://federation.apple.com/feeds/business/scim`
- ![Apple Business Manager](media/applebusinessmanager-provisioning-tutorial/scim-token.png)
+ ![Screenshot of Apple Business Manager token generation.](media/apple-business-manager-provision-tutorial/scim-token.png)
-> [!NOTE]
-> The secret token shouldnΓÇÖt be shared with anyone other than the Azure AD administrator.
+ > [!NOTE]
+ > The secret token shouldn't be shared with anyone other than the Azure AD administrator.
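If you want to sanity-check the tenant URL and secret token before handing them to Azure AD, the following minimal Python sketch issues a standard SCIM 2.0 request with the bearer token. The `/Users` path and `count` parameter are standard SCIM; whether Apple's feed accepts ad-hoc requests outside the Azure AD provisioning service is an assumption, so treat this as illustrative only.

````python
import requests

# The tenant URL from the Apple Business Manager SCIM pane; the token is
# the secret copied in step 3 (hypothetical placeholder here).
TENANT_URL = "https://federation.apple.com/feeds/business/scim"
SECRET_TOKEN = "<scim-token-from-apple-business-manager>"

# A standard SCIM 2.0 request: list one user, authenticated with the
# bearer token. A 200 response suggests the URL and token are valid.
response = requests.get(
    f"{TENANT_URL}/Users",
    headers={"Authorization": f"Bearer {SECRET_TOKEN}"},
    params={"count": 1},
)
print(response.status_code)
````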
## Step 3. Add Apple Business Manager from the Azure AD application gallery
-Add Apple Business Manager from the Azure AD application gallery to start managing provisioning to Apple Business Manager. If you have previously setup Apple Business Manager for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+* Add Apple Business Manager from the Azure AD application gallery to start managing provisioning to Apple Business Manager. If you have previously set up Apple Business Manager for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration.
+
+* To add the Apple Business Manager Azure AD app with Microsoft tenants, the administrator of the tenants must go through the federated authentication setup process, including testing authentication. When authentication has succeeded, the Apple Business Manager Azure AD app is populated in the tenant and the administrator can federate domains and configure Apple Business Manager to use SCIM (System for Cross-domain Identity Management) for directory sync.
+ [Use federated authentication with MS Azure AD in Apple Business Manager](https://support.apple.com/en-ke/guide/apple-business-manager/axmb02f73f18/web)
+
## Step 4. Define who will be in scope for provisioning

The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
The Azure AD provisioning service allows you to scope who will be provisioned ba
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
2. In the applications list, select **Apple Business Manager**.
- ![The Apple Business Manager in the Applications list](common/all-applications.png)
+ ![Screenshot of the Apple Business Manager in the Applications list.](common/all-applications.png)
3. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
4. Set the **Provisioning Mode** to **Automatic**.
- ![Provisioning tab automatic](common/provisioning-automatic.png)
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL and Access Token** values retrieved from Apple Business Manager in **Tenant URL** and **Secret Token** respectively.. Click **Test Connection** to ensure Azure AD can connect to Apple Business Manager. If the connection fails, ensure your Apple Business Manager account has Admin permissions and try again.
+5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL and Access Token** values retrieved from Apple Business Manager in **Tenant URL** and **Secret Token** respectively. Click **Test Connection** to ensure Azure AD can connect to Apple Business Manager. If the connection fails, ensure your Apple Business Manager account has Admin permissions and try again.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
> [!NOTE]
> If the connection is successful, Apple Business Manager shows the SCIM connection as active. It can take up to 60 seconds for Apple Business Manager to reflect the latest connection status.

6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications, and select the **Send an email notification when a failure occurs** checkbox.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
7. Click **Save**.
The Azure AD provisioning service allows you to scope who will be provisioned ba
11. To enable the Azure AD provisioning service for Apple Business Manager, change the **Provisioning Status** to **On** in the Settings section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
12. Define the users and/or groups that you would like to provision to Apple Business Manager by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
13. When you are ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
active-directory Apple School Manager Provision Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/apple-school-manager-provision-tutorial.md
This tutorial describes the steps you need to perform in both Apple School Manag
> * Create users in Apple School Manager > * Remove users in Apple School Manager when they do not require access anymore > * Keep specific user attributes synchronized between Azure AD and Apple School Manager
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Apple School Manager (recommended).
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
2. Click Settings at the bottom of the sidebar, click Data Source below Organization Settings, then click Connect to Data Source.
3. Click Connect next to SCIM, carefully read the warning, click Copy, then click Close. [The Connect to SCIM window, which provides a token and a Copy button under it.]
-Leave this window open to copy the Tenant URL from Apple Business Manager to Azure AD, which is: 'https://federation.apple.com/feeds/school/scim'
+Leave this window open to copy the Tenant URL from Apple School Manager to Azure AD, which is: `https://federation.apple.com/feeds/school/scim`
- ![Apple School Manager](media/appleschoolmanager-provisioning-tutorial/scim-token.png)
+ ![Screenshot of Apple School Manager token generation.](media/apple-school-manager-provision-tutorial/scim-token.png)
-> [!NOTE]
-> The secret token shouldnΓÇÖt be shared with anyone other than the Azure AD administrator.
+ > [!NOTE]
+ > The secret token shouldn't be shared with anyone other than the Azure AD administrator.
## Step 3. Add Apple School Manager from the Azure AD application gallery
-Add Apple School Manager from the Azure AD application gallery to start managing provisioning to Apple School Manager. If you have previously setup Apple School Manager for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+* Add Apple School Manager from the Azure AD application gallery to start managing provisioning to Apple School Manager. If you have previously set up Apple School Manager for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration.
+
+* To add the Apple School Manager Azure AD app with Microsoft tenants, the administrator of the tenants must go through the federated authentication setup process, including testing authentication. When authentication has succeeded, the Apple School Manager Azure AD app is populated in the tenant and the administrator can federate domains and configure Apple School Manager to use SCIM (System for Cross-domain Identity Management) for directory sync.
+ [Use federated authentication with MS Azure AD in Apple School Manager](https://support.apple.com/en-ke/guide/apple-school-manager/axmb02f73f18/web)
+
## Step 4. Define who will be in scope for provisioning

The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
The Azure AD provisioning service allows you to scope who will be provisioned ba
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
2. In the applications list, select **Apple School Manager**.
- ![The Apple School Manager in the Applications list](common/all-applications.png)
+ ![Screenshot of Apple School Manager in the Applications list.](common/all-applications.png)
3. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
4. Set the **Provisioning Mode** to **Automatic**.
- ![Provisioning tab automatic](common/provisioning-automatic.png)
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL and Access Token** values retrieved from Apple School Manager in **Tenant URL** and **Secret Token** respectively.. Click **Test Connection** to ensure Azure AD can connect to Apple School Manager. If the connection fails, ensure your Apple School Manager account has Admin permissions and try again.
+5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL and Access Token** values retrieved from Apple School Manager in **Tenant URL** and **Secret Token** respectively. Click **Test Connection** to ensure Azure AD can connect to Apple School Manager. If the connection fails, ensure your Apple School Manager account has Admin permissions and try again.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
> [!NOTE]
> If the connection is successful, Apple School Manager shows the SCIM connection as active. It can take up to 60 seconds for Apple School Manager to reflect the latest connection status.

6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications, and select the **Send an email notification when a failure occurs** checkbox.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
7. Click **Save**.
The Azure AD provisioning service allows you to scope who will be provisioned ba
11. To enable the Azure AD provisioning service for Apple School Manager, change the **Provisioning Status** to **On** in the Settings section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
12. Define the users and/or groups that you would like to provision to Apple School Manager by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
13. When you are ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
active-directory Hrworks Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hrworks-single-sign-on-tutorial.md
Previously updated : 07/29/2021 Last updated : 09/09/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://login.hrworks.de/?companyId=<COMPANY_ID>&directssologin=true` > [!NOTE]
- > The value is not real. Update the value with the actual Sign-On URL. Contact [HRworks Single Sign-On Client support team](https://www.hrworks.de/dienstleistungen/support/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The value is not real. Update the value with the actual Sign-On URL. See [HRworks Single Sign-On Helpcenter article](https://help.hrworks.de/en/single-sign-on) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
active-directory Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-gallery.md
To be considered into Entra Verified ID partner documentation, submit your appli
|:-|:--|:--| |![Screenshot of au10tix logo.](media/partner-gallery/au10tix.png) | [AU10TIX](https://www.au10tix.com/solutions/microsoft-azure-active-directory-verifiable-credentials-program) improves Verifiability While Protecting Privacy For Businesses, Employees, Contractors, Vendors, And Customers. | [Configure Verified ID by AU10TIX as your Identity Verification Partner](https://aka.ms/au10tixvc). | | ![Screenshot of a LexisNexis logo.](media/partner-gallery/lexisnexis.png) | [LexisNexis](https://solutions.risk.lexisnexis.com/did-microsoft) risk solutions Verifiable credentials enables faster onboarding for employees, students, citizens, or others to access services. | [Configure Verified ID by LexisNexis Risk Solutions as your Identity Verification Partner](https://aka.ms/lexisnexisvc). |
-| ![Screenshot of a Onfido logo.](media/partner-gallery/onfido.jpeg) | [Onfido](https://onfido.com/landing/onfido-microsoft-idv-service/) Start issuing and accepting verifiable credentials in minutes. With verifiable credentials and Onfido you can verify a personΓÇÖs identity while respecting privacy. Digitally validate information on a personΓÇÖs ID or their biometrics.| Not Available |
-| ![Screenshot of a Vu logo.](media/partner-gallery/vu.png) | [Vu Security](https://landings.vusecurity.com/microsoft-verifiable-credentials) Verifiable credentials with just a selfie and your ID.| Not Available |
-| ![Screenshot of a Jumio logo.](media/partner-gallery/jumio.jpeg) | [Jumio](https://www.jumio.com/microsoft-verifiable-credentials/) is helping to support a new form of digital identity by Microsoft based on verifiable credentials and decentralized identifiers standards to let consumers verify once and use everywhere.| Not Available |
-| ![Screenshot of a Idemia logo.](media/partner-gallery/idemia.png) | [Idemia](https://na.idemia.com/identity/verifiable-credentials/) Integration with Verified ID enables ΓÇ£Verify once, use everywhereΓÇ¥ functionality.| Not Available |
-| ![Screenshot of a Acuant logo.](media/partner-gallery/acuant.png) | [Acuant](https://www.acuant.com/microsoft-acuant-verifiable-credentials-my-digital-id/) - My Digital ID - Create Your Digital Identity Once, Use It Everywhere.| Not Available |
-| ![Screenshot of a Clear logo.](media/partner-gallery/clear.jpeg) | [Clear](https://ir.clearme.com/news-events/press-releases/detail/25/clear-collaborates-with-microsoft-to-create-more-secure) Collaborates with Microsoft to Create More Secure Digital Experience Through Verification Credential.| Not Available |
+| ![Screenshot of an Onfido logo.](media/partner-gallery/onfido.jpeg) | [Onfido](https://onfido.com/landing/onfido-microsoft-idv-service/) Start issuing and accepting verifiable credentials in minutes. With verifiable credentials and Onfido you can verify a person's identity while respecting privacy. Digitally validate information on a person's ID or their biometrics.| * |
+| ![Screenshot of a Vu logo.](media/partner-gallery/vu.png) | [Vu Security](https://landings.vusecurity.com/microsoft-verifiable-credentials) Verifiable credentials with just a selfie and your ID.| * |
+| ![Screenshot of a Jumio logo.](media/partner-gallery/jumio.jpeg) | [Jumio](https://www.jumio.com/microsoft-verifiable-credentials/) is helping to support a new form of digital identity by Microsoft based on verifiable credentials and decentralized identifiers standards to let consumers verify once and use everywhere.| * |
+| ![Screenshot of an Idemia logo.](media/partner-gallery/idemia.png) | [Idemia](https://na.idemia.com/identity/verifiable-credentials/) Integration with Verified ID enables "Verify once, use everywhere" functionality.| * |
+| ![Screenshot of an Acuant logo.](media/partner-gallery/acuant.png) | [Acuant](https://www.acuant.com/microsoft-acuant-verifiable-credentials-my-digital-id/) - My Digital ID - Create Your Digital Identity Once, Use It Everywhere.| * |
+| ![Screenshot of a Clear logo.](media/partner-gallery/clear.jpeg) | [Clear](https://ir.clearme.com/news-events/press-releases/detail/25/clear-collaborates-with-microsoft-to-create-more-secure) Collaborates with Microsoft to Create More Secure Digital Experience Through Verification Credential.| * |
+
+\* - no documentation available yet
## Next steps
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
To add the required permissions, follow these steps:
1. Website ID registration
1. Domain verification
1. Select each section and download the JSON file under each.
-1. Crete a website that you can use to distribute the files. If you specified **https://contoso.com** as your domain, the URLs for each of the files would look as shown below:
+1. Create a website that you can use to distribute the files. If you specified **https://contoso.com** as your domain, the URLs for each of the files would look as shown below:
- `https://contoso.com/.well-known/did.json`
- `https://contoso.com/.well-known/did-configuration.json`
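For quick local testing of the file layout, a minimal sketch using Python's built-in `http.server` module can serve the well-known files; in production, the files must be reachable over HTTPS at the domain you specified (`https://contoso.com` in this example).

````python
# Minimal local sketch: serve the downloaded well-known files over HTTP.
# Run from the directory that contains .well-known/did.json and
# .well-known/did-configuration.json.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler).serve_forever()
````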
Once you have successfully completed the verification steps, you are ready
## Next steps - [Learn how to issue Microsoft Entra Verified ID credentials from a web application](verifiable-credentials-configure-issuer.md).-- [Learn how to verify Microsoft Entra Verified ID credentials](verifiable-credentials-configure-verifier.md).
+- [Learn how to verify Microsoft Entra Verified ID credentials](verifiable-credentials-configure-verifier.md).
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
The tutorials for deploying and running the [samples](verifiable-credentials-con
- Java - [Deploy to App Service](../../app-service/quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven#4deploy-the-app). You need to add the maven plugin for Azure App Service to the sample. - Python - [Deploy using VSCode](../../app-service/quickstart-python.md?tabs=flask%2Cwindows%2Cazure-cli%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli#3deploy-your-application-code-to-azure)
-Regardless of which language of the sample you are using, they will pick up the Azure App Service hostname (https://something.azurewebsites.net/) and use it as the public endpoint. You don't need to configure something extra to make it work. If you make changes to the code or configuration, you need to redeploy the sample to Azure App Service. Troubleshooting/debugging will not be as easy as running the sample on your local machine, where traces to the console window show you errors, but you can achieve almost the same by using the [Log Stream](../../app-service/troubleshoot-diagnostic-logs.md#stream-logs).
+Regardless of which language of the sample you are using, it will pick up the Azure App Service hostname `https://something.azurewebsites.net` and use it as the public endpoint. You don't need to configure anything extra to make it work. If you make changes to the code or configuration, you need to redeploy the sample to Azure App Service. Troubleshooting/debugging will not be as easy as running the sample on your local machine, where traces to the console window show you errors, but you can achieve almost the same by using the [Log Stream](../../app-service/troubleshoot-diagnostic-logs.md#stream-logs).
## Next steps
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Previously updated : 08/26/2022 Last updated : 09/09/2022 # Use ImageCleaner to clean up stale images on your Azure Kubernetes Service cluster (preview)
It's common to use pipelines to build and deploy images on Azure Kubernetes Serv
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] and the `aks-preview` CLI extension installed.
+* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] and version 0.5.96 or later of the `aks-preview` CLI extension installed.
* The `EnableImageCleanerPreview` feature flag registered on your subscription: ### [Azure CLI](#tab/azure-cli)
The deletion logs are stored in the `image-cleaner-kind-worker` pods. You can ch
[register-azresourceprovider]: /powershell/module/az.resources/register-azresourceprovider [arm-vms]: https://azure.microsoft.com/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/
-[trivy]: https://github.com/aquasecurity/trivy
+[trivy]: https://github.com/aquasecurity/trivy
analysis-services Analysis Services Connect Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-excel.md
# Connect with Excel
-Once you've created a server and deployed a tabular model to it, clients can connect and begin exploring data. This article describes connecting to an Azure Analysis Services resource by using the Excel desktop app. Connecting to an Azure Analysis Services resource is not supported in Excel for the web.
+Once you've created a server and deployed a tabular model to it, clients can connect and begin exploring data. This article describes connecting to an Azure Analysis Services resource by using the Excel desktop app. Connecting to an Azure Analysis Services resource is not supported in Excel for the web or Excel for macOS.
## Before you begin
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
+
+ Title: Authentication and authorization - Overview
+
+description: Learn about authentication and authorization features in Azure API Management to secure access to the management, API, and developer features
++++ Last updated : 09/08/2022+++
+# Authentication and authorization in Azure API Management
+
+This article is an introduction to API Management capabilities that help you secure users' access to API Management features and APIs.
+
+API Management provides a rich, flexible set of features to support API authentication and authorization in addition to the standard control-plane authentication and role-based access control (RBAC) required when interacting with Azure services.
+
+API Management also provides a fully customizable, standalone, managed [developer portal](api-management-howto-developer-portal.md), which can be used externally (or internally) to allow developer users to discover and interact with the APIs published through API Management. The developer portal has several options to facilitate secure user sign-up and sign-in.
+
+The following diagram is a conceptual view of Azure API Management, showing the management plane (Azure control plane), API gateway (data plane), and developer portal (user plane), each with at least one option to secure interaction. For an overview of API Management components, see [What is Azure API Management?](api-management-key-concepts.md)
++
+## Management plane
+
+Administrators, operators, developers, and DevOps service principals are examples of the different personas required to manage an Azure API Management instance in a customer environment.
+
+Azure API Management relies on Azure Active Directory (Azure AD), which includes optional features such as multifactor authentication (MFA), and Azure RBAC to enable fine-grained access to the API Management service and its entities including APIs and policies. For more information, see [How to use role-based access control in Azure API Management](api-management-role-based-access-control.md).
+
+The management plane can be accessed via an Azure AD login (or token) through the Azure portal, infrastructure-as-code templates (such as Azure Resource Manager or Bicep), the REST API, client SDKs, the Azure CLI, or Azure PowerShell.
+
+## Gateway (data plane)
+
+API authentication and authorization in API Management involve the end-to-end communication of client apps *through* the API Management gateway to backend APIs.
+
+In many customer environments, [OAuth 2.0](https://oauth.net/2/) is the preferred API authorization protocol. API Management supports OAuth 2.0 across the data plane.
+
+### OAuth concepts
+
+What happens when a client app calls an API with a request that is secured using TLS and OAuth? The following is an abbreviated example flow:
+
+* The client (the calling app, or *bearer*) authenticates using credentials to an *identity provider*.
+* The client obtains a time-limited *access token* (a JSON web token, or JWT) from the identity provider's *authorization server*.
+
+ The identity provider (for example, Azure AD) is the *issuer* of the token, and the token includes an *audience claim* that authorizes access to a *resource server* (for example, to a backend API, or to the API Management gateway itself).
+* The client calls the API and presents the access token - for example, in an Authorization header.
+* The *resource server* validates the access token. Validation is a complex process that includes a check that the *issuer* and *audience* claims contain expected values.
+* Based on token validation criteria, access to resources of the [backend] API is then granted.
+
+Depending on the type of client app and scenarios, different *authentication flows* are needed to request and manage tokens. For example, the authorization code flow and grant type are commonly used in apps that call web APIs. Learn more about [OAuth flows and application scenarios in Azure AD](../active-directory/develop/authentication-flows-app-scenarios.md).
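As a concrete illustration of this flow, the following minimal Python sketch uses the client credentials variant against the Azure AD v2.0 token endpoint and then calls an API exposed through the gateway. All IDs, secrets, and URLs are hypothetical placeholders.

````python
import requests

# Hypothetical values for illustration only.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "<client-secret>"
SCOPE = "api://my-backend-api/.default"  # maps to the audience claim
API_URL = "https://contoso.azure-api.net/orders"

# Steps 1-2: authenticate to the identity provider (Azure AD) and obtain
# a time-limited access token (a JWT) from its token endpoint.
token_response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": SCOPE,
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Step 3: call the API, presenting the token in the Authorization header;
# the resource server then validates it before granting access.
api_response = requests.get(
    API_URL, headers={"Authorization": f"Bearer {access_token}"}
)
print(api_response.status_code)
````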
++
+### OAuth 2.0 authorization scenarios
+
+#### Audience is the backend
+
+The most common scenario is when the Azure API Management instance is a "transparent" proxy between the caller and backend API, and the calling application requests access to the API directly. The scope of the access token is between the calling application and backend API.
++
+In this scenario, the access token sent along with the HTTP request is intended for the backend API, not API Management. However, API Management still allows for a defense in depth approach. For example, configure policies to [validate the token](api-management-access-restriction-policies.md#ValidateJWT), rejecting requests that arrive without a token, or a token that's not valid for the intended backend API. You can also configure API Management to check other claims of interest extracted from the token.
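Conceptually, the checks such a policy performs resemble this minimal Python sketch using the PyJWT library; the JWKS URL, issuer, and audience values are hypothetical placeholders, and the real enforcement happens in the gateway policy, not in application code.

````python
import jwt  # PyJWT
from jwt import PyJWKClient

# Hypothetical tenant and API values for illustration only.
JWKS_URL = "https://login.microsoftonline.com/<tenant-id>/discovery/v2.0/keys"
EXPECTED_ISSUER = "https://login.microsoftonline.com/<tenant-id>/v2.0"
EXPECTED_AUDIENCE = "api://my-backend-api"

def validate_access_token(token: str) -> dict:
    """Return the token's claims, or raise if any validation check fails."""
    # Fetch the signing key matching the token's key ID from the JWKS endpoint.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # decode() verifies the signature and expiry, and checks that the
    # issuer and audience claims contain the expected values.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        issuer=EXPECTED_ISSUER,
        audience=EXPECTED_AUDIENCE,
    )
````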
+
+For an example, see [Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
+
+#### Audience is API Management
+
+In this scenario, the API Management service acts on behalf of the API, and the calling application requests access to the API Management instance. The scope of the access token is between the calling application and API Management.
++
+There are different reasons for wanting to do this. For example:
+
+* The backend is a legacy API that can't be updated to support OAuth.
+
+ API Management should first be configured to validate the token (checking the issuer and audience claims at a minimum). After validation, use one of several options available to secure onward connections from API Management. See [other options](#other-options), later in this article.
+
+* The context required by the backend can't be established from the caller.
+
+ After API Management has successfully validated the token received from the caller, it then needs to obtain an access token for the backend API using its own context, or context derived from the calling application. This scenario can be accomplished using either:
+
+ * A custom policy to obtain an onward access token valid for the backend API from a configured identity provider.
+
+ * The API Management instance's own identity: passing the token from the API Management resource's system-assigned or user-assigned [managed identity](api-management-authentication-policies.md#ManagedIdentity) to the backend API (see the sketch after this list).
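In API Management itself, the second option uses the `authentication-managed-identity` policy. Outside a policy, the equivalent token acquisition looks like this minimal Python sketch with the `azure-identity` library; the backend scope and URL are hypothetical placeholders.

````python
import requests
from azure.identity import ManagedIdentityCredential  # azure-identity package

# Hypothetical backend values for illustration only.
BACKEND_SCOPE = "api://legacy-backend/.default"
BACKEND_URL = "https://backend.contoso.example/orders"

# Acquire a token for the backend using the hosting resource's
# system-assigned managed identity.
credential = ManagedIdentityCredential()
token = credential.get_token(BACKEND_SCOPE)

# Call the backend with the acquired token.
response = requests.get(
    BACKEND_URL, headers={"Authorization": f"Bearer {token.token}"}
)
print(response.status_code)
````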
+
+### Token management by API Management
+
+API Management also supports acquisition and secure storage of OAuth 2.0 tokens for certain downstream services using the [authorizations](authorizations-overview.md) (preview) feature, including through use of custom policies and caching.
+
+With authorizations, API Management manages the tokens for access to OAuth 2.0 backends, simplifying the development of client apps that access APIs.
+
+### Other options
+
+Although authorization is preferred and OAuth 2.0 has become the dominant method of enabling strong authorization for APIs, API Management enables other authentication options that can be useful if the backend or calling applications are legacy or don't yet support OAuth. Options include:
+
+* Mutual TLS (mTLS), also known as client certificate authentication, between the client (app) and API Management. This authentication can be end-to-end, with the call between API Management and the backend API secured in the same way. For more information, see [How to secure APIs using client certificate authentication in API Management](api-management-howto-mutual-certificates-for-clients.md)
+* Basic authentication, using the [authentication-basic](api-management-authentication-policies.md#Basic) policy.
+* Subscription key, also known as an API key. For more information, see [Subscriptions in API Management](api-management-subscriptions.md).
+
+> [!NOTE]
+> We recommend using a subscription (API) key *in addition to* another method of authentication or authorization. On its own, a subscription key isn't a strong form of authentication, but it can be useful in certain scenarios, for example, tracking individual customers' API usage.
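As an illustration, a client calling through the gateway with a subscription key paired with an OAuth 2.0 token might look like the following sketch; the URL, key, and token are placeholders, while `Ocp-Apim-Subscription-Key` is API Management's default subscription key header.

````python
import requests

# Hypothetical values for illustration only.
API_URL = "https://contoso.azure-api.net/orders"
SUBSCRIPTION_KEY = "<subscription-key>"
ACCESS_TOKEN = "<oauth2-access-token>"

# Pair the subscription key (useful for tracking usage) with a stronger
# credential - here, an OAuth 2.0 bearer token.
response = requests.get(
    API_URL,
    headers={
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
)
print(response.status_code)
````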
+
+## Developer portal (user plane)
+
+The managed developer portal is an optional API Management feature that allows internal or external developers and other interested parties to discover and use APIs that are published through API Management.
+
+If you elect to customize and publish the developer portal, API Management provides different options to secure it:
+
+* **External users** - The preferred option when the developer portal is consumed externally is to enable business-to-consumer access control through Azure Active Directory B2C (Azure AD B2C).
+ * Azure AD B2C provides the option of using Azure AD B2C native accounts: users sign up to Azure AD B2C and use that identity to access the developer portal.
+ * Azure AD B2C is also useful if you want users to access the developer portal using existing social media or federated organizational accounts.
+ * Azure AD B2C provides many features to improve the end user sign-up and sign-in experience, including conditional access and MFA.
+
+ For steps to enable Azure AD B2C authentication in the developer portal, see [How to authorize developer accounts by using Azure Active Directory B2C in Azure API Management](api-management-howto-aad-b2c.md).
++
+* **Internal users** - The preferred option when the developer portal is consumed internally is to leverage your corporate Azure AD. Azure AD provides a seamless single sign-on (SSO) experience for corporate users who need to access and discover APIs through the developer portal.
+
+ For steps to enable Azure AD authentication in the developer portal, see [How to authorize developer accounts by using Azure Active Directory in Azure API Management](api-management-howto-aad.md).
+
+
+* **Basic authentication** - A default option is to use the built-in developer portal [username and password](developer-portal-basic-authentication.md) provider, which allows developer users to register directly in API Management and sign in using API Management user accounts. User sign up through this option is protected by a CAPTCHA service.
+
+### Developer portal test console
+In addition to providing configuration for developer users to sign up for access and sign in, the developer portal includes a test console where the developers can send test requests through API Management to the backend APIs. This test facility also exists for contributing users of API Management who manage the service using the Azure portal.
+
+In both cases, if the API exposed through Azure API Management is secured with OAuth 2.0 - that is, a calling application (*bearer*) needs to obtain and pass a valid access token - you can configure API Management to generate a valid token on behalf of an Azure portal or developer portal test console user. For more information, see [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md).
+
+This OAuth configuration for API testing is independent of the configuration required for user access to the developer portal. However, the identity provider and user could be the same. For example, an intranet application could require user access to the developer portal using SSO with their corporate identity, and that same corporate identity could obtain a token, through the test console, for the backend service being called with the same user context.
+
+## Scenarios
+
+Different authentication and authorization options apply to different scenarios. The following sections explore high-level configurations for three example scenarios. More steps are required to fully secure and configure APIs exposed through API Management to either internal or external audiences. However, the scenarios intentionally focus on the minimum configurations recommended in each case to provide the required authentication and authorization.
+
+### Scenario 1 - Intranet API and applications
+
+* An API Management contributor and backend API developer wants to publish an API that is secured by OAuth 2.0.
+* The API will be consumed by desktop applications whose users sign in using SSO through Azure AD.
+* The desktop application developers also need to discover and test the APIs via the API Management developer portal.
+
+Key configurations:
++
+|Configuration |Reference |
+|||
+| Authorize developer users of the API Management developer portal using their corporate identities and Azure AD. | [Authorize developer accounts by using Azure Active Directory in Azure API Management](api-management-howto-aad.md) |
+|Set up the test console in the developer portal to obtain a valid OAuth 2.0 token for the desktop app developers to exercise the backend API. <br/><br/>The same configuration can be used for the test console in the Azure portal, which is accessible to the API Management contributors and backend developers. <br/><br/>The token could be used in combination with an API Management subscription key. | [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md)<br/><br/>[Subscriptions in Azure API Management](api-management-subscriptions.md) |
+| Validate the OAuth 2.0 token and claims when an API is called through API Management with an access token. | [Validate JWT policy](api-management-access-restriction-policies.md#ValidateJWT) |
+
+Go a step further with this scenario by moving API Management into the network perimeter and controlling ingress through a reverse proxy. For a reference architecture, see [Protect APIs with Application Gateway and API Management](/azure/architecture/reference-architectures/apis/protect-apis).
+
+### Scenario 2 - External API, partner application
+
+* An API Management contributor and backend API developer wants to undertake a rapid proof-of-concept to expose a legacy API through Azure API Management. The API through API Management will be externally (internet) facing.
+* The API uses client certificate authentication and will be consumed by a new public-facing single-page application (SPA) being developed and delivered offshore by a partner.
+* The SPA uses OAuth 2.0 with OpenID Connect (OIDC).
+* Application developers will access the API in a test environment through the developer portal, using a test backend endpoint to accelerate frontend development.
+
+Key configurations:
+
+|Configuration |Reference |
+|||
+| Configure frontend developer access to the developer portal using the default username and password authentication.<br/><br/>Developers can also be invited to the developer portal. | [Configure users of the developer portal to authenticate using usernames and passwords](developer-portal-basic-authentication.md)<br/><br/>[How to manage user accounts in Azure API Management](api-management-howto-create-or-invite-developers.md) |
+| Validate the OAuth 2.0 token and claims when the SPA calls API Management with an access token. In this case, the audience is API Management. | [Validate JWT policy](api-management-access-restriction-policies.md#ValidateJWT) |
+| Set up API Management to use client certificate authentication to the backend. | [Secure backend services using client certificate authentication in Azure API Management](api-management-howto-mutual-certificates.md) |
+
+Go a step further with this scenario by using the [developer portal with Azure AD authorization](api-management-howto-aad.md) and Azure AD [B2B collaboration](../active-directory/external-identities/what-is-b2b.md) to allow the delivery partners to collaborate more closely. Consider delegating access to API Management through RBAC in a development or test environment and enable SSO into the developer portal using their own corporate credentials.
+
+### Scenario 3 - External API, SaaS, open to the public
+
+* An API Management contributor and backend API developer is writing several new APIs that will be available to community developers.
+* The APIs will be publicly available, with full functionality protected behind a paywall and secured using OAuth 2.0. After purchasing a license, the developer will be provided with their own client credentials and subscription key that is valid for production use.
+* External community developers will discover the APIs using the developer portal. Developers will sign up and sign in to the developer portal using their social media accounts.
+* Interested developer portal users with a test subscription key can explore the API functionality in a test context, without needing to purchase a license. The developer portal test console will represent the calling application and generate a default access token to the backend API.
+
+ > [!CAUTION]
+ > Extra care is required when using a client credentials flow with the developer portal test console. See [security considerations](api-management-howto-oauth2.md#security-considerations).
+
+Key configurations:
+
+|Configuration |Reference |
+|||
+| Set up products in Azure API Management to represent the combinations of APIs that are exposed to community developers (see the sketch after this table).<br/><br/> Set up subscriptions to enable developers to consume the APIs. | [Tutorial: Create and publish a product](api-management-howto-add-products.md)<br/><br/>[Subscriptions in Azure API Management](api-management-subscriptions.md) |
+| Configure community developer access to the developer portal using Azure AD B2C. Azure AD B2C can then be configured to work with one or more downstream social media identity providers. | [How to authorize developer accounts by using Azure Active Directory B2C in Azure API Management](api-management-howto-aad-b2c.md) |
+| Set up the test console in the developer portal to obtain a valid OAuth 2.0 token to the backend API using the client credentials flow. | [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md)<br/><br/>Adjust configuration steps shown in this article to use the [client credentials grant flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) instead of the authorization code grant flow. |
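+
+As a small illustration of the first row above, a product could be created with the Azure CLI. This is a sketch: the resource names are placeholders, and the `az apim product create` options shown should be verified against your CLI version.
+
+```azurecli
+# Create a published product that community developers can subscribe to.
+az apim product create \
+  --resource-group myResourceGroup \
+  --service-name myApimInstance \
+  --product-id community-apis \
+  --product-name "Community APIs" \
+  --state published \
+  --subscription-required true
+```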
+
+Go a step further by delegating [user registration or product subscription](api-management-howto-setup-delegation.md) and extending the process with your own logic.
++
+## Next steps
+* Learn more about [authentication and authorization](../active-directory/develop/authentication-vs-authorization.md) in the Microsoft identity platform.
+* Learn how to [mitigate OWASP API security threats](mitigate-owasp-api-threats.md) using API Management.
api-management Api Version Retirement Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/api-version-retirement-sep-2023.md
+
+ Title: Azure API Management - API version retirements (September 2023) | Microsoft Docs
+description: Azure API Management is retiring all API versions prior to 2021-08-01. If you use one of these API versions, you must update your tools, scripts, or programs to use the latest versions.
+
+documentationcenter: ''
+++ Last updated : 07/25/2022+++
+# API version retirements (September 2023)
+
+Azure API Management uses Azure Resource Manager (ARM) to configure your API Management instances. The API version is embedded in your use of templates that describe your infrastructure, tools that are used to configure the service, and programs that you write to manage your Azure API Management services.
+
+On 30 September 2023, all API versions prior to **2021-08-01** will be retired and API calls using those API versions will fail. This means you'll no longer be able to create or manage your API Management services using your existing templates, tools, scripts, and programs until they've been updated. Data operations (such as accessing the APIs or Products configured on Azure API Management) will be unaffected by this update, including after 30 September 2023.
+
+From now through 30 September 2023, you can continue to use the templates, tools, and programs without impact. You can transition to API version 2021-08-01 or later at any point prior to 30 September 2023.
+
+## Is my service affected by this?
+
+While your service itself isn't affected by this change, any tool, script, or program that uses Azure Resource Manager (such as the Azure CLI, Azure PowerShell, the Azure API Management DevOps Resource Kit, or Terraform) is affected by this change. You'll be unable to run those tools successfully unless you update them.
+
+## What is the deadline for the change?
+
+The affected API versions will no longer be valid after 30 September 2023.
+
+After 30 September 2023, if you prefer not to update your tools, scripts, and programs, your services will continue to run but you won't be able to add or remove APIs, change API policy, or otherwise configure your API Management service.
+
+## Required action
+
+* **ARM, Bicep, or Terraform templates** - Update the template to use API version 2021-08-01 or later.
+
+* **Azure CLI** - Run `az version` to check your version. If you're running version 2.38.0 or later, no action is required. Use the `az upgrade` command to upgrade the Azure CLI if necessary, as shown in the sketch after this list. For more information, see [How to update the Azure CLI](/cli/azure/update-azure-cli).
+
+* **Azure PowerShell** - Run `Get-Module -ListAvailable -Name Az` to check your version. If you're running version 8.1.0 or later, no action is required. Use `Update-Module -Name Az -Repository PSGallery` to update the module if necessary. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+
+* **Other tools** - Use the following versions (or later):
+
+ * API Management DevOps Resource Kit: 1.0.0
+ * Terraform azurerm provider: 3.0.0
+
+* **Azure SDKs** - Update the Azure API Management SDKs to the following versions (or later):
+
+ * .NET: 8.0.0
+ * Go: 1.0.0
+ * Python: 3.0.0
+ * JavaScript: 8.0.1
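+
+For the Azure CLI path, a minimal check-and-upgrade sketch follows (`az version` and `az upgrade` are standard commands; `az upgrade` requires Azure CLI 2.11.0 or later):
+
+```azurecli
+# Print the installed Azure CLI version and component versions.
+az version
+
+# Upgrade the Azure CLI in place if it's older than 2.38.0.
+az upgrade
+```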
+
+## More information
+
+* [Azure CLI](/cli/azure/update-azure-cli)
+* [Azure PowerShell](/powershell/azure/install-az-ps)
+* [Azure Resource Manager](/azure/azure-resource-manager/management/overview)
+* [Terraform on Azure](/azure/developer/terraform/)
+* [Bicep](/azure/azure-resource-manager/bicep/overview)
+* [Microsoft Q&A](/answers/topics/azure-api-management.html)
+
+## Next steps
+
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Captcha Endpoint Change Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/captcha-endpoint-change-sep-2025.md
+
+ Title: Azure API Management CAPTCHA endpoint change (September 2025) | Microsoft Docs
+description: Azure API Management is updating the CAPTCHA endpoint. If your service is hosted in an Azure virtual network, you may need to update network settings to continue using the developer portal.
+
+documentationcenter: ''
+++ Last updated : 09/06/2022+++
+# CAPTCHA endpoint update (September 2025)
+
+On 30 September 2025, as part of our continuing work to increase the resiliency of API Management services, we're permanently changing the CAPTCHA endpoint used by the developer portal.
+
+This change will have no effect on the availability of your API Management service. However, you may have to take the steps described below to continue using the developer portal beyond 30 September 2025.
+
+## Is my service affected by this change?
+
+Your service may be impacted by this change if:
+
+* Your API Management service is running inside an Azure virtual network.
+* You use the Azure API Management developer portal.
+
+Follow the steps below to confirm whether your network restricts connectivity to the new CAPTCHA endpoint.
+
+1. Navigate to your API Management service in the Azure portal.
+2. Select **Network** from the menu and go to the **Network status** tab.
+3. Check the status for the **Captcha endpoint**. Your service is impacted if the status isn't **Success**.
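+
+If you prefer scripting the check, you can query the same connectivity status through the REST API with `az rest`. This is a sketch: the placeholders and the `api-version` value are assumptions to verify against your environment.
+
+```azurecli
+# List the connectivity status the API Management instance reports for its
+# dependencies, including the CAPTCHA endpoint. Replace the placeholders.
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>/networkstatus?api-version=2021-08-01"
+```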
+
+## What is the deadline for the change?
+
+The CAPTCHA endpoint will permanently change on 30 September 2025. Complete all required networking changes before then.
+
+After 30 September 2025, if you prefer not to make changes to your network configuration, your services will continue to run but the developer portal sign-up and password reset functionality will no longer work.
+
+## What do I need to do?
+
+Update the virtual network configuration to allow connectivity to the new CAPTCHA hostnames.
+
+| Environment | Endpoint |
+| | |
+| Global Azure cloud | `partner.prod.repmap.microsoft.com` |
+| USGov | `partner.prod.repmap.microsoft.us` |
+
+The new CAPTCHA endpoints provide the same functionality as the previous endpoints.
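+
+For example, if outbound traffic from your API Management subnet is force-tunneled through Azure Firewall, a hedged sketch of allowing the new hostname could look like the following (requires the `azure-firewall` CLI extension; the firewall, resource group, collection, and rule names plus the subnet prefix are hypothetical placeholders):
+
+```azurecli
+# Allow outbound HTTPS from the API Management subnet to the new CAPTCHA endpoint.
+az network firewall application-rule create \
+  --firewall-name myFirewall \
+  --resource-group myResourceGroup \
+  --collection-name apim-dependencies \
+  --name allow-captcha \
+  --priority 200 \
+  --action Allow \
+  --protocols Https=443 \
+  --source-addresses "10.0.1.0/24" \
+  --target-fqdns partner.prod.repmap.microsoft.com
+```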
+
+## Help and support
+
+If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/apim/azureqa/change/captcha-2022). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+1. For **Summary**, type a description of your issue, for example, "CAPTCHA endpoint change".
+1. Under **Issue type**, select **Technical**.
+1. Under **Subscription**, select your subscription.
+1. Under **Service**, select **My services**, then select **API Management Service**.
+1. Under **Resource**, select the Azure resource that you're creating a support request for.
+1. For **Problem type**, select **General configuration**.
+1. For **Problem subtype**, select **VNET integration**.
+
+## More information
+
+* [Virtual Network](../../virtual-network/index.yml)
+* [API Management VNet Reference](../virtual-network-reference.md)
+* [Microsoft Q&A](/answers/topics/azure-api-management.html)
++
+## Next steps
+
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Identity Provider Adal Retirement Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/identity-provider-adal-retirement-sep-2025.md
+
+ Title: Azure API Management identity providers configuration change (September 2025) | Microsoft Docs
+description: Azure API Management is updating the library used for user authentication in the developer portal. If you use Azure AD or Azure AD B2C identity providers, you need to update application settings and identity provider configuration to use the Microsoft Authentication Library (MSAL).
+
+documentationcenter: ''
+++ Last updated : 09/06/2022+++
+# ADAL-based Azure AD or Azure AD B2C identity provider retirement (September 2025)
+
+On 30 September 2025, as part of our continuing work to increase the resiliency of API Management services, we're removing support for the previous library used for user authentication and authorization in the developer portal (the Azure Active Directory Authentication Library, or ADAL). You need to migrate your Azure AD or Azure AD B2C applications, change the identity provider configuration to use the Microsoft Authentication Library (MSAL), and republish your developer portal.
+
+This change will have no effect on the availability of your API Management service. However, you'll have to take the steps described below to configure your API Management service if you wish to continue using Azure AD or Azure AD B2C identity providers beyond 30 September 2025.
+
+## Is my service affected by this change?
+
+Your service is impacted by this change if:
+
+* You've configured an [Azure AD](../api-management-howto-aad.md) or [Azure AD B2C](../api-management-howto-aad-b2c.md) identity provider for user account authentication using ADAL, and you use the provided developer portal.
+
+## What is the deadline for the change?
+
+On 30 September 2025, these identity providers will stop functioning. To avoid disruption of your developer portal, update your Azure AD applications and identity provider configuration in Azure API Management by that date. Your developer portal might also be at a security risk after Microsoft support for ADAL ends in December 2022.
+
+Developer portal sign-in and sign-up with Azure AD or Azure AD B2C will stop working after 30 September 2025 if you don't update your ADAL-based Azure AD or Azure AD B2C identity providers. MSAL-based authentication is more secure, as it relies on the OAuth 2.0 authorization code flow with PKCE and uses an up-to-date software library.
+
+## What do I need to do?
+
+### Update Azure AD and Azure AD B2C applications for MSAL compatibility
+
+[Switch redirect URIs to the single-page application type](../../active-directory/develop/migrate-spa-implicit-to-auth-code.md#switch-redirect-uris-to-spa-platform).
+
+### Update identity provider configuration
+
+1. Go to the [Azure portal](https://portal.azure.com) and navigate to your Azure API Management service.
+2. Select **Identities** in the menu.
+3. Select **Azure Active Directory** or **Azure Active Directory B2C** from the list.
+4. Select **MSAL** in the **Client library** dropdown.
+5. Select **Update**.
+6. [Republish your developer portal](../api-management-howto-developer-portal-customize.md#publish-from-the-azure-portal).
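+
+If you want to confirm the change from a script, the identity provider configuration can also be read through the REST API. A sketch with `az rest` follows (the placeholders are yours to fill in, and the property that reflects the client library setting is an assumption to verify against the current REST reference):
+
+```azurecli
+# Inspect the Azure AD identity provider configured on the service.
+# Use 'aadB2C' instead of 'aad' for an Azure AD B2C identity provider.
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>/identityProviders/aad?api-version=2022-08-01"
+```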
++
+## Help and support
+
+If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/apim/azureqa/change/msal-2022). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+1. For **Summary**, type a description of your issue, for example, "ADAL retirement".
+1. Under **Issue type**, select **Technical**.
+1. Under **Subscription**, select your subscription.
+1. Under **Service**, select **My services**, then select **API Management Service**.
+1. Under **Resource**, select the Azure resource that you're creating a support request for.
+1. For **Problem type**, select **Authentication and Security**.
+1. For **Problem subtype**, select **Azure Active Directory Authentication** or **Azure Active Directory B2C Authentication**.
++
+## More information
+
+* [Authenticate users with Azure AD](../api-management-howto-aad.md)
+* [Authenticate users with Azure AD B2C](../api-management-howto-aad-b2c.md)
+* [Microsoft Q&A](/answers/topics/azure-api-management.html)
+
+## Next steps
+
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
Previously updated : 02/07/2022 Last updated : 09/07/2022
The following table lists all the upcoming breaking changes and feature retirements.
| Change Title | Effective Date |
|:-|:|
-| [Resource Provider Source IP Address Update][bc1] | March 31, 2023 |
+| [Resource provider source IP address updates][bc1] | March 31, 2023 |
+| [Resource provider source IP address updates][rp2023] | September 30, 2023 |
+| [API version retirements][api2023] | September 30, 2023 |
+| [Deprecated (legacy) portal retirement][devportal2023] | October 2023 |
+| [Self-hosted gateway v0/v1 retirement][shgwv0v1] | October 1, 2023 |
+| [stv1 platform retirement][stv12024] | August 31, 2024 |
+| [ADAL-based Azure AD or Azure AD B2C identity provider retirement][msal2025] | September 30, 2025 |
+| [CAPTCHA endpoint update][captcha2025] | September 30, 2025 |
<!-- Links -->
-[bc1]: ./rp-source-ip-address-change-mar2023.md
+[bc1]: ./rp-source-ip-address-change-mar-2023.md
+[rp2023]: ./rp-source-ip-address-change-sep-2023.md
+[api2023]: ./api-version-retirement-sep-2023.md
+[devportal2023]: ../api-management-customize-styles.md
+[shgwv0v1]: ./self-hosted-gateway-v0-v1-retirement-oct-2023.md
+[stv12024]: ./stv1-platform-retirement-august-2024.md
+[msal2025]: ./identity-provider-adal-retirement-sep-2025.md
+[captcha2025]: ./captcha-endpoint-change-sep-2025.md
api-management Rp Source Ip Address Change Mar 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-mar-2023.md
+
+ Title: Azure API Management IP address change (March 2023) | Microsoft Docs
+description: Azure API Management is updating the source IP address of the resource provider in certain regions. If your service is hosted in a Microsoft Azure Virtual Network, you may need to update network settings to continue managing your service.
+
+documentationcenter: ''
+++ Last updated : 02/07/2022+++
+# Resource provider source IP address updates (March 2023)
+
+On 31 March 2023, as part of our continuing work to increase the resiliency of API Management services, we're making the resource providers for Azure API Management zone redundant in each region. The IP address that the resource provider uses to communicate with your service will change in seven regions:
+
+| Region | Old IP Address | New IP Address |
+|:-|:--:|:--:|
+| Canada Central | 52.139.20.34 | 20.48.201.76 |
+| Brazil South | 191.233.24.179 | 191.238.73.14 |
+| Germany West Central | 51.116.96.0 | 20.52.94.112 |
+| South Africa North | 102.133.0.79 | 102.37.166.220 |
+| Korea Central | 40.82.157.167 | 20.194.74.240 |
+| Central India | 13.71.49.1 | 20.192.45.112 |
+| South Central US | 20.188.77.119 | 20.97.32.190 |
+
+This change will have *no* effect on the availability of your API Management service. However, you **may** have to take the steps described below to continue configuring your API Management service beyond 31 March 2023.
+
+## Is my service affected by this change?
+
+Your service is impacted by this change if:
+
+* The API Management service is in one of the seven regions listed in the table above.
+* The API Management service is running inside an Azure virtual network.
+* The Network Security Group (NSG) or User-defined Routes (UDRs) for the virtual network are configured with explicit source IP addresses.
+
+## What is the deadline for the change?
+
+The source IP addresses for the affected regions will be changed on 31 March 2023. Complete all required networking changes before then.
+
+After 31 March 2023, if you prefer not to make changes to your IP addresses, your services will continue to run, but you won't be able to add or remove APIs, change API policy, or otherwise configure your API Management service.
+
+## Can I avoid this sort of change in the future?
+
+Yes, you can.
+
+API Management publishes a _service tag_ that you can use to configure the NSG for your virtual network. The service tag includes information about the source IP addresses that API Management uses to manage your service. For more information on this topic, read [Configure NSG Rules] in the API Management documentation.
+
+## What do I need to do?
+
+Update the NSG security rules that allow the API Management resource provider to communicate with your API Management instance. For detailed instructions on how to manage an NSG, review [Create, change, or delete a network security group] in the Azure Virtual Network documentation.
+
+1. Go to the [Azure portal](https://portal.azure.com) to view your NSGs. Search for and select **Network security groups**.
+2. Select the name of the NSG associated with the virtual network hosting your API Management service.
+3. In the menu bar, choose **Inbound security rules**.
+4. The inbound security rules should already have an entry that mentions a Source address matching the _Old IP Address_ from the table above. If it doesn't, you're not using explicit source IP address filtering, and can skip this update.
+5. Select **Add**.
+6. Fill in the form with the following information:
+
+ 1. Source: **Service Tag**
+ 2. Source Service Tag: **ApiManagement**
+ 3. Source port ranges: __*__
+ 4. Destination: **VirtualNetwork**
+ 5. Destination port ranges: **3443**
+ 6. Protocol: **TCP**
+ 7. Action: **Allow**
+ 8. Priority: Pick a suitable priority to place the new rule next to the existing rule.
+
+ The Name and Description fields can be set to anything you wish. All other fields should be left blank.
+
+7. Select **OK**.
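+
+If you manage NSGs from scripts instead, a minimal Azure CLI sketch of the equivalent rule follows (the NSG name, rule name, and priority are placeholders; `ApiManagement` and `VirtualNetwork` are the service tags described in the steps above):
+
+```azurecli
+# Allow the API Management resource provider (via its service tag) to reach
+# the management endpoint of instances in the virtual network on port 3443.
+az network nsg rule create \
+  --resource-group myResourceGroup \
+  --nsg-name myNsg \
+  --name Allow-ApiManagement-Inbound \
+  --priority 2000 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-address-prefixes ApiManagement \
+  --source-port-ranges '*' \
+  --destination-address-prefixes VirtualNetwork \
+  --destination-port-ranges 3443
+```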
+
+In addition, you may have to adjust the network routing for the virtual network to accommodate the new control plane IP addresses. If you've configured a default route (`0.0.0.0/0`) forcing all traffic from the API Management subnet to flow through a firewall instead of directly to the Internet, then additional configuration is required.
+
+If you configured user-defined routes (UDRs) for control plane IP addresses, the new IP addresses must be routed the same way. For more details on the changes necessary to handle network routing of management requests, review [Force tunneling traffic] documentation.
+
+Finally, check for any other systems that may impact the communication from the API Management resource provider to your API Management service subnet. For more information about virtual network configuration, review the [Virtual Network] documentation.
+
+## More information
+
+* [Virtual Network](../../virtual-network/index.yml)
+* [API Management VNet Reference](../virtual-network-reference.md)
+* [Microsoft Q&A](/answers/topics/azure-api-management.html)
+
+<!-- Links -->
+[Configure NSG Rules]: ../api-management-using-with-internal-vnet.md#configure-nsg-rules
+[Virtual Network]: ../../virtual-network/index.yml
+[Force tunneling traffic]: ../api-management-using-with-internal-vnet.md#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance
+[Create, change, or delete a network security group]: ../../virtual-network/manage-network-security-group.md
api-management Rp Source Ip Address Change Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-sep-2023.md
+
+ Title: Azure API Management IP address change (September 2023) | Microsoft Docs
+description: Azure API Management is updating the source IP address of the resource provider in Switzerland North. If your service is hosted in a Microsoft Azure virtual network, you may need to update network settings to continue managing your service.
+
+documentationcenter: ''
+++ Last updated : 07/25/2022+++
+# Resource provider source IP address updates (September 2023)
+
+On 30 September 2023, as part of our continuing work to increase the resiliency of API Management services, we're making the resource providers for Azure API Management zone redundant in each region. The IP address that the resource provider uses to communicate with your service will change if it's located in Switzerland North:
+
+* Old IP address: 51.107.0.91
+* New IP address: 51.107.246.176
+
+This change will have *no* effect on the availability of your API Management service. However, you **may** have to take steps described below to configure your API Management service beyond 30 September 2023.
+
+## Is my service affected by this change?
+
+Your service is impacted by this change if:
+
+* The API Management service is in the Switzerland North region.
+* The API Management service is running inside an Azure virtual network.
+* The network security group (NSG) or user-defined routes (UDRs) for the virtual network are configured with explicit source IP addresses.
+
+## What is the deadline for the change?
+
+The source IP addresses for the affected region will be changed on 30 September 2023. Complete all required networking changes before then.
+
+After 30 September 2023, if you prefer not to make changes to your IP addresses, your services will continue to run but you won't be able to add or remove APIs, or change API policy, or otherwise configure your API Management service.
+
+## Can I avoid this sort of change in the future?
+
+Yes, you can.
+
+API Management publishes a _service tag_ that you can use to configure the NSG for your virtual network. The service tag includes information about the source IP addresses that API Management uses to manage your service. For more information, read [Configure NSG Rules] in the API Management documentation.
+
+## What do I need to do?
+
+Update the NSG security rules that allow the API Management resource provider to communicate with your API Management instance. For detailed instructions on how to manage an NSG, review [Create, change, or delete a network security group] in the Azure virtual network documentation.
+
+1. Go to the [Azure portal](https://portal.azure.com) to view your NSGs. Search for and select **Network security groups**.
+2. Select the name of the NSG associated with the virtual network hosting your API Management service.
+3. In the menu bar, choose **Inbound security rules**.
+4. The inbound security rules should already have an entry that mentions a source address matching the _old IP address_ listed above. If it doesn't, you're not using explicit source IP address filtering, and can skip this update.
+5. Select **Add**.
+6. Fill in the form with the following information:
+
+ 1. Source: **Service Tag**
+ 2. Source Service Tag: **ApiManagement**
+ 3. Source port ranges: __*__
+ 4. Destination: **VirtualNetwork**
+ 5. Destination port ranges: **3443**
+ 6. Protocol: **TCP**
+ 7. Action: **Allow**
+ 8. Priority: Pick a suitable priority to place the new rule next to the existing rule.
+
+ The Name and Description fields can be set to anything you wish. All other fields should be left blank.
+
+7. Select **OK**.
+
+In addition, you may have to adjust the network routing for the virtual network to accommodate the new control plane IP addresses. If you've configured a default route (`0.0.0.0/0`) forcing all traffic from the API Management subnet to flow through a firewall instead of directly to the Internet, then more configuration is required.
+
+If you configured user-defined routes (UDRs) for control plane IP addresses, the new IP addresses must be routed the same way. For more details on the changes necessary to handle network routing of management requests, review [Force tunneling traffic] documentation.
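+
+As an illustration, if you maintain the UDRs with the Azure CLI, a hedged sketch of adding a route for the new control plane IP address might look like this (the route table and route names are placeholders; sending management traffic directly to the internet is the pattern described in the force tunneling guidance):
+
+```azurecli
+# Route traffic destined for the new resource provider IP address directly
+# to the internet instead of through the firewall.
+az network route-table route create \
+  --resource-group myResourceGroup \
+  --route-table-name myRouteTable \
+  --name ApimControlPlane \
+  --address-prefix 51.107.246.176/32 \
+  --next-hop-type Internet
+```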
+
+Finally, check for any other systems that may impact the communication from the API Management resource provider to your API Management service subnet. For more information about virtual network configuration, review the [Virtual Network] documentation.
+
+## More information
+
+* [Virtual network](../../virtual-network/index.yml)
+* [API Management VNet Reference](../virtual-network-reference.md)
+* [Microsoft Q&A](/answers/topics/azure-api-management.html)
+
+## Next steps
+
+See all [upcoming breaking changes and feature retirements](overview.md).
+
+<!-- Links -->
+[Configure NSG Rules]: ../api-management-using-with-internal-vnet.md#configure-nsg-rules
+[Virtual Network]: ../../virtual-network/index.yml
+[Force tunneling traffic]: ../api-management-using-with-internal-vnet.md#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance
+[Create, change, or delete a network security group]: ../../virtual-network/manage-network-security-group.md
api-management Self Hosted Gateway V0 V1 Retirement Oct 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md
+
+ Title: Azure API Management - Self-hosted gateway v0/v1 retirement (October 2023) | Microsoft Docs
+description: Azure API Management is retiring the v0 and v1 versions of the self-hosted gateway container image, effective 1 October 2023. If you've deployed one of these versions, you must migrate to the v2 version of the self-hosted gateway.
+
+documentationcenter: ''
+++ Last updated : 09/06/2022+++
+# Support ending for Azure API Management self-hosted gateway version 0 and version 1 container images (October 2023)
+
+The [self-hosted gateway](../self-hosted-gateway-overview.md) is an optional, containerized version of the default managed gateway included in every API Management service. On 1 October 2023 we're removing support for the v0 and v1 versions of the self-hosted gateway container image. If you've deployed the self-hosted gateway using either of these container images, you need to take the steps below to continue using the self-hosted gateway by migrating to the v2 container image and configuration API.
+
+## Is my service affected by this?
+
+Your service is affected by this change if:
+
+* Your service is in the Developer or Premium service tier.
+* You've deployed a self-hosted gateway using version v0 or v1 of the self-hosted gateway [container image](../self-hosted-gateway-migration-guide.md#using-the-new-configuration-api).
+
+## What is the deadline for the change?
+
+**Support for the v1 configuration API and for the v0 and v1 container images of the self-hosted gateway will retire on 1 October 2023.**
+
+Version 2 of the configuration API and container image is already available, and includes the following improvements:
+
+* A new configuration API that removes the dependency on Azure Storage, unless you're using request tracing or quotas.
+
+* New container images, and new container image tags to let you choose the best way to try our gateway and deploy it in production.
+
+If you're using version 0 or version 1 of the self-hosted gateway, you'll need to manually migrate your container images to the newest v2 image and switch to the v2 configuration API.
+
+## What do I need to do?
+
+Migrate all your existing deployments of the self-hosted gateway using version 0 or version 1 to the newest v2 container image and v2 configuration API by 1 October 2023.
+
+Follow the [migration guide](../self-hosted-gateway-migration-guide.md) for a successful migration.
+
+## Help and support
+
+If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/apim/retirement/shgwv0v1). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+1. For **Summary**, type a description of your issue, for example, "self-hosted gateway v0/v1 retirement".
+1. Under **Issue type**, select **Technical**.
+1. Under **Subscription**, select your subscription.
+1. Under **Service**, select **My services**, then select **API Management Service**.
+1. Under **Resource**, select the Azure resource that you're creating a support request for.
+1. For **Problem type**, select **Self-hosted gateway**.
+1. For **Problem subtype**, select **Administration, Configuration and Deployment**.
+
+## Next steps
+
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Stv1 Platform Retirement August 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-august-2024.md
+
+ Title: Azure API Management - stv1 platform retirement (August 2024) | Microsoft Docs
+description: Azure API Management is retiring the stv1 compute platform effective 31 August 2024. If your API Management instance is hosted on the stv1 platform, you must migrate to the stv2 platform.
+
+documentationcenter: ''
+++ Last updated : 08/26/2022+++
+# stv1 platform retirement (August 2024)
+
+As a cloud platform-as-a-service (PaaS), Azure API Management abstracts many details of the infrastructure used to host and run your service. **The infrastructure associated with the API Management `stv1` compute platform version will be retired effective 31 August 2024.** A more current compute platform version (`stv2`) is already available, and provides enhanced service capabilities.
+
+The following table summarizes the compute platforms currently used for instances in the different API Management service tiers.
+
+| Version | Description | Architecture | Tiers |
+| -| -| -- | - |
+| `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones, private endpoints | Developer, Basic, Standard, Premium<sup>1</sup> |
+| `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium |
+| `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption |
+
+To take advantage of upcoming features, we recommend that customers migrate their Azure API Management instances from the `stv1` compute platform to the `stv2` compute platform. The `stv2` compute platform comes with additional features and improvements, such as support for Azure Private Link and other networking features.
+
+New instances created in service tiers other than the Consumption tier are mostly hosted on the `stv2` platform already. Existing instances on the `stv1` compute platform will continue to work normally until the retirement date, but those instances won't receive the latest features available to the `stv2` platform. Support for `stv1` instances will be retired by 31 August 2024.
+
+## Is my service affected by this?
+
+If the value of the `platformVersion` property of your service is `stv1`, it is hosted on the `stv1` platform. See [How do I know which platform hosts my API Management instance?](../compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance)
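+
+For example, a quick way to read the property with the Azure CLI (a sketch; assumes `az apim show` surfaces the `platformVersion` property in your CLI version):
+
+```azurecli
+# Print the compute platform version hosting the instance (stv1, stv2, or mtv1).
+az apim show --name <apim-name> --resource-group <resource-group> --query platformVersion --output tsv
+```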
+
+## What is the deadline for the change?
+
+Support for API Management instances hosted on the `stv1` platform will be retired by 31 August 2024.
+
+After 31 August 2024, any instance hosted on the `stv1` platform won't be supported, and could experience system outages.
+
+## What do I need to do?
+
+**Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform by 31 August 2024.**
+
+If you have existing instances hosted on the `stv1` platform, you can follow our [migration guide](../compute-infrastructure.md#how-do-i-migrate-to-the-stv2-platform) which provides all the details to ensure a successful migration.
+
+## Help and support
+
+If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/apim/retirement/stv1). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+1. For **Summary**, type a description of your issue, for example, "stv1 retirement".
+1. Under **Issue type**, select **Technical**.
+1. Under **Subscription**, select your subscription.
+1. Under **Service**, select **My services**, then select **API Management Service**.
+1. Under **Resource**, select the Azure resource that you're creating a support request for.
+1. For **Problem type**, select **Administration and Management**.
+1. For **Problem subtype**, select **Upgrade, Scale or SKU Changes**.
+
+## Next steps
+
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
The following table summarizes the compute platforms currently used for instance
| `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium | | `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption | - <sup>1</sup> Newly created instances in these tiers, created using the Azure portal or specifying API version 2021-01-01-preview or later. Includes some existing instances in Developer and Premium tiers configured with virtual networks or availability zones. > [!NOTE]
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
[WordPress](https://www.wordpress.org) is an open source content management system (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure
-In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). It uses the **Basic** tier and [**incurs a cost**](https://azure.microsoft.com/pricing/details/app-service/linux/) for your Azure subscription. The WordPress installation comes with pre-installed plugins for performance improvements, [W3TC](https://wordpress.org/plugins/w3-total-cache/) for caching and [Smush](https://wordpress.org/plugins/wp-smushit/) for image compression.
+In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). This quickstart uses the **Basic** tier and [**incurs a cost**](https://azure.microsoft.com/pricing/details/app-service/linux/) for your Azure subscription. The WordPress installation comes with pre-installed plugins for performance improvements, [W3TC](https://wordpress.org/plugins/w3-total-cache/) for caching and [Smush](https://wordpress.org/plugins/wp-smushit/) for image compression.
To complete this quickstart, you need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs).
azure-app-configuration Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md
Last updated 07/09/2020
# Resiliency and disaster recovery
+> [!IMPORTANT]
+> Azure App Configuration recently added [geo-replication](./concept-geo-replication.md) support. You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also use the App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). The geo-replication feature is currently in preview. When it becomes generally available, it will be the recommended solution for high availability.
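+
+As a minimal sketch, a replica can be created with the Azure CLI once the feature is available to you (the store and replica names are placeholders; the `az appconfig replica` command group ships with the preview, so verify it exists in your CLI version):
+
+```azurecli
+# Create a replica of an existing App Configuration store in a second region.
+az appconfig replica create \
+  --store-name myAppConfigStore \
+  --name myReplicaEastUs \
+  --location eastus
+```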
+ Currently, Azure App Configuration is a regional service. Each configuration store is created in a particular Azure region. A region-wide outage affects all stores in that region. App Configuration doesn't offer automatic failover to another region. This article provides general guidance on how you can use multiple configuration stores across Azure regions to increase the geo-resiliency of your application. ## High-availability architecture
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Title: "Upgrade Azure Arc-enabled Kubernetes agents" Previously updated : 08/02/2022 Last updated : 09/09/2022 description: "Control agent upgrades for Azure Arc-enabled Kubernetes" keywords: "Kubernetes, Arc, Azure, K8s, containers, agent, update, auto upgrade"
keywords: "Kubernetes, Arc, Azure, K8s, containers, agent, update, auto upgrade"
Azure Arc-enabled Kubernetes provides both automatic and manual upgrade capabilities for its [agents](conceptual-agent-overview.md). If you disable automatic upgrade and instead rely on manual upgrade, a [version support policy](#version-support-policy) applies for Arc agents and the underlying Kubernetes clusters.
-## Toggle automatic upgrade on or off when connecting cluster to Azure Arc
+## Toggle automatic upgrade on or off when connecting a cluster to Azure Arc
By default, Azure Arc-enabled Kubernetes provides its agents with out-of-the-box automatic upgrade capabilities.
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest --dis
> [!TIP] > If you plan to disable automatic upgrade, be aware of the [version support policy](#version-support-policy) for Azure Arc-enabled Kubernetes.
-## Toggle automatic upgrade on or off after connecting cluster to Azure Arc
+## Toggle automatic upgrade on or off after connecting a cluster to Azure Arc
After you connect a cluster to Azure Arc, you can change the automatic upgrade selection by using the `az connectedk8s update` command and setting `--auto-upgrade` to either true or false.
The following command turns automatic upgrade off for a connected cluster:
az connectedk8s update --name AzureArcTest1 --resource-group AzureArcTest --auto-upgrade false ```
+## Check if automatic upgrade is enabled on a cluster
+
+To check whether a cluster is enabled for automatic upgrade, run the following kubectl command. The automatic upgrade configuration isn't available in the public API for Azure Arc-enabled Kubernetes.
+
+```console
+kubectl -n azure-arc get cm azure-clusterconfig -o jsonpath="{.data['AZURE_ARC_AUTOUPDATE']}"
+```
+ ## Manually upgrade agents If you've disabled automatic upgrade, you can manually initiate upgrades for the agents by using the `az connectedk8s upgrade` command. When doing so, you must specify the version to which you want to upgrade.
The following command upgrades the agent to version 1.1.0:
az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.1.0 ```
+## Check agent version
+
+To list connected clusters and reported agent version, use the following command:
+
+```azurecli
+az connectedk8s list --query '[].{name:name,rg:resourceGroup,id:id,version:agentVersion}'
+```
+ ## Version support policy When you [create support requests](../../azure-portal/supportability/how-to-create-azure-support-request.md) for Azure Arc-enabled Kubernetes, the following version support policy applies:
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 08/29/2022 Last updated : 09/09/2022
The table below lists the URLs that must be available in order to install and us
|`management.azure.com`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured | |`*.his.arc.azure.com`|Metadata and hybrid identity services|Always| Private | |`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private |
-|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Private |
+|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Public |
|`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public | |`*.servicebus.windows.net`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|Public| |`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
+
+ Title: How to upgrade the Redis version of Azure Cache for Redis
+description: Learn how to upgrade the version of Azure Cache for Redis
+++++ Last updated : 09/08/2022++++
+# How to upgrade an existing Redis 4 cache to Redis 6
+
+Azure Cache for Redis supports upgrading your cache from Redis 4 to Redis 6. Upgrading is permanent, and it might cause a brief connection issue similar to regular monthly maintenance. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading.
+
+For details on how to export your cache data, see [Import and export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+
+### Limitations
+
+- Upgrading a Basic tier cache results in brief unavailability and data loss.
+- Upgrading on geo-replicated cache isn't supported. You must manually unlink the cache instances before upgrading.
+- Upgrading a cache with a dependency on Cloud Services isn't supported. You should migrate your cache instance to a virtual machine scale set before upgrading. For more information about Cloud Services hosted caches, see [Caches with a dependency on Cloud Services (classic)](/azure/azure-cache-for-redis/cache-faq).
+
+### Check the version of a cache
+
+Before you upgrade, check the Redis version of a cache by selecting **Properties** from the Resource menu of the Azure Cache for Redis. We recommend you use Redis 6.
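+
+You can also read the version from a script. A minimal Azure CLI sketch follows (the cache and resource group names are placeholders):
+
+```azurecli
+# Print the Redis version of an existing cache.
+az redis show --name <cache-name> --resource-group <resource-group> --query redisVersion --output tsv
+```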
++
+## Upgrade using the Azure portal
+
+1. In the Azure portal, select the Azure Cache for Redis instance that you want to upgrade from Redis 4 to Redis 6.
+
+1. On the left side of the screen, select **Advanced settings**.
+
+1. If your cache instance is eligible to be upgraded, you should see the following blue banner. If you want to proceed, select the text in the banner.
+
+ :::image type="content" source="media/cache-how-to-upgrade/blue-banner-upgrade-cache.png" alt-text="Screenshot informing you that you can upgrade your cache to Redis 6 with additional features. Upgrading your cache instance cannot be reversed.":::
+
+1. A dialog box notifies you that upgrading is permanent and might cause a brief connection blip. Select **Yes** if you would like to upgrade your cache instance.
+
+ :::image type="content" source="media/cache-how-to-upgrade/dialog-version-upgrade.png" alt-text="Screenshot showing a dialog with more information about upgrading your cache with Yes selected.":::
+
+1. To check on the status of the upgrade, navigate to **Overview**.
+
+ :::image type="content" source="media/cache-how-to-upgrade/upgrade-status.png" alt-text="Screenshot showing Overview in the Resource menu. Status shows cache is being upgraded.":::
+
+## Upgrade using Azure CLI
+
+To upgrade a cache from 4 to 6 using the Azure CLI, use the following command:
+
+```azurecli-interactive
+az redis update --name cacheName --resource-group resourceGroupName --set redisVersion=6
+```
+
+## Upgrade using PowerShell
+
+To upgrade a cache from 4 to 6 using PowerShell, use the following command:
+
+```powershell-interactive
+Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -RedisVersion "6"
+```
+
+## Next steps
+
+- To learn more about Azure Cache for Redis versions, see [Set Redis version for Azure Cache for Redis](cache-how-to-version.md)
+- To learn more about Redis 6 features, see [Diving Into Redis 6.0 by Redis](https://redis.com/blog/diving-into-redis-6/)
+- To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-cache-for-redis Cache How To Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-version.md
Title: Set Redis version for Azure Cache for Redis
-description: Learn how to configure Redis version
+ Title: Set the Redis version of Azure Cache for Redis
+description: Learn how to configure the version of Azure Cache for Redis
+ - Previously updated : 06/03/2022+ Last updated : 09/08/2022
Last updated 06/03/2022
In this article, you'll learn how to configure the Redis software version to be used with your cache instance. Azure Cache for Redis offers the latest major version of Redis and at least one previous version. It will update these versions regularly as newer Redis software is released. You can choose between the two available versions. Keep in mind that your cache will be upgraded to the next version automatically if the version it's using currently is no longer supported. > [!NOTE]
-> At this time, Redis 6 does not support ACL, and geo-replication between a Redis 4 and 6 cache.
+> At this time, Redis 6 does not support Access Control Lists (ACL) or geo-replication between a Redis 4 cache and Redis 6 cache.
> ## Prerequisites
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-## Create a cache using the Azure portal
+## How to create a cache using the Azure portal
To create a cache, follow these steps:
To create a cache, follow these steps:
| Setting | Suggested value | Description | | | - | -- |
- | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
- | **Resource group** | Select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
+ | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
+ | **Resource group** | Select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
+ | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
| **Location** | Select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. | | **Cache type** | Select a [cache tier and size](https://azure.microsoft.com/pricing/details/cache/). | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |
To create a cache, follow these steps:
1. Select **Create**.
- It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+ It takes a while for the cache to be created. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
## Create a cache using Azure PowerShell
+To create a cache using PowerShell:
+ ```azurepowershell New-AzRedisCache -ResourceGroupName "ResourceGroupName" -Name "CacheName" -Location "West US 2" -Size 250MB -Sku "Standard" -RedisVersion "6" ```
For more information on how to manage Azure Cache for Redis with Azure PowerShel
## Create a cache using Azure CLI
+To create a cache using Azure CLI:
+ ```azurecli-interactive az redis create --resource-group resourceGroupName --name cacheName --location westus2 --sku Standard --vm-size c0 --redisVersion="6" ```+ For more information on how to manage Azure Cache for Redis with Azure CLI, see [here](cli-samples.md)
+<!--
## Upgrade an existing Redis 4 cache to Redis 6 Azure Cache for Redis supports upgrading your Redis cache server major version from Redis 4 to Redis 6. Upgrading is permanent and it might cause a brief connection blip. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading. For more information, see [here](cache-how-to-import-export-data.md) for details on how to export.
To upgrade a cache from 4 to 6 using PowerShell, use the following command:
```powershell-interactive Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -RedisVersion "6" ```
+ -->
+
+## How to check the version of a cache
+
+You can check the Redis version of a cache by selecting **Properties** from the Resource menu of the Azure Cache for Redis.
+ ## FAQ ### What features aren't supported with Redis 6?
-At this time, Redis 6 doesn't support ACL, and geo-replication between a Redis 4 and 6 cache.
+At this time, Redis 6 doesn't support Access Control Lists (ACL). Geo-replication between a Redis 4 cache and a Redis 6 cache is also not supported.
### Can I change the version of my cache after it's created?
-You can upgrade your existing Redis 4 caches to Redis 6, see [here](#upgrade-an-existing-redis-4-cache-to-redis-6) for details. Upgrading your cache instance is permanent and you cannot downgrade your Redis 6 caches to Redis 4 caches.
+You can upgrade your existing Redis 4 caches to Redis 6. Upgrading your cache instance is permanent and you can't downgrade your Redis 6 caches to Redis 4 caches.
+
+For more information, see [How to upgrade an existing Redis 4 cache to Redis 6](cache-how-to-upgrade.md).
## Next Steps
+- To learn more about upgrading your cache, see [How to upgrade an existing Redis 4 cache to Redis 6](cache-how-to-upgrade.md)
- To learn more about Redis 6 features, see [Diving Into Redis 6.0 by Redis](https://redis.com/blog/diving-into-redis-6/) - To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
Title: Azure Functions error handling and retry guidance
description: Learn to handle errors and retry events in Azure Functions with links to specific binding errors, including information on retry policies. Previously updated : 06/09/2022 Last updated : 08/03/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
Handling errors in Azure Functions is important to avoid lost data, missed event
This article describes general strategies for error handling and the available retry strategies. > [!IMPORTANT]
-> The retry policy support in the runtime for triggers other than Timer and Event Hubs is being removed after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be removed in October 2022.
+> The retry policy support in the runtime for triggers other than Timer, Kafka, and Event Hubs is being removed after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer, Kafka, and Event Hubs will be removed in October 2022.
## Handling errors
Capturing and logging errors is critical to monitoring the health of your applic
### Plan your retry strategy
-Several Functions bindings extensions provide built-in support for retries. In addition, the runtime lets you define retry policies for Timer and Event Hubs triggered functions. To learn more, see [Retries](#retries). For triggers that don't provide retry behaviors, you may want to implement your own retry scheme.
+Several Functions bindings extensions provide built-in support for retries. In addition, the runtime lets you define retry policies for Timer, Kafka, and Event Hubs triggered functions. To learn more, see [Retries](#retries). For triggers that don't provide retry behaviors, you may want to implement your own retry scheme.
### Design for idempotency
There are two kinds of retries available for your functions: built-in retry beha
| RabbitMQ | [Binding extension](functions-bindings-rabbitmq-trigger.md#dead-letter-queues) | [Dead letter queue](https://www.rabbitmq.com/dlx.html) | | Service Bus | [Binding extension](../service-bus-messaging/service-bus-dead-letter-queues.md) | [Dead letter queue](../service-bus-messaging/service-bus-dead-letter-queues.md#maximum-delivery-count) | |Timer | [Retry policies](#retry-policies) | Function-level |
+|Kafka | [Retry policies](#retry-policies) | Function-level |
### Retry policies
-Starting with version 3.x of the Azure Functions runtime, you can define a retry policies for Timer and Event Hubs triggers that are enforced by the Functions runtime. The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.
+Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer, Kafka, and Event Hubs triggers that are enforced by the Functions runtime. The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.
-A retry policy is evaluated when a Timer or Event Hubs triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. Event Hubs checkpoints won't be written until the retry policy for the execution has completed. Because of this behavior, progress on the specific partition is paused until the current batch has completed.
+A retry policy is evaluated when a Timer, Kafka, or Event Hubs triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. Event Hubs checkpoints won't be written until the retry policy for the execution has completed. Because of this behavior, progress on the specific partition is paused until the current batch has completed.
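For languages that define bindings in *function.json*, the preview retry policy is declared next to the trigger binding. The following is a minimal sketch for a timer trigger with a fixed-delay policy; the schedule and retry values are illustrative only, not recommended settings.
```bash
# Sketch only: writes a function.json with a fixed-delay retry policy.
# The cron schedule and retry values are placeholder choices.
cat <<'EOF' > function.json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "direction": "in",
      "name": "myTimer",
      "schedule": "0 */5 * * * *"
    }
  ],
  "retry": {
    "strategy": "fixedDelay",
    "maxRetryCount": 5,
    "delayInterval": "00:00:10"
  }
}
EOF
```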
#### Retry strategies
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
def main(mytimer: func.TimerRequest) -> None:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) attribute to define the function.
+The [in-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions), whereas the [isolated process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function.
C# script instead uses a function.json configuration file.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 09/08/2022 Last updated : 09/09/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Kubernetes Service (AKS)](../../aks/index.yml) | &#x2705; | &#x2705; | | [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | &#x2705; | &#x2705; | | [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; |
-| [Azure Metrics Advisor](https://azure.microsoft.com/services/metrics-advisor/) | &#x2705; | &#x2705; |
| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | | [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; | | [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | | [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; |
| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
-| [Container Instances](../../container-instances/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Container Instances](../../container-instances/index.yml) | &#x2705; | &#x2705; |
| [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; | | [Content Delivery Network (CDN)](../../cdn/index.yml) | &#x2705; | &#x2705; | | [Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dedicated HSM](../../dedicated-hsm/index.yml) | &#x2705; | &#x2705; | | [DevTest Labs](../../devtest-labs/index.yml) | &#x2705; | &#x2705; | | [DNS](../../dns/index.yml) | &#x2705; | &#x2705; |
-| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; |
| [Dynamics 365 Commerce](/dynamics365/commerce/)| &#x2705; | &#x2705; | | [Dynamics 365 Customer Service](/dynamics365/customer-service/overview)| &#x2705; | &#x2705; | | [Dynamics 365 Field Service](/dynamics365/field-service/overview)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [File Sync](../../storage/file-sync/index.yml) | &#x2705; | &#x2705; | | [Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; | | [Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; |
-| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; |
| [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | | [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; | | [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Lab Services](../../lab-services/index.yml) | &#x2705; | &#x2705; | | [Lighthouse](../../lighthouse/index.yml) | &#x2705; | &#x2705; | | [Load Balancer](../../load-balancer/index.yml) | &#x2705; | &#x2705; |
-| [Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; |
| [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; | | [Media Services](/azure/media-services/) | &#x2705; | &#x2705; |
+| [Metrics Advisor](../../applied-ai-services/metrics-advisor/index.yml) | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | | [Microsoft Azure Attestation](../../attestation/index.yml)| &#x2705; | &#x2705; | | [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Defender for IoT](../../defender-for-iot/index.yml) (formerly Azure Security for IoT) | &#x2705; | &#x2705; | | [Microsoft Graph](/graph/) | &#x2705; | &#x2705; | | [Microsoft Intune](/mem/intune/) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Microsoft Sentinel](../../sentinel/index.yml) | &#x2705; | &#x2705; | | [Microsoft Stream](/stream/) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; | | [Migrate](../../migrate/index.yml) | &#x2705; | &#x2705; | | [Network Watcher](../../network-watcher/index.yml) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | | [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | | [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | | [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; | | [Resource Mover](../../resource-mover/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [SQL Database](/azure/azure-sql/database/sql-database-paas-overview) | &#x2705; | &#x2705; | | [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | &#x2705; | &#x2705; | | [SQL Server Stretch Database](../../sql-server-stretch-database/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Storage: Archive](../../storage/blobs/access-tiers-overview.md) | &#x2705; | &#x2705; | | [Storage: Blobs](../../storage/blobs/index.yml) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Storage: Disks (incl. managed disks)](../../virtual-machines/managed-disks-overview.md) | &#x2705; | &#x2705; | | [Storage: Files](../../storage/files/index.yml) | &#x2705; | &#x2705; | | [Storage: Queues](../../storage/queues/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Virtual Machines](../../virtual-machines/index.yml) (incl. [Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)) | &#x2705; | &#x2705; | | [Virtual Network](../../virtual-network/index.yml) | &#x2705; | &#x2705; | | [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | | [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; | | [Web Apps (App Service)](../../app-service/index.yml) | &#x2705; | &#x2705; | | [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | &#x2705; | &#x2705; |
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[GRS Technology Solutions](https://www.grstechnologysolutions.com)| |[Hanu Software Solutions Inc.](https://www.hanusoftware.com/hanu/#contact)| |[Harmonia Holdings Group LLC](https://www.harmonia.com)|
-|[Harborgrid Inc.](https://www.harborgrid.com)|
+|Harborgrid Inc.|
|[HCL Technologies](https://www.hcltech.com/aerospace-and-defense)| |[HD Dynamics](https://www.hddynamics.com/)| |[Heartland Business Systems LLC](https://www.hbs.net/home)|
azure-monitor Container Insights Deployment Hpa Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-deployment-hpa-metrics.md
Title: Deployment & HPA metrics with Container insights | Microsoft Docs description: This article describes what deployment & HPA (Horizontal pod autoscaler) metrics are collected with Container insights. Previously updated : 08/09/2020- Last updated : 08/29/2022+ # Deployment & HPA metrics with Container insights
azure-monitor Container Insights Enable Aks Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks-policy.md
Title: Enable AKS Monitoring Addon using Azure Policy description: Describes how to enable AKS Monitoring Addon using Azure Custom Policy. Previously updated : 02/04/2021- Last updated : 08/29/2022+ # Enable AKS monitoring addon using Azure Policy
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
The Log Analytics workspace must be created before you deploy the Resource Manag
- For **aksResourceId** and **aksResourceLocation**, use the values on the **AKS Overview** page for the AKS cluster. - For **workspaceResourceId**, use the resource ID of your Log Analytics workspace.
- - For **resourceTagValues**, match the existing tag values specified for the existing Container insights extension DCR of the cluster and the name of the data collection rule, which will be MSCI-\<clusterName\>-\<clusterRegion\> and this resource created in Log Analytics Workspace Resource Group. If this first-time onboarding, you can set the arbitrary tag values.
+ - For **resourceTagValues**, match the tag values specified for the cluster's existing Container insights extension data collection rule (DCR). The DCR is named MSCI-\<clusterName\>-\<clusterRegion\> and is created in the AKS cluster's resource group. For first-time onboarding, you can set arbitrary tag values, as in the deployment sketch after this list.
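As a hedged illustration of how these parameters fit together in a CLI deployment; every name, ID, and file path below is a placeholder:
```bash
# Sketch only: all names, IDs, and the template file name are placeholders.
az deployment group create \
  --resource-group MyAKSClusterResourceGroup \
  --template-file existingClusterOnboarding.json \
  --parameters aksResourceId="/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<clusterName>" \
               aksResourceLocation="eastus" \
               workspaceResourceId="/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>" \
               resourceTagValues='{"env":"prod"}'
```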
**If you don't want to enable [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
Title: Configure Hybrid Kubernetes clusters with Container insights | Microsoft Docs description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environment. Previously updated : 06/30/2020- Last updated : 08/29/2022+ # Configure hybrid Kubernetes clusters with Container insights
Supported API definitions for the Azure Stack Hub cluster can be found in this e
## Configure agent data collection
-Staring with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. Refer to documentation about agent data collection settings [here](container-insights-agent-config.md).
+Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. Refer to documentation about agent data collection settings [here](container-insights-agent-config.md).
After you have successfully deployed the chart, you can review the data for your hybrid Kubernetes cluster in Container insights from the Azure portal.
To execute with Azure PowerShell, use the following commands in the folder that
## Next steps
-With monitoring enabled to collect health and resource utilization of your hybrid Kubernetes cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
+With monitoring enabled to collect health and resource utilization of your hybrid Kubernetes cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
Title: Log alerts from Container insights | Microsoft Docs description: This article describes how to create custom log alerts for memory and CPU utilization from Container insights. Previously updated : 07/29/2021- Last updated : 08/29/2022+
To create resource-centric alerts at scale for a subscription or resource group,
You may also decide not to split when you want a condition on multiple resources in the scope. For example, if you want to create an alert if at least five machines in the resource group scope have CPU usage over 80%. You might want to see a list of the alerts by affected computer. You can use a custom workbook that uses a custom [Resource Graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook. ## Create a log query alert rule
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Title: Query logs from Container insights description: Container insights collects metrics and log data, and this article describes the records and includes sample queries. Previously updated : 07/19/2021- Last updated : 08/29/2022+
KubeMonAgentEvents | where Level != "Info"
The output shows results similar to the following example:
-![Screenshot that shows log query results of informational events from an agent.](./media/container-insights-log-query/log-query-example-kubeagent-events.png)
## Next steps
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Title: How to manage the Container insights agent | Microsoft Docs description: This article describes managing the most common maintenance tasks with the containerized Log Analytics agent used by Container insights. Previously updated : 07/21/2020- Last updated : 08/29/2022+ # How to manage the Container insights agent
Container insights uses a containerized version of the Log Analytics agent for L
## How to upgrade the Container insights agent
-Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Red Hat OpenShift version 3.x. For a [hybrid Kubernetes cluster](container-insights-hybrid-setup.md) and Azure Red Hat OpenShift version 4.x, the agent is not managed, and you need to manually upgrade the agent.
+Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). For a [hybrid Kubernetes cluster](container-insights-hybrid-setup.md), the agent is not managed, and you need to manually upgrade the agent.
-If the agent upgrade fails for a cluster hosted on AKS or Azure Red Hat OpenShift version 3.x, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
### Upgrade agent on AKS cluster
If the Log Analytics workspace is in Azure US Government, run the following comm
$ helm upgrade --set omsagent.domain=opinsights.azure.us,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers ```
-### Upgrade agent on Azure Red Hat OpenShift v4
-
-Perform the following steps to upgrade the agent on a Kubernetes cluster running on Azure Red Hat OpenShift version 4.x.
-
->[!NOTE]
->Azure Red Hat OpenShift version 4.x only supports running in the Azure commercial cloud.
->
-
-```console
-curl -o upgrade-monitoring.sh -L https://aka.ms/upgrade-monitoring-bash-script
-export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
-bash upgrade-monitoring.sh --resource-id $ azureAroV4ClusterResourceId
-```
- ## How to disable environment variable collection on a container Container insights collects environmental variables from the containers running in a pod and presents them in the property pane of the selected container in the **Containers** view. You can control this behavior by disabling collection for a specific container either during deployment of the Kubernetes cluster, or after by setting the environment variable *AZMON_COLLECT_ENV*. This feature is available from agent version ciprod11292018 and higher.
To disable collection of environmental variables on a new or existing container,
value: "False" ```
-Run the following command to apply the change to Kubernetes clusters other than Azure Red Hat OpenShift): `kubectl apply -f <path to yaml file>`. To edit ConfigMap and apply this change for Azure Red Hat OpenShift clusters, run the command:
-
-```bash
-oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging
-```
-
-This opens your default text editor. After setting the variable, save the file in the editor.
+Run the following command to apply the change to Kubernetes clusters: `kubectl apply -f <path to yaml file>`.
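To illustrate where the variable sits, here's a hedged manifest sketch; the deployment, container, and image names are hypothetical, and only the `env` entry is significant:
```bash
# Sketch only: deployment, container, and image names are hypothetical.
cat <<'EOF' > my-app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry.example.com/my-app:1.0
        env:
        - name: AZMON_COLLECT_ENV   # disables environment variable collection for this container
          value: "False"
EOF
kubectl apply -f my-app-deployment.yaml
```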
To verify the configuration change took effect, select a container in the **Containers** view in Container insights, and in the property panel, expand **Environment Variables**. The section should show only the variable created earlier - **AZMON_COLLECT_ENV=FALSE**. For all other containers, the Environment Variables section should list all the environment variables discovered.
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
Title: Configure Container insights Prometheus integration | Microsoft Docs description: This article describes how you can configure the Container insights agent to scrape metrics from Prometheus with your Kubernetes cluster. Previously updated : 04/22/2020- Last updated : 08/29/2022+ # Configure scraping of Prometheus metrics with Container insights
Typically, to use Prometheus, you need to set up and manage a Prometheus server with a store. If you integrate with Azure Monitor, a Prometheus server isn't required. You only need to expose the Prometheus metrics endpoint through your exporters or pods (application). Then the containerized agent for Container insights can scrape the metrics for you.
-![Diagram that shows container monitoring architecture for Prometheus.](./media/container-insights-prometheus-integration/monitoring-kubernetes-architecture.png)
>[!NOTE]
->The minimum agent version supported for scraping Prometheus metrics is ciprod07092019. The agent version supported for writing configuration and agent errors in the `KubeMonAgentEvents` table is ciprod10112019. For Azure Red Hat OpenShift and Red Hat OpenShift v4, the agent version is ciprod04162020 or later.
+>The minimum agent version supported for scraping Prometheus metrics is ciprod07092019. The agent version supported for writing configuration and agent errors in the `KubeMonAgentEvents` table is ciprod10112019. For Red Hat OpenShift v4, the agent version is ciprod04162020 or later.
> >For more information about the agent versions and what's included in each release, see [Agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod). >To verify your agent version, select the **Insights** tab of the resource. From the **Nodes** tab, select a node. In the properties pane, note the value of the **Agent Image Tag** property.
Scraping of Prometheus metrics is supported with Kubernetes clusters hosted on:
- Azure Kubernetes Service (AKS). - Azure Stack or on-premises. - Azure Arc enabled Kubernetes.-- Azure Red Hat OpenShift and Red Hat OpenShift version 4.x through cluster connect to Azure Arc.
+- Red Hat OpenShift version 4.x through cluster connect to Azure Arc.
### Prometheus scraping settings
Perform the following steps to configure your ConfigMap configuration file for t
* Azure Kubernetes Service (AKS) * Azure Stack or on-premises
-* Azure Red Hat OpenShift version 4.x and Red Hat OpenShift version 4.x
+* Red Hat OpenShift version 4.x
1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as container-azm-ms-agentconfig.yaml.
- >[!NOTE]
- >This step isn't required when you're working with Azure Red Hat OpenShift because the ConfigMap template already exists on the cluster.
- 1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
- If you're editing the ConfigMap YAML file for Azure Red Hat OpenShift, first run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
-
- >[!NOTE]
- >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
- >```
- >metadata:
- > annotations:
- > openshift.io/reconcile-protect: "true"
- >```
- - To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example: ```
Perform the following steps to configure your ConfigMap configuration file for t
The configuration change can take a few minutes to finish before taking effect. You must restart all omsagent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
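For reference, the download, edit, and apply loop described above might look like the following sketch; the omsagent DaemonSet name in the restart step is an assumption and may differ on your cluster:
```bash
# Download the ConfigMap template linked in this article, edit it, then apply it.
curl -o container-azm-ms-agentconfig.yaml -L https://aka.ms/container-azm-ms-agentconfig
# ... edit the prometheus-data-collection-settings sections with your scrape targets ...
kubectl apply -f container-azm-ms-agentconfig.yaml
# Manually restart the agent pods (DaemonSet name is an assumption; verify on your cluster).
kubectl rollout restart daemonset/omsagent -n kube-system
```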
-## Configure and deploy ConfigMaps for Azure Red Hat OpenShift v3
-
-This section includes the requirements and steps to successfully configure your ConfigMap configuration file for Azure Red Hat OpenShift v3.x cluster.
-
->[!NOTE]
->For Azure Red Hat OpenShift v3.x, a template ConfigMap file is created in the *openshift-azure-logging* namespace. It isn't configured to actively scrape metrics or data collection from the agent.
-
-### Prerequisites
-
-Before you start, confirm you're a member of the Customer Cluster Admin role of your Azure Red Hat OpenShift cluster to configure the containerized agent and Prometheus scraping settings. To verify you're a member of the *osa-customer-admins* group, run the following command:
-
-``` bash
- oc get groups
-```
-
-The output will resemble the following example:
-
-``` bash
-NAME USERS
-osa-customer-admins <your-user-account>@<your-tenant-name>.onmicrosoft.com
-```
-
-If you're a member of *osa-customer-admins* group, you should be able to list the `container-azm-ms-agentconfig` ConfigMap by using the following command:
-
-``` bash
-oc get configmaps container-azm-ms-agentconfig -n openshift-azure-logging
-```
-
-The output will resemble the following example:
-
-``` bash
-NAME DATA AGE
-container-azm-ms-agentconfig 4 56m
-```
-
-### Enable monitoring
-
-To configure your ConfigMap configuration file for your Azure Red Hat OpenShift v3.x cluster:
-
-1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics. The ConfigMap template already exists on the Red Hat OpenShift v3 cluster. Run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
-
- >[!NOTE]
- >The following annotation `openshift.io/reconcile-protect: "true"` must be added under the metadata of *container-azm-ms-agentconfig* ConfigMap to prevent reconciliation.
- >```
- >metadata:
- > annotations:
- > openshift.io/reconcile-protect: "true"
- >```
-
- - To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h.
- fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
- fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
- kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
- ```
-
- - To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h.
- fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
- fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
- urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
- ```
-
- - To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.node]
- interval = "1m" ## Valid time units are s, m, h.
- urls = ["http://$NODE_IP:9103/metrics"]
- fieldpass = ["metric_to_pass1", "metric_to_pass2"]
- fielddrop = ["metric_to_drop"]
- ```
-
- >[!NOTE]
- >$NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
-
- - To configure scraping of Prometheus metrics by specifying a pod annotation:
-
- 1. In the ConfigMap, specify the following configuration:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h
- monitor_kubernetes_pods = true
- ```
-
- 1. Specify the following configuration for pod annotations:
-
- ```
- - prometheus.io/scrape:"true" #Enable scraping for this pod
- - prometheus.io/scheme:"http" #If the metrics endpoint is secured then you will need to set this to `https`, if not default 'http'
- - prometheus.io/path:"/mymetrics" #If the metrics path is not /metrics, define it with this annotation.
- - prometheus.io/port:"8000" #If port is not 9102 use this annotation
- ```
-
- If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated for production workloads, set `monitor_kubernetes_pod` to `true` in ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
-
-1. Save your changes in the editor.
-
-The configuration change can take a few minutes to finish before taking effect. Then all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods. Not all pods restart at the same time. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
-
-You can view the updated ConfigMap by running the command `oc describe configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
- ## Apply updated ConfigMap If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used. Then apply it by using the same commands as before.
For the following Kubernetes environments:
- Azure Kubernetes Service (AKS) - Azure Stack or on-premises-- Azure Red Hat OpenShift and Red Hat OpenShift version 4.x
+- Red Hat OpenShift version 4.x
run the command `kubectl apply -f <configmap_yaml_file.yaml>`.
The configuration change can take a few minutes to finish before taking effect.
To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n=kube-system`.
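Because the pod name suffix is cluster-specific, here's a hedged sketch for locating an agent pod before pulling its logs:
```bash
# Find the omsagent pods (names vary per cluster), then view logs from one of them.
kubectl get pods -n kube-system | grep omsagent
kubectl logs <omsagent-pod-name> -n kube-system
```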
->[!NOTE]
->This command isn't applicable to Azure Red Hat OpenShift v3.x cluster.
->
- If there are configuration errors from the omsagent pods, the output will show errors similar to the following example: ```
config::unsupported/missing config schema version - 'v21' , using defaults
Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics: - From an agent pod logs using the same `kubectl logs` command.
- >[!NOTE]
- >This command isn't applicable to Azure Red Hat OpenShift cluster.
- >
- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:
Errors related to applying configuration changes are also available for review.
``` - From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.-- For Azure Red Hat OpenShift v3.x and v4.x, check the omsagent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.-
-Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml`.
-For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in the ConfigMap, save the YAML file and apply the updated ConfigMap by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
## Query Prometheus metrics data
InsightsMetrics
The output will show results similar to the following example.
-![Screenshot that shows the log query results of data ingestion volume.](./media/container-insights-prometheus-integration/log-query-example-usage-03.png)
To estimate what each metrics size in GB is for a month to understand if the volume of data ingested received in the workspace is high, the following query is provided.
InsightsMetrics
The output will show results similar to the following example.
-![Screenshot that shows log query results of data ingestion volume.](./media/container-insights-prometheus-integration/log-query-example-usage-02.png)
For more information on how to analyze usage, see [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md).
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
Title: Transition from the Container Monitoring Solution to using Container Insights Previously updated : 1/18/2022 Last updated : 8/29/2022 description: "Learn how to migrate from using the legacy OMS solution to monitoring your containers using Container Insights"-+ # Transition from the Container Monitoring Solution to using Container Insights
azure-monitor Container Insights Update Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-update-metrics.md
Title: Update Container insights for metrics | Microsoft Docs description: This article describes how you update Container insights to enable the custom metrics feature that supports exploring and alerting on aggregated metrics. Previously updated : 10/09/2020 Last updated : 08/29/2022 -+
Container insights now includes support for collecting metrics from Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes cluster nodes and pods, and then writing those metrics to the Azure Monitor metrics store. With this support, you can present timely aggregate calculations (average, count, maximum, minimum, sum) in performance charts, pin performance charts in Azure portal dashboards, and take advantage of metric alerts.
->[!NOTE]
-> This feature doesn't currently support Azure Red Hat OpenShift clusters.
- This feature enables the following metrics: | Metric namespace | Metric | Description |
To update an existing AKS cluster monitored by Container insights:
2. In the banner that appears at the top of the pane, select **Enable** to start the update.
- ![Screenshot of the Azure portal that shows the banner for upgrading an A K S cluster.](./media/container-insights-update-metrics/portal-banner-enable-01.png)
+ :::image type="content" source="./media/container-insights-update-metrics/portal-banner-enable-01.png" alt-text="Screenshot of the Azure portal that shows the banner for upgrading an AKS cluster." lightbox="media/container-insights-update-metrics/portal-banner-enable-01.png":::
The process can take several seconds to finish. You can track its progress under **Notifications** from the menu.
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Title: Azure Monitor overview description: Overview of Microsoft services and functionalities that contribute to a complete monitoring strategy for your Azure services and applications. -- Previously updated : 07/25/2022-++ Last updated : 09/01/2022+
A few examples of what you can do with Azure Monitor include:
[!INCLUDE [azure-lighthouse-supported-service](../../includes/azure-lighthouse-supported-service.md)] ## Overview
-The following diagram gives a high-level view of Azure Monitor. At the center of the diagram are the data stores for metrics and logs, which are the two fundamental types of data used by Azure Monitor. On the left are the [sources of monitoring data](data-sources.md) that populate these [data stores](data-platform.md). On the right are the different functions that Azure Monitor performs with this collected data. This includes such actions as analysis, alerting, and streaming to external systems.
+The following diagram gives a high-level view of Azure Monitor.
+- At the center of the diagram are the data stores for metrics and logs and changes, which are the fundamental types of data used by Azure Monitor.
+- On the left are the [sources of monitoring data](data-sources.md) that populate these [data stores](data-platform.md).
+- On the right are the different functions that Azure Monitor performs with this collected data. This includes such actions as analysis, alerting, and integration such as streaming to external systems.
-The following video uses an earlier version of the preceding diagram, but its explanations are still relevant.
+The following video uses an earlier version of the preceding diagram, but its explanations are still relevant.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4qXeL]
-## Monitor data platform
-All data collected by Azure Monitor fits into one of two fundamental types, [metrics and logs](data-platform.md). [Metrics](essentials/data-platform-metrics.md) are numerical values that describe some aspect of a system at a particular point in time. They're lightweight and capable of supporting near-real-time scenarios. [Logs](logs/data-platform-logs.md) contain different kinds of data organized into records with different sets of properties for each type. Telemetry such as events and traces is stored as logs in addition to performance data so that it can all be combined for analysis.
+
+## Observability and the Azure Monitor data platform
+Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. Observability can be achieved by aggregating and correlating these different types of data across the entire system being monitored.
+
+Natively, Azure Monitor stores data as metrics, logs, or changes. Traces are stored in the Logs store. Each storage platform is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost-effective manner.
+
+| Pillar | Description |
+|:|:|
+| Metrics | Metrics are numerical values that describe some aspect of a system at a particular point in time. They are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using various algorithms, compared to other metrics, and analyzed for trends over time.<br><br>Metrics in Azure Monitor are stored in a time-series database, which is optimized for analyzing time-stamped data. For more information, see [Azure Monitor Metrics](essentials/data-platform-metrics.md). |
+| Logs | [Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free-form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.<br><br>Azure Monitor stores logs in the Azure Monitor Logs store. The store allows you to segregate logs into separate "Log Analytics workspaces", where you can analyze them using the Log Analytics tool. Log Analytics workspaces are based on [Azure Data Explorer](/azure/data-explorer/), which provides a powerful analysis engine and the rich [Kusto query language](/azure/kusto/query/). For more information, see [Azure Monitor Logs](logs/data-platform-logs.md). |
+| Distributed traces | Traces are series of related events that follow a user request through a distributed system. They can be used to determine behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.<br><br>Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md). Trace data is stored with other application log data collected by Application Insights and stored in Azure Monitor Logs. For more information, see [What is Distributed Tracing?](app/distributed-tracing.md). |
+| Changes | Changes are tracked using [Change Analysis](change/change-analysis.md). Changes are a series of events that occur in your Azure application and resources. Change Analysis is a subscription-level observability tool that's built on the power of Azure Resource Graph. <br><br> Once Change Analysis is enabled, the `Microsoft.ChangeAnalysis` resource provider is registered with an Azure Resource Manager subscription. Change Analysis' integrations with Monitoring and Diagnostics tools provide data to help users understand what changes might have caused the issues. Read more about Change Analysis in [Use Change Analysis in Azure Monitor](./change/change-analysis.md). |
+
+Azure Monitor aggregates and correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. Because this data is stored together, it can be correlated and analyzed using a common set of tools.
++
+> [!NOTE]
+> It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources.
For many Azure resources, you'll see data collected by Azure Monitor right in their overview page in the Azure portal. Look at any virtual machine (VM), for example, and you'll see several charts that display performance metrics. Select any of the graphs to open the data in [Metrics Explorer](essentials/metrics-charts.md) in the Azure portal. With Metrics Explorer, you can chart the values of multiple metrics over time. You can view the charts interactively or pin them to a dashboard to view them with other visualizations. ![Diagram that shows metrics data flowing into Metrics Explorer to use in visualizations.](media/overview/metrics.png)
-Log data collected by Azure Monitor can be analyzed with [queries](logs/log-query-overview.md) to quickly retrieve, consolidate, and analyze collected data. You can create and test queries by using [Log Analytics](./logs/log-query-overview.md) in the Azure portal. You can then either directly analyze the data by using different tools or save queries for use with [visualizations](best-practices-analysis.md) or [alert rules](alerts/alerts-overview.md).
+Log data collected by Azure Monitor can be analyzed with [queries](logs/log-query-overview.md) to quickly retrieve, consolidate, and analyze collected data. You can create and test queries by using the [Log Analytics](./logs/log-query-overview.md) user interface in the Azure portal. You can then either directly analyze the data by using different tools or save queries for use with [visualizations](best-practices-analysis.md) or [alert rules](alerts/alerts-overview.md).
-Azure Monitor uses a version of the [Kusto Query Language](/azure/kusto/query/) that's suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language by using [multiple lessons](logs/get-started-queries.md). Particular guidance is provided to users who are already familiar with [SQL](/azure/data-explorer/kusto/query/sqlcheatsheet) and [Splunk](/azure/data-explorer/kusto/query/splunk-cheat-sheet).
+Azure Monitor Logs uses a version of the [Kusto Query Language](/azure/kusto/query/) that's suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language by using [multiple lessons](logs/get-started-queries.md). Particular guidance is provided to users who are already familiar with [SQL](/azure/data-explorer/kusto/query/sqlcheatsheet) and [Splunk](/azure/data-explorer/kusto/query/splunk-cheat-sheet).
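You can also run queries outside the portal. Here's a hedged Azure CLI sketch, assuming the `log-analytics` extension is installed and using a placeholder workspace GUID and a simple illustrative query:
```bash
# Sketch only: requires the log-analytics CLI extension and a real workspace GUID.
az extension add --name log-analytics
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "Heartbeat | summarize count() by Computer | top 10 by count_"
```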
![Diagram that shows logs data flowing into Log Analytics for analysis.](media/overview/logs.png)
Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/ov
Azure Monitor can collect data from [sources](monitor-reference.md) that range from your application to any operating system and services it relies on, down to the platform itself. Azure Monitor collects data from each of the following tiers: -- **Application monitoring data**: Data about the performance and functionality of the code you've written, regardless of its platform.-- **Guest operating system monitoring data**: Data about the operating system on which your application is running. The system could be running in Azure, another cloud, or on-premises.-- **Azure resource monitoring data**: Data about the operation of an Azure resource. For a complete list of the resources that have metrics or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md#azure-supported-services).-- **Azure subscription monitoring data**: Data about the operation and management of an Azure subscription, and data about the health and operation of Azure itself.-- **Azure tenant monitoring data**: Data about the operation of tenant-level Azure services, such as Azure Active Directory.-- **Azure resource change data**: Data about changes within your Azure resources and how to address and triage incidents and issues.
+- **Application** - Data about the performance and functionality of the code you've written, regardless of its platform.
+- **Guest operating system** - Data about the operating system on which your application is running. The system could be running in Azure, another cloud, or on-premises.
+- **Azure resource** - Data about the operation of an Azure resource. For a complete list of the resources that have metrics or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md#azure-supported-services).
+- **Azure subscription** - Data about the operation and management of an Azure subscription, and data about the health and operation of Azure itself.
+- **Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory.
+- **Azure resource changes** - Data about changes within your Azure resources and how to address and triage incidents and issues.
As soon as you create an Azure subscription and add resources such as VMs and web apps, Azure Monitor starts collecting data. [Activity logs](essentials/platform-logs-overview.md) record when resources are created or modified. [Metrics](essentials/data-platform-metrics.md) tell you how the resource is performing and the resources that it's consuming.
Azure Monitor can collect log data from any REST client by using the [Data Colle
Monitoring data is only useful if it can increase your visibility into the operation of your computing environment. Some Azure resource providers have a "curated visualization," which gives you a customized monitoring experience for that particular service or set of services. They generally require minimal configuration. Larger, scalable, curated visualizations are known as "insights" and marked with that name in the documentation and the Azure portal.
-For more information, see [List of insights and curated visualizations using Azure Monitor](monitor-reference.md#insights-and-curated-visualizations). Some of the larger insights are also described here.
+For more information, see [List of insights and curated visualizations using Azure Monitor](monitor-reference.md#insights-and-curated-visualizations). Some of the larger insights are described here.
### Application Insights
You'll often have the requirement to integrate Azure Monitor with other systems
Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have essentially unlimited possibilities to build custom solutions that integrate with Azure Monitor.
-## Observability data in Azure Monitor
-Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. These are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services.
-
-Azure resources generate a significant amount of monitoring data. Azure Monitor consolidates this data along with monitoring data from other sources into either a Metrics or Logs platform. Each is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost effective manner. Insights in Azure Monitor such as [Application Insights](app/app-insights-overview.md) or [Container insights](containers/container-insights-overview.md) have analysis tools that allow you to focus on the particular monitoring scenario without having to understand the differences between the two types of data.
-
-| Pillar | Description |
-|:|:|
-| Metrics | Metrics are numerical values that describe some aspect of a system at a particular point in time. They are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using a variety of algorithms, compared to other metrics, and analyzed for trends over time.<br><br>Metrics in Azure Monitor are stored in a time-series database which is optimized for analyzing time-stamped data. For more information, see [Azure Monitor Metrics](essentials/data-platform-metrics.md). |
-| Logs | [Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.<br><br>Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [Azure Data Explorer](/azure/data-explorer/) which provides a powerful analysis engine and [rich query language](/azure/kusto/query/). For more information, see [Azure Monitor Logs](logs/data-platform-logs.md). |
-| Distributed traces | Traces are series of related events that follow a user request through a distributed system. They can be used to determine behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.<br><br>Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md), and trace data is stored with other application log data collected by Application Insights and stored in Azure Monitor Logs. For more information, see [What is Distributed Tracing?](app/distributed-tracing.md). |
-| Changes | Changes are a series of events that occur in your Azure application and resources. Change Analysis is a subscription-level observability tool that's built on the power of Azure Resource Graph. <br><br> Once Change Analysis is enabled, the `Microsoft.ChangeAnalysis` resource provider is registered with an Azure Resource Manager subscription. Change Analysis' integrations with Monitoring and Diagnostics tools provide data to help users understand what changes might have caused the issues. Read more about Change Analysis in [Use Change Analysis in Azure Monitor](./change/change-analysis.md). |
--
-> [!NOTE]
-> It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources.
--- ## Next steps
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
Azure Monitor has no ability to monitor the status of a service or daemon. There
> [!NOTE] > The Change Tracking and Analysis solution is different from the [Change Analysis](vminsights-change-analysis.md) feature in VM insights. This feature is in public preview and not yet included in this scenario.
-For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You'll have to [create an Azure Automation account](../../automation/quickstarts/create-account-portal.md) to support the solution.
+For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You'll have to [create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal) to support the solution.
When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for log query alert rules.
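For example, once changes are being collected, you can spot-check the new data with a log query. The following Azure CLI sketch assumes the `log-analytics` CLI extension and the `ConfigurationChange` table populated by the solution; the workspace GUID is a placeholder.

```azurecli
# Requires the Log Analytics CLI extension
az extension add --name log-analytics

# List recent software changes recorded by Change Tracking
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "ConfigurationChange | where ConfigChangeType == 'Software' | take 10"
```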
Use [SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) to
## Next steps * [Learn how to analyze data in Azure Monitor logs using log queries](../logs/get-started-queries.md)
-* [Learn about alerts using metrics and logs in Azure Monitor](../alerts/alerts-overview.md)
+* [Learn about alerts using metrics and logs in Azure Monitor](../alerts/alerts-overview.md)
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 08/26/2022 Last updated : 09/09/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* North Europe * Norway East * South Central US
+* South India
* Southeast Asia * Switzerland North * UK South
azure-portal Azure Portal Dashboards Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards-structure.md
LetΓÇÖs break down the relevant sections of the JSON. The top-level properties,
The Azure resource ID, subject to the [naming conventions of Azure resources](/azure/architecture/best-practices/resource-naming). When the portal creates a dashboard, it generally chooses an ID in the form of a guid, but you can use any valid name when you create them programmatically.
+When you export a dashboard from the Azure portal, the `id` field is not included. If you import this JSON file to create a new dashboard in the Azure portal, a new ID value will be assigned to each new dashboard.
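For example, you can re-create a dashboard from an exported JSON file with the Azure CLI and let the portal assign the new ID. This is a sketch that assumes the `portal` CLI extension; the resource group, dashboard name, location, and file path are placeholders.

```azurecli
# Requires the portal CLI extension: az extension add --name portal
# Create a dashboard from exported JSON; a new resource ID is generated automatically
az portal dashboard create \
  --resource-group myResourceGroup \
  --name mySimpleDashboard \
  --location centralus \
  --input-path ./exported-dashboard.json
```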
+ ### Name The name is the segment of the resource ID that does not include the subscription, resource type, or resource group information. Essentially, it's the last segment of the resource ID.
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-powershell.md
Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
You'll be using several pieces of information repeatedly. Create variables to store the information. ```azurepowershell-interactive
-# Name of resource group used throughout this article
-$resourceGroupName = 'myResourceGroup'
-
-# Azure region
-$location = 'centralus'
-
-# Dashboard Title
-$dashboardTitle = 'Simple VM Dashboard'
-
-# Dashboard Name
-$dashboardName = $dashboardTitle -replace '\s'
-
-# Your Azure Subscription ID
-$subscriptionID = (Get-AzContext).Subscription.Id
-
-# Name of test VM
-$vmName = 'myVM1'
+# Na
``` ## Create a resource group
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
description: Describes the functions to use in a Bicep file to retrieve values a
Previously updated : 08/05/2022 Last updated : 09/09/2022 # Resource functions for Bicep
module sql './sql.bicep' = {
} ```
-You'll get an error if you attempt to use this function in any other part of the Bicep file. You'll also get an error if you use this function with string interpolation, even when used in the params section.
+You'll get an error if you attempt to use this function in any other part of the Bicep file. You'll also get an error if you use this function with string interpolation, even when used in the params section.
The function can be used only with a module parameter that has the `@secure()` decorator.
The possible uses of `list*` are shown in the following table.
| Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) | | Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) | | Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) |
-| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2021-10-15/database-accounts/list-connection-strings) |
-| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-10-15/database-accounts/list-keys) |
-| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-10-15/notebook-workspaces/list-connection-info) |
+| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2022-05-15/database-accounts/list-connection-strings) |
+| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2022-05-15/database-accounts/list-keys) |
+| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2022-05-15/notebook-workspaces/list-connection-info) |
| Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 05/16/2022 Last updated : 09/09/2022
The possible uses of `list*` are shown in the following table.
| Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) | | Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) | | Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) |
-| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2021-10-15/database-accounts/list-connection-strings) |
-| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-10-15/database-accounts/list-keys) |
-| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-10-15/notebook-workspaces/list-connection-info) |
+| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2022-05-15/database-accounts/list-connection-strings) |
+| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2022-05-15/database-accounts/list-keys) |
+| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2022-05-15/notebook-workspaces/list-connection-info) |
| Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) | | Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/topics/list-shared-access-keys) |
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Azure Video Indexer's insights can be applied to many scenarios, among them are:
* Deep search: Use the insights extracted from the video to enhance the search experience across a video library. For example, indexing spoken words and faces can enable the search experience of finding moments in a video where a person spoke certain words or when two people were seen together. Search based on such insights from videos is applicable to news agencies, educational institutes, broadcasters, entertainment content owners, enterprise LOB apps, and in general to any industry that has a video library that users need to search against. * Content creation: Create trailers, highlight reels, social media content, or news clips based on the insights Azure Video Indexer extracts from your content. Keyframes, scenes markers, and timestamps of the people and label appearances make the creation process smoother and easier, enabling you to easily get to the parts of the video you need when creating content.
-* Accessibility: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure Video Indexer in multiple languages.
+* Accessibility: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure Video Indexer in multiple languages.
* Monetization: Azure Video Indexer can help increase the value of videos. For example, industries that rely on ad revenue (news media, social media, and so on) can deliver relevant ads by using the extracted insights as additional signals to the ad server. * Content moderation: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content. * Recommendations: Video insights can be used to improve user engagement by highlighting the relevant video moments to users. By tagging each video with additional metadata, you can recommend to users the most relevant videos and highlight the parts of the video that will match their needs. ## Video/audio AI features
-The following list shows the insights you can retrieve from your videos using Azure Video Indexer video and audio AI features (models.
+The following list shows the insights you can retrieve from your video/audio files using Azure Video Indexer video and audio AI features (models).
Unless specified otherwise, a model is generally available.
Unless specified otherwise, a model is generally available.
### Audio models
-* **Audio transcription**: Converts speech to text over 50 languages and allows extensions. For a comprehensive list of language support by service, see [language support](language-support.md).
-* **Automatic language detection**: Identifies the dominant spoken language. For a comprehensive list of language support by service, see [language support](language-support.md). If the language can't be identified with confidence, Azure Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
+* **Audio transcription**: Converts speech to text in over 50 languages and allows extensions. For more information, see [Azure Video Indexer language support](language-support.md).
+* **Automatic language detection**: Identifies the dominant spoken language. For more information, see [Azure Video Indexer language support](language-support.md). If the language can't be identified with confidence, Azure Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
* **Multi-language speech identification and transcription**: Identifies the spoken language in different segments from audio. It sends each segment of the media file to be transcribed and then combines the transcription back to one unified transcription. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md). * **Closed captioning**: Creates closed captioning in three formats: VTT, TTML, SRT. * **Two channel processing**: Auto detects separate transcript and merges to single timeline.
Unless specified otherwise, a model is generally available.
* **Speaker statistics**: Provides statistics for speakers' speech ratios. * **Textual content moderation**: Detects explicit text in the audio transcript. * **Emotion detection**: Identifies emotions based on speech (what's being said) and voice tonality (how it's being said). The emotion could be joy, sadness, anger, or fear.
-* **Translation**: Creates translations of the audio transcript to 54 different languages.
+* **Translation**: Creates translations of the audio transcript to many different languages. For more information, see [Azure Video Indexer language support](language-support.md).
* **Audio effects detection** (preview): Detects the following audio effects in the non-speech segments of the content: alarm or siren, dog barking, crowd reactions (cheering, clapping, and booing), gunshot or explosion, laughter, breaking glass, and silence. The detected acoustic events are in the closed captions file. The file can be downloaded from the Azure Video Indexer portal. For more information, see [Audio effects detection](audio-effects-detection.md).
The following list shows the supported browsers that you can use for the Azure V
- Chrome for Android, version: 87 - Firefox for Android, version: 83
+### Supported file formats
+
+See the [input container/file formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference) article for a list of file formats that you can use with Azure Video Indexer.
+ ### Start using Azure Video Indexer You can access Azure Video Indexer capabilities in three ways:
You're ready to get started with Azure Video Indexer. For more information, see
- [Get started with the Azure Video Indexer website](video-indexer-get-started.md). - [Process content with Azure Video Indexer REST API](video-indexer-use-apis.md). - [Embed visual widgets in your application](video-indexer-embed-widgets.md).+
+For the latest updates, see [Azure Video Indexer release notes](release-notes.md).
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Azure Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
-When visiting the [Azure Video Indexer](https://www.videoindexer.ai/) website for the first time, a trial account is automatically created for you. With the trial account, you get a certain number of free indexing minutes. You can later add a paid (ARM-based or classic) account. With the paid option, you pay for indexed minutes.
-
-For details about available accounts (trial and paid options), see [Azure Video Indexer account types](accounts-overview.md).
+When visiting the [Azure Video Indexer](https://www.videoindexer.ai/) website for the first time, a trial account is automatically created for you. With the trial account, you get a certain number of free indexing minutes. You can later add a paid account. With the paid option, you pay for indexed minutes. For details about available accounts (trial and paid options), see [Azure Video Indexer account types](accounts-overview.md).
This article shows how the developers can take advantage of the [Azure Video Indexer API](https://api-portal.videoindexer.ai/).
+## Prerequisite
+
+Before you start, see the [Recommendations](#recommendations) section (that follows later in this article).
+ ## Subscribe to the API 1. Sign in to [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/).
Access tokens expire after 1 hour. Make sure your access token is valid before u
You're ready to start integrating with the API. Find [the detailed description of each Azure Video Indexer REST API](https://api-portal.videoindexer.ai/).
-## Recommendations
-
-This section lists some recommendations when using Azure Video Indexer API.
--- If you're planning to upload a video, it's recommended to place the file in some public network location (for example, an Azure Blob Storage account). Get the link to the video and provide the URL as the upload file param.-
- The URL provided to Azure Video Indexer must point to a media (audio or video) file. An easy verification for the URL (or SAS URL) is to paste it into a browser, if the file starts playing/downloading, it's likely a good URL. If the browser is rendering some visualization, it's likely not a link to a file but to an HTML page.
-- When you call the API that gets video insights for the specified video, you get a detailed JSON output as the response content. [See details about the returned JSON in this topic](video-indexer-output-json-v2.md).-- The JSON output produced by the API contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).-- We do not recommend that you use data directly from the artifacts folder for production purposes. Artifacts are intermediate outputs of the indexing process. They are essentially raw outputs of the various AI engines that analyze the videos; the artifacts schema may change over time. -
- It is recommended that you use the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, as described in [Get insights and artifacts produced by the API](video-indexer-output-json-v2.md#get-insights-produced-by-the-api) and **not** [Get-Video-Artifact-Download-Url](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
- ## Operational API calls The Account ID parameter is required in all operational API calls. Account ID is a GUID that can be obtained in one of the following ways:
The Account ID parameter is required in all operational API calls. Account ID is
``` https://www.videoindexer.ai/accounts/00000000-f324-4385-b142-f77dacb0a368/videos/d45bf160b5/ ```
+## Recommendations
+
+This section lists some recommendations when using Azure Video Indexer API.
+
+- If you're planning to upload a video, it's recommended to place the file in some public network location (for example, an Azure Blob Storage account). Get the link to the video and provide the URL as the upload file param.
+
+  The URL provided to Azure Video Indexer must point to a media (audio or video) file. An easy verification for the URL (or SAS URL) is to paste it into a browser; if the file starts playing or downloading, it's likely a good URL. If the browser renders some visualization, it's likely not a link to a file but to an HTML page. A sketch of generating a read-only SAS URL with the Azure CLI follows this list.
+- When you call the API that gets video insights for the specified video, you get a detailed JSON output as the response content. [See details about the returned JSON in this topic](video-indexer-output-json-v2.md).
+- The JSON output produced by the API contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
+- We do not recommend that you use data directly from the artifacts folder for production purposes. Artifacts are intermediate outputs of the indexing process. They are essentially raw outputs of the various AI engines that analyze the videos; the artifacts schema may change over time.
+
+ It is recommended that you use the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, as described in [Get insights and artifacts produced by the API](video-indexer-output-json-v2.md#get-insights-produced-by-the-api) and **not** [Get-Video-Artifact-Download-Url](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
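As a sketch of the upload recommendation above (the storage account, container, blob, and expiry are placeholders), a short-lived read-only SAS URL can be generated with the Azure CLI:

```azurecli
# Generate a read-only SAS URL for the media file; pass the output as the upload URL
az storage blob generate-sas \
  --account-name mystorageaccount \
  --container-name videos \
  --name myvideo.mp4 \
  --permissions r \
  --expiry 2022-12-31T00:00:00Z \
  --https-only \
  --full-uri
```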
## Code sample
backup Azure Backup Architecture For Sap Hana Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md
See the [high-level architecture of Azure Backup for SAP HANA databases](./sap-h
1. To stream the backup data, Backint creates up to three pipes, which directly write to Azure BackupΓÇÖs Recovery Services vault.
- If you arenΓÇÖt using firewall/NVA in your setup, then the backup stream is transferred over the Azure network to the Recovery Services vault. Also, you can set up [Virtual Network Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) or [Private Endpoint](../private-link/private-endpoint-overview.md) to allow SAP HANA to send backup traffic directly to Azure Storage, skipping NVA/Azure Firewall. Additionally, when you use firewall/NVA, the traffic to Azure Active Directory and Recovery Services vault will pass through the firewall/NVA and it doesnΓÇÖt affect the overall backup performance.
+ If you arenΓÇÖt using firewall/NVA in your setup, then the backup stream is transferred over the Azure network to the Recovery Services vault / Azure Storage. Also, you can set up [Virtual Network Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) or [Private Endpoint](../private-link/private-endpoint-overview.md) to allow SAP HANA to send backup traffic directly to Recovery Services Vault / Azure Storage, skipping NVA/Azure Firewall. Additionally, when you use firewall/NVA, the traffic to Azure Active Directory and Azure Backup Service will pass through the firewall/NVA and it doesnΓÇÖt affect the overall backup performance.
1. Azure Backup attempts to achieve speeds up to 420 MB/sec for non-log backups and up to 100 MB/sec for log backups. [Learn more](./tutorial-backup-sap-hana-db.md#understanding-backup-and-restore-throughput-performance) about backup and restore throughput performance.
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
Previously updated : 03/17/2022 Last updated : 09/09/2022 - # Connect to a VM using a native client This article helps you configure your Bastion deployment, and then connect to a VM in the VNet using the native client (SSH or RDP) on your local computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally with this feature, you can now also upload or download files, depending on the connection type and client.
-Your capabilities on the VM when connecting via a native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via Bastion isn't supported.
+Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via Bastion isn't supported.
> [!NOTE] > This configuration requires the Standard SKU tier for Azure Bastion.
-There are two different sets of connection instructions.
+After you deploy this feature, there are two different sets of connection instructions.
-* Connect to a VM from the [native client on a Windows local computer](#connect). This lets you do the following:
+* [Connect to a VM from the native client on a Windows local computer](#connect). This lets you do the following:
* Connect using SSH or RDP. * [Upload and download files](vm-upload-download-native.md#rdp) over RDP. * If you want to connect using SSH and need to upload files to your target VM, use the **az network bastion tunnel** command instead.
-* Connect to a VM using the [**az network bastion tunnel** command](#connect-tunnel). This lets you do the following:
+* [Connect to a VM using the **az network bastion tunnel** command](#connect-tunnel). This lets you do the following:
* Use native clients on *non*-Windows local computers (example: a Linux PC). * Use the native client of your choice. (This includes the Windows native client.)
Before you begin, verify that you have the following prerequisites:
* [Configure your Windows VM to be Azure AD-joined](../active-directory/devices/concept-azure-ad-join.md). * [Configure your Windows VM to be hybrid Azure AD-joined](../active-directory/devices/concept-azure-ad-join-hybrid.md).
-## <a name="configure"></a>Configure Bastion
+## <a name="configure"></a>Configure the native client support feature
-You can either [modify an existing Bastion deployment](#modify-host), or [deploy Bastion](#configure-new) to a virtual network.
+You can configure this feature by either modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified.
-### <a name="modify-host"></a>To modify an existing Bastion deployment
+### To modify an existing Bastion deployment
-If you have already deployed Bastion to your VNet, modify the following configuration settings:
+If you've already deployed Bastion to your VNet, modify the following configuration settings:
-1. Navigate to the **Configuration** page for your Bastion resource. Verify that the SKU is **Standard**. If it isn't, change it to **Standard** from the dropdown.
-1. Check the box for **Native Client Support** and apply your changes.
+1. Navigate to the **Configuration** page for your Bastion resource. Verify that the SKU Tier is **Standard**. If it isn't, select **Standard**.
+1. Select the box for **Native Client Support**, then apply your changes.
- :::image type="content" source="./media/connect-native-client-windows/update-host.png" alt-text="Settings for updating an existing host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/update-host-expand.png":::
+ :::image type="content" source="./media/connect-native-client-windows/update-host.png" alt-text="Screenshot that shows settings for updating an existing host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/update-host.png":::
-### <a name="configure-new"></a>To deploy Bastion to a VNet
+### To deploy Bastion with the native client feature
-If you haven't already deployed Bastion to your VNet, [deploy Bastion](tutorial-create-host-portal.md#createhost). When configuring Bastion, specify the following settings:
+If you haven't already deployed Bastion to your VNet, you can include the native client feature when you deploy Bastion using manual settings. For steps, see [Tutorial - Deploy Bastion with manual settings](tutorial-create-host-portal.md#createhost). When you deploy Bastion, specify the following settings:
-1. On the **Basics** tab, for **Instance Details -> Tier** select **Standard** to deploy Bastion using the Standard SKU.
+1. On the **Basics** tab, for **Instance Details -> Tier** select **Standard**. Native client support requires the Standard SKU.
:::image type="content" source="./media/connect-native-client-windows/standard.png" alt-text="Settings for a new bastion host with Standard SKU selected." lightbox="./media/connect-native-client-windows/standard.png":::
-1. On the **Advanced** tab, check the box for **Native Client Support**.
+1. Before you create the bastion host, go to the **Advanced** tab and select the checkbox for **Native Client Support**, along with the checkboxes for any other features that you want to deploy.
- :::image type="content" source="./media/connect-native-client-windows/new-host.png" alt-text="Settings for a new bastion host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/new-host-expand.png":::
+ :::image type="content" source="./media/connect-native-client-windows/new-host.png" alt-text="Screenshot that shows settings for a new bastion host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/new-host.png":::
+
+1. Click **Review + create** to validate, then click **Create** to deploy your Bastion host.
## <a name="verify"></a>Verify roles and ports
-Verify that the following roles and ports are configured in order to connect.
+Verify that the following roles and ports are configured in order to connect to the VM.
-### <a name="roles"></a>Required roles
+### Required roles
* Reader role on the virtual machine. * Reader role on the NIC with private IP of the virtual machine.
To connect to a Windows VM using native client support, you must have the follow
To learn about how to best configure NSGs with Azure Bastion, see [Working with NSG access and Azure Bastion](bastion-nsg.md).
-## <a name="connect"></a>Connect - Windows native client
+## <a name="connect"></a>Connect to VM - Windows native client
This section helps you connect to your virtual machine from the native client on a local Windows computer. If you want to upload and download files after connecting, you must use an RDP connection. For more information about file transfers, see [Upload or download files](vm-upload-download-native.md).
Use the example that corresponds to the type of target VM to which you want to c
``` **SSH:**
-
+ The extension can be installed by running ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example. ```azurecli
Use the example that corresponds to the type of target VM to which you want to c
1. Once you sign in to your target VM, the native client on your computer will open up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
-## <a name="connect-tunnel"></a>Connect - other native clients
+## <a name="connect-tunnel"></a>Connect to VM - other native clients
This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: a Linux PC) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM.
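For example, the tunnel command forwards a local port to a port on the target VM, and any native client can then connect through it. This is a sketch; the Bastion name, resource group, VM resource ID, and ports are placeholders.

```azurecli
# Open a tunnel from local port 2222 to SSH (port 22) on the target VM
az network bastion tunnel \
  --name myBastionHost \
  --resource-group myResourceGroup \
  --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --resource-port 22 \
  --port 2222

# Then, from a second terminal, connect with the native client of your choice:
# ssh azureuser@127.0.0.1 -p 2222
```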
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
description: Learn how to deploy Bastion with default settings from the Azure po
Previously updated : 08/02/2022 Last updated : 09/09/2022 - # Quickstart: Deploy Azure Bastion with default settings
-In this quickstart, you'll learn how to deploy Azure Bastion with default settings to your virtual network using the Azure portal. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration.
+In this quickstart, you'll learn how to deploy Azure Bastion with default settings to your virtual network using the Azure portal. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration. Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on one of your VMs and maintain yourself. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
-In this quickstart, you deploy Bastion from your VM resource using the Azure portal. Bastion is deployed using default settings that are based on the virtual network in which your VM is located. You then connect to your VM using RDP/SSH connectivity and the VM's private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it.
-
-Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on one of your VMs and maintain yourself. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+The following steps walk you through how to deploy Bastion from your VM resource using the Azure portal. When you deploy using default settings, the settings are based on the virtual network to which Bastion will be deployed. After deploying Bastion, you'll then connect to your VM using RDP/SSH connectivity and the VM's private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it. While the steps in this quickstart help you deploy Bastion from your VM resource, you can deploy Bastion from a virtual network resource instead. The steps are similar, except you start from the virtual network resource instead of the VM resource.
## <a name="prereq"></a>Prerequisites
You can use the following example values when creating this configuration, or yo
**Bastion values:**
-When you deploy from VM settings, Bastion is automatically configured with default values.
+When you deploy from VM settings, Bastion is automatically configured with default values from the VNet.
|**Name** | **Default value** | |||
When you create Azure Bastion using default settings, the settings are configure
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment. 1. On the page for your VM, in the **Operations** section on the left menu, select **Bastion**. When the **Bastion** page opens, it checks to see if you have enough available address space to create the AzureBastionSubnet. If you don't, you'll see settings to allow you to add more address space to your VNet to meet this requirement.
-1. On the **Bastion** page, you can view some of the values that will be used when creating the bastion host for your virtual network. Select **Create Azure Bastion using defaults** to deploy bastion using default settings.
+1. On the **Bastion** page, you can view some of the values that will be used when creating the bastion host for your virtual network. Select **Deploy Bastion** to deploy bastion using default settings.
- :::image type="content" source="./media/quickstart-host-portal/deploy-bastion.png" alt-text="Screenshot of Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy-bastion.png":::
+ :::image type="content" source="./media/quickstart-host-portal/deploy.png" alt-text="Screenshot of Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy.png":::
1. Bastion begins deploying. This can take around 10 minutes to complete.
In this quickstart, you deployed Bastion to your virtual network, and then conne
> [!div class="nextstepaction"] > [VM connections](vm-about.md)
-> [Azure Bastion configuration settings and features](configuration-settings.md).
+
+> [!div class="nextstepaction"]
+> [Azure Bastion configuration settings and features](configuration-settings.md)
batch Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/accounts.md
Batch supports the following types of Azure Storage accounts:
- General-purpose v1 (GPv1) accounts - Blob storage accounts (currently supported for pools in the Virtual Machine configuration)
+> [!IMPORTANT]
+> You can't use the [Application Packages](batch-application-packages.md) feature with Azure Storage accounts configured with [firewall rules](../storage/common/storage-network-security.md), or with **Hierarchical namespace** set to **Enabled**.
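To check an existing account against these restrictions, you can inspect both settings with the Azure CLI (a sketch; the account name is a placeholder):

```azurecli
# Hierarchical namespace must not be enabled (expect false or null)
az storage account show --name mystorageaccount --query "isHnsEnabled"

# A default action of Deny indicates firewall rules that block Application Packages
az storage account show --name mystorageaccount --query "networkRuleSet.defaultAction"
```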
+ For more information about storage accounts, see [Azure storage account overview](../storage/common/storage-account-overview.md). You can associate a storage account with your Batch account when you create the Batch account, or later. Consider your cost and performance requirements when choosing a storage account. For example, the GPv2 and blob storage account options support greater [capacity and scalability limits](https://azure.microsoft.com/blog/announcing-larger-higher-scale-storage-accounts/) compared with GPv1. (Contact Azure Support to request an increase in a storage limit.) These account options can improve the performance of Batch solutions that contain a large number of parallel tasks that read from or write to the storage account.
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
Opting in isn't required at this time. However, in the future, using simplified
Simplified compute node communication in Azure Batch is currently available for the following regions: -- Public: Central US EUAP, East US 2 EUAP, West Central US, North Central US, South Central US, East US, East US 2, West US 2, West US, Central US, West US 3, East Asia, South East Asia, Australia East, Australia Southeast, Brazil Southeast, Brazil South, Canada Central, Canada East, North Europe, West Europe, Central India, Japan East, Japan West, Korea Central, Korea South, Switzerland North, UK West, UK South, UAE North, France Central, Germany West Central, Norway East, South Africa North.
+- Public: Central US EUAP, East US 2 EUAP, West Central US, North Central US, South Central US, East US, East US 2, West US 2, West US, Central US, West US 3, East Asia, South East Asia, Australia East, Australia Southeast, Brazil Southeast, Brazil South, Canada Central, Canada East, North Europe, West Europe, Central India, South India, Japan East, Japan West, Korea Central, Korea South, Sweden Central, Sweden South, Switzerland North, Switzerland West, UK West, UK South, UAE North, France Central, Germany West Central, Norway East, South Africa North.
- Government: USGov Arizona, USGov Virginia, USGov Texas.
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
Grant Azure CDN permission to access the certificates (secrets) in your Azure Ke
- The available certificate/secret versions. > [!NOTE]
- > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the certificate/secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be deployed.
+ > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the certificate/secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be deployed.
5. Select **On** to enable HTTPS.
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
This table lists some of the key configuration parameters for pronunciation asse
| `ReferenceText` | The text that the pronunciation will be evaluated against. | | `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. | | `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels above or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, and it depends on your input reference text.|
-| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. |
+| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. |
+| `ScenarioId` | A GUID indicating a customized point system. |
You must create a `PronunciationAssessmentConfig` object with the reference text, grading system, and granularity. Enabling miscue and other configuration settings are optional.
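For illustration, the same parameters can also be sent to the speech-to-text REST API for short audio as a base64-encoded JSON header. This is a sketch rather than a full example: the region, key, and audio file are placeholders, and `base64 -w0` is the GNU form of the command.

```bash
# Encode the assessment parameters from the table above as base64 JSON
PRON_ASSESSMENT=$(echo -n '{"ReferenceText":"Hello.","GradingSystem":"HundredMark","Granularity":"Phoneme","EnableMiscue":true}' | base64 -w0)

# Send a short WAV file for recognition plus pronunciation assessment
curl -X POST "https://<region>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
  -H "Pronunciation-Assessment: $PRON_ASSESSMENT" \
  -H "Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000" \
  --data-binary @hello.wav
```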
Pronunciation assessment results for the spoken word "hello" are shown as a JSON
"Offset": 7500000, "Duration": 13800000, "DisplayText": "Hello.",
- "SNR": 34.879055,
"NBest": [ { "Confidence": 0.975003,
cognitive-services Cognitive Services Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-environment-variables.md
Previously updated : 08/15/2022 Last updated : 09/09/2022
Use the following command to create and assign a persisted environment variable,
```CMD :: Assigns the env var to the value
-setx ENVIRONMENT_VARIABLE_KEY="value"
+setx ENVIRONMENT_VARIABLE_KEY "value"
``` In a new instance of the Command Prompt, use the following command to read the environment variable.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 08/15/2022 Last updated : 09/09/2022 ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
[!INCLUDE [REST API quickstart](includes/quickstarts/rest-api.md)] ::: zone-end-
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Entity-linking&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
-
-## Next steps
-
-* [Entity Linking overview](overview.md)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
zone_pivot_groups: programming-languages-text-analytics
[!INCLUDE [REST API quickstart](includes/quickstarts/rest-api.md)] ::: zone-end-
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-> [!div class="nextstepaction"]
-> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=PYTHON&Pillar=Language&Product=Sentiment-analysis&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a>
-
-## Next steps
-
-* [Sentiment Analysis overview](overview.md)
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
Most of the events sent by Event Grid are platform agnostic meaning they're emit
### Call Automation webhook events

The Call Automation events are sent to the webhook callback URI specified when you answer or place a new outbound call.

| Event | Description |
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md
- Previously updated : 07/27/2022+ Last updated : 09/01/2022
This quickstart will help you get started with Azure Communication Services Room
::: zone-end ::: zone pivot="programming-language-python" ::: zone-end ::: zone pivot="programming-language-javascript" ::: zone-end ## Object model
communication-services Get Resource Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-resource-id.md
+ Last updated : 09/07/2022 ++
+ Title: How to find immutable Azure resource ID
++
+description: how to find the immutable Azure Resource ID.
+++
+# How to get your Azure Resource ID
+
+To get your resource ID allowlisted, send your immutable Azure Resource ID to the Call Recording Team. For reference, see the following image.
+
+![Screenshot of Azure Resource ID.](media/call-recording/immutable-resource-id.png)
+
+## See Also
+
+For more information, see the following articles:
+
+- Check out our [web calling sample](../../samples/web-calling-sample.md)
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Server Call Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-server-call-id.md
+ Last updated : 09/07/2022 ++
+ Title: Get server Call ID
++
+description: This section describes how to get the serverCallid from a JavaScript server app
+++
+# Get serverCallId as a requirement for call recording server APIs from JavaScript application
+
+In a peer-to-peer calling scenario using the [Calling client SDK](get-started-with-video-calling.md), you'll need to get the `serverCallId` in order to use Call Recording in Azure Communication Services.
+The following example shows you how to get the `serverCallId` from a JavaScript server application.
+
+Call recording is an extended feature of the core Call API. You first need to import calling Features from the Calling SDK.
+
+```JavaScript
+import { Features } from "@azure/communication-calling";
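+// Note: later snippets in this article also reference an "SDK" namespace
+// (for example, SDK.RecordingState). A sketch of one way to bring it into
+// scope, assuming the types are exported by the package:
+// import * as SDK from "@azure/communication-calling";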
+```
+Then you can get the recording feature API object from the call instance:
+
+```JavaScript
+const callRecordingApi = call.feature(Features.Recording);
+```
+Subscribe to recording changes:
+
+```JavaScript
+const recordingStateChanged = () => {
+ let recordings = callRecordingApi.recordings;
+
+ let state = SDK.RecordingState.None;
+ if (recordings.length > 0) {
+ state = recordings.some(r => r.state == SDK.RecordingState.Started)
+ ? SDK.RecordingState.Started
+ : SDK.RecordingState.Paused;
+ }
+
+ console.log(`RecordingState: ${state}`);
+}
+
+const recordingsChangedHandler = (args: { added: SDK.RecordingInfo[], removed: SDK.RecordingInfo[]}) => {
+ args.added?.forEach(a => {
+ a.on('recordingStateChanged', recordingStateChanged);
+ });
+
+ args.removed?.forEach(r => {
+ r.off('recordingStateChanged', recordingStateChanged);
+ });
+
+ recordingStateChanged();
+};
+
+callRecordingApi.on('recordingsUpdated', recordingsChangedHandler);
+```
+Get `serverCallId`, which can be used to start/stop/pause/resume recording sessions.
+Once the call is connected, use the `getServerCallId` method to get the server call ID.
+
+```JavaScript
+callAgent.on('callsUpdated', (e: { added: Call[]; removed: Call[] }): void => {
+ e.added.forEach((addedCall) => {
+ addedCall.on('stateChanged', (): void => {
+ if (addedCall.state === 'Connected') {
+ addedCall.info.getServerCallId().then(result => {
+ dispatch(setServerCallId(result));
+ }).catch(err => {
+ console.log(err);
+ });
+ }
+ });
+ });
+});
+```
+
+## See also
+
+For more information, see the following articles:
+
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
This article is about role-based access control for data plane operations in Azu
If you are using management plane operations, see [role-based access control](../role-based-access-control.md) applied to your management plane operations article.
-The API for MongoDB exposes a built-in role-based access control (RBAC) system that lets you authorize your data requests with a fine-grained, role-based permission model. Users are roles reside within a database and are managed using the Azure CLI, Azure PowerShell, or ARM for this preview feature.
+The API for MongoDB exposes a built-in role-based access control (RBAC) system that lets you authorize your data requests with a fine-grained, role-based permission model. Users and roles reside within a database and are managed using the Azure CLI, Azure PowerShell, or ARM for this preview feature.
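For example, a custom role might be created with the Azure CLI as follows. This is a sketch: the account, resource group, database, role name, and privileges are illustrative, and the JSON body follows the role-definition shape used by the preview.

```azurecli
# Create a custom role in the "mydb" database (names and privileges are illustrative)
az cosmosdb mongodb role definition create \
  --account-name mycosmosaccount \
  --resource-group myResourceGroup \
  --body '{
    "Id": "mydb.readWriteRole",
    "RoleName": "readWriteRole",
    "Type": "CustomRole",
    "DatabaseName": "mydb",
    "Privileges": [{
      "Resource": { "Db": "mydb", "Collection": "" },
      "Actions": ["insert", "find", "update"]
    }],
    "Roles": []
  }'
```

A user can then be created and granted the role with the matching `az cosmosdb mongodb user definition create` command.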
## Concepts
cost-management-billing Cost Analysis Built In Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-built-in-views.md
description: This article helps you understand when to use which view, how each one provides unique insights about your costs and recommended next steps to investigate further. Previously updated : 02/17/2022 Last updated : 09/09/2022
Use the **Invoice details** view to:
## Next steps -- Now that you're familiar with using built-in views, read about [Saving and sharing customized views](save-share-views.md).
+- Now that you're familiar with using built-in views, read about [Saving and sharing customized views](save-share-views.md).
+- Learn how to [Customize views in cost analysis](customize-cost-analysis-views.md).
cost-management-billing Customize Cost Analysis Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/customize-cost-analysis-views.md
+
+ Title: Customize views in cost analysis
+
+description: This article helps you customize views in cost analysis to understand how you're being charged and to investigate unexpected changes.
++ Last updated : 09/09/2022++++++
+# Customize views in cost analysis
+
+This article helps you customize views in cost analysis to understand how you're being charged and to investigate unexpected changes.
+
+## Prerequisites
+
+To customize views, you must have at least the Cost Management Reader (or Contributor) role.
+
+You should be familiar with the information at [Quickstart: Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md).
+
+## Get started with customizing views
+
+Customizing views in cost analysis includes anything from tweaking display settings to changing what data gets included or how it's summarized. You customize views when trying to understand what you're spending and where the costs originated. For example, you can drill into data, apply specific filters or groupings, or change display settings, like whether to view a chart or table. The following sections cover each of these customization options.
+
+## Group costs
+
+Use the **Group by** option to group common properties so that you get a breakdown of costs and can identify top contributors. It should be your first change when drilling into data because it helps you identify the largest changes. To group by resource tags, for example, select the tag key you want to group by. Costs are broken down by each tag value, with an extra segment for resources that don't have that tag applied.
+
+Most Azure resources support tagging. However, some tags aren't available in Cost Management and billing. Additionally, resource group tags aren't supported. Support for tags applies to usage reported _after_ the tag was applied to the resource. Tags aren't applied retroactively for cost rollups.
+
+Here's a view of Azure service costs for the current month, grouped by Service name.
++
+The following image shows resource group names. You can group by tag to view total costs per tag or group by **Resource group name**.
++
+When you're grouping costs by a specific attribute, the top 10 cost contributors are shown from highest to lowest. If there are more than 10, the top nine cost contributors are shown with an **Others** group that represents all remaining groups combined. When you're grouping by tags, an **Untagged** group appears for costs that don't have the tag key applied. **Untagged** is always last, even if untagged costs are higher than tagged costs. Untagged costs will be part of **Others** if 10 or more tag values exist. To view what's grouped into **Others**, either select that segment to apply a filter or switch to the table view and change granularity to **None** to see all values ranked from highest to lowest cost.
+
+Classic virtual machines, networking, and storage resources don't share detailed billing data. They're merged as **Classic services** when grouping costs.
+
+Cost analysis doesn't support grouping by multiple attributes. To work around it, you can apply a filter for a desired attribute and group by the more detailed attribute. For instance, filter down to a specific resource group, then group by resource.
+
+Pivot charts under the main chart show different groupings, which give you a broader picture of your overall costs for the selected time period and filters. Select a property or tag to view aggregated costs by any dimension.
++
+## Select a date range
+
+There are many cases where you need deeper analysis. Customization starts at the top of the page, with the date selection.
+
+Cost analysis shows data for the current month by default. Use the date selector to switch to common date ranges quickly. Examples include the last seven days, the last month, the current year, or a custom date range. Pay-as-you-go subscriptions also include date ranges based on your billing period, which isn't bound to the calendar month, like the current billing period or last invoice.
++
+## Filter charges
+
+Add filters to narrow down or drill into your specific charges. It's especially helpful when trying to understand an unexpected change. Start by selecting the **Add filter** pill, then select the desired attribute, and lastly select the options you want to filter down to. Your view will automatically update once you've applied the filter.
+
+You can add multiple filters. As you add filters, you'll notice that the available values for each filter reflect the filters you've already applied. For instance, if you apply a resource group filter, then add a resource filter, the resource filter options will only show resources in the selected resource group.
+
+When you view charts, you can also select a chart segment to apply a filter. After selecting a chart segment, you should consider changing the group by attribute to see other details about the attribute you selected.
+
+## Switch between actual and amortized cost
+
+By default, cost analysis shows all usage and purchase costs as they're accrued and will show on your invoice, also known as **Actual cost**. Viewing actual cost is ideal for reconciling your invoice. However, purchase spikes in cost can be alarming when you're keeping an eye out for spending anomalies and other changes in cost. To flatten out spikes caused by reservation purchase costs, switch to **Amortized cost**.
++
+Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. Most reservation terms are one or three years. Let's look at a one-year reservation example. Instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated to, and associated with, the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of UnusedReservation. Unused reservation costs can be seen only when you view amortized cost.
+
+If you buy a one-year reservation on May 26 with an upfront payment, the amortized cost is divided by 365 (assuming it's not a leap year) and spread from May 26 through May 25 of the next year. If you pay monthly, the monthly fee is divided by the number of days in that month. The fee is spread evenly across May 26 through June 25, with the next month's fee spread across June 26 through July 25.
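+The arithmetic is simple division across the term; here's a minimal Python sketch of the upfront case (illustrative only, not the service's published logic):
+
+```python
+from datetime import date, timedelta
+
+def amortize_upfront(purchase_cost: float, start: date, term_days: int = 365) -> dict:
+    """Spread a one-time reservation purchase evenly across each day of the term."""
+    daily = purchase_cost / term_days
+    return {start + timedelta(days=i): round(daily, 2) for i in range(term_days)}
+
+# A $365 one-year reservation bought on January 1 shows as $1.00 per day.
+schedule = amortize_upfront(365.0, date(2022, 1, 1))
+print(schedule[date(2022, 1, 1)], schedule[date(2022, 12, 31)])  # 1.0 1.0
+```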
+
+Because of the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. In general, the total cost of months with a reservation purchase will decrease when you view amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to Azure Marketplace purchases at this time.
+
+## Select a currency
+
+Costs are shown in your billing currency by default. If you have charges in multiple currencies, costs will automatically be converted to USD. If you have any non-USD charges, you can switch between currencies in the total KPI menu. You may see options like **GBP only** to view only the charges in that one currency or **All costs in USD** to view the normalized costs in USD. You can't view costs normalized to other currencies today.
++
+## Select a budget
+
+When you view a chart, it can be helpful to visualize your charges against a budget. It's especially helpful when showing accumulated daily costs with a forecast trending towards your budget. If your costs go over your budget, you'll see a red critical icon next to your budget. If your forecast goes over your budget, you'll see a yellow warning icon.
+
+When you view daily or monthly costs, your budget may be estimated for the period. For instance, a monthly budget of $31 will be shown as `$1/day (est)`. Note that your budget won't be shown in red when it exceeds this estimated amount on a specific day or month.
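+The estimate is just the monthly amount divided by the days in that month; a tiny illustrative Python check:
+
+```python
+import calendar
+
+def estimated_daily_budget(monthly_budget: float, year: int, month: int) -> float:
+    """A $31 monthly budget in a 31-day month is shown as $1/day (est)."""
+    days_in_month = calendar.monthrange(year, month)[1]
+    return monthly_budget / days_in_month
+
+print(estimated_daily_budget(31.0, 2022, 1))  # 1.0
+```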
+
+Budgets that have filters aren't currently supported in cost analysis. You won't see them in the list. Budgets on lower-level scopes are also not shown in cost analysis today. To view a budget for a specific scope, change scope using the scope picker.
+
+## Change granularity
+
+Use **Granularity** to indicate how you want to view cost over time. The lowest level you can view is Daily costs. You can view daily costs for up to 3 months or 92 consecutive days. If you select more than 92 days, cost analysis switches to **Monthly** granularity. It updates your date range to include the start and end of the selected months to provide the most accurate picture of your monthly costs. You can view up to 12 months of monthly costs.
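+As a rough Python sketch of that fallback rule (the 92-day cutoff comes from the paragraph above; the function itself is illustrative):
+
+```python
+from datetime import date
+
+MAX_DAILY_DAYS = 92  # daily granularity is limited to 92 consecutive days
+
+def effective_granularity(start: date, end: date) -> str:
+    """Return the granularity that cost analysis falls back to for a range."""
+    selected_days = (end - start).days + 1
+    return "Daily" if selected_days <= MAX_DAILY_DAYS else "Monthly"
+
+print(effective_granularity(date(2022, 1, 1), date(2022, 3, 31)))  # Daily (90 days)
+print(effective_granularity(date(2022, 1, 1), date(2022, 6, 30)))  # Monthly
+```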
+
+If you'd like to view a running total of charges on either a daily or monthly basis, select **Accumulated**. Accumulated is especially helpful when you view your forecast as it helps you see the trend over time.
+
+If you'd like to view the total for the entire period (no granularity), select **None**. Selecting no granularity is helpful when grouping costs by a specific attribute in either a chart or table.
+
+## Visualize costs in a chart
+
+Cost analysis supports the following chart types:
+
+- Area charts are ideal for showing a running total with forecast trending towards a budget.
+- Line charts are ideal for reviewing relative changes. Line charts aren't stacked, which helps spot changes easily.
+- Column (stacked) charts are ideal for reviewing your daily or monthly run rate. They show a breakdown by some attribute so you can easily spot which group has the most charges. Groups are sorted from largest to smallest from left-to-right, bottom-to-top.
+- Column (grouped) charts are helpful when you view grouped costs with no granularity.
+
+## View costs in table format
+
+You can view the full dataset for any view. Any selections or filters that you apply affect the data presented. To see the full dataset, select the **chart type** list and then select **Table** view.
++
+## Next steps
+
+- Learn about [Saving and sharing customized views](save-share-views.md).
cost-management-billing Get Started Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/get-started-partners.md
Partners also filter costs in a specific billing currency across customers in th
![Example showing Actual cost selection for currencies](./media/get-started-partners/actual-cost-selector.png)
-Use the [amortized cost view](quick-acm-cost-analysis.md#switch-between-actual-and-amortized-cost) in billing scopes to view reserved instance amortized costs across a reservation term.
+Use the [amortized cost view](customize-cost-analysis-views.md#switch-between-actual-and-amortized-cost) in billing scopes to view reserved instance amortized costs across a reservation term.
### Billing profile scope
To view or change policies:
1. In the Azure portal, navigate to **Cost Management** (not Cost Management + Billing). 1. In the left menu under **Settings**, select **Configuration**.
-1. The billing profile configuration is shown. Polices are shown as Enabled or Disabled. If you want to change a policy, select **Edit** under a policy.
+1. The billing profile configuration is shown. Policies are shown as Enabled or Disabled. If you want to change a policy, select **Edit** under a policy.
:::image type="content" source="./media/get-started-partners/configuration-policy-settings.png" alt-text="Screenshot showing the billing profile configuration page where you can view and edit policy settings." lightbox="./media/get-started-partners/configuration-policy-settings.png" ::: 1. If needed, change the policy settings, and then select **Save**.
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
Title: Quickstart - Explore Azure costs with cost analysis
description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. Previously updated : 03/22/2022 Last updated : 09/09/2022
Before you can properly control and optimize your Azure costs, you need to understand where costs originated within your organization. It's also useful to know how much money your services cost, and in support of which environments and systems. Visibility into the full spectrum of costs is critical to accurately understand organizational spending patterns. You can use spending patterns to enforce cost control mechanisms, like budgets.
-In this quickstart, you use cost analysis to explore and analyze your organizational costs. You can view aggregated costs by organization to understand where costs occur over time and identify spending trends. You can view accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget. You can use budgets to get notified as cost exceeds specific thresholds.
+In this quickstart, you use cost analysis to explore and analyze your organizational costs. You can view aggregated costs or break them down to understand where costs occur over time and identify spending trends. You can view accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget. You can use budgets to get notified as cost exceeds specific thresholds.
In this quickstart, you learn how to:
- Get started in cost analysis
- Select a cost view
-- Select a date range
- View costs
-- Group costs
-- Switch between actual and amortized costs
-- View costs in table format

## Prerequisites
The scope you select is used throughout Cost Management to provide data consolid
The initial cost analysis view includes the following areas:
-**Currently selected view**: Represents the predefined cost analysis view configuration. Each view includes date range, granularity, group by, and filter settings. The default view shows accumulated costs for the current billing period, but you can change to other built-in views.
+**Currently selected view**: Represents the predefined cost analysis view configuration. Each view includes date range, granularity, group by, and filter settings. The default view shows a running total of your costs for the current billing period with the Accumulated costs view, but you can select other built-in views from the menu. The view menu is between the scope pill and the date selector. For details about saved views, see [Save and share customized views](save-share-views.md).
-**Cost**: Shows the total usage and purchase costs for the current month, as they're accrued and will show on your bill.
+**Filters**: Allow you to limit the results to a subset of your total charges. Filters apply to all summarized totals and charts.
-**Forecast**: Shows the total forecasted costs for time period you choose.
+**Cost**: Shows the total usage and purchase costs for the selected period, as they're accrued and will show on your bill. Costs are shown in your billing currency by default. If you have charges in multiple currencies, costs will automatically be converted to USD.
-**Budget (if selected)**: Shows the planned spending limit for the selected scope, if available.
+**Forecast**: Shows the total forecasted costs for the selected period.
-**Accumulated granularity**: Shows the total aggregate daily costs, from the beginning of the billing period. After you create a budget for your billing account or subscription, you can quickly see your spending trend against the budget. Hover over a date to view the accumulated cost for that day.
+**Budget (if selected)**: Shows the current budget amount for the selected scope, if already defined.
+
+**Granularity**: Indicates how to show data over time. Select **Daily** or **Monthly** to view costs broken down by day or month. Select **Accumulated** to view the running total for the period. Select **None** to view the total cost for the period, with no breakdown.
**Pivot (donut) charts**: Provide dynamic pivots, breaking down the total cost by a common set of standard properties. They show the largest to smallest costs for the current month.
Based on your recent usage, cost forecasts show a projection of your estimated c
## Select a cost view
-Cost analysis has four built-in views, optimized for the most common goals:
+Cost analysis has many built-in views, optimized for the most common goals:
View | Answer questions like |
Daily cost | Have there been any increases in the costs per day for the last 30
Cost by service | How has my monthly usage varied over the past three invoices?
Cost by resource | Which resources cost the most so far this month?
Invoice details | What charges did I have on my last invoice?
+Resources (preview) | Which resources cost the most so far this month? Are there any subscription cost anomalies?
+Resource groups (preview) | Which resource groups cost the most so far this month?
+Subscriptions (preview) | Which subscriptions cost the most so far this month?
+Services (preview) | Which services cost the most so far this month?
+Reservations (preview) | How much are reservations being used? Which resources are utilizing reservations?
-The cost by resource view is only available for subscription and resource group scopes.
+The Cost by resource and Resources views are only available for subscriptions and resource groups.
![View selector showing an example selection for this month](./media/quick-acm-cost-analysis/view-selector.png)
-## Select a date range
-
-There are many cases where you need deeper analysis. Customization starts at the top of the page, with the date selection.
-
-Cost analysis shows data for the current month by default. Use the date selector to switch to common date ranges quickly. Examples include the last seven days, the last month, the current year, or a custom date range. Pay-as-you-go subscriptions also include date ranges based on your billing period, which isn't bound to the calendar month, like the current billing period or last invoice.
-
-![Date selector showing an example selection for this month](./media/quick-acm-cost-analysis/date-selector.png)
+For more information about views, see:
+- [Use built-in views in Cost analysis](cost-analysis-built-in-views.md)
+- [Save and share customized views](save-share-views.md)
+- [Customize views in cost analysis](customize-cost-analysis-views.md)
## View costs
There's also the **daily** view that shows costs for each day. The daily view do
Here's a daily view of recent spending with spending forecast turned on. ![Daily view showing example daily costs for the current month](./media/quick-acm-cost-analysis/daily-view.png)
-When turn off the spending forecast, you don't see projected spending for future dates. Also, when you look at costs for past time periods, cost forecast doesn't show costs.
-
-Generally, you can expect to see data or notifications for consumed resources within 8 to 12 hours.
-
-## Group costs
-
-**Group by** common properties to break down costs and identify top contributors. To group by resource tags, for example, select the tag key you want to group by. Costs are broken down by each tag value, with an extra segment for resources that don't have that tag applied.
-
-Most Azure resources support tagging. However, some tags aren't available in Cost Management and billing. Additionally, resource group tags aren't supported. Support for tags applies to usage reported *after* the tag was applied to the resource. Tags aren't applied retroactively for cost rollups.
-
-Here's a view of Azure service costs for the current month, grouped by Service name.
-
-![Grouped daily accumulated view showing example Azure service costs for last month](./media/quick-acm-cost-analysis/grouped-daily-accum-view.png)
-
-The following image shows resource group names. You can group by tag to view total costs per tag or use the **Cost by resource** view to see all tags for a particular resource.
-
-![Full data for current view showing resource group names](./media/quick-acm-cost-analysis/full-data-set.png)
-
-When you're grouping costs by a specific attribute, the top 10-cost contributors are shown from highest to lowest. If there are more than 10, the top nine cost contributors are shown with an **Others** group that represents all remaining groups combined. When you're grouping by tags, an **Untagged** group appears for costs that don't have the tag key applied. **Untagged** is always last, even if untagged costs are higher than tagged costs. Untagged costs will be part of **Others**, if 10 or more tag values exist. Switch to the table view and change granularity to **None** to see all values ranked from highest to lowest cost.
-
-Classic virtual machines, networking, and storage resources don't share detailed billing data. They're merged as **Classic services** when grouping costs.
-
-Pivot charts under the main chart show different groupings, which give you a broader picture of your overall costs for the selected time period and filters. Select a property or tag to view aggregated costs by any dimension.
-
-![Example showing pivot charts](./media/quick-acm-cost-analysis/pivot-charts.png)
-
-## Switch between actual and amortized cost
-
-By default, cost analysis shows all usage and purchase costs as they're accrued and will show on your invoice, also known as **Actual cost**. Viewing actual cost is ideal for reconciling your invoice. However, purchase spikes in cost can be alarming when you're keeping an eye out for spending anomalies and other changes in cost. To flatten out spikes caused by reservation purchase costs, switch to **Amortized cost**.
-
-![Change between actual and amortized cost to see reservation purchases spread across the term and allocated to the resources that used the reservation](./media/quick-acm-cost-analysis/metric-picker.png)
-
-Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. Most reservation terms are one or three years. Let's look at a one-year reservation example. Instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated by using the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of `UnusedReservation`. Unused reservation costs can be seen only when viewing amortized cost.
-
-If you buy a one-year reservation on May 26 with an upfront payment, the amortized cost is divided by 365 (assuming it's not a leap year) and spread from May 26 through May 25 of the next year. If you pay monthly, the monthly fee is divided by the number of days in that month and spread evenly across May 26 through June 25, with the next month's fee spread across June 26 through July 25.
-
-Because of the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. In general, the total cost of months with a reservation purchase will decrease when viewing amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to Azure Marketplace purchases at this time.
-
-## View costs in table format
-
-You can view the full dataset for any view. Whichever selections or filters that you apply affect the data presented. To see the full dataset, select the **chart type** list and then select **Table** view.
+When the forecast is disabled, you won't see projected spending for future dates. Also, when you look at costs for past time periods, cost forecast doesn't show costs.
-![Data for current view in a table view](./media/quick-acm-cost-analysis/chart-type-table-view.png)
+Generally, you can expect to see data or notifications for consumed resources within 24 to 48 hours.
## Next steps
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delimited-text.md
Previously updated : 08/05/2022 Last updated : 09/08/2022
The below table lists the properties supported by a delimited text source. You c
| After completion | Delete or move the files after processing. File path starts from the container root | no | Delete: `true` or `false` <br> Move: `['<from>', '<to>']` | purgeFiles <br> moveFiles |
| Filter by last modified | Choose to filter files based upon when they were last altered | no | Timestamp | modifiedAfter <br> modifiedBefore |
| Allow no files found | If true, an error is not thrown if no files are found | no | `true` or `false` | ignoreNoFilesFound |
+| Maximum columns | The default value is 20480. Customize this value when the column number is over 20480. | no | Integer | maxColumns |
> [!NOTE]
> Support for a list of files in data flow sources is limited to 1024 entries in your file. To include more files, use wildcards in your file list.
The associated data flow script is:
```
source(allowSchemaDrift: true,
- validateSchema: false,
- multiLineRow: true,
- wildcardPaths:['*.csv']) ~> CSVSource
+ validateSchema: false,
+ ignoreNoFilesFound: false,
+ multiLineRow: true,
+ wildcardPaths:['*.csv']) ~> CSVSource
```

> [!NOTE]
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
A private endpoint uses a private IP address in the managed virtual network to e
Only a managed private endpoint in an approved state can send traffic to a specific private link resource.
+> [!NOTE]
+> Custom DNS is not supported in a managed virtual network.
## Interactive authoring
dms Faq Mysql Single To Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/faq-mysql-single-to-flex.md
+
+ Title: FAQ about using Azure Database Migration Service for Azure Database for MySQL Single Server to Flexible Server migrations
+
+description: Frequently asked questions about using Azure Database Migration Service to perform database migrations from Azure Database for MySQL Single Server to Flexible Server.
+++++++++ Last updated : 09/08/2022++
+# Frequently Asked Questions (FAQs)
+
+- **When using Azure Database Migration Service, what's the difference between an offline and an online migration?**
+Azure Database Migration Service supports both offline and online migrations. With an offline migration, application downtime starts when the migration starts. With an online migration, downtime is limited to the time required to cut over at the end of migration. We suggest that you test an offline migration to determine whether the downtime is acceptable; if not, then perform an online migration.
+Online and Offline migrations are compared in the following table:
+
+ | Area | Online migration | Offline migration |
+ | - |:-:|:-:|
+ | **Database availability for reads during migration** | Available | Available |
+ | **Database availability for writes during migration** | Available | Generally not recommended. Any 'writes' initiated after the migration starts aren't captured or migrated |
+ | **Application Suitability** | Applications that need maximum uptime | Applications that can afford a planned downtime window |
+ | **Environment Suitability** | Production environment | Usually development and testing environments, and some production environments that can afford downtime |
+ | **Suitability for Write-heavy workloads** | Suitable but expected to reduce the workload during migration | Not Applicable. Writes at source after migration begins are not replicated to target |
+ | **Manual Cutover** | Required | Not required |
+ | **Downtime required** | Less | More |
+ | **Migration time** | Depends on Database size and the write activity until cutover | Depends on Database size |
+
+- **I'm setting up a migration project in DMS and I'm having difficulty connecting to my source database. What should I do?**
+If you have trouble connecting to your source database system while working on migration, create a virtual machine in the same subnet of the virtual network with which you set up your DMS instance. In the virtual machine, you should be able to run a connect test. If the connection test succeeds, you shouldn't have an issue with connecting to your source database. If the connection test doesn't succeed, contact your network administrator.
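+A simple way to run that connect test from the virtual machine is a plain TCP check. Here's an illustrative Python sketch; the server name is a placeholder and 3306 is the default MySQL port:
+
+```python
+import socket
+
+def can_reach(host: str, port: int = 3306, timeout: float = 5.0) -> bool:
+    """Return True if a TCP connection to host:port succeeds within timeout."""
+    try:
+        with socket.create_connection((host, port), timeout=timeout):
+            return True
+    except OSError:
+        return False
+
+# Placeholder server name; replace with your actual source server.
+print(can_reach("myserver.mysql.database.azure.com"))
+```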
+
+- **Why is my Azure Database Migration Service unavailable or stopped?**
+If the user explicitly stops Azure Database Migration Service (DMS) or if the service is inactive for a period of 24 hours, the service will be in a stopped or auto-paused state. In either case, the service will be unavailable and in a stopped status. To resume active migrations, restart the service.
+
+- **Are there any recommendations for optimizing the performance of Azure Database Migration Service?**
+There are a couple of things you can try to speed up your database migration using DMS:
+ - Use the multi CPU General Purpose Pricing Tier when you create your service instance to allow the service to take advantage of multiple vCPUs for parallelization and faster data transfer.
+ - Temporarily scale up your Azure MySQL Database target instance to the Premium tier SKU during the data migration operation to minimize Azure MySQL Database throttling that may impact data transfer activities when using lower-level SKUs.
+
+- **Which data, schema, and metadata components are migrated as part of the migration?**
+Azure Database Migration Service migrates schema, data, and metadata from the source to the destination. All of the following data, schema, and metadata components are migrated as part of the database migration:
+ - Data Migration - All tables from all databases/schemas.
+ - Schema Migration - Naming, Primary key, Data type, Ordinal position, Default value, Nullability, Auto-increment attributes, Secondary indexes
+ - Metadata Migration, Stored Procedures, Functions, Triggers, Views, Foreign key constraints
+
+- **Is there an option to roll back a Single Server to Flexible Server migration?**
+You can perform any number of test migrations, and after gaining confidence through testing, perform the final migration. A test migration doesn't affect the source single server, which remains operational and continues replicating until you perform the actual migration. If there are any errors during the test migration, you can choose to postpone the final migration and keep your source server running. You can then reattempt the final migration after you resolve the errors. Note that after you have performed a final migration to Flexible Server and the source single server has been shut down, you cannot perform a rollback from Flexible Server to Single Server.
+
+- **The size of my database is greater than 1 TB, so how should I proceed with migration?**
+To support migrations of databases that are 1 TB+, raise a support ticket with Azure Database Migration Service to scale up the migration agent to support your 1 TB+ database migrations.
+
+- **Is cross-region migration supported?**
+Azure Database Migration Service supports cross-region migrations, so you can migrate your single server to a flexible server that is deployed in a different region using DMS.
+
+- **Is cross-subscription migration supported?**
+Azure Database Migration Service supports cross-subscription migrations, so you can migrate your single server to a flexible server that is deployed in a different subscription using DMS.
+
+- **Is cross-resource group migration supported?**
+Azure Database Migration Service supports cross-resource group migrations, so you can migrate your single server to a flexible server that is deployed in a different resource group using DMS.
+
+- **Is there cross-version support?**
+Yes, migration from lower version MySQL servers (v5.6 and above) to higher versions is supported.
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
You can migrate an instance of Azure Database for MySQL – Single Server to Azu
> DMS supports migrating from lower version MySQL servers (v5.6 and above) to higher versions. In addition, DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you can select a different region, resource group, and subscription for the target server than that specified for your source server. > [!IMPORTANT]
-For online migrations, you can use the Enable Transactional Consistency feature supported by DMS together with [Data-in replication](./../mysql/single-server/concepts-data-in-replication.md) or [replicate changes](https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/azure-dms-mysql-replicate-changes-now-in-preview/ba-p/3601564). Additionally, you can use the online migration scenario to migrate by following the tutorial [here](./tutorial-mysql-azure-single-to-flex-offline-portal.md).
+> For online migrations, you can use the Enable Transactional Consistency feature supported by DMS together with [Data-in replication](./../mysql/single-server/concepts-data-in-replication.md) or [replicate changes](https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/azure-dms-mysql-replicate-changes-now-in-preview/ba-p/3601564). Additionally, you can use the online migration scenario to migrate by following the tutorial [here](./tutorial-mysql-azure-single-to-flex-offline-portal.md).
In this tutorial, you will learn how to:
With these best practices in mind, create your target flexible server and then c
* innodb_buffer_pool_size – can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size.
* innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
* innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
- * Configure the firewall rules and replicas on the target server to match those on the source server.
+ * Configure the replicas on the target server to match those on the source server.
* Replicate the following server management features from the source single server to the target flexible server:
  * Role assignments, Roles, Deny Assignments, classic administrators, Access Control (IAM)
  * Locks (read-only and delete)
Selecting this check box prevents Write/Delete operations on the source server d
DMS validates your inputs, and if the validation passes, you will be able to start the migration.
-8. After configuring for schema migration, select **Next : Summary>>**.
+8. After configuring for schema migration, select **Review and start migration**.
> [!NOTE] > You only need to navigate to the Configure migration settings tab if you are trying to troubleshoot failing migrations.
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
# Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ You can migrate an instance of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we'll perform an online migration of a sample database from an Azure Database for MySQL single server to a MySQL flexible server (both running version 5.7) using a DMS migration activity.

> [!NOTE]
With these best practices in mind, create your target flexible server and then c
* innodb_buffer_pool_size – can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size.
* innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
* innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
- * Configure the firewall rules and replicas on the target server to match those on the source server.
+ * Configure the replicas on the target server to match those on the source server.
* Replicate the following server management features from the source single server to the target flexible server:
  * Role assignments, Roles, Deny Assignments, classic administrators, Access Control (IAM)
  * Locks (read-only and delete)
To create a migration project, perform the following steps.
To configure your DMS migration project, perform the following steps. 1. On the **Select source** screen, specify the connection details for the source MySQL instance.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/13-select-source-online.png" alt-text="Screenshot of an Add source details screen.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/select-source-online.png" alt-text="Screenshot of an Add source details screen.":::
2. Select **Next : Select target>>**, and then, on the **Select target** screen, specify the connection details for the target flexible server.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/15-select-target.png" alt-text="Screenshot of a Select target.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/select-target-online.png" alt-text="Screenshot of a Select target.":::
3. Select **Next : Select databases>>**, and then, on the Select databases tab, under [Preview] Select server objects, select the server objects that you want to migrate. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/16-select-db.png" alt-text="Screenshot of a Select database.":::
To configure your DMS migration project, perform the following steps.
Before the tab populates, DMS fetches the tables from the selected database(s) on the source and target and then determines whether the table exists and contains data. 6. Select the tables that you want to migrate.
- You can only select the source and target tables whose names match that on the source and target server.
- If you select a table in the source database that doesnΓÇÖt exist on the target database, you will see a warning message ΓÇÿNot available at TargetΓÇÖ and you wonΓÇÖt be able to select the table for migration.
+ If the selected source table doesn't exist on the target server, the online migration process will ensure that the table schema and data are migrated to the target server.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/17-select-tables.png" alt-text="Screenshot of a Select Tables."::: DMS validates your inputs, and if the validation passes, you will be able to start the migration.
-7. After configuring for schema migration, select **Next : Summary>>**.
+7. After configuring for schema migration, select **Review and start migration**.
> [!NOTE] > You only need to navigate to the Configure migration settings tab if you are trying to troubleshoot failing migrations.
To configure your DMS migration project, perform the following steps.
9. Select **Start migration**. The migration activity window appears, and the Status of the activity is Initializing. The Status changes to Running when the table migrations start.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/19-running-project-online.png" alt-text="Screenshot of a Running status.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/running-online-migration.png" alt-text="Screenshot of a Running status.":::
### Monitor the migration
-1. On the migration activity screen navigate to **Initial Load**, select **Refresh** to update the display and view the progress and the number of tables completed.
+1. Once the **Initial Load** activity is completed, navigate to the **Initial Load** tab to view the completion status and the number of tables completed.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/completed-initial-load-online.png" alt-text="Screenshot of a completed initial load migration.":::
-2. On the migration activity screen navigate to **Replicate Data Changes** tab, select **Refresh** to update the display and view the seconds behind source.
+2. Once the **Initial Load** activity is completed, you're navigated to the **Replicate Data Changes** tab automatically. You can monitor the migration progress as the screen is auto-refreshed every 30 seconds. Select **Refresh** to update the display and view the seconds behind source whenever needed.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/20-monitor-migration-online.png" alt-text="Screenshot of a Monitoring migration.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/running-replicate-data-changes.png" alt-text="Screenshot of a Monitoring migration.":::
-3. After the **Seconds behind source** hits 0, proceed to start cutover by clicking on the **Start Cutover** menu tab at the top of the migration activity screen. Follow the steps in the cutover window before you are ready to perform a cutover. Once all steps are completed, click on **Confirm** and next click on **Apply**.
+3. Monitor the **Seconds behind source** value, and as soon as it nears 0, start the cutover by selecting the **Start Cutover** menu tab at the top of the migration activity screen. Complete the steps in the cutover window before you perform the cutover. Once all steps are completed, select **Confirm**, and then select **Apply**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/21-complete-cutover-online.png" alt-text="Screenshot of a Perform cutover."::: ## Perform post-migration activities
event-hubs Create Schema Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/create-schema-registry.md
Title: Create an Azure Event Hubs schema registry description: This article shows you how to create a schema registry in an Azure Event Hubs namespace. Previously updated : 01/13/2022 Last updated : 09/09/2022
This article shows you how to create a schema group with schemas in a schema reg
> [!NOTE] > - The feature isn't available in the **basic** tier.
+> - Make sure that you are a member of one of these roles: **Owner**, **Contributor**, or **Schema Registry Contributor**. For details about role-based access control, see [Schema Registry overview](schema-registry-overview.md#azure-role-based-access-control).
> - If the event hub is in a **virtual network**, you won't be able to create schemas in the Azure portal unless you access the portal from a VM in the same virtual network.
+
## Prerequisites

[Create an Event Hubs namespace](event-hubs-create.md#create-an-event-hubs-namespace). You can also use an existing namespace.
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Azure Front Door can now access this key vault and the certificates it contains.
- The available secret versions. > [!NOTE]
- > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 48 hours for the new version of the certificate/secret to be deployed.
+ > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be deployed.
> > :::image type="content" source="./media/front-door-custom-domain-https/certificate-version.png" alt-text="Screenshot of selecting secret version on update custom domain page.":::
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Azure managed certificates are automatically rotated by the Azure service that v
### <a name="rotate-own-certificate"></a>Use your own certificate
-In order for the certificate to automatically be rotated to the latest version when a newer version of the certificate is available in your key vault, set the secret version to 'Latest'. If a specific version is selected, you have to reselect the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be automatically deployed.
+In order for the certificate to automatically be rotated to the latest version when a newer version of the certificate is available in your key vault, set the secret version to 'Latest'. If a specific version is selected, you have to reselect the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be automatically deployed.
If you want to change the secret version from 'Latest' to a specified version or vice versa, add a new certificate.
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
Title: Open-source components and versions - Azure HDInsight 4.0
description: Learn about the open-source components and versions in Azure HDInsight 4.0. Previously updated : 08/24/2022 Last updated : 09/09/2022 # HDInsight 4.0 component versions
The Open-source component versions associated with HDInsight 4.0 are listed in t
| Apache Oozie | 4.3.1 |
| Apache Zookeeper | 3.4.6 |
| Apache Phoenix | 5 |
-| Apache Spark | 2.4.4, 3.1|
+| Apache Spark | 2.4.4, 3.1 |
| Apache Livy | 0.5 |
-| Apache Kafka | 2.1.1, 2.4.1(Preview) |
+| Apache Kafka | 2.1.1, 2.4.1 |
| Apache Ambari | 2.7.0 |
| Apache Zeppelin | 0.8.0 |

This table lists certain HDInsight 4.0 cluster types that have retired or will be retired soon.

| Cluster Type | Framework version | Support expiration date | Retirement date |
| HDInsight 4.0 Kafka | 1.1 | Dec 31, 2020 | Dec 31, 2020 |
| HDInsight 4.0 Kafka | 2.1.0 * | Sep 30, 2022 | Oct 1, 2022 |
-* Customers cannot create new Kafka 2.1.0 clusters but existing 2.1.0 clusters will not be impacted and will get basic support till September 30, 2022.
+* Customers can't create new Kafka 2.1.0 clusters but existing 2.1.0 clusters won't be impacted and will get basic support until September 30, 2022.
## Next steps

- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)
- [Enterprise Security Package](./enterprise-security-package.md)
- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
iot-dps Concepts X509 Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-x509-attestation.md
X.509 certificates can be stored in a hardware security module (HSM).
> [!TIP]
> We strongly recommend using an HSM with devices to securely store secrets, like the X.509 certificate, on your devices in production.

## X.509 certificates

Using X.509 certificates as an attestation mechanism is an excellent way to scale production and simplify device provisioning. X.509 certificates are typically arranged in a certificate chain of trust in which each certificate in the chain is signed by the private key of the next higher certificate, and so on, terminating in a self-signed root certificate. This arrangement establishes a delegated chain of trust from the root certificate generated by a trusted root certificate authority (CA) down through each intermediate CA to the end-entity "leaf" certificate installed on a device. To learn more, see [Device Authentication using X.509 CA Certificates](../iot-hub/iot-hub-x509ca-overview.md). Often the certificate chain represents some logical or physical hierarchy associated with devices. For example, a manufacturer may:
+
- issue a self-signed root CA certificate
- use the root certificate to generate a unique intermediate CA certificate for each factory
- use each factory's certificate to generate a unique intermediate CA certificate for each production line in the plant
-- and finally use the production line certificate, to generate a unique device (end-entity) certificate for each device manufactured on the line.
+- and finally use the production line certificate to generate a unique device (end-entity) certificate for each device manufactured on the line.
To learn more, see [Conceptual understanding of X.509 CA certificates in the IoT industry](../iot-hub/iot-hub-x509ca-concept.md).
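To make the hierarchy concrete, the following sketch chains a root, factory, production line, and device certificate with the Python `cryptography` package. It's a toy illustration of the signing relationships described above, not DPS tooling, and every name in it is made up:

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def issue(common_name, signer_cert=None, signer_key=None, is_ca=True):
    """Issue a certificate for a fresh key, signed by signer_key or self-signed."""
    key = ec.generate_private_key(ec.SECP256R1())
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    issuer = signer_cert.subject if signer_cert is not None else subject
    now = datetime.now(timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None), critical=True)
        .sign(signer_key if signer_key is not None else key, hashes.SHA256())
    )
    return cert, key

# Root -> factory -> production line -> device, as in the hierarchy above.
root_cert, root_key = issue("ContosoRootCert")
factory_cert, factory_key = issue("Factory42", root_cert, root_key)
line_cert, line_key = issue("ProductionLine7", factory_cert, factory_key)
device_cert, device_key = issue("device-0001", line_cert, line_key, is_ca=False)
```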
A root certificate is a self-signed X.509 certificate representing a certificate
An intermediate certificate is an X.509 certificate, which has been signed by the root certificate (or by another intermediate certificate with the root certificate in its chain). The last intermediate certificate in a chain is used to sign the leaf certificate. An intermediate certificate can also be referred to as an intermediate CA certificate.
-##### Why are intermediate certs useful?
+#### Why are intermediate certs useful?
+ Intermediate certificates are used in a variety of ways. For example, intermediate certificates can be used to group devices by product lines, customers purchasing devices, company divisions, or factories. Imagine that Contoso is a large corporation with its own Public Key Infrastructure (PKI) using the root certificate named *ContosoRootCert*. Each subsidiary of Contoso has their own intermediate certificate that is signed by *ContosoRootCert*. Each subsidiary will then use their intermediate certificate to sign their leaf certificates for each device. In this scenario, Contoso can use a single DPS instance where *ContosoRootCert* has been verified with [proof-of-possession](./how-to-verify-certificates.md). They can have an enrollment group for each subsidiary. This way each individual subsidiary will not have to worry about verifying certificates.

### End-entity "leaf" certificate

The leaf certificate, or end-entity certificate, identifies the certificate holder. It has the root certificate in its certificate chain as well as zero or more intermediate certificates. The leaf certificate is not used to sign any other certificates. It uniquely identifies the device to the provisioning service and is sometimes referred to as the device certificate. During authentication, the device uses the private key associated with this certificate to respond to a proof of possession challenge from the service.
The provisioning service exposes two enrollment types that you can use to contro
- [Individual enrollment](./concepts-service.md#individual-enrollment) entries are configured with the device certificate associated with a specific device. These entries control enrollments for specific devices.
- [Enrollment group](./concepts-service.md#enrollment-group) entries are associated with a specific intermediate or root CA certificate. These entries control enrollments for all devices that have that intermediate or root certificate in their certificate chain.
-#### DPS device chain requirements
+### Mutual TLS support
+
+When DPS enrollments are configured for X.509 attestation, mutual TLS (mTLS) is supported by DPS.
+
+### DPS device chain requirements
When a device is attempting registration through DPS using an enrollment group, the device must send the certificate chain from the leaf certificate to a certificate verified with [proof-of-possession](how-to-verify-certificates.md). Otherwise, authentication will fail.
If the device sends the full device chain as follows during provisioning, then D
![Example device certificate chain](./media/concepts-x509-attestation/example-device-cert-chain.png)

> [!NOTE]
> Intermediate certificates can also be verified with [proof-of-possession](how-to-verify-certificates.md).
+### DPS order of operations with certificates
-#### DPS order of operations with certificates
When a device connects to the provisioning service, the service prioritizes more specific enrollment entries over less specific enrollment entries. That is, if an individual enrollment for the device exists, the provisioning service applies that entry. If there is no individual enrollment for the device and an enrollment group for the first intermediate certificate in the device's certificate chain exists, the service applies that entry, and so on, down the chain to the root. The service applies the first applicable entry that it finds, such that:

- If the first enrollment entry found is enabled, the service provisions the device.
iot-dps Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tls-support.md
DPS uses [Transport Layer Security (TLS)](http://wikipedia.org/wiki/Transport_Layer_Security) to secure connections from IoT devices.
-Current TLS protocol versions supported by DPS are:
+Current TLS protocol versions supported by DPS are:
+ * TLS 1.2

## Restrict connections to TLS 1.2
The DPS resource created using this configuration will refuse devices that attem
> [!NOTE]
> The `minTlsVersion` property is read-only and cannot be changed once your DPS resource is created. It is therefore essential that you properly test and validate that *all* your IoT devices are compatible with TLS 1.2 and the [recommended ciphers](#recommended-ciphers) in advance.

> [!NOTE]
> Upon failovers, the `minTlsVersion` property of your DPS will remain effective in the geo-paired region post-failover.
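One way to test a device client in advance is to pin its own TLS floor to 1.2 and attempt a handshake. Here's an illustrative Python sketch using the standard `ssl` module; the global DPS endpoint is shown, but substitute whatever endpoint and port your devices use:

```python
import socket
import ssl

def tls12_handshake(host: str, port: int = 443) -> str:
    """Attempt a handshake with TLS 1.2 as the minimum allowed version."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'

print(tls12_handshake("global.azure-devices-provisioning.net"))
```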
DPS instances that are configured to accept only TLS 1.2 will also enforce the u
| :-- |
| `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`<br>`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`<br>`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384`<br>`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256` |
-### Legacy cipher suites
+### Legacy cipher suites
+These cipher suites are currently still supported by DPS but will be deprecated. Use the recommended cipher suites above if possible.
These cipher suites are currently still supported by DPS but will be deprecated
| :-- |
| `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256 (uses SHA-1)`<br>`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384 (uses SHA-1)`<br>`TLS_RSA_WITH_AES_128_GCM_SHA256 (lack of Perfect Forward Secrecy)`<br>`TLS_RSA_WITH_AES_256_GCM_SHA384 (lack of Perfect Forward Secrecy)`<br>`TLS_RSA_WITH_AES_128_CBC_SHA256 (lack of Perfect Forward Secrecy)`<br>`TLS_RSA_WITH_AES_256_CBC_SHA256 (lack of Perfect Forward Secrecy)`<br>`TLS_RSA_WITH_AES_128_CBC_SHA (uses SHA-1, lack of Perfect Forward Secrecy)`<br>`TLS_RSA_WITH_AES_256_CBC_SHA (uses SHA-1, lack of Perfect Forward Secrecy)` |
+## Mutual TLS support
+
+When DPS enrollments are configured for X.509 authentication, mutual TLS (mTLS) is supported by DPS.
## Use TLS 1.2 in the IoT SDKs
iot-hub-device-update Connected Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-configure.md
# Configure Microsoft Connected Cache for Device Update for IoT Hub
+> [!NOTE]
+> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
+ Microsoft Connected Cache (MCC) is deployed to Azure IoT Edge gateways as an IoT Edge module. Like other IoT Edge modules, environment variables and container create options are used to configure MCC modules. This article defines the environment variables and container create options that are required for a customer to successfully deploy the Microsoft Connected Cache module for use by Device Update for IoT Hub. ## Module deployment details
iot-hub-device-update Connected Cache Disconnected Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-disconnected-device-update.md
# Understand support for disconnected device updates
+> [!NOTE]
+> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
+ In a transparent gateway scenario, one or more devices can pass their messages through a single gateway device that maintains the connection to Azure IoT Hub. In these cases, the child devices may not have internet connectivity or may not be allowed to download content from the internet. The Microsoft Connected Cache preview IoT Edge module provides Device Update for IoT Hub customers with the capability of an intelligent in-network cache. The cache enables image-based and package-based updates of Linux OS-based devices behind an IoT Edge gateway (also called *downstream* IoT devices), and also helps reduce the bandwidth used for updates. ## Microsoft Connected Cache preview for Device Update for IoT Hub
iot-hub-device-update Connected Cache Industrial Iot Nested https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-industrial-iot-nested.md
# Microsoft Connected Cache preview deployment scenario sample: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration
+> [!NOTE]
+> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
+ Manufacturing networks are often organized in hierarchical layers following the [Purdue network model](https://en.wikipedia.org/wiki/Purdue_Enterprise_Reference_Architecture) (included in the [ISA 95](https://en.wikipedia.org/wiki/ANSI/ISA-95) and [ISA 99](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa99) standards). In these networks, only the top layer has connectivity to the cloud and the lower layers in the hierarchy can only communicate with adjacent north and south layers. This GitHub sample, [Azure IoT Edge for Industrial IoT](https://github.com/Azure-Samples/iot-edge-for-iiot), deploys the following:
iot-hub-device-update Connected Cache Nested Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-nested-level.md
# Microsoft Connected Cache preview deployment scenario sample: Two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy
+> [!NOTE]
+> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
+ The diagram below describes the scenario where one Azure IoT Edge gateway has direct access to CDN resources and is acting as the parent to another Azure IoT Edge gateway. The child IoT Edge gateway is acting as the parent to an Azure IoT leaf device such as a Raspberry Pi. Both the Azure IoT Edge child and Azure IoT device are internet isolated. The example below demonstrates the configuration for two-levels of Azure IoT Edge gateways, but there is no limit to the depth of upstream hosts that Microsoft Connected Cache will support. There is no difference in Microsoft Connected Cache container create options from the previous examples. Refer to the documentation [Connect downstream IoT Edge devices - Azure IoT Edge](../iot-edge/how-to-connect-downstream-iot-edge-device.md?preserve-view=true&tabs=azure-portal&view=iotedge-2020-11) for more details on configuring layered deployments of Azure IoT Edge gateways. Additionally note that when deploying Azure IoT Edge, Microsoft Connected Cache, and custom modules, all modules must reside in the same container registry.
iot-hub-device-update Connected Cache Single Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-single-level.md
# Microsoft Connected Cache preview deployment scenario samples
+> [!NOTE]
+> This information relates to a preview feature that's available for early testing and use in a production environment. This feature is fully supported but it's still in active development and may receive substantial changes until it becomes generally available.
+ ## Single level Azure IoT Edge gateway no proxy The diagram below describes the scenario where an Azure IoT Edge gateway that has direct access to CDN resources and there is an Azure IoT leaf device such as a Raspberry PI that is an internet isolated child devices of the Azure IoT Edge gateway.
iot-hub-device-update Device Update Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-overview.md
The Linux *platform layer* integrates with [Delivery Optimization](https://githu
The Linux platform layer implementation can be found in the `src/platform_layers/linux_platform_layer` and it integrates with the [Delivery Optimization client](https://github.com/microsoft/do-client/releases) for downloads. This layer can integrate with different update handlers to implement the
-installers. For instance, the `SWUpdate` update handler, `Apt` update handler, and `Script` update handler.
+installers. For instance, the `SWUpdate` update handler, `Apt` update handler, and `Script` update handler.
+
+If you choose to implement your own downloader in place of Delivery Optimization, be sure to review the [requirements for large file downloads](device-update-limits.md).
## Update handlers
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md
$Scope = 'https://api.adu.microsoft.com/.default'
Get-MsalToken -ClientId $clientId -TenantId $tenantId -Authority $authority -Scopes $Scope -ClientCertificate $cert ```
-## Next steps
+## Support for managed identities
-[Create device update resources and configure access control roles](create-device-update-account.md)
+Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. By providing an identity, managed identities eliminate the need for developers to manage credentials. Device Update for IoT Hub supports system-assigned managed identities.
+
+### System-assigned managed identity
+
+To add and remove a system-assigned managed identity in the Azure portal:
+1. Sign in to the Azure portal and navigate to your desired Device Update for IoT Hub account.
+2. Navigate to **Identity** in your Device Update for IoT Hub account.
+3. Under the **System-assigned** tab, select **On** and click **Save**.
+
+To remove the system-assigned managed identity from a Device Update for IoT Hub account, select **Off** and click **Save**.
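
If you prefer scripting to the portal, a hedged sketch using the generic ARM resource commands is shown below; the resource ID segments are placeholders, and updating `identity.type` through the generic API is an assumption about the Microsoft.DeviceUpdate provider:

```azurecli
# Assumption: enable the system-assigned identity via the generic resource API.
az resource update \
  --ids "/subscriptions/{subscription-id}/resourceGroups/{rg}/providers/Microsoft.DeviceUpdate/accounts/{account-name}" \
  --set identity.type=SystemAssigned
```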
+++
+## Next steps
+* [Create device update resources and configure access control roles](./create-device-update-account.md)
iot-hub-device-update Device Update Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-limits.md
Title: Understand Device Update for IoT Hub limits | Microsoft Docs
description: Key limits for Device Update for IoT Hub. Previously updated : 7/8/2021 Last updated : 9/9/2022
During preview, the Device Update for IoT Hub service is provided at no cost to
[!INCLUDE [device-update-for-iot-hub-limits](../../includes/device-update-for-iot-hub-limits.md)]
+### Requirements for large-file downloads
+If you plan to deploy large packages, with a file size larger than 100 MB, we recommend using byte range requests for reliable download performance.
+
+The Device Update for IoT Hub service uses Content Delivery Networks (CDNs) that work optimally with range requests of 1 MB in size. Range requests larger than 100 MB aren't supported.
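
As an illustration, a client downloader could fetch a large update payload in 1 MB chunks using HTTP range requests. This is a minimal sketch only; the URL and file name are placeholders, not a real Device Update endpoint:

```bash
# Hypothetical example: request the first 1 MB (bytes 0-1048575) of an update payload.
curl -H "Range: bytes=0-1048575" -o payload.part0 "https://example-cdn.contoso.com/updates/payload.swu"
```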
+ ## Next steps - [Create a Device Update for IoT Hub account](create-device-update-account.md)
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
This article describes how to create and manage IoT hubs using the [Azure portal
[!INCLUDE [iot-hub-include-create-hub](../../includes/iot-hub-include-create-hub.md)]
-## Change the settings of the IoT hub
+## Update the IoT hub
-You can change the settings of an existing IoT hub after it's created from the IoT Hub pane. Here are some of the properties you can set for an IoT hub:
+You can change the settings of an existing IoT hub after it's created from the IoT Hub pane. Here are some properties you can set for an IoT hub:
-**Pricing and scale**: You can use this property to migrate to a different tier or set the number of IoT Hub units.
+**Pricing and scale**: Migrate to a different tier or set the number of IoT Hub units.
**IP Filter**: Specify a range of IP addresses that will be accepted or rejected by the IoT hub.
-**Properties**: Provides the list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on.
+**Properties**: A list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on.
+
+For a complete list of options to update an IoT hub, see the [**az iot hub update** commands](/cli/azure/iot/hub#az-iot-hub-update) reference page.
### Shared access policies
Here are two ways to find a specific IoT hub in your subscription:
## Delete the IoT hub
-To delete an IoT hub, find the IoT hub you want to delete, then choose **Delete**.
+To delete an IoT hub, open your IoT hub in the Azure portal, then choose **Delete**.
+ ## Next steps
iot-hub Iot Hub Create Use Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-use-iot-toolkit.md
This article shows you how to use the [Azure IoT Tools for Visual Studio Code](h
- [Visual Studio Code](https://code.visualstudio.com/) -- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) installed for Visual Studio Code.
+- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) installed for Visual Studio Code
+
+- An Azure resource group: [create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal#create-resource-groups) in the Azure portal
## Create an IoT hub without an IoT Project
This method allows you to provision in VS Code without leaving your development
:::image type="content" source="media/iot-hub-create-use-iot-toolkit/provision-done.png" alt-text="A screenshot that shows IoT Hub details in the output window in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/provision-done.png":::
+> [!TIP]
+> To delete a device from your IoT hub, use the `Azure IoT Hub: Delete Device` option from the Command Palette. There's no option to delete your IoT hub in Visual Studio Code; however, you can [delete your hub in the Azure portal](iot-hub-create-through-portal.md#delete-the-iot-hub).
+ ## Next steps Now that you've deployed an IoT hub using the Azure IoT Tools for Visual Studio Code, explore these articles:
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
This article shows you how to create an IoT hub using Azure CLI.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
-## Create an IoT Hub
-
-Use the Azure CLI to create a resource group and then add an IoT hub.
-
-1. When you create an IoT hub, you must create it in a resource group. Either use an existing resource group, or run the following [command to create a resource group](/cli/azure/resource):
+When you create an IoT hub, you must create it in a resource group. Either use an existing resource group, or run the following [command to create a resource group](/cli/azure/resource):
```azurecli-interactive az group create --name {your resource group name} --location westus
Use the Azure CLI to create a resource group and then add an IoT hub.
> ```azurecli-interactive > az account list-locations -o table > ```
- >
-2. Run the following [command to create an IoT hub](/cli/azure/iot/hub#az-iot-hub-create) in your resource group, using a globally unique name for your IoT hub:
+## Create an IoT Hub
+
+Use the Azure CLI to add an IoT hub to your resource group.
+
+Run the following [command to create an IoT hub](/cli/azure/iot/hub#az-iot-hub-create) in your resource group, using a globally unique name for your IoT hub:
```azurecli-interactive az iot hub create --name {your iot hub name} \
Use the Azure CLI to create a resource group and then add an IoT hub.
The previous command creates an IoT hub in the S1 pricing tier for which you're billed. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
-## Remove an IoT Hub
+For more information on Azure IoT Hub commands, see the [`az iot hub`](/cli/azure/iot/hub) reference article.
-There are various commands to [delete an individual resource](/cli/azure/resource), such as an IoT hub, or delete a resource group and all its resources, including any IoT hubs.
+## Update the IoT hub
-To [delete an IoT hub](/cli/azure/iot/hub#az-iot-hub-delete), run the following command:
+You can change the settings of an existing IoT hub after it's created. Here are some properties you can set for an IoT hub:
+
+**Pricing and scale**: Migrate to a different tier or set the number of IoT Hub units.
+
+**IP Filter**: Specify a range of IP addresses that will be accepted or rejected by the IoT hub.
+
+**Properties**: A list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on.
+
+For a complete list of options to update an IoT hub, see the [**az iot hub update** commands](/cli/azure/iot/hub#az-iot-hub-update) reference page.
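
For instance, here's a hedged sketch of scaling up a hub with the generic update arguments; the hub name is a placeholder and the exact `--set` property paths are an assumption based on the IoT Hub resource schema:

```azurecli
# Assumed property paths: move the hub to the S2 tier and set two units.
az iot hub update --name {your iot hub name} --set sku.name=S2 sku.capacity=2
```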
+
+## Register a new device in the IoT hub
+
+In this section, you create a device identity in the identity registry in your IoT hub. A device can't connect to a hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). This device identity is [IoT Edge](/azure/iot-edge) enabled.
+
+Run the following command to create a device identity. Use your IoT hub name and create a new device ID name in place of `{iothub_name}` and `{device_id}`. This command creates a device identity with default authorization (shared private key).
```azurecli-interactive
-az iot hub delete --name {your iot hub name} \
- --resource-group {your resource group name}
+az iot hub device-identity create -n {iothub_name} -d {device_id} --ee
```
-To [delete a resource group](/cli/azure/group#az-group-delete) and all its resources, run the following command:
+The result is a JSON printout that includes your keys and other information.
+
+Alternatively, there are several options to register a device using different kinds of authorization. To explore the options, see [Examples](/device-identity?view=azure-cli-latest#az-iot-hub-device-identity-create-examples&preserve-view=true) on the **az iot hub device-identity** reference page.
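
After the device identity exists, you can retrieve its connection string for use in device code. A minimal sketch, assuming the `azure-iot` CLI extension is installed (names are placeholders):

```azurecli
# Show the primary connection string for the registered device.
az iot hub device-identity connection-string show -n {iothub_name} -d {device_id}
```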
+
+## Remove an IoT Hub
+
+There are various commands to [delete an individual resource](/cli/azure/resource), such as an IoT hub.
+
+To [delete an IoT hub](/cli/azure/iot/hub#az-iot-hub-delete), run the following command:
```azurecli-interactive
-az group delete --name {your resource group name}
+az iot hub delete --name {your iot hub name} \
+ --resource-group {your resource group name}
``` ## Next steps
iot-hub Iot Hub Create Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-powershell.md
[!INCLUDE [iot-hub-resource-manager-selector](../../includes/iot-hub-resource-manager-selector.md)]
-## Introduction
- You can use Azure PowerShell cmdlets to create and manage Azure IoT hubs. This tutorial shows you how to create an IoT hub with PowerShell. - [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] Alternatively, you can use Azure Cloud Shell, if you'd rather not install additional modules onto your machine. The following section gets you started with Azure Cloud Shell. [!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-## Connect to your Azure subscription
-
-If you're using Cloud Shell, you're already logged in to your subscription, so you can skip this step. If you're running PowerShell locally instead, enter the following command to sign in to your Azure subscription:
-
-```powershell
-# Log into Azure account.
-Login-AzAccount
-```
-
-## Create a resource group
+## Prerequisites
You need a resource group to deploy an IoT hub. You can use an existing resource group or create a new one.
-To create a resource group for your IoT hub, use the [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup) command. This example creates a resource group called **MyIoTRG1** in the **East US** region:
+To create a new resource group for your IoT hub, use the [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup) command. This example creates a resource group called **MyIoTRG1** in the **East US** region:
```azurepowershell-interactive New-AzResourceGroup -Name MyIoTRG1 -Location "East US" ```
+## Connect to your Azure subscription
+
+If you're using Cloud Shell, you're already logged in to your subscription, so you can skip this section. If you're running PowerShell locally instead, enter the following command to sign in to your Azure subscription:
+
+```powershell
+# Log into Azure account.
+Login-AzAccount
+```
+ ## Create an IoT hub
-To create an IoT hub in the resource group you created in the previous step, use the [New-AzIotHub](/powershell/module/az.IotHub/New-azIotHub) command. This example creates an **S1** hub called **MyTestIoTHub** in the **East US** region:
+Create an IoT hub using your resource group. Use the [New-AzIotHub](/powershell/module/az.IotHub/New-azIotHub) command. This example creates an **S1** hub called **MyTestIoTHub** in the **East US** region:
```azurepowershell-interactive New-AzIotHub `
Remove-AzIotHub `
-Name MyTestIoTHub ```
-Alternatively, to remove a resource group and all the resources it contains, use the [Remove-AzResourceGroup](/powershell/module/az.Resources/Remove-azResourceGroup) command:
+## Update the IoT hub
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name MyIoTRG1
-```
+You can change the settings of an existing IoT hub after it's created. Here are some properties you can set for an IoT hub:
+
+**Pricing and scale**: Migrate to a different tier or set the number of IoT Hub units.
+
+**IP Filter**: Specify a range of IP addresses that will be accepted or rejected by the IoT hub.
+
+**Properties**: A list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on.
+
+Explore the [**Set-AzIotHub** commands](/powershell/module/az.iothub/set-aziothub) for a complete list of update options.
## Next steps
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/best-practices.md
Suggestions for controlling access to your vault are as follows:
- Use the principle of least privilege access to grant access. - Turn on firewall and [virtual network service endpoints](overview-vnet-service-endpoints.md).
-## Backup
-
-Make sure you take regular backups of your vault. Backups should be performed when you update, delete, or create objects in your vault.
-
-### Azure PowerShell backup commands
-
-* [Backup certificate](/powershell/module/azurerm.keyvault/Backup-AzureKeyVaultCertificate)
-* [Backup key](/powershell/module/azurerm.keyvault/Backup-AzureKeyVaultKey)
-* [Backup secret](/powershell/module/azurerm.keyvault/Backup-AzureKeyVaultSecret)
-
-### Azure CLI backup commands
+## Turn on data protection for your vault
-* [Backup certificate](/cli/azure/keyvault/certificate#az-keyvault-certificate-backup)
-* [Backup key](/cli/azure/keyvault/key#az-keyvault-key-backup)
-* [Backup secret](/cli/azure/keyvault/secret#az-keyvault-secret-backup)
+Turn on purge protection to guard against malicious or accidental deletion of your secrets and key vault, even after soft-delete is turned on.
+For more information, see [Azure Key Vault soft-delete overview](soft-delete-overview.md).
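
As a minimal Azure CLI sketch (the vault name is a placeholder), you can enable purge protection on an existing vault:

```azurecli
# Enable purge protection; note that this setting can't be reverted once enabled.
az keyvault update --name {your-vault-name} --enable-purge-protection true
```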
## Turn on logging [Turn on logging](logging.md) for your vault. Also, [set up alerts](alert.md).
-## Turn on recovery options
+## Backup
+
+Purge protection prevents malicious and accidental deletion of vault objects for up to 90 days. In scenarios where purge protection isn't a possible option, we recommend backing up vault objects that can't be recreated from other sources, such as encryption keys generated within the vault.
-- Turn on [soft-delete](soft-delete-overview.md).-- Turn on purge protection if you want to guard against force deletion of the secrets and key vault even after soft-delete is turned on.
+For more information about backup, see [Azure Key Vault backup and restore](backup.md).
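
For illustration, a minimal Azure CLI sketch of backing up individual vault objects (vault and object names are placeholders):

```azurecli
# Download encrypted backup blobs for a key and a secret.
az keyvault key backup --vault-name {your-vault-name} --name {key-name} --file key.backup
az keyvault secret backup --vault-name {your-vault-name} --name {secret-name} --file secret.backup
```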
## Multitenant solutions and Key Vault A multitenant solution is built on an architecture where components are used to serve multiple customers or tenants. Multitenant solutions are often used to support software as a service (SaaS) solutions. If you're building a multitenant solution that includes Key Vault, review [Multitenancy and Azure Key Vault](/azure/architecture/guide/multitenant/service/key-vault).
+## Frequently asked questions
+### Can I use Key Vault role-based access control (RBAC) permission model object-scope assignments to provide isolation for application teams within Key Vault?
+No. The RBAC permission model lets you assign access to individual objects in Key Vault to a user or application, but any administrative operations like network access control, monitoring, and object management require vault-level permissions, which would then expose secure information to operators across application teams.
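
For context, object-scope assignments look like the following hedged Azure CLI sketch; the role, principal, and scope segments are placeholders, and this assumes the vault uses the RBAC permission model:

```azurecli
# Grant a principal read access to a single secret. Administrative operations
# (network rules, monitoring, object management) still require vault-level roles.
az role assignment create --role "Key Vault Secrets User" \
  --assignee {principal-object-id} \
  --scope "/subscriptions/{subscription-id}/resourceGroups/{rg}/providers/Microsoft.KeyVault/vaults/{vault-name}/secrets/{secret-name}"
```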
+ ## Learn more - [Best practices for secrets management in Key Vault](../secrets/secrets-best-practices.md)++
key-vault Secrets Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/secrets-best-practices.md
For more information, see:
- [Monitoring and alerting for Azure Key Vault](../general/alert.md) ## Backup and purge protection
-Turn on [purge protection](../general/soft-delete-overview.md#purge-protection) to guard against forced deletion of the secret. Take regular backups of your vault when you update, delete, or create secrets within a vault.
--- To read about the Azure PowerShell backup command, see [Backup secret](/powershell/module/azurerm.keyvault/Backup-AzureKeyVaultSecret).-- To read about the Azure CLI backup command, see [Backup secret](/cli/azure/keyvault/secret#az-keyvault-secret-backup).
+Turn on [purge protection](../general/soft-delete-overview.md#purge-protection) to guard against malicious or accidental deletion of the secrets.
+In scenarios where purge protection isn't a possible option, we recommend [backing up](../general/backup.md) secrets that can't be recreated from other sources.
## Learn more - [About Azure Key Vault secrets](about-secrets.md)
load-balancer Load Balancer Multiple Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-powershell.md
Follow the steps below to achieve the scenario outlined in this article:
Then complete [Create a Windows VM](/previous-versions/azure/virtual-machines/scripts/virtual-machines-windows-powershell-sample-create-vm?toc=%2fazure%2fload-balancer%2ftoc.json) steps 6.3 through 6.8.
-5. Add a second IP configuration to each of the VMs. Follow the instructions in [Assign multiple IP addresses to virtual machines](../virtual-network/ip-services/virtual-network-multiple-ip-addresses-powershell.md#add) article. Use the following configuration settings:
+5. Add a second IP configuration to each of the VMs. Follow the instructions in [Assign multiple IP addresses to virtual machines](../virtual-network/ip-services/virtual-network-multiple-ip-addresses-powershell.md#add-secondary-private-and-public-ip-address) article. Use the following configuration settings:
```powershell $NicName = "VM1-NIC2"
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
Previously updated : 08/05/2022 Last updated : 09/09/2022
These resources are ephemeral and exist only for the duration of the load test r
- The virtual network must be in the same subscription as the Azure Load Testing resource. - The subnet you use for Azure Load Testing must have enough unassigned IP addresses to accommodate the number of load test engines for your test. Learn more about [configuring your test for high-scale load](./how-to-high-scale-load.md). - The subnet shouldn't be delegated to any other Azure service. For example, it shouldn't be delegated to Azure Container Instances (ACI). Learn more about [subnet delegation](/azure/virtual-network/subnet-delegation-overview).
+- Azure CLI version 2.2.0 or later (if you're using CI/CD). Run `az --version` to find the version that's installed on your computer. If you need to install or upgrade the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
## Configure your virtual network
-To test private endpoints, you need an Azure virtual network and at least one subnet. In this section, you'll configure your virtual network and subnet.
+To test private endpoints, you need an existing Azure virtual network. Your virtual network should have at least one subnet, and allow access for traffic coming from the Azure Load Testing service.
### Create a subnet
-When you deploy Azure Load Testing in your virtual network, it's recommended to use different subnets for Azure Load Testing and the application endpoint. This approach enables you to configure network traffic access specifically for each purpose. Learn more about how to [add a subnet to a virtual network](/azure/virtual-network/virtual-network-manage-subnet#add-a-subnet).
+When you deploy Azure Load Testing in your virtual network, it's recommended to use separate subnets for Azure Load Testing and for the application endpoint. This approach enables you to configure network traffic access policies specifically for each purpose. Learn more about how to [add a subnet to a virtual network](/azure/virtual-network/virtual-network-manage-subnet#add-a-subnet).
### Configure traffic access
-Azure Load Testing requires both inbound and outbound access for the injected VMs in your virtual network. If you plan to restrict traffic access to your virtual network, or are already using a network security group, configure the network security group for the subnet in which you deploy the load test.
+Azure Load Testing requires both inbound and outbound access for the injected VMs in your virtual network. If you plan to restrict traffic access to your virtual network, or if you're already using a network security group, configure the network security group for the subnet in which you deploy the load test.
-1. If you don't have an NSG yet, create one in the same region as your virtual network and associate it with your subnet. Follow these steps to [create a network security group](/azure/virtual-network/manage-network-security-group#create-a-network-security-group).
+1. Go to the [Azure portal](https://portal.azure.com).
-1. Go to the [Azure portal](https://portal.azure.com) to view your network security groups. Search for and select **Network security groups**.
+1. If you don't have an NSG yet, follow these steps to [create a network security group](/azure/virtual-network/manage-network-security-group#create-a-network-security-group).
-1. Select the name of your network security group.
+ Create the NSG in the same region as your virtual network, and then associate it with your subnet.
+
+1. Search for and select your network security group.
+
+ <!-- TODO: add screenshot of portal -->
1. Select **Inbound security rules** in the left navigation.
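
If you'd rather script the NSG setup from the preceding steps, here's a minimal Azure CLI sketch (resource names are placeholders):

```azurecli
# Create an NSG in the same region as the virtual network, then associate it with the load testing subnet.
az network nsg create --resource-group {your-resource-group} --name {your-nsg-name} --location {your-vnet-region}
az network vnet subnet update --resource-group {your-resource-group} --vnet-name {your-vnet-name} \
  --name {your-subnet-name} --network-security-group {your-nsg-name}
```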
For example, for an endpoint with IP address 10.179.0.7, in a virtual network wi
</HTTPSamplerProxy> ```
-## Set up a load test
+## Configure your load test
-To create a load test for testing your private endpoint, you have to specify the virtual network details in the test creation wizard.
+To include privately hosted endpoints in your load test, you need to configure the virtual network settings for the load test. You can configure the VNET settings in the Azure portal, or specify them in the [YAML test configuration file](./reference-test-config-yaml.md) for CI/CD pipelines.
+
+> [!IMPORTANT]
+> When you deploy Azure Load Testing in a virtual network, you'll incur additional charges. Azure Load Testing deploys an [Azure Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) and a [Public IP address](https://azure.microsoft.com/pricing/details/ip-addresses/) in your subscription and there might be a cost for generated traffic. For more information, see the [Virtual Network pricing information](https://azure.microsoft.com/pricing/details/virtual-network).
+
+### Configure the VNET in the Azure portal
+
+You can specify the VNET configuration settings in the load test creation/update wizard.
1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription.
-1. Go to your Azure Load Testing resource, select **Tests** from the left pane, and then select **+ Create new test**.
-
- :::image type="content" source="media/how-to-test-private-endpoint/create-new-test.png" alt-text="Screenshot that shows the Azure Load Testing page and the button for creating a new test.":::
+1. Go to your Azure Load Testing resource, and select **Tests** from the left pane.
-1. On the **Basics** tab, enter the **Test name** and **Test description** information. Optionally, you can select the **Run test after creation** checkbox.
+1. Open the load test creation/update wizard in either of two ways:
- :::image type="content" source="media/how-to-test-private-endpoint/create-new-test-basics.png" alt-text="Screenshot that shows the 'Basics' tab for creating a test.":::
+ - Select **+ Create > Upload a JMeter script**, if you want to create a new test.
-1. On the **Test plan** tab, select your Apache JMeter script, and then select **Upload** to upload the file to Azure.
+ :::image type="content" source="media/how-to-test-private-endpoint/create-new-test.png" alt-text="Screenshot that shows the Tests page, highlighting the button for creating a new test.":::
+
+ - Select an existing test from the list, and then select **Edit**.
- You can select and upload other Apache JMeter configuration files or other files that are referenced in the JMX file. For example, if your test script uses CSV data sets, you can upload the corresponding *.csv* file(s).
+ :::image type="content" source="media/how-to-test-private-endpoint/edit-test.png" alt-text="Screenshot that shows the Tests page, highlighting the button for editing a test.":::
+
+1. Review or fill in the load test information. Follow these steps to [create or manage a test](./how-to-create-manage-test.md).
1. On the **Load** tab, select **Private** traffic mode, and then select your virtual network and subnet.
- :::image type="content" source="media/how-to-test-private-endpoint/create-new-test-load-vnet.png" alt-text="Screenshot that shows the 'Load' tab for creating a test.":::
+ If you have multiple subnets in your virtual network, make sure to select the subnet that will host the injected test engine VMs.
+
+ :::image type="content" source="media/how-to-test-private-endpoint/create-new-test-load-vnet.png" alt-text="Screenshot that shows the Load tab for creating or updating a load test.":::
+
+1. Select **Review + create** and then **Create** (or **Apply**, when updating an existing test).
+
+ When the load test starts, Azure Load Testing injects the test engine VMs in your virtual network and subnet. The test script can now access the privately hosted application endpoint in your VNET.
+
+### Configure the VNET for CI/CD pipelines
+
+To configure the load test with your virtual network settings, update the [YAML test configuration file](./reference-test-config-yaml.md).
+
+1. Open a terminal, and use the Azure CLI to sign in to your Azure subscription:
+
+ ```azurecli
+ az login
+ az account set --subscription <your-Azure-Subscription-ID>
+ ```
+1. Retrieve the subnet ID and copy the resulting value:
+
+ ```azurecli
+ az network vnet subnet show -g <your-resource-group> --vnet-name <your-vnet-name> --name <your-subnet-name> --query id
+ ```
+
+1. Open your YAML test configuration file in your favorite editor.
+
+1. Add the `subnetId` property to the configuration file and provide the subnet ID you copied earlier:
+
+ ```yml
+ version: v0.1
+ testName: SampleTest
+ testPlan: SampleTest.jmx
+ description: 'Load test the website home page'
+ engineInstances: 1
+ subnetId: <your-subnet-id>
+ ```
+
+ For more information about the YAML configuration, see [test configuration YAML reference](./reference-test-config-yaml.md).
+
+1. Save the YAML configuration file, and commit your changes to the source code repository.
- > [!IMPORTANT]
- > When you deploy Azure Load Testing in a virtual network, you'll incur additional charges. Azure Load Testing deploys an [Azure Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) and a [Public IP address](https://azure.microsoft.com/pricing/details/ip-addresses/) in your subscription and there might be a cost for generated traffic. For more information, see the [Virtual Network pricing information](https://azure.microsoft.com/pricing/details/virtual-network).
-
-1. Select **Review + create**. Review all settings, and then select **Create** to create the load test.
+1. After the CI/CD workflow triggers, your load test starts, and can now access the privately hosted application endpoint in your VNET.
## Troubleshooting
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| `engineInstances` | integer | | *Required*. Number of parallel instances of the test engine to execute the provided test plan. You can update this property to increase the amount of load that the service can generate. | | `configurationFiles` | array | | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. | | `description` | string | | Short description of the test run. |
+| `subnetId` | string | | Resource ID of the subnet for testing privately hosted endpoints (VNET injection). This subnet will host the injected test engine VMs. For more information, see [how to load test privately hosted endpoints](./how-to-test-private-endpoint.md). |
| `failureCriteria` | object | | Criteria that indicate when a test should fail. The structure of a pass/fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`. For more information on the supported values, see [Load test pass/fail criteria](./how-to-define-test-criteria.md#load-test-passfail-criteria). | | `properties` | object | | List of properties to configure the load test. | | `properties.userPropertyFile` | string | | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file will be uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
testName: SampleTest
testPlan: SampleTest.jmx description: Load test website home page engineInstances: 1
+subnetId: /subscriptions/abcdef01-2345-6789-0abc-def012345678/resourceGroups/sample-rg/providers/Microsoft.Network/virtualNetworks/load-testing-vnet/subnets/load-testing
properties: userPropertyFile: 'user.properties' configurationFiles:
logic-apps Call From Power Automate Power Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/call-from-power-automate-power-apps.md
If you want to migrate your flow from Power Automate or Power to Logic Apps inst
* A Power Automate or Power Apps license.
-* A logic app with a request trigger to export.
+* A Consumption logic app workflow with a request trigger to export.
+
+ > [!NOTE]
+ >
+ > The Export capability is available only for Consumption logic app workflows in multi-tenant Azure Logic Apps.
* A flow in Power Automate or Power Apps from which you want to call your logic app.
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
The workspace is the top-level resource for Azure Machine Learning, providing a
### Create a workspace
-### [CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
To create a workspace using CLI v2, use the following command:
A compute is a designated compute resource where you run your job or host your e
* **Inference cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure ML workspace, or attach an existing AKS cluster. * **Attached compute** - You can attach your own compute resources to your workspace and use them for training and inference.
-### [CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
To create a compute using CLI v2, use the following command:
Azure Machine Learning datastores securely keep the connection information to yo
* Azure Data Lake * Azure Data Lake Gen2
-### [CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
To create a datastore using CLI v2, use the following command:
Azure machine learning models consist of the binary file(s) that represent a mac
### Creating a model
-### [CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
To create a model using CLI v2, use the following command:
In custom environments, you're responsible for setting up your environment and i
### Create an Azure ML custom environment
-### [CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
To create an environment using CLI v2, use the following command:
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
Otherwise, if a user-assigned managed identity is specified in Azure Machine Lea
Azure Relay resource is created during the extension deployment under the same Resource Group as the Arc-enabled Kubernetes cluster.
-### [CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
Set the `--type` argument to `Kubernetes`. Use the `identity_type` argument to e
> `--user-assigned-identities` is only required for `UserAssigned` managed identities. Although you can provide a list of comma-separated user managed identities, only the first one is used when you attach your cluster. > > Compute attach won't create the Kubernetes namespace automatically or validate whether the kubernetes namespace existed. You need to verify that the specified namespace exists in your cluster, otherwise, any AzureML workloads submitted to this compute will fail.
-### [Python](#tab/python)
+### [Python SDK](#tab/python)
[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Last updated 07/13/2022
# Set up AutoML to train computer vision models
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
> * [v1](v1/how-to-auto-train-image-models-v1.md) > * [v2 (current version)](how-to-auto-train-image-models.md)
Automated ML supports model training for computer vision tasks like image classi
## Prerequisites
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
+ * An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). * Install and [set up CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the `ml` extension.
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ * An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). * The Azure Machine Learning Python SDK v2 (preview) installed.
image classification multi-label | CLI v2: `image_classification_multilabel` <br
image object detection | CLI v2: `image_object_detection` <br> SDK v2: `image_object_detection()` image instance segmentation| CLI v2: `image_instance_segmentation` <br> SDK v2: `image_instance_segmentation()`
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
For example:
task: image_object_detection ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
-Based on the task type, you can create automl image jobs using task specific `automl` functions.
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
++
+Based on the task type, you can create AutoML image jobs using task specific `automl` functions.
For example:
Once your data is in JSONL format, you can create training and validation `MLTab
Automated ML doesn't impose any constraints on training or validation data size for computer vision tasks. Maximum dataset size is only limited by the storage layer behind the dataset (i.e. blob store). There's no minimum number of images or labels. However, we recommend starting with a minimum of 10-15 samples per label to ensure the output model is sufficiently trained. The higher the total number of labels/classes, the more samples you need per label.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
validation_data:
type: mltable ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
You can create data inputs from training and validation MLTable from your local directory or cloud storage with the following code:
Provide a [compute target](concept-azure-machine-learning-architecture.md#comput
The compute target is passed in using the `compute` parameter. For example:
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
The compute target is passed in using the `compute` parameter. For example:
compute: azureml:gpu-cluster ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
```python from azure.ai.ml import automl
In addition to controlling the model algorithm, you can also tune hyperparameter
### Data augmentation
-In general, deep learning model performance can often improve with more data. Data augmentation is a practical technique to amplify the data size and variability of a dataset which helps to prevent overfitting and improve the model's generalization ability on unseen data. Automated ML applies different data augmentation techniques based on the computer vision task, before feeding input images to the model. Currently, there is no exposed hyperparameter to control data augmentations.
+In general, deep learning model performance can often improve with more data. Data augmentation is a practical technique to amplify the data size and variability of a dataset, which helps to prevent overfitting and improve the model's generalization ability on unseen data. Automated ML applies different data augmentation techniques based on the computer vision task, before feeding input images to the model. Currently, there's no exposed hyperparameter to control data augmentations.
|Task | Impacted dataset | Data augmentation technique(s) applied | |-|-||
In general, deep learning model performance can often improve with more data. Da
Before doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This way, you can employ a more iterative approach, because with multiple models and multiple hyperparameters for each, the search space grows exponentially and you need more iterations to find optimal configurations.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
If you wish to use the default hyperparameter values for a given algorithm (say
image_model: model_name: "yolov5" ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the model_name parameter in the set_image_model method of the task-specific `automl` job. For example,
If you wish to use the default hyperparameter values for a given algorithm (say
image_object_detection_job.set_image_model(model_name="yolov5") ```
-Once you've built a baseline model, you might want to optimize model performance in order to sweep over the model algorithm and hyperparameter space. You can use the following sample config to sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, optimizer, lr_scheduler, etc., to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for the specified algorithm.
+Once you've built a baseline model, you might want to optimize model performance in order to sweep over the model algorithm and hyperparameter space. You can use the following sample config to sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, optimizer, lr_scheduler, etc., to generate a model with the optimal primary metric. If hyperparameter values aren't specified, then default values are used for the specified algorithm.
### Primary metric
The primary metric used for model optimization and hyperparameter tuning depends
### Experiment budget You can optionally specify the maximum time budget for your AutoML Vision training job using the `timeout` parameter in the `limits` - the amount of time in minutes before the experiment terminates. If none specified, default experiment timeout is seven days (maximum 60 days). For example,
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
limits:
timeout: 60 ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]
Parameter | Detail
You can configure all the sweep related parameters as shown in the example below.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
sweep:
delay_evaluation: 6 ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
sweep:
You can pass fixed settings or parameters that don't change during the parameter space sweep as shown below.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
image_model:
```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=pass-arguments)]
Once the training run is done, you have the option to further train the model by
You can pass the run ID that you want to load the checkpoint from.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
image_model:
```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
To find the run ID from the desired model, you can use the following code.
automl_image_job_incremental = ml_client.jobs.create_or_update(
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
To submit your AutoML job, you run the following CLI v2 command with the path to
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION] ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
When you've configured your AutoML Job to the desired settings, you can submit the job.
The automated ML training runs generates output model files, evaluation metrics,
> [!TIP] > Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-job-results) section.
-For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview)
+For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview).
## Register and deploy model
-Once the run completes, you can register the model that was created from the best run (configuration that resulted in the best primary metric).
+Once the run completes, you can register the model that was created from the best run (the configuration that resulted in the best primary metric). You can register the model either after downloading it or by specifying the azureml path with the corresponding job ID. Note: if you want to change the inference settings described below, you need to download the model, change settings.json, and register the model using the updated model folder.
+
+### Get the best run
+
+# [Azure CLI](#tab/cli)
++
+```yaml
+
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=best_run)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_local_dir)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=download_model)]
++
+### Register the model
+
+Register the model either using the azureml path or your locally downloaded path.
-You can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
-Navigate to the model you wish to deploy in the **Models** tab of the automated ML run and select the **Deploy**.
+# [Azure CLI](#tab/cli)
-![Select model from the automl runs in studio UI ](./media/how-to-auto-train-image-models/select-model.png)
+
+```azurecli
+ az ml model create --name od-fridge-items-mlflow-model --version 1 --path azureml://jobs/$best_run/outputs/artifacts/outputs/mlflow-model/ --type mlflow_model --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=register_model)]
++
+After you register the model you want to use, you can deploy it using a [managed online endpoint](how-to-deploy-managed-online-endpoint-sdk-v2.md).
+
+### Configure online endpoint
+
+# [Azure CLI](#tab/cli)
++
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
+name: od-fridge-items-endpoint
+auth_mode: key
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=endpoint)]
+++
+### Create the endpoint
+
+Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
++
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az ml online-endpoint create --file .\create_endpoint.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_endpoint)]
++
+### Configure online deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class. You can use either GPU or CPU VM SKUs for your deployment cluster.
++
+# [Azure CLI](#tab/cli)
++
+```yaml
+name: od-fridge-items-mlflow-deploy
+endpoint_name: od-fridge-items-endpoint
+model: azureml:od-fridge-items-mlflow-model@latest
+instance_type: Standard_DS3_v2
+instance_count: 1
+liveness_probe:
+ failure_threshold: 30
+ success_threshold: 1
+ timeout: 2
+ period: 10
+ initial_delay: 2000
+readiness_probe:
+ failure_threshold: 10
+ success_threshold: 1
+ timeout: 10
+ period: 10
+ initial_delay: 2000
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=deploy)]
+++
+### Create the deployment
+
+Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+# [Azure CLI](#tab/cli)
++
+```azurecli
+az ml online-deployment create --file .\create_deployment.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_deploy)]
++
+### Update traffic
+By default, the current deployment is set to receive 0% traffic. You can set the traffic percentage that the current deployment should receive. The sum of the traffic percentages of all the deployments with one endpoint shouldn't exceed 100%.
+
+# [Azure CLI](#tab/cli)
++
+```azurecli
+az ml online-endpoint update --name 'od-fridge-items-endpoint' --traffic 'od-fridge-items-mlflow-deploy=100' --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
+
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=update_traffic)]
+++
+Alternatively, you can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
+Navigate to the model you wish to deploy in the **Models** tab of the automated ML run, select **Deploy**, and then select **Deploy to real-time endpoint**.
+
+![Screenshot of the Deployment page after selecting the Deploy option.](./media/how-to-auto-train-image-models/deploy-end-point.png)
+
+This is how your review page looks. You can select the instance type and instance count, and set the traffic percentage for the current deployment.
+
+![Screenshot of the top of the review page after selecting the options to deploy.](./media/how-to-auto-train-image-models/review-deploy-1.png)
+![Screenshot of the bottom of the review page after selecting the options to deploy.](./media/how-to-auto-train-image-models/review-deploy-2.png)
+
+### Update inference settings
+
+In the previous step, we downloaded the file `mlflow-model/artifacts/settings.json` from the best model. You can use it to update the inference settings before registering the model, although it's recommended to use the same parameters as training for the best performance.
+
+Each of the tasks (and some models) has a set of parameters. By default, we use the same values for the parameters that were used during the training and validation. Depending on the behavior that we need when using the model for inference, we can change these parameters. Below you can find a list of parameters for each task type and model.
+
+| Task | Parameter name | Default |
+| |- | |
+|Image classification (multi-class and multi-label) | `valid_resize_size`<br>`valid_crop_size` | 256<br>224 |
+|Object detection | `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
+|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`nms_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
+|Instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img`<br>`mask_pixel_score_threshold`<br>`max_number_of_polygon_points`<br>`export_as_image`<br>`image_type` | 600<br>1333<br>0.3<br>0.5<br>100<br>0.5<br>100<br>False<br>JPG|
+
+For a detailed description on task specific hyperparameters, please refer to [Hyperparameters for computer vision tasks in automated machine learning](./reference-automl-images-hyperparameters.md).
+
+If you want to use tiling and control its behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio`, and `tile_predictions_nms_thresh`. For more details on these parameters, see [Train a small object detection model using AutoML](./how-to-use-automl-small-object-detect.md).
++++
+## Example notebooks
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs). Check the folders with the 'automl-image-' prefix for samples specific to building computer vision models.
-You can configure the model deployment endpoint name and the inferencing cluster to use for your model deployment in the **Deploy a model** pane.
-![Deploy configuration](./media/how-to-auto-train-image-models/deploy-image-model.png)
## Code examples
-# [CLI v2](#tab/CLI-v2)
+
+# [Azure CLI](#tab/cli)
Review detailed code examples and use cases in the [azureml-examples repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs).
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs).
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Last updated 03/15/2022
# Set up AutoML to train a natural language processing model (preview) > [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"] > * [v1](./v1/how-to-auto-train-nlp-models-v1.md) > * [v2 (current version)](how-to-auto-train-nlp-models.md)
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
## Prerequisites
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
However, there are key differences:
* If you enable long range text, then a GPU with higher memory is required, such as the [NCv3](../virtual-machines/ncv3-series.md) series or [ND](../virtual-machines/nd-series.md) series. * The `enable_long_range_text` parameter is only available for multi-class classification tasks.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
For CLI v2 AutoML jobs you configure your experiment in a YAML file like the fol
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
Multi-label text classification|`"eng"` <br> `"deu"` <br> `"mul"`| English&nbs
Multi-class text classification|`"eng"` <br> `"deu"` <br> `"mul"`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT Named entity recognition (NER)|`"eng"` <br> `"deu"` <br> `"mul"`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
featurization:
dataset_language: "eng" ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
text_classification_job.set_featurization(dataset_language='eng')
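For illustration, here's a minimal sketch of configuring a text classification job and setting the dataset language with the Python SDK v2. The compute name, data folders, and target column are placeholders:

```python
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

# placeholders: substitute your own compute, data folders, and label column
text_classification_job = automl.text_classification(
    compute="gpu-cluster",
    experiment_name="dpv2-nlp-demo",
    training_data=Input(type=AssetTypes.MLTABLE, path="./training-mltable-folder"),
    validation_data=Input(type=AssetTypes.MLTABLE, path="./validation-mltable-folder"),
    target_column_name="label",
)

# pick the pre-trained text DNN by language code ("eng", "deu", or "mul")
text_classification_job.set_featurization(dataset_language="eng")
```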
You can also run your NLP experiments with distributed training on an Azure ML compute cluster.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
enable_distributed_dnn_training = True
## Submit the AutoML job
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
To submit your AutoML job, you can run the following CLI v2 command with the pat
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION] ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
ml_client.jobs.stream(returned_job.name)
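For reference, here's a minimal sketch of submitting the configured job and streaming its logs, assuming an authenticated `ml_client` and the job object from the earlier configuration step:

```python
# submit the job to the workspace and wait on the streamed logs
returned_job = ml_client.jobs.create_or_update(text_classification_job)
ml_client.jobs.stream(returned_job.name)
```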
## Code examples
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
+
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
+ See the following sample YAML files for each NLP task.
* [Multi-label text classification](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/automl-standalone-jobs/cli-automl-text-classification-multilabel-paper-cat/cli-automl-text-classification-multilabel-paper-cat.yml) * [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/automl-standalone-jobs/cli-automl-text-ner-conll/cli-automl-text-ner-conll2003.yml)
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
The following snippet creates the autoscale profile:
> [!NOTE] > For more, see the [reference page for autoscale](/cli/azure/monitor/autoscale)
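A comparable profile can also be created from Python with the `azure-mgmt-monitor` package. The following is a hedged sketch, not the article's own snippet; the location, resource IDs, and capacity values are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP>"

# full ARM ID of the online deployment to scale (placeholder)
deployment_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.MachineLearningServices/workspaces/<WORKSPACE>"
    "/onlineEndpoints/<ENDPOINT>/deployments/<DEPLOYMENT>"
)

mon_client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

mon_client.autoscale_settings.create_or_update(
    resource_group,
    "my-scale-settings",
    parameters={
        "location": "eastus",
        "target_resource_uri": deployment_id,
        "profiles": [
            {
                "name": "my-scale-settings",
                # scale between 2 and 5 instances, starting at 2
                "capacity": {"minimum": "2", "maximum": "5", "default": "2"},
                "rules": [],
            }
        ],
    },
)
```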
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
In [Azure Machine Learning studio](https://ml.azure.com), select your workspace and then select __Endpoints__ from the left side of the page. Once the endpoints are listed, select the one you want to configure.
The rule is part of the `my-scale-settings` profile (`autoscale-name` matches th
> [!NOTE] > For more information on the CLI syntax, see [`az monitor autoscale`](/cli/azure/monitor/autoscale).
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
In the __Rules__ section, select __Add a rule__. The __Scale rule__ page is displayed. Use the following information to populate the fields on this page:
When load is light, a scaling in rule can reduce the number of VM instances. The
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_in_on_cpu_util" :::
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
In the __Rules__ section, select __Add a rule__. The __Scale rule__ page is displayed. Use the following information to populate the fields on this page:
The previous rules applied to the deployment. Now, add a rule that applies to th
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_up_on_request_latency" :::
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
From the bottom of the page, select __+ Add a scale condition__.
You can also create rules that apply only on certain days or at certain times. I
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="weekend_profile" :::
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
From the bottom of the page, select __+ Add a scale condition__. On the new scale condition, use the following information to populate the fields:
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Last updated 05/24/2022
# Create data assets > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](./v1/how-to-create-register-datasets.md) > * [v2 (current version)](how-to-create-data-assets.md) - In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create Data from datastores, Azure Storage, public URLs, and local files. The benefits of creating data assets are:
When you create a data asset in Azure Machine Learning, you'll need to specify a
The following example shows how to create a *folder* as an asset:
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
Create a `YAML` file (`<file-name>.yml`):
Next, create the data asset using the CLI:
az ml data create -f <file-name>.yml ```
-# [Python-SDK](#tab/Python-SDK)
+# [Python SDK](#tab/Python-SDK)
You can create a data asset in Azure Machine Learning using the following Python Code:
ml_client.data.create_or_update(my_data)
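For reference, here's a complete minimal sketch of the folder case; the local path, asset name, and version are illustrative:

```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# illustrative values: point `path` at the folder you want to register
my_data = Data(
    path="./sample-data/",
    type=AssetTypes.URI_FOLDER,
    name="sample-folder-asset",
    version="1",
    description="Example folder registered as a data asset",
)
ml_client.data.create_or_update(my_data)
```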
The following example shows how to create a *specific file* as a data asset:
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
Sample `YAML` file `<file-name>.yml` for data in local path is as below:
path: <uri>
> az ml data create -f <file-name>.yml ```
-# [Python-SDK](#tab/Python-SDK)
+# [Python SDK](#tab/Python-SDK)
```python from azure.ai.ml.entities import Data from azure.ai.ml.constants import AssetTypes
The `uri` parameter in `mltable.load()` should be a valid path to a local or clo
The following example shows how to create an `mltable` data asset. The `path` can be any of the supported path formats outlined above.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
Create a `YAML` file (`<file-name>.yml`):
Next, create the data asset using the CLI:
az ml data create -f <file-name>.yml ```
-# [Python-SDK](#tab/Python-SDK)
+# [Python SDK](#tab/Python-SDK)
You can create a data asset in Azure Machine Learning using the following Python Code:
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
ms.devlang: azurecli
# How to deploy an AutoML model to an online endpoint - > [!IMPORTANT] > SDK v2 is currently in public preview.
To deploy using these files, you can use either the studio or the Azure CLI.
1. Complete all the steps in wizard to create an online endpoint and deployment
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
You'll need to modify this file to use the files you downloaded from the AutoML
After you create a deployment, you can score it as described in [Invoke the endpoint to score data by using your model](how-to-deploy-managed-online-endpoints.md#invoke-the-endpoint-to-score-data-by-using-your-model).
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Custom container deployments can use web servers other than the default Python F
* To deploy locally, you must have [Docker engine](https://docs.docker.com/engine/install/) running locally. This step is **highly recommended**. It will help you debug issues.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
* Install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
Custom container deployments can use web servers other than the default Python F
az configure --defaults workspace=<azureml workspace name> group=<resource group> ```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
* If you haven't installed Python SDK v2, please install with this command:
Custom container deployments can use web servers other than the default Python F
To follow along with this tutorial, download the source code below.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
```azurecli
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli
```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
Now that you've tested locally, stop the image:
## Deploy your online endpoint to Azure Next, deploy your online endpoint to Azure.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
### Create a YAML file for your endpoint and deployment
__tfserving-deployment.yml__
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/custom-container/tfserving-deployment.yml":::
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
### Connect to Azure Machine Learning workspace Connect to Azure Machine Learning Workspace, configure workspace details, and get a handle to the workspace as follows:
For example, if you have a directory structure of `/azureml-examples/cli/endpoin
:::image type="content" source="./media/how-to-deploy-custom-container/local-directory-structure.png" alt-text="Diagram showing a tree view of the local directory structure.":::
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
and `tfserving-deployment.yml` contains:
model:
path: ./half_plus_two ```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
and `Model` class contains:
You can optionally configure your `model_mount_path`. It enables you to change t
> [!IMPORTANT] > The `model_mount_path` must be a valid absolute path in Linux (the OS of the container image).
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
For example, you can include the `model_mount_path` parameter in your _tfserving-deployment.yml_:
model_mount_path: /var/tfserving-model-mount
..... ```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
For example, you can include the `model_mount_path` parameter in your `ManagedOnlineDeployment` class:
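For illustration, here's a hedged sketch of such a deployment. The names mirror the tfserving example in this article, and the environment definition is abbreviated (the full sample also sets the inference routes):

```python
from azure.ai.ml.entities import Environment, ManagedOnlineDeployment, Model

# names mirror the tfserving example; adjust to your own resources
blue_deployment = ManagedOnlineDeployment(
    name="tfserving-deployment",
    endpoint_name="tfserving-endpoint",
    model=Model(name="tfserving-mounted", version="1", path="./half_plus_two"),
    model_mount_path="/var/tfserving-model-mount",
    environment=Environment(image="docker.io/tensorflow/serving:latest"),
    instance_type="Standard_DS2_v2",
    instance_count=1,
)
```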
then your model will be located at `/var/tfserving-model-mount/tfserving-deploym
### Create your endpoint and deployment
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
Now that you've understood how the YAML was constructed, create your endpoint.
az ml online-deployment create --name tfserving-deployment -f endpoints/online/c
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
Using the `MLClient` created earlier, we will now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
ml_client.begin_create_or_update(blue_deployment)
Once your deployment completes, see if you can make a scoring request to the deployed endpoint.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-tfserving.sh" id="invoke_endpoint":::
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
Using the `MLClient` created earlier, we will get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters: - `endpoint_name` - Name of the endpoint
ml_client.online_endpoints.invoke(
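For reference, here's a minimal sketch of a complete call; the request file name and deployment name are illustrative:

```python
# send a scoring request to the deployed endpoint
response = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    request_file="sample-request.json",
    deployment_name="tfserving-deployment",
)
print(response)
```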
Now that you've successfully scored with your endpoint, you can delete it:
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
```azurecli
az ml online-endpoint delete --name tfserving-endpoint
```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
```python ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
Title: Model interpretability (preview)
-description: Learn how to understand & explain how your machine learning model makes predictions during training & inferencing using Azure Machine Learning CLI and Python SDK.
+description: Learn how your machine learning model makes predictions during training and inferencing by using the Azure Machine Learning CLI and Python SDK.
Last updated 08/17/2022
This article describes methods you can use for model interpretability in Azure Machine Learning. > [!IMPORTANT]
-> With the release of the Responsible AI dashboard which includes model interpretability, we recommend users to migrate to the new experience as the older SDKv1 preview model interpretability dashboard will no longer be actively maintained.
+> With the release of the Responsible AI dashboard, which includes model interpretability, we recommend that you migrate to the new experience, because the older SDK v1 preview model interpretability dashboard will no longer be actively maintained.
-## Why is model interpretability important to model debugging?
+## Why model interpretability is important to model debugging
-When machine learning models are used in ways that impact peopleΓÇÖs lives, it's critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as model debugging (Why did my model make this mistake? How can I improve my model?), human-AI collaboration (How can I understand and trust the modelΓÇÖs decisions?), and regulatory compliance (Does my model satisfy legal requirements?).
+When you're using machine learning models in ways that affect people's lives, it's critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as:
+* Model debugging: Why did my model make this mistake? How can I improve my model?
+* Human-AI collaboration: How can I understand and trust the model's decisions?
+* Regulatory compliance: Does my model satisfy legal requirements?
-The interpretability component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the ΓÇ£diagnoseΓÇ¥ stage of the model lifecycle workflow by generating human-understandable descriptions of the predictions of a Machine Learning model. It provides multiple views into a modelΓÇÖs behavior: global explanations (for example, what features affect the overall behavior of a loan allocation model) and local explanations (for example, why a customerΓÇÖs loan application was approved or rejected). One can also observe model explanations for a selected cohort as a subgroup of data points. This is valuable when, for example, assessing fairness in model predictions for individuals in a particular demographic group. The local explanation tab of this component also represents a full data visualization, which is great for general eyeballing the data and looking at differences between correct and incorrect predictions of each cohort.
+The interpretability component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the "diagnose" stage of the model lifecycle workflow by generating human-understandable descriptions of the predictions of a machine learning model. It provides multiple views into a model's behavior:
+* Global explanations: For example, what features affect the overall behavior of a loan allocation model?
+* Local explanations: For example, why was a customer's loan application approved or rejected?
-The capabilities of this component are founded by the [InterpretML](https://interpret.ml/) package, generating model explanations.
+You can also observe model explanations for a selected cohort as a subgroup of data points. This approach is valuable when, for example, you're assessing fairness in model predictions for individuals in a particular demographic group. The **Local explanation** tab of this component also represents a full data visualization, which is great for general eyeballing of the data and looking at differences between correct and incorrect predictions of each cohort.
-Use interpretability when you need to...
+The capabilities of this component are founded on the [InterpretML](https://interpret.ml/) package, which generates model explanations.
-- Determine how trustworthy your AI systemΓÇÖs predictions are by understanding what features are most important for the predictions.-- Approach the debugging of your model by understanding it first and identifying if the model is using healthy features or merely spurious correlations.-- Uncover potential sources of unfairness by understanding whether the model is predicting based on sensitive features or features highly correlated with them.-- Build trust with end-users in your modelΓÇÖs decisions by generating local explanations to illustrate their outcomes.-- Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
+Use interpretability when you need to:
-## How to interpret your model?
+* Determine how trustworthy your AI system's predictions are by understanding what features are most important for the predictions.
+* Approach the debugging of your model by understanding it first and identifying whether the model is using healthy features or merely false correlations.
+* Uncover potential sources of unfairness by understanding whether the model is basing predictions on sensitive features or on features that are highly correlated with them.
+* Build user trust in your model's decisions by generating local explanations to illustrate their outcomes.
+* Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
-In machine learning, **features** are the data fields used to predict a target data point. For example, to predict credit risk, data fields for age, account size, and account age might be used. In this case, age, account size, and account age are **features**. Feature importance tells you how each data field affected the model's predictions. For example, age may be heavily used in the prediction while account size and account age don't affect the prediction values significantly. This process allows data scientists to explain resulting predictions, so that stakeholders have visibility into what features are most important in the model.
+## How to interpret your model
-Using the classes and methods in the Responsible AI dashboard using SDK v2 and CLI v2, you can:
+In machine learning, *features* are the data fields you use to predict a target data point. For example, to predict credit risk, you might use data fields for age, account size, and account age. Here, age, account size, and account age are features. Feature importance tells you how each data field affects the model's predictions. For example, although you might use age heavily in the prediction, account size and account age might not affect the prediction values significantly. Through this process, data scientists can explain resulting predictions in ways that give stakeholders visibility into the model's most important features.
-- Explain model prediction by generating feature importance values for the entire model (global explanation) and/or individual data points (local explanation).-- Achieve model interpretability on real-world datasets at scale.-- Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
+By using the classes and methods in the Responsible AI dashboard and by using SDK v2 and CLI v2, you can:
-Using the classes and methods in the SDK v1, you can:
+* Explain model prediction by generating feature-importance values for the entire model (global explanation) or individual data points (local explanation).
+* Achieve model interpretability on real-world datasets at scale.
+* Use an interactive visualization dashboard to discover patterns in your data and its explanations at training time.
-- Explain model prediction by generating feature importance values for the entire model and/or individual data points.-- Achieve model interpretability on real-world datasets at scale, during training and inference.-- Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
+By using the classes and methods in the SDK v1, you can:
-The model interpretability classes are made available through the following SDK v1 package: (Learn how to [install SDK packages for Azure Machine Learning](/python/api/overview/azure/ml/install))
+* Explain model prediction by generating feature-importance values for the entire model or individual data points.
+* Achieve model interpretability on real-world datasets at scale during training and inference.
+* Use an interactive visualization dashboard to discover patterns in your data and its explanations at training time.
-* `azureml.interpret`, contains functionalities supported by Microsoft.
-
-Use `pip install azureml-interpret` for general use.
+Model interpretability classes are made available through the SDK&nbsp;v1 package. For more information, see [Install SDK packages for Azure Machine Learning](/python/api/overview/azure/ml/install) and [azureml.interpret](/python/api/azureml-interpret/azureml.interpret).
## Supported model interpretability techniques
-The Responsible AI dashboard and `azureml-interpret` use the interpretability techniques developed in [Interpret-Community](https://github.com/interpretml/interpret-community/), an open-source Python package for training interpretable models and helping to explain opaque-box AI systems. Opaque-box models are those for which we have no information about their internal workings. interpret-Community serves as the host for this SDK's supported explainers.
-[Interpret-Community](https://github.com/interpretml/interpret-community/) serves as the host for the following supported explainers, and currently supports the following interpretability techniques:
+The Responsible AI dashboard and `azureml-interpret` use the interpretability techniques that were developed in [Interpret-Community](https://github.com/interpretml/interpret-community/), an open-source Python package for training interpretable models and helping to explain opaque-box AI systems. Opaque-box models are those for which we have no information about their internal workings.
+
+Interpret-Community serves as the host for the following supported explainers, and currently supports the interpretability techniques presented in the next sections.
### Supported in Responsible AI dashboard in Python SDK v2 and CLI v2
-|Interpretability Technique|Description|Type|
-|--|--|--|
-|Mimic Explainer (Global Surrogate) + SHAP tree|Mimic explainer is based on the idea of training global surrogate models to mimic opaque-box models. A global surrogate model is an intrinsically interpretable model that is trained to approximate the predictions of any opaque-box model as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the opaque-box model. The Responsible AI dashboard uses LightGBM (LGBMExplainableModel), paired with the SHAP (SHapley Additive exPlanations) tree explainer, which is a specific explainer to trees and ensembles of trees. The combination of LightGBM and SHAP tree provide model-agnostic global and local explanations of your machine learning models.|Model-agnostic|
+|Interpretability technique|Description|Type|
+|--|--|--|
+|Mimic Explainer (Global Surrogate) + SHAP tree|Mimic Explainer is based on the idea of training global surrogate models to mimic opaque-box models. A global surrogate model is an intrinsically interpretable model that's trained to approximate the predictions of any opaque-box model as accurately as possible.<br><br> Data scientists can interpret the surrogate model to draw conclusions about the opaque-box model. The Responsible AI dashboard uses LightGBM (LGBMExplainableModel), paired with the SHAP (SHapley Additive exPlanations) Tree Explainer, which is a specific explainer to trees and ensembles of trees. The combination of LightGBM and SHAP tree provides model-agnostic global and local explanations of your machine learning models.|Model-agnostic|
+ ### Supported in Python SDK v1
-|Interpretability Technique|Description|Type|
-|--|--|--|
-|SHAP Tree Explainer| [SHAP](https://github.com/slundberg/shap)'s tree explainer, which focuses on polynomial time fast SHAP value estimation algorithm specific to **trees and ensembles of trees**.|Model-specific|
-|SHAP Deep Explainer| Based on the explanation from SHAP, Deep Explainer "is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with DeepLIFT described in the [SHAP NIPS paper](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions). **TensorFlow** models and **Keras** models using the TensorFlow backend are supported (there's also preliminary support for PyTorch)".|Model-specific|
-|SHAP Linear Explainer| SHAP's Linear explainer computes SHAP values for a **linear model**, optionally accounting for inter-feature correlations.|Model-specific|
-|SHAP Kernel Explainer| SHAP's Kernel explainer uses a specially weighted local linear regression to estimate SHAP values for **any model**.|Model-agnostic|
-|Mimic Explainer (Global Surrogate)| Mimic explainer is based on the idea of training [global surrogate models](https://christophm.github.io/interpretable-ml-book/global.html) to mimic opaque-box models. A global surrogate model is an intrinsically interpretable model that is trained to approximate the predictions of **any opaque-box model** as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the opaque-box model. You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel), Stochastic Gradient Descent explainable model (SGDExplainableModel), and Decision Tree (DecisionTreeExplainableModel).|Model-agnostic|
-|Permutation Feature Importance Explainer (PFI)| Permutation Feature Importance is a technique used to explain classification and regression models that is inspired by [Breiman's Random Forests paper](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) (see section 10). At a high level, the way it works is by randomly shuffling data one feature at a time for the entire dataset and calculating how much the performance metric of interest changes. The larger the change, the more important that feature is. PFI can explain the overall behavior of **any underlying model** but doesn't explain individual predictions. |Model-agnostic|
+|Interpretability technique|Description|Type|
+|--|--|--|
+|SHAP Tree Explainer| The [SHAP](https://github.com/slundberg/shap) Tree Explainer, which focuses on a fast, polynomial-time SHAP value-estimation algorithm that's specific to *trees and ensembles of trees*.|Model-specific|
+|SHAP Deep Explainer| Based on the explanation from SHAP, Deep Explainer is a "high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with DeepLIFT described in the [SHAP NIPS paper](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions). *TensorFlow* models and *Keras* models using the TensorFlow back end are supported (there's also preliminary support for PyTorch)."|Model-specific|
+|SHAP Linear Explainer| The SHAP Linear Explainer computes SHAP values for a *linear model*, optionally accounting for inter-feature correlations.|Model-specific|
+|SHAP Kernel Explainer| The SHAP Kernel Explainer uses a specially weighted local linear regression to estimate SHAP values for *any model*.|Model-agnostic|
+|Mimic Explainer (Global Surrogate)| Mimic Explainer is based on the idea of training [global surrogate models](https://christophm.github.io/interpretable-ml-book/global.html) to mimic opaque-box models. A global surrogate model is an intrinsically interpretable model that's trained to approximate the predictions of *any opaque-box model* as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the opaque-box model. You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel), Stochastic Gradient Descent explainable model (SGDExplainableModel), or Decision Tree (DecisionTreeExplainableModel).|Model-agnostic|
+|Permutation Feature Importance Explainer| Permutation Feature Importance (PFI) is a technique used to explain classification and regression models that's inspired by [Breiman's Random Forests paper](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) (see section 10). At a high level, the way it works is by randomly shuffling data one feature at a time for the entire dataset and calculating how much the performance metric of interest changes. The larger the change, the more important that feature is. PFI can explain the overall behavior of *any underlying model* but doesn't explain individual predictions. |Model-agnostic|
-Besides the interpretability techniques described above, we support another SHAP-based explainer, called `TabularExplainer`. Depending on the model, `TabularExplainer` uses one of the supported SHAP explainers:
+Besides the interpretability techniques described above, we support another SHAP-based explainer, called Tabular Explainer. Depending on the model, Tabular Explainer uses one of the supported SHAP explainers:
-* TreeExplainer for all tree-based models
-* DeepExplainer for DNN models
-* LinearExplainer for linear models
-* KernelExplainer for all other models
+* Tree Explainer for all tree-based models
+* Deep Explainer for deep neural network (DNN) models
+* Linear Explainer for linear models
+* Kernel Explainer for all other models
-`TabularExplainer` has also made significant feature and performance enhancements over the direct SHAP Explainers:
+Tabular Explainer has also made significant feature and performance enhancements over the direct SHAP explainers:
-* **Summarization of the initialization dataset**. In cases where speed of explanation is most important, we summarize the initialization dataset and generate a small set of representative samples, which speeds up the generation of overall and individual feature importance values.
-* **Sampling the evaluation data set**. If the user passes in a large set of evaluation samples but doesn't actually need all of them to be evaluated, the sampling parameter can be set to true to speed up the calculation of overall model explanations.
+* **Summarization of the initialization dataset**: When speed of explanation is most important, we summarize the initialization dataset and generate a small set of representative samples. This approach speeds up the generation of overall and individual feature importance values.
+* **Sampling the evaluation data set**: If you pass in a large set of evaluation samples but don't actually need all of them to be evaluated, you can set the sampling parameter to `true` to speed up the calculation of overall model explanations.
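+For illustration, here's a minimal sketch of using Tabular Explainer from the SDK v1 `azureml-interpret` package; `model`, `x_train`, and `x_test` are assumed to come from your own training code:
+
+```python
+from interpret.ext.blackbox import TabularExplainer
+
+# `model`, `x_train`, and `x_test` come from your own training code
+explainer = TabularExplainer(model, x_train)
+
+# global explanation: feature-importance values for the whole model
+global_explanation = explainer.explain_global(x_test)
+print(global_explanation.get_feature_importance_dict())
+
+# local explanation: feature-importance values for individual rows
+local_explanation = explainer.explain_local(x_test[0:5])
+```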
-The following diagram shows the current structure of supported explainers.
+The following diagram shows the current structure of supported explainers:
:::image type="content" source="./media/how-to-machine-learning-interpretability/interpretability-architecture.png" alt-text=" Diagram of Machine Learning Interpretability architecture." lightbox="./media/how-to-machine-learning-interpretability/interpretability-architecture.png"::: ## Supported machine learning models
-The `azureml.interpret` package of the SDK supports models trained with the following dataset formats:
-- `numpy.array`-- `pandas.DataFrame`-- `iml.datatypes.DenseData`-- `scipy.sparse.csr_matrix`
+The `azureml.interpret` package of the SDK supports models that are trained with the following dataset formats:
+
+* `numpy.array`
+* `pandas.DataFrame`
+* `iml.datatypes.DenseData`
+* `scipy.sparse.csr_matrix`
+
+The explanation functions accept both models and pipelines as input. If a model is provided, it must implement the prediction function `predict` or `predict_proba` that conforms to the Scikit convention. If your model doesn't support this, you can wrap it in a function that generates the same outcome as `predict` or `predict_proba` in Scikit and use that wrapper function with the selected explainer.
-The explanation functions accept both models and pipelines as input. If a model is provided, the model must implement the prediction function `predict` or `predict_proba` that conforms to the Scikit convention. If your model doesn't support this, you can wrap your model in a function that generates the same outcome as `predict` or `predict_proba` in Scikit and use that wrapper function with the selected explainer. If a pipeline is provided, the explanation function assumes that the running pipeline script returns a prediction. Using this wrapping technique, `azureml.interpret` can support models trained via PyTorch, TensorFlow, and Keras deep learning frameworks as well as classic machine learning models.
+If you provide a pipeline, the explanation function assumes that the running pipeline script returns a prediction. When you use this wrapping technique, `azureml.interpret` can support models that are trained via PyTorch, TensorFlow, and Keras deep learning frameworks as well as classic machine learning models.
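+For illustration, here's a minimal sketch of the wrapping technique; `score_batch` stands in for whatever (hypothetical) native prediction method your model exposes:
+
+```python
+import numpy as np
+
+class ScikitStyleWrapper:
+    """Adapts a model without predict/predict_proba to the Scikit convention."""
+
+    def __init__(self, native_model):
+        self.native_model = native_model
+
+    def predict(self, data):
+        # delegate to the model's own (hypothetical) inference method
+        # and return predictions as an ndarray, per the Scikit convention
+        return np.asarray(self.native_model.score_batch(data))
+```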
## Local and remote compute target
-The `azureml.interpret` package is designed to work with both local and remote compute targets. If run locally, The SDK functions won't contact any Azure services.
+The `azureml.interpret` package is designed to work with both local and remote compute targets. If you run the package locally, the SDK functions won't contact any Azure services.
-You can run explanation remotely on Azure Machine Learning Compute and log the explanation info into the Azure Machine Learning Run History Service. Once this information is logged, reports and visualizations from the explanation are readily available on Azure Machine Learning studio for analysis.
+You can run the explanation remotely on Azure Machine Learning Compute and log the explanation info into the Azure Machine Learning Run History Service. After this information is logged, reports and visualizations from the explanation are readily available on Azure Machine Learning studio for analysis.
## Next steps -- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).-- Explore the [supported interpretability visualizations](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) of the Responsible AI dashboard.-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.-- Learn how to enable [interpretability for automated machine learning models](how-to-machine-learning-interpretability-automl.md).
+* Learn how to generate the Responsible AI dashboard via [CLI v2 and SDK v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
+* Explore the [supported interpretability visualizations](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) of the Responsible AI dashboard.
+* Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
+* Learn how to enable [interpretability for automated machine learning models](how-to-machine-learning-interpretability-automl.md).
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
Last updated 05/26/2022
# Prepare data for computer vision tasks with automated machine learning (preview)
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
> * [v1](v1/how-to-prepare-datasets-for-automl-images-v1.md) > * [v2 (current version)](how-to-prepare-datasets-for-automl-images.md)
If you already have a data labeling project and you want to use that data, you c
### Using pre-labeled training data If you have previously labeled data that you would like to use to train your model, you will first need to upload the images to the default Azure Blob Storage of your Azure ML Workspace and register them as a data asset.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] Create a .yml file with the following configuration.
To upload the images as a data asset, you run the following CLI v2 command with
az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION] ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ [!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
# Read and write data in a job + > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](v1/how-to-train-with-datasets.md) > * [v2 (current version)](how-to-read-write-data-v2.md) - Learn how to read and write data for your jobs with the Azure Machine Learning Python SDK v2 (preview) and the Azure Machine Learning CLI extension v2. ## Prerequisites
Type | Input/Output | `upload` | `download` | `ro_mount` | `rw_mount` | `direct`
## Read data in a job
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
Create a job specification YAML file (`<file-name>.yml`). Specify in the `inputs` section of the job:
Next, run in the CLI
az ml job create -f <file-name>.yml ```
-# [Python-SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
The `Input` class allows you to define:
This section outlines how you can read V1 `FileDataset` and `TabularDataset` dat
#### Read a `FileDataset`
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
Create a job specification YAML file (`<file-name>.yml`), with the type set to `mltable` and the mode set to `eval_mount`:
Next, run in the CLI
az ml job create -f <file-name>.yml ```
-# [Python-SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
In the `Input` object specify the `type` as `AssetTypes.MLTABLE` and `mode` as `InputOutputModes.EVAL_MOUNT`:
returned_job.services["Studio"].endpoint
#### Read a `TabularDataset`
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
Create a job specification YAML file (`<file-name>.yml`), with the type set to `mltable` and the mode set to `direct`:
Next, run in the CLI
az ml job create -f <file-name>.yml ```
-# [Python-SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
In the `Input` object specify the `type` as `AssetTypes.MLTABLE` and `mode` as `InputOutputModes.DIRECT`:
returned_job.services["Studio"].endpoint
In your job you can write data to your cloud-based storage using *outputs*. The [Supported modes](#supported-modes) section showed that only job *outputs* can write data because the mode can be either `rw_mount` or `upload`.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
Create a job specification YAML file (`<file-name>.yml`), with the `outputs` section populated with the type and path of where you would like to write your data to:
Next create a job using the CLI:
az ml job create --file <file-name>.yml ```
-# [Python-SDK](#tab/Python-SDK)
+# [Python SDK](#tab/python)
```python from azure.ai.ml import command
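For reference, here's a complete minimal sketch of a job that writes to a datastore path; the command, environment, compute name, and output path are illustrative:

```python
from azure.ai.ml import Output, command
from azure.ai.ml.constants import AssetTypes, InputOutputModes

# illustrative job: writes a file into the mounted output folder
job = command(
    command="echo 'hello' > ${{outputs.output_data}}/hello.txt",
    environment="azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",
    outputs={
        "output_data": Output(
            type=AssetTypes.URI_FOLDER,
            path="azureml://datastores/workspaceblobstore/paths/job-output/",
            mode=InputOutputModes.RW_MOUNT,
        )
    },
)
returned_job = ml_client.jobs.create_or_update(job)
```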
machine-learning How To Responsible Ai Dashboard Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard-sdk-cli.md
Title: Generate Responsible AI dashboard with YAML and Python (preview)
+ Title: Generate a Responsible AI dashboard with YAML and Python (preview)
-description: Learn how to generate the Responsible AI dashboard with Python and YAML in Azure Machine Learning.
+description: Learn how to generate a Responsible AI dashboard with Python and YAML in Azure Machine Learning.
Last updated 08/17/2022
-# Generate Responsible AI dashboard with YAML and Python (preview)
+# Generate a Responsible AI dashboard with YAML and Python (preview)
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-The Responsible AI (RAI) dashboard can be generated via a pipeline job using RAI components. There are six core components for creating Responsible AI dashboards, along with a couple of helper components. A sample experiment graph:
+You can generate a Responsible AI dashboard via a pipeline job by using Responsible AI components. There are six core components for creating Responsible AI dashboards, along with a couple of helper components. Here's a sample experiment graph:
:::image type="content" source="./media/how-to-responsible-ai-dashboard-sdk-cli/sample-experiment-graph.png" alt-text="Screenshot of a sample experiment graph." lightbox= "./media/how-to-responsible-ai-dashboard-sdk-cli/sample-experiment-graph.png":::
-## Getting started
+## Get started
To use the Responsible AI components, you must first register them in your Azure Machine Learning workspace. This section documents the required steps. ### Prerequisites+ You'll need: -- An AzureML workspace-- A git installation
+- An Azure Machine Learning workspace
+- A Git installation
- A MiniConda installation - An Azure CLI installation
-### Installation Steps
+### Installation steps
+
+1. Clone the repository:
-1. Clone the Repository
```bash
git clone https://github.com/Azure/RAI-vNext-Preview.git
cd RAI-vNext-Preview
```
-2. Log into Azure
+
+2. Sign in to Azure:
```bash
az login
```
-3. Run the setup script
-
- We provide a setup script which:
-
- - Creates a new conda environment with a name you specify
- - Installs all the required Python packages
- - Registers all the RAI components in your AzureML workspace
- - Registers some sample datasets in your AzureML workspace
- - Sets the defaults for the Azure CLI to point to your workspace
-
- We provide PowerShell and bash versions of the script. From the repository root, run:
+3. From the repository root, run the following PowerShell setup script:
```powershell
Quick-Setup.ps1
```
- This will prompt for the desired conda environment name and AzureML workspace details. Alternatively, use the bash script:
+ Running the script:
+
+ - Creates a new conda environment with a name you specify.
+ - Installs all the required Python packages.
+ - Registers all the Responsible AI components in your Azure Machine Learning workspace.
+ - Registers some sample datasets in your workspace.
+ - Sets the defaults for the Azure CLI to point to your workspace.
+
+ Running the script prompts for the desired conda environment name and Azure Machine Learning workspace details.
+
+ Alternatively, you can use the following Bash script:
```bash
./quick-setup.bash <CONDA-ENV-NAME> <SUBSCRIPTION-ID> <RESOURCE-GROUP-NAME> <WORKSPACE-NAME>
```
- This script will echo the supplied parameters, and pause briefly before continuing.
+ This script echoes the supplied parameters, and it pauses briefly before continuing.
## Responsible AI components
-The core components for constructing a Responsible AI dashboard in AzureML are:
+The core components for constructing the Responsible AI dashboard in Azure Machine Learning are:
-- `RAI Insights Dashboard Constructor`
+- RAI Insights dashboard constructor
- The tool components:
- - `Add Explanation to RAI Insights Dashboard`
- - `Add Causal to RAI Insights Dashboard`
- - `Add Counterfactuals to RAI Insights Dashboard`
- - `Add Error Analysis to RAI Insights Dashboard`
- - `Gather RAI Insights Dashboard`
+ - Add Explanation to RAI Insights dashboard
+ - Add Causal to RAI Insights dashboard
+ - Add Counterfactuals to RAI Insights dashboard
+ - Add Error Analysis to RAI Insights dashboard
+ - Gather RAI Insights dashboard
+The RAI Insights dashboard constructor and Gather RAI Insights dashboard components are always required, plus at least one of the tool components. However, it isn't necessary to use all the tools in every Responsible AI dashboard.
-The ` RAI Insights Dashboard Constructor` and `Gather RAI Insights Dashboard ` components are always required, plus at least one of the tool components. However, it isn't necessary to use all the tools in every Responsible AI dashboard.
-
-Below are specifications of the Responsible AI components and examples of code snippets in YAML and Python. To view the full code, see [sample YAML and Python notebook](https://aka.ms/RAIsamplesProgrammer)
+In the following sections are specifications of the Responsible AI components and examples of code snippets in YAML and Python. To view the full code, see [sample YAML and Python notebook](https://aka.ms/RAIsamplesProgrammer).
### Limitations The current set of components have a number of limitations on their use: -- All models must be in registered in AzureML in MLFlow format with a sklearn flavor.
+- All models must be registered in Azure Machine Learning in MLflow format with a sklearn (scikit-learn) flavor.
- The models must be loadable in the component environment. - The models must be pickleable.-- The models must be supplied to the RAI components using the 'Fetch Registered Model' component that we provide.-- The dataset inputs must be `pandas` DataFrames in Parquet format. -- A model must still be supplied even if only a causal analysis of the data is performed. The `DummyClassifier` and `DummyRegressor` estimators from SciKit-Learn can be used for this purpose.
+- The models must be supplied to the Responsible AI components by using the *fetch registered model* component, which we provide.
+- The dataset inputs must be pandas DataFrames in Parquet format.
+- A model must be supplied even if only a causal analysis of the data is performed. You can use the DummyClassifier and DummyRegressor estimators from scikit-learn for this purpose.
-### RAI Insights Dashboard Constructor
+### RAI Insights dashboard constructor
This component has three input ports:
This component has three input ports:
- The training dataset - The test dataset
-Use the train and test dataset that you used when training your model to generate model-debugging insights with components such as Error analysis and Model explanations. For components like Causal analysis that doesn't require a model, the train dataset will be used to train the causal model to generate the causal insights. The test dataset is used to populate your Responsible AI dashboard visualizations.
+To generate model-debugging insights with components such as error analysis and model explanations, use the training and test dataset that you used when you trained your model. For components like causal analysis, which doesn't require a model, you use the training dataset to train the causal model to generate the causal insights. You use the test dataset to populate your Responsible AI dashboard visualizations.
-The easiest way to supply the model is using our `Fetch Registered Model` component, which will be discussed below.
+The easiest way to supply the model is to use the fetch registered model component, which we discuss later in this article.
> [!NOTE]
-> Currently only models with MLFlow format, with a sklearn flavor are supported.
+> Currently, only models in MLflow format and with a sklearn flavor are supported.
-The two datasets should be file datasets (of type uri_file) in Parquet format. Tabular datasets aren't supported, we provide a `TabularDataset to Parquet file` component to help with conversions. The training and test datasets provided don't have to be the same datasets used in training the model (although it's permissible for them to be the same). By default, the test dataset is restricted to 5000 rows for performance reasons of the visualization UI.
+The two datasets should be file datasets (of type `uri_file`) in Parquet format. Tabular datasets aren't supported, but we provide a TabularDataset to Parquet file component to help with conversions. The training and test datasets provided don't have to be the same datasets that are used in training the model, but they can be the same. By default, the test dataset is restricted to 5,000 rows for performance reasons of the visualization UI.
The constructor component also accepts the following parameters:
-| Parameter name | Description | Type |
-|-|--|-|
-| title | Brief description of the dashboard | String |
-| task_type | Specifies whether the model is for classification or regression | String, `classification` or `regression` |
-| target_column_name | The name of the column in the input datasets, which the model is trying to predict | String |
-| maximum_rows_for_test_dataset | The maximum number of rows allowed in the test dataset (for performance reasons) | Integer (defaults to 5000) |
-| categorical_column_names | The columns in the datasets, which represent categorical data | Optional list of strings (see note below) |
-| classes | The full list of class labels in the training dataset | Optional list of strings (see note below) |
+| Parameter name | Description | Type |
+||||
+| `title` | Brief description of the dashboard. | String |
+| `task_type` | Specifies whether the model is for classification or regression. | String, `classification` or `regression` |
+| `target_column_name` | The name of the column in the input datasets, which the model is trying to predict. | String |
+| `maximum_rows_for_test_dataset` | The maximum number of rows allowed in the test dataset, for performance reasons. | Integer, defaults to 5,000 |
+| `categorical_column_names` | The columns in the datasets, which represent categorical data. | Optional list of strings<sup>1</sup> |
+| `classes` | The full list of class labels in the training dataset. | Optional list of strings<sup>1</sup> |
-> [!NOTE]
-> The lists should be supplied as a single JSON encoded string for`categorical_column_names` and `classes` inputs.
+<sup>1</sup> The lists should be supplied as a single JSON-encoded string for `categorical_column_names` and `classes` inputs.
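+For illustration, here's a minimal sketch of encoding these inputs; the column and class names are illustrative:
+
+```python
+import json
+
+# JSON-encode the lists into single strings before passing them
+# to the constructor component (column/class names are illustrative)
+categorical_column_names = json.dumps(["workclass", "education", "marital-status"])
+classes = json.dumps(["<=50K", ">50K"])
+```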
-The constructor component has a single output named `rai_insights_dashboard`. This is an empty dashboard, which the individual tool components will operate on, and then all the results will be assembled by the `Gather RAI Insights Dashboard` component at the end.
+The constructor component has a single output named `rai_insights_dashboard`. This is an empty dashboard, which the individual tool components operate on. All the results are assembled by the Gather RAI Insights dashboard component at the end.
# [YAML](#tab/yaml)
First load the component:
            client=ml_client, name="rai_insights_constructor", version="1"
        )
-#Then inside the pipeline:
+#Then inside the pipeline:
            construct_job = rai_constructor_component(
                title="From Python",
First load the component:
            )
```
-### Exporting pre-built Cohorts for score card generation
+### Export pre-built cohorts for scorecard generation
-Pre-built cohorts can be exported for use in score card generation. Find example of building cohorts in this Jupyter Notebook example: [responsibleaidashboard-diabetes-decision-making.ipynb](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/responsibleaidashboard-diabetes-decision-making.ipynb). Once a cohort is defined, it can be exported to json as follows:
+You can export pre-built cohorts for use in scorecard generation. For an example of building cohorts, see this Jupyter Notebook: [responsibleaidashboard-diabetes-decision-making.ipynb](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/responsibleaidashboard-diabetes-decision-making.ipynb). After you've defined a cohort, you can export it to JSON, as shown here:
```python
# cohort1 and cohort2 are cohorts of type raiwidgets.cohort.Cohort,
# as defined in the sample notebook
import json
json.dumps([cohort1.to_json(), cohort2.to_json()])
```
-A sample json string generated is shown below:
+A sample generated JSON string is shown here:
```json [
A sample json string generated is shown below:
] ```
-### Add Causal to RAI Insights Dashboard
-
-This component performs a causal analysis on the supplied datasets. It has a single input port, which accepts the output of the `RAI Insights Dashboard Constructor`. It also accepts the following parameters:
-
-| Parameter name | Description | Type |
-|-|--|-|
-| treatment_features | A list of feature names in the datasets, which are potentially ΓÇÿtreatableΓÇÖ to obtain different outcomes. | List of strings (see note below) |
-| heterogeneity_features | A list of feature names in the datasets, which might affect how the ΓÇÿtreatableΓÇÖ features behave. By default all features will be considered | Optional list of strings (see note below).|
-| nuisance_model | The model used to estimate the outcome of changing the treatment features. | Optional string. Must be ΓÇÿlinearΓÇÖ or ΓÇÿAutoMLΓÇÖ defaulting to ΓÇÿlinear.ΓÇÖ |
-| heterogeneity_model | The model used to estimate the effect of the heterogeneity features on the outcome. | Optional string. Must be ΓÇÿlinearΓÇÖ or ΓÇÿforestΓÇÖ defaulting to ΓÇÿlinear.ΓÇÖ |
-| alpha | Confidence level of confidence intervals | Optional floating point number. Defaults to 0.05. |
-| upper_bound_on_cat_expansion | Maximum expansion for categorical features. | Optional integer. Defaults to 50. |
-| treatment_cost | The cost of the treatments. If 0, then all treatments will have zero cost. If a list is passed, then each element is applied to one of the treatment_features. Each element can be a scalar value to indicate a constant cost of applying that treatment or an array indicating the cost for each sample. If the treatment is a discrete treatment, then the array for that feature should be two dimensional with the first dimension representing samples and the second representing the difference in cost between the non-default values and the default value. | Optional integer or list (see note below).|
-| min_tree_leaf_samples | Minimum number of samples per leaf in policy tree. | Optional integer. Defaults to 2 |
-| max_tree_depth | Maximum depth of the policy tree | Optional integer. Defaults to 2 |
-| skip_cat_limit_checks | By default, categorical features need to have several instances of each category in order for a model to be fit robustly. Setting this to True will skip these checks. |Optional Boolean. Defaults to False. |
-| categories | What categories to use for the categorical columns. If `auto`, then the categories will be inferred for all categorical columns. Otherwise, this argument should have as many entries as there are categorical columns. Each entry should be either `auto` to infer the values for that column or the list of values for the column. If explicit values are provided, the first value is treated as the "control" value for that column against which other values are compared. | Optional. `auto` or list (see note below.) |
-| n_jobs | Degree of parallelism to use. | Optional integer. Defaults to 1. |
-| verbose | Whether to provide detailed output during the computation. | Optional integer. Defaults to 1. |
-| random_state | Seed for the PRNG. | Optional integer. |
+### Add Causal to RAI Insights dashboard
-> [!NOTE]
-> For the `list` parameters: Several of the parameters accept lists of other types (strings, numbers, even other lists). To pass these into the component, they must first be JSON-encoded into a single string.
+This component performs a causal analysis on the supplied datasets. It has a single input port, which accepts the output of the RAI Insights dashboard constructor. It also accepts the following parameters:
+
+| Parameter name | Description | Type&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |
+|---|---|---|
+| `treatment_features` | A list of feature names in the datasets, which are potentially "treatable" to obtain different outcomes. | List of strings<sup>2</sup>. |
+| `heterogeneity_features` | A list of feature names in the datasets, which might affect how the "treatable" features behave. By default, all features will be considered. | Optional list of strings<sup>2</sup>.|
+| `nuisance_model` | The model used to estimate the outcome of changing the treatment features. | Optional string. Must be `linear` or `AutoML`, defaulting to `linear`. |
+| `heterogeneity_model` | The model used to estimate the effect of the heterogeneity features on the outcome. | Optional string. Must be `linear` or `forest`, defaulting to `linear`. |
+| `alpha` | Confidence level of confidence intervals. | Optional floating point number, defaults to 0.05. |
+| `upper_bound_on_cat_expansion` | The maximum expansion of categorical features. | Optional integer, defaults to 50. |
+| `treatment_cost` | The cost of the treatments. If 0, all treatments will have zero cost. If a list is passed, each element is applied to one of the `treatment_features`.<br><br>Each element can be a scalar value to indicate a constant cost of applying that treatment or an array indicating the cost for each sample. If the treatment is a discrete treatment, the array for that feature should be two dimensional, with the first dimension representing samples and the second representing the difference in cost between the non-default values and the default value. | Optional integer or list<sup>2</sup>.|
+| `min_tree_leaf_samples` | The minimum number of samples per leaf in the policy tree. | Optional integer, defaults to 2. |
+| `max_tree_depth` | The maximum depth of the policy tree. | Optional integer, defaults to 2. |
+| `skip_cat_limit_checks` | By default, categorical features need to have several instances of each category in order for a model to be fit robustly. Setting this to `True` will skip these checks. |Optional Boolean, defaults to `False`. |
+| `categories` | The categories to use for the categorical columns. If `auto`, the categories will be inferred for all categorical columns. Otherwise, this argument should have as many entries as there are categorical columns.<br><br>Each entry should be either `auto` to infer the values for that column or the list of values for the column. If explicit values are provided, the first value is treated as the "control" value for that column against which other values are compared. | Optional, `auto` or list<sup>2</sup>. |
+| `n_jobs` | The degree of parallelism to use. | Optional integer, defaults to 1. |
+| `verbose` | Expresses whether to provide detailed output during the computation. | Optional integer, defaults to 1. |
+| `random_state` | Seed for the pseudorandom number generator (PRNG). | Optional integer. |
-This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights Dashboard component.
+<sup>2</sup> For the `list` parameters: Several of the parameters accept lists of other types (strings, numbers, even other lists). To pass these into the component, they must first be JSON-encoded into a single string.
+
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights dashboard component.
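To make the JSON-encoding requirement concrete, here's a hedged sketch of invoking the causal component inside the same pipeline as the constructor. It assumes `rai_causal_component` has been loaded via `load_component` like the constructor, that its input port is named `rai_insights_dashboard`, and that the feature names are placeholders:

```python
import json

# rai_causal_component is assumed loaded via load_component, like the constructor
causal_job = rai_causal_component(
    rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
    treatment_features=json.dumps(["bmi", "blood_pressure"]),  # list as one JSON string
    nuisance_model="linear",
    heterogeneity_model="linear",
    alpha=0.05,
)
```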
# [YAML](#tab/yaml)
-### Add Counterfactuals to RAI Insights Dashboard
+### Add Counterfactuals to RAI Insights dashboard
-This component generates counterfactual points for the supplied test dataset. It has a single input port, which accepts the output of the RAI Insights Dashboard Constructor. It also accepts the following parameters:
+This component generates counterfactual points for the supplied test dataset. It has a single input port, which accepts the output of the RAI Insights dashboard constructor. It also accepts the following parameters:
-| Parameter Name | Description | Type |
-|-|-||
-| total_CFs | How many counterfactual points to generate for each row in the test dataset | Optional integer. Defaults to 10 |
-| method | The `dice-ml` explainer to use | Optional string. Either `random`, `genetic` or `kdtree`. Defaults to `random` |
-| desired_class | Index identifying the desired counterfactual class. For binary classification, this should be set to `opposite` | Optional string or integer. Defaults to 0 |
-| desired_range | For regression problems, identify the desired range of outcomes | Optional list of two numbers (see note below). |
-| permitted_range | Dictionary with feature names as keys and permitted range in list as values. Defaults to the range inferred from training data. | Optional string or list (see note below).|
-| features_to_vary | Either a string "all" or a list of feature names to vary. | Optional string or list (see note below)|
-| feature_importance | Flag to enable computation of feature importances using `dice-ml` |Optional Boolean. Defaults to True |
+| Parameter name | Description | Type |
+|---|---|---|
+| `total_CFs` | The number of counterfactual points to generate for each row in the test dataset. | Optional integer, defaults to 10. |
+| `method` | The `dice-ml` explainer to use. | Optional string. Either `random`, `genetic`, or `kdtree`. Defaults to `random`. |
+| `desired_class` | Index identifying the desired counterfactual class. For binary classification, this should be set to `opposite`. | Optional string or integer. Defaults to 0. |
+| `desired_range` | For regression problems, identify the desired range of outcomes. | Optional list of two numbers<sup>3</sup>. |
+| `permitted_range` | Dictionary with feature names as keys and the permitted range in a list as values. Defaults to the range inferred from training data. | Optional string or list<sup>3</sup>.|
+| `features_to_vary` | Either a string `all` or a list of feature names to vary. | Optional string or list<sup>3</sup>.|
+| `feature_importance` | Flag to enable computation of feature importances by using `dice-ml`. |Optional Boolean. Defaults to `True`. |
-> [!NOTE]
-> For the non-scalar parameters: Parameters which are lists or dictionaries should be passed as single JSON-encoded strings.
+<sup>3</sup> For the non-scalar parameters: Parameters that are lists or dictionaries should be passed as single JSON-encoded strings.
-This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights Dashboard component.
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights dashboard component.
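Under the same assumptions as the causal sketch (component loaded via `load_component`, input port named `rai_insights_dashboard`, placeholder feature names), the counterfactual component might be wired like this:

```python
import json

# rai_counterfactual_component is assumed loaded via load_component
counterfactual_job = rai_counterfactual_component(
    rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
    total_CFs=10,
    method="random",
    desired_class="opposite",                     # binary classification case
    features_to_vary=json.dumps(["age", "bmi"]),  # JSON-encoded list
)
```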
# [YAML](#tab/yaml)
-### Add Error Analysis to RAI Insights Dashboard
+### Add Error Analysis to RAI Insights dashboard
-This component generates an error analysis for the model. It has a single input port, which accepts the output of the RAI Insights Dashboard Constructor. It also accepts the following parameters:
+This component generates an error analysis for the model. It has a single input port, which accepts the output of the RAI Insights dashboard constructor. It also accepts the following parameters:
-| Parameter Name | Description | Type |
-|-|-||
-| max_depth | The maximum depth of the error analysis tree | Optional integer. Defaults to 3 |
-| num_leaves | The maximum number of leaves in the error tree | Optional integer. Defaults to 31 |
-| min_child_samples | The minimum number of datapoints required to produce a leaf | Optional integer. Defaults to 20 |
-| filter_features | A list of one or two features to use for the matrix filter | Optional list of two feature names (see note below). |
-
-> [!NOTE]
-> filter_features: This list of one or two feature names should be passed as a single JSON-encoded string.
+| Parameter name | Description | Type |
+|---|---|---|
+| `max_depth` | The maximum depth of the error analysis tree. | Optional integer. Defaults to 3. |
+| `num_leaves` | The maximum number of leaves in the error tree. | Optional integer. Defaults to 31. |
+| `min_child_samples` | The minimum number of datapoints required to produce a leaf. | Optional integer. Defaults to 20. |
+| `filter_features` | A list of one or two features to use for the matrix filter. | Optional list, to be passed as a single JSON-encoded string. |
-This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights Dashboard component.
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights dashboard component.
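A corresponding sketch for this component, under the same loading and port-name assumptions as the tool components above:

```python
import json

# rai_erroranalysis_component is assumed loaded via load_component
erroranalysis_job = rai_erroranalysis_component(
    rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
    max_depth=3,
    num_leaves=31,
    filter_features=json.dumps(["age", "bmi"]),  # one or two names, one JSON string
)
```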
# [YAML](#tab/yaml)
-### Add Explanation to RAI Insights Dashboard
+### Add Explanation to RAI Insights dashboard
-This component generates an explanation for the model. It has a single input port, which accepts the output of the RAI Insights Dashboard Constructor. It accepts a single, optional comment string as a parameter.
+This component generates an explanation for the model. It has a single input port, which accepts the output of the RAI Insights dashboard constructor. It accepts a single, optional comment string as a parameter.
-This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights Dashboard component.
+This component has a single output port, which can be connected to one of the `insight_[n]` input ports of the Gather RAI Insights dashboard component.
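A sketch, under the same loading and port-name assumptions as the other tool components:

```python
# rai_explanation_component is assumed loaded via load_component
explanation_job = rai_explanation_component(
    rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
    comment="Explanation for the model",  # the single optional comment parameter
)
```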
# [YAML](#tab/yaml)
-### Gather RAI Insights Dashboard
+### Gather RAI Insights dashboard
This component assembles the generated insights into a single Responsible AI dashboard. It has five input ports:
-- The `constructor` port that must be connected to the RAI Insights Dashboard Constructor component.
+- The `constructor` port that must be connected to the RAI Insights dashboard constructor component.
- Four `insight_[n]` ports that can be connected to the output of the tool components. At least one of these ports must be connected.
-There are two output ports. The `dashboard` port contains the completed `RAIInsights` object, while the `ux_json` contains the data required to display a minimal dashboard.
+There are two output ports:
+- The `dashboard` port contains the completed `RAIInsights` object.
+- The `ux_json` port contains the data required to display a minimal dashboard.
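Putting the pieces together, a hedged sketch of the gather step. The `constructor`, `insight_[n]`, `dashboard`, and `ux_json` names come from the description above; the output port names of the individual tool components are assumptions:

```python
# rai_gather_component is assumed loaded via load_component
gather_job = rai_gather_component(
    constructor=construct_job.outputs.rai_insights_dashboard,
    insight_1=causal_job.outputs.causal,                  # assumed output port name
    insight_2=counterfactual_job.outputs.counterfactual,  # assumed output port name
    insight_3=erroranalysis_job.outputs.error_analysis,   # assumed output port name
    insight_4=explanation_job.outputs.explanation,        # assumed output port name
)

# The two documented outputs:
dashboard = gather_job.outputs.dashboard  # completed RAIInsights object
ux_json = gather_job.outputs.ux_json      # data for a minimal dashboard
```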
# [YAML](#tab/yaml)
We provide two helper components to aid in connecting the Responsible AI components:
### Fetch registered model
-This component produces information about a registered model, which can be consumed by the `model_info_path` input port of the RAI Insights Dashboard Constructor component. It has a single input parameter – the AzureML ID (`<NAME>:<VERSION>`) of the desired model.
+This component produces information about a registered model, which can be consumed by the `model_info_path` input port of the RAI Insights dashboard constructor component. It has a single input parameter: the Azure Machine Learning ID (`<NAME>:<VERSION>`) of the desired model.
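A sketch of using this helper, assuming it has been loaded like the other components; the `model_id` parameter name and the output port name are illustrative assumptions:

```python
# fetch_model_component is assumed loaded via load_component
fetch_model_job = fetch_model_component(
    model_id="my_trained_model:1",  # <NAME>:<VERSION> of the registered model
)

# The output would then feed the constructor's model_info_path input port, for example:
# model_info_path=fetch_model_job.outputs.model_info_output_path  (assumed port name)
```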
# [YAML](#tab/yaml)
### Tabular dataset to parquet file
-This component converts the tabular dataset named in its sole input parameter into a Parquet file, which can be consumed by the `train_dataset` and `test_dataset` input ports of the RAI Insights Dashboard Constructor component. Its single input parameter is the name of the desired dataset.
+This component converts the tabular dataset named in its sole input parameter into a Parquet file, which can be consumed by the `train_dataset` and `test_dataset` input ports of the RAI Insights dashboard constructor component. Its single input parameter is the name of the desired dataset.
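A sketch of the conversion helper, again with an assumed parameter name:

```python
# tabular_to_parquet_component is assumed loaded via load_component
to_parquet_job = tabular_to_parquet_component(
    tabular_dataset_name="my_training_dataset",  # name of the registered tabular dataset
)

# The resulting Parquet file can feed the constructor's train_dataset or test_dataset port
```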
# [YAML](#tab/yaml)
### What model formats and flavors are supported?
-The model must be in MLFlow directory with a sklearn flavor available. Furthermore, the model needs to be loadable in the environment used by the Responsible AI components.
+The model must be in the MLflow directory with a sklearn flavor available. Additionally, the model needs to be loadable in the environment that's used by the Responsible AI components.
### What data formats are supported?
-The supplied datasets should be file datasets (uri_file type) in Parquet format. We provide the `TabularDataset to Parquet File` component to help convert the data into the required format.
+The supplied datasets should be file datasets (of type `uri_file`) in Parquet format. We provide the `TabularDataset to Parquet File` component to help convert the data into the required format.
## Next steps
-- Once your Responsible AI dashboard is generated, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md)
+- After you've generated your Responsible AI dashboard, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md).
- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
-- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md)
-- View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.
-- Learn more about how the Responsible AI dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
-- Learn about how the Responsible AI dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)
-- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md).
+- View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate the Responsible AI dashboard with YAML or Python.
+- Learn more about how to use the Responsible AI dashboard and scorecard to debug data and models and inform better decision-making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
+- Learn about how the Responsible AI dashboard and scorecard were used by the UK National Health Service (NHS) in a [real life customer story](https://aka.ms/NHSCustomerStory).
+- Explore the features of the Responsible AI dashboard through this [interactive AI lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
machine-learning How To Responsible Ai Dashboard Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard-ui.md
Title: Generate Responsible AI dashboard in the studio UI (preview)
+ Title: Generate a Responsible AI dashboard (preview) in the studio UI
-description: Learn how to generate the Responsible AI dashboard with no-code experience in the Azure Machine Learning studio UI.
+description: Learn how to generate a Responsible AI dashboard with no-code experience in the Azure Machine Learning studio UI.
Last updated 08/17/2022
-# Generate Responsible AI dashboard in the studio UI (preview)
+# Generate a Responsible AI dashboard (preview) in the studio UI
-You can create a Responsible AI dashboard with a no-code experience in the [Azure Machine Learning studio UI](https://ml.azure.com/). Use the following steps to access the dashboard generation wizard:
+In this article, you create a Responsible AI dashboard with a no-code experience in the [Azure Machine Learning studio UI](https://ml.azure.com/). To access the dashboard generation wizard, do the following:
-- [Register your model](how-to-manage-models.md) in Azure Machine Learning before being able to access the no-code experience.
-- Navigate to the **Models** tab from the left navigation bar in Azure Machine Learning studio.
-- Select the registered model you’d like to create Responsible AI insights for and select the **Details** tab.
-- Select the **Create Responsible AI dashboard (preview)** button from the top panel.
+1. [Register your model](how-to-manage-models.md) in Azure Machine Learning so that you can access the no-code experience.
+1. On the left pane of Azure Machine Learning studio, select the **Models** tab.
+1. Select the registered model that you want to create Responsible AI insights for, and then select the **Details** tab.
+1. Select **Create Responsible AI dashboard (preview)**.
-To learn more, see the Responsible AI dashboard's [supported model types, and limitations](concept-responsible-ai-dashboard.md#supported-scenarios-and-limitations)
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard-ui/model-page.png" alt-text="Screenshot of the wizard details pane with 'Create Responsible AI dashboard (preview)' tab highlighted." lightbox ="./media/how-to-responsible-ai-dashboard-ui/model-page.png":::
+To learn more, see the Responsible AI dashboard [supported model types and limitations](concept-responsible-ai-dashboard.md#supported-scenarios-and-limitations).
+The wizard provides an interface for entering all the necessary parameters to create your Responsible AI dashboard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio UI. The studio presents a guided flow and instructional text to help contextualize the variety of choices about which Responsible AI components you’d like to populate your dashboard with.
-The wizard is designed to provide an interface to input all the necessary parameters to instantiate your Responsible AI dashboard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio UI with a guided flow and instructional text to help contextualize the variety of choices in which Responsible AI components you’d like to populate your dashboard with. The wizard is divided into five steps:
+The wizard is divided into five sections:
1. Datasets
-2. Modeling task
-3. Dashboard components
-4. Component parameters
-5. Experiment configuration
+1. Modeling task
+1. Dashboard components
+1. Component parameters
+1. Experiment configuration
## Select your datasets
-The first step is to select the train and test dataset that you used when training your model to generate model-debugging insights. For components like Causal analysis, which doesn't require a model, the train dataset will be used to train the causal model to generate the causal insights.
+In the first section, you select the train and test datasets that you used when you trained your model to generate model-debugging insights. For components like causal analysis, which doesn't require a model, you use the train dataset to train the causal model to generate the causal insights.
> [!NOTE]
> Only tabular dataset formats are supported.
-1. **Select a dataset for training**: Select the dropdown to view your registered datasets in Azure Machine Learning workspace. This dataset will be used to generate Responsible AI insights for components such as model explanations and error analysis.
-2. **Create new dataset**: If the desired datasets aren't in your Azure Machine Learning workspace, select “New dataset” to upload your dataset
-3. **Select a dataset for testing**: Select the dropdown to view your registered datasets in Azure Machine Learning workspace. This dataset is used to populate your Responsible AI dashboard visualizations.
+1. **Select a dataset for training**: In the dropdown list of registered datasets in the Azure Machine Learning workspace, select the dataset you want to use to generate Responsible AI insights for components, such as model explanations and error analysis.
+
+1. **Select a dataset for testing**: In the dropdown list, select the dataset you want to use to populate your Responsible AI dashboard visualizations.
+
+1. If the train or test dataset you want to use isn't listed, select **New dataset** to upload it.
## Select your modeling task
-After you picked your dataset, select your modeling task type.
+After you've picked your datasets, select your modeling task type, as shown in the following image:
:::image type="content" source="./media/how-to-responsible-ai-dashboard-ui/modeling.png" alt-text="Screenshot of the wizard on modeling task type." lightbox= "./media/how-to-responsible-ai-dashboard-ui/modeling.png":::

> [!NOTE]
-> The wizard only supports models with MLflow format and scikit-learn flavor.
+> The wizard supports only models in MLflow format and with a sklearn (scikit-learn) flavor.
## Select your dashboard components
-The Responsible AI dashboard offers two profiles for recommended sets of tools you can generate:
+The Responsible AI dashboard offers two profiles for recommended sets of tools that you can generate:
-- **Model debugging**: Understand and debug erroneous data cohorts in your ML model using Error analysis, Counterfactual what-if examples, and Model explainability
-- **Real life interventions**: Understand and debug erroneous data cohorts in your ML model using Causal analysis
+- **Model debugging**: Understand and debug erroneous data cohorts in your machine learning model by using error analysis, counterfactual what-if examples, and model explainability.
+- **Real-life interventions**: Understand and debug erroneous data cohorts in your machine learning model by using causal analysis.
-> [!NOTE]
-> Multi-class classification does not support Real-life intervention analysis profile.
-Select the desired profile, then **Next**.
+ > [!NOTE]
+ > Multi-class classification doesn't support the real-life interventions analysis profile.
++
+1. Select the profile you want to use.
+1. Select **Next**.
## Configure parameters for dashboard components
-Once you’ve selected a profile, the configuration step for the corresponding components will appear.
+After you’ve selected a profile, the **Component parameters for model debugging** configuration pane for the corresponding components appears.
Component parameters for model debugging:
-1. **Target feature (required)**: Specify the feature that your model was trained to predict
-2. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This is pre-loaded for you based on your dataset metadata.
-3. **Generate error tree and heat map**: Toggle on and off to generate an error analysis component for your Responsible AI dashboard
-4. **Features for error heat map**: Select up to two features to pre-generate an error heatmap for.
-5. **Advanced configuration**: Specify additional parameters for your error tree such as Maximum depth, Number of leaves, Minimum number of samples in one leaf.
-6. **Generate counterfactual what-if examples**: Toggle on and off to generate counterfactual what-if component for your Responsible AI dashboard
-7. **Number of counterfactuals (required)**: Specify the number of counterfactual examples you want generated per datapoint. A minimum of at least 10 should be generated to enable a bar chart view in the dashboard of which features were most perturbed on average to achieve the desired prediction.
-8. **Range of value predictions (required)**: Specify for regression scenarios the desired range you want counterfactual examples to have prediction values in. For binary classification scenarios, it will automatically be set to generate counterfactuals for the opposite class of each datapoint. For multi-classification scenarios, there will be a drop-down to specify which class you want each datapoints to be predicted as.
-9. **Specify features to perturb**: By default, all features will be perturbed. However, if there are specific features you want perturbed, clicking this will open a panel with the list of features to select. (See below)
-10. **Generate explanations**: Toggle on and off to generate a model explanation component for your Responsible AI dashboard. No configuration is necessary as a default opaque box mimic explainer will be used to generate feature importances.
+1. **Target feature (required)**: Specify the feature that your model was trained to predict.
+1. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This field is pre-loaded for you based on your dataset metadata.
+1. **Generate error tree and heat map**: Toggle on and off to generate an error analysis component for your Responsible AI dashboard.
+1. **Features for error heat map**: Select up to two features that you want to pre-generate an error heatmap for.
+1. **Advanced configuration**: Specify additional parameters, such as **Maximum depth of error tree**, **Number of leaves in error tree**, and **Minimum number of samples in each leaf node**.
+1. **Generate counterfactual what-if examples**: Toggle on and off to generate a counterfactual what-if component for your Responsible AI dashboard.
+1. **Number of counterfactuals (required)**: Specify the number of counterfactual examples that you want generated per data point. A minimum of 10 should be generated to enable a bar chart view of the features that were most perturbed, on average, to achieve the desired prediction.
+1. **Range of value predictions (required)**: For regression scenarios, specify the range that you want counterfactual examples to have prediction values in. For binary classification scenarios, the range is automatically set to generate counterfactuals for the opposite class of each data point. For multi-classification scenarios, use the dropdown list to specify the class that you want each data point to be predicted as.
+1. **Specify which features to perturb**: By default, all features will be perturbed. However, if you want only specific features to be perturbed, select **Specify which features to perturb for generating counterfactual explanations** to display a pane with a list of features to select.
+
+ When you select **Specify which features to perturb**, you can specify the range you want to allow perturbations in. For example: for the feature YOE (Years of experience), specify that counterfactuals should have feature values ranging from only 10 to 21 instead of the default values of 5 to 21.
-For counterfactuals when you select “Specify features to perturb”, you can specify which range you want to allow perturbations in. For example: for the feature YOE (Years of experience), specify that counterfactuals should only have feature values ranging from 10 to 21 instead of the default 5 to 21.
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard-ui/model-debug-counterfactuals.png" alt-text="Screenshot of the wizard, showing a pane of features you can specify to perturb." lightbox = "./media/how-to-responsible-ai-dashboard-ui/model-debug-counterfactuals.png":::
+1. **Generate explanations**: Toggle on and off to generate a model explanation component for your Responsible AI dashboard. No configuration is necessary, because a default opaque box mimic explainer will be used to generate feature importances.
-Alternatively, if you're interested in selecting **Real-life interventions** profile, you’ll see the following screen generate a causal analysis. This will help you understand causal effects of features you want to “treat” on a certain outcome you wish to optimize.
+Alternatively, if you select the **Real-life interventions** profile, you’ll see the following screen generate a causal analysis. This will help you understand the causal effects of features you want to “treat” on a certain outcome you want to optimize.
-Component parameters for real-life intervention use causal analysis:
+Component parameters for real-life interventions use causal analysis. Do the following:
1. **Target feature (required)**: Choose the outcome you want the causal effects to be calculated for.
-2. **Treatment features (required)**: Choose one or more features you’re interested in changing (“treating”) to optimize the target outcome.
-3. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This is pre-loaded for you based on your dataset metadata.
-4. **Advanced settings**: Specify additional parameters for your causal analysis such as heterogenous features (additional features to understand causal segmentation in your analysis in addition to your treatment features) and which causal model you’d like to be used.
+1. **Treatment features (required)**: Choose one or more features that you’re interested in changing (“treating”) to optimize the target outcome.
+1. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This field is pre-loaded for you based on your dataset metadata.
+1. **Advanced settings**: Specify additional parameters for your causal analysis, such as heterogenous features (that is, additional features to understand causal segmentation in your analysis, in addition to your treatment features) and which causal model you want to be used.
-## Experiment configuration
+## Configure your experiment
Finally, configure your experiment to kick off a job to generate your Responsible AI dashboard.
+
+On the **Training job or experiment configuration** pane, do the following:
1. **Name**: Give your dashboard a unique name so that you can differentiate it when you’re viewing the list of dashboards for a given model.
-2. **Experiment name**: Select an existing experiment to run the job in, or create a new experiment.
-3. **Existing experiment**: Select an existing experiment from drop-down.
-4. **Select compute type**: Specify which compute type you’d like to use to execute your job.
-5. **Select compute**: Select from a drop-down that compute you’d like to use. If there are no existing compute resources, select the “+” to create a new compute resource and refresh the list.
-6. **Description**: Add a more verbose description for your Responsible AI dashboard.
-7. **Tags**: Add any tags to this Responsible AI dashboard.
+1. **Experiment name**: Select an existing experiment to run the job in, or create a new experiment.
+1. **Existing experiment**: In the dropdown list, select an existing experiment.
+1. **Select compute type**: Specify which compute type you want to use to execute your job.
+1. **Select compute**: In the dropdown list, select the compute you want to use. If there are no existing compute resources, select the plus sign (**+**), create a new compute resource, and then refresh the list.
+1. **Description**: Add a longer description of your Responsible AI dashboard.
+1. **Tags**: Add any tags to this Responsible AI dashboard.
+
+After you’ve finished configuring your experiment, select **Create** to start generating your Responsible AI dashboard. You'll be redirected to the experiment page to track the progress of your job.
-After you’ve finished your experiment configuration, select **Create** to start the generation of your Responsible AI dashboard. You'll be redirected to the experiment page to track the progress of your job. See below next steps on how to view your Responsible AI dashboard.
+In the "Next steps" section, you can learn how to view and use your Responsible AI dashboard.
## Next steps
-- Once your Responsible AI dashboard is generated, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md)
+- After you've generated your Responsible AI dashboard, [view how to access and use it in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md).
- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md)
-- Learn more about how the Responsible AI dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
-- Learn about how the Responsible AI dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)
-- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn more about how to [collect data responsibly](concept-sourcing-human-data.md).
+- Learn more about how to use the Responsible AI dashboard and scorecard to debug data and models and inform better decision-making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
+- Learn about how the Responsible AI dashboard and scorecard were used by the UK National Health Service (NHS) in a [real life customer story](https://aka.ms/NHSCustomerStory).
+- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
machine-learning How To Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard.md
Title: How to use the Responsible AI dashboard in studio (preview)
+ Title: Use the Responsible AI dashboard in Azure Machine Learning studio (preview)
-description: Learn how to use the different tools and visualization charts in the Responsible AI dashboard in Azure Machine Learning.
+description: Learn how to use the various tools and visualization charts in the Responsible AI dashboard in Azure Machine Learning.
Last updated 08/17/2022
-# How to use the Responsible AI dashboard in studio (preview)
+# Use the Responsible AI dashboard (preview) in Azure Machine Learning studio
-Responsible AI dashboards are linked to your registered models. To view your Responsible AI dashboard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Once you select your model, select the **Responsible AI (preview)** tab to view a list of generated dashboards.
+Responsible AI dashboards are linked to your registered models. To view your Responsible AI dashboard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Then, select the **Responsible AI (preview)** tab to view a list of generated dashboards.
-Multiple dashboards can be configured and attached to your registered model. Different combinations of components (interpretability, error analysis, causal analysis, etc.) can be attached to each Responsible AI dashboard. The list below reminds you of your dashboard(s)' customization and what components were generated within the Responsible AI dashboard. However, once opening each dashboard, different components can be viewed or hidden within the dashboard UI itself.
+You can configure multiple dashboards and attach them to your registered model. Various combinations of components (interpretability, error analysis, causal analysis, and so on) can be attached to each Responsible AI dashboard. The following image displays a dashboard's customization and the components that were generated within it. In each dashboard, you can view or hide various components within the dashboard UI itself.
-Selecting the name of the dashboard will open up your dashboard into a full view in your browser. At anytime, select the **Back to models details** to get back to your list of dashboards.
+Select the name of the dashboard to open it into a full view in your browser. To return to your list of dashboards, you can select **Back to models details** at any time.
## Full functionality with integrated compute resource
-Some features of the Responsible AI dashboard require dynamic, on-the-fly, and real-time computation (for example, what if analysis). Without connecting a compute resource to the dashboard, you may find some functionality missing. Connecting to a compute resource will enable full functionality of your Responsible AI dashboard for the following components:
+Some features of the Responsible AI dashboard require dynamic, on-the-fly, and real-time computation (for example, what-if analysis). Unless you connect a compute resource to the dashboard, you might find some functionality missing. When you connect to a compute resource, you enable full functionality of your Responsible AI dashboard for the following components:
- **Error analysis**
  - Setting your global data cohort to any cohort of interest will update the error tree instead of disabling it.
  - Selecting other error or performance metrics is supported.
  - Selecting any subset of features for training the error tree map is supported.
  - Changing the minimum number of samples required per leaf node and error tree depth is supported.
- - Dynamically updating the heatmap for up to two features is supported.
+ - Dynamically updating the heat map for up to two features is supported.
- **Feature importance**
  - An individual conditional expectation (ICE) plot in the individual feature importance tab is supported.
- **Counterfactual what-if**
- - Generating a new what-if counterfactual datapoint to understand the minimum change required for a desired outcome is supported.
+ - Generating a new what-if counterfactual data point to understand the minimum change required for a desired outcome is supported.
- **Causal analysis**
- - Selecting any individual datapoint, perturbing its treatment features, and seeing the expected causal outcome of causal what-if is supported (only for regression ML scenarios).
+ - Selecting any individual data point, perturbing its treatment features, and seeing the expected causal outcome of causal what-if is supported (only for regression machine learning scenarios).
-The information above can also be found on the Responsible AI dashboard page by selecting the information icon button:
+You can also find this information on the Responsible AI dashboard page by selecting the **Information** icon, as shown in the following image:
-### How to enable full functionality of Responsible AI dashboard
+### Enable full functionality of the Responsible AI dashboard
-1. Select a running compute instance from compute drop-down above your dashboard. If you don’t have a running compute, create a new compute instance by selecting “+” button next to the compute dropdown, or “Start compute” button to start a stopped compute instance. Creating or starting a compute instance may take few minutes.
+1. Select a running compute instance in the **Compute** dropdown list at the top of the dashboard. If you don’t have a running compute, create a new compute instance by selecting the plus sign (**+**) next to the dropdown. Or you can select the **Start compute** button to start a stopped compute instance. Creating or starting a compute instance might take a few minutes.
- :::image type="content" source="./media/how-to-responsible-ai-dashboard/select-compute.png" alt-text="Screenshot showing how to select a compute." lightbox = "./media/how-to-responsible-ai-dashboard/select-compute.png":::
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/select-compute.png" alt-text="Screenshot of the 'Compute' dropdown box for selecting a running compute instance." lightbox = "./media/how-to-responsible-ai-dashboard/select-compute.png":::
-2. Once compute is in “Running” state, your Responsible AI dashboard will start to connect to the compute instance. To achieve this, a terminal process will be created on the selected compute instance, and Responsible AI endpoint will be started on the terminal. Select **View terminal outputs** to view current terminal process.
+2. When a compute is in a *Running* state, your Responsible AI dashboard starts to connect to the compute instance. To achieve this, a terminal process is created on the selected compute instance, and a Responsible AI endpoint is started on the terminal. Select **View terminal outputs** to view the current terminal process.
- :::image type="content" source="./media/how-to-responsible-ai-dashboard/compute-connect-terminal.png" alt-text="Screenshot showing the responsible A I dashboard is connecting to a compute resource." lightbox = "./media/how-to-responsible-ai-dashboard/compute-connect-terminal.png":::
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/compute-connect-terminal.png" alt-text="Screenshot showing that the responsible AI dashboard is connecting to a compute resource." lightbox = "./media/how-to-responsible-ai-dashboard/compute-connect-terminal.png":::
3. When your Responsible AI dashboard is connected to the compute instance, you'll see a green message bar, and the dashboard is now fully functional.

   :::image type="content" source="./media/how-to-responsible-ai-dashboard/compute-terminal-connected.png" alt-text="Screenshot showing that the dashboard is connected to the compute instance." lightbox= "./media/how-to-responsible-ai-dashboard/compute-terminal-connected.png":::
-4. If it takes a while and your Responsible AI dashboard is still not connected to the compute instance, or a red error message bar shows up, it means there are issues with starting your Responsible AI endpoint. Select **View terminal outputs** and scroll down to the bottom to view the error message.
+4. If the process takes a while and your Responsible AI dashboard is still not connected to the compute instance, or a red error message bar is displayed, it means there are issues with starting your Responsible AI endpoint. Select **View terminal outputs** and scroll down to the bottom to view the error message.
:::image type="content" source="./media/how-to-responsible-ai-dashboard/compute-terminal-error.png" alt-text="Screenshot of an error connecting to a compute." lightbox ="./media/how-to-responsible-ai-dashboard/compute-terminal-error.png":::
   - If you're having issues with figuring out how to resolve the failed to connect to compute instance issue, select the “smile” icon on the upper right corner, and submit feedback to us to let us know what error or issue you hit. You can include screenshot and/or your email address in the feedback form.
+ If you're having difficulty figuring out how to resolve the "failed to connect to compute instance" issue, select the **Smile** icon at the upper right. Submit feedback to us about any error or issue you encounter. You can include a screenshot and your email address in the feedback form.
## UI overview of the Responsible AI dashboard
-The Responsible AI dashboard includes a robust and rich set of visualizations and functionality to help you analyze your machine learning model or making data-driven business decisions:
+The Responsible AI dashboard includes a robust, rich set of visualizations and functionality to help you analyze your machine learning model or make data-driven business decisions:
- [Global controls](#global-controls)
- [Error analysis](#error-analysis)
- [Model overview](#model-overview)
- [Data explorer](#data-explorer)
-- [Feature importances (model explanations)](#feature-importances-model-explanations)
+- [Feature importance (model explanations)](#feature-importances-model-explanations)
- [Counterfactual what-if](#counterfactual-what-if)
- [Causal analysis](#causal-analysis)

### Global controls
-At the top of the dashboard, you can create cohorts, subgroups of datapoints sharing specified characteristics, to focus your analysis in each component on. The name of the cohort currently applied to the dashboard is always shown on the top left above your dashboard. The default shown in your dashboard will always be your whole dataset denoted by the title **All data (default)**.
+At the top of the dashboard, you can create cohorts (subgroups of data points that share specified characteristics) to focus your analysis of each component. The name of the cohort that's currently applied to the dashboard is always shown at the top left of your dashboard. The default view in your dashboard is your whole dataset, titled **All data (default)**.
-1. **Cohort settings**: allows you to view and modify the details of each cohort in a side panel.
-2. **Dashboard configuration**: allows you to view and modify the layout of the overall dashboard in a side panel.
-3. **Switch cohort**: allows you to select a different cohort and view its statistics in a popup.
-4. **New cohort**: allows you to create and add a new cohort to your dashboard.
+1. **Cohort settings**: Allows you to view and modify the details of each cohort in a side panel.
+2. **Dashboard configuration**: Allows you to view and modify the layout of the overall dashboard in a side panel.
+3. **Switch cohort**: Allows you to select a different cohort and view its statistics in a pop-up window.
+4. **New cohort**: Allows you to create and add a new cohort to your dashboard.
-Selecting Cohort settings will open a panel with a list of your cohorts, where you can create, edit, duplicate, or delete your cohorts.
+Select **Cohort settings** to open a panel with a list of your cohorts, where you can create, edit, duplicate, or delete them.
-Selecting the **New cohort** button on the top of the dashboard or in the Cohort settings opens a new panel with options to filter on the following:
+Select **New cohort** at the top of the dashboard or in the Cohort settings to open a new panel with options to filter on the following:
-1. **Index**: filters by the position of the datapoint in the full dataset
-2. **Dataset**: filters by the value of a particular feature in the dataset
-3. **Predicted Y**: filters by the prediction made by the model
-4. **True Y**: filters by the actual value of the target feature
-5. **Error (regression)**: filters by error or Classification Outcome (classification): filters by type and accuracy of classification
-6. **Categorical Values**: filter by a list of values that should be included
-7. **Numerical Values**: filter by a Boolean operation over the values (for example, select datapoints where age < 64)
+1. **Index**: Filters by the position of the data point in the full dataset.
+2. **Dataset**: Filters by the value of a particular feature in the dataset.
+3. **Predicted Y**: Filters by the prediction made by the model.
+4. **True Y**: Filters by the actual value of the target feature.
+5. **Error (regression)**: Filters by error (or **Classification Outcome (classification)**: Filters by type and accuracy of classification).
+6. **Categorical Values**: Filter by a list of values that should be included.
+7. **Numerical Values**: Filter by a Boolean operation over the values (for example, select data points where age < 64).
:::image type="content" source="./media/how-to-responsible-ai-dashboard/view-dashboard-cohort-panel.png" alt-text="Screenshot of making multiple new cohorts." lightbox= "./media/how-to-responsible-ai-dashboard/view-dashboard-cohort-panel.png":::
-You can name your new dataset cohort, select **Add filter** to add each desired filter, then select **Save** to save the new cohort to your cohort list or Save and switch to save and immediately switch the global cohort of the dashboard to the newly created cohort.
+You can name your new dataset cohort, select **Add filter** to add each filter you want to use, and then do either of the following:
+* Select **Save** to save the new cohort to your cohort list.
+* Select **Save and switch** to save and immediately switch the global cohort of the dashboard to the newly created cohort.
:::image type="content" source="./media/how-to-responsible-ai-dashboard/view-dashboard-new-cohort.png" alt-text="Screenshot of making a new cohort in the dashboard." lightbox= "./media/how-to-responsible-ai-dashboard/view-dashboard-new-cohort.png":::
-Selecting **Dashboard configuration** will open a panel with a list of the components you’ve configured in your dashboard. You can hide components in your dashboard by selecting the ‘trash’ icon.
+Select **Dashboard configuration** to open a panel with a list of the components you’ve configured on your dashboard. You can hide components on your dashboard by selecting the **Trash** icon, as shown in the following image:
:::image type="content" source="./media/how-to-responsible-ai-dashboard/dashboard-configuration.png" alt-text="Screenshot showing the dashboard configuration." lightbox="./media/how-to-responsible-ai-dashboard/dashboard-configuration.png":::
-You can add components back into your dashboard via the blue circular ‘+’ icon in the divider between each component.
+You can add components back to your dashboard via the blue circular plus sign (**+**) icon in the divider between each component, as shown in the following image:
:::image type="content" source="./media/how-to-responsible-ai-dashboard/dashboard-add-component.png" alt-text="Screenshot of adding a component to the dashboard." lightbox= "./media/how-to-responsible-ai-dashboard/dashboard-add-component.png"::: ### Error analysis
+The next sections cover how to interpret and use error tree maps and heat maps.
+ #### Error tree map
-The first tab of the Error analysis component is the Tree map, which illustrates how model failure is distributed across different cohorts with a tree visualization. Select any node to see the prediction path on your features where error was found.
+The first pane of the error analysis component is a tree map, which illustrates how model failure is distributed across various cohorts with a tree visualization. Select any node to see the prediction path on your features where an error was found.
-1. **Heatmap view**: switches to heatmap visualization of error distribution.
-2. **Feature list:** allows you to modify the features used in the heatmap using a side panel.
-3. **Error coverage**: displays the percentage of all error in the dataset concentrated in the selected node.
-4. **Error (regression) or Error rate (classification)**: displays the error or percentage of failures of all the datapoints in the selected node.
-5. **Node**: represents a cohort of the dataset, potentially with filters applied, and the number of errors out of the total number of datapoints in the cohort.
-6. **Fill line**: visualizes the distribution of datapoints into child cohorts based on filters, with number of datapoints represented through line thickness.
-7. **Selection information**: contains information about the selected node in a side panel.
-8. **Save as a new cohort:** creates a new cohort with the given filters.
-9. **Instances in the base cohort**: displays the total number of points in the entire dataset and the number of correctly and incorrectly predicted points.
-10. **Instances in the selected cohort**: displays the total number of points in the selected node and the number of correctly and incorrectly predicted points.
-11. **Prediction path (filters)**: lists the filters placed over the full dataset to create this smaller cohort.
+1. **Heat map view**: Switches to heat map visualization of error distribution.
+2. **Feature list:** Allows you to modify the features used in the heat map using a side panel.
+3. **Error coverage**: Displays the percentage of all error in the dataset concentrated in the selected node.
+4. **Error (regression) or Error rate (classification)**: Displays the error or percentage of failures of all the data points in the selected node.
+5. **Node**: Represents a cohort of the dataset, potentially with filters applied, and the number of errors out of the total number of data points in the cohort.
+6. **Fill line**: Visualizes the distribution of data points into child cohorts based on filters, with the number of data points represented through line thickness.
+7. **Selection information**: Contains information about the selected node in a side panel.
+8. **Save as a new cohort:** Creates a new cohort with the specified filters.
+9. **Instances in the base cohort**: Displays the total number of points in the entire dataset and the number of correctly and incorrectly predicted points.
+10. **Instances in the selected cohort**: Displays the total number of points in the selected node and the number of correctly and incorrectly predicted points.
+11. **Prediction path (filters)**: Lists the filters placed over the full dataset to create this smaller cohort.
-Selecting the "Feature list" button opens a side panel, which allows you to retrain the error tree on specific features.
+Select the **Feature list** button to open a side panel, from which you can retrain the error tree on specific features.
-1. **Search features**: allows you to find specific features in the dataset.
-2. **Features:** lists the name of the feature in the dataset.
-3. **Importances**: A guideline for how related the feature may be to the error. Calculated via mutual information score between the feature and the error on the labels. You can use this score to help you decide which features to choose in Error Analysis.
-4. **Check mark**: allows you to add or remove the feature from the tree map.
+1. **Search features**: Allows you to find specific features in the dataset.
+2. **Features**: Lists the name of the feature in the dataset.
+3. **Importances**: A guideline for how related the feature might be to the error. Calculated via mutual information score between the feature and the error on the labels. You can use this score to help you decide which features to choose in the error analysis.
+4. **Check mark**: Allows you to add or remove the feature from the tree map.
5. **Maximum depth**: The maximum depth of the surrogate tree trained on errors.
6. **Number of leaves**: The number of leaves of the surrogate tree trained on errors (see the sketch after this list).
-7. **Minimum number of samples in one leaf**: The minimum number of data required to create one leaf.
+7. **Minimum number of samples in one leaf**: The minimum amount of data required to create one leaf.
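The surrogate tree and these three knobs map naturally onto standard decision-tree parameters. The following is a minimal sketch with scikit-learn, offered as an approximation for intuition rather than the dashboard's internal implementation:

```python
# A minimal sketch of a surrogate error tree with scikit-learn; this is an
# approximation for illustration, not the dashboard's internal implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Error labels: 1 where the model misclassified a test point.
errors = (model.predict(X_test) != y_test).astype(int)

# The three knobs mirror "Maximum depth", "Number of leaves", and
# "Minimum number of samples in one leaf" in the feature list panel.
surrogate = DecisionTreeClassifier(
    max_depth=4,
    max_leaf_nodes=21,
    min_samples_leaf=20,
    random_state=0,
).fit(X_test, errors)

# Each leaf is a candidate cohort; its split path is the "prediction path".
print(export_text(surrogate, feature_names=list(X.columns)))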
#### Error heat map
-Selecting the **Heat map** tab switches to a different view of the error in the dataset. You can select one or many heat map cells and create new cohorts. You can choose up to two features to create a heatmap.
+Select the **Heat map** tab to switch to a different view of the error in the dataset. You can select one or many heat map cells and create new cohorts. You can choose up to two features to create a heat map.
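Conceptually, each heat map cell is the cohort at the intersection of two binned feature values, shaded by its error rate. Here's a rough sketch of that computation; the data and feature names are illustrative, not from the dashboard:

```python
# A rough sketch of the per-cell computation behind the error heat map;
# the data and feature names here are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 70, 500),
    "hours_per_week": rng.integers(10, 60, 500),
    "is_error": rng.random(500) < 0.2,  # stand-in for model mistakes
})

# Bin the two selected axis features, then compute the error rate per cell.
df["age_bin"] = pd.cut(df["age"], bins=4)
df["hours_bin"] = pd.cut(df["hours_per_week"], bins=4)
heat = df.pivot_table(index="age_bin", columns="hours_bin",
                      values="is_error", aggfunc="mean", observed=False)
print(heat.round(2))  # darker cells in the UI correspond to higher rates
```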
-1. **Number of Cells**: displays the number of cells selected.
-2. **Error coverage**: displays the percentage of all errors concentrated in the selected cell(s).
-3. **Error rate**: displays the percentage of failures of all datapoints in the selected cell(s).
-4. **Axis features**: selects the intersection of features to display in the heatmap.
-5. **Cells**: represents a cohort of the dataset, with filters applied, and the percentage of errors out of the total number of datapoints in the cohort. A blue outline indicates selected cells, and the darkness of red represents the concentration of failures.
-6. **Prediction path (filters)**: lists the filters placed over the full dataset for each selected cohort.
+1. **Number of cells**: Displays the number of cells selected.
+2. **Error coverage**: Displays the percentage of all errors concentrated in the selected cell(s).
+3. **Error rate**: Displays the percentage of failures of all data points in the selected cell(s).
+4. **Axis features**: Selects the intersection of features to display in the heat map.
+5. **Cells**: Represents a cohort of the dataset, with filters applied, and the percentage of errors out of the total number of data points in the cohort. A blue outline indicates selected cells, and the darkness of red represents the concentration of failures.
+6. **Prediction path (filters)**: Lists the filters placed over the full dataset for each selected cohort.
### Model overview
-The Model overview component provides a comprehensive set of performance and fairness metrics to evaluate your model, along with key performance disparity metrics along specified features and dataset cohorts.
+The model overview component provides a comprehensive set of performance and fairness metrics for evaluating your model, along with key performance disparity metrics along specified features and dataset cohorts.
#### Dataset cohorts
-The **Dataset cohorts** tab allows you to investigate your model by comparing the model performance of different user-specified dataset cohorts (accessible via the Cohort settings icon on the top right corner of the dashboard).
+On the **Dataset cohorts** pane, you can investigate your model by comparing the model performance of various user-specified dataset cohorts (accessible via the **Cohort settings** icon at the top right of the dashboard).
> [!NOTE]
> You can create new dataset cohorts from the UI experience or pass your pre-built cohorts to the dashboard via the SDK experience.
-1. **Help me choose metrics**: Selecting this icon will open a panel with more information about what model performance metrics are available to be shown in the table below. Easily adjust which metrics you can view by using the multi-select drop down to select and deselect performance metrics. (see more below)
-2. **Show heatmap**: Toggle on and off to see heatmap visualization in the table below. The gradient of the heatmap corresponds to the range normalized between the lowest value and the highest value in each column.
-3. **Table of metrics for each dataset cohort**: Table with columns for dataset cohorts, sample size of each cohort, and the selected model performance metrics for each cohort.
-4. **Bar chart visualizing individual metric**(mean absolute error) across the cohorts for easy comparison.
-5. **Choose metric (x-axis)**: Selecting this will allow you to select which metric to view in the bar chart.
-6. **Choose cohorts (y-axis)**: Selecting this will allow you to select which cohorts you want to view in the bar chart. You may see "Feature cohort" selection disabled unless you specify your desired features in the "Feature cohort tab" of the component first.
-Selecting "Help me choose metrics" will open a panel with the list of model performance metrics and the corresponding metrics definition to aid users in selecting the right metric to view.
+1. **Help me choose metrics**: Select this icon to open a panel with more information about what model performance metrics are available to be shown in the table. Easily adjust which metrics to view by using the multi-select dropdown list to select and deselect performance metrics.
+2. **Show heat map**: Toggle on and off to show or hide heat map visualization in the table. The gradient of the heat map corresponds to the range normalized between the lowest value and the highest value in each column.
+3. **Table of metrics for each dataset cohort**: View columns of dataset cohorts, the sample size of each cohort, and the selected model performance metrics for each cohort.
+4. **Bar chart visualizing individual metric**: View mean absolute error across the cohorts for easy comparison.
+5. **Choose metric (x-axis)**: Select this button to choose which metrics to view in the bar chart.
+6. **Choose cohorts (y-axis)**: Select this button to choose which cohorts to view in the bar chart. **Feature cohort** selection might be disabled unless you first specify the features you want on the **Feature cohorts** pane of the component.
-| ML scenario | Metrics |
-|-|-|
-| Regression | Mean absolute error, Mean squared error, R<sup>2</sup>, Mean prediction. |
-| Classification | Accuracy, Precision, Recall, F1 score, False positive rate, False negative rate, Selection rate |
+Select **Help me choose metrics** to open a panel with a list of model performance metrics and their definitions, which can help you select the right metrics to view.
-Classification scenarios will support accuracy, F1 score, precision score, recall score, false positive rate, false negative rate and selection rate (the percentage of predictions with label 1):
+| Machine learning scenario | Metrics |
+|||
+| Regression | Mean absolute error, Mean squared error, R-squared, Mean prediction. |
+| Classification | Accuracy, Precision, Recall, F1 score, False positive rate, False negative rate, Selection rate. |
+Classification scenarios support accuracy, precision, recall, F1 score, false positive rate, false negative rate, and selection rate (the percentage of predictions with label 1):
-Regression scenarios will support mean absolute error, mean squared error, and mean prediction:
+Regression scenarios support mean absolute error, mean squared error, and mean prediction:
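For intuition, here's a minimal sketch (not the component's code) of how such a per-cohort metrics table can be computed for a regression model; the cohort definitions are hypothetical:

```python
# A minimal sketch of a per-cohort metrics table for a regression model;
# the cohort definitions here are hypothetical.
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
pred = pd.Series(model.predict(X_test), index=X_test.index)

# Each cohort is just a subset of test rows (here, split on the "bmi" column).
cohorts = {
    "All data": X_test.index,
    "bmi below average": X_test.index[X_test["bmi"] < 0],
    "bmi above average": X_test.index[X_test["bmi"] >= 0],
}
rows = [{
    "Cohort": name,
    "Sample size": len(idx),
    "Mean absolute error": mean_absolute_error(y_test.loc[idx], pred.loc[idx]),
    "Mean squared error": mean_squared_error(y_test.loc[idx], pred.loc[idx]),
    "Mean prediction": pred.loc[idx].mean(),
} for name, idx in cohorts.items()]
print(pd.DataFrame(rows).round(2))
```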
#### Feature cohorts
-The **Feature cohorts** tab allows you to investigate your model by comparing model performance across user-specified sensitive/non-sensitive features (for example, performance across different gender, race, income level cohorts).
-
+On the **Feature cohorts** pane, you can investigate your model by comparing model performance across user-specified sensitive and non-sensitive features (for example, performance across various gender, race, and income level cohorts).
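In effect, this comparison groups the evaluation data by a selected feature, scores each subgroup, and reports the largest gap between subgroups. A small sketch with illustrative data:

```python
# A small sketch of feature-cohort comparison: group by a sensitive feature,
# score each subgroup, and report the largest gap. Data here is illustrative.
import pandas as pd
from sklearn.metrics import accuracy_score

df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
})

# Accuracy per feature cohort (sub-cohorts of the "gender" feature).
per_cohort = {
    group: accuracy_score(rows["y_true"], rows["y_pred"])
    for group, rows in df.groupby("gender")
}
print(per_cohort)

# Disparity metric: maximum difference in accuracy between any two cohorts.
print(max(per_cohort.values()) - min(per_cohort.values()))
```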
-1. **Help me choose metrics**: Selecting this icon will open a panel with more information about what metrics are available to be shown in the table below. Easily adjust which metrics you can view by using the multi-select drop down to select and deselect performance metrics.
-2. **Help me choose features**: Selecting this icon will open a panel with more information about what features are available to be shown in the table below with descriptors of each feature and binning capability (see below). Easily adjust which features you can view by using the multi-select drop-down to select and deselect features.
- Selecting "Help me choose features" will open a panel with the list of features and their properties:
+1. **Help me choose metrics**: Select this icon to open a panel with more information about what metrics are available to be shown in the table. Easily adjust which metrics to view by using the multi-select dropdown to select and deselect performance metrics.
+2. **Help me choose features**: Select this icon to open a panel with more information about what features are available to be shown in the table, with descriptors of each feature and their binning capability (see below). Easily adjust which features to view by using the multi-select dropdown to select and deselect them.
- :::image type="content" source="./media/how-to-responsible-ai-dashboard/model-overview-choose-features.png" alt-text="Screenshot of the dashboard's model overview tab showing how to choose features." lightbox= "./media/how-to-responsible-ai-dashboard/model-overview-choose-features.png":::
-3. **Show heatmap**: toggle on and off to see heatmap visualization in the table below. The gradient of the heatmap corresponds to the range normalized between the lowest value and the highest value in each column.
-4. **Table of metrics for each feature cohort**: Table with columns for feature cohorts (sub-cohort of your selected feature), sample size of each cohort, and the selected model performance metrics for each feature cohort.
-5. **Fairness metrics/disparity metrics**: Table that corresponds to the above metrics table and shows the maximum difference or maximum ratio in performance scores between any two feature cohorts.
-6. **Bar chart visualizing individual metric** (for example, mean absolute error) across the cohort for easy comparison.
-7. **Choose cohorts (y-axis)**: Selecting this will allow you to select which cohorts you want to view in the bar chart.
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/model-overview-choose-features.png" alt-text="Screenshot of the dashboard 'Model overview' pane, showing how to choose features." lightbox= "./media/how-to-responsible-ai-dashboard/model-overview-choose-features.png":::
+3. **Show heat map**: Toggle on and off to see a heat map visualization. The gradient of the heat map corresponds to the range that's normalized between the lowest value and the highest value in each column.
+4. **Table of metrics for each feature cohort**: A table with columns for feature cohorts (sub-cohort of your selected feature), sample size of each cohort, and the selected model performance metrics for each feature cohort.
+5. **Fairness metrics/disparity metrics**: A table that corresponds to the metrics table and shows the maximum difference or maximum ratio in performance scores between any two feature cohorts.
+6. **Bar chart visualizing individual metric**: View mean absolute error across the cohorts for easy comparison.
+7. **Choose cohorts (y-axis)**: Select this button to choose which cohorts to view in the bar chart.
- Selecting "Choose cohorts" will open a panel with an option to either show a comparison of selected dataset cohorts or feature cohorts based on what is selected in the multi-select drop-down below it. Select "Confirm" to save the changes to the bar chart view.
+ Selecting **Choose cohorts** opens a panel with an option to either show a comparison of selected dataset cohorts or feature cohorts, depending on what you select in the multi-select dropdown list below it. Select **Confirm** to save the changes to the bar chart view.
- :::image type="content" source="./media/how-to-responsible-ai-dashboard/model-overview-choose-cohorts.png" alt-text="Screenshot of the dashboard's model overview tab showing how to choose cohorts." lightbox= "./media/how-to-responsible-ai-dashboard/model-overview-choose-cohorts.png":::
-8. **Choose metric (x-axis)**: Selecting this will allow you to select which metric to view in the bar chart.
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/model-overview-choose-cohorts.png" alt-text="Screenshot of the dashboard 'Model overview' pane, showing how to choose cohorts." lightbox= "./media/how-to-responsible-ai-dashboard/model-overview-choose-cohorts.png":::
+8. **Choose metric (x-axis)**: Select this button to choose which metric to view in the bar chart.
### Data explorer
-The Data explorer component allows you to analyze data statistics along axes filters such as predicted outcome, dataset features and error groups. This component helps you understand over and underrepresentation in your dataset.
+With the data explorer component, you can analyze data statistics along the x-axis and y-axis by using filters such as predicted outcome, dataset features, and error groups. This component helps you understand overrepresentation and underrepresentation in your dataset.
1. **Select a dataset cohort to explore**: Specify which dataset cohort from your list of cohorts you want to view data statistics for.
-2. **X-axis**: displays the type of value being plotted horizontally, modify by selecting the button to open a side panel.
-3. **Y-axis**: displays the type of value being plotted vertically, modify by selecting the button to open a side panel.
-4. **Chart type**: specifies chart type, choose between aggregate plots (bar charts) or individual datapoints (scatter plot).
+2. **X-axis**: Displays the type of value being plotted horizontally. Modify the values by selecting the button to open a side panel.
+3. **Y-axis**: Displays the type of value being plotted vertically. Modify the values by selecting the button to open a side panel.
+4. **Chart type**: Specifies the chart type. Choose between aggregate plots (bar charts) or individual data points (scatter plot).
- Selecting the "Individual datapoints" option under "Chart type" shifts to a disaggregated view of the data with the availability of a color axis.
+ By selecting the **Individual data points** option under **Chart type**, you can shift to a disaggregated view of the data with the availability of a color axis.
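The following sketch illustrates the two chart types with synthetic data; it's a stand-in for the dashboard's rendering, not its code:

```python
# An illustrative sketch of the two chart types; the data here is synthetic.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
age = rng.integers(18, 70, 300)
predicted = rng.random(300)
is_error = rng.random(300) < 0.2

fig, (agg, ind) = plt.subplots(1, 2, figsize=(10, 4))

# Aggregate plot: count of data points per age bin (bar chart).
counts, edges = np.histogram(age, bins=5)
agg.bar(edges[:-1], counts, width=np.diff(edges), align="edge", edgecolor="k")
agg.set(title="Aggregate view", xlabel="age", ylabel="count")

# Individual data points: scatter plot with a color axis for errors.
ind.scatter(age, predicted, c=is_error, cmap="coolwarm", s=12)
ind.set(title="Individual data points", xlabel="age", ylabel="predicted")
plt.tight_layout()
plt.show()
```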
### Feature importances (model explanations)
-The model explanation component allows you to see which features were most important in your model's predictions. You can view what features impacted your model's prediction overall in the **Aggregate feature importance** tab or view feature importances for individual datapoints in the **Individual feature importance** tab.
+By using the model explanation component, you can see which features were most important in your model's predictions. You can view what features affected your model's prediction overall on the **Aggregate feature importance** pane or view feature importances for individual data points on the **Individual feature importance** pane.
#### Aggregate feature importances (global explanations)
-1. **Top k features**: lists the most important global features for a prediction and allows you to change it through a slider bar.
-2. **Aggregate feature importance**: visualizes the weight of each feature in influencing model decisions across all predictions.
-3. **Sort by**: allows you to select which cohort's importances to sort the aggregate feature importance graph by.
-4. **Chart type**: allows you to select between a bar plot view of average importances for each feature and a box plot of importances for all data.
+1. **Top k features**: Lists the most important global features for a prediction and allows you to change it by using a slider bar.
+2. **Aggregate feature importance**: Visualizes the weight of each feature in influencing model decisions across all predictions (approximated in the sketch after this list).
+3. **Sort by**: Allows you to select which cohort's importances to sort the aggregate feature importance graph by.
+4. **Chart type**: Allows you to select between a bar plot view of average importances for each feature and a box plot of importances for all data.
-When you select one of the features in the bar plot, the below dependence plot will be populated. The dependence plot shows the relationship of the values of a feature to its corresponding feature importance values impacting the model prediction.
+ When you select one of the features in the bar plot, the dependence plot is populated, as shown in the following image. The dependence plot shows the relationship of the values of a feature to its corresponding feature importance values, which affect the model prediction.
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/aggregate-feature-importance-2.png" alt-text="Screenshot of the dashboard, showing a populated dependence plot on the 'Aggregate feature importances' pane." lightbox="./media/how-to-responsible-ai-dashboard/aggregate-feature-importance-2.png":::
-5. **Feature importance of [feature] (regression) or Feature importance of [feature] on [predicted class] (classification)**: plots the importance of a particular feature across the predictions. For regression scenarios, the importance values are in terms of the output so positive feature importance means it contributed positively towards the output; vice versa for negative feature importance. For classification scenarios, positive feature importances mean that feature value is contributing towards the predicted class denoted in the y-axis title; and negative feature importance means it's contributing against the predicted class.
-6. **View dependence plot for**: selects the feature whose importances you want to plot.
-7. **Select a dataset cohort**: selects the cohort whose importances you want to plot.
+5. **Feature importance of [feature] (regression) or Feature importance of [feature] on [predicted class] (classification)**: Plots the importance of a particular feature across the predictions. For regression scenarios, the importance values are in terms of the output, so positive feature importance means it contributed positively toward the output. The opposite applies to negative feature importance. For classification scenarios, positive feature importances mean that feature value is contributing toward the predicted class denoted in the y-axis title. Negative feature importance means it's contributing against the predicted class.
+6. **View dependence plot for**: Selects the feature whose importances you want to plot.
+7. **Select a dataset cohort**: Selects the cohort whose importances you want to plot.
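The dashboard computes these importances with its own explainers. As a stand-in for intuition, permutation importance yields a comparable global ranking; here's a minimal sketch:

```python
# A sketch of aggregate (global) feature importance using permutation
# importance as a stand-in; the dashboard uses its own explainers, so treat
# this as an approximation for intuition only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:8]:  # "Top k features" with k = 8
    print(f"{name}: {score:.4f}")
```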
#### Individual feature importances (local explanations)
-This tab explains how features influence the predictions made on specific datapoints. You can choose up to five datapoints to compare feature importances for.
+The following image illustrates how features influence the predictions that are made on specific data points. You can choose up to five data points to compare feature importances for.
-**Point selection table**: view your datapoints and select up to five points to display in the feature importance plot or the ICE plot below the table.
+**Point selection table**: View your data points and select up to five points to display in the feature importance plot or the ICE plot below the table.
-**Feature importance plot**: bar plot of the importance of each feature for the model's prediction on the selected datapoint(s)
+**Feature importance plot**: A bar plot of the importance of each feature for the model's prediction on the selected data points.
-1. **Top k features**: allows you to specify the number of features to show importances for through a slider.
-2. **Sort by**: allows you to select the point (of those checked above) whose feature importances are displayed in descending order on the feature importance plot.
-3. **View absolute values**: Toggle on to sort the bar plot by the absolute values; this allows you to see the top highest impacting features regardless of its positive or negative direction.
-4. **Bar plot**: displays the importance of each feature in the dataset for the model prediction of the selected datapoints.
+1. **Top k features**: Allows you to specify the number of features to show importances for by using a slider.
+2. **Sort by**: Allows you to select the point (of those checked above) whose feature importances are displayed in descending order on the feature importance plot.
+3. **View absolute values**: Toggle on to sort the bar plot by the absolute values. This allows you to see the most impactful features regardless of their positive or negative direction.
+4. **Bar plot**: Displays the importance of each feature in the dataset for the model prediction of the selected data points.
-**Individual conditional expectation (ICE) plot**: switches to the ICE plot showing model predictions across a range of values of a particular feature
+**Individual conditional expectation (ICE) plot**: Switches to the ICE plot, which shows model predictions across a range of values of a particular feature (see the sketch after the following list).
-- **Min (numerical features)**: specifies the lower bound of the range of predictions in the ICE plot.
-- **Max (numerical features)**: specifies the upper bound of the range of predictions in the ICE plot.
-- **Steps (numerical features)**: specifies the number of points to show predictions for within the interval.
-- **Feature values (categorical features)**: specifies which categorical feature values to show predictions for.
-- **Feature**: specifies the feature to make predictions for.
+- **Min (numerical features)**: Specifies the lower bound of the range of predictions in the ICE plot.
+- **Max (numerical features)**: Specifies the upper bound of the range of predictions in the ICE plot.
+- **Steps (numerical features)**: Specifies the number of points to show predictions for within the interval.
+- **Feature values (categorical features)**: Specifies which categorical feature values to show predictions for.
+- **Feature**: Specifies the feature to make predictions for.
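Outside the dashboard, scikit-learn can draw a comparable ICE plot. In this sketch, **Min** and **Max** roughly correspond to the percentile bounds and **Steps** to `grid_resolution`; the dataset is illustrative:

```python
# A sketch of an ICE plot with scikit-learn; illustrative only, not the
# dashboard's implementation.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

PartialDependenceDisplay.from_estimator(
    model, X, features=["bmi"],
    kind="individual",         # ICE: one prediction line per data point
    percentiles=(0.05, 0.95),  # lower/upper bounds of the feature range
    grid_resolution=20,        # number of points to predict within the range
)
plt.show()
```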
### Counterfactual what-if
-Counterfactual analysis provides a diverse set of "what-if" examples generated by changing the values of features minimally to produce the desired prediction class (classification) or range (regression).
+Counterfactual analysis provides a diverse set of *what-if* examples generated by changing the values of features minimally to produce the desired prediction class (classification) or range (regression).
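For intuition, counterfactuals like these can be generated with the open-source dice-ml package that this component builds on. The following is a hedged, minimal sketch; the dataset is illustrative, and the package is installed with `pip install dice-ml`:

```python
# A minimal sketch of generating diverse counterfactuals with dice-ml;
# the dataset and settings here are illustrative.
import dice_ml
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
df = X.assign(target=y)

data = dice_ml.Data(dataframe=df, continuous_features=list(X.columns),
                    outcome_name="target")
ml_model = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, ml_model, method="random")

# Generate 10 diverse counterfactuals that flip the predicted class.
result = explainer.generate_counterfactuals(
    X.iloc[[0]], total_CFs=10, desired_class="opposite")
result.visualize_as_dataframe(show_only_changes=True)
```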
-1. **Point selection**: selects the point to create a counterfactual for and display in the top-ranking features plot below
- :::image type="content" source="./media/how-to-responsible-ai-dashboard/counterfactuals-top-ranked-features.png" alt-text="Screenshot of the dashboard showing a the top ranked features plot." lightbox="./media/how-to-responsible-ai-dashboard/counterfactuals-top-ranked-features.png":::
+1. **Point selection**: Selects the point to create a counterfactual for and display in the top-ranking features plot below it.
- **Top ranked features plot**: displays, in descending order in terms of average frequency, the features to perturb to create a diverse set of counterfactuals of the desired class. You must generate at least 10 diverse counterfactuals per datapoint to enable this chart due to lack of accuracy with a lesser number of counterfactuals.
-2. **Selected datapoint**: performs the same action as the point selection in the table, except in a dropdown menu.
-3. **Desired class for counterfactual(s)**: specifies the class or range to generate counterfactuals for.
-4. **Create what-if counterfactual**: opens a panel for counterfactual what-if datapoint creation.
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/counterfactuals-top-ranked-features.png" alt-text="Screenshot of the dashboard, showing a top ranked features plot." lightbox="./media/how-to-responsible-ai-dashboard/counterfactuals-top-ranked-features.png":::
-Selecting the **Create what-if counterfactual** button opens a full window panel.
+ **Top ranked features plot**: Displays, in descending order of average frequency, the features to perturb to create a diverse set of counterfactuals of the desired class. You must generate at least 10 diverse counterfactuals per data point to enable this chart, because accuracy suffers with fewer counterfactuals.
+2. **Selected data point**: Performs the same action as the point selection in the table, except in a dropdown menu.
+3. **Desired class for counterfactual(s)**: Specifies the class or range to generate counterfactuals for.
+4. **Create what-if counterfactual**: Opens a panel for counterfactual what-if data point creation.
+ Select the **Create what-if counterfactual** button to open a full window panel.
-5. **Search features**: finds features to observe and change values.
-6. **Sort counterfactual by ranked features**: sorts counterfactual examples in order of perturbation effect (see above for top ranked features plot).
-7. **Counterfactual Examples**: lists feature values of example counterfactuals with the desired class or range. The first row is the original reference datapoint. Select "Set value" to set all the values of your own counterfactual datapoint in the bottom row with the values of the pre-generated counterfactual example.
-8. **Predicted value or class** lists the model prediction of a counterfactual's class given those changed features.
-9. **Create your own counterfactual**: allows you to perturb your own features to modify the counterfactual, features that have been changed from the original feature value will be denoted by the title being bolded (ex. Employer and Programming language). Selecting "See prediction delta" will show you the difference in the new prediction value from the original datapoint.
-10. **What-if counterfactual name**: allows you to name the counterfactual uniquely.
-11. **Save as new datapoint**: saves the counterfactual you've created.
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/counterfactuals-examples.png" alt-text="Screenshot of the dashboard, showing what-if counterfactuals." lightbox="./media/how-to-responsible-ai-dashboard/counterfactuals-examples.png":::
+
+5. **Search features**: Finds features to observe and change values.
+6. **Sort counterfactual by ranked features**: Sorts counterfactual examples in order of perturbation effect. (Also see **Top ranked features plot**, discussed earlier.)
+7. **Counterfactual examples**: Lists feature values of example counterfactuals with the desired class or range. The first row is the original reference data point. Select **Set value** to set all the values of your own counterfactual data point in the bottom row with the values of the pre-generated counterfactual example.
+8. **Predicted value or class**: Lists the model prediction of a counterfactual's class given those changed features.
+9. **Create your own counterfactual**: Allows you to perturb your own features to modify the counterfactual. Features that have been changed from the original feature value are denoted by the title being bolded (for example, Employer and Programming language). Select **See prediction delta** to view the difference in the new prediction value from the original data point.
+10. **What-if counterfactual name**: Allows you to name the counterfactual uniquely.
+11. **Save as new data point**: Saves the counterfactual you've created.
### Causal analysis

#### Aggregate causal effects
-Selecting the **Aggregate causal effects** tab of the Causal analysis component shows the average causal effects for pre-defined treatment features (the features that you want to treat to optimize your outcome).
+Select the **Aggregate causal effects** tab of the causal analysis component to display the average causal effects for pre-defined treatment features (the features that you want to treat to optimize your outcome).
> [!NOTE]
> Global cohort functionality is not supported for the causal analysis component.
+1. **Direct aggregate causal effect table**: Displays the causal effect of each feature aggregated on the entire dataset and associated confidence statistics.
-1. **Direct aggregate causal effect table**: displays the causal effect of each feature aggregated on the entire dataset and associated confidence statistics
- 1. **Continuous treatments**: On average in this sample, increasing this feature by one unit will cause the probability of class to increase by X units, where X is the causal effect.
- 1. **Binary treatments**: On average in this sample, turning on this feature will cause the probability of class to increase by X units, where X is the causal effect.
-1. **Direct aggregate causal effect whisker plot**: visualizes the causal effects and confidence intervals of the points in the table
+ * **Continuous treatments**: On average in this sample, increasing this feature by one unit will cause the probability of class to increase by X units, where X is the causal effect.
+ * **Binary treatments**: On average in this sample, turning on this feature will cause the probability of class to increase by X units, where X is the causal effect.
+
+1. **Direct aggregate causal effect whisker plot**: Visualizes the causal effects and confidence intervals of the points in the table. (For one way to estimate such effects outside the dashboard, see the sketch that follows.)
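The causal analysis component builds on the open-source EconML package (`pip install econml`). The following is a rough sketch of estimating an aggregate causal effect on synthetic data; treat it as an illustration, not the component's implementation:

```python
# A hedged sketch of estimating an aggregate causal effect with EconML;
# the data here is synthetic, with a known true effect of 2.0.
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))          # heterogeneity features
W = rng.normal(size=(n, 2))          # confounders/controls
T = rng.binomial(1, 0.5, size=n)     # binary treatment
Y = 2.0 * T + X[:, 0] + W[:, 0] + rng.normal(size=n)

est = LinearDML(discrete_treatment=True, random_state=0)
est.fit(Y, T, X=X, W=W)

# "On average in this sample, turning on this feature will cause the
# outcome to increase by X units":
print(est.ate(X))           # point estimate of the average causal effect
print(est.ate_interval(X))  # confidence interval (whisker plot analogue)
```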
#### Individual causal effects and causal what-if
-To get a granular view of causal effects on an individual datapoint, switch to the **Individual causal what-if** tab.
+To get a granular view of causal effects on an individual data point, switch to the **Individual causal what-if** tab.
-1. **X axis**: selects feature to plot on the x-axis.
-2. **Y axis**: selects feature to plot on the y-axis.
-3. **Individual causal scatter plot**: visualizes points in table as scatter plot to select datapoint for analyzing causal-what-if and viewing the individual causal effects below
-4. **Set new treatment value**
- 1. **(numerical)**: shows slider to change the value of the numerical feature as a real-world intervention.
- 1. **(categorical)**: shows drop-down to select the value of the categorical feature.
+1. **X-axis**: Selects the feature to plot on the x-axis.
+2. **Y-axis**: Selects the feature to plot on the y-axis.
+3. **Individual causal scatter plot**: Visualizes points in the table as a scatter plot to select data points for analyzing causal what-if and viewing the individual causal effects below it.
+4. **Set new treatment value**:
+ * **(numerical)**: Shows a slider to change the value of the numerical feature as a real-world intervention.
+ * **(categorical)**: Shows a dropdown list to select the value of the categorical feature.
#### Treatment policy
-Selecting the Treatment policy tab switches to a view to help determine real-world interventions and shows treatment(s) to apply to achieve a particular outcome.
+Select the **Treatment policy** tab to switch to a view to help determine real-world interventions and show treatments to apply to achieve a particular outcome.
+1. **Set treatment feature**: Selects a feature to change as a real-world intervention.
+
+2. **Recommended global treatment policy**: Displays recommended interventions for data cohorts to improve the target feature value. The table can be read from left to right, where the segmentation of the dataset is first in rows and then in columns. For example, for 658 individuals whose employer isn't Snapchat and whose programming language isn't JavaScript, the recommended treatment policy is to increase the number of GitHub repos contributed to.
+ **Average gains of alternative policies over always applying treatment**: Plots the target feature value in a bar chart of the average gain in your outcome for the above recommended treatment policy versus always applying treatment.
-1. **Set treatment feature**: selects feature to change as a real-world intervention
-2. **Recommended global treatment policy**: displays recommended interventions for data cohorts to improve target feature value. The table can be read from left to right, where the segmentation of the dataset is first in rows and then in columns. For example, 658 individuals whose employer isn't Snapchat, and their programming language isn't JavaScript, the recommended treatment policy is to increase the number of GitHub repos contributed to.
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/causal-treatment-policy-2.png" alt-text="Screenshot of the dashboard showing a bar chart of the average gains of alternative policies over always applying treatment on the treatment policy tab." lightbox= "./media/how-to-responsible-ai-dashboard/causal-treatment-policy-2.png":::
-**Average gains of alternative policies over always applying treatment**: plots the target feature value in a bar chart of the average gain in your outcome for the above recommended treatment policy versus always applying treatment.
+ **Recommended individual treatment policy**:
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/causal-treatment-policy-3.png" alt-text="Screenshot of the dashboard showing a recommended individual treatment policy table on the treatment policy tab." lightbox= "./media/how-to-responsible-ai-dashboard/causal-treatment-policy-3.png":::
-**Recommended individual treatment policy**:
+3. **Show top k data point samples ordered by causal effects for recommended treatment feature**: Selects the number of data points to show in the table.
-3. **Show top k datapoint samples ordered by causal effects for recommended treatment feature**: selects the number of datapoints to show in the table below.
-4. **Recommended individual treatment policy table**: lists, in descending order of causal effect, the datapoints whose target features would be most improved by an intervention.
+4. **Recommended individual treatment policy table**: Lists, in descending order of causal effect, the data points whose target features would be most improved by an intervention.
## Next steps

- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md).
-- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md).
- View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.
-- Explore the features of the Responsible AI Dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
-- Learn more about how the Responsible AI dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
-- Learn about how the Responsible AI dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)
+- Explore the features of the Responsible AI dashboard through this [interactive AI lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
+- Learn more about how you can use the Responsible AI dashboard and scorecard to debug data and models and inform better decision-making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
+- Learn about how the Responsible AI dashboard and scorecard were used by the UK National Health Service (NHS) in a [real-life customer story](https://aka.ms/NHSCustomerStory).
machine-learning How To Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-scorecard.md
Title: Share insights with Responsible AI scorecard (preview)
+ Title: Share insights with a Responsible AI scorecard (preview)
description: Share insights with non-technical business stakeholders by exporting a PDF Responsible AI scorecard from Azure Machine Learning.
Last updated 08/17/2022
-# Share insights with Responsible AI scorecard (preview)
+# Share insights with a Responsible AI scorecard (preview)
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-Azure Machine Learning's Responsible AI scorecard is a PDF report generated based our Responsible AI dashboard insights and customizations to accompany your machine learning models. You can easily configure, download, and share your PDF scorecard with your technical and non-technical stakeholders to educate them about your data and model health, compliance, and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of your model.
+An Azure Machine Learning Responsible AI scorecard is a PDF report that's generated based on Responsible AI dashboard insights and customizations to accompany your machine learning models. You can easily configure, download, and share your PDF scorecard with your technical and non-technical stakeholders to educate them about your data and model health and compliance, and to help build trust. You can also use the scorecard in audit reviews to inform the stakeholders about the characteristics of your model.
-## Why Responsible AI scorecard?
+## Why a Responsible AI scorecard?
-Our Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions, and while it can help you implement Responsible AI practically in your machine learning lifecycle, there are some needs left unaddressed:
+The Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions. Though the dashboard can help you implement Responsible AI practically in your machine learning lifecycle, there are some needs left unaddressed:
-- There often exists a gap between the technical Responsible AI tools (designed for machine-learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
-- While an end-to-end machine learning life cycle includes both technical and non-technical stakeholders in the loop, there's very little support to enable an effective multi-stakeholder alignment, helping technical experts get timely feedback and direction from the non-technical stakeholders.
+- There often exists a gap between the technical Responsible AI tools (designed for machine learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
+- Although an end-to-end machine learning lifecycle keeps both technical and non-technical stakeholders in the loop, there's very little support to enable an effective multi-stakeholder alignment where technical experts get timely feedback and direction from the non-technical stakeholders.
- AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.
-One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Run History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard to empower ML professionals to generate and share their data and model health records easily.
+One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the ability to archive, for quick future reference, model and data insights in the Azure Machine Learning run history. As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we're introducing the Responsible AI scorecard to empower machine learning professionals to generate and share their data and model health records easily.
## Who should use a Responsible AI scorecard?

-- If you are a data scientist or a machine learning professional, after training your model and generating its corresponding Responsible AI dashboard(s) for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard and share the report easily with your technical and non-technical stakeholders to build trust and gain their approval for deployment.
+* If you're a data scientist or machine learning professional:
+
+ After training your model and generating its corresponding Responsible AI dashboards for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard and share the report easily with your technical and non-technical stakeholders. Doing so helps build trust and gain their approval for deployment.
-- If you're a product manager, business leader, or an accountable stakeholder on an AI product, you can pass your desired model performance and fairness target values such as your target accuracy, target error rate, etc., to your data science team, asking them to generate this scorecard with respect to your identified target values and whether your model meets them. That can provide guidance into whether the model should be deployed or further improved.
+* If you're a product manager, a business leader, or an accountable stakeholder on an AI product:
-## How to generate a Responsible AI scorecard?
+ You can pass your desired model performance and fairness target values, such as target accuracy or target error rate, to your data science team. The team can generate a scorecard with respect to your identified target values, assess whether your model meets them, and then advise as to whether the model should be deployed or further improved.
+
+## Generate a Responsible AI scorecard
The configuration stage requires you to use your domain expertise around the problem to set your desired target values on model performance and fairness metrics.
-Like other Responsible AI dashboard components [configured in the YAML pipeline](how-to-responsible-ai-dashboard-sdk-cli.md?tabs=yaml#responsible-ai-components), you can add a component to generate the scorecard in the YAML pipeline.
+As with other Responsible AI dashboard components [configured in the YAML pipeline](how-to-responsible-ai-dashboard-sdk-cli.md?tabs=yaml#responsible-ai-components), you can add a component to generate the scorecard in the YAML pipeline.
-Where pdf_gen.json is the scorecard generation configuration json file and cohorts.json is the prebuilt cohorts definition json file.
+In the following code, the *pdf_gen.json* file is the JSON configuration file for scorecard generation, and *cohorts.json* is the JSON definition file for pre-built cohorts.
```yml
scorecard_01:
```
-Sample json for cohorts definition and score card generation config can be found below:
+Here's a sample JSON file for cohorts definition and scorecard-generation configuration:
Cohorts definition:
  }
]
```
-Scorecard generation config for a regression example:
+
+Here's a scorecard-generation configuration file as a regression example:
```json
{
  "Model": {
- "ModelName": "GPT2 Access",
+ "ModelName": "GPT-2 Access",
"ModelType": "Regression",
- "ModelSummary": "This is a regression model to analyzer how likely a programmer is given access to gpt 2"
+ "ModelSummary": "This is a regression model to analyze how likely a programmer is given access to GPT-2"
}, "Metrics": { "mean_absolute_error": {
Scorecard generation config for a regression example:
  ]
}
```
-Scorecard generation config for a classification example:
+
+Here's a scorecard-generation configuration file as a classification example:
```json
{
  "Model": {
    "ModelName": "Housing Price Range Prediction",
    "ModelType": "Classification",
- "ModelSummary": "This model is a classifier predicting if the house sells for more than median price or not."
+ "ModelSummary": "This model is a classifier that predicts whether the house will sell for more than the median price."
}, "Metrics" :{ "accuracy_score": {
Scorecard generation config for a classification example:
```
-### Definition of inputs of the Responsible AI scorecard component
+### Definition of inputs for the Responsible AI scorecard component
-This section defines the list of parameters required to configure the Responsible AI scorecard component.
+This section lists and defines the parameters that are required to configure the Responsible AI scorecard component.
#### Model
-| ModelName | Name of Model |
-|--|-|
-| ModelType | Values in ['classification', 'regression']. |
-| ModelSummary | Input a blurb of text summarizing what the model is for. |
+| ModelName | Name of model |
+|||
+| `ModelType` | Values in ['classification', 'regression']. |
+| `ModelSummary` | Enter text that summarizes what the model is for. |
> [!NOTE]
-> For multi-class classification, you should first use the One-vs-Rest strategy to choose your reference class, and hence, split your multi-class classification model into a binary classification problem for your selected reference class vs the rest of classes.
+> For multi-class classification, you should first use the One-vs-Rest strategy to choose your reference class, and then split your multi-class classification model into a binary classification problem for your selected reference class versus the rest of the classes.
#### Metrics
-| Performance Metric | Definition | Model Type |
-|--|-|-|
-| accuracy_score | The fraction of data points classified correctly. | Classification |
-| precision_score | The fraction of data points classified correctly among those classified as 1. | Classification |
-| recall_score | The fraction of data points classified correctly among those whose true label is 1. Alternative names: true positive rate, sensitivity | Classification |
-| f1_score | F1-score is the harmonic mean of precision and recall. | Classification |
-| error_rate | Proportion of instances misclassified over the whole set of instances. | Classification |
-| mean_absolute_error | The average of absolute values of errors. More robust to outliers than MSE. | Regression |
-| mean_squared_error | The average of squared errors. | Regression |
-| median_absolute_error | The median of squared errors. | Regression |
-| r2_score | The fraction of variance in the labels explained by the model. | Regression |
-
-Threshold:
- Desired threshold for selected metric. Allowed mathematical tokens are >, <, >=, and <= followed by a real number. For example, >= 0.75 means that the target for selected metric is greater than or equal to 0.75.
+| Performance metric | Definition | Model type |
+||||
+| `accuracy_score` | The fraction of data points that are classified correctly. | Classification |
+| `precision_score` | The fraction of data points that are classified correctly among those classified as 1. | Classification |
+| `recall_score` | The fraction of data points that are classified correctly among those whose true label is 1. Alternative names: true positive rate, sensitivity. | Classification |
+| `f1_score` | The F1 score is the harmonic mean of precision and recall. | Classification |
+| `error_rate` | The proportion of instances that are misclassified over the whole set of instances. | Classification |
+| `mean_absolute_error` | The average of absolute values of errors. More robust to outliers than `mean_squared_error`. | Regression |
+| `mean_squared_error` | The average of squared errors. | Regression |
+| `median_absolute_error` | The median of squared errors. | Regression |
+| `r2_score` | The fraction of variance in the labels explained by the model. | Regression |
+
+Threshold: The desired threshold for the selected metric. Allowed mathematical tokens are >, <, >=, and <=, followed by a real number. For example, >= 0.75 means that the target for the selected metric is greater than or equal to 0.75.
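To illustrate how such a token reads, here's a small, purely hypothetical helper; `meets_target` is not part of the scorecard component:

```python
# A hypothetical helper (not part of the scorecard component) illustrating
# how a threshold token such as ">= 0.75" is interpreted.
import operator
import re

_OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt, "<": operator.lt}

def meets_target(metric_value: float, threshold: str) -> bool:
    """Return True if metric_value satisfies a token like '>= 0.75'."""
    match = re.fullmatch(r"\s*(>=|<=|>|<)\s*(-?\d+(?:\.\d+)?)\s*", threshold)
    if not match:
        raise ValueError(f"Unrecognized threshold: {threshold!r}")
    op, target = _OPS[match.group(1)], float(match.group(2))
    return op(metric_value, target)

print(meets_target(0.82, ">= 0.75"))  # True: target met
print(meets_target(0.08, "<= 0.05"))  # False: target missed
```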
#### Feature importance
-top_n:
-Number of features to show with a maximum of 10. Positive integers up to 10 are allowed.
+`top_n`: The number of features to show, with a maximum of 10. Positive integers up to 10 are allowed.
#### Fairness

| Metric | Definition |
|--|--|
-| metric | Primary metric for evaluation fairness |
-| sensitive_features | A list of feature name from input dataset to be designated as sensitive feature for fairness report. |
-| fairness_evaluation_kind | Values in ['difference', 'ratio']. |
-| threshold | **Desired target values** of the fairness evaluation. Allowed mathematical tokens are >, <, >=, and <= followed by a real number. For example, metric="accuracy", fairness_evaluation_kind="difference" <= 0.05 means that the target for the difference in accuracy is less than or equal to 0.05. |
+| `metric` | The primary metric for evaluating fairness. |
+| `sensitive_features` | A list of feature names from the input dataset to be designated as sensitive features for the fairness report. |
+| `fairness_evaluation_kind` | Values in ['difference', 'ratio']. |
+| `threshold` | The *desired target values* of the fairness evaluation. Allowed mathematical tokens are >, <, >=, and <=, followed by a real number.<br>For example, with metric="accuracy" and fairness_evaluation_kind="difference", a threshold of <= 0.05 means that the target for the difference in accuracy is less than or equal to 0.05. |
> [!NOTE]
- Your choice of `fairness_evaluation_kind` (selecting 'difference' vs 'ratio') impacts the scale of your target value. Be mindful of your selection to choose a meaningful target value.
+> Your choice of `fairness_evaluation_kind` (selecting 'difference' versus 'ratio') affects the scale of your target value. In your selection, be sure to choose a meaningful target value.
-You can select from the following metrics, paired with the `fairness_evaluation_kind` to configure your fairness assessment component of the scorecard:
+You can select from the following metrics, paired with `fairness_evaluation_kind`, to configure your fairness assessment component of the scorecard:
-| Metric | fairness_evaluation_kind | Definition | Model Type |
+| Metric | fairness_evaluation_kind | Definition | Model type |
|||||
-| accuracy_score | difference | The maximum difference in accuracy score between any two groups. | Classification |
-|accuracy_score | ratio | The minimum ratio in accuracy score between any two groups. | Classification |
-| precision_score | difference | The maximum difference in precision score between any two groups. | Classification |
-| precision_score | ratio | The maximum ratio in precision score between any two groups. | Classification |
-| recall_score | difference | The maximum difference in recall score between any two groups. | Classification|
-| recall_score | ratio | The maximum ratio in recall score between any two groups. | Classification|
-|f1_score | difference | The maximum difference in f1 score between any two groups.|Classification|
-| f1_score | ratio | The maximum ratio in f1 score between any two groups.| Classification|
-| error_rate | difference | The maximum difference in error rate between any two groups. | Classification |
-| error_rate | ratio | The maximum ratio in error rate between any two groups.|Classification|
-| Selection_rate | difference | The maximum difference in selection rate between any two groups. | Classification |
-| Selection_rate | ratio | The maximum ratio in selection rate between any two groups. | Classification |
-| mean_absolute_error | difference | The maximum difference in mean absolute error between any two groups. | Regression |
-| mean_absolute_error | ratio | The maximum ratio in mean absolute error between any two groups. | Regression |
-| mean_squared_error | difference | The maximum difference in mean squared error between any two groups. | Regression |
-| mean_squared_error | ratio | The maximum ratio in mean squared error between any two groups. | Regression |
-| median_absolute_error | difference | The maximum difference in median absolute error between any two groups. | Regression |
-| median_absolute_error | ratio | The maximum ratio in median absolute error between any two groups. | Regression |
-| r2_score | difference | The maximum difference in R<sup>2</sup> score between any two groups. | Regression |
-| r2_Score | ratio | The maximum ratio in R<sup>2</sup> score between any two groups. | Regression |
+| `accuracy_score` | difference | The maximum difference in accuracy score between any two groups. | Classification |
+| `accuracy_score` | ratio | The minimum ratio in accuracy score between any two groups. | Classification |
+| `precision_score` | difference | The maximum difference in precision score between any two groups. | Classification |
+| `precision_score` | ratio | The maximum ratio in precision score between any two groups. | Classification |
+| `recall_score` | difference | The maximum difference in recall score between any two groups. | Classification |
+| `recall_score` | ratio | The maximum ratio in recall score between any two groups. | Classification |
+| `f1_score` | difference | The maximum difference in f1 score between any two groups. | Classification |
+| `f1_score` | ratio | The maximum ratio in f1 score between any two groups. | Classification |
+| `error_rate` | difference | The maximum difference in error rate between any two groups. | Classification |
+| `error_rate` | ratio | The maximum ratio in error rate between any two groups. | Classification |
+| `selection_rate` | difference | The maximum difference in selection rate between any two groups. | Classification |
+| `selection_rate` | ratio | The maximum ratio in selection rate between any two groups. | Classification |
+| `mean_absolute_error` | difference | The maximum difference in mean absolute error between any two groups. | Regression |
+| `mean_absolute_error` | ratio | The maximum ratio in mean absolute error between any two groups. | Regression |
+| `mean_squared_error` | difference | The maximum difference in mean squared error between any two groups. | Regression |
+| `mean_squared_error` | ratio | The maximum ratio in mean squared error between any two groups. | Regression |
+| `median_absolute_error` | difference | The maximum difference in median absolute error between any two groups. | Regression |
+| `median_absolute_error` | ratio | The maximum ratio in median absolute error between any two groups. | Regression |
+| `r2_score` | difference | The maximum difference in R<sup>2</sup> score between any two groups. | Regression |
+| `r2_score` | ratio | The maximum ratio in R<sup>2</sup> score between any two groups. | Regression |
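These difference and ratio aggregations can be reproduced with the open-source Fairlearn package (`pip install fairlearn`). A brief sketch with illustrative data:

```python
# A sketch of the difference/ratio aggregations using fairlearn;
# the labels and the sensitive feature here are illustrative.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
sensitive = ["F", "F", "M", "M", "M", "F", "M", "F"]

mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=sensitive)
print(mf.by_group)      # accuracy per cohort
print(mf.difference())  # fairness_evaluation_kind = 'difference'
print(mf.ratio())       # fairness_evaluation_kind = 'ratio'
```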
-## How to view your Responsible AI scorecard?
+## View your Responsible AI scorecard
-Responsible AI scorecards are linked to your Responsible AI dashboards. To view your Responsible AI scorecard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Once you select your model, select the Responsible AI (preview) tab to view a list of generated dashboards. Select which dashboard you'd like to export a Responsible AI scorecard PDF for by selecting **Responsible AI scorecard (preview)**.
+The Responsible AI scorecard is linked to a Responsible AI dashboard. To view your Responsible AI scorecard, go into your model registry and select the registered model that you've generated a Responsible AI dashboard for. After you've selected your model, select the **Responsible AI (preview)** tab to view a list of generated dashboards. Select which dashboard you want to export a Responsible AI scorecard PDF for by selecting **Responsible AI scorecard (preview)**.
-Selecting **Responsible AI scorecard (preview)** will show you a dropdown to view all Responsible AI scorecards generated for this dashboard.
+1. Select **Responsible AI scorecard (preview)** to display a list of all Responsible AI scorecards that are generated for this dashboard.
+ :::image type="content" source="./media/how-to-responsible-ai-scorecard/scorecard-studio-dropdown.png" alt-text="Screenshot of Responsible AI scorecard dropdown." lightbox ="./media/how-to-responsible-ai-scorecard/scorecard-studio-dropdown.png":::
-Select which scorecard you'd like to download from the list and select Download to download the PDF to your machine.
+1. In the list, select the scorecard you want to download, and then select **Download** to download the PDF to your machine.
+ :::image type="content" source="./media/how-to-responsible-ai-scorecard/studio-select-scorecard.png" alt-text="Screenshot of the 'Responsible AI scorecards' pane for selecting a scorecard to download." lightbox= "./media/how-to-responsible-ai-scorecard/studio-select-scorecard.png":::
-## How to read your Responsible AI scorecard?
+## Read your Responsible AI scorecard
-The Responsible AI scorecard is a PDF summary of your key insights from the Responsible AI dashboard. The first summary segment of the scorecard gives you an overview of the machine learning model and the key target values you have set to help all stakeholders determine if your model is ready to be deployed.
+The Responsible AI scorecard is a PDF summary of key insights from your Responsible AI dashboard. The first summary segment of the scorecard gives you an overview of the machine learning model and the key target values you've set to help your stakeholders determine whether the model is ready to be deployed:
-The data explorer segment shows you characteristics of your data, as any model story is incomplete without the right understanding of data
+The data explorer segment shows you characteristics of your data, because any model story is incomplete without a correct understanding of your data:
-The model performance segment displays your model's most important metrics and characteristics of your predictions and how well they satisfy your desired target values.
+The model performance segment displays your model's most important metrics and characteristics of your predictions and how well they satisfy your desired target values:
-Next, you can also view the top performing and worst performing data cohorts and subgroups that are automatically extracted for you to see the blind spots of your model.
+Next, you can also view the top-performing and worst-performing data cohorts and subgroups, which are automatically extracted for you so that you can see your model's blind spots:
-Then you can see the top important factors impacting your model predictions, which is a requirement to build trust with how your model is performing its task.
+You can see the top important factors that affect your model predictions, which is a requirement to build trust with how your model is performing its task:
-You can further see your model fairness insights summarized and inspect how well your model is satisfying the fairness target values you had set for your desired sensitive groups.
+You can further see your model fairness insights summarized and inspect how well your model is satisfying the fairness target values you've set for your desired sensitive groups:
-Finally, you can observe your dataset's causal insights summarized, figuring out whether your identified factors/treatments have any causal effect on the real-world outcome.
+Finally, you can see your dataset's causal insights summarized, which can help you determine whether your identified factors or treatments have any causal effect on the real-world outcome:
## Next steps -- See the how-to guide for generating a Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI ](how-to-responsible-ai-dashboard-ui.md).
+- See the how-to guide for generating a Responsible AI dashboard via [CLI&nbsp;v2 and SDK&nbsp;v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md). - View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.-- Learn more about how the Responsible AI dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)-- See how the Responsible AI dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)-- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn more about how you can use the Responsible AI dashboard and scorecard to debug data and models and inform better decision-making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
+- Learn about how the Responsible AI dashboard and scorecard were used by the UK National Health Service (NHS) in a [real-life customer story](https://aka.ms/NHSCustomerStory).
+- Explore the features of the Responsible AI dashboard through this [interactive AI lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
There are many ways to create a training job with Azure Machine Learning. You ca
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://aka.ms/AMLFree) today.
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/machine-learning/search/) today.
* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
# Troubleshooting online endpoints deployment and scoring - > [!IMPORTANT] > SDK v2 is currently in public preview.
Local deployment is deploying a model to a local Docker environment. Local deplo
Local deployment supports creation, update, and deletion of a local endpoint. It also allows you to invoke and get logs from the endpoint.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
To use local deployment, add `--local` to the appropriate CLI command:
To use local deployment, add `--local` to the appropriate CLI command:
az ml online-deployment create --endpoint-name <endpoint-name> -n <deployment-name> -f <spec_file.yaml> --local ```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
To use local deployment, add `local=True` parameter in the command:
To debug conda installation problems, try the following:
You can't get direct access to the VM where the model is deployed. However, you can get logs from some of the containers that are running on the VM. The amount of information depends on the provisioning status of the deployment. If the specified container is up and running, you'll see its console output; otherwise, you'll get a message to try again later.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
To see log output from container, use the following CLI command:
You can also get logs from the storage initializer container by passing `--con
Add `--help` and/or `--debug` to commands to see more information.
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
To see log output from container, use the `get_logs` method as follows:
If your container could not start, this means scoring could not happen. It might
To get the exact reason for an error, run:
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/cli)
```azurecli az ml online-deployment get-logs -e <endpoint-name> -n <deployment-name> -l 100 ```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
```python ml_client.online_deployments.get_logs(
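For reference, a complete call might look like the following sketch, assuming `ml_client` is an authenticated `MLClient` and the placeholder names are filled in:

```python
# Fetch the last 100 log lines from the scoring container of a deployment
logs = ml_client.online_deployments.get_logs(
    name="<deployment-name>",
    endpoint_name="<endpoint-name>",
    lines=100,
)
print(logs)
```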
Make sure the model is registered to the same workspace as the deployment. Use t
- For example:
- # [CLI](#tab/CLI)
+ # [Azure CLI](#tab/cli)
```azurecli az ml model show --name <model-name> --version <version> ```
- # [Python](#tab/python)
+ # [Python SDK](#tab/python)
```python ml_client.models.get(name="<model-name>", version=<version>)
You can also check if the blobs are present in the workspace storage account.
- If the blob is present, you can use this command to obtain the logs from the storage initializer:
- # [CLI](#tab/CLI)
+ # [Azure CLI](#tab/cli)
```azurecli az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> --container storage-initializer ```
- # [Python](#tab/python)
+ # [Python SDK](#tab/python)
```python ml_client.online_deployments.get_logs(
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
Previously updated : 06/21/2022 Last updated : 09/09/2022 # Trigger applications, processes, or CI/CD workflows based on Azure Machine Learning events (preview)
When to use Event Grid for event-driven actions:
* Trigger an ML pipeline when drift is detected ## Prerequisites+ To use Event Grid, you need contributor or owner access to the Azure Machine Learning workspace you will create events for. ## The event model & types
When setting up your events, you can apply filters to only trigger on specific e
Subscriptions for Azure Machine Learning events are protected by Azure role-based access control (Azure RBAC). Only [contributor or owner](how-to-assign-roles.md#default-roles) of a workspace can create, update, and delete event subscriptions. Filters can be applied to event subscriptions either during the [creation](/cli/azure/eventgrid/event-subscription) of the event subscription or at a later time.
-1. Go to the Azure portal, select a new subscription or an existing one.
-
+1. Go to the Azure portal, select a new subscription or an existing one.
+1. Select the **Events** entry from the left navigation area, and then select **+ Event subscription**.
1. Select the filters tab and scroll down to Advanced filters. For the **Key** and **Value**, provide the property types you want to filter by. Here you can see the event will only trigger when the run type is a pipeline run or pipeline step run. :::image type="content" source="media/how-to-use-event-grid/select-event-filters.png" alt-text="filter events":::
Use [Azure Logic Apps](../logic-apps/index.yml) to configure emails for all your
### Example: Data drift triggers retraining
+> [!IMPORTANT]
+> This example relies on a feature (data drift) that is only available when using Azure Machine Learning SDK v1 or Azure CLI extension v1 for Azure Machine Learning. For more information, see [What is Azure ML CLI & SDK v2](concept-v2.md).
+ Models go stale over time and can stop being useful in the context they run in. One way to tell whether it's time to retrain the model is to detect data drift. This example shows how to use Event Grid with an Azure Logic App to trigger retraining. The example triggers an Azure Data Factory pipeline when data drift occurs between a model's training and serving datasets.
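To make the wiring concrete, here's a hedged sketch of creating such an event subscription programmatically with the `azure-mgmt-eventgrid` package. The resource IDs, subscription name, and Logic App webhook URL are placeholders; the subscription filters on the `DatasetDriftDetected` event type:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventgrid import EventGridManagementClient
from azure.mgmt.eventgrid.models import (
    EventSubscription,
    EventSubscriptionFilter,
    WebHookEventSubscriptionDestination,
)

client = EventGridManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The Azure Machine Learning workspace that emits the drift events
workspace_scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>"
)

subscription = EventSubscription(
    # Deliver matching events to the Logic App's HTTP trigger
    destination=WebHookEventSubscriptionDestination(endpoint_url="<logic-app-http-trigger-url>"),
    # Only forward data drift events
    filter=EventSubscriptionFilter(
        included_event_types=["Microsoft.MachineLearningServices.DatasetDriftDetected"]
    ),
)

client.event_subscriptions.begin_create_or_update(
    workspace_scope, "drift-retraining-subscription", subscription
).result()
```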
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
When creating a compute cluster with the [AmlComputeProvisioningConfiguration](/
az ml compute create --name cpucluster --type amlcompute --identity-type systemassigned ```
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
# Tutorial: Train an object detection model (preview) with AutoML and Python
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
> * [v1](v1/tutorial-auto-train-image-models-v1.md) > * [v2 (current version)](tutorial-auto-train-image-models.md)
You'll write code using the Python SDK in this tutorial and learn the following
* Download and unzip the [**odFridgeObjects.zip**](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an XML file. Each XML file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format, as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) section of the notebook.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs/cli-automl-image-object-detection-task-fridge-items). If you wish to run it in your own local environment, set it up by using the following instructions: * Install and [set up CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the `ml` extension.
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items). If you wish to run it in your own local environment, set it up by using the following instructions:
This tutorial uses the NCsv3-series (with V100 GPUs) as this type of compute tar
The following code creates a GPU compute of size `Standard_NC24s_v3` with four nodes.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] Create a .yml file with the following configuration.
The created compute can be provided using `compute` key in the `automl` task con
compute: azureml:gpu-cluster ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
```python from azure.ai.ml.entities import AmlCompute
This compute is used later when you create the task-specific `automl` job.
You can use an Experiment to track your model training runs.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] Experiment name can be provided using `experiment_name` key as follows:
Experiment name can be provided using `experiment_name` key as follows:
experiment_name: dpv2-cli-automl-image-object-detection-experiment ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ Experiment name is used later while creating the task specific `automl` job. ```python exp_name = "dpv2-image-object-detection-experiment"
In order to use the data for training, upload data to default Blob Storage of yo
- Versioning of the metadata (location, description, etc.) - Lineage tracking
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] Create a .yml file with the following configuration.
To upload the images as a data asset, you run the following CLI v2 command with
az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION] ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ [!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
The next step is to create an `MLTable` from your data in JSONL format, as shown below.
:::code language="yaml" source="~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/data/training-mltable-folder/MLTable":::
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] The following configuration creates training and validation data from the MLTable.
validation_data:
type: mltable ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ You can create data inputs from training and validation MLTable with the following code:
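A minimal sketch of those inputs with the v2 SDK; the relative MLTable folder paths are assumptions based on the example layout above:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Point each input at a folder containing an MLTable file
my_training_data_input = Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder")
my_validation_data_input = Input(type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder")
```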
You can create data inputs from training and validation MLTable with the followi
To configure automated ML runs for image-related tasks, create a task-specific AutoML job.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml
task: image_object_detection
primary_metric: mean_average_precision ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ [!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=image-object-detection-configuration)]
For the tuning settings, use random sampling to pick samples from this parameter
The Bandit early termination policy is also used. This policy terminates poorly performing configurations, that is, configurations that aren't within 20% slack of the best-performing configuration, which significantly saves compute resources.
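For reference, a hedged sketch of constructing that policy with the v2 SDK; the interval and delay values are illustrative assumptions:

```python
from azure.ai.ml.sweep import BanditPolicy

# Terminate runs whose primary metric falls outside 20% slack of the best run so far
early_termination = BanditPolicy(
    slack_factor=0.2,       # the 20% slack described above
    evaluation_interval=2,  # how often the policy is applied (assumed value)
    delay_evaluation=6,     # give each configuration a warm-up period (assumed value)
)
```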
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
search_space:
min_size: "choice(600, 800)" ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ [!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
search_space:
Once the search space and sweep settings are defined, you can then submit the job to train an image model using your training dataset.
-# [CLI v2](#tab/CLI-v2)
+# [Azure CLI](#tab/cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
To submit your AutoML job, you run the following CLI v2 command with the path to
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION] ```
-# [Python SDK v2 (preview)](#tab/SDK-v2)
+# [Python SDK](#tab/python)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
When you've configured your AutoML Job to the desired settings, you can submit the job.
When you've configured your AutoML Job to the desired settings, you can submit t
When you do a hyperparameter sweep, it can be useful to visualize the different configurations that were tried by using the HyperDrive UI. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main automl_image_run from above, which is the HyperDrive parent run. Then you can go into the 'Child runs' tab of that run.
-# [Python SDK v2 (preview)](#tab/SDK-v2-)
- Alternatively, here below you can see directly the HyperDrive parent run and navigate to its 'Child runs' tab:
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+ ```python hd_job = ml_client.jobs.get(returned_job.name + '_HD') hd_job
hd_job
-## Register and deploy model as a web service
+## Register and deploy model
-Once you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances (ACI) or Azure Kubernetes Service (AKS). ACI is the perfect option for testing deployments, while AKS is better suited for high-scale, production usage.
+Once the run completes, you can register the model that was created from the best run (the configuration that resulted in the best primary metric). You can register the model either after downloading it locally or by specifying the `azureml` path with the corresponding job ID.
-You can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
-Navigate to the model you wish to deploy in the **Models** tab of the automated ML run and select the **Deploy**.
+### Get the best run
-![Select model from the automl runs in studio UI ](./media/how-to-auto-train-image-models/select-model.png)
-You can configure the model deployment endpoint name and the inferencing cluster to use for your model deployment in the **Deploy a model** pane.
+# [Azure CLI](#tab/cli)
-![Deploy configuration](./media/how-to-auto-train-image-models/deploy-image-model.png)
+```yaml
+ to be supported
+```
-## Test the web service
-You can test the deployed web service to predict new images. For this tutorial, pass a random image from the dataset and pass it to the scoring URI.
+# [Python SDK](#tab/python)
-```python
-import requests
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=best_run)]
-# URL for the web service
-scoring_uri = <scoring_uri from web service>
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_local_dir)]
-# If the service is authenticated, set the key or token
-key, _ = <keys from the web service>
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=download_model)]
+
-sample_image = './test_image.jpg'
+### Register the model
-# Load image data
-data = open(sample_image, 'rb').read()
+Register the model by using either the `azureml` path or your locally downloaded path.
-# Set the content type
-headers = {'Content-Type': 'application/octet-stream'}
+# [Azure CLI](#tab/cli)
-# If authentication is enabled, set the authorization header
-headers['Authorization'] = f'Bearer {key}'
-# Make the request and display the response
-resp = requests.post(scoring_uri, data, headers=headers)
-print(resp.text)
+```azurecli
+ az ml model create --name od-fridge-items-mlflow-model --version 1 --path azureml://jobs/$best_run/outputs/artifacts/outputs/mlflow-model/ --type mlflow_model --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
```
+# [Python SDK](#tab/python)
-## Visualize detections
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=register_model)]
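For orientation, a hedged sketch of what the registration boils down to with the v2 SDK; `best_run` and the model name are assumptions that mirror the CLI example above:

```python
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Model

# Register the MLflow model produced by the best AutoML child run
model = Model(
    path=f"azureml://jobs/{best_run}/outputs/artifacts/outputs/mlflow-model/",
    name="od-fridge-items-mlflow-model",
    type=AssetTypes.MLFLOW_MODEL,
)
registered_model = ml_client.models.create_or_update(model)
```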
+
-Now that you have scored a test image, you can visualize the bounding boxes for this image. To do so, be sure you have matplotlib installed.
+After you register the model you want to use, you can deploy it by using a managed online endpoint. For more information, see [how to deploy a managed online endpoint](how-to-deploy-managed-online-endpoint-sdk-v2.md).
+### Configure online endpoint
+
+# [Azure CLI](#tab/cli)
++
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
+name: od-fridge-items-endpoint
+auth_mode: key
```
-%pip install --upgrade matplotlib
+
+# [Python SDK](#tab/python)
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=endpoint)]
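A minimal sketch of the same configuration with the v2 SDK; the names mirror the YAML above and are illustrative rather than the notebook's exact code:

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint

# Configure (but don't yet create) the endpoint; key-based auth, as in the YAML above
endpoint = ManagedOnlineEndpoint(
    name="od-fridge-items-endpoint",
    auth_mode="key",
)
```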
++
+### Create the endpoint
+
+Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while the endpoint creation continues.
++
+# [Azure CLI](#tab/cli)
++
+```azurecli
+az ml online-endpoint create --file .\create_endpoint.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
```
-```python
-%matplotlib inline
-import matplotlib.pyplot as plt
-import matplotlib.image as mpimg
-import matplotlib.patches as patches
-from PIL import Image
-import numpy as np
-import json
+# [Python SDK](#tab/python)
-IMAGE_SIZE = (18,12)
-plt.figure(figsize=IMAGE_SIZE)
-img_np=mpimg.imread(sample_image)
-img = Image.fromarray(img_np.astype('uint8'),'RGB')
-x, y = img.size
-
-fig,ax = plt.subplots(1, figsize=(15,15))
-# Display the image
-ax.imshow(img_np)
-
-# draw box and label for each detection
-detections = json.loads(resp.text)
-for detect in detections['boxes']:
- label = detect['label']
- box = detect['box']
- conf_score = detect['score']
- if conf_score > 0.6:
- ymin, xmin, ymax, xmax = box['topY'],box['topX'], box['bottomY'],box['bottomX']
- topleft_x, topleft_y = x * xmin, y * ymin
- width, height = x * (xmax - xmin), y * (ymax - ymin)
- print('{}: [{}, {}, {}, {}], {}'.format(detect['label'], round(topleft_x, 3),
- round(topleft_y, 3), round(width, 3),
- round(height, 3), round(conf_score, 3)))
-
- color = np.random.rand(3) #'red'
- rect = patches.Rectangle((topleft_x, topleft_y), width, height,
- linewidth=3, edgecolor=color,facecolor='none')
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_endpoint)]
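A hedged one-liner showing the shape of that call, assuming the `endpoint` object from the previous step:

```python
# Start endpoint creation; .result() blocks until provisioning finishes
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```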
++
+### Configure online deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class. You can use either GPU or CPU VM SKUs for your deployment cluster.
- ax.add_patch(rect)
- plt.text(topleft_x, topleft_y - 10, label, color=color, fontsize=20)
-plt.show()
+# [Azure CLI](#tab/cli)
++
+```yaml
+name: od-fridge-items-mlflow-deploy
+endpoint_name: od-fridge-items-endpoint
+model: azureml:od-fridge-items-mlflow-model@latest
+instance_type: Standard_DS3_v2
+instance_count: 1
+liveness_probe:
+ failure_threshold: 30
+ success_threshold: 1
+ timeout: 2
+ period: 10
+ initial_delay: 2000
+readiness_probe:
+ failure_threshold: 10
+ success_threshold: 1
+ timeout: 10
+ period: 10
+ initial_delay: 2000
+```
+
+# [Python SDK](#tab/python)
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=deploy)]
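For comparison with the YAML, a hedged sketch of the deployment object in the v2 SDK; probe settings are omitted for brevity, and `registered_model` comes from the registration step above:

```python
from azure.ai.ml.entities import ManagedOnlineDeployment

deployment = ManagedOnlineDeployment(
    name="od-fridge-items-mlflow-deploy",
    endpoint_name="od-fridge-items-endpoint",
    model=registered_model.id,        # the MLflow model registered earlier
    instance_type="Standard_DS3_v2",  # CPU SKU; a GPU SKU also works
    instance_count=1,
)
```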
+++
+### Create the deployment
+
+Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+# [Azure CLI](#tab/cli)
++
+```azurecli
+az ml online-deployment create --file .\create_deployment.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
+```
+
+# [Python SDK](#tab/python)
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_deploy)]
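The corresponding SDK call, sketched under the same assumptions:

```python
# Start deployment creation; .result() blocks until it completes
ml_client.online_deployments.begin_create_or_update(deployment).result()
```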
++
+### Update traffic
+By default, the current deployment is set to receive 0% of the traffic. You can set the traffic percentage that the current deployment should receive. The sum of the traffic percentages of all deployments on one endpoint shouldn't exceed 100%.
+
+# [Azure CLI](#tab/cli)
++
+```azurecli
+az ml online-endpoint update --name 'od-fridge-items-endpoint' --traffic 'od-fridge-items-mlflow-deploy=100' --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION]
```
+# [Python SDK](#tab/python)
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=update_traffic)]
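In the SDK, routing all traffic to the new deployment might look like this sketch, using the same assumed object names:

```python
# Route 100% of traffic to the new deployment, then push the updated endpoint
endpoint.traffic = {"od-fridge-items-mlflow-deploy": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```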
++
+## Test the deployment
+# [Azure CLI](#tab/cli)
+
+```yaml
+
+```
+
+# [Python SDK](#tab/python)
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=create_inference_request)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=dump_inference_request)]
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=invoke_inference)]
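The invocation step reduces to a call like the following sketch, where `request.json` is a hypothetical file holding the image payload that the notebook builds:

```python
import json

# Score the request file against the deployment and parse the detections
response = ml_client.online_endpoints.invoke(
    endpoint_name="od-fridge-items-endpoint",
    deployment_name="od-fridge-items-mlflow-deploy",
    request_file="request.json",
)
detections = json.loads(response)
```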
++
+## Visualize detections
+
+Now that you have scored a test image, you can visualize the bounding boxes for this image. To do so, be sure you have matplotlib installed.
+# [Azure CLI](#tab/cli)
+
+```yaml
+
+```
+
+# [Python SDK](#tab/python)
+
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=visualize_detections)]
++ ## Clean up resources Do not complete this section if you plan on running other Azure Machine Learning tutorials.
In this automated machine learning tutorial, you did the following tasks:
* [Learn how to configure incremental training on computer vision models](how-to-auto-train-image-models.md#incremental-training-optional). * See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md). * Code examples:
- # [CLI v2](#tab/CLI-v2)
- * Review detailed code examples and use cases in the [azureml-examples repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs). Please check the folders with 'cli-automl-image-' prefix for samples specific to building computer vision models.
- # [Python SDK v2 (preview)](#tab/SDK-v2)
- * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs). Please check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
-
+ # [Azure CLI](#tab/cli)
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
+
+ * Review detailed code examples and use cases in the [azureml-examples repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs). Please check the folders with 'cli-automl-image-' prefix for samples specific to building computer vision models.
+
+ # [Python SDK](#tab/python)
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+ * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/sdk-preview/sdk/jobs/automl-standalone-jobs). Please check the folders with 'automl-image-' prefix for samples specific to building computer vision models.
+
+
> [!NOTE] > The fridge objects dataset is available for use under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-consume-web-service.md
If you know the name of the deployed service, use the [az ml service show](/cli/
az ml service show -n <service-name> ```
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
From Azure Machine Learning studio, select __Endpoints__, __Real-time endpoints__, and then the endpoint name. In the details for the endpoint, the __REST endpoint__ field contains the scoring URI, and the __Swagger URI__ field contains the Swagger URI.
machine-learning How To Prepare Datasets For Automl Images V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images-v1.md
Last updated 10/13/2021
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
> * [v1](how-to-prepare-datasets-for-automl-images-v1.md) > * [v2 (current version)](../how-to-prepare-datasets-for-automl-images.md)
machine-learning How To Tune Hyperparameters V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-tune-hyperparameters-v1.md
Refer to train-hyperparameter-* notebooks in this folder:
## Next steps * [Track an experiment](../how-to-log-view-metrics.md)
-* [Deploy a trained model](../how-to-deploy-and-where.md)
+* [Deploy a trained model](../v1/how-to-deploy-and-where.md)
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-managed-identities.md
When creating a compute cluster with the [AmlComputeProvisioningConfiguration](/
az ml computetarget create amlcompute --name <cluster name> -w <workspace> -g <resource group> --vm-size <vm sku> --assign-identity '[system]' ```
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](../how-to-create-attach-compute-cluster.md#set-up-managed-identity).
machine-learning Tutorial Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models-v1.md
# Tutorial: Train an object detection model (preview) with AutoML and Python (v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
> * [v1](tutorial-auto-train-image-models-v1.md) > * [v2 (current version)](../tutorial-auto-train-image-models.md) - >[!IMPORTANT] > The features presented in this article are in preview. They should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that might change at any time.
managed-grafana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/high-availability.md
Last updated 7/27/2022 + # Azure Managed Grafana service reliability
-An Azure Managed Grafana instance in the Standard tier is hosted on a dedicated set of virtual machines (VMs). By default, two VMs are deployed to provide redundancy. Each VM runs a Grafana server. A network load balancer distributes browser requests amongst the Grafana servers. On the backend, the Grafana servers are connected to a shared database that stores the configuration and other persistent data for an entire Managed Grafana instance.
+An Azure Managed Grafana instance in the Standard tier is hosted on a dedicated set of virtual machines (VMs). By default, two VMs are deployed to provide redundancy. Each VM runs a Grafana server. A network load balancer distributes browser requests amongst the Grafana servers. On the backend, the Grafana servers are connected to a common database that stores the configuration and other persistent data for an entire Managed Grafana instance.
:::image type="content" source="media/service-reliability/diagram.png" alt-text="Diagram of the Managed Grafana Standard tier instance setup.":::
Microsoft is not providing or setting up disaster recovery for this service. In
## Zone redundancy
-Normally the network load balancer, VMs and database that underpin a Managed Grafana instance are located in a region based on system resource availability, and could end up being in a same Azure datacenter
+Normally, the network load balancer, VMs, and database that underpin a Managed Grafana instance are located in a region based on system resource availability, and could end up in the same Azure datacenter.
### With zone redundancy enabled
-When the zone redundancy option is enabled, VMs are spread across [availability zones](../availability-zones/az-overview.md#availability-zones) and other resources with availability zone enabled.
+When the zone redundancy option is enabled, VMs are spread across [availability zones](../availability-zones/az-overview.md#availability-zones). Other resources, such as the network load balancer and database, are also configured for availability zones.
In a zone-wide outage, no user action is required. An impacted Managed Grafana instance will rebalance itself to take advantage of the healthy zone automatically. The Managed Grafana service will attempt to heal the affected instances during zone recovery.
In a zone-wide outage, no user action is required. An impacted Managed Grafana i
### With zone redundancy disabled
-Zone redundancy is disabled in the Managed Grafana Standard tier by default. In this scenario, virtual machines are created as regional resources and should not be expected to survive zone-downs scenarios as they can go down at same time.
+Zone redundancy is disabled in the Managed Grafana Standard tier by default. In this scenario, virtual machines are created as single-region resources and shouldn't be expected to survive zone-down scenarios, because they can all go down at the same time.
+
+## Supported regions
+
+Zone redundancy support is enabled in the following regions:
++
+| Americas | Europe | Africa | Asia Pacific |
+||-|-|-|
+| East US | West Europe | | Australia East |
+| South Central US | | | |
++
+For a complete list of regions where Managed Grafana is available, see [Products available by region - Azure Managed Grafana](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=managed-grafana&regions=all).
## Next steps
managed-grafana How To Authentication Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-authentication-permissions.md
System-assigned managed identity is the default authentication method provided t
1. Under **Grafana administrator role**, the box **Include myself** is checked by default. Optionally select **Add** to grant the Grafana administrator role to more members.
+ :::image type="content" source="media/authentication/create-form-permission.png" alt-text="Screenshot of the Azure portal. Create workspace form. Permission.":::
##### With managed identity disabled
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
+
+ Title: Azure Managed Grafana limitations
+description: List of known limitations in Azure Managed Grafana
++ Last updated : 08/31/2022++++
+# Limitations of Azure Managed Grafana
+
+Azure Managed Grafana delivers native Grafana functionality with the highest possible fidelity. There are some differences between what it provides and what you can get by self-hosting Grafana. As a general rule, Azure Managed Grafana disables features and settings that might affect the security or reliability of the service and the individual Grafana instances it manages.
+
+## Current limitations
+
+Managed Grafana has the following known limitations:
+
+* All users must have accounts in an Azure Active Directory tenant. Microsoft accounts (also known as MSA) and third-party accounts aren't supported. As a workaround, use the default tenant of your Azure subscription with your Grafana instance and add other users as guests.
+
+* Installing, uninstalling, and upgrading plugins from the Grafana Catalog aren't allowed.
+
+* Data source query results are capped at 80 MB. To mitigate this constraint, reduce the size of the query, for example, by shortening the time duration.
+
+* Querying Azure Data Explorer may take a long time or return 50x errors. To resolve these issues, use a table format instead of a time series, shorten the time duration, or avoid having many panels query the same data cluster, because that can trigger throttling.
+
+* API key usage isn't included in the audit log.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Troubleshooting](./troubleshoot-managed-grafana.md)
notification-hubs Notification Hubs Push Notification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-overview.md
Push notifications are vital for consumer apps in increasing app engagement and
For more information on push notifications for a few popular platforms, see the following topics: -- [Android](https://developer.android.com/guide/topics/ui/notifiers/notifications.html)
+- [Android](https://developer.android.com/develop/ui/views/notifications)
- [iOS](https://developer.apple.com/notifications/) - [Windows](/previous-versions/windows/apps/hh779725(v=win.10))
purview Catalog Private Link End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-end-to-end.md
Using one of the deployment options explained further in this guide, you can dep
> [!NOTE] > For DNS configuration, you can also use your existing Azure Private DNS Zones from the dropdown list or add the required DNS records to your DNS Servers manually later. For more information, see [Configure DNS Name Resolution for private endpoints](./catalog-private-link-name-resolution.md)
-7. Go to the summary page, and select **Create** to create the portal private endpoint.
+7. Go to the summary page, and select **Create** to create the account private endpoint.
-8. Follow the same steps when you select **portal** for **Target sub-resource**.
+8. Repeat steps 2 through 7 to create the portal private endpoint. Make sure you select **portal** for **Target sub-resource**.
9. From your Microsoft Purview account, under **Settings** select **Networking**, and then select **Ingestion private endpoint connections**.
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-network.md
Here are some best practices:
:::image type="content" source="media/concept-best-practices/network-azure-runtime.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, the Azure runtime, and data sources."lightbox="media/concept-best-practices/network-azure-runtime.png":::
- 1. A manual or automatic scan is initiated from the Microsoft Purview data map through the Azure integration runtime.
+ 1. A manual or automatic scan is initiated from the Microsoft Purview Data Map through the Azure integration runtime.
2. The Azure integration runtime connects to the data source to extract metadata. 3. Metadata is queued in Microsoft Purview managed storage and stored in Azure Blob Storage.
- 4. Metadata is sent to the Microsoft Purview data map.
+ 4. Metadata is sent to the Microsoft Purview Data Map.
-- Scanning on-premises and VM-based data sources always requires using a self-hosted integration runtime. The Azure integration runtime is not supported for these data sources. The following steps show the communication flow at a high level when you're using a self-hosted integration runtime to scan a data source:
+- Scanning on-premises and VM-based data sources always requires using a self-hosted integration runtime. The Azure integration runtime isn't supported for these data sources. The following steps show the communication flow at a high level when you're using a self-hosted integration runtime to scan a data source. The first diagram shows a scenario where resources are within Azure or on a VM in Azure. The second diagram shows a scenario with on-premises resources. The steps between the two are the same from Microsoft Purview's perspective:
:::image type="content" source="media/concept-best-practices/network-self-hosted-runtime.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, a self-hosted runtime, and data sources."lightbox="media/concept-best-practices/network-self-hosted-runtime.png":::
+ :::image type="content" source="media/concept-best-practices/security-self-hosted-runtime-on-premises.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, an on-premises self-hosted runtime, and data sources in on-premises network."lightbox="media/concept-best-practices/security-self-hosted-runtime-on-premises.png":::
+ 1. A manual or automatic scan is triggered. Microsoft Purview connects to Azure Key Vault to retrieve the credential to access a data source.
- 2. The scan is initiated from the Microsoft Purview data map through a self-hosted integration runtime.
+ 2. The scan is initiated from the Microsoft Purview Data Map through a self-hosted integration runtime.
- 3. The self-hosted integration runtime service from the VM connects to the data source to extract metadata.
+ 3. The self-hosted integration runtime service from the VM or on-premises machine connects to the data source to extract metadata.
- 4. Metadata is processed in VM memory for the self-hosted integration runtime. Metadata is queued in Microsoft Purview managed storage and then stored in Azure Blob Storage.
+ 4. Metadata is processed in the machine's memory for the self-hosted integration runtime. Metadata is queued in Microsoft Purview managed storage and then stored in Azure Blob Storage. Actual data never leaves the boundary of your network.
- 5. Metadata is sent to the Microsoft Purview data map.
+ 5. Metadata is sent to the Microsoft Purview Data Map.
### Authentication options
When you're scanning a data source in Microsoft Purview, you need to provide a c
- **Data source type**. For example, if the data source is Azure SQL Database, you need to use SQL authentication with db_datareader access to each database. This can be a user-managed identity or a Microsoft Purview managed identity. Or it can be a service principal in Azure Active Directory added to SQL Database as db_datareader.
- If the data source is Azure Blob Storage, you can use a Microsoft Purview managed identity or a service principal in Azure Active Directory added as a Blob Storage Data Reader role on the Azure storage account. Or simply use the storage account's key.
If the data source is Azure Blob Storage, you can use a Microsoft Purview managed identity, or a service principal in Azure Active Directory that's assigned the Storage Blob Data Reader role on the Azure storage account. Or use the storage account's key.
- **Authentication type**. We recommend that you use a Microsoft Purview managed identity to scan Azure data sources when possible, to reduce administrative overhead. For any other authentication types, you need to [set up credentials for source authentication inside Microsoft Purview](manage-credentials.md):
When you're scanning a data source in Microsoft Purview, you need to provide a c
- **Runtime type that's used in the scan**. Currently, you can't use a Microsoft Purview managed identity with a self-hosted integration runtime.
-### Additional considerations
+### Other considerations
- If you choose to scan data sources using public endpoints, your self-hosted integration runtime VMs must have outbound access to data sources and Azure endpoints.
When you're scanning a data source in Microsoft Purview, you need to provide a c
## Option 2: Use private endpoints
-You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Microsoft Purview accounts. This option is useful if you need to do either of the following:
--- Scan Azure infrastructure as a service (IaaS) and PaaS data sources inside Azure virtual networks and on-premises data sources through a private connection.-- Allow users on a virtual network to securely access Microsoft Purview over [Azure Private Link](../private-link/private-link-overview.md). -
-Similar to other PaaS solutions, Microsoft Purview does not support deploying directly into a virtual network. So you can't use certain networking features with the offering's resources, such as network security groups, route tables, or other network-dependent appliances such as Azure Firewall. Instead, you can use private endpoints that can be enabled on your virtual network. You can then disable public internet access to securely connect to Microsoft Purview.
+Similar to other PaaS solutions, Microsoft Purview doesn't support deploying directly into a virtual network. So you can't use certain networking features with the offering's resources, such as network security groups, route tables, or other network-dependent appliances such as Azure Firewall. Instead, you can use private endpoints that can be enabled on your virtual network. You can then disable public internet access to securely connect to Microsoft Purview.
You must use private endpoints for your Microsoft Purview account if you have any of the following requirements:
You must use private endpoints for your Microsoft Purview account if you have an
### Current limitations -- Scanning multiple Azure sources by using the entire subscription or resource group through ingestion private endpoints and a self-hosted integration runtime is not supported when you're using private endpoints for ingestion. Instead, you can register and scan data sources individually.
+- Scanning multiple Azure sources by using the entire subscription or resource group through ingestion private endpoints and a self-hosted integration runtime isn't supported when you're using private endpoints for ingestion. Instead, you can register and scan data sources individually.
- For limitations related to Microsoft Purview private endpoints, see [Known limitations](catalog-private-link-troubleshoot.md#known-limitations).
The self-hosted integration runtime VMs can be deployed inside the same Azure vi
:::image type="content" source="media/concept-best-practices/network-pe-multi-vnet.png" alt-text="Screenshot that shows Microsoft Purview with private endpoints in a scenario of multiple virtual networks."lightbox="media/concept-best-practices/network-pe-multi-vnet.png":::
-You can optionally deploy an additional self-hosted integration runtime in the spoke virtual networks.
+You can optionally deploy another self-hosted integration runtime in the spoke virtual networks.
#### Multiple regions, multiple virtual networks
For performance and cost optimization, we highly recommended deploying one or mo
#### Name resolution for multiple Microsoft Purview accounts
-It is recommended to follow these recommendations, if your organization needs to deploy and maintain multiple Microsoft Purview accounts using private endpoints:
+Follow these recommendations if your organization needs to deploy and maintain multiple Microsoft Purview accounts by using private endpoints:
1. Deploy at least one _account_ private endpoint for each Microsoft Purview account. 2. Deploy at least one set of _ingestion_ private endpoints for each Microsoft Purview account.
It is recommended to follow these recommendations, if your organization needs to
:::image type="content" source="media/concept-best-practices/network-pe-dns.png" alt-text="Screenshot that shows how to handle private endpoints and DNS records for multiple Microsoft Purview accounts."lightbox="media/concept-best-practices/network-pe-dns.png":::
-This scenario also applies if multiple Microsoft Purview accounts are deployed across multiple subscriptions and multiple VNets that are connected through VNet peering. _Portal_ private endpoint mainly renders static assets related to the Microsoft Purview governance portal, thus, it is independent of Microsoft Purview account, therefore, only one _portal_ private endpoint is needed to visit all Microsoft Purview accounts in the Azure environment if VNets are connected.
+This scenario also applies if multiple Microsoft Purview accounts are deployed across multiple subscriptions and multiple VNets that are connected through VNet peering. The _portal_ private endpoint mainly renders static assets related to the Microsoft Purview governance portal, so it's independent of the Microsoft Purview account. Therefore, if the VNets are connected, only one _portal_ private endpoint is needed to visit all Microsoft Purview accounts in the Azure environment.
:::image type="content" source="media/concept-best-practices/network-pe-dns-multi-vnet.png" alt-text="Screenshot that shows how to handle private endpoints and DNS records for multiple Microsoft Purview accounts in multiple vnets."lightbox="media/concept-best-practices/network-pe-dns-multi-vnet.png":::
For scanning data sources across your on-premises and Azure networks, you may ne
- To simplify management, when possible, use Azure runtime and [Microsoft Purview Managed runtime](catalog-managed-vnet.md) to scan Azure data sources. -- The Self-hosted integration runtime service can communicate with Microsoft Purview through public or private network over port 443. For more information see, [self-hosted integration runtime networking requirements](manage-integration-runtimes.md#networking-requirements).
+- The self-hosted integration runtime service can communicate with Microsoft Purview through a public or private network over port 443. For more information, see [self-hosted integration runtime networking requirements](manage-integration-runtimes.md#networking-requirements).
-- One self-hosted integration runtime VM can be used to scan one or multiple data sources in Microsoft Purview, however, self-hosted integration runtime must be only registered for Microsoft Purview and cannot be used for Azure Data Factory or Azure Synapse at the same time.
+- One self-hosted integration runtime VM can be used to scan one or multiple data sources in Microsoft Purview. However, the self-hosted integration runtime must be registered only for Microsoft Purview and can't be used for Azure Data Factory or Azure Synapse at the same time.
-- You can register and use one or multiple self-hosted integration runtime in one Microsoft Purview account. It is recommended to place at least one self-hosted integration runtime VM in each region or on-premises network where your data sources reside.
+- You can register and use one or multiple self-hosted integration runtimes in one Microsoft Purview account. It's recommended to place at least one self-hosted integration runtime VM in each region or on-premises network where your data sources reside.
-- It is recommended to define a baseline for required capacity for each self-hosted integration runtime VM and scale the VM capacity based on demand.
+- It's recommended to define a baseline for required capacity for each self-hosted integration runtime VM and scale the VM capacity based on demand.
-- It is recommended to setup network connection between self-hosted integration runtime VMs and Microsoft Purview and its managed resources through private network, when possible.
+- It's recommended to set up the network connection between self-hosted integration runtime VMs and Microsoft Purview and its managed resources through a private network, when possible.
- Allow outbound connectivity to download.microsoft.com, if auto-update is enabled. -- The self-hosted integration runtime service does not require outbound internet connectivity, if self-hosted integration runtime VMs are deployed in an Azure VNet or in the on-premises network that is connected to Azure through an ExpressRoute or Site to Site VPN connection. In this case, the scan and metadata ingestion process can be done through private network.
+- The self-hosted integration runtime service doesn't require outbound internet connectivity, if self-hosted integration runtime VMs are deployed in an Azure VNet or in the on-premises network that is connected to Azure through an ExpressRoute or Site to Site VPN connection. In this case, the scan and metadata ingestion process can be done through private network.
- Self-hosted integration runtime can communicate Microsoft Purview and its managed resources directly or through [a proxy server](manage-integration-runtimes.md#proxy-server-considerations). Avoid using proxy settings if self-hosted integration runtime VM is inside an Azure VNet or connected through ExpressRoute or Site to Site VPN connection.
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
For example, DNSHOSTNAME="purviewprofisee.southcentralus.cloudapp.azure.com". Su
:::image type="content" alt-text="Screenshot of Profisee Managed Identity Azure Role Assignments." source="./media/how-to-deploy-profisee-purview/profisee-managed-identity-azure-role-assignments.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-managed-identity-azure-role-assignments.png":::
-1. [Create an application registration](/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal) that will act as the login identity once Profisee is installed. It needs to be a part of the Azure Active Directory that will be used to sign in to Profisee. Save the **Application (client) ID** for use later.
+1. [Create an application registration](/azure/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal) that will act as the login identity once Profisee is installed. It needs to be a part of the Azure Active Directory that will be used to sign in to Profisee. Save the **Application (client) ID** for use later.
- Set authentication to match the settings below:
  - Support ID tokens (used for implicit and hybrid flows)
  - Set the redirect URL to: https://\<your-deployment-url>/profisee/auth/signin-microsoft
    - Your deployment URL is the URL you provided to Profisee in step 1
-1. [Create a service principal](/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal) that Microsoft Purview will use to take some actions on itself during this Profisee deployment. To create a service principal, create an application like you did in the previous step, then [create an application secret](/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret). Save the **Object ID** for the application, and the **Value** of the secret you created for later use.
+1. [Create a service principal](/azure/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal) that Microsoft Purview will use to take some actions on itself during this Profisee deployment. To create a service principal, create an application like you did in the previous step, then [create an application secret](/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret). Save the **Object ID** for the application, and the **Value** of the secret you created for later use.
- Give this service principal (using the name or Object ID to locate it) **Data Curator** permissions on the root collection of your Microsoft Purview account.
1. Go to [https://github.com/Profisee/kubernetes](https://github.com/Profisee/kubernetes) and select Microsoft Purview [**Azure ARM**](https://github.com/profisee/kubernetes/blob/master/Azure-ARM/README.md#deploy-profisee-platform-on-to-aks-using-arm-template).
For example, DNSHOSTNAME="purviewprofisee.southcentralus.cloudapp.azure.com". Su
1. For your Profisee configuration, you can have your information stored in Key Vault or supply the details during deployment.
1. Choose your Profisee version, and provide your admin user account and license.
1. Select to configure using Microsoft Purview.
- 1. For the Application Registration Client ID, provide the [**application (client) ID**](/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in) for the [application registration you created earlier](#microsoft-purviewprofisee-integration-deployment-on-azure-kubernetes-service-aks).
+ 1. For the Application Registration Client ID, provide the [**application (client) ID**](/azure/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in) for the [application registration you created earlier](#microsoft-purviewprofisee-integration-deployment-on-azure-kubernetes-service-aks).
1. Select your Microsoft Purview account.
1. Add the **object ID** for the [service principal you created earlier](#microsoft-purviewprofisee-integration-deployment-on-azure-kubernetes-service-aks).
1. Add the value for the secret you created for that service principal. A quick way to validate these credentials is sketched after this list.
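Before kicking off the deployment, it can be worth confirming that the service principal credentials you saved are valid. Below is a minimal sketch using the MSAL Python library's client-credentials flow; the tenant ID, client ID, and secret are placeholders you must replace, and `https://purview.azure.net/.default` is the standard app-only scope for Microsoft Purview.

```python
import msal

# Placeholders -- replace with your tenant ID, the application (client) ID,
# and the secret value saved when you created the service principal.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<application-client-id>"
CLIENT_SECRET = "<client-secret-value>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# App-only token request against Microsoft Purview's default scope.
result = app.acquire_token_for_client(scopes=["https://purview.azure.net/.default"])

if "access_token" in result:
    print("Token acquired: the service principal credentials are valid.")
else:
    print("Token request failed:", result.get("error_description"))
```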
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 08/20/2022 Last updated : 09/09/2022
The following table provides a brief description of each built-in role. Click th
> | [Site Recovery Reader](#site-recovery-reader) | Lets you view Site Recovery status but not perform other management operations | dbaa88c4-0c30-4179-9fb3-46319faa6149 | > | [Support Request Contributor](#support-request-contributor) | Lets you create and manage Support requests | cfd33db0-3dd1-45e3-aa9d-cdbdf3b6f24e | > | [Tag Contributor](#tag-contributor) | Lets you manage tags on entities, without providing access to the entities themselves. | 4a9ae827-6dc8-4573-8ac7-8239d42aa03f |
+> | [Template Spec Contributor](#template-spec-contributor) | Allows full access to Template Spec operations at the assigned scope. | 1c9b6475-caf0-4164-b5a1-2142a7116f4b |
+> | [Template Spec Reader](#template-spec-reader) | Allows read access to Template Specs at the assigned scope. | 392ae280-861d-42bd-9ea5-08ee6d83b80e |
> | **Virtual desktop infrastructure** | | | > | [Desktop Virtualization Application Group Contributor](#desktop-virtualization-application-group-contributor) | Contributor of the Desktop Virtualization Application Group. | 86240b0e-9422-4c43-887b-b61143f32ba8 | > | [Desktop Virtualization Application Group Reader](#desktop-virtualization-application-group-reader) | Reader of the Desktop Virtualization Application Group. | aebf23d0-b568-4e86-b8f9-fe83a2c6ab55 |
Manage the web plans for websites. Does not allow you to assign roles in Azure R
> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | > | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/* | Create and manage server farms | > | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/hostingEnvironments/Join/Action | Joins an App Service Environment |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/autoscalesettings/* | |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Manage the web plans for websites. Does not allow you to assign roles in Azure R
"Microsoft.Resources/subscriptions/resourceGroups/read", "Microsoft.Support/*", "Microsoft.Web/serverFarms/*",
- "Microsoft.Web/hostingEnvironments/Join/Action"
+ "Microsoft.Web/hostingEnvironments/Join/Action",
+ "Microsoft.Insights/autoscalesettings/*"
], "notActions": [], "dataActions": [],
Microsoft Sentinel Reader [Learn more](../sentinel/roles.md)
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/templateSpecs/*/read | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/templateSpecs/*/read | Get or list template specs and template spec versions |
> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | > | **NotActions** | | > | [Microsoft.SecurityInsights](resource-provider-operations.md#microsoftsecurityinsights)/ConfidentialWatchlists/* | |
Can read all monitoring data and edit monitoring settings. See also [Get started
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [microsoft.monitor](resource-provider-operations.md#microsoftmonitor)/accounts/data/metrics/read | Read metrics data in any Monitoring Account |
+> | *none* | |
> | **NotDataActions** | | > | *none* | |
Can read all monitoring data and edit monitoring settings. See also [Get started
"Microsoft.AlertsManagement/migrateFromSmartDetection/*" ], "notActions": [],
- "dataActions": [
- "microsoft.monitor/accounts/data/metrics/read"
- ],
+ "dataActions": [],
"notDataActions": [] } ],
Can read all monitoring data (metrics, logs, etc.). See also [Get started with r
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [Microsoft.Monitor](resource-provider-operations.md#microsoftmonitor)/accounts/data/metrics/read | Read metrics data in any Monitoring Account |
+> | *none* | |
> | **NotDataActions** | | > | *none* | |
Can read all monitoring data (metrics, logs, etc.). See also [Get started with r
"Microsoft.Support/*" ], "notActions": [],
- "dataActions": [
- "Microsoft.Monitor/accounts/data/metrics/read"
- ],
+ "dataActions": [],
"notDataActions": [] } ],
Lets you manage tags on entities, without providing access to the entities thems
} ```
+### Template Spec Contributor
+
+Allows full access to Template Spec operations at the assigned scope.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/templateSpecs/* | Create and manage template specs and template spec versions |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows full access to Template Spec operations at the assigned scope.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/1c9b6475-caf0-4164-b5a1-2142a7116f4b",
+ "name": "1c9b6475-caf0-4164-b5a1-2142a7116f4b",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Resources/templateSpecs/*",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Template Spec Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Template Spec Reader
+
+Allows read access to Template Specs at the assigned scope.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/templateSpecs/*/read | Get or list template specs and template spec versions |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows read access to Template Specs at the assigned scope.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/392ae280-861d-42bd-9ea5-08ee6d83b80e",
+ "name": "392ae280-861d-42bd-9ea5-08ee6d83b80e",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Resources/templateSpecs/*/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Template Spec Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
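The difference between the two roles comes down to their wildcard actions: `Microsoft.Resources/templateSpecs/*` covers every template spec operation, while `Microsoft.Resources/templateSpecs/*/read` covers only reads beneath that path. The Python sketch below illustrates the wildcard-matching idea with `fnmatch`; it is a conceptual aid, not the Azure RBAC evaluation engine.

```python
import fnmatch

# Action patterns copied from the two role definitions above.
ROLE_ACTIONS = {
    "Template Spec Contributor": ["Microsoft.Resources/templateSpecs/*"],
    "Template Spec Reader": ["Microsoft.Resources/templateSpecs/*/read"],
}

def allows(role: str, operation: str) -> bool:
    """Return True if any action pattern of the role matches the operation."""
    return any(
        fnmatch.fnmatchcase(operation.lower(), pattern.lower())
        for pattern in ROLE_ACTIONS[role]
    )

# A version read is allowed by both roles; a version write only by the Contributor.
print(allows("Template Spec Reader", "Microsoft.Resources/templateSpecs/versions/read"))        # True
print(allows("Template Spec Contributor", "Microsoft.Resources/templateSpecs/versions/write"))  # True
print(allows("Template Spec Reader", "Microsoft.Resources/templateSpecs/versions/write"))       # False
```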
+ ## Virtual desktop infrastructure
Full access role for Digital Twins data-plane [Learn more](../digital-twins/conc
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/eventroutes/* | Read, delete, create, or update any Event Route |
> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/* | Read, create, update, or delete any Digital Twin | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/commands/* | Invoke any Command on a Digital Twin | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/relationships/* | Read, create, update, or delete any Digital Twin Relationship |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/eventroutes/* | Read, delete, create, or update any Event Route |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/jobs/* | |
> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/models/* | Read, create, update, or delete any Model | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/query/* | Query any Digital Twins Graph | > | **NotDataActions** | |
Full access role for Digital Twins data-plane [Learn more](../digital-twins/conc
"actions": [], "notActions": [], "dataActions": [
- "Microsoft.DigitalTwins/eventroutes/*",
"Microsoft.DigitalTwins/digitaltwins/*", "Microsoft.DigitalTwins/digitaltwins/commands/*", "Microsoft.DigitalTwins/digitaltwins/relationships/*",
+ "Microsoft.DigitalTwins/eventroutes/*",
+ "Microsoft.DigitalTwins/jobs/*",
"Microsoft.DigitalTwins/models/*", "Microsoft.DigitalTwins/query/*" ],
Read-only role for Digital Twins data-plane properties [Learn more](../digital-t
> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/read | Read any Digital Twin | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/relationships/read | Read any Digital Twin Relationship | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/eventroutes/read | Read any Event Route |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/jobs/import/read | Read any Bulk Import Job |
> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/models/read | Read any Model | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/query/action | Query any Digital Twins Graph | > | **NotDataActions** | |
Read-only role for Digital Twins data-plane properties [Learn more](../digital-t
"Microsoft.DigitalTwins/digitaltwins/read", "Microsoft.DigitalTwins/digitaltwins/relationships/read", "Microsoft.DigitalTwins/eventroutes/read",
+ "Microsoft.DigitalTwins/jobs/import/read",
"Microsoft.DigitalTwins/models/read", "Microsoft.DigitalTwins/query/action" ],
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 08/20/2022 Last updated : 09/09/2022
Azure service: core
> | Microsoft.Marketplace/privateStores/listSubscriptionsContext/action | List the subscription in private store context | > | Microsoft.Marketplace/privateStores/listNewPlansNotifications/action | List new plans notifications | > | Microsoft.Marketplace/privateStores/queryUserOffers/action | Fetch the approved offers from the offers ids and the user subscriptions in the payload |
+> | Microsoft.Marketplace/privateStores/queryUserRules/action | Fetch the approved rules for the user under the user subscriptions |
> | Microsoft.Marketplace/privateStores/anyExistingOffersInTheStore/action | Return true if there is an existing offer for at least one enabled collection |
+> | Microsoft.Marketplace/privateStores/queryInternalOfferIds/action | List all internal offers under a given Azure application and plans |
> | Microsoft.Marketplace/privateStores/adminRequestApprovals/read | Read all request approvals details, only admins | > | Microsoft.Marketplace/privateStores/adminRequestApprovals/write | Admin update the request with decision on the request | > | Microsoft.Marketplace/privateStores/collections/approveAllItems/action | Delete all specific approved items and set collection to allItemsApproved |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/RevertRelocation/action | Revert the relocation and revert back to the old volume. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/BreakFileLocks/action | Breaks file locks on a volume | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/MigrateBackups/action | Migrate Volume Backups to BackupVault. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/PopulateAvailabilityZone/action | Populates logical availability zone for a volume in a zone aware region and storage. |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/read | Reads a backup resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/write | Writes a backup resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/backups/delete | Deletes a backup resource. |
Azure service: [App Service Certificates](../app-service/configure-ssl-certifica
> | Microsoft.CertificateRegistration/certificateOrders/reissue/Action | Reissue an existing certificateorder | > | Microsoft.CertificateRegistration/certificateOrders/renew/Action | Renew an existing certificateorder | > | Microsoft.CertificateRegistration/certificateOrders/retrieveCertificateActions/Action | Retrieve the list of certificate actions |
+> | Microsoft.CertificateRegistration/certificateOrders/retrieveContactInfo/Action | Retrieve certificate order contact information |
> | Microsoft.CertificateRegistration/certificateOrders/retrieveEmailHistory/Action | Retrieve certificate email history | > | Microsoft.CertificateRegistration/certificateOrders/resendEmail/Action | Resend certificate email | > | Microsoft.CertificateRegistration/certificateOrders/verifyDomainOwnership/Action | Verify domain ownership |
Azure service: [App Service](../app-service/index.yml)
> | Microsoft.DomainRegistration/domains/Write | Add a new Domain or update an existing one | > | Microsoft.DomainRegistration/domains/Delete | Delete an existing domain. | > | Microsoft.DomainRegistration/domains/renew/Action | Renew an existing domain. |
+> | Microsoft.DomainRegistration/domains/retrieveContactInfo/Action | Retrieve contact info for existing domain |
> | Microsoft.DomainRegistration/domains/Read | Transfer out a domain to another registrar. | > | Microsoft.DomainRegistration/domains/domainownershipidentifiers/Read | List ownership identifiers | > | Microsoft.DomainRegistration/domains/domainownershipidentifiers/Read | Get ownership identifier |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/hostingEnvironments/Join/Action | Joins an App Service Environment | > | Microsoft.Web/hostingEnvironments/reboot/Action | Reboot all machines in an App Service Environment | > | Microsoft.Web/hostingEnvironments/upgrade/Action | Upgrades an App Service Environment |
+> | Microsoft.Web/hostingEnvironments/testUpgradeAvailableNotification/Action | Send test upgrade notification for an App Service Environment |
> | Microsoft.Web/hostingEnvironments/PrivateEndpointConnectionsApproval/action | Approve Private Endpoint Connections | > | microsoft.web/hostingenvironments/resume/action | Resume Hosting Environments. | > | microsoft.web/hostingenvironments/suspend/action | Suspend Hosting Environments. |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/sites/applySlotConfig/Action | Apply web app slot configuration from target slot to the current web app | > | Microsoft.Web/sites/resetSlotConfig/Action | Reset web app configuration | > | Microsoft.Web/sites/PrivateEndpointConnectionsApproval/action | Approve Private Endpoint Connections |
+> | microsoft.web/sites/deployWorkflowArtifacts/action | Create the artifacts in a Logic App. |
> | microsoft.web/sites/functions/action | Functions Web Apps. | > | microsoft.web/sites/listsyncfunctiontriggerstatus/action | List Sync Function Trigger Status. | > | microsoft.web/sites/networktrace/action | Network Trace Web Apps. |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/sites/slots/applySlotConfig/Action | Apply web app slot configuration from target slot to the current slot. | > | Microsoft.Web/sites/slots/resetSlotConfig/Action | Reset web app slot configuration | > | Microsoft.Web/sites/slots/Read | Get the properties of a Web App deployment slot |
+> | microsoft.web/sites/slots/deployWorkflowArtifacts/action | Create the artifacts in a deployment slot in a Logic App. |
> | microsoft.web/sites/slots/listsyncfunctiontriggerstatus/action | List Sync Function Trigger Status for deployment slot. | > | microsoft.web/sites/slots/newpassword/action | Newpassword Web Apps Slots. | > | microsoft.web/sites/slots/sync/action | Sync Web Apps Slots. |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/sites/slots/virtualnetworkconnections/write | Update Web Apps Slots Virtual Network Connections. | > | microsoft.web/sites/slots/virtualnetworkconnections/gateways/write | Update Web Apps Slots Virtual Network Connections Gateways. | > | microsoft.web/sites/slots/webjobs/read | Get Web Apps Slots WebJobs. |
+> | microsoft.web/sites/slots/workflows/read | List the workflows in a deployment slot in a Logic App. |
+> | microsoft.web/sites/slots/workflowsconfiguration/read | Get workflow app's configuration information by its ID in a deployment slot in a Logic App. |
> | microsoft.web/sites/snapshots/read | Get Web Apps Snapshots. | > | Microsoft.Web/sites/sourcecontrols/Read | Get Web App's source control configuration settings | > | Microsoft.Web/sites/sourcecontrols/Write | Update Web App's source control configuration settings |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/sites/virtualnetworkconnections/gateways/read | Get Web Apps Virtual Network Connections Gateways. | > | microsoft.web/sites/virtualnetworkconnections/gateways/write | Update Web Apps Virtual Network Connections Gateways. | > | microsoft.web/sites/webjobs/read | Get Web Apps WebJobs. |
+> | microsoft.web/sites/workflows/read | List the workflows in a Logic App. |
+> | microsoft.web/sites/workflowsconfiguration/read | Get workflow app's configuration information by its ID in a Logic App. |
> | microsoft.web/skus/read | Get SKUs. | > | microsoft.web/sourcecontrols/read | Get Source Controls. | > | microsoft.web/sourcecontrols/write | Update Source Controls. |
Azure service: [Container Instances](../container-instances/index.yml)
> | Microsoft.ContainerInstance/containerGroups/delete | Delete the specific container group. | > | Microsoft.ContainerInstance/containerGroups/restart/action | Restarts a specific container group. | > | Microsoft.ContainerInstance/containerGroups/stop/action | Stops a specific container group. Compute resources will be deallocated and billing will stop. |
+> | Microsoft.ContainerInstance/containerGroups/refreshDelegatedResourceIdentity/action | Refresh delegated resource identity for a specific container group. |
> | Microsoft.ContainerInstance/containerGroups/start/action | Starts a specific container group. | > | Microsoft.ContainerInstance/containerGroups/containers/exec/action | Exec into a specific container. | > | Microsoft.ContainerInstance/containerGroups/containers/attach/action | Attach to the output stream of a container. |
Azure service: [Container Registry](../container-registry/index.yml)
> | Microsoft.ContainerRegistry/registries/connectedRegistries/read | Gets the properties of the specified connected registry or lists all the connected registries for the specified container registry. | > | Microsoft.ContainerRegistry/registries/connectedRegistries/write | Creates or updates a connected registry for a container registry with the specified parameters. | > | Microsoft.ContainerRegistry/registries/connectedRegistries/delete | Deletes a connected registry from a container registry. |
+> | Microsoft.ContainerRegistry/registries/deleted/read | Gets the deleted artifacts in a container registry |
+> | Microsoft.ContainerRegistry/registries/deleted/restore/action | Restores deleted artifacts in a container registry |
> | Microsoft.ContainerRegistry/registries/eventGridFilters/read | Gets the properties of the specified event grid filter or lists all the event grid filters for the specified container registry. | > | Microsoft.ContainerRegistry/registries/eventGridFilters/write | Creates or updates an event grid filter for a container registry with the specified parameters. | > | Microsoft.ContainerRegistry/registries/eventGridFilters/delete | Deletes an event grid filter from a container registry. |
Azure service: [Azure Database for MySQL](../mysql/index.yml)
> | Microsoft.DBforMySQL/flexibleServers/read | Returns the list of servers or gets the properties for the specified server. | > | Microsoft.DBforMySQL/flexibleServers/write | Creates a server with the specified parameters or updates the properties or tags for the specified server. | > | Microsoft.DBforMySQL/flexibleServers/delete | Deletes an existing server. |
+> | Microsoft.DBforMySQL/flexibleServers/checkHaReplica/action | |
> | Microsoft.DBforMySQL/flexibleServers/cutoverMigration/action | Performs a migration cutover with the specified parameters. | > | Microsoft.DBforMySQL/flexibleServers/failover/action | Failovers a specific server. | > | Microsoft.DBforMySQL/flexibleServers/restart/action | Restarts a specific server. |
Azure service: [Azure Database for PostgreSQL](../postgresql/index.yml)
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
+> | Microsoft.DBforPostgreSQL/assessForMigration/action | Performs a migration assessment with the specified parameters |
> | Microsoft.DBforPostgreSQL/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection | > | Microsoft.DBforPostgreSQL/register/action | Register PostgreSQL Resource Provider | > | Microsoft.DBforPostgreSQL/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. |
Azure service: [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-m
> | Microsoft.SqlVirtualMachine/sqlVirtualMachineGroups/availabilityGroupListeners/write | Create a new or change properties of an existing SQL availability group listener | > | Microsoft.SqlVirtualMachine/sqlVirtualMachineGroups/availabilityGroupListeners/delete | Delete an existing availability group listener | > | Microsoft.SqlVirtualMachine/sqlVirtualMachineGroups/sqlVirtualMachines/read | List SQL virtual machines by a particular SQL virtual machine group |
+> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/startAssessment/action | |
> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/redeploy/action | Redeploy existing SQL virtual machine | > | Microsoft.SqlVirtualMachine/sqlVirtualMachines/read | Retrieve details of SQL virtual machine | > | Microsoft.SqlVirtualMachine/sqlVirtualMachines/write | Create a new or change properties of existing SQL virtual machine | > | Microsoft.SqlVirtualMachine/sqlVirtualMachines/delete | Delete existing SQL virtual machine |
-> | Microsoft.SqlVirtualMachine/sqlVirtualMachines/startAssessment/action | |
## Analytics
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/deployments/read | Reads deployments. | > | Microsoft.CognitiveServices/accounts/deployments/write | Writes deployments. | > | Microsoft.CognitiveServices/accounts/deployments/delete | Deletes deployments. |
+> | Microsoft.CognitiveServices/accounts/encryptionScopes/read | Reads encryption scopes. |
+> | Microsoft.CognitiveServices/accounts/encryptionScopes/write | Writes encryption scopes. |
+> | Microsoft.CognitiveServices/accounts/encryptionScopes/delete | Deletes encryption scopes. |
> | Microsoft.CognitiveServices/accounts/models/read | Reads available models. | > | Microsoft.CognitiveServices/accounts/networkSecurityPerimeterAssociationProxies/read | Reads a network security perimeter association. | > | Microsoft.CognitiveServices/accounts/networkSecurityPerimeterAssociationProxies/write | Writes a network security perimeter association. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/AudioContentCreation/Synthesis/SpeakMetadata/action | Query TTS synthesis metadata like F0, duration(used for intonation tuning). | > | Microsoft.CognitiveServices/accounts/AudioContentCreation/Synthesis/SpeakMetadataForPronunciation/action | Query TTS synthesis metadata for pronunciation. | > | Microsoft.CognitiveServices/accounts/AudioContentCreation/Synthesis/Speak/action | TTS synthesis API for all ACC voices. |
+> | Microsoft.CognitiveServices/accounts/AudioContentCreation/Synthesis/PredictSsmlTagsRealtime/action | Realtime API for predicting SSML tags. |
> | Microsoft.CognitiveServices/accounts/AudioContentCreation/TuneSsml/ConfigureSsmlFileReferenceFiles/action | Add/update/delete item(s) of SSML reference file plugin. | > | Microsoft.CognitiveServices/accounts/AudioContentCreation/TuneSsml/ApplySequenceTuneOnFiles/action | Apply several SSML tag tunes on one SSML file sequentially. | > | Microsoft.CognitiveServices/accounts/AudioContentCreation/TuneSsml/SequenceTune/action | Apply several SSML tag tunes on one SSML sequentially. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/Face/compare/action | Compare two faces from the source image and target image based on their similarity. | > | Microsoft.CognitiveServices/accounts/Face/detectliveness/multimodal/action | <p>Performs liveness detection on a target face in a sequence of infrared, color and/or depth images, and returns the liveness classification of the target face as either &lsquo;real face&rsquo;, &lsquo;spoof face&rsquo;, or &lsquo;uncertain&rsquo; if a classification cannot be made with the given inputs.</p> | > | Microsoft.CognitiveServices/accounts/Face/detectliveness/singlemodal/action | <p>Performs liveness detection on a target face in a sequence of images of the same modality (e.g. color or infrared), and returns the liveness classification of the target face as either &lsquo;real face&rsquo;, &lsquo;spoof face&rsquo;, or &lsquo;uncertain&rsquo; if a classification cannot be made with the given inputs.</p> |
+> | Microsoft.CognitiveServices/accounts/Face/detectlivenesswithverify/singlemodal/action | Detects liveness of a target face in a sequence of images of the same stream type (e.g. color) and then compares with VerifyImage to return confidence score for identity scenarios. |
> | Microsoft.CognitiveServices/accounts/Face/dynamicpersongroups/write | Creates a new dynamic person group with the specified dynamicPersonGroupId, name, and user-provided userData.<br>Update an existing dynamic person group's name or userData, or add or remove persons.<br>The properties remain unchanged if they are not in the request body. | > | Microsoft.CognitiveServices/accounts/Face/dynamicpersongroups/delete | Deletes an existing dynamic person group with the specified dynamicPersonGroupId. Deleting this dynamic person group only deletes the references to person data. To delete the actual person, see PersonDirectory Person - Delete. | > | Microsoft.CognitiveServices/accounts/Face/dynamicpersongroups/read | Retrieve the information of a dynamic person group, including its name and userData. This API returns dynamic person group information.<br>List all existing dynamic person groups by dynamicPersonGroupId along with name and userData. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/ImageSearch/search/action | Get relevant images for a given query. | > | Microsoft.CognitiveServices/accounts/ImageSearch/trending/action | Get currently trending images. | > | Microsoft.CognitiveServices/accounts/ImmersiveReader/getcontentmodelforreader/action | Creates an Immersive Reader session |
+> | Microsoft.CognitiveServices/accounts/Knowledge/entitymatch/action | Entity Match |
+> | Microsoft.CognitiveServices/accounts/Knowledge/entitymatchwithattributes/action | Entity Match with Attributes |
+> | Microsoft.CognitiveServices/accounts/Knowledge/annotation/dataverse/action | Dataverse search annotation |
> | Microsoft.CognitiveServices/accounts/Language/query-knowledgebases/action | Answer Knowledgebase. | > | Microsoft.CognitiveServices/accounts/Language/query-text/action | Answer Text. | > | Microsoft.CognitiveServices/accounts/Language/query-dataverse/action | Query Dataverse. |
Azure service: [Machine Learning](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/workspaces/computes/applicationaccess/action | Access compute resource in Machine Learning Services Workspace | > | Microsoft.MachineLearningServices/workspaces/computes/updateSchedules/action | Edit compute start/stop schedules | > | Microsoft.MachineLearningServices/workspaces/computes/applicationaccessuilinks/action | Enable compute instance UI links |
+> | Microsoft.MachineLearningServices/workspaces/computes/reimage/action | Reimages compute resource in Machine Learning Services Workspace |
> | Microsoft.MachineLearningServices/workspaces/connections/read | Gets the Machine Learning Services Workspace connection(s) | > | Microsoft.MachineLearningServices/workspaces/connections/write | Creates or updates a Machine Learning Services connection(s) | > | Microsoft.MachineLearningServices/workspaces/connections/delete | Deletes the Machine Learning Services connection(s) |
Azure service: [IoT security](../iot-fundamentals/iot-security-architecture.md)
> | Microsoft.IoTSecurity/defenderSettings/packageDownloads/action | Gets downloadable IoT Defender packages information | > | Microsoft.IoTSecurity/defenderSettings/downloadManagerActivation/action | Download manager activation file | > | Microsoft.IoTSecurity/locations/read | Gets location |
+> | Microsoft.IoTSecurity/locations/alertSuppressionRules/read | Gets alert suppression rule |
+> | Microsoft.IoTSecurity/locations/alertSuppressionRules/write | Creates alert suppression rule |
+> | Microsoft.IoTSecurity/locations/alertSuppressionRules/delete | Deletes alert suppression rule |
> | Microsoft.IoTSecurity/locations/deviceGroups/read | Gets device group | > | Microsoft.IoTSecurity/locations/deviceGroups/alerts/read | Gets IoT Alerts | > | Microsoft.IoTSecurity/locations/deviceGroups/alerts/write | Updates IoT Alert properties |
Azure service: [Security Center](../security-center/index.yml)
> | Microsoft.Security/assessmentMetadata/write | Create or update a security assessment metadata | > | Microsoft.Security/assessments/read | Get security assessments on your subscription | > | Microsoft.Security/assessments/write | Create or update security assessments on your subscription |
+> | Microsoft.Security/assessments/subAssessments/read | Get security sub assessments on your subscription |
+> | Microsoft.Security/assessments/subAssessments/write | Create or update security sub assessments on your subscription |
> | Microsoft.Security/automations/read | Gets the automations for the scope | > | Microsoft.Security/automations/write | Creates or updates the automation for the scope | > | Microsoft.Security/automations/delete | Deletes the automation for the scope |
Azure service: [Microsoft Sentinel](../sentinel/index.yml)
> | Microsoft.SecurityInsights/ConfidentialWatchlists/read | Gets Confidential Watchlists | > | Microsoft.SecurityInsights/ConfidentialWatchlists/write | Creates Confidential Watchlists | > | Microsoft.SecurityInsights/ConfidentialWatchlists/delete | Deletes Confidential Watchlists |
+> | Microsoft.SecurityInsights/ContentPackages/read | Read available Content Packages. |
+> | Microsoft.SecurityInsights/ContentPackages/write | Install or uninstall Content Packages. |
+> | Microsoft.SecurityInsights/ContentTemplates/read | Read installed Content Templates. |
> | Microsoft.SecurityInsights/dataConnectors/read | Gets the data connectors | > | Microsoft.SecurityInsights/dataConnectors/write | Updates a data connector | > | Microsoft.SecurityInsights/dataConnectors/delete | Deletes a data connector |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | microsoft.monitor/accounts/read | Read any Monitoring Account | > | microsoft.monitor/accounts/write | Create or Update any Monitoring Account | > | microsoft.monitor/accounts/delete | Delete any Monitoring Account |
-> | microsoft.monitor/accounts/metrics/read | Read Monitoring Account metrics |
-> | microsoft.monitor/accounts/metrics/namespaces/read | Read Monitoring Account metrics namespaces |
-> | microsoft.monitor/accounts/metrics/namespaces/metrics/read | Read Monitoring Account metrics namespaces metrics |
-> | microsoft.monitor/accounts/metrics/namespaces/metrics/write | Create or update Monitoring Account metrics namespaces metrics |
-> | microsoft.monitor/accounts/metrics/namespaces/metrics/delete | Delete Monitoring Account metrics namespaces metrics |
> | **DataAction** | **Description** |
-> | microsoft.monitor/accounts/data/logs/read | Read logs data in any Monitoring Account |
-> | microsoft.monitor/accounts/data/logs/write | Write logs data to any Monitoring Account |
> | microsoft.monitor/accounts/data/metrics/read | Read metrics data in any Monitoring Account | > | microsoft.monitor/accounts/data/metrics/write | Write metrics data to any Monitoring Account |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | microsoft.operationalinsights/querypacks/queries/write | Create or Update Query Pack Queries. | > | microsoft.operationalinsights/querypacks/queries/read | Get Query Pack Queries. | > | microsoft.operationalinsights/querypacks/queries/delete | Delete Query Pack Queries. |
-> | microsoft.operationalinsights/restoreLogs/write | Restore data from a table. |
-> | microsoft.operationalinsights/searchJobs/write | Run a search job. |
> | Microsoft.OperationalInsights/workspaces/write | Creates a new workspace or links to an existing workspace by providing the customer id from the existing workspace. | > | Microsoft.OperationalInsights/workspaces/read | Gets an existing workspace | > | Microsoft.OperationalInsights/workspaces/delete | Deletes a workspace. If the workspace was linked to an existing workspace at creation time then the workspace it was linked to is not deleted. |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AmlPipelineEvent/read | Read data from the AmlPipelineEvent table | > | Microsoft.OperationalInsights/workspaces/query/AmlRunEvent/read | Read data from the AmlRunEvent table | > | Microsoft.OperationalInsights/workspaces/query/AmlRunStatusChangedEvent/read | Read data from the AmlRunStatusChangedEvent table |
+> | Microsoft.OperationalInsights/workspaces/query/ANFFileAccess/read | Read data from the ANFFileAccess table |
> | Microsoft.OperationalInsights/workspaces/query/Anomalies/read | Read data from the Anomalies table | > | Microsoft.OperationalInsights/workspaces/query/ApiManagementGatewayLogs/read | Read data from the ApiManagementGatewayLogs table | > | Microsoft.OperationalInsights/workspaces/query/AppAvailabilityResults/read | Read data from the AppAvailabilityResults table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AppServiceHTTPLogs/read | Read data from the AppServiceHTTPLogs table | > | Microsoft.OperationalInsights/workspaces/query/AppServiceIPSecAuditLogs/read | Read data from the AppServiceIPSecAuditLogs table | > | Microsoft.OperationalInsights/workspaces/query/AppServicePlatformLogs/read | Read data from the AppServicePlatformLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/AppServiceServerlessSecurityPluginData/read | Read data from the AppServiceServerlessSecurityPluginData table |
> | Microsoft.OperationalInsights/workspaces/query/AppSystemEvents/read | Read data from the AppSystemEvents table | > | Microsoft.OperationalInsights/workspaces/query/AppTraces/read | Read data from the AppTraces table | > | Microsoft.OperationalInsights/workspaces/query/ASimDnsActivityLogs/read | Read data from the ASimDnsActivityLogs table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/DatabricksUnityCatalog/read | Read data from the DatabricksUnityCatalog table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksWebTerminal/read | Read data from the DatabricksWebTerminal table | > | Microsoft.OperationalInsights/workspaces/query/DatabricksWorkspace/read | Read data from the DatabricksWorkspace table |
+> | Microsoft.OperationalInsights/workspaces/query/DefenderIoTRawEvent/read | Read data from the DefenderIoTRawEvent table |
> | Microsoft.OperationalInsights/workspaces/query/dependencies/read | Read data from the dependencies table | > | Microsoft.OperationalInsights/workspaces/query/DeviceAppCrash/read | Read data from the DeviceAppCrash table | > | Microsoft.OperationalInsights/workspaces/query/DeviceAppLaunch/read | Read data from the DeviceAppLaunch table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/ExchangeOnlineAssessmentRecommendation/read | Read data from the ExchangeOnlineAssessmentRecommendation table | > | Microsoft.OperationalInsights/workspaces/query/FailedIngestion/read | Read data from the FailedIngestion table | > | Microsoft.OperationalInsights/workspaces/query/FunctionAppLogs/read | Read data from the FunctionAppLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/GCPAuditLogs/read | Read data from the GCPAuditLogs table |
> | Microsoft.OperationalInsights/workspaces/query/HDInsightAmbariClusterAlerts/read | Read data from the HDInsightAmbariClusterAlerts table | > | Microsoft.OperationalInsights/workspaces/query/HDInsightAmbariSystemMetrics/read | Read data from the HDInsightAmbariSystemMetrics table | > | Microsoft.OperationalInsights/workspaces/query/HDInsightGatewayAuditLogs/read | Read data from the HDInsightGatewayAuditLogs table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/WVDHostRegistrations/read | Read data from the WVDHostRegistrations table | > | Microsoft.OperationalInsights/workspaces/query/WVDManagement/read | Read data from the WVDManagement table | > | Microsoft.OperationalInsights/workspaces/query/WVDSessionHostManagement/read | Read data from the WVDSessionHostManagement table |
+> | microsoft.operationalinsights/workspaces/restoreLogs/write | Restore data from a table. |
> | microsoft.operationalinsights/workspaces/rules/read | Get all alert rules. | > | Microsoft.OperationalInsights/workspaces/savedSearches/read | Gets a saved search query | > | Microsoft.OperationalInsights/workspaces/savedSearches/write | Creates a saved search query |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | microsoft.operationalinsights/workspaces/scopedPrivateLinkProxies/write | Put Scoped Private Link Proxy. | > | microsoft.operationalinsights/workspaces/scopedPrivateLinkProxies/delete | Delete Scoped Private Link Proxy. | > | microsoft.operationalinsights/workspaces/search/read | Get search results. Deprecated. |
+> | microsoft.operationalinsights/workspaces/searchJobs/write | Run a search job. |
> | Microsoft.OperationalInsights/workspaces/sharedKeys/read | Retrieves the shared keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. | > | Microsoft.OperationalInsights/workspaces/storageinsightconfigs/write | Creates a new storage configuration. These configurations are used to pull data from a location in an existing storage account. | > | Microsoft.OperationalInsights/workspaces/storageinsightconfigs/read | Gets a storage configuration. |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Action | Description | > | | | > | Microsoft.RecoveryServices/register/action | Registers subscription for given Resource Provider |
-> | microsoft.recoveryservices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
-> | microsoft.recoveryservices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupPreValidateProtection/action | |
-> | microsoft.recoveryservices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
-> | microsoft.recoveryservices/Locations/backupValidateFeatures/action | Validate Features |
+> | Microsoft.RecoveryServices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
+> | Microsoft.RecoveryServices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupPreValidateProtection/action | |
+> | Microsoft.RecoveryServices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | Microsoft.RecoveryServices/Locations/backupValidateFeatures/action | Validate Features |
> | Microsoft.RecoveryServices/locations/allocateStamp/action | AllocateStamp is internal operation used by service | > | Microsoft.RecoveryServices/locations/checkNameAvailability/action | Check Resource Name Availability is an API to check if resource name is available | > | Microsoft.RecoveryServices/locations/allocatedStamp/read | GetAllocatedStamp is internal operation used by service |
-> | microsoft.recoveryservices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
-> | microsoft.recoveryservices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupProtectedItem/write | Create a backup Protected Item |
-> | microsoft.recoveryservices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | Microsoft.RecoveryServices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
+> | Microsoft.RecoveryServices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Locations/backupProtectedItem/write | Create a backup Protected Item |
+> | Microsoft.RecoveryServices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
> | Microsoft.RecoveryServices/locations/operationStatus/read | Gets Operation Status for a given Operation | > | Microsoft.RecoveryServices/operations/read | Operation returns the list of Operations for a Resource Provider |
-> | microsoft.recoveryservices/Vaults/backupJobsExport/action | Export Jobs |
-> | microsoft.recoveryservices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
-> | microsoft.recoveryservices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupJobsExport/action | Export Jobs |
+> | Microsoft.RecoveryServices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/write | Create Vault operation creates an Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/read | The Get Vault operation gets an object representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/delete | The Delete Vault operation deletes the specified Azure resource of type 'vault' |
-> | microsoft.recoveryservices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
-> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
-> | microsoft.recoveryservices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
-> | microsoft.recoveryservices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
-> | microsoft.recoveryservices/Vaults/backupJobs/cancel/action | Cancel the Job |
-> | microsoft.recoveryservices/Vaults/backupJobs/read | Returns all Job Objects |
-> | microsoft.recoveryservices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
-> | microsoft.recoveryservices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
-> | microsoft.recoveryservices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupPolicies/delete | Delete a Protection Policy |
-> | microsoft.recoveryservices/Vaults/backupPolicies/read | Returns all Protection Policies |
-> | microsoft.recoveryservices/Vaults/backupPolicies/write | Creates Protection Policy |
-> | microsoft.recoveryservices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
-> | microsoft.recoveryservices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
-> | microsoft.recoveryservices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
-> | microsoft.recoveryservices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
-> | microsoft.recoveryservices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
-> | microsoft.recoveryservices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services . |
-> | microsoft.recoveryservices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
-> | microsoft.recoveryservices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
+> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
+> | Microsoft.RecoveryServices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/cancel/action | Cancel the Job |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/read | Returns all Job Objects |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/delete | Delete a Protection Policy |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/read | Returns all Protection Policies |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/write | Creates Protection Policy |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | Microsoft.RecoveryServices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
+> | Microsoft.RecoveryServices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
+> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
+> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
+> | Microsoft.RecoveryServices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/certificates/write | The Update Resource Certificate operation updates the resource/vault credential certificate. |
> | Microsoft.RecoveryServices/Vaults/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
> | Microsoft.RecoveryServices/Vaults/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/monitoringAlerts/write | Resolves the alert. |
> | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/read | Gets the Recovery services vault notification configuration. |
> | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/write | Configures e-mail notifications to Recovery services vault. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
+> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/read | Azure Backup Diagnostics |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/write | Azure Backup Diagnostics |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/logDefinitions/read | Azure Backup Logs |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/read | Read any |
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/write | Create or Update any |
> | Microsoft.RecoveryServices/vaults/replicationvCenters/read | Read any vCenters |
-> | microsoft.recoveryservices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
+> | Microsoft.RecoveryServices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
> | Microsoft.RecoveryServices/vaults/usages/read | Read any Vault Usages |
> | Microsoft.RecoveryServices/Vaults/vaultTokens/read | The Vault Token operation can be used to get Vault Token for vault level backend operations. |
Azure service: [Azure Digital Twins](../digital-twins/index.yml)
> | Microsoft.DigitalTwins/eventroutes/read | Read any Event Route |
> | Microsoft.DigitalTwins/eventroutes/delete | Delete any Event Route |
> | Microsoft.DigitalTwins/eventroutes/write | Create or Update any Event Route |
+> | Microsoft.DigitalTwins/jobs/import/read | Read any Bulk Import Job |
+> | Microsoft.DigitalTwins/jobs/import/write | Create any Bulk Import Job |
+> | Microsoft.DigitalTwins/jobs/import/delete | Delete any Bulk Import Job |
> | Microsoft.DigitalTwins/models/read | Read any Model |
> | Microsoft.DigitalTwins/models/write | Create or Update any Model |
> | Microsoft.DigitalTwins/models/delete | Delete any Model |
search Search Data Sources Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-terms-of-use.md
Previously updated : 05/29/2021 Last updated : 09/07/2022
search Search Get Started Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-javascript.md
ms.devlang: javascript Previously updated : 07/08/2021 Last updated : 09/09/2022
> * [Python](search-get-started-python.md)
> * [REST](search-get-started-rest.md)

Use the [JavaScript/TypeScript SDK for Azure Cognitive Search](/javascript/api/overview/azure/search-documents-readme) to create a Node.js application in JavaScript that creates, loads, and queries a search index. This article demonstrates how to create the application step by step. Alternatively, you can [download the source code and data](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/quickstart/v11) and run the application from the command line.
Before you begin, have the following tools and
+ An Azure Cognitive Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use a free service for this quickstart.
-+ [Node.js](https://nodejs.org) and [NPM](https://www.npmjs.com)
++ [Node.js](https://nodejs.org) and [npm](https://www.npmjs.com)
+ [Visual Studio Code](https://code.visualstudio.com) or another IDE

## Set up your project
-Start by getting the endpoint and key for your search service. Then create a new project with NPM as outlined below.
+Start by getting the endpoint and key for your search service. Then create a new project with npm as outlined below.
<a name="get-service-info"></a>

### Copy a key and endpoint
-Calls to the service require a URL endpoint and an access key on every request. As a first step, find the API key and URL to add to your project. You will specify both values when creating the client in a later step.
+Calls to the service require a URL endpoint and an access key on every request. As a first step, find the API key and URL to add to your project. You'll specify both values when creating the client in a later step.
1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
-2. In **Settings** > **Keys**, get an admin key for full rights on the service, required if you are creating or deleting objects. There are two interchangeable primary and secondary keys. You can use either one.
+2. In **Settings** > **Keys**, get an admin key for full rights on the service, required if you're creating or deleting objects. There are two interchangeable primary and secondary keys. You can use either one.
![Get an HTTP endpoint and access key](media/search-get-started-rest/get-url-key.png "Get an HTTP endpoint and access key")

All requests require an api-key on every request sent to your service. Having a valid key establishes trust, on a per request basis, between the application sending the request and the service that handles it.
-### Create a new NPM project
+### Create a new npm project
Begin by opening VS Code and its [integrated terminal](https://code.visualstudio.com/docs/editor/integrated-terminal) or another terminal such as the Node.js command prompt.
Begin by opening VS Code and its [integrated terminal](https://code.visualstudio
   cd quickstart
   ```
-2. Initialize an empty project with NPM by running
+2. Initialize an empty project with npm by running the following command. To fully initialize the project, press Enter multiple times to accept the default values, except for the License, which you should set to "MIT".
   ```cmd
   npm init
   ```
- Accept the default values, except for the License, which you should set to "MIT".
-
-3. Install `@azure/search-documents`, the [JavaScript/TypeScript SDK for Azure Cognitive Search](/javascript/api/overview/azure/search-documents-readme).
+
+3. Install `@azure/search-documents`, the [JavaScript/TypeScript SDK for Azure Cognitive Search](/javascript/api/overview/azure/search-documents-readme).
   ```cmd
   npm install @azure/search-documents
   ```
-4. Install `dotenv`, which is used to import the environment variables such as our service name and API key.
+4. Install `dotenv`, which is used to import the environment variables such as your search service name and API key.
   ```cmd
   npm install dotenv
   ```
Begin by opening VS Code and its [integrated terminal](https://code.visualstudio
"author": "Your Name", "license": "MIT", "dependencies": {
- "@azure/search-documents": "^11.2.0",
- "dotenv": "^8.2.0"
+ "@azure/search-documents": "^11.3.0",
+ "dotenv": "^16.0.2"
  }
}
```
With that in place, we're ready to create an index.
Create a file **hotels_quickstart_index.json**. This file defines how Azure Cognitive Search works with the documents you'll be loading in the next step. Each field will be identified by a `name` and have a specified `type`. Each field also has a series of index attributes that specify whether Azure Cognitive Search can search, filter, sort, and facet upon the field. Most of the fields are simple data types, but some, like `AddressType`, are complex types that allow you to create rich data structures in your index. You can read more about [supported data types](/rest/api/searchservice/supported-data-types) and the index attributes described in [Create Index (REST)](/rest/api/searchservice/create-index).
-Add the following to **hotels_quickstart_index.json** or [download the file](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/quickstart/v11/hotels_quickstart_index.json).
+Add the following content to **hotels_quickstart_index.json** or [download the file](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/quickstart/v11/hotels_quickstart_index.json).
```json
{
Within the main function, we then create a `SearchIndexClient`, which is used to
const indexClient = new SearchIndexClient(endpoint, new AzureKeyCredential(apiKey));
```
-Next, we want to delete the index if it already exists. This is a common practice for test/demo code.
+Next, we want to delete the index if it already exists. This operation is a common practice for test/demo code.
We do this by defining a simple function that tries to delete the index.
If you [downloaded the source code](https://github.com/Azure-Samples/azure-searc
You should see a series of messages describing the actions being taken by the program.
-Open the **Overview** of your search service in the Azure portal. Select the **Indexes** tab. You should see something like the following:
+Open the **Overview** of your search service in the Azure portal. Select the **Indexes** tab. You should see something like the following example:
:::image type="content" source="media/search-get-started-javascript/create-index-no-data.png" alt-text="Screenshot of Azure portal, search service Overview, Indexes tab" border="false":::
The queries are written in a `sendQueries()` function that we'll call in the mai
await sendQueries(searchClient);
```
-Queries are sent using the `search()` method of `searchClient`. The first parameter is the search text and the second parameter is any additional search options.
+Queries are sent using the `search()` method of `searchClient`. The first parameter is the search text and the second parameter specifies search options.
The first query searches `*`, which is equivalent to searching everything and selects three of the fields in the index. It's a best practice to only `select` the fields you need because pulling back unnecessary data can add latency to your queries.
async function sendQueries(searchClient) {
}
```
-The remaining queries outlined below should also be added to the `sendQueries()` function. They are separated here for readability.
+The remaining queries outlined below should also be added to the `sendQueries()` function. They're separated here for readability.
In the next query, we specify the search term `"wifi"` and also include a filter to only return results where the state is equal to `'FL'`. Results are also ordered by the Hotel's `Rating`.
for await (const result of searchResults.results) {
}
```
-Next, the search is limited to a single searchable field using the `searchFields` parameter. This is a great option to make your query more efficient if you know you're only interested in matches in certain fields.
+Next, the search is limited to a single searchable field using the `searchFields` parameter. This approach is a great option to make your query more efficient if you know you're only interested in matches in certain fields.
```javascript console.log('Query #3 - Limit searchFields:');
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember the limit of three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Search Howto Index Changed Deleted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-changed-deleted-blobs.md
Previously updated : 01/19/2022 Last updated : 09/09/2022 # Change and delete detection using indexers for Azure Storage in Azure Cognitive Search
There are two ways to implement a soft delete strategy:
For this deletion detection approach, Cognitive Search depends on the [native blob soft delete](../storage/blobs/soft-delete-blob-overview.md) feature in Azure Blob Storage to determine whether blobs have transitioned to a soft deleted state. When blobs are detected in this state, a search indexer uses this information to remove the corresponding document from the index.

> [!IMPORTANT]
-> Support for native blob soft delete is in preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [REST API version 2020-06-30-Preview](./search-api-preview.md) provides this feature. There is currently no portal or .NET SDK support.
+> Support for native blob soft delete is in preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [REST API version 2020-06-30-Preview](./search-api-preview.md) provides this feature. There is currently no .NET SDK support.
### Requirements for native soft delete

+ [Enable soft delete for blobs](../storage/blobs/soft-delete-blob-enable.md).
+ Blobs must be in an Azure Blob Storage container. The Cognitive Search native blob soft delete policy is not supported for blobs in ADLS Gen2.
+ Document keys for the documents in your index must be mapped to either a blob property or blob metadata.
-+ You must use the preview REST API (`api-version=2020-06-30-Preview`) to configure support for soft delete.
++ You must use the preview REST API (`api-version=2020-06-30-Preview`) or the data source configuration in the Azure portal to configure support for soft delete.

### How to configure deletion detection using native soft delete

1. In Blob storage, when enabling soft delete, set the retention policy to a value that's much higher than your indexer interval schedule. This way, if there's an issue running the indexer or if you have a large number of documents to index, there's plenty of time for the indexer to eventually process the soft-deleted blobs. Azure Cognitive Search indexers will only delete a document from the index if it processes the blob while it's in a soft-deleted state.
-1. In Cognitive Search, set a native blob soft deletion detection policy on the data source. An example is shown below. Because this feature is in preview, you must use the preview REST API.
+1. In Cognitive Search, set a native blob soft-deletion detection policy on the data source. You can do this either from the Azure portal or by using the preview REST API (`api-version=2020-06-30-Preview`).
- ```http
- PUT https://[service name].search.windows.net/datasources/blob-datasource?api-version=2020-06-30-Preview
- Content-Type: application/json
- api-key: [admin key]
- {
- "name" : "blob-datasource",
- "type" : "azureblob",
- "credentials" : { "connectionString" : "<your storage connection string>" },
- "container" : { "name" : "my-container", "query" : null },
- "dataDeletionDetectionPolicy" : {
- "@odata.type" :"#Microsoft.Azure.Search.NativeBlobSoftDeleteDeletionDetectionPolicy"
- }
- }
- ```
+### [**Azure portal**](#tab/portal)
+
+1. [Sign in to Azure portal](https://portal.azure.com).
+
+1. On the Cognitive Search service Overview page, go to **New Data Source**, a visual editor for specifying a data source definition.
+
+ The following screenshot shows where you can find this feature in the portal.
+
+ :::image type="content" source="media/search-indexing-changed-deleted-blobs/new-data-source.png" alt-text="Screenshot of portal data source." border="true":::
+
+1. On the **New Data Source** form, fill out the required fields, select the **Track deletions** checkbox, and choose **Native blob soft delete**. Then select **Save** to enable the feature when the data source is created.
+
+ :::image type="content" source="media/search-indexing-changed-deleted-blobs/native-soft-delete.png" alt-text="Screenshot of portal data source native soft delete." border="true":::
++
+### [**REST**](#tab/rest-api)
+
+The following example uses the preview REST API to set a soft-deletion detection policy on the data source.
+
+```http
+PUT https://[service name].search.windows.net/datasources/blob-datasource?api-version=2020-06-30-Preview
+Content-Type: application/json
+api-key: [admin key]
+{
+ "name" : "blob-datasource",
+ "type" : "azureblob",
+ "credentials" : { "connectionString" : "<your storage connection string>" },
+ "container" : { "name" : "my-container", "query" : null },
+ "dataDeletionDetectionPolicy" : {
+ "@odata.type" :"#Microsoft.Azure.Search.NativeBlobSoftDeleteDeletionDetectionPolicy"
+ }
+}
+```
1. [Run the indexer](/rest/api/searchservice/run-indexer) or set the indexer to run [on a schedule](search-howto-schedule-indexers.md). When the indexer runs and processes a blob having a soft delete state, the corresponding search document will be removed from the index.
You can reverse a soft-delete if the original source file still physically exist
+ [Indexers in Azure Cognitive Search](search-indexer-overview.md) + [How to configure a blob indexer](search-howto-indexing-azure-blob-storage.md)
-+ [Blob indexing overview](search-blob-storage-integration.md)
++ [Blob indexing overview](search-blob-storage-integration.md)
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
Previously updated : 07/12/2022 Last updated : 09/08/2022 # Index data from Azure Cosmos DB using the Gremlin API
api-key: [Search service admin key]
}
```
+Even if you enable the deletion detection policy, deleting complex (`Edm.ComplexType`) fields from the index isn't supported. This policy requires that the 'active' column in the Gremlin database be of type integer, string, or boolean.
++

<a name="MappingGraphData"></a>

## Mapping graph data to fields in a search index
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
Previously updated : 08/25/2022 Last updated : 09/08/2022 # Index data from SharePoint document libraries
api-key: [admin key]
An indexer connects a data source with a target search index and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create the indexer.
-During this section you'll be asked to sign in with your organization credentials that have access to the SharePoint site. If possible, we recommend creating a new organizational user account and giving that new user the exact permissions that you want the indexer to have.
+During this section you'll be asked to sign in with your organization credentials that have access to the SharePoint site. If possible, we recommend creating a new organizational user account and giving that new user the exact permissions that you want the indexer to have.
There are a few steps to creating the indexer:
There are a few steps to creating the indexer:
"mappingFunction" : { "name" : "base64Encode" }
- }
+ }
+ ]
}
```
There are a few steps to creating the indexer:
"maxFailedItemsPerBatch": null, "base64EncodeKeys": null, "configuration:" {
- "indexedFileNameExtensions" : null,
- "excludedFileNameExtensions" : null,
- "dataToExtract": "contentAndMetadata"
+ "dataToExtract": "contentAndMetadata",
+ "indexedFileNameExtensions" : ".pdf, .docx",
+ "excludedFileNameExtensions" : ".png, .jpg"
  }
},
"schedule" : { },
There are a few steps to creating the indexer:
"mappingFunction" : { "name" : "base64Encode" }
- }
+ }
+ ]
}
```
You can also continue indexing if errors happen at any point of processing, eith
## See also + [Indexers in Azure Cognitive Search](search-indexer-overview.md)
-+ [Content metadata properties used in Azure Cognitive Search](search-blob-metadata-properties.md)
++ [Content metadata properties used in Azure Cognitive Search](search-blob-metadata-properties.md)
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Built-in roles include generally available and preview roles. If these roles are
| - | - |
| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default.</br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. |
| [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. </br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. |
-| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: service name, resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>This role doesn't allow access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
+| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: service name, resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>This role doesn't allow access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). </br></br> (Preview) When you enable the RBAC preview for the data plane, the Reader role has read access across the entire service. This allows you to read search metrics, content metrics (storage consumed, number of objects), and the definitions of data plane resources (indexes, indexers, etc.). The Reader role still won't have access to read API keys or read content within indexes. |
| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role does not give you access to query search indexes or index documents. This role is for search service administrators who need to manage the search service and its objects, but without the ability to view or access object data. </br></br>Like Contributor, members of this role can't make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full data plane access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. |
| [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. |
sentinel Reference Systemconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md
PAHI_FULL = <True/False>
AGR_AGRS_FULL = <True/False>
USRSTAMP_FULL = <True/False>
USRSTAMP_INCREMENTAL = <True/False>
+SNCSYSACL_FULL = <True/False> (Preview)
+USRACL_FULL = <True/False> (Preview)
```

## Next steps
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
For best results, refer to these tables using the name in the **Sentinel functio
| AGR_DEFINE | Role definition | SAP_AGR_DEFINE |
| AGR_AGRS | Roles in composite roles | SAP_AGR_AGRS |
| PAHI | History of the system, database, and SAP parameters | SAP_PAHI |
+| SNCSYSACL (PREVIEW)| SNC Access Control List (ACL): Systems | SAP_SNCSYSACL |
+| USRACL (PREVIEW)| SNC Access Control List (ACL): User | SAP_USRACL |
## Next steps
service-health Resource Health Checks Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-checks-resource-types.md
Below is a complete list of all the checks executed through resource health by r
## Microsoft.classiccompute/virtualmachines

|Executed Checks|
||
-|<ul><li>Is the host server up and running?</li><li>Has the host OS booting completed?</li><li>Is the virtual machine container provisioned and powered up?</li><li>Is there network connectivity between the host and the storage account?</li><li>Has the booting of the guest OS completed?</li><li>Is there ongoing planned maintenance?</li><li>Is the host hardware degraded and predicted to fail soon?</li></ul>|
+|<ul><li>Is the server hosting this virtual machine up and running?</li><li>Is the virtual machine container provisioned and powered up?</li><li>Is there network connectivity between the host and the storage account?</li><li>Is there ongoing planned maintenance?</li><li>Are there heartbeats between the guest and host agent *(if the Guest extension is installed)*?</li></ul>|
## Microsoft.classiccompute/domainnames

|Executed Checks|
spring-apps Concepts For Java Memory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concepts-for-java-memory-management.md
+
+ Title: Java memory management
+
+description: Introduces concepts for Java memory management to help you understand Java applications in Azure Spring Apps.
++++ Last updated : 07/15/2022+++
+# Java memory management
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article describes various concepts related to Java memory management to help you understand the behavior of Java applications hosted in Azure Spring Apps.
+
+## Java memory model
+
+A Java application's memory has several parts, and there are different ways to divide the parts. This article discusses Java memory as divided into heap memory, non-heap memory, and direct memory.
+
+### Heap memory
+
+Heap memory stores all class instances and arrays. Each Java virtual machine (JVM) has only one heap area, which is shared among threads.
+
+Spring Boot Actuator can observe the value of heap memory. Spring Boot Actuator takes the heap value as part of `jvm.memory.used/committed/max`. For more information, see the [jvm.memory.used/committed/max](tools-to-troubleshoot-memory-issues.md#jvmmemoryusedcommittedmax) section in [Tools to troubleshoot memory issues](tools-to-troubleshoot-memory-issues.md).
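
Outside of Spring Boot Actuator, you can read the same heap figures locally through the JVM's standard management interface, which is also what monitoring libraries typically build on. The following is a minimal sketch; the class name is illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapUsageProbe {
    public static void main(String[] args) {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
        // The same used/committed/max figures that jvm.memory.* reports for heap.
        System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
    }
}
```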
+
+Heap memory is divided into *young generation* and *old generation*. These terms are described in the following list, along with related terms.
+
+- *Young generation*: all new objects are allocated and aged in young generation.
+
+ - *Eden space*: new objects are allocated in Eden space.
+  - *Survivor space*: objects will be moved from Eden to survivor space after surviving one garbage collection cycle. Survivor space is divided into two parts: S0 and S1.
+
+- *Old generation*: also called *tenured space*. Objects that have remained in the survivor spaces for a long time will be moved to old generation.
+
+Before Java 8, another section called *permanent generation* was also part of the heap. Starting with Java 8, permanent generation was replaced by metaspace in non-heap memory.
+
+### Non-heap memory
+
+Non-heap memory is divided into the following parts:
+
+- The part of non-heap memory that replaced the permanent generation (or *permGen*) starting with Java 8. Spring Boot Actuator observes this section and takes it as part of `jvm.memory.used/committed/max`. In other words, `jvm.memory.used/committed/max` is the sum of heap memory and the former permGen part of non-heap memory. The former permanent generation is composed of the following parts:
+
+ - *Metaspace*, which stores the class definitions loaded by class loaders.
+ - *Compressed class space*, which is for compressed class pointers.
+ - *Code cache*, which stores native code compiled by JIT.
+
+- Other memory such as the thread stack, which isn't observed by Spring Boot Actuator.
+
+### Direct memory
+
+Direct memory is native memory allocated by `java.nio.DirectByteBuffer`, which is used in third party libraries like nio and gzip.
+
+Spring Boot Actuator doesn't observe the value of direct memory.
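
A minimal sketch of a direct allocation (the class name and buffer size are arbitrary):

```java
import java.nio.ByteBuffer;

public class DirectMemoryDemo {
    public static void main(String[] args) {
        // 64 MB allocated outside the heap: invisible to heap metrics such as
        // jvm.memory.used, but counted against -XX:MaxDirectMemorySize.
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        System.out.println("direct=" + direct.isDirect() + " capacity=" + direct.capacity());
    }
}
```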
+
+The following diagram summarizes the Java memory model described in the previous section.
++
+## Java garbage collection
+
+There are three terms for Java garbage collection (GC): "minor GC", "major GC", and "full GC". These terms aren't clearly defined in the JVM specification. Here, we consider "major GC" and "full GC" to be equivalent.
+
+Minor GC runs when Eden space is full. It removes all dead objects in the young generation and moves live objects from Eden space to one survivor space (S0), or from one survivor space to the other (S0 to S1).
+
+Full GC or major GC does garbage collection in the entire heap. Full GC can also collect parts like metaspace and direct memory, which can be cleaned only by full GC.
+
+The maximum heap size influences the frequency of minor GC and full GC. The maximum metaspace and maximum direct memory size influence full GC.
+
+When you set the maximum heap size to a lower value, garbage collections occur more frequently, which slows the app a little but keeps memory usage within tighter limits. When you set the maximum heap size to a higher value, garbage collections occur less frequently, which may create more out-of-memory (OOM) risk. For more information, see the [Types of out-of-memory issues](how-to-fix-app-restart-issues-caused-by-out-of-memory.md#types-of-out-of-memory-issues) section of [App restart issues caused by out-of-memory issues](how-to-fix-app-restart-issues-caused-by-out-of-memory.md).
+
+Metaspace and direct memory can be collected only by full GC. When metaspace or direct memory is full, full GC will occur.
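
You can observe this behavior locally by churning short-lived allocations and watching the collector respond. A small sketch, with an arbitrary class name and sizes:

```java
public class GcChurn {
    public static void main(String[] args) {
        // Short-lived arrays fill Eden repeatedly, so minor GCs run often.
        for (int i = 0; i < 200_000; i++) {
            byte[] shortLived = new byte[8 * 1024];
        }
        System.out.println("done");
    }
}
```

Running it with a small heap such as `-Xmx64m` plus the standard `-verbose:gc` flag prints a line per collection; most collections will be minor GCs because the arrays die young.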
+
+## Java memory configurations
+
+The following sections describe important aspects of Java memory configuration.
+
+### Java containerization
+
+Applications in Azure Spring Apps run in container environments. For more information, see [Containerize your Java applications](/azure/developer/java/containers/overview?toc=/azure/spring-cloud/toc.json&bc=/azure/spring-cloud/breadcrumb/toc.json).
+
+### Important JVM options
+
+You can configure the maximum size of each part of memory by using JVM options. You can set JVM options by using Azure CLI commands or through the Azure portal. For more information, see the [Modify configurations to fix problems](tools-to-troubleshoot-memory-issues.md#modify-configurations-to-fix-problems) section of [Tools to troubleshoot memory issues](tools-to-troubleshoot-memory-issues.md).
+
+The following list describes the JVM options; a small verification sketch for the heap settings follows the list:
+
+- Heap size configuration
+
+ - `-Xms` sets the initial heap size by absolute value.
+ - `-Xmx` sets the maximum heap size by absolute value.
+  - `-XX:InitialRAMPercentage` sets the initial heap size as a percentage of the app memory size.
+  - `-XX:MaxRAMPercentage` sets the maximum heap size as a percentage of the app memory size.
+
+- Direct memory size configuration
+
+ - `-XX:MaxDirectMemorySize` sets the maximum direct memory size by absolute value. For more information, see [MaxDirectMemorySize](https://docs.oracle.com/en/java/javase/11/tools/java.html#GUID-3B1CE181-CD30-4178-9602-230B800D4FAE__GUID-2E02B495-5C36-4C93-8597-0020EFDC9A9C) in the Oracle documentation.
+
+- Metaspace size configuration
+
+ - `-XX:MaxMetaspaceSize` sets the maximum metaspace size by absolute value.
+
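As a quick way to verify the heap options above, the following minimal sketch (the class name is illustrative) prints the limit the JVM actually adopted:

```java
public class ShowHeapLimit {
    public static void main(String[] args) {
        // Reflects -Xmx or -XX:MaxRAMPercentage; launching with -Xmx512m
        // should print a value close to 512 MB.
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxHeapBytes / (1024 * 1024));
    }
}
```
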
+### Default maximum memory size
+
+The following sections describe how default maximum memory sizes are set.
+
+#### Default maximum heap size
+
+Azure Spring Apps sets the default maximum heap memory size to about 50%-80% of app memory for Java apps. Specifically, Azure Spring Apps uses the following settings:
+
+- If the app memory < 1 GB, the default maximum heap size will be 50% of app memory.
+- If 1 GB <= the app memory < 2 GB, the default maximum heap size will be 60% of app memory.
+- If 2 GB <= the app memory < 3 GB, the default maximum heap size will be 70% of app memory.
+- If 3 GB <= the app memory, the default maximum heap size will be 80% of app memory.
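
For example, an app configured with 2 GB of memory falls in the 2 GB to 3 GB band, so its default maximum heap size is 70% of 2048 MB, or about 1434 MB, which matches the 2-GB memory layout sample later in this article.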
+
+#### Default maximum direct memory size
+
+When the maximum direct memory size isn't set using JVM options, the JVM automatically sets the maximum direct memory size to the value returned by [Runtime.getRuntime().maxMemory()](https://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#maxMemory--). This value is approximately equal to the maximum heap memory size. For more information, see the [JDK 8 VM.java file](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/a71d26266469/src/share/classes/sun/misc/VM.java#l282).
+
+### Memory usage layout
+
+Heap size is influenced by your throughput. Basically, when configuring, you can keep the default maximum heap size, which leaves reasonable memory for other parts.
+
+The metaspace size depends on the complexity of your code, such as the number of classes.
+
+The direct memory size depends on your throughput and your use of third party libraries like nio and gzip.
+
+The following list describes a typical memory layout sample for 2-GB apps. You can refer to this list to configure your memory size settings.
+
+- Total Memory (2048M)
+- Heap memory: Xmx is 1433.6M (70% of total memory). The reference value of daily memory usage is 1200M.
+ - Young generation
+ - Survivor space (S0, S1)
+ - Eden space
+ - Old generation
+- Non-heap memory
+ - Observed part (observed by Spring Boot Actuator)
+ - Metaspace: the daily usage reference value is 50M-256M
+ - Code cache
+ - Compressed class space
+ - Not observed part (not observed by Spring Boot Actuator): the daily usage reference value is 150M-250M.
+ - Thread stack
+ - GC, internal symbol and other
+- Direct memory: the daily usage reference value is 10M-200M.
+
+The following diagram shows the same information. Numbers in grey are the reference values of daily memory usage.
++
+Overall, when configuring maximum memory sizes, you should consider the usage of each part in memory, and the sum of all maximum sizes shouldn't exceed total available memory.
+
+## Java OOM
+
+OOM means the application is out of memory. There are two different concepts: container OOM and JVM OOM. For more information, see [App restart issues caused by out-of-memory issues](how-to-fix-app-restart-issues-caused-by-out-of-memory.md).
+
+## See also
+
+- [App restart issues caused by out-of-memory issues](how-to-fix-app-restart-issues-caused-by-out-of-memory.md)
+- [Tools to troubleshoot memory issues](tools-to-troubleshoot-memory-issues.md)
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
Instead of manually configuring your Spring Boot applications, you can automatically bind select Azure services to your applications by using Azure Spring Apps. This article demonstrates how to bind your application to an Azure Cosmos DB database.
-Prerequisites:
+## Prerequisites
-* A deployed Azure Spring Apps instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started.
-* An Azure Cosmos DB account with a minimum permission level of Contributor.
+* A deployed Azure Spring Apps instance.
+* An Azure Cosmos DB account with a minimum permission level of Contributor.
+* The Azure Spring Apps extension for the Azure CLI.
+
+If you don't have a deployed Azure Spring Apps instance, follow the steps in the [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
## Prepare your Java project
Prerequisites:
```xml
<dependency>
    <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-boot-starter-cosmos</artifactId>
- <version>3.6.0</version>
+ <artifactId>spring-cloud-azure-starter-data-cosmos</artifactId>
+ <version>4.3.0</version>
</dependency>
```
Prerequisites:
```xml
<dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-storage-spring-boot-starter</artifactId>
- <version>2.0.5</version>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-storage-blob</artifactId>
+ <version>4.3.0</version>
</dependency>
```
Prerequisites:
## Bind your app to the Azure Cosmos DB

#### [Service Binding](#tab/Service-Binding)

Azure Cosmos DB has five different API types that support binding. The following procedure shows how to use them:

1. Create an Azure Cosmos DB database. Refer to the quickstart on [creating a database](../cosmos-db/create-cosmosdb-resources-portal.md) for help.
Azure Cosmos DB has five different API types that support binding. The following
1. To ensure the service is bound correctly, select the binding name and verify its details. The `property` field should be similar to this example:

   ```properties
- azure.cosmosdb.uri=https://<some account>.documents.azure.com:443
- azure.cosmosdb.key=abc******
- azure.cosmosdb.database=testdb
+ spring.cloud.azure.cosmos.endpoint=https://<some account>.documents.azure.com:443
+ spring.cloud.azure.cosmos.key=abc******
+ spring.cloud.azure.cosmos.database=testdb
```

#### [Terraform](#tab/Terraform)
-The following Terraform script shows how to set up an Azure Spring Apps app with Azure Cosmos DB MongoDB API.
+
+The following Terraform script shows how to set up an Azure Spring Apps app with an Azure Cosmos DB account.
```terraform
provider "azurerm" {
resource "azurerm_cosmosdb_account" "cosmosdb" {
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  offer_type          = "Standard"
- kind = "MongoDB"
+ kind = "GlobalDocumentDB"
  consistency_policy {
    consistency_level = "Session"
resource "azurerm_cosmosdb_account" "cosmosdb" {
  }
}
-resource "azurerm_cosmosdb_mongo_database" "cosmosdb" {
+resource "azurerm_cosmosdb_sql_database" "cosmosdb" {
  name                = "cosmos-${var.application_name}-001"
  resource_group_name = azurerm_cosmosdb_account.cosmosdb.resource_group_name
  account_name        = azurerm_cosmosdb_account.cosmosdb.name
resource "azurerm_spring_cloud_app" "example" {
resource "azurerm_spring_cloud_java_deployment" "example" {
  name                = "default"
  spring_cloud_app_id = azurerm_spring_cloud_app.example.id
- cpu = 2
- memory_in_gb = 4
+ quota {
+ cpu = "2"
+ memory = "4Gi"
+ }
  instance_count  = 2
  jvm_options     = "-XX:+PrintGC"
  runtime_version = "Java_11"

  environment_variables = {
- "azure.cosmosdb.uri" : azurerm_cosmosdb_account.cosmosdb.connection_strings[0]
- "azure.cosmosdb.database" : azurerm_cosmosdb_mongo_database.cosmosdb.name
+ "spring.cloud.azure.cosmos.endpoint" : azurerm_cosmosdb_account.cosmosdb.endpoint
+ "spring.cloud.azure.cosmos.key" : azurerm_cosmosdb_account.cosmosdb.primary_key
+ "spring.cloud.azure.cosmos.database" : azurerm_cosmosdb_sql_database.cosmosdb.name
  }
}
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
With Azure Spring Apps, you can bind select Azure services to your applications
* An Azure Database for MySQL account
* Azure CLI
-If you don't have a deployed Azure Spring Apps instance, follow the instructions in [Quickstart: Launch an application in Azure Spring Apps by using the Azure portal](./quickstart.md) to deploy your first Spring app.
+If you don't have a deployed Azure Spring Apps instance, follow the instructions in [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md) to deploy your first Spring app.
## Prepare your Java project
If you don't have a deployed Azure Spring Apps instance, follow the instructions
## Bind your app to the Azure Database for MySQL instance

#### [Service Binding](#tab/Service-Binding)

1. Note the admin username and password of your Azure Database for MySQL account.

1. Connect to the server, create a database named **testdb** from a MySQL client, and then create a new non-admin account.
resource "azurerm_spring_cloud_app" "example" {
resource "azurerm_spring_cloud_java_deployment" "example" {
  name                = "default"
  spring_cloud_app_id = azurerm_spring_cloud_app.example.id
- cpu = 2
- memory_in_gb = 4
+ quota {
+ cpu = "2"
+ memory = "4Gi"
+ }
  instance_count  = 2
  jvm_options     = "-XX:+PrintGC"
  runtime_version = "Java_11"
spring-apps How To Bind Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-redis.md
Instead of manually configuring your Spring Boot applications, you can automatic
* An Azure Cache for Redis service instance
* The Azure Spring Apps extension for the Azure CLI
-If you don't have a deployed Azure Spring Apps instance, follow the steps in the [quickstart on deploying an Azure Spring Apps app](./quickstart.md).
+If you don't have a deployed Azure Spring Apps instance, follow the steps in the [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
## Prepare your Java project
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
## Bind your app to the Azure Cache for Redis

#### [Service Binding](#tab/Service-Binding)

1. Go to your Azure Spring Apps service page in the Azure portal. Go to **Application Dashboard** and select the application to bind to Azure Cache for Redis. This application is the same one you updated or deployed in the previous step.

1. Select **Service binding** and select **Create service binding**. Fill out the form, being sure to select the **Binding type** value **Azure Cache for Redis**, your Azure Cache for Redis server, and the **Primary** key option.
resource "azurerm_spring_cloud_app" "example" {
resource "azurerm_spring_cloud_java_deployment" "example" {
  name                = "default"
  spring_cloud_app_id = azurerm_spring_cloud_app.example.id
- cpu = 2
- memory_in_gb = 4
+ quota {
+ cpu = "2"
+ memory = "4Gi"
+ }
  instance_count  = 2
  jvm_options     = "-XX:+PrintGC"
  runtime_version = "Java_11"
spring-apps How To Fix App Restart Issues Caused By Out Of Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-fix-app-restart-issues-caused-by-out-of-memory.md
+
+ Title: App restart issues caused by out-of-memory issues
+
+description: Explains how to understand out-of-memory (OOM) issues for Java applications in Azure Spring Apps.
++++ Last updated : 07/15/2022+++
+# App restart issues caused by out-of-memory issues
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article describes out-of-memory (OOM) issues for Java applications in Azure Spring Apps.
+
+## Types of out-of-memory issues
+
+There are two types of out-of-memory issues: container OOM and JVM OOM.
+
+- Container OOM, also called *system OOM*, occurs when the available app memory has run out. Container OOM issues cause app restart events, which are reported in the **Resource Health** section of the Azure portal. Normally, container OOM is caused by incorrect memory size configurations.
+
+- JVM OOM occurs when the amount of used memory reaches the maximum size set in JVM options. JVM OOM won't cause an app to restart. Normally, JVM OOM is a result of bad code, which you can find by looking for `java.lang.OutOfMemoryError` exceptions in the application log. JVM OOM has a negative effect on the application and on Java profiling tools such as Java Flight Recorder. A minimal reproduction follows this list.
+
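For reference, a JVM OOM is easy to reproduce deliberately. The following minimal sketch (class name and sizes are arbitrary) hoards references so the collector can reclaim nothing; with a small heap such as `-Xmx64m`, it ends with `java.lang.OutOfMemoryError: Java heap space` in the log:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapOomDemo {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<>();
        while (true) {
            hoard.add(new byte[1024 * 1024]); // 1 MB per iteration, never released
        }
    }
}
```
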
+This article focuses on how to fix container OOM issues. To fix JVM OOM issues, check tools such as heap dump, thread dump, and Java Flight Recorder. For more information, see [Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Apps](how-to-capture-dumps.md).
+
+## Fix app restart issues due to OOM
+
+The following sections describe the tools, metrics, and JVM options that you can use to diagnose and fix container OOM issues.
+
+### View alerts on the Resource health page
+
+The **Resource health** page on the Azure portal shows app restart events due to container OOM, as shown in the following screenshot:
++
+### Configure memory size
+
+The metrics *App memory Usage*, `jvm.memory.used`, and `jvm.memory.committed` provide a view of memory usage. For more information, see the [Metrics](tools-to-troubleshoot-memory-issues.md#metrics) section of [Tools to troubleshoot memory issues](tools-to-troubleshoot-memory-issues.md). Configure the maximum memory sizes in JVM options to ensure that memory is under the limit.
+
+The sum of the maximum memory sizes of all the parts in the [Java memory model](concepts-for-java-memory-management.md#java-memory-model) should be less than the real available app memory. To set your maximum memory sizes, see the typical memory layout described in the [Memory usage layout](concepts-for-java-memory-management.md#memory-usage-layout) section of [Java memory management](concepts-for-java-memory-management.md).
+
+Find a balance when you set the maximum memory size. When you set the maximum memory size too high, there's a risk of container OOM. When you set the maximum memory size too low, there's a risk of JVM OOM, and garbage collection will be too frequent and will slow down the app.
+
+#### Control heap memory
+
+You can set the maximum heap size by using the `-Xms`, `-Xmx`, `-XX:InitialRAMPercentage`, and `-XX:MaxRAMPercentage` JVM options.
+
+You may need to adjust the maximum heap size settings when the value of `jvm.memory.used` is too high in the metrics. For more information, see the [jvm.memory.used/committed/max](tools-to-troubleshoot-memory-issues.md#jvmmemoryusedcommittedmax) section of [Tools to troubleshoot memory issues](tools-to-troubleshoot-memory-issues.md).
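+
+For example, in Azure Spring Apps you can apply heap settings through the deployment's JVM options. The following Azure CLI sketch uses placeholder resource names and an illustrative 1-GB heap; adjust the values to your own memory layout:
+
+```azurecli
+# Cap the heap at 1 GB (an illustrative value, not a recommendation)
+az spring app update \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --name <app-name> \
+    --jvm-options="-Xms1024m -Xmx1024m"
+```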
+
+#### Control direct memory
+
+It's important to set the `-XX:MaxDirectMemorySize` JVM option for the following reasons:
+
+- You may not notice when frameworks such as nio and gzip use direct memory.
+- Garbage collection of direct memory is only handled during full garbage collection, and full garbage collection occurs only when the heap is near full.
+
+Normally, you can set `MaxDirectMemorySize` to a value less than the app memory size minus the heap memory minus the non-heap memory.
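+
+As a hedged illustration of that rule, suppose an app has 2 GB of memory, a 1-GB maximum heap, and roughly 0.5 GB of non-heap memory; that leaves less than 0.5 GB for direct memory. The placeholder names and sizes below are assumptions:
+
+```azurecli
+# 2 GB (app) - 1 GB (heap) - 0.5 GB (non-heap) => keep direct memory below 0.5 GB
+az spring app update \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --name <app-name> \
+    --jvm-options="-Xmx1g -XX:MaxDirectMemorySize=256m"
+```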
+
+#### Control metaspace
+
+You can set the maximum metaspace size by setting the `-XX:MaxMetaspaceSize` JVM option. The `-XX:MetaspaceSize` option sets the threshold value to trigger full garbage collection.
+
+Metaspace memory is usually stable.
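+
+A minimal sketch of capping metaspace alongside the other limits, again with assumed placeholder names and illustrative sizes:
+
+```azurecli
+# Keep heap + metaspace + direct memory below the app memory limit
+az spring app update \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --name <app-name> \
+    --jvm-options="-Xmx1g -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=256m"
+```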
+
+## See also
+
+- [Java memory management](concepts-for-java-memory-management.md)
+- [Tools to troubleshoot memory issues](tools-to-troubleshoot-memory-issues.md)
spring-apps Tools To Troubleshoot Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tools-to-troubleshoot-memory-issues.md
+
+ Title: Tools to troubleshoot memory issues
+
+description: Provides a list of tools for troubleshooting Java memory issues.
++++ Last updated : 07/15/2022+++
+# Tools to troubleshoot memory issues
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article describes various tools that are useful for troubleshooting Java memory issues. You can use many of these tools in scenarios beyond memory issues, but this article focuses only on the topic of memory.
+
+## Alerts and diagnostics
+
+The following sections describe resource health alerts and diagnostics available through the Azure portal.
+
+### Resource health
+
+You can monitor app lifecycle events and set up alerts with Azure Activity log and Azure Service Health. For more information, see [Monitor app lifecycle events using Azure Activity log and Azure Service Health](monitor-app-lifecycle-events.md).
+
+Resource health sends alerts about app restart events due to container out-of-memory (OOM) issues. For more information, see [App restart issues caused by out-of-memory issues](how-to-fix-app-restart-issues-caused-by-out-of-memory.md).
+
+The following screenshot shows an app resource health alert indicating an OOM issue.
++
+### Diagnose and solve problems
+
+Azure Spring Apps diagnostics is an interactive experience to troubleshoot your app without configuration. For more information, see [Self-diagnose and solve problems in Azure Spring Apps](how-to-self-diagnose-solve.md).
+
+In the Azure portal, you can find **Memory Usage** under **Diagnose and solve problems**, as shown in the following screenshot.
++
+**Memory Usage** provides a simple diagnosis for app memory usage, as shown in the following screenshot.
++
+### Metrics
+
+The following sections describe metrics that cover issues including high memory usage, heap memory that's too large, and abnormal garbage collection (too frequent or not frequent enough). For more information, see [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](quickstart-logs-metrics-tracing.md?tabs=Azure-CLI&pivots=programming-language-java).
+
+#### App memory usage
+
+App memory usage is a percentage equal to the app memory used divided by the app memory limit. This value reflects the memory usage of the whole app.
+
+#### jvm.memory.used/committed/max
+
+For JVM memory, there are three metrics: `jvm.memory.used`, `jvm.memory.committed`, and `jvm.memory.max`, which are described in the following list.
+
+"JVM memory" isn't a clearly defined concept. Here, `jvm.memory` is the sum of [heap memory](concepts-for-java-memory-management.md#heap-memory) and former permGen part of [non-heap memory](concepts-for-java-memory-management.md#non-heap-memory). JVM memory doesn't include direct memory or other memory like the thread stack. These three metrics are gathered by Spring Boot Actuator, and the scope of `jvm.memory` is also determined by Spring Boot Actuator.
+
+- `jvm.memory.used` is the amount of used JVM memory, including used heap memory and used former permGen in non-heap memory.
+
+  `jvm.memory.used` mainly reflects changes in heap memory, because the former permGen part is usually stable.
+
+ If you find `jvm.memory.used` too large, consider setting a smaller maximum heap memory size.
+
+- `jvm.memory.committed` is the amount of memory committed for the JVM to use. The size of `jvm.memory.committed` is basically the limit of usable JVM memory.
+
+- `jvm.memory.max` is the maximum amount of JVM memory, not to be confused with the real available amount.
+
+ The value of `jvm.memory.max` can sometimes be confusing because it can be much higher than the available app memory. To clarify, `jvm.memory.max` is the sum of all maximum sizes of heap memory and the former permGen part of [non-heap memory](concepts-for-java-memory-management.md#non-heap-memory), regardless of the real available memory. For example, if an app is set with 1 GB memory in the Azure Spring Apps portal, then the default heap memory size will be 0.5 GB. For more information, see the [Default maximum heap size](concepts-for-java-memory-management.md#default-maximum-heap-size) section of [Java memory management](concepts-for-java-memory-management.md).
+
+  If the default *compressed class space* size is 1 GB, then the value of `jvm.memory.max` will be larger than 1.5 GB regardless of whether the app memory size is 1 GB. For more information, see [Java Platform, Standard Edition HotSpot Virtual Machine Garbage Collection Tuning Guide: Other Considerations](https://docs.oracle.com/javase/9/gctuning/other-considerations.htm) in the Oracle documentation.
+
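+You can also pull these metrics from the command line with `az monitor metrics list`. The following sketch is illustrative only: the resource ID shape, the placeholder names, and the assumption that the metric is exposed under the name `jvm.memory.used` all need to be verified against your instance:
+
+```azurecli
+az monitor metrics list \
+    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AppPlatform/Spring/<service-name>" \
+    --metric "jvm.memory.used" \
+    --interval PT1M
+```
+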
+#### jvm.gc.memory.allocated/promoted
+
+These two metrics are for observing Java garbage collection (GC). For more information, see the [Java garbage collection](concepts-for-java-memory-management.md#java-garbage-collection) section of [Java memory management](concepts-for-java-memory-management.md). The maximum heap size influences the frequency of minor GC and full GC. The maximum metaspace and maximum direct memory size influence full GC. If you want to adjust the frequency of garbage collection, consider modifying these maximum memory sizes.
+
+- `jvm.gc.memory.allocated` is the amount of increase in the size of the young generation memory pool after one GC and before the next. This value reflects minor GC.
+
+- `jvm.gc.memory.promoted` is the amount of increase in the size of the old generation memory pool after GC. This value reflects full GC.
+
+You can find this feature on the Azure portal, as shown in the following screenshot. You can choose specific metrics and add filters for a specific app, deployment, or instance. You can also apply splitting.
++
+## Further debugging
+
+For further debugging, you can manually capture heap dumps and thread dumps, and use Java Flight Recorder (JFR). For more information, see [Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Apps](how-to-capture-dumps.md).
+
+Heap dump records the state of the Java heap memory. Thread dump records the stacks of all live threads. These tools are available through the Azure CLI and on the app page of the Azure portal, as shown in the following screenshot.
++
+You can also use third-party tools like [Memory Analyzer](https://www.eclipse.org/mat/) to analyze heap dumps.
+
+## Modify configurations to fix problems
+
+Some issues you might identify include [container OOM](how-to-fix-app-restart-issues-caused-by-out-of-memory.md#fix-app-restart-issues-due-to-oom), heap memory that's too large, and abnormal garbage collection. If you identify any of these issues, you may need to configure the maximum memory size in the JVM options. For more information, see the [Important JVM options](concepts-for-java-memory-management.md#important-jvm-options) section of [Java memory management](concepts-for-java-memory-management.md#important-jvm-options).
+
+This feature is available on Azure CLI and on the Azure portal, as shown in the following screenshot:
++
+## See also
+
+- [Java memory management](concepts-for-java-memory-management.md)
+- [App restart issues caused by out-of-memory issues](how-to-fix-app-restart-issues-caused-by-out-of-memory.md)
spring-apps Troubleshoot Exit Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot-exit-code.md
+
+ Title: Troubleshoot common exit code issues in Azure Spring Apps
+description: Describes how to troubleshoot common exit codes in Azure Spring Apps
+++ Last updated : 08/24/2022+++
+# Troubleshoot common exit code issues in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article describes troubleshooting actions you can take when your application in Azure Spring Apps exits with an error code. You may receive an error code if your application deployment is unsuccessful, or if the application exits when it's running.
+
+## Exit codes
+
+The exit code indicates the reason the application terminated. The following list describes some common exit codes:
+
+- **0** - The application exited because it ran to completion. Update your server application so that it runs continuously.
+
+  Apps deployed in Azure Spring Apps should offer their services continuously. An exit code of *0* indicates that the application isn't running continuously. Check your logs and source code.
+
+- **1** - If the application exits with a non-zero exit code, debug the code and related services, and then deploy the application again.
+
+ Consider the following possible causes of a non-zero exit code:
+
+ - There's something wrong with your Spring Boot configuration.
+
+    For example, you need the *spring.datasource.url* property to connect to the database, but it's not found in your configuration file.
+
+ - You're disconnected from a third-party service.
+
+ For example, you need to connect to a Redis service, but the service isn't working or available.
+
+ - You don't have sufficient access to a third-party service.
+
+ For example, you need to connect to Azure Key Vault to import certificates in your application, but your application doesn't have the necessary permissions to access it.
+
+- **137** - The application exited because of an out-of-memory error. The application requested resources that the hosting platform failed to provide. Update your application's Java Virtual Machine (JVM) parameters to restrict resource usage or scale up application resources.
+
+ If the application is a Java application, check the JVM parameter values. They may exceed the memory limit of your application.
+
+  For example, suppose you set the *Xmx* JVM parameter to 10 GB, but the application memory limit is only 5 GB. Decrease the *Xmx* value or increase the application memory to make sure that the value of the *Xmx* parameter is lower than or equal to the memory limit of the application.
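+
+   As a sketch with the Azure CLI (the placeholder names and sizes are assumptions, not recommendations), you can either lower the heap limit or scale the app up to more memory:
+
+   ```azurecli
+   # Lower the maximum heap size...
+   az spring app update \
+       --resource-group <resource-group-name> \
+       --service <Azure-Spring-Apps-instance-name> \
+       --name <app-name> \
+       --jvm-options="-Xmx4g"
+
+   # ...or scale the app up to more memory
+   az spring app scale \
+       --resource-group <resource-group-name> \
+       --service <Azure-Spring-Apps-instance-name> \
+       --name <app-name> \
+       --memory 5Gi
+   ```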
+
+- **143** - The application exited because it failed to respond to a health check due to an out-of-memory error or some other error.
+
+ This error code is most often generated by an out-of-memory error. For more information, see [App restart issues caused by out-of-memory issues](./how-to-fix-app-restart-issues-caused-by-out-of-memory.md).
+
+ You can also find more information from the application log by using the Azure CLI [az spring app logs](/cli/azure/spring/app#az-spring-app-logs) command.
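+
+   A minimal example of that command, assuming placeholder resource names:
+
+   ```azurecli
+   az spring app logs \
+       --resource-group <resource-group-name> \
+       --service <Azure-Spring-Apps-instance-name> \
+       --name <app-name> \
+       --lines 100
+   ```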
+
+## Next steps
+
+- [Troubleshoot common Azure Spring Apps issues](./troubleshoot.md)
spring-apps Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot.md
Enterprise tier has built-in VMware Spring Runtime Support, so you can open supp
## Next steps * [How to self-diagnose and solve problems in Azure Spring Apps](./how-to-self-diagnose-solve.md)
+* [Troubleshoot common exit code issues in Azure Spring Apps](./troubleshoot-exit-code.md)
spring-apps Tutorial Managed Identities Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-mysql.md
The following video describes how to manage secrets using Azure Key Vault.
A resource group is a logical container where Azure resources are deployed and managed. Create a resource group to contain both the Key Vault and Spring Cloud using the command [az group create](/cli/azure/group#az-group-create): ```azurecli
-az group create --location <myLocation> --name <myResourceGroup>
+az group create --location <location> --name <resource-group-name>
``` ## Set up your Key Vault
az group create --location <myLocation> --name <myResourceGroup>
To create a Key Vault, use the command [az keyvault create](/cli/azure/keyvault#az-keyvault-create): > [!IMPORTANT]
-> Each Key Vault must have a unique name. Replace *\<myKeyVaultName>* with the name of your Key Vault in the following examples.
+> Each Key Vault must have a unique name. Replace *\<key-vault-name>* with the name of your Key Vault in the following examples.
```azurecli
-az keyvault create --name <myKeyVaultName> -g <myResourceGroup>
+az keyvault create --resource-group <resource-group-name> --name <key-vault-name>
```
-Make a note of the returned `vaultUri`, which will be in the format `https://<your-keyvault-name>.vault.azure.net`. It will be used in the following step.
+Make a note of the returned `vaultUri`, which will be in the format `https://<key-vault-name>.vault.azure.net`. It will be used in the following step.
You can now place a secret in your Key Vault with the command [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set): ```azurecli az keyvault secret set \
- --vault-name <your-keyvault-name> \
- --name <MYSQL-PASSWORD> \
- --value <MySQL-PASSWORD>
+ --vault-name <key-vault-name> \
+ --name <mysql-password> \
+ --value <mysql-password>
``` ## Set up your Azure Database for MySQL
Create a database named *demo* for later use.
```azurecli az mysql db create \
- --resource-group <myResourceGroup> \
+ --resource-group <resource-group-name> \
--name demo \
- --server-name <mysqlName>
+ --server-name <mysql-instance-name>
``` ## Create an app and service in Azure Spring Apps
After installing the corresponding extension, create an Azure Spring Apps instan
```azurecli az extension add --name spring
-az spring create --name <myService> --group <myResourceGroup>
+az spring create --name <Azure-Spring-Apps-instance-name> --resource-group <resource-group-name>
``` The following example creates an app named `springapp` with a system-assigned managed identity, as requested by the `--assign-identity` parameter. ```azurecli az spring app create \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name>
--name springapp
- --service <myService>
- --group <myResourceGroup> \
--assign-endpoint true \ --assign-identity
-export SERVICE_IDENTITY=$(az spring app show --name springapp -s <myService> -g <myResourceGroup> | jq -r '.identity.principalId')
+export SERVICE_IDENTITY=$(az spring app show --name springapp -s <Azure-Spring-Apps-instance-name> -g <resource-group-name> | jq -r '.identity.principalId')
```
-Make a note of the returned `url`, which will be in the format `https://<your-app-name>.azuremicroservices.io`. It will be used in the following step.
+Make a note of the returned `url`, which will be in the format `https://<app-name>.azuremicroservices.io`. It will be used in the following step.
## Grant your app access to Key Vault
Use [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) to gran
```azurecli az keyvault set-policy
- --name <myKeyVaultName> \
+ --name <key-vault-name> \
--object-id ${SERVICE_IDENTITY} \ --secret-permissions set get list ``` > [!NOTE]
-> Use `az keyvault delete-policy --name <myKeyVaultName> --object-id ${SERVICE_IDENTITY}` to remove the access for your app after system-assigned managed identity is disabled.
+> Use `az keyvault delete-policy --name <key-vault-name> --object-id ${SERVICE_IDENTITY}` to remove the access for your app after system-assigned managed identity is disabled.
## Build a sample Spring Boot app with Spring Boot starter
This [sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/m
1. Clone a sample project.
- ```azurecli
- git clone https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples.git
- ```
+ ```azurecli
+ git clone https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples.git
+ ```
2. Specify your Key Vault and Azure Database for MySQL information in your app's `application.properties`.
- ```
- spring.datasource.url=jdbc:mysql://<mysql-instance-name>.mysql.database.azure.com:3306/demo?serverTimezone=UTC
- spring.datasource.username=<mysql-username>@<mysql-instance-name>
- spring.cloud.azure.keyvault.secret.endpoint=https://<keyvault-instance-name>.vault.azure.net/
- ```
+ ```properties
+ spring.datasource.url=jdbc:mysql://<mysql-instance-name>.mysql.database.azure.com:3306/demo?serverTimezone=UTC
+ spring.datasource.username=<mysql-username>@<mysql-instance-name>
+ spring.cloud.azure.keyvault.secret.endpoint=https://<keyvault-instance-name>.vault.azure.net/
+ ```
3. Package your sample app.
- ```azurecli
- mvn clean package
- ```
+ ```azurecli
+ mvn clean package
+ ```
4. Now deploy the app to Azure with the Azure CLI command [az spring app deploy](/cli/azure/spring/app#az-spring-cloud-app-deploy).
- ```azurecli
- az spring app deploy \
+ ```azurecli
+ az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
--name springapp \
- --service <myService> \
- --group <myResourceGroup> \
--jar-path target/asc-managed-identity-mysql-sample-0.1.0.jar
- ```
+ ```
5. Access the public endpoint or test endpoint to test your app.
- ```
- # Create an entry in table
- curl --header "Content-Type: application/json" \
- --request POST \
- --data '{"description":"configuration","details":"congratulations, you have set up JDBC correctly!","done": "true"}' \
- https://myspringcloud-springapp.azuremicroservices.io
-
- # List entires in table
- curl https://myspringcloud-springapp.azuremicroservices.io
- ```
-
+ ```bash
+ # Create an entry in table
+ curl --header "Content-Type: application/json" \
+ --request POST \
+ --data '{"description":"configuration","details":"congratulations, you have set up JDBC correctly!","done": "true"}' \
+ https://myspringcloud-springapp.azuremicroservices.io
+
+    # List entries in table
+ curl https://myspringcloud-springapp.azuremicroservices.io
+ ```
+ ## Next Steps * [Managed identity to connect Key Vault](tutorial-managed-identities-key-vault.md)
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
The following list shows the resource requirements for Azure Spring Apps service
## Azure Spring Apps network requirements
-| Destination Endpoint | Port | Use | Note |
-| | - | -- | |
-| \*:1194 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:1194 | UDP:1194 | Underlying Kubernetes Cluster management. | |
-| \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Apps Service Management. | Information of service instance "requiredTraffics" could be known in resource payload, under "networkProfile" section. |
-| \*:9000 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:9000 | TCP:9000 | Underlying Kubernetes Cluster management. | |
-| \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
-| \*.azure.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
-| \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
-| \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| Destination Endpoint | Port | Use | Note |
+| --- | --- | --- | --- |
+| \*:1194 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:1194 | UDP:1194 | Underlying Kubernetes Cluster management. | |
+| \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Apps Service Management. | Information about the service instance's `requiredTraffics` can be found in the resource payload, under the `networkProfile` section. |
+| \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
+| \*.azure.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
## Azure Spring Apps FQDN requirements/application rules Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations:
-| Destination FQDN | Port | Use |
-| | | |
-| <i>*.azmk8s.io</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
-| <i>mcr.microsoft.com</i> | HTTPS:443 | Microsoft Container Registry (MCR). |
-| <i>*.cdn.mscr.io</i> | HTTPS:443 | MCR storage backed by the Azure CDN. |
-| <i>*.data.mcr.microsoft.com</i> | HTTPS:443 | MCR storage backed by the Azure CDN. |
-| <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
-| <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
-| <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. |
-| <i>packages.microsoft.com</i> | HTTPS:443 | Microsoft packages repository. |
+| Destination FQDN | Port | Use |
+| --- | --- | --- |
+| <i>*.azmk8s.io</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
+| <i>mcr.microsoft.com</i> | HTTPS:443 | Microsoft Container Registry (MCR). |
+| <i>*.cdn.mscr.io</i> | HTTPS:443 | MCR storage backed by the Azure CDN. |
+| <i>*.data.mcr.microsoft.com</i> | HTTPS:443 | MCR storage backed by the Azure CDN. |
+| <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
+| <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
+| <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. |
+| <i>packages.microsoft.com</i> | HTTPS:443 | Microsoft packages repository. |
| <i>acs-mirror.azureedge.net</i> | HTTPS:443 | Repository required to install required binaries like kubenet and Azure CNI. |
-| *mscrl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
-| *crl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
-| *crl3.digicert.com*<sup>1</sup> | HTTPS:80 | Third-Party TLS/SSL Certificate Chain Paths. |
+| *mscrl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
+| *crl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
+| *crl3.digicert.com*<sup>1</sup> | HTTPS:80 | Third-Party TLS/SSL Certificate Chain Paths. |
<sup>1</sup> Please note that these FQDNs aren't included in the FQDN tag. ## Azure Spring Apps optional FQDN for third-party application performance management
-| Destination FQDN | Port | Use |
-| - | - | |
+| Destination FQDN | Port | Use |
+| --- | --- | --- |
| <i>collector*.newrelic.com</i> | TCP:443/80 | Required networks of New Relic APM agents from US region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
| <i>collector*.eu01.nr-data.net</i> | TCP:443/80 | Required networks of New Relic APM agents from EU region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
-| <i>*.live.dynatrace.com</i> | TCP:443 | Required network of Dynatrace APM agents. |
-| <i>*.live.ruxit.com</i> | TCP:443 | Required network of Dynatrace APM agents. |
-| <i>*.saas.appdynamics.com</i> | TCP:443/80 | Required network of AppDynamics APM agents, also see [SaaS Domains and IP Ranges](https://docs.appdynamics.com/display/PAA/SaaS+Domains+and+IP+Ranges). |
+| <i>*.live.dynatrace.com</i> | TCP:443 | Required network of Dynatrace APM agents. |
+| <i>*.live.ruxit.com</i> | TCP:443 | Required network of Dynatrace APM agents. |
+| <i>*.saas.appdynamics.com</i> | TCP:443/80 | Required network of AppDynamics APM agents, also see [SaaS Domains and IP Ranges](https://docs.appdynamics.com/display/PAA/SaaS+Domains+and+IP+Ranges). |
## Next steps
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
ms.devlang: csharp + # Quickstart: Azure Blob Storage client library v12 for .NET
Get started with the Azure Blob Storage client library v12 for .NET. Azure Blob
The examples in this quickstart show you how to use the Azure Blob Storage client library v12 for .NET to: -- [Get the connection string](#get-the-connection-string)-- [Create a container](#create-a-container)-- [Upload a blob to a container](#upload-a-blob-to-a-container)-- [List blobs in a container](#list-blobs-in-a-container)-- [Download a blob](#download-a-blob)-- [Delete a container](#delete-a-container)
+* [Create the project and configure dependencies](#setting-up)
+* [Authenticate to Azure](#authenticate-the-app-to-azure)
+* [Create a container](#create-a-container)
+* [Upload a blob to a container](#upload-a-blob-to-a-container)
+* [List blobs in a container](#list-blobs-in-a-container)
+* [Download a blob](#download-a-blob)
+* [Delete a container](#delete-a-container)
Additional resources: - [API reference documentation](/dotnet/api/azure.storage.blobs) - [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) - [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs)-- [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples)
+- [Samples](/azure/storage/common/storage-samples-dotnet?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#blob-samples)
## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- Azure storage account - [create a storage account](../common/storage-account-create.md)
+- Azure storage account - [create a storage account](/azure/storage/common/storage-account-create)
- Current [.NET Core SDK](https://dotnet.microsoft.com/download/dotnet-core) for your operating system. Be sure to get the SDK and not the runtime. ## Setting up
This section walks you through preparing a project to work with the Azure Blob S
### Create the project
-Create a .NET Core application named *BlobQuickstartV12*.
+For the steps ahead, you'll need to create a .NET console app using either the .NET CLI or Visual Studio 2022.
+
+### [Visual Studio 2022](#tab/visual-studio)
+
+1. At the top of Visual Studio, navigate to **File** > **New** > **Project...**.
+
+1. In the dialog window, enter *console app* into the project template search box and select the first result. Choose **Next** at the bottom of the dialog.
+
+ :::image type="content" source="media/storage-quickstart-blobs-dotnet/visual-studio-new-console-app.png" alt-text="A screenshot showing how to create a new project using Visual Studio.":::
+
+1. For the **Project Name**, enter *BlobQuickstartV12*. Leave the default values for the rest of the fields and select **Next**.
+
+1. For the **Framework**, ensure .NET 6.0 is selected. Then choose **Create**. The new project will open inside the Visual Studio environment.
+
+### [.NET CLI](#tab/net-cli)
1. In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name *BlobQuickstartV12*. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*.
- ```console
+ ```dotnetcli
dotnet new console -n BlobQuickstartV12 ```
Create a .NET Core application named *BlobQuickstartV12*.
cd BlobQuickstartV12 ```
-1. In side the *BlobQuickstartV12* directory, create another directory called *data*. This is where the blob data files will be created and stored.
+1. Open the project in your desired code editor. To open the project in:
+ * Visual Studio, locate and double-click the `BlobQuickStartV12.csproj` file.
+ * Visual Studio Code, run the following command:
- ```console
- mkdir data
+ ```bash
+ code .
```+ ### Install the package
-While still in the application directory, install the Azure Blob Storage client library for .NET package by using the `dotnet add package` command.
+To interact with Azure Blob Storage, install the Azure Blob Storage client library for .NET.
-```console
+### [Visual Studio 2022](#tab/visual-studio)
+
+1. In **Solution Explorer**, right-click the **Dependencies** node of your project. Select **Manage NuGet Packages**.
+
+1. In the resulting window, search for *Azure.Storage.Blobs*. Select the appropriate result, and select **Install**.
+
+ :::image type="content" source="media/storage-quickstart-blobs-dotnet/visual-studio-add-package.png" alt-text="A screenshot showing how to add a new package using Visual Studio.":::
+
+### [.NET CLI](#tab/net-cli)
+
+```dotnetcli
dotnet add package Azure.Storage.Blobs ```
-### Set up the app framework
+
-From the project directory:
+### Set up the app code
-1. Open the *Program.cs* file in your editor.
-1. Remove the `Console.WriteLine("Hello World!");` statement.
-1. Add `using` directives.
-1. Update the `Main` method declaration to support async.
+Replace the starting code in the `Program.cs` file so that it matches the following example, which includes the necessary `using` statements for this exercise.
- Here's the code:
+```csharp
+using Azure.Identity;
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+using System;
+using System.IO;
- :::code language="csharp" source="~/azure-storage-snippets/blobs/quickstarts/dotnet/BlobQuickstartV12/app_framework.cs":::
+// See https://aka.ms/new-console-template for more information
+Console.WriteLine("Hello, World!");
+```
## Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data
The following diagram shows the relationship between these resources.
-![Diagram of Blob storage architecture](./media/storage-blobs-introduction/blob1.png)
+![Diagram of Blob storage architecture.](media/storage-quickstart-blobs-dotnet/blob-1.png)
Use the following .NET classes to interact with these resources:
Use the following .NET classes to interact with these resources:
## Code examples
-The sample code snippets in the following sections show you how to perform basic data operations with the Azure Blob Storage client library for .NET.
+The sample code snippets in the following sections demonstrate how to perform basic data operations with the Azure Blob Storage client library for .NET.
-### Get the connection string
-
-The code below retrieves the connection string for the storage account from the environment variable created in the [Configure your storage connection string](#configure-your-storage-connection-string) section.
-
-Add this code inside the `Main` method:
+> [!IMPORTANT]
+> Make sure you have installed the correct NuGet packages and added the necessary using statements in order for the code samples to work, as described in the [setting up](#setting-up) section.
+* **Azure.Identity** (if you are using the passwordless approach)
+* **Azure.Storage.Blobs**
### Create a container
Decide on a name for the new container. The code below appends a GUID value to t
> [!IMPORTANT] > Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-Create an instance of the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class. Then, call the [CreateBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.createblobcontainerasync) method to create the container in your storage account.
+You can call the [CreateBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.createblobcontainerasync) method on the `blobServiceClient` to create a container in your storage account.
-Add this code to the end of the `Main` method:
+Add this code to the end of the `Program.cs` file:
+```csharp
+// TODO: Replace <storage-account-name> with your actual storage account name
+var blobServiceClient = new BlobServiceClient(
+ new Uri("https://<storage-account-name>.blob.core.windows.net"),
+ new DefaultAzureCredential());
+
+//Create a unique name for the container
+string containerName = "quickstartblobs" + Guid.NewGuid().ToString();
+
+// Create the container and return a container client object
+BlobContainerClient containerClient = await blobServiceClient.CreateBlobContainerAsync(containerName);
+```
### Upload a blob to a container
-The following code snippet:
+Add the following code to the end of the `Program.cs` file:
+
+```csharp
+// Create a local file in the ./data/ directory for uploading and downloading
+string localPath = "data";
+Directory.CreateDirectory(localPath);
+string fileName = "quickstart" + Guid.NewGuid().ToString() + ".txt";
+string localFilePath = Path.Combine(localPath, fileName);
+
+// Write text to the file
+await File.WriteAllTextAsync(localFilePath, "Hello, World!");
+
+// Get a reference to a blob
+BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);
+
+// Upload data from the local file
+await blobClient.UploadAsync(localFilePath, true);
+```
+
+The code snippet completes the following steps:
1. Creates a text file in the local *data* directory.
1. Gets a reference to a [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) object by calling the [GetBlobClient](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobclient) method on the container from the [Create a container](#create-a-container) section.
1. Uploads the local text file to the blob by calling the [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync#Azure_Storage_Blobs_BlobClient_UploadAsync_System_String_System_Boolean_System_Threading_CancellationToken_) method. This method creates the blob if it doesn't already exist, and overwrites it if it does.
-Add this code to the end of the `Main` method:
-
### List blobs in a container

List the blobs in the container by calling the [GetBlobsAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsasync) method. In this case, only one blob has been added to the container, so the listing operation returns just that one blob.
-Add this code to the end of the `Main` method:
+Add the following code to the end of the `Program.cs` file:
+```csharp
+Console.WriteLine("Listing blobs...");
+
+// List all blobs in the container
+await foreach (BlobItem blobItem in containerClient.GetBlobsAsync())
+{
+ Console.WriteLine("\t" + blobItem.Name);
+}
+```
### Download a blob

Download the previously created blob by calling the [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) method. The example code adds a suffix of "DOWNLOADED" to the file name so that you can see both files in the local file system.
-Add this code to the end of the `Main` method:
+Add the following code to the end of the `Program.cs` file:
+
+```csharp
+// Download the blob to a local file
+// Append the string "DOWNLOADED" before the .txt extension
+// so you can compare the files in the data directory
+string downloadFilePath = localFilePath.Replace(".txt", "DOWNLOADED.txt");
+Console.WriteLine("\nDownloading blob to\n\t{0}\n", downloadFilePath);
+
+// Download the blob's contents and save it to a file
+await blobClient.DownloadToAsync(downloadFilePath);
+```
### Delete a container
The following code cleans up the resources the app created by deleting the entir
The app pauses for user input by calling `Console.ReadLine` before it deletes the blob, container, and local files. This is a good chance to verify that the resources were actually created correctly, before they are deleted.
-Add this code to the end of the `Main` method:
+Add the following code to the end of the `Program.cs` file:
+
+```csharp
+// Clean up
+Console.Write("Press any key to begin clean up");
+Console.ReadLine();
+
+Console.WriteLine("Deleting blob container...");
+await containerClient.DeleteAsync();
+
+Console.WriteLine("Deleting the local source and downloaded files...");
+File.Delete(localFilePath);
+File.Delete(downloadFilePath);
+
+Console.WriteLine("Done");
+```
+
+## The completed code
+After completing these steps, the code in your `Program.cs` file should resemble the following:
+
+## [Passwordless (Recommended)](#tab/managed-identity)
+
+```csharp
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+using Azure.Identity;
+
+// TODO: Replace <storage-account-name> with your actual storage account name
+var blobServiceClient = new BlobServiceClient(
+ new Uri("https://<storage-account-name>.blob.core.windows.net"),
+ new DefaultAzureCredential());
+
+//Create a unique name for the container
+string containerName = "quickstartblobs" + Guid.NewGuid().ToString();
+
+// Create the container and return a container client object
+BlobContainerClient containerClient = await blobServiceClient.CreateBlobContainerAsync(containerName);
+
+// Create a local file in the ./data/ directory for uploading and downloading
+string localPath = "data";
+Directory.CreateDirectory(localPath);
+string fileName = "quickstart" + Guid.NewGuid().ToString() + ".txt";
+string localFilePath = Path.Combine(localPath, fileName);
+
+// Write text to the file
+await File.WriteAllTextAsync(localFilePath, "Hello, World!");
+
+// Get a reference to a blob
+BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);
+
+// Upload data from the local file
+await blobClient.UploadAsync(localFilePath, true);
+
+Console.WriteLine("Listing blobs...");
+
+// List all blobs in the container
+await foreach (BlobItem blobItem in containerClient.GetBlobsAsync())
+{
+ Console.WriteLine("\t" + blobItem.Name);
+}
+
+// Download the blob to a local file
+// Append the string "DOWNLOADED" before the .txt extension
+// so you can compare the files in the data directory
+string downloadFilePath = localFilePath.Replace(".txt", "DOWNLOADED.txt");
+
+Console.WriteLine("\nDownloading blob to\n\t{0}\n", downloadFilePath);
+
+// Download the blob's contents and save it to a file
+await blobClient.DownloadToAsync(downloadFilePath);
+
+// Clean up
+Console.Write("Press any key to begin clean up");
+Console.ReadLine();
+
+Console.WriteLine("Deleting blob container...");
+await containerClient.DeleteAsync();
+
+Console.WriteLine("Deleting the local source and downloaded files...");
+File.Delete(localFilePath);
+File.Delete(downloadFilePath);
+
+Console.WriteLine("Done");
+```
+
+## [Connection String](#tab/connection-string)
+
+```csharp
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+
+// TODO: Replace <storage-account-name> with your actual storage account name
+var blobServiceClient = new BlobServiceClient("<storage-account-connection-string>");
+
+//Create a unique name for the container
+string containerName = "quickstartblobs" + Guid.NewGuid().ToString();
+
+// Create the container and return a container client object
+BlobContainerClient containerClient = await blobServiceClient.CreateBlobContainerAsync(containerName);
+
+// Create a local file in the ./data/ directory for uploading and downloading
+string localPath = "data";
+Directory.CreateDirectory(localPath);
+string fileName = "quickstart" + Guid.NewGuid().ToString() + ".txt";
+string localFilePath = Path.Combine(localPath, fileName);
+
+// Write text to the file
+await File.WriteAllTextAsync(localFilePath, "Hello, World!");
+
+// Get a reference to a blob
+BlobClient blobClient = containerClient.GetBlobClient(fileName);
+
+Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);
+
+// Upload data from the local file
+await blobClient.UploadAsync(localFilePath, true);
+
+Console.WriteLine("Listing blobs...");
+
+// List all blobs in the container
+await foreach (BlobItem blobItem in containerClient.GetBlobsAsync())
+{
+ Console.WriteLine("\t" + blobItem.Name);
+}
+
+// Download the blob to a local file
+// Append the string "DOWNLOADED" before the .txt extension
+// so you can compare the files in the data directory
+string downloadFilePath = localFilePath.Replace(".txt", "DOWNLOADED.txt");
+
+Console.WriteLine("\nDownloading blob to\n\t{0}\n", downloadFilePath);
+
+// Download the blob's contents and save it to a file
+await blobClient.DownloadToAsync(downloadFilePath);
+
+// Clean up
+Console.Write("Press any key to begin clean up");
+Console.ReadLine();
+
+Console.WriteLine("Deleting blob container...");
+await containerClient.DeleteAsync();
+
+Console.WriteLine("Deleting the local source and downloaded files...");
+File.Delete(localFilePath);
+File.Delete(downloadFilePath);
+
+Console.WriteLine("Done");
+```
++ ## Run the code This app creates a test file in your local *data* folder and uploads it to Blob storage. The example then lists the blobs in the container and downloads the file with a new name so that you can compare the old and new files.
-Navigate to your application directory, then build and run the application.
+If you're using Visual Studio, press F5 to build and run the code and interact with the console app. If you're using the .NET CLI, navigate to your application directory, then build and run the application.
```console dotnet build
storage Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md
+
+ Title: Migrate applications to use passwordless authentication with Azure Storage
+
+description: Learn to migrate existing applications away from Shared Key authorization with the account key to instead use Azure AD and Azure RBAC for enhanced security.
+++++ Last updated : 07/28/2022+++++
+# Migrate an application to use passwordless connections with Azure services
+
+Application requests to Azure Storage must be authenticated using either account access keys or passwordless connections. However, you should prioritize passwordless connections in your applications when possible. This tutorial explores how to migrate from traditional authentication methods to more secure, passwordless connections.
+
+## Security risks associated with Shared Key authorization
+
+The following code example demonstrates how to connect to Azure Storage using a storage account key. When you create a storage account, Azure generates access keys for that account. Many developers gravitate towards this solution because it feels familiar to options they have worked with in the past. For example, connection strings for storage accounts also use access keys as part of the string. If your application currently uses access keys, consider migrating to passwordless connections using the steps described later in this document.
+
+```csharp
+var blobServiceClient = new BlobServiceClient(
+ new Uri("https://<storage-account-name>.blob.core.windows.net"),
+ new StorageSharedKeyCredential("<storage-account-name>", "<your-access-key>"));
+```
+
+Storage account keys should be used with caution. Developers must be diligent to never expose the keys in an insecure location. Anyone who gains access to the key is able to authenticate. For example, if an account key is accidentally checked into source control, sent through an insecure email, pasted into the wrong chat, or viewed by someone who shouldn't have permission, there's a risk of a malicious user accessing the application. Instead, consider updating your application to use passwordless connections.
+
+## Migrating to passwordless connections
+
+Many Azure services support passwordless connections through Azure AD and role-based access control (RBAC). These techniques provide robust security features and can be implemented using `DefaultAzureCredential` from the Azure Identity client libraries.
+
+> [!IMPORTANT]
+> Some languages must implement `DefaultAzureCredential` explicitly in their code, while others utilize `DefaultAzureCredential` internally through underlying plugins or drivers.
+
+ `DefaultAzureCredential` supports multiple authentication methods and automatically determines which should be used at runtime. This approach enables your app to use different authentication methods in different environments (local dev vs. production) without implementing environment-specific code.
+
+The order and locations in which `DefaultAzureCredential` searches for credentials are described in the [Azure Identity library overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential) and vary between languages. For example, when working locally with .NET, `DefaultAzureCredential` will generally authenticate using the account the developer used to sign in to Visual Studio. When the app is deployed to Azure, `DefaultAzureCredential` will automatically switch to use a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview). No code changes are required for this transition.
++
+> [!NOTE]
+> A managed identity provides a security identity to represent an app or service. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. You can read more about managed identities in the [overview](/azure/active-directory/managed-identities-azure-resources/overview) documentation.
+
+The following code example demonstrates how to connect to an Azure Storage account using passwordless connections. The next section describes how to migrate to this setup in more detail.
+
+A .NET Core application can pass an instance of `DefaultAzureCredential` into the constructor of a service client class. `DefaultAzureCredential` will automatically discover the credentials that are available in that environment.
+
+```csharp
+var blobServiceClient = new BlobServiceClient(
+ new Uri("https://<your-storage-account>.blob.core.windows.net"),
+ new DefaultAzureCredential());
+```
+
+## Steps to migrate an app to use passwordless authentication
+
+The following steps explain how to migrate an existing application to use passwordless connections instead of a key-based solution. These same migration steps apply whether you're using access keys directly or through connection strings.
+
+### Configure roles and users for local development authentication
++
+### Sign in and migrate the app code to use passwordless connections
+
+For local development, make sure you're authenticated with the same Azure AD account you assigned the role to on your Blob Storage account. You can authenticate via the Azure CLI, Visual Studio, Azure PowerShell, or other tools such as IntelliJ.
++
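+
+For example, you can sign in with the Azure CLI before you run the app locally:
+
+```azurecli
+# Sign in interactively; DefaultAzureCredential can then pick up this credential
+az login
+```
+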
+Next, you'll need to update your code to use passwordless connections.
+
+1. To use `DefaultAzureCredential` in a .NET application, add the **Azure.Identity** NuGet package to your application.
+
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ ```
+
+1. At the top of your `Program.cs` file, add the following `using` statement:
+
+ ```csharp
+ using Azure.Identity;
+ ```
+
+1. Identify the locations in your code that currently create a `BlobServiceClient` to connect to Azure Storage. This task is often handled in `Program.cs`, potentially as part of your service registration with the .NET dependency injection container. Update your code to match the following example:
+
+ ```csharp
+ // TODO: Update <storage-account-name> placeholder to your account name
+ var blobServiceClient = new BlobServiceClient(
+ new Uri("https://<storage-account-name>.blob.core.windows.net"),
+ new DefaultAzureCredential());
+ ```
+
+1. Make sure to update the storage account name in the URI of your `BlobServiceClient`. The storage account name can be found on the overview page of the Azure portal.
+
+ :::image type="content" source="../blobs/media/storage-quickstart-blobs-dotnet/storage-account-name.png" alt-text="A screenshot showing how to find the storage account name.":::
+
+#### Run the app locally
+
+After making these code changes, run your application locally. The new configuration should pick up your local credentials from tools such as the Azure CLI, Visual Studio, or IntelliJ. The roles you assigned to your local dev user in Azure will allow your app to connect to the Azure service locally.
+
+### Configure the Azure hosting environment
+
+Once your application is configured to use passwordless connections and runs locally, the same code can authenticate to Azure services after it is deployed to Azure. For example, an application deployed to an Azure App Service instance that has a managed identity enabled can connect to Azure Storage.
+
+#### Create the managed identity using the Azure portal
+
+The following steps demonstrate how to create a system-assigned managed identity for various web hosting services. The managed identity can securely connect to other Azure services using the app configurations you set up previously.
+
+### [Service Connector](#tab/service-connector)
+
+Some app hosting environments support Service Connector, which helps you connect Azure compute services to other backing services. Service Connector automatically configures network settings and connection information. You can learn more about Service Connector and which scenarios are supported on the [overview page](/azure/service-connector/overview).
+
+The following compute services are currently supported:
+
+* Azure App Service
+* Azure Spring Cloud
+* Azure Container Apps (preview)
+
+For this migration guide, you'll use App Service, but the steps are similar for Azure Spring Apps and Azure Container Apps.
+
+> [!NOTE]
+> Azure Spring Apps currently only supports Service Connector using connection strings.
+
+1. On the main overview page of your App Service, select **Service Connector** from the left navigation.
+
+1. Select **+ Create** from the top menu and the **Create connection** panel will open. Enter the following values:
+
+ * **Service type**: Choose **Storage blob**.
+ * **Subscription**: Select the subscription you would like to use.
+ * **Connection Name**: Enter a name for your connection, such as *connector_appservice_blob*.
+ * **Client type**: Leave the default value selected or choose the specific client you'd like to use.
+
+ Select **Next: Authentication**.
+
+ :::image type="content" source="media/migration-create-identity-small.png" alt-text="A screenshot showing how to create a system assigned managed identity." lightbox="media/migration-create-identity.png":::
+
+1. Make sure **System assigned managed identity (Recommended)** is selected, and then choose **Next: Networking**.
+1. Leave the default values selected, and then choose **Next: Review + Create**.
+1. After Azure validates your settings, select **Create**.
+
+The Service Connector will automatically create a system-assigned managed identity for the app service. The connector will also assign the managed identity a **Storage Blob Data Contributor** role for the storage account you selected.
+
+### [App Service](#tab/app-service)
+
+1. On the main overview page of your App Service, select **Identity** from the left navigation.
+
+1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code.
+
+ :::image type="content" source="media/migration-create-identity-small.png" alt-text="A screenshot showing how to create a system assigned managed identity." lightbox="media/migration-create-identity.png":::
+
+### [Spring Apps](#tab/spring-apps)
+
+1. On the main overview page of your Azure Spring App, select **Identity** from the left navigation.
+
+1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code.
+
+ :::image type="content" source="media/storage-migrate-credentials/spring-apps-identity.png" alt-text="A screenshot showing how to enable managed identity for spring apps.":::
+
+### [Container Apps](#tab/container-apps)
+
+1. On the main overview page of your Azure Container App, select **Identity** from the left navigation.
+
+1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code.
+
+ :::image type="content" source="media/storage-migrate-credentials/container-apps-identity.png" alt-text="A screenshot showing how to enable managed identity for container apps.":::
+
+### [Virtual Machines](#tab/virtual-machines)
+
+1. On the main overview page of your virtual machine, select **Identity** from the left navigation.
+
+1. Under the **System assigned** tab, make sure to set the **Status** field to **on**. A system assigned identity is managed by Azure internally and handles administrative tasks for you. The details and IDs of the identity are never exposed in your code.
+
+ :::image type="content" source="media/storage-migrate-credentials/virtual-machine-identity.png" alt-text="A screenshot showing how to enable managed identity for virtual machines.":::
+++
+You can also enable managed identity on an Azure hosting environment using the Azure CLI.
+
+### [Service Connector](#tab/service-connector-identity)
+
+You can create a Service Connection between an Azure compute hosting environment and a target service using the Azure CLI. The CLI automatically handles creating a managed identity and assigns the proper role, as explained in the [portal instructions](#create-the-managed-identity-using-the-azure-portal).
+
+If you are using an Azure App Service, use the `az webapp connection` command:
+
+```azurecli
+az webapp connection create storage-blob --resource-group <resource-group-name> --name <app-service-name> --target-resource-group <target-resource-group-name> --account <target-storage-account-name> --system-identity
+```
+
+If you are using Azure Spring Apps, use the `az spring-cloud connection` command:
+
+```azurecli
+az spring-cloud connection create storage-blob --resource-group <resource-group-name> --service <spring-cloud-service-name> --app <spring-app-name> --deployment <deployment-name> --target-resource-group <target-resource-group> --account <target-storage-account-name> --system-identity
+```
+
+If you are using Azure Container Apps, use the `az containerapp connection` command:
+
+```azurecli
+az containerapp connection create storage-blob --resource-group <resource-group-name> --name <containerapp-name> --target-resource-group <target-resource-group-name> --account <target-storage-account-name> --system-identity
+```
+
+### [App Service](#tab/app-service-identity)
+
+You can assign a managed identity to an Azure App Service with the [az webapp identity assign](/cli/azure/webapp/identity) command.
+
+```azurecli
+az webapp identity assign --resource-group <resource-group-name> --name <app-service-name>
+```
+
+### [Spring Apps](#tab/spring-apps-identity)
+
+You can assign a managed identity to an Azure Spring App with the [az spring app identity assign](/cli/azure/spring/app/identity) command.
+
+```azurecli
+az spring app identity assign --resource-group <resource-group-name> --name <app-name> --service <service-name>
+```
+
+### [Container Apps](#tab/container-apps-identity)
+
+You can assign a managed identity to an Azure Container App with the [az containerapp identity assign](/cli/azure/containerapp/identity) command.
+
+```azurecli
+az containerapp identity assign --resource-group <resource-group-name> --name <container-app-name>
+```
+
+### [Virtual Machines](#tab/virtual-machines-identity)
+
+You can assign a managed identity to a Virtual Machine with the [az vm identity assign](/cli/azure/vm/identity) command.
+
+```azurecli
+az vm identity assign --resource-group <resource-group-name> --name <virtual-machine-name>
+```
+
+### [AKS](#tab/aks-identity)
+
+You can assign a managed identity to an Azure Kubernetes Service (AKS) cluster with the [az aks update](/cli/azure/aks) command.
+
+```azurecli
+az aks update --resource-group <resource-group-name> --name <cluster-name> --enable-managed-identity
+```
+++
+#### Assign roles to the managed identity
+
+Next, you need to grant permissions to the managed identity you created to access your storage account. You can do this by assigning a role to the managed identity, just like you did with your local development user.
+
+### [Service Connector](#tab/assign-role-service-connector)
+
+If you connected your services using the Service Connector, you don't need to complete this step. The necessary configurations were handled for you:
+
+* If you selected a managed identity while creating the connection, a system-assigned managed identity was created for your app and assigned the **Storage Blob Data Contributor** role on the storage account.
+
+* If you selected connection string, the connection string was added as an app environment variable.
+
+### [Azure portal](#tab/assign-role-azure-portal)
+
+1. Navigate to your storage account overview page and select **Access Control (IAM)** from the left navigation.
+
+1. Choose **Add role assignment**.
+
+ :::image type="content" source="media/migration-add-role-small.png" alt-text="A screenshot showing how to add a role to a managed identity." lightbox="media/migration-add-role.png":::
+
+1. In the **Role** search box, search for *Storage Blob Data Contributor*, which is a common role used to manage data operations for blobs. You can assign whatever role is appropriate for your use case. Select *Storage Blob Data Contributor* from the list and choose **Next**.
+
+1. On the **Add role assignment** screen, for the **Assign access to** option, select **Managed identity**. Then choose **+Select members**.
+
+1. In the flyout, search for the managed identity you created by entering the name of your app service. Select the system assigned identity, and then choose **Select** to close the flyout menu.
+
+ :::image type="content" source="media/migration-select-identity-small.png" alt-text="A screenshot showing how to select the assigned managed identity." lightbox="media/migration-select-identity.png":::
+
+1. Select **Next** a couple times until you're able to select **Review + assign** to finish the role assignment.
+
+### [Azure CLI](#tab/assign-role-azure-cli)
+
+To assign a role at the resource level using the Azure CLI, you first must retrieve the resource ID using the `az storage account show` command. You can filter the output properties using the `--query` parameter.
+
+```azurecli
+az storage account show --resource-group '<your-resource-group-name>' --name '<your-storage-account-name>' --query id
+```
+
+Copy the output ID from the preceding command. You can then assign the role to your app's managed identity using the `az role assignment create` command. For the `--assignee` value, use the principal (object) ID of the managed identity rather than a user name.
+
+```azurecli
+az role assignment create --assignee "<managed-identity-principal-id>" \
+ --role "Storage Blob Data Contributor" \
+ --scope "<your-resource-id>"
+```
+++
+#### Test the app
+
+After making these code changes, navigate to your hosted application in the browser. Your app should be able to connect to the storage account successfully. Keep in mind that it may take several minutes for the role assignments to propagate through your Azure environment. Your application is now configured to run both locally and in a production environment without developers having to manage secrets in the application itself.
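+
+If you'd rather verify the connection from code than from the browser, a minimal sketch, assuming a placeholder storage account name and the `DefaultAzureCredential` setup from this tutorial, is to enumerate the account's containers:
+
+```csharp
+using Azure.Identity;
+using Azure.Storage.Blobs;
+
+// Connectivity check: list the containers in the account. If the role
+// assignment hasn't finished propagating yet, this call throws an
+// authorization error; retry after a few minutes.
+var blobServiceClient = new BlobServiceClient(
+    new Uri("https://<your-storage-account>.blob.core.windows.net"),
+    new DefaultAzureCredential());
+
+foreach (var container in blobServiceClient.GetBlobContainers())
+{
+    Console.WriteLine(container.Name);
+}
+```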
+
+## Next steps
+
+In this tutorial, you learned how to migrate an application to passwordless connections.
+
+You can read the following resources to explore the concepts discussed in this article in more depth:
+
+- For more information on authorizing access with managed identity, visit [Authorize access to blob data with managed identities for Azure resources](/azure/storage/blobs/authorize-managed-identity).
+- [Authorize with Azure roles](/azure/storage/blobs/authorize-access-azure-active-directory)
+- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
+- To learn more about authorizing from a web application, visit [Authorize from a native or web application](/azure/storage/common/storage-auth-aad-app).
storage Multiple Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/multiple-identity-scenarios.md
+
+ Title: Configure passwordless connections between multiple services
+
+description: Learn to work with user-assigned managed identities to configure passwordless connections between multiple Azure services.
++++ Last updated : 08/01/2022+++++
+# Configure passwordless connections between multiple Azure apps and services
+
+Applications often require secure connections between multiple Azure services simultaneously. For example, an enterprise Azure App Service instance might connect to several different storage accounts, an Azure SQL database instance, a service bus, and more.
+
+[Managed identities](/azure/active-directory/managed-identities-azure-resources/overview) are the recommended authentication option for secure, passwordless connections between Azure resources. With managed identities, developers don't have to manually track and manage many different secrets, because Azure handles most of these tasks internally. This tutorial explores how to manage connections between multiple services using managed identities and the Azure Identity client library.
+
+## Compare the types of managed identities
+
+Azure provides the following types of managed identities:
+
+* **System-assigned managed identities** are directly tied to a single Azure resource. When you enable a system-assigned managed identity on a service, Azure will create a linked identity and handle administrative tasks for that identity internally. When the Azure resource is deleted, the identity is also deleted.
+* **User-assigned managed identities** are independent identities that are created by an administrator and can be associated with one or more Azure resources. The lifecycle of the identity is independent of those resources.
+
+You can read more about best practices and when to use system-assigned identities versus user-assigned identities in the [identities best practice recommendations](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations).
+
+## Explore DefaultAzureCredential
+
+Managed identities are generally implemented in your application code through a class called `DefaultAzureCredential` from the `Azure.Identity` client library. `DefaultAzureCredential` supports multiple authentication methods and automatically determines which one to use at runtime. You can read more about this approach in the [DefaultAzureCredential overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential).
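+
+A minimal sketch of the pattern (the storage account name is a placeholder) constructs one credential and passes it to an Azure SDK client:
+
+```csharp
+using Azure.Identity;
+using Azure.Storage.Blobs;
+
+// DefaultAzureCredential resolves the first credential that works at runtime:
+// a managed identity when the app runs in Azure, or your Visual Studio or
+// Azure CLI sign-in during local development.
+var credential = new DefaultAzureCredential();
+
+// The same credential instance can be shared across Azure SDK clients.
+var blobServiceClient = new BlobServiceClient(
+    new Uri("https://<your-storage-account>.blob.core.windows.net"),
+    credential);
+```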
+
+## Connect an Azure hosted app to multiple Azure services
+
+You have been tasked with connecting an existing app to multiple Azure services and databases using passwordless connections. The application is an ASP.NET Core Web API hosted on Azure App Service, though the steps below apply to other Azure hosting environments as well, such as Azure Spring Apps, Virtual Machines, Container Apps, and AKS.
+
+This tutorial applies to the following architectures, though it can be adapted to many other scenarios as well through minimal configuration changes.
++
+The following steps demonstrate how to configure an app to use a system-assigned managed identity and your local development account to connect to multiple Azure Services.
+
+### Create a system-assigned managed identity
+
+1) In the Azure portal, navigate to the hosted application that you would like to connect to other services.
+
+2) On the service overview page, select **Identity**.
+
+3) Toggle the **Status** setting to **On** to enable a system assigned managed identity for the service.
+
+ :::image type="content" source="media/enable-system-assigned-identity.png" alt-text="A screenshot showing how to assign a system assigned managed identity." :::
+
+### Assign roles to the managed identity for each connected service
+
+1) Navigate to the overview page of the storage account you'd like to grant your identity access to.
+
+2) Select **Access Control (IAM)** from the storage account navigation.
+
+3) Choose **+ Add** and then **Add role assignment**.
+
+ :::image type="content" source="media/assign-role-system-identity.png" alt-text="A screenshot showing how to assign a system-assigned identity." :::
+
+4) In the **Role** search box, search for *Storage Blob Data Contributor*, which grants permissions to perform read and write operations on blob data. You can assign whatever role is appropriate for your use case. Select *Storage Blob Data Contributor* from the list and choose **Next**.
+
+5) On the **Add role assignment** screen, for the **Assign access to** option, select **Managed identity**. Then choose **+Select members**.
+
+6) In the flyout, search for the managed identity you created by entering the name of your app service. Select the system assigned identity, and then choose **Select** to close the flyout menu.
+
+ :::image type="content" source="media/migration-select-identity.png" alt-text="A screenshot showing how to select a system-assigned identity." :::
+
+7) Select **Next** a couple times until you're able to select **Review + assign** to finish the role assignment.
+
+8) Repeat this process for the other services you would like to connect to.
+
+#### Local development considerations
+
+You can also enable access to Azure resources for local development by assigning roles to a user account the same way you assigned roles to your managed identity.
+
+1) After assigning the **Storage Blob Data Contributor** role to your managed identity, under **Assign access to**, this time select **User, group, or service principal**. Choose **+ Select members** to open the flyout menu again.
+
+2) Search for the *user@domain* account or Azure AD security group you would like to grant access to by email address or name, and then select it. This should be the same account you use to sign in to your local development tooling, such as Visual Studio or the Azure CLI.
+
+> [!NOTE]
+> You can also assign these roles to an Azure Active Directory security group if you're working on a team with multiple developers. You can then add any developer who needs local access to the app to that group.
+
+### Implement the application code
+
+Inside your project, add a reference to the `Azure.Identity` NuGet package. This library contains all of the necessary entities to implement `DefaultAzureCredential`. You can also add any other Azure libraries that are relevant to your app. For this example, the `Azure.Storage.Blobs`, `Azure.Security.KeyVault.Keys`, and `Azure.Messaging.ServiceBus` packages are added in order to connect to Blob Storage, Key Vault, and Service Bus.
+
+```dotnetcli
+dotnet add package Azure.Identity
+dotnet add package Azure.Storage.Blobs
+dotnet add package Azure.Security.KeyVault.Keys
+dotnet add package Azure.Messaging.ServiceBus
+```
+
+At the top of your `Program.cs` file, add the following using statements:
+
+```csharp
+using Azure.Identity;
+using Azure.Storage.Blobs;
+using Azure.Security.KeyVault.Keys;
+using Azure.Messaging.ServiceBus;
+```
+
+In the `Program.cs` file of your project code, create instances of the necessary services your app will connect to. The following examples connect to Blob Storage and Service Bus using the corresponding SDK classes.
+
+```csharp
+var blobServiceClient = new BlobServiceClient(
+    new Uri("https://<your-storage-account>.blob.core.windows.net"),
+    new DefaultAzureCredential());
+
+// The Service Bus client takes the fully qualified namespace of your resource.
+var serviceBusClient = new ServiceBusClient(
+    "<your-namespace>.servicebus.windows.net",
+    new DefaultAzureCredential());
+var sender = serviceBusClient.CreateSender("producttracking");
+```
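+
+The packages above include the Key Vault keys library, which the examples don't show. As a sketch, assuming a placeholder vault name and an identity that already has an appropriate Key Vault role assignment, a `KeyClient` follows the same credential pattern:
+
+```csharp
+// Key Vault uses the same passwordless pattern as the other clients.
+// <your-key-vault> is a placeholder for your vault name.
+var keyClient = new KeyClient(
+    new Uri("https://<your-key-vault>.vault.azure.net"),
+    new DefaultAzureCredential());
+```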
+
+When this application code runs locally, `DefaultAzureCredential` searches down a credential chain for the first available credential. If the `Managed_Identity_Client_ID` environment variable is null locally, it automatically uses the credentials from your local Azure CLI or Visual Studio sign-in. You can read more about this process in the [Azure Identity library overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential).
+
+When the application is deployed to Azure, `DefaultAzureCredential` will automatically retrieve the `Managed_Identity_Client_ID` variable from the app service environment. That value becomes available when a managed identity is associated with your app.
+
+This overall process ensures that your app can run securely locally and in Azure without the need for any code changes.
+
+## Connect multiple apps using multiple managed identities
+
+Although the apps in the previous example all shared the same service access requirements, real environments are often more nuanced. Consider a scenario where multiple apps all connect to the same storage accounts, but two of the apps also access different services or databases.
++
+To configure this setup in your code, make sure your application registers separate services to connect to each storage account or database. Make sure to pull in the correct managed identity client IDs for each service when configuring `DefaultAzureCredential`. The following code example configures these service connections:
+* Two connections to separate storage accounts using a shared user-assigned managed identity
+* A connection to Azure Cosmos DB and Azure SQL services using a second shared user-assigned managed identity
+
+```csharp
+// Get the first user-assigned managed identity ID to connect to shared storage
+var clientIDstorage = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID_Storage");
+
+// First blob storage client, using the shared user-assigned managed identity
+BlobServiceClient blobServiceClient = new BlobServiceClient(
+    new Uri("https://<receipt-storage-account>.blob.core.windows.net"),
+    new DefaultAzureCredential(new DefaultAzureCredentialOptions
+    {
+        ManagedIdentityClientId = clientIDstorage
+    }));
+
+// Second blob storage client, using the same user-assigned managed identity
+BlobServiceClient blobServiceClient2 = new BlobServiceClient(
+    new Uri("https://<contract-storage-account>.blob.core.windows.net"),
+    new DefaultAzureCredential(new DefaultAzureCredentialOptions
+    {
+        ManagedIdentityClientId = clientIDstorage
+    }));
+
+// Get the second user-assigned managed identity ID to connect to shared databases
+var clientIDdatabases = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID_Databases");
+
+// Create a Cosmos DB client (requires the Microsoft.Azure.Cosmos package)
+CosmosClient client = new CosmosClient(
+    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT", EnvironmentVariableTarget.Process),
+    new DefaultAzureCredential(new DefaultAzureCredentialOptions
+    {
+        ManagedIdentityClientId = clientIDdatabases
+    }));
+
+// Open a connection to Azure SQL using the user-assigned managed identity.
+// User Id is the client ID of the managed identity to use.
+string ConnectionString1 = @"Server=<azure-sql-hostname>.database.windows.net; User Id=<client-id-of-managed-identity>; Authentication=Active Directory Default; Database=<database-name>";
+
+using (SqlConnection conn = new SqlConnection(ConnectionString1))
+{
+    conn.Open();
+}
+
+```
+
+You can also associate both a user-assigned managed identity and a system-assigned managed identity with a resource simultaneously. This can be useful in scenarios where all of the apps require access to the same shared services, but one of the apps also has a specific dependency on an additional service. Using a system-assigned identity also ensures that the identity tied to that specific app is deleted when the app is deleted, which can help keep your environment clean.
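+
+In application code, the difference between the two identity types comes down to whether you pass a client ID. The following is a minimal sketch, assuming the environment variable name used earlier in this article:
+
+```csharp
+// System-assigned identity: no client ID is needed, because a resource
+// can have only one system-assigned identity.
+var appSpecificClient = new BlobServiceClient(
+    new Uri("https://<app-specific-storage-account>.blob.core.windows.net"),
+    new DefaultAzureCredential());
+
+// User-assigned identity: pass the identity's client ID so the credential
+// knows which of the attached identities to use.
+var sharedClient = new BlobServiceClient(
+    new Uri("https://<shared-storage-account>.blob.core.windows.net"),
+    new DefaultAzureCredential(new DefaultAzureCredentialOptions
+    {
+        ManagedIdentityClientId =
+            Environment.GetEnvironmentVariable("Managed_Identity_Client_ID_Storage")
+    }));
+```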
++
+These types of scenarios are explored in more depth in the [identities best practice recommendations](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations).
+
+## Next steps
+
+In this tutorial, you learned how to migrate an application to passwordless connections. You can read the following resources to explore the concepts discussed in this article in more depth:
+
+- For more information on authorizing access with managed identity, visit [Authorize access to blob data with managed identities for Azure resources](/azure/storage/blobs/authorize-managed-identity).
+- [Authorize with Azure roles](/azure/storage/blobs/authorize-access-azure-active-directory)
+- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
+- To learn more about authorizing from a web application, visit [Authorize from a native or web application](/azure/storage/common/storage-auth-aad-app).
storage File Sync Troubleshoot Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md
If files fail to tier to Azure Files:
| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to tier due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. | | 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. | | 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database is not running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
-| 0x80C80285 | -2160591493 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file cannot be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync. |
-| 0x80C86050 | -2160615504 | ECS_E_REPLICA_NOT_READY_FOR_TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |
+| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file cannot be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync. |
+| 0x80C86050 | -2134351792 | ECS_E_REPLICA_NOT_READY_FOR_TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |
## How to troubleshoot files that fail to be recalled If files fail to be recalled:
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 09/01/2022 Last updated : 09/09/2022
Azure AD Kerberos authentication only supports using AES-256 encryption.
## Regional availability
-Azure Files authentication with Azure AD Kerberos public preview is available in Azure public cloud in [all Azure regions](https://azure.microsoft.com/global-infrastructure/locations/).
+Azure Files authentication with Azure AD Kerberos public preview is available in Azure public cloud in [all Azure regions](https://azure.microsoft.com/global-infrastructure/locations/) except China (Mooncake).
## Enable Azure AD Kerberos authentication for hybrid user accounts (preview)
stream-analytics Stream Analytics Real Time Fraud Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-real-time-fraud-detection.md
Previously updated : 07/15/2022 Last updated : 09/08/2022 #Customer intent: As an IT admin/developer, I want to run a Stream Analytics job to analyze phone call data and visualize results in a Power BI dashboard.
In this tutorial, you learn how to:
## Prerequisites
-Before you start, make sure you have completed the following steps:
+Before you start, make sure you've completed the following steps:
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* Download the phone call event generator app [TelcoGenerator.zip](https://aka.ms/asatelcodatagen) from the Microsoft Download Center or get the source code from [GitHub](https://github.com/Azure/azure-stream-analytics/tree/master/DataGenerators/TelcoGeneratorCore).
-* You will need Power BI account.
+* If you don't have an **Azure subscription**, create a [free account](https://azure.microsoft.com/free/).
+* Download the **phone call event generator app**, [TelcoGenerator.zip](https://aka.ms/asatelcodatagen) from the Microsoft Download Center or get the source code from [GitHub](https://github.com/Azure/azure-stream-analytics/tree/master/DataGenerators/TelcoGeneratorCore).
+* You'll need a **Power BI** account.
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com).
## Create an event hub
-Before Stream Analytics can analyze the fraudulent calls data stream, the data needs to be sent to Azure. In this tutorial, you will send data to Azure by using [Azure Event Hubs](../event-hubs/event-hubs-about.md).
+Before Stream Analytics can analyze the fraudulent calls data stream, the data needs to be sent to Azure. In this tutorial, you'll send data to Azure by using [Azure Event Hubs](../event-hubs/event-hubs-about.md).
Use the following steps to create an event hub and send call data to that event hub: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Select **Create a resource** > **Internet of Things** > **Event Hubs**.
-
- ![Create an event hub in the Azure portal.](media/stream-analytics-real-time-fraud-detection/find-event-hub-resource.png)
+2. Select **Create a resource** > **Internet of Things** > **Event Hubs**.
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/find-event-hub-resource.png" alt-text="Screenshot showing the Event Hubs creation page.":::
3. Fill out the **Create Namespace** pane with the following values: |**Setting** |**Suggested value** |**Description** | ||||
- |Name | asaTutorialEventHub | A unique name to identify the event hub namespace. |
|Subscription | \<Your subscription\> | Select an Azure subscription where you want to create the event hub. | |Resource group | MyASADemoRG | Select **Create New** and enter a new resource-group name for your account. |
+ |Namespace name | asaTutorialEventHubNS | A unique name to identify the event hub namespace. |
|Location | West US2 | Location where the event hub namespace can be deployed. |- 4. Use default options on the remaining settings and select **Review + create**. Then select **Create** to start the deployment.
- ![Create event hub namespace in the Azure portal](media/stream-analytics-real-time-fraud-detection/create-event-hub-namespace.png)
-
-5. When the namespace has finished deploying, go to **All resources** and find *asaTutorialEventHub* in the list of Azure resources. Select *asaTutorialEventHub* to open it.
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/create-event-hub-namespace.png" alt-text="Screenshot showing the Create Namespace page.":::
+5. After the namespace is deployed successfully, select **Go to resource** to navigate to the **Event Hubs Namespace** page.
+6. On the **Event Hubs Namespace** page, select **+Event Hub** on the command bar.
-6. Next select **+Event Hub** and enter a **Name** for the event hub. Set the **Partition Count** to 2. Use the default options in the remaining settings and select **Create**. Then wait for the deployment to succeed.
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/add-event-hub-button.png" alt-text="Screenshot showing the Add event hub button on the Event Hubs Namespace page.":::
+1. On the **Create Event Hub** page, enter a **Name** for the event hub. Set the **Partition Count** to 2. Use the default options in the remaining settings and select **Create**. Then wait for the deployment to succeed.
- ![Event hub configuration in the Azure portal](media/stream-analytics-real-time-fraud-detection/create-event-hub-portal.png)
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/create-event-hub-portal.png" alt-text="Screenshot showing the Create event hub page.":::
### Grant access to the event hub and get a connection string Before an application can send data to Azure Event Hubs, the event hub must have a policy that allows access. The access policy produces a connection string that includes authorization information.
-1. Navigate to the event hub you created in the previous step, *MyEventHub*. Select **Shared access policies** under **Settings**, and then select **+ Add**.
+1. On the **Event Hubs Namespace** page, select **Event Hubs** under **Entities** on the left menu, and then select the event hub you created.
-2. Name the policy **MyPolicy** and ensure **Manage** is checked. Then select **Create**.
-
- ![Create event hub shared access policy](media/stream-analytics-real-time-fraud-detection/create-event-hub-access-policy.png)
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/select-event-hub.png" alt-text="Screenshot showing the selection of an event hub on the Event Hubs page.":::
+1. On the **Event Hubs instance** page, select **Shared access policies** under **Settings** on the left menu, and then select **+ Add** on the command bar.
+2. Name the policy **MyPolicy**, ensure **Manage** is checked, and then select **Create**.
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/create-event-hub-access-policy.png" alt-text="Screenshot showing Shared access policies page for an event hub.":::
3. Once the policy is created, select the policy name to open the policy. Find the **Connection string–primary key**. Select the **copy** button next to the connection string. ![Save the shared access policy connection string](media/stream-analytics-real-time-fraud-detection/save-connection-string.png)
Before an application can send data to Azure Event Hubs, the event hub must have
Before you start the TelcoGenerator app, you should configure it to send data to the Azure Event Hubs you created earlier. 1. Extract the contents of [TelcoGenerator.zip](https://aka.ms/asatelcodatagen) file.
-2. Open the `TelcoGenerator\TelcoGenerator\telcodatagen.exe.config` file in a text editor of your choice There is more than one `.config` file, so be sure that you open the correct one.
-
+2. Open the `TelcoGenerator\TelcoGenerator\telcodatagen.exe.config` file in a text editor of your choice. There's more than one `.config` file, so be sure that you open the correct one.
3. Update the `<appSettings>` element in the config file with the following details:
- * Set the value of the *EventHubName* key to the value of the EntityPath in the connection string.
- * Set the value of the *Microsoft.ServiceBus.ConnectionString* key to the connection string without the EntityPath value. Don't forget to remove the semicolon that precedes the EntityPath value.
-
+ * Set the value of the **EventHubName** key to the value of the **EntityPath** at the end of the connection string.
+ * Set the value of the **Microsoft.ServiceBus.ConnectionString** key to the connection string **without** the EntityPath value at the end. **Don't forget** to remove the semicolon that precedes the EntityPath value.
4. Save the file. 5. Next open a command window and change to the folder where you unzipped the TelcoGenerator application. Then enter the following command:
Before you start the TelcoGenerator app, you should configure it to send data to
Now that you have a stream of call events, you can create a Stream Analytics job that reads data from the event hub. 1. To create a Stream Analytics job, navigate to the [Azure portal](https://portal.azure.com/).- 2. Select **Create a resource** and search for **Stream Analytics job**. Select the **Stream Analytics job** tile and select **Create**.- 3. Fill out the **New Stream Analytics job** form with the following values: |**Setting** |**Suggested value** |**Description** | ||||
- |Job name | ASATutorial | A unique name to identify the event hub namespace. |
|Subscription | \<Your subscription\> | Select an Azure subscription where you want to create the job. | |Resource group | MyASADemoRG | Select **Use existing** and enter a new resource-group name for your account. |
+ |Job name | ASATutorial | A unique name to identify the event hub namespace. |
|Location | West US2 | Location where the job can be deployed. It's recommended to place the job and the event hub in the same region for best performance and so that you don't pay to transfer data between regions. | |Hosting environment | Cloud | Stream Analytics jobs can be deployed to cloud or edge. **Cloud** allows you to deploy to Azure Cloud, and **Edge** allows you to deploy to an IoT Edge device. | |Streaming units | 1 | Streaming units represent the computing resources that are required to execute a job. By default, this value is set to 1. To learn about scaling streaming units, see [understanding and adjusting streaming units](stream-analytics-streaming-unit-consumption.md) article. |- 4. Use default options on the remaining settings, select **Create**, and wait for the deployment to succeed. ![Create an Azure Stream Analytics job](media/stream-analytics-real-time-fraud-detection/create-stream-analytics-job.png)
+5. After the job is deployed, select **Go to resource** to navigate to the **Stream Analytics job** page.
## Configure job input The next step is to define an input source for the job to read data using the event hub you created in the previous section.
-1. From the Azure portal, open the **All resources** page, and find the *ASATutorial* Stream Analytics job.
-
-2. In the **Job Topology** section of the Stream Analytics job, select **Inputs**.
+2. On the **Stream Analytics job** page, in the **Job Topology** section on the left menu, select **Inputs**.
+3. On the **Inputs** page, select **+ Add stream input** and **Event hub**.
-3. Select **+ Add stream input** and **Event hub**. Fill out the input form with the following values:
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/add-input-event-hub-menu.png" lightbox="media/stream-analytics-real-time-fraud-detection/add-input-event-hub-menu.png" alt-text="Screenshot showing the Input page for a Stream Analytics job.":::
+4. Fill out the input form with the following values:
|**Setting** |**Suggested value** |**Description** | ||||
The next step is to define an input source for the job to read data using the ev
|Subscription | \<Your subscription\> | Select the Azure subscription where you created the event hub. The event hub can be in same or a different subscription as the Stream Analytics job. | |Event hub namespace | asaTutorialEventHub | Select the event hub namespace you created in the previous section. All the event hub namespaces available in your current subscription are listed in the dropdown. | |Event hub name | MyEventHub | Select the event hub you created in the previous section. All the event hubs available in your current subscription are listed in the dropdown. |
- |Event hub policy name | MyPolicy | Select the event hub shared access policy you created in the previous section. All the event hubs policies available in your current subscription are listed in the dropdown. |
+ | Authentication mode | Connection string | In this tutorial, you'll use the connection string to connect to the event hub. |
+ |Event hub policy name | MyPolicy | Select **Use existing**, and then select the policy you created earlier in this tutorial. |
4. Use default options on the remaining settings and select **Save**.
- ![Configure Azure Stream Analytics input](media/stream-analytics-real-time-fraud-detection/configure-stream-analytics-input.png)
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/configure-stream-analytics-input.png" alt-text="Screenshot showing the Event Hubs configuration page for an input.":::
## Create a consumer group
We recommend that you use a distinct consumer group for each Stream Analytics jo
To add a new consumer group: 1. In the Azure portal, go to your Event Hubs instance.- 1. In the left menu, under **Entities**, select **Consumer groups**.
+1. Select **+ Consumer group** on the command bar.
-1. Select **+ Consumer group**.
-
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/create-consumer-group.png" alt-text="Screenshot that shows creating a new consumer group.":::
1. In **Name**, enter a name for your new consumer group. For example, *MyConsumerGroup*.- 1. Select **Create**.
- :::image type="content" source="media/stream-analytics-real-time-fraud-detection/create-consumer-group.png" alt-text="Screenshot that shows creating a new consumer group.":::
- ## Configure job output The last step is to define an output sink where the job can write the transformed data. In this tutorial, you output and visualize data with Power BI. 1. From the Azure portal, open **All resources**, and select the *ASATutorial* Stream Analytics job.- 2. In the **Job Topology** section of the Stream Analytics job, select the **Outputs** option.-
-3. Select **+ Add** > **Power BI**. Then, select **Authorize** and follow the prompts to authenticate Power BI.
--
-4. Fill the output form with the following details and select **Save**:
+3. Select **+ Add** > **Power BI**.
+4. Fill the output form with the following details:
|**Setting** |**Suggested value** | |||
The last step is to define an output sink where the job can write the transforme
|Dataset name | ASAdataset | |Table name | ASATable | | Authentication mode | User token |
+5. Select **Authorize** and follow the prompts to authenticate Power BI.
![Configure Stream Analytics output](media/stream-analytics-real-time-fraud-detection/configure-stream-analytics-output.png)
+6. Select **Save** at the bottom of the **Power BI** page.
This tutorial uses the *User token* authentication mode. To use Managed Identity, see [Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI](powerbi-output-managed-identity.md).
To learn more about the language, see the [Azure Stream Analytics Query Language
If you want to archive every event, you can use a pass-through query to read all the fields in the payload of the event.
-1. Navigate to your Stream Analytics job in the Azure portal and select **Query** under *Job topology*.
-
+1. Navigate to your Stream Analytics job in the Azure portal and select **Query** under **Job topology** on the left menu.
2. In the query window, enter this query: ```SQL
If you want to archive every event, you can use a pass-through query to read all
>As with SQL, keywords are not case-sensitive, and whitespace is not significant. In this query, `CallStream` is the alias that you specified when you created the input. If you used a different alias, use that name instead.- 3. Select **Test query**. The Stream Analytics job runs the query against the sample data from the input and displays the output at the bottom of the window. The results indicate that the Event Hubs and the Streaming Analytics job are configured correctly.
In many cases, your analysis doesn't need all the columns from the input stream.
Run the following query and notice the output. ```SQL
-SELECT CallRecTime, SwitchNum, CallingIMSI, CallingNumCalledNum
+SELECT CallRecTime, SwitchNum, CallingIMSI, CallingNum, CalledNum
+INTO
+ [MyPBIoutput]
FROM CallStream ```
For this transformation, you want a sequence of temporal windows that don't over
The projection includes `System.Timestamp`, which returns a timestamp for the end of each window.
- To specify that you want to use a Tumbling window, you use the [TUMBLINGWINDOW](/stream-analytics-query/tumbling-window-azure-stream-analytics) function in the `GROUP BY` clause. In the function, you specify a time unit (anywhere from a microsecond to a day) and a window size (how many units). In this example, the Tumbling window consists of 5-second intervals, so you will get a count by country/region for every 5 seconds' worth of calls.
+ To specify that you want to use a Tumbling window, you use the [TUMBLINGWINDOW](/stream-analytics-query/tumbling-window-azure-stream-analytics) function in the `GROUP BY` clause. In the function, you specify a time unit (anywhere from a microsecond to a day) and a window size (how many units). In this example, the Tumbling window consists of 5-second intervals, so you'll get a count by country/region for every 5 seconds' worth of calls.
2. Select **Test query**. In the results, notice that the timestamps under **WindowEnd** are in 5-second increments.
For this transformation, you want a sequence of temporal windows that don't over
For this example, consider fraudulent usage to be calls that originate from the same user but in different locations within 5 seconds of one another. For example, the same user can't legitimately make a call from the US and Australia at the same time.
-To check for these cases, you can use a self-join of the streaming data to join the stream to itself based on the `CallRecTime` value. You can then look for call records where the `CallingIMSI` value (the originating number) is the same, but the `SwitchNum` value (country/region of origin) is not the same.
+To check for these cases, you can use a self-join of the streaming data to join the stream to itself based on the `CallRecTime` value. You can then look for call records where the `CallingIMSI` value (the originating number) is the same, but the `SwitchNum` value (country/region of origin) isn't the same.
When you use a join with streaming data, the join must provide some limits on how far the matching rows can be separated in time. As noted earlier, the streaming data is effectively endless. The time bounds for the relationship are specified inside the `ON` clause of the join, using the `DATEDIFF` function. In this case, the join is based on a 5-second interval of call data.
When you use a join with streaming data, the join must provide some limits on ho
GROUP BY TumblingWindow(Duration(second, 1)) ```
- This query is like any SQL join except for the `DATEDIFF` function in the join. This version of `DATEDIFF` is specific to Streaming Analytics, and it must appear in the `ON...BETWEEN` clause. The parameters are a time unit (seconds in this example) and the aliases of the two sources for the join. This is different from the standard SQL `DATEDIFF` function.
+ This query is like any SQL join except for the `DATEDIFF` function in the join. This version of `DATEDIFF` is specific to Streaming Analytics, and it must appear in the `ON...BETWEEN` clause. The parameters are a time unit (seconds in this example) and the aliases of the two sources for the join. This function is different from the standard SQL `DATEDIFF` function.
- The `WHERE` clause includes the condition that flags the fraudulent call: the originating switches are not the same.
+ The `WHERE` clause includes the condition that flags the fraudulent call: the originating switches aren't the same.
2. Select **Test query**. Review the output, and then select **Save query**.
When you use a join with streaming data, the join must provide some limits on ho
For this part of the tutorial, you'll use a sample [ASP.NET](https://asp.net/) web application created by the Power BI team to embed your dashboard. For more information about embedding dashboards, see [embedding with Power BI](/power-bi/developer/embedding) article.
-To set up the application, go to the [PowerBI-Developer-Samples](https://github.com/Microsoft/PowerBI-Developer-Samples) GitHub repository and follow the instructions under the **User Owns Data** section (use the redirect and homepage URLs under the **integrate-web-app** subsection). Since we are using the Dashboard example, use the **integrate-web-app** sample code located in the [GitHub repository](https://github.com/microsoft/PowerBI-Developer-Samples/tree/master/.NET%20Framework/Embed%20for%20your%20organization/).
+To set up the application, go to the [PowerBI-Developer-Samples](https://github.com/Microsoft/PowerBI-Developer-Samples) GitHub repository and follow the instructions under the **User Owns Data** section (use the redirect and homepage URLs under the **integrate-web-app** subsection). Since we're using the Dashboard example, use the **integrate-web-app** sample code located in the [GitHub repository](https://github.com/microsoft/PowerBI-Developer-Samples/tree/master/.NET%20Framework/Embed%20for%20your%20organization/).
Once you've got the application running in your browser, follow these steps to embed the dashboard you created earlier into the web page: 1. Select **Sign in to Power BI**, which grants the application access to the dashboards in your Power BI account.
synapse-analytics Apache Spark Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-advisor.md
Title: Spark Advisor
-description: Spark Advisor is a system to automatically analyze commands/queries, and show the appropriate advice when a customer executes code or query.
+ Title: Troubleshoot Spark application issues with Spark Advisor
+description: Learn how to troubleshoot Spark application issues with Spark Advisor. The advisor automatically analyzes queries and commands, and offers advice.
Last updated 06/23/2022
-# Spark Advisor
+# Troubleshoot Spark application issues with Spark Advisor
-Spark Advisor is a system to automatically analyze commands/queries, and show the appropriate advice when customer executes code or query. After applying the advice, you would have chance to improve your execution performance, decrease cost and fix the execution failures.
+Spark Advisor is a system that automatically analyzes your code, queries, and commands, and advises you about them. By following this advice, you can improve your execution performance, fix execution failures, and decrease costs. This article describes the advice Spark Advisor provides and how to address the issues it flags.
+## Advice on query hints
-## Advice provided
-
-### May return inconsistent results when using 'randomSplit'
-Inconsistent or inaccurate results may be returned when working with the results of the 'randomSplit' method. Use Apache Spark (RDD) caching before using the 'randomSplit' method.
-
-Method randomSplit() is equivalent to performing sample() on your data frame multiple times, with each sample refetching, partitioning, and sorting your data frame within partitions. The data distribution across partitions and sorting order is important for both randomSplit() and sample(). If either changes upon data refetch, there may be duplicates, or missing values across splits and the same sample using the same seed may produce different results.
-
-These inconsistencies may not happen on every run, but to eliminate them completely, cache your data frame, repartition on a column(s), or apply aggregate functions such as groupBy.
-
-### Table/view name is already in use
-A view already exists with the same name as the created table, or a table already exists with the same name as the created view.
-When this name is used in queries or applications, only the view will be returned no matter, which one created first. To avoid conflicts, rename either the table or the view.
-
-## Hints related advise
-### Unable to recognize a hint
-The selected query contains a hint that isn't recognized. Verify that the hint is spelled correctly.
+### Unable to recognize a hint
+Verify that the hint is spelled correctly.
```scala spark.sql("SELECT /*+ unknownHint */ * FROM t1") ```
-### Unable to find a specified relation name(s)
-Unable to find the relation(s) specified in the hint. Verify that the relation(s) are spelled correctly and accessible within the scope of the hint.
+### Unable to find specified relation names
+Verify that the relations are spelled correctly and are accessible within the scope of the hint.
```scala spark.sql("SELECT /*+ BROADCAST(unknownTable) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str") ``` ### A hint in the query prevents another hint from being applied
-The selected query contains a hint that prevents another hint from being applied.
```scala spark.sql("SELECT /*+ BROADCAST(t1), MERGE(t1, t2) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str") ```
-## Enable 'spark.advise.divisionExprConvertRule.enable' to reduce rounding error propagation
-This query contains the expression with Double type. We recommend that you enable the configuration 'spark.advise.divisionExprConvertRule.enable', which can help reduce the division expressions and to reduce the rounding error propagation.
+### Reduce rounding error propagation caused by division
+This query contains an expression with the `double` type. We recommend that you enable the configuration `spark.advise.divisionExprConvertRule.enable`, which rewrites chained division expressions to reduce the propagation of rounding errors.
```text "t.a/t.b/t.c" convert into "t.a/(t.b * t.c)" ```
-## Enable 'spark.advise.nonEqJoinConvertRule.enable' to improve query performance
-This query contains time consuming join due to "Or" condition within query. We recommend that you enable the configuration 'spark.advise.nonEqJoinConvertRule.enable', which can help to convert the join triggered by "Or" condition to SMJ or BHJ to accelerate this query.
+### Improve query performance for non-equal join
+This query contains a time-consuming join because of an `Or` condition within the query. We recommend that you enable the configuration `spark.advise.nonEqJoinConvertRule.enable`. It can help convert the join triggered by the `Or` condition to shuffle sort merge join (SMJ) or broadcast hash join (BHJ) to accelerate this query.
-## Next steps
+### The use of the randomSplit method might return inconsistent results
+Apache Spark might return inconsistent or inaccurate results when you work with the results of the `randomSplit` method. Use Apache Spark resilient distributed dataset (RDD) caching before you use the `randomSplit` method.
+
+The `randomSplit()` method is equivalent to performing a `sample()` action on your DataFrame multiple times, with each sample refetching, partitioning, and sorting your DataFrame within partitions. The data distribution across partitions and sort order is important for both `randomSplit()` and `sample()` methods. If either changes upon data refetch, there might be duplicates or missing values across splits, and the same sample that uses the same seed might produce different results.
+
+These inconsistencies might not happen on every run. To eliminate them completely, cache your DataFrame, repartition on columns, or apply aggregate functions such as `groupBy`.
-For more information on monitoring pipeline runs, see the [Monitor pipeline runs using Synapse Studio](how-to-monitor-pipeline-runs.md) article.
+### A table or view name might already be in use
+A view already exists with the same name as the created table, or a table already exists with the same name as the created view. When you use this name in queries or applications, only the view is returned, regardless of which one was created first. To avoid conflicts, rename either the table or the view.
+
+## Next steps
+For more information on monitoring pipeline runs, see [Monitor pipeline runs using Synapse Studio](how-to-monitor-pipeline-runs.md).
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
To enable successful interaction with Azure Synapse Dedicated SQL Pool, followin
## API documentation
-Azure Synapse Dedicated SQL Pool Connector for Apache Spark - [API Documentation](https://synapsesql.blob.core.windows.net/docs/latest/scala/https://docsupdatetracker.net/index.html).
+Azure Synapse Dedicated SQL Pool Connector for Apache Spark - API Documentation.
### Configuration options
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-delta-lake-format.md
With the explicit specification of the result set schema, you can minimize the t
## Dataset
-[NYC Yellow Taxi](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/) dataset is used in this sample. You can query Parquet files the same way you [read CSV files](query-parquet-files.md). The only difference is that the `FILEFORMAT` parameter should be set to `PARQUET`. Examples in this article show the specifics of reading Parquet files.
+[NYC Yellow Taxi](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/) dataset is used in this sample. The original `PARQUET` dataset is converted to `DELTA` format, and the `DELTA` version is used in the examples.
### Query partitioned data
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
If the issue persists, create a support ticket.
The error "Incorrect syntax near 'NOT'" indicates there are some external tables with columns that contain the NOT NULL constraint in the column definition. Update the table to remove NOT NULL from the column definition. This error can sometimes also occur transiently with tables created from a CETAS statement. If the problem doesn't resolve, you can try dropping and re-creating the external table.
-### Partitione column returns NULL values
+### Partition column returns NULL values
If your query returns NULL values instead of partitioning columns or can't find the partition columns, you have a few possible troubleshooting steps:
Here's the solution:
WITH ( FORMAT_TYPE = PARQUET) ```
-### Operation isn't allowed for a replicated database
-
-If you're trying to create SQL objects, users, or change permissions in a database, you might get errors like "Operation is not allowed for a replicated database." This error might be returned when you try to modify a Lake database that's [shared with Spark pool](../metadat). The Lake databases that are replicated from the Apache Spark pool are managed by Synapse and you cannot create objects like in SQL Databases by using T-SQL.
-Only the following operations are allowed in the Lake databases:
-- Creating, dropping, or altering views, procedures, and inline table-value functions (iTVF) in the schemas other than `dbo`. If you are creating a SQL object in `dbo` schema (or omitting schema and using the default one that is usually `dbo`), you will get the error message.-- Creating and dropping the database users from Azure Active Directory.-- Adding or removing database users from `db_datareader` schema.-
-Other operations are not allowed in Lake databases.
- ### Can't create Azure AD sign-in or user If you get an error while you're trying to create a new Azure AD sign-in or user in a database, check the sign-in you used to connect to your database. The sign-in that's trying to create a new Azure AD user must have permission to access the Azure AD domain and check if the user exists. Be aware that:
There are two options available to circumvent this error:
Our engineering team is currently working on a full support for Spark 3.3.
+## Lake database
+
+The Lake database tables that are created using Spark or Synapse designer are automatically available in serverless SQL pool for querying. You can use serverless SQL pool to query the Parquet, CSV, and Delta Lake tables that are created using Spark pool, and add schemas, views, procedures, inline table-valued functions, and Azure AD users in the `db_datareader` role to your Lake database. Possible issues are listed in this section.
+
+### A table created in Spark is not available in serverless pool
+
+Tables that are created might not be immediately available in serverless SQL pool.
+- The tables will be available in serverless pools with some delay. You might need to wait 5-10 minutes after creation of a table in Spark to see it in serverless SQL pool.
+- Only the tables that reference Parquet, CSV, and Delta formats are available in serverless SQL pool. Other table types are not available.
+- A table that contains some [unsupported column types](../metadat#share-spark-tables) will not be available in serverless SQL pool.
+- Accessing Delta Lake tables in Lake databases is in **public preview**. Check other issues listed in this section or in the Delta Lake section.
+
+### Operation isn't allowed for a replicated database
+
+This error is returned if you're trying to create external tables, external data sources, database scoped credentials, or other objects in your Lake database. These objects can be created only on SQL databases.
+
+You might also see "Operation is not allowed for a replicated database" when you try to create SQL objects or users, or change permissions, in a Lake database that's [shared with Spark pool](../metadat). The Lake databases are replicated from the Apache Spark pool and managed by Synapse, so you can't create objects in them by using T-SQL the way you can in SQL databases.
+
+Only the following operations are allowed in the Lake databases:
+- Creating, dropping, or altering views, procedures, and inline table-valued functions (iTVF) in the schemas other than `dbo`.
+- Creating and dropping the database users from Azure Active Directory.
+- Adding or removing database users from the `db_datareader` role.
+
+Other operations are not allowed in Lake databases.
+
+> [!NOTE]
+> If you are creating a view, procedure, or function in `dbo` schema (or omitting schema and using the default one that is usually `dbo`), you will get the error message.
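+
+As a hedged illustration of the allowed operations above (names are hypothetical; this reuses the connection and `$token` assumptions from the earlier sketch), the following creates a view in a non-`dbo` schema and adds an Azure AD user to the `db_datareader` role:
+
+```azurepowershell-interactive
+## T-SQL that performs only operations allowed in a Lake database. ##
+$tsql = @"
+CREATE SCHEMA reports;
+GO
+CREATE VIEW reports.myView AS SELECT * FROM dbo.mySparkTable;
+GO
+CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER;
+ALTER ROLE db_datareader ADD MEMBER [user@contoso.com];
+"@
+Invoke-Sqlcmd -ServerInstance 'myworkspace-ondemand.sql.azuresynapse.net' `
+    -Database 'myLakeDatabase' -AccessToken $token -Query $tsql
+```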
+
+### Dataverse real-time snapshot tables are not available in serverless SQL pool
+
+If you are exporting your [Dataverse table to Azure Data Lake storage](/power-apps/maker/data-platform/azure-synapse-link-data-lake#manage-table-data-to-the-data-lake), and you don't see the [snapshot data](/power-apps/maker/data-platform/azure-synapse-link-synapse#access-near-real-time-data-and-read-only-snapshot-data) (the tables with the `_partitioned` suffix) in your Lake database, make sure that your workspace Managed Identity has read access on the ADLS storage that contains the exported data. The serverless SQL pool reads the schema of the exported data using Managed Identity access to create the table schema. A sketch of the role assignment follows.
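+
+A sketch of that role assignment with Azure PowerShell, using placeholder resource names and assuming a system-assigned workspace identity whose service principal carries the workspace name:
+
+```azurepowershell-interactive
+## Grant the workspace managed identity read access on the storage account. ##
+$storage = Get-AzStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'mydatalake'
+$identity = Get-AzADServicePrincipal -DisplayName 'myworkspace'
+New-AzRoleAssignment -ObjectId $identity.Id `
+    -RoleDefinitionName 'Storage Blob Data Reader' -Scope $storage.Id
+```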
+
+### Delta tables in Lake databases are not available in serverless SQL pool
+
+Make sure that your workspace Managed Identity has read access on the ADLS storage that contains the Delta folder. The serverless SQL pool reads the Delta Lake table schema from the Delta logs that are placed in ADLS and uses the workspace Managed Identity to access the Delta transaction logs.
+
+Try to set up a data source in a SQL database that references your Azure Data Lake storage by using a Managed Identity credential, and try to [create an external table on top of the data source with Managed Identity](/sql/develop-storage-files-storage-access-control.md?tabs=managed-identity#access-a-data-source-using-credentials) to confirm that a table that uses the Managed Identity can access your storage.
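+
+A sketch of that verification, with placeholder names, reusing `$token` from the earlier sketch and run against a test serverless SQL database:
+
+```azurepowershell-interactive
+## Create a Managed Identity credential, data source, and external table over the Delta folder. ##
+$tsql = @"
+CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
+CREATE DATABASE SCOPED CREDENTIAL WorkspaceIdentity WITH IDENTITY = 'Managed Identity';
+CREATE EXTERNAL DATA SOURCE myLake WITH (
+    LOCATION = 'https://mydatalake.dfs.core.windows.net/mycontainer',
+    CREDENTIAL = WorkspaceIdentity );
+CREATE EXTERNAL FILE FORMAT DeltaFormat WITH ( FORMAT_TYPE = DELTA );
+CREATE EXTERNAL TABLE testDelta ( id INT ) WITH (
+    LOCATION = '/delta/myTable', DATA_SOURCE = myLake, FILE_FORMAT = DeltaFormat );
+"@
+Invoke-Sqlcmd -ServerInstance 'myworkspace-ondemand.sql.azuresynapse.net' `
+    -Database 'myTestDatabase' -AccessToken $token -Query $tsql
+```
+
+If the external table can be queried, the Managed Identity has the access it needs; if it fails, check the role assignment on the storage account.
+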
+ ### Delta tables in Lake databases do not have identical schema in Spark and serverless pools
+
+Serverless SQL pools enable you to access Parquet, CSV, and Delta tables that are created in Lake database using Spark or Synapse designer. Accessing the Delta tables is still in public preview; currently, serverless will synchronize a Delta table with Spark at the time of creation but won't update the schema if the columns are added later using the `ALTER TABLE` statement in Spark.
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Follow the instructions in [Enforce Azure Active Directory Multi-Factor Authenti
### Passwordless authentication
-You can use any authentication type supported by Azure AD, such as [Windows Hello for Business](/security/identity-protection/hello-for-business/hello-overview) and other [passwordless authentication options](../active-directory/authentication/concept-authentication-passwordless.md) (for example, FIDO keys), to authenticate to the service.
+You can use any authentication type supported by Azure AD, such as [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) and other [passwordless authentication options](../active-directory/authentication/concept-authentication-passwordless.md) (for example, FIDO keys), to authenticate to the service.
### Smart card authentication
Once you're connected to your remote app or desktop, you may be prompted for aut
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Azure Virtual Desktop supports in-session passwordless authentication (preview) using [Windows Hello for Business](/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys. Passwordless authentication is currently only available for certain versions of Windows Insider. When deploying new session hosts, choose one of the following images:
+Azure Virtual Desktop supports in-session passwordless authentication (preview) using [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys. Passwordless authentication is currently only available for certain versions of Windows Insider. When deploying new session hosts, choose one of the following images:
- Windows 11 version 22H2 Enterprise, (Preview) - X64 Gen 2.
- Windows 11 version 22H2 Enterprise multi-session, (Preview) - X64 Gen2.
virtual-machines Openshift Okd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/openshift-okd.md
You can use one of two ways to deploy OKD (formerly OpenShift Origin) in Azure:
- You can manually deploy all the necessary Azure infrastructure components, and then follow the [OKD documentation](https://docs.okd.io).
-- You can also use an existing [Resource Manager template](https://github.com/Microsoft/openshift-origin) that simplifies the deployment of the OKD cluster.
+- You can also use an existing [Resource Manager template](https://github.com/openshift/origin) that simplifies the deployment of the OKD cluster.
## Deploy using the OKD template
Some common customization options include, but aren't limited to:
- Naming conventions (variables in azuredeploy.json)
- OpenShift cluster specifics, modified via hosts file (deployOpenShift.sh)
-The [OKD template](https://github.com/Microsoft/openshift-origin) has multiple branches available for different versions of OKD. Based on your needs, you can deploy directly from the repo or you can fork the repo and make custom changes before deploying.
+The [OKD template](https://github.com/openshift/origin) has multiple branches available for different versions of OKD. Based on your needs, you can deploy directly from the repo or you can fork the repo and make custom changes before deploying.
Use the `appId` value from the service principal that you created earlier for the `aadClientId` parameter.
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
There are three main ways to share images in an Azure Compute Gallery, depending
| Share with\: | Option |
|-|-|
| [Specific people, groups, or service principals](./share-gallery.md) | Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level. |
-| [Subscriptions or tenants](explained in this article) | Direct shared gallery lets you share to everyone in a subscription or tenant. |
+| [Subscriptions or tenants](explained in this article) | Direct shared gallery lets you share to everyone in a subscription or tenant (all users, service principals, and managed identities). |
| [Everyone](./share-gallery-community.md) | Community gallery lets you share your entire gallery publicly, to all Azure users. |
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
az vm list-skus \
--output table
```
-### Custom images
+### Custom images or Azure Compute Gallery images
If you're using a custom image and your image supports Accelerated Networking, make sure that you have the required drivers to work with Mellanox ConnectX-3, ConnectX-4 Lx, and ConnectX-5 NICs on Azure. Also, Accelerated Networking requires network configurations that exempt the configuration of the virtual functions (mlx4_en and mlx5_core drivers). In images that have cloud-init >=19.4, networking is correctly configured to support Accelerated Networking during provisioning.
virtual-network Configure Public Ip Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-nat-gateway.md
To change the IP, you'll associate a new public IP address created previously wi
## Add public IP prefix
-Public IP prefixes extend the extensibility of SNAT for outbound connections from the NAT gateway. A public IP prefix avoids SNAT port exhaustion. Each IP provides 64,000 ephemeral ports that can be used.
+Public IP prefixes extend the extensibility of SNAT for outbound connections from the NAT gateway. A public IP prefix avoids SNAT port exhaustion. Each IP address provides 64,512 ephemeral SNAT ports that the NAT gateway can use for outbound connections.
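+For example, attaching a /28 public IP prefix (16 addresses) would give the NAT gateway 16 × 64,512 = 1,032,192 SNAT ports for outbound connections.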
> [!NOTE]
> When assigning a public IP prefix to a NAT gateway, the entire range will be used.
virtual-network Virtual Network Multiple Ip Addresses Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-powershell.md
-
Title: Multiple IP addresses for Azure virtual machines - PowerShell | Microsoft Docs
-description: Learn how to assign multiple IP addresses to a virtual machine using PowerShell. | Resource Manager
+
+ Title: Multiple IP addresses for Azure virtual machines - Azure PowerShell
+
+description: Learn how to create a virtual machine with multiple IP addresses with Azure PowerShell.
- Previously updated : 03/24/2017 Last updated : 09/09/2022 -
-# Assign multiple IP addresses to virtual machines using PowerShell
---
-This article explains how to create a virtual machine (VM) through the Azure Resource Manager deployment model using PowerShell. Multiple IP addresses cannot be assigned to a single NIC created through the classic deployment model, although a classic VM can have multiple NICs, each with their own IP address. To learn more about Azure deployment models, read the [Understand deployment models](../../azure-resource-manager/management/deployment-models.md) article.
--
-## <a name = "create"></a>Create a VM with multiple IP addresses
-
-The steps that follow explain how to create an example VM with multiple IP addresses, as described in the scenario. Change variable values as required for your implementation.
-
-1. Open a PowerShell command prompt and complete the remaining steps in this section within a single PowerShell session. If you don't already have PowerShell installed and configured, complete the steps in the [How to install and configure Azure PowerShell](/powershell/azure/) article.
-2. Login to your account with the `Connect-AzAccount` command.
-3. Replace *myResourceGroup* and *westus* with a name and location of your choosing. Create a resource group. A resource group is a logical container into which Azure resources are deployed and managed.
-
- ```powershell
- $RgName = "MyResourceGroup"
- $Location = "westus"
-
- New-AzResourceGroup `
- -Name $RgName `
- -Location $Location
- ```
-
-4. Create a virtual network (VNet) and subnet in the same location as the resource group:
-
- ```powershell
-
- # Create a subnet configuration
- $SubnetConfig = New-AzVirtualNetworkSubnetConfig `
- -Name MySubnet `
- -AddressPrefix 10.0.0.0/24
-
- # Create a virtual network
- $VNet = New-AzVirtualNetwork `
- -ResourceGroupName $RgName `
- -Location $Location `
- -Name MyVNet `
- -AddressPrefix 10.0.0.0/16 `
- -Subnet $subnetConfig
-
- # Get the subnet object
- $Subnet = Get-AzVirtualNetworkSubnetConfig -Name $SubnetConfig.Name -VirtualNetwork $VNet
- ```
-
-5. Create a network security group (NSG) and a rule. The NSG secures the VM using inbound and outbound rules. In this case, an inbound rule is created for port 3389, which allows incoming remote desktop connections.
-
- ```powershell
-
- # Create an inbound network security group rule for port 3389
-
- $NSGRule = New-AzNetworkSecurityRuleConfig `
- -Name MyNsgRuleRDP `
- -Protocol Tcp `
- -Direction Inbound `
- -Priority 1000 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 3389 -Access Allow
-
- # Create a network security group
- $NSG = New-AzNetworkSecurityGroup `
- -ResourceGroupName $RgName `
- -Location $Location `
- -Name MyNetworkSecurityGroup `
- -SecurityRules $NSGRule
- ```
-
-6. Define the primary IP configuration for the NIC. Change 10.0.0.4 to a valid address in the subnet you created, if you didn't use the value defined previously. Before assigning a static IP address, it's recommended that you first confirm it's not already in use. Enter the command `Test-AzPrivateIPAddressAvailability -IPAddress 10.0.0.4 -VirtualNetwork $VNet`. If the address is available, the output returns *True*. If it's not available, the output returns *False* and a list of addresses that are available.
-
- In the following commands, **Replace \<replace-with-your-unique-name> with the unique DNS name to use.** The name must be unique across all public IP addresses within an Azure region. This is an optional parameter. It can be removed if you only want to connect to the VM using the public IP address.
-
- ```powershell
-
- # Create a public IP address
- $PublicIP1 = New-AzPublicIpAddress `
- -Name "MyPublicIP1" `
- -ResourceGroupName $RgName `
- -Location $Location `
- -DomainNameLabel <replace-with-your-unique-name> `
- -AllocationMethod Static
-
- #Create an IP configuration with a static private IP address and assign the public IP address to it
- $IpConfigName1 = "IPConfig-1"
- $IpConfig1 = New-AzNetworkInterfaceIpConfig `
- -Name $IpConfigName1 `
- -Subnet $Subnet `
- -PrivateIpAddress 10.0.0.4 `
- -PublicIpAddress $PublicIP1 `
- -Primary
- ```
-
- When you assign multiple IP configurations to a NIC, one configuration must be assigned as the *-Primary*.
-
- > [!NOTE]
- > Public IP addresses have a nominal fee. To learn more about IP address pricing, read the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page. There is a limit to the number of public IP addresses that can be used in a subscription. To learn more about the limits, read the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) article.
-
-7. Define the secondary IP configurations for the NIC. You can add or remove configurations as necessary. Each IP configuration must have a private IP address assigned. Each configuration can optionally have one public IP address assigned.
-
- ```powershell
-
- # Create a public IP address
- $PublicIP2 = New-AzPublicIpAddress `
- -Name "MyPublicIP2" `
- -ResourceGroupName $RgName `
- -Location $Location `
- -AllocationMethod Static
-
- #Create an IP configuration with a static private IP address and assign the public IP address to it
- $IpConfigName2 = "IPConfig-2"
- $IpConfig2 = New-AzNetworkInterfaceIpConfig `
- -Name $IpConfigName2 `
- -Subnet $Subnet `
- -PrivateIpAddress 10.0.0.5 `
- -PublicIpAddress $PublicIP2
-
- $IpConfigName3 = "IpConfig-3"
- $IpConfig3 = New-AzNetworkInterfaceIpConfig `
- -Name $IPConfigName3 `
- -Subnet $Subnet `
- -PrivateIpAddress 10.0.0.6
- ```
-
-8. Create the NIC and associate the three IP configurations to it:
-
- ```powershell
- $NIC = New-AzNetworkInterface `
- -Name MyNIC `
- -ResourceGroupName $RgName `
- -Location $Location `
- -NetworkSecurityGroupId $NSG.Id `
- -IpConfiguration $IpConfig1,$IpConfig2,$IpConfig3
- ```
-
- >[!NOTE]
- >Though all configurations are assigned to one NIC in this article, you can assign multiple IP configurations to every NIC attached to the VM. To learn how to create a VM with multiple NICs, read the [Create a VM with multiple NICs](../../virtual-machines/windows/multiple-nics.md) article.
-
-9. Create the VM by entering the following commands:
-
- ```powershell
-
- # Define a credential object. When you run these commands, you're prompted to enter a username and password for the VM you're creating.
- $cred = Get-Credential
-
- # Create a virtual machine configuration
- $VmConfig = New-AzVMConfig `
- -VMName MyVM `
- -VMSize Standard_DS1_v2 | `
- Set-AzVMOperatingSystem -Windows `
- -ComputerName MyVM `
- -Credential $cred | `
- Set-AzVMSourceImage `
- -PublisherName MicrosoftWindowsServer `
- -Offer WindowsServer `
- -Skus 2016-Datacenter `
- -Version latest | `
- Add-AzVMNetworkInterface `
- -Id $NIC.Id
-
- # Create the VM
- New-AzVM `
- -ResourceGroupName $RgName `
- -Location $Location `
- -VM $VmConfig
- ```
-
-10. Add the private IP addresses to the VM operating system by completing the steps for your operating system in the [Add IP addresses to a VM operating system](#os-config) section of this article. Do not add the public IP addresses to the operating system.
-
-## <a name="add"></a>Add IP addresses to a VM
-
-You can add private and public IP addresses to the Azure network interface by completing the steps that follow. The examples in the following sections assume that you already have a VM with the three IP configurations described in the [scenario](#scenario) in this article, but it's not required that you do.
-
-1. Open a PowerShell command prompt and complete the remaining steps in this section within a single PowerShell session. If you don't already have PowerShell installed and configured, complete the steps in the [How to install and configure Azure PowerShell](/powershell/azure/) article.
-2. Change the "values" of the following $Variables to the name of the NIC you want to add IP address to and the resource group and location the NIC exists in:
-
- ```powershell
- $NicName = "MyNIC"
- $RgName = "MyResourceGroup"
- $Location = "westus"
- ```
-
- If you don't know the name of the NIC you want to change, enter the following commands, then change the values of the previous variables:
-
- ```powershell
- Get-AzNetworkInterface | Format-Table Name, ResourceGroupName, Location
- ```
-
-3. Create a variable and set it to the existing NIC by typing the following command:
-
- ```powershell
- $MyNIC = Get-AzNetworkInterface -Name $NicName -ResourceGroupName $RgName
- ```
-
-4. In the following commands, change *MyVNet* and *MySubnet* to the names of the VNet and subnet the NIC is connected to. Enter the commands to retrieve the VNet and subnet objects the NIC is connected to:
-
- ```powershell
- $MyVNet = Get-AzVirtualnetwork -Name MyVNet -ResourceGroupName $RgName
- $Subnet = $MyVnet.Subnets | Where-Object { $_.Name -eq "MySubnet" }
- ```
-
- If you don't know the VNet or subnet name the NIC is connected to, enter the following command:
-
- ```powershell
- $MyNIC.IpConfigurations
- ```
-
- In the output, look for text similar to the following example output:
-
- ```
- "Id": "/subscriptions/[Id]/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVNet/subnets/MySubnet"
- ```
+# Assign multiple IP addresses to virtual machines using Azure PowerShell
- In this output, *MyVnet* is the VNet and *MySubnet* is the subnet the NIC is connected to.
+An Azure Virtual Machine (VM) has one or more network interfaces (NICs) attached to it. Any NIC can have one or more static or dynamic public and private IP addresses assigned to it.
-5. Complete the steps in one of the following sections, based on your requirements:
+Assigning multiple IP addresses to a VM enables the following capabilities:
- **Add a private IP address**
+* Hosting multiple websites or services with different IP addresses and TLS/SSL certificates on a single server.
- To add a private IP address to a NIC, you must create an IP configuration. The following command creates a configuration with a static IP address of 10.0.0.7. When specifying a static IP address, it must be an unused address for the subnet. It's recommended that you first test the address to ensure it's available by entering the `Test-AzPrivateIPAddressAvailability -IPAddress 10.0.0.7 -VirtualNetwork $myVnet` command. If the IP address is available, the output returns *True*. If it's not available, the output returns *False*, and a list of addresses that are available.
+* Serving as a network virtual appliance, such as a firewall or load balancer.
- ```powershell
- Add-AzNetworkInterfaceIpConfig -Name IPConfig-4 -NetworkInterface `
- $MyNIC -Subnet $Subnet -PrivateIpAddress 10.0.0.7
- ```
+* Adding any of the private IP addresses for any of the NICs to an Azure Load Balancer back-end pool. In the past, only the primary IP address for the primary NIC could be added to a back-end pool. For more information about load balancing multiple IP configurations, see [Load balancing multiple IP configurations](../../load-balancer/load-balancer-multiple-ip.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
- Create as many configurations as you require, using unique configuration names and private IP addresses (for configurations with static IP addresses).
+Every NIC attached to a VM has one or more IP configurations associated with it. Each configuration is assigned one static or dynamic private IP address. Each configuration may also have one public IP address resource associated with it. To learn more about IP addresses in Azure, read the [IP addresses in Azure](../../virtual-network/ip-services/public-ip-addresses.md) article.
- Add the private IP address to the VM operating system by completing the steps for your operating system in the [Add IP addresses to a VM operating system](#os-config) section of this article.
+> [!NOTE]
+> All IP configurations on a single NIC must be associated to the same subnet. If multiple IPs on different subnets are desired, multiple NICs on a VM can be used. To learn more about multiple NICs on a VM in Azure, read the [Create VM with Multiple NICs](../../virtual-machines/windows/multiple-nics.md) article.
- **Add a public IP address**
+There's a limit to how many private IP addresses can be assigned to a NIC. There's also a limit to how many public IP addresses can be used in an Azure subscription. See the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) article for details.
- A public IP address is added by associating a public IP address resource to either a new IP configuration or an existing IP configuration. Complete the steps in one of the sections that follow, as you require.
+This article explains how to add multiple IP addresses to a virtual machine using Azure PowerShell.
- > [!NOTE]
- > Public IP addresses have a nominal fee. To learn more about IP address pricing, read the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page. There is a limit to the number of public IP addresses that can be used in a subscription. To learn more about the limits, read the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) article.
- >
+## Prerequisites
- **Associate the public IP address resource to a new IP configuration**
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Whenever you add a public IP address in a new IP configuration, you must also add a private IP address, because all IP configurations must have a private IP address. You can either add an existing public IP address resource, or create a new one. To create a new one, enter the following command:
+- Azure PowerShell installed locally or Azure Cloud Shell.
- ```powershell
- $myPublicIp3 = New-AzPublicIpAddress `
- -Name "myPublicIp3" `
- -ResourceGroupName $RgName `
- -Location $Location `
- -AllocationMethod Static
- ```
+- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
- To create a new IP configuration with a static private IP address and the associated *myPublicIp3* public IP address resource, enter the following command:
+- Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name Az.Network`. If the module requires an update, use the command `Update-Module -Name Az.Network`.
- ```powershell
- Add-AzNetworkInterfaceIpConfig `
- -Name IPConfig-4 `
- -NetworkInterface $myNIC `
- -Subnet $Subnet `
- -PrivateIpAddress 10.0.0.7 `
- -PublicIpAddress $myPublicIp3
- ```
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
- **Associate the public IP address resource to an existing IP configuration**
+> [!NOTE]
+> Though the steps in this article assigns all IP configurations to a single NIC, you can also assign multiple IP configurations to any NIC in a multi-NIC VM. To learn how to create a VM with multiple NICs, see [Create a VM with multiple NICs](../../virtual-machines/windows/multiple-nics.md).
- A public IP address resource can only be associated to an IP configuration that doesn't already have one associated. You can determine whether an IP configuration has an associated public IP address by entering the following command:
+ :::image type="content" source="./media/virtual-network-multiple-ip-addresses-portal/multiple-ipconfigs.png" alt-text="Diagram of network configuration resources created in How-to article.":::
- ```powershell
- $MyNIC.IpConfigurations | Format-Table Name, PrivateIPAddress, PublicIPAddress, Primary
- ```
+ *Figure: Diagram of network configuration resources created in How-to article.*
- You see output similar to the following:
+## Create a resource group
- ```
- Name PrivateIpAddress PublicIpAddress Primary
+An Azure resource group is a logical container into which Azure resources are deployed and managed.
- IPConfig-1 10.0.0.4 Microsoft.Azure.Commands.Network.Models.PSPublicIpAddress True
- IPConfig-2 10.0.0.5 Microsoft.Azure.Commands.Network.Models.PSPublicIpAddress False
- IpConfig-3 10.0.0.6 False
- ```
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) named **myResourceGroup** in the **eastus2** location.
- Since the **PublicIpAddress** column for *IpConfig-3* is blank, no public IP address resource is currently associated to it. You can add an existing public IP address resource to IpConfig-3, or enter the following command to create one:
+```azurepowershell-interactive
+$rg = @{
+ Name = 'myResourceGroup'
+ Location = 'eastus2'
+}
+New-AzResourceGroup @rg
+```
- ```powershell
- $MyPublicIp3 = New-AzPublicIpAddress `
- -Name "MyPublicIp3" `
- -ResourceGroupName $RgName `
- -Location $Location -AllocationMethod Static
- ```
+## Create a virtual network
- Enter the following command to associate the public IP address resource to the existing IP configuration named *IpConfig-3*:
+In this section, you'll create a virtual network for the virtual machine.
- ```powershell
- Set-AzNetworkInterfaceIpConfig `
- -Name IpConfig-3 `
- -NetworkInterface $mynic `
- -Subnet $Subnet `
- -PublicIpAddress $myPublicIp3
- ```
+Use [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) and [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a virtual network.
-6. Set the NIC with the new IP configuration by entering the following command:
+```azurepowershell-interactive
+## Create backend subnet config ##
+$subnet = @{
+ Name = 'myBackendSubnet'
+ AddressPrefix = '10.1.0.0/24'
+}
+$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
- ```powershell
- Set-AzNetworkInterface -NetworkInterface $MyNIC
- ```
+## Create the virtual network ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ AddressPrefix = '10.1.0.0/16'
+ Subnet = $subnetConfig
+}
+New-AzVirtualNetwork @net
-7. View the private IP addresses and the public IP address resources assigned to the NIC by entering the following command:
+```
- ```powershell
- $MyNIC.IpConfigurations | Format-Table Name, PrivateIPAddress, PublicIPAddress, Primary
- ```
+## Create primary public IP address
+
+Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a primary public IP address.
-8. Add the private IP address to the VM operating system by completing the steps for your operating system in the [Add IP addresses to a VM operating system](#os-config) section of this article. Do not add the public IP address to the operating system.
+```azurepowershell-interactive
+$ip1 = @{
+ Name = 'myPublicIP-1'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ IpAddressVersion = 'IPv4'
+ Zone = 1,2,3
+}
+New-AzPublicIpAddress @ip1
+```
+
+## Create a network security group
+
+In this section, you'll create a network security group for the virtual machine and virtual network. You'll create a rule to allow connections to the virtual machine on port 22 for SSH.
+
+Use [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) and [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) to create the network security group and rules.
+
+```azurepowershell-interactive
+## Create rule for network security group and place in variable. ##
+$nsgrule1 = @{
+ Name = 'myNSGRuleSSH'
+ Description = 'Allow SSH'
+ Protocol = '*'
+ SourcePortRange = '*'
+ DestinationPortRange = '22'
+ SourceAddressPrefix = 'Internet'
+ DestinationAddressPrefix = '*'
+ Access = 'Allow'
+ Priority = '200'
+ Direction = 'Inbound'
+}
+$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule1
+
+## Create network security group ##
+$nsg = @{
+ Name = 'myNSG'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ SecurityRules = $rule1
+}
+New-AzNetworkSecurityGroup @nsg
+```
+
+## Create network interface
+
+You'll use [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) and [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
+
+```azurepowershell-interactive
+## Place the virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place the network security group into a variable. ##
+$ns = @{
+ Name = 'myNSG'
+ ResourceGroupName = 'myResourceGroup'
+}
+$nsg = Get-AzNetworkSecurityGroup @ns
+
+## Place the primary public IP address into a variable. ##
+$pub1 = @{
+ Name = 'myPublicIP-1'
+ ResourceGroupName = 'myResourceGroup'
+}
+$pubIP1 = Get-AzPublicIPAddress @pub1
+
+## Create primary configuration for NIC. ##
+$IP1 = @{
+ Name = 'ipconfig1'
+ Subnet = $vnet.Subnets[0]
+ PrivateIpAddressVersion = 'IPv4'
+ PublicIPAddress = $pubIP1
+}
+$IP1Config = New-AzNetworkInterfaceIpConfig @IP1 -Primary
+
+## Create tertiary configuration for NIC. ##
+$IP3 = @{
+ Name = 'ipconfig3'
+ Subnet = $vnet.Subnets[0]
+ PrivateIpAddressVersion = 'IPv4'
+ PrivateIpAddress = '10.1.0.6'
+}
+$IP3Config = New-AzNetworkInterfaceIpConfig @IP3
+
+## Command to create network interface for VM ##
+$nic = @{
+ Name = 'myNIC1'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ NetworkSecurityGroup = $nsg
+ IpConfiguration = $IP1Config,$IP3Config
+}
+New-AzNetworkInterface @nic
+```
+
+> [!NOTE]
+> When adding a static IP address, you must specify an unused, valid address on the subnet the NIC is connected to.
+
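+Before assigning a static address such as `10.1.0.6`, you can confirm it's unused on the subnet; this check uses the `$vnet` variable from the previous step:
+
+```azurepowershell-interactive
+## Returns True if the address is available; otherwise False and a list of free addresses. ##
+Test-AzPrivateIPAddressAvailability -IPAddress '10.1.0.6' -VirtualNetwork $vnet
+```
+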
+## Create virtual machine
+
+Use the following commands to create the virtual machine:
+
+* [New-AzVM](/powershell/module/az.compute/new-azvm)
+
+* [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+
+* [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+
+* [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+
+* [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+
+```azurepowershell-interactive
+$cred = Get-Credential
+
+## Place network interface into a variable. ##
+$nic = @{
+ Name = 'myNIC1'
+ ResourceGroupName = 'myResourceGroup'
+}
+$nicVM = Get-AzNetworkInterface @nic
+
+## Create a virtual machine configuration for VMs ##
+$vmsz = @{
+ VMName = 'myVM'
+ VMSize = 'Standard_DS1_v2'
+}
+$vmos = @{
+ ComputerName = 'myVM'
+ Credential = $cred
+}
+$vmimage = @{
+ PublisherName = 'Debian'
+ Offer = 'debian-11'
+ Skus = '11'
+ Version = 'latest'
+}
+$vmConfig = New-AzVMConfig @vmsz `
+ | Set-AzVMOperatingSystem @vmos -Linux `
+ | Set-AzVMSourceImage @vmimage `
+ | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+## Create the virtual machine for VMs ##
+$vm = @{
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ VM = $vmConfig
+ SshKeyName = 'mySSHKey'
+ }
+New-AzVM @vm -GenerateSshKey
+```
+
+## Add secondary private and public IP address
+
+Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a secondary public IP address.
+
+```azurepowershell-interactive
+$ip2 = @{
+ Name = 'myPublicIP-2'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ IpAddressVersion = 'IPv4'
+ Zone = 1,2,3
+}
+New-AzPublicIpAddress @ip2
+```
+
+Use [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the secondary IP configuration for the virtual machine.
+
+```azurepowershell-interactive
+## Place the virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place your virtual network subnet into a variable. ##
+$sub = @{
+ Name = 'myBackendSubnet'
+ VirtualNetwork = $vnet
+}
+$subnet = Get-AzVirtualNetworkSubnetConfig @sub
+
+## Place the secondary public IP address you created previously into a variable. ##
+$pip = @{
+ Name = 'myPublicIP-2'
+ ResourceGroupName = 'myResourceGroup'
+}
+$pubIP2 = Get-AzPublicIPAddress @pip
+
+## Place the network interface into a variable. ##
+$net = @{
+ Name = 'myNIC1'
+ ResourceGroupName = 'myResourceGroup'
+}
+$nic = Get-AzNetworkInterface @net
+
+## Create secondary configuration for NIC. ##
+$IPc2 = @{
+ Name = 'ipconfig2'
+ Subnet = $vnet.Subnets[0]
+ PrivateIpAddressVersion = 'IPv4'
+ PrivateIpAddress = '10.1.0.5'
+ PublicIPAddress = $pubIP2
+}
+$IP2Config = New-AzNetworkInterfaceIpConfig @IPc2
+
+## Add the IP configuration to the network interface. ##
+$nic.IpConfigurations.Add($IP2Config)
+
+## Save the configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+```
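+
+To verify the configuration, you can list the IP configurations on the network interface; for example:
+
+```azurepowershell-interactive
+## Display the IP configurations assigned to the NIC. ##
+$nic.IpConfigurations | Format-Table Name, PrivateIpAddress, Primary
+```
+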
[!INCLUDE [virtual-network-multiple-ip-addresses-os-config.md](../../../includes/virtual-network-multiple-ip-addresses-os-config.md)]
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
The version of the DRS that you use also determines which content types are supp
### DRS 2.0
+DRS 2.0 rules offer better protection than earlier versions of the DRS. DRS 2.0 also supports transformations beyond just URL decoding.
+ DRS 2.0 includes 17 rule groups, as shown in the following table. Each group contains multiple rules, and you can disable individual rules as well as entire rule groups.
> [!NOTE]
web-application-firewall Waf Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-geo-filtering.md
By default, Azure Front Door will respond to all user requests regardless of the location where the request is coming from. In some scenarios, you may want to restrict the access to your web application by countries/regions. The Web application firewall (WAF) service in Front Door enables you to define a policy using custom access rules for a specific path on your endpoint to either allow or block access from specified countries/regions.
-A WAF policy contains a set of custom rules. The rule consists of match conditions, an action, and a priority. In a match condition, you define a match variable, operator, and match value. For a geo filtering rule, a match variable is REMOTE_ADDR, the operator is GeoMatch, and the value is a two letter country/region code of interest. "ZZ" country code or "Unknown" country captures IP addresses that are not yet mapped to a country in our dataset. You may add ZZ to your match condition to avoid false positives. You can combine a GeoMatch condition and a REQUEST_URI string match condition to create a path-based geo-filtering rule.
+A WAF policy contains a set of custom rules. A rule consists of match conditions, an action, and a priority. In a match condition, you define a match variable, operator, and match value. For a geo-filtering rule, the match variable is either RemoteAddr or SocketAddr. RemoteAddr is the original client IP address, which is usually sent via the X-Forwarded-For request header. SocketAddr is the source IP address the WAF sees. If your user is behind a proxy, SocketAddr is often the proxy server address.
+The operator for a geo-filtering rule is GeoMatch, and the value is a two-letter country/region code of interest. The "ZZ" country code (or "Unknown" country) captures IP addresses that aren't yet mapped to a country in our dataset. You may add ZZ to your match condition to avoid false positives. You can combine a GeoMatch condition and a REQUEST_URI string match condition to create a path-based geo-filtering rule, as shown in the sketch that follows.
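+
+For example, a sketch (with hypothetical names) of a path-based rule that blocks non-US traffic to an `/api/` path:
+
+```azurepowershell-interactive
+## Match requests whose URI begins with /api/. ##
+$uriCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestUri `
+    -OperatorProperty BeginsWith -MatchValue "/api/"
+
+## Match source addresses that are not in the US. ##
+$geoCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable SocketAddr `
+    -OperatorProperty GeoMatch -NegateCondition $true -MatchValue "US"
+
+## Combine both conditions in a single custom block rule. ##
+$blockRule = New-AzFrontDoorWafCustomRuleObject -Name "BlockNonUSApi" -RuleType MatchRule `
+    -MatchCondition $geoCondition,$uriCondition -Action Block -Priority 1
+```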
You can configure a geo-filtering policy for your Front Door by using [Azure PowerShell](../../frontdoor/front-door-tutorial-geo-filtering.md) or by using a [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering).
web-application-firewall Waf Front Door Tutorial Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-tutorial-geo-filtering.md
Two letter country/region codes to country/region mapping are provided in [What
```azurepowershell-interactive
$nonUSGeoMatchCondition = New-AzFrontDoorWafMatchConditionObject `
--MatchVariable RemoteAddr `
+-MatchVariable SocketAddr `
-OperatorProperty GeoMatch `
-NegateCondition $true `
-MatchValue "US"