Updates from: 09/03/2022 01:07:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
Previously updated : 08/17/2022 Last updated : 09/01/2022
To complete this article, you need the following resources:
Azure AD DS requires a service principal for authentication and communication, and an Azure AD group that defines which users have administrative permissions in the managed domain.
-First, create an Azure AD service principal by using a specific application ID named *Domain Controller Services*. The ID value is *2565bd9d-da50-47d4-8b85-4c97f669dc36*. Don't change this application ID.
+First, create an Azure AD service principal by using a specific application ID named *Domain Controller Services*. The ID value is *2565bd9d-da50-47d4-8b85-4c97f669dc36* for global Azure and *6ba9a5d4-8456-4118-b521-9c5ca10cdf84* for other Azure clouds. Don't change this application ID.
Create an Azure AD service principal using the [New-AzureADServicePrincipal][New-AzureADServicePrincipal] cmdlet:
When the Azure portal shows that the managed domain has finished provisioning, t
## Complete PowerShell script
-The following complete PowerShell script combines all of the tasks shown in this article. Copy the script and save it to a file with a `.ps1` extension. Run the script in a local PowerShell console or the [Azure Cloud Shell][cloud-shell].
+The following complete PowerShell script combines all of the tasks shown in this article. Copy the script and save it to a file with a `.ps1` extension. For Azure Global, use AppId value *2565bd9d-da50-47d4-8b85-4c97f669dc36*. For other Azure clouds, use AppId value *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*. Run the script in a local PowerShell console or the [Azure Cloud Shell][cloud-shell].
> [!NOTE]
> To enable Azure AD DS, you must be a global administrator for the Azure AD tenant. You also need at least *Contributor* privileges in the Azure subscription.
Connect-AzureAD
Connect-AzAccount
# Create the service principal for Azure AD Domain Services.
-New-AzureADServicePrincipal -AppId "6ba9a5d4-8456-4118-b521-9c5ca10cdf84"
+New-AzureADServicePrincipal -AppId "2565bd9d-da50-47d4-8b85-4c97f669dc36"
# First, retrieve the object ID of the 'AAD DC Administrators' group.
$GroupObjectId = Get-AzureADGroup `
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
description: Learn how to use additional context in MFA notifications
Previously updated : 08/18/2022 Last updated : 09/01/2022 # Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use additional context in Microsoft Authenticator app notifications (Preview) - Authentication Methods Policy
+# How to use additional context in Microsoft Authenticator notifications (Preview) - Authentication methods policy
-This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator push and passwordless notifications. The schema for the API to enable application name and geographic location is currently being updated. **While the API is updated over the next two weeks, you should only use the Azure AD portal to enable application name and geographic location.**
+This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator push and passwordless notifications.
## Prerequisites
The additional context can be combined with [number matching](how-to-mfa-number-
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location-with-number-match.png" alt-text="Screenshot of additional context with number matching in the MFA push notification.":::
-## Enable additional context
+## Enable additional context using Graph API
-To enable application name or geographic location, complete the following steps:
+>[!NOTE]
+>In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+
+You can enable and disable application name and geographic location separately. Under featureSettings, use the following mapping:
+
+- Application name: displayAppInformationRequiredState
+- Geographic location: displayLocationInformationRequiredState
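+For scripts that build request bodies, the mapping above can be kept as a small lookup table. This is a minimal sketch; the dictionary keys are illustrative names, not Graph identifiers:

```python
# Illustrative lookup from additional-context feature to the
# featureSettings property that controls it, as listed above.
FEATURE_PROPERTIES = {
    "application_name": "displayAppInformationRequiredState",
    "geographic_location": "displayLocationInformationRequiredState",
}

def property_for(feature: str) -> str:
    """Return the featureSettings property name for a feature."""
    return FEATURE_PROPERTIES[feature]
```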
+
+Identify your single target group for each feature. Then use the following API endpoint to change the displayAppInformationRequiredState or displayLocationInformationRequiredState properties under featureSettings to **enabled** and include or exclude the groups you want:
+
+`https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator`
+
+>[!NOTE]
+>For Passwordless phone sign-in, the Authenticator app does not retrieve policy information just in time for each sign-in request. Instead, the Authenticator app does a best effort retrieval of the policy once every 7 days. We understand this limitation is less than ideal and are working to optimize the behavior. In the meantime, if you want to force a policy update to test using additional context with Passwordless phone sign-in, you can remove and re-add the account in the Authenticator app.
+
+### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|||-|
+| id | String | The authentication method policy identifier. |
+| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
+
+**RELATIONSHIPS**
+
+| Relationship | Type | Description |
+|--||-|
+| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of users or groups who are enabled to use the authentication method. |
+| featureSettings | [microsoftAuthenticatorFeatureSettings](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of Microsoft Authenticator features. |
+
+### MicrosoftAuthenticator includeTarget properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
+| id | String | Object ID of an Azure AD user or group. |
+| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.|
+
+### MicrosoftAuthenticator featureSettings properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| numberMatchingRequiredState | authenticationMethodFeatureConfiguration | Require number matching for MFA notifications. Value is ignored for phone sign-in notifications. |
+| displayAppInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown application name in Microsoft Authenticator notification. |
+| displayLocationInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown geographic location context in Microsoft Authenticator notification. |
+
+### Authentication Method Feature Configuration properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group for each feature.|
+| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can only include one group for each feature.|
+| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+
+### Feature Target properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| id | String | ID of the entity targeted. |
+| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. |
+
+### Example of how to enable additional context for all users
+
+In **featureSettings**, change **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled**.
+
+The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you do not want to allow passwordless, use **push**.
+
+You might need to PATCH the entire schema to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example shows how to update **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+//Retrieve your existing policy via a GET.
+//Use the response body as the starting point for the request body, then update the relevant fields as shown below.
+//Change the query to PATCH and run the query.
+
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "all_users"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "all_users"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
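+The GET-then-PATCH flow above can be sketched in Python. This is a minimal illustration, not an official client: the `update_additional_context` helper is hypothetical, it only edits the policy dictionary returned by the GET, and sending the PATCH request (with an access token) is left out.

```python
import copy

def update_additional_context(policy: dict, group_id: str,
                              state: str = "enabled") -> dict:
    """Return a copy of the Microsoft Authenticator policy with the
    application-name and geographic-location features set for one group."""
    updated = copy.deepcopy(policy)
    features = updated.setdefault("featureSettings", {})
    for prop in ("displayAppInformationRequiredState",
                 "displayLocationInformationRequiredState"):
        features[prop] = {
            "state": state,
            "includeTarget": {"targetType": "group", "id": group_id},
            "excludeTarget": {"targetType": "group",
                              "id": "00000000-0000-0000-0000-000000000000"},
        }
    return updated

# Example: enable both features for all users, as in the JSON above.
policy = {"id": "MicrosoftAuthenticator", "state": "enabled",
          "featureSettings": {}}
body = update_additional_context(policy, "all_users")
```

The resulting `body` would then be serialized as the PATCH request body for the endpoint shown earlier.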
+
+
+### Example of how to enable application name and geographic location for separate groups
+
+In **featureSettings**, change **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled**.
+Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+
+You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "a229e768-961a-4401-aadb-11d836885c11"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+To verify the update, run the GET request again and check the ObjectID:
+
+```http
+GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
+
+### Example of how to disable application name and only enable geographic location
+
+In **featureSettings**, change the state of **displayAppInformationRequiredState** to **default** or **disabled**, and the state of **displayLocationInformationRequiredState** to **enabled**.
+Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+
+You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "disabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "a229e768-961a-4401-aadb-11d836885c11"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+### Example of how to exclude a group from application name and geographic location
+
+In **featureSettings**, change the states of **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled**.
+Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+
+In addition, for each feature, change the **id** of the **excludeTarget** to the ObjectID of the group from the Azure AD portal. This excludes that group from seeing the application name or geographic location.
+
+You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "5af8a0da-5420-4d69-bf3c-8b129f3449ce"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "a229e768-961a-4401-aadb-11d836885c11"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "b6bab067-5f28-4dac-ab30-7169311d69e8"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+### Example of removing the excluded group
+
+In **featureSettings**, change the state of **displayAppInformationRequiredState** from **default** to **enabled**.
+You need to change the **id** of the **excludeTarget** to `00000000-0000-0000-0000-000000000000`.
+
+You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+    "displayAppInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+        "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
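+Clearing the excluded group, as in the example above, amounts to resetting the excludeTarget **id** to the all-zeros GUID. A minimal sketch of that step; the `clear_excluded_group` helper is hypothetical and operates on an already-parsed policy dictionary:

```python
# The all-zeros GUID used by the policy to mean "no excluded group".
EMPTY_TARGET_ID = "00000000-0000-0000-0000-000000000000"

def clear_excluded_group(policy: dict, feature: str) -> dict:
    """Reset the excludeTarget of one featureSettings entry in place."""
    policy["featureSettings"][feature]["excludeTarget"] = {
        "targetType": "group",
        "id": EMPTY_TARGET_ID,
    }
    return policy

# Example: remove a previously excluded group from application name.
policy = {
    "featureSettings": {
        "displayAppInformationRequiredState": {
            "state": "enabled",
            "excludeTarget": {"targetType": "group",
                              "id": "5af8a0da-5420-4d69-bf3c-8b129f3449ce"},
        }
    }
}
clear_excluded_group(policy, "displayAppInformationRequiredState")
```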
+
+## Turn off additional context
+
+To turn off additional context, PATCH **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **enabled** to **disabled** or **default**. You can also turn off just one of the features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "disabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "disabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "a229e768-961a-4401-aadb-11d836885c11"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+## Enable additional context in the portal
+
+To enable application name or geographic location in the Azure AD portal, complete the following steps:
1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Any**.
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 08/08/2022 Last updated : 09/01/2022
# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
-This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. The schema for the API to enable number match is currently being updated. **While the API is updated over the next two weeks, you should only use the Azure AD portal to enable number match.**
+This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. Number matching can be enabled by using the Azure portal or Microsoft Graph API.
>[!NOTE]
>Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator that will be enabled by default for all tenants a few months after general availability (GA).<br>
Your organization will need to enable Authenticator (traditional second factor)
## Number matching
-<!check below with Mayur. The bit about the policy came from the number match FAQ at the end.>
- Number matching can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication Method Policy. Number matching is available for the following scenarios. When enabled, all scenarios support number matching.
During self-service password reset, the Authenticator app notification will show
### Combined registration
-When a user is goes through combined registration to set up the Authenticator app, the user is asked to approve a notification as part of adding the account. For users who are enabled for number matching, this notification will show a number that they need to type in their Authenticator app notification.
+When a user goes through combined registration to set up the Authenticator app, the user is asked to approve a notification as part of adding the account. For users who are enabled for number matching, this notification will show a number that they need to type in their Authenticator app notification.
### AD FS adapter
To create the registry key that overrides push notifications:
## Enable number matching
-To enable number matching, complete the following steps:
+
+>[!NOTE]
+>In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+
+Identify your single target group for the schema configuration. Then use the following API endpoint to change the numberMatchingRequiredState property under featureSettings to **enabled** and include or exclude groups:
+
+```http
+https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
+
+### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|||-|
+| id | String | The authentication method policy identifier. |
+| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
+
+**RELATIONSHIPS**
+
+| Relationship | Type | Description |
+|--||-|
+| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget?view=graph-rest-beta&preserve-view=true) collection | A collection of users or groups who are enabled to use the authentication method |
+| featureSettings | [microsoftAuthenticatorFeatureSettings](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of Microsoft Authenticator features. |
+
+### MicrosoftAuthenticator includeTarget properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
+| id | String | Object ID of an Azure AD user or group. |
+| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.|
+
+### MicrosoftAuthenticator featureSettings properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| numberMatchingRequiredState | authenticationMethodFeatureConfiguration | Require number matching for MFA notifications. Value is ignored for phone sign-in notifications. |
+| displayAppInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown application name in Microsoft Authenticator notification. |
+| displayLocationInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown geographic location context in Microsoft Authenticator notification. |
+
+### Authentication Method Feature Configuration properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can exclude only one group for number matching. |
+| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can set only one group for number matching. |
+| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+
+### Feature Target properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| id | String | ID of the entity targeted. |
+| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. |
+
+>[!NOTE]
+>Number matching can be enabled only for a single group.
+
+### Example of how to enable number matching for all users
+
+In **featureSettings**, change the **numberMatchingRequiredState** from **default** to **enabled**.
+
+Note that the value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we will use **any**, but if you do not want to allow passwordless, use **push**.
+
+>[!NOTE]
+>For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
+
+You might need to patch the entire schema to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **numberMatchingRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
+
+```json
+//Retrieve your existing policy via a GET.
+//Use the response body as the starting point for the request body, then update the relevant fields as shown below.
+//Change the query to PATCH and run the query.
+
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "numberMatchingRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "all_users"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+
+```
+
+To confirm the update has applied, run the following GET request:
+
+```http
+GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
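+To script that verification, the GET response can be checked for the expected state and target group once it has been parsed. A small sketch under that assumption; the `number_matching_enabled_for` function name is illustrative:

```python
def number_matching_enabled_for(policy: dict, group_id: str) -> bool:
    """Check whether numberMatchingRequiredState is enabled and
    targeted at the given group in a parsed policy response."""
    setting = (policy.get("featureSettings", {})
                     .get("numberMatchingRequiredState", {}))
    return (setting.get("state") == "enabled"
            and setting.get("includeTarget", {}).get("id") == group_id)

# A fragment matching the request body shown above.
example = {
    "featureSettings": {
        "numberMatchingRequiredState": {
            "state": "enabled",
            "includeTarget": {"targetType": "group", "id": "all_users"},
        }
    }
}
```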
+
+### Example of how to enable number matching for a single group
+
+In **featureSettings**, change the **numberMatchingRequiredState** value from **default** to **enabled**.
+Inside the **includeTarget**, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+
+You need to PATCH the entire configuration to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "numberMatchingRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+To verify, run the GET request again and confirm that the group ObjectID is returned:
+
+```http
+GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
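The GET-first, update-only-the-relevant-fields, then-PATCH pattern recommended above can be sketched offline like this (`target_number_matching_at_group` is a hypothetical helper; `current` stands in for the body returned by the GET):

```python
import copy

def target_number_matching_at_group(current: dict, group_object_id: str) -> dict:
    """Return a PATCH body based on the GET response, changing only the
    number matching include target to the given group ObjectID."""
    updated = copy.deepcopy(current)  # never mutate the fetched configuration
    nm = updated["featureSettings"]["numberMatchingRequiredState"]
    nm["state"] = "enabled"
    nm["includeTarget"]["id"] = group_object_id  # was "all_users"
    return updated

# `current` mimics the relevant part of a GET response.
current = {
    "id": "MicrosoftAuthenticator",
    "state": "enabled",
    "featureSettings": {
        "numberMatchingRequiredState": {
            "state": "default",
            "includeTarget": {"targetType": "group", "id": "all_users"},
            "excludeTarget": {
                "targetType": "group",
                "id": "00000000-0000-0000-0000-000000000000",
            },
        }
    },
}

patch_body = target_number_matching_at_group(current, "1ca44590-e896-4dbe-98ed-b140b1e7a53a")
```

The deep copy keeps the original GET response intact, so it can be compared against the PATCH body before sending.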
+
+### Example of removing the excluded group from number matching
+
+In **featureSettings**, change the **numberMatchingRequiredState** value from **default** to **enabled**.
+Change the **id** of the **excludeTarget** to `00000000-0000-0000-0000-000000000000`.
+
+You need to PATCH the entire configuration to prevent overwriting any previous configuration. We recommend that you do a GET first, update only the relevant fields, and then PATCH. The following example shows only the update to **numberMatchingRequiredState**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will be excluded from the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "numberMatchingRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+      "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+## Turn off number matching
+
+To turn number matching off, change the **numberMatchingRequiredState** value from **enabled** to **disabled** or **default**, and PATCH the entire configuration.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "numberMatchingRequiredState": {
+ "state": "default",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+      "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
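As noted above, turning the feature off means moving the **numberMatchingRequiredState** state from **enabled** back to **disabled** or **default** while leaving the include and exclude targets alone. A small hypothetical helper makes the allowed values explicit:

```python
# Allowed values for featureSettings.numberMatchingRequiredState.state,
# per the examples in this article.
ALLOWED_STATES = {"enabled", "disabled", "default"}

def set_number_matching_state(config: dict, state: str) -> dict:
    """Set featureSettings.numberMatchingRequiredState.state, keeping the
    include/exclude targets unchanged."""
    if state not in ALLOWED_STATES:
        raise ValueError(f"state must be one of {sorted(ALLOWED_STATES)}")
    config["featureSettings"]["numberMatchingRequiredState"]["state"] = state
    return config

config = {
    "featureSettings": {
        "numberMatchingRequiredState": {
            "state": "enabled",
            "includeTarget": {"targetType": "group", "id": "all_users"},
            "excludeTarget": {"targetType": "group", "id": "00000000-0000-0000-0000-000000000000"},
        }
    }
}

set_number_matching_state(config, "default")  # number matching is now off
```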
+
+## Enable number matching in the portal
+
+To enable number matching in the Azure AD portal, complete the following steps:
1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Push**.
## Next steps
-[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory Azuread Joined Devices Frx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/azuread-joined-devices-frx.md
Title: Join a new Windows 10 device with Azure AD during a first run | Microsoft Docs
-description: How users can set up Azure AD Join during the out of box experience.
+ Title: Join a new Windows 10 device with Azure AD during the out of box experience
+description: How users can set up Azure AD Join during OOBE.
Previously updated : 06/28/2019 Last updated : 08/31/2022
-#Customer intent: As a user, I want to join my corporate device during a first-run so that I can access my corporate resources
-
-# Tutorial: Join a new Windows 10 device with Azure AD during a first run
-
-With device management in Azure Active Directory (Azure AD), you can ensure that your users are accessing your resources from devices that meet your standards for security and compliance. For more information, see the [introduction to device management in Azure Active Directory](overview.md).
+# Azure AD join a new Windows device during the out of box experience
-With Windows 10, You can join a new device to Azure AD during the first-run out-of-box experience (OOBE).
-This enables you to distribute shrink-wrapped devices to your employees or students.
+Starting in Windows 10, users can join new Windows devices to Azure AD during the first-run out-of-box experience (OOBE). This functionality enables you to distribute shrink-wrapped devices to your employees or students.
-If you have either Windows 10 Professional or Windows 10 Enterprise installed on a device, the experience defaults to the setup process for company-owned devices.
-
-In the Windows *out-of-box experience*, joining an on-premises Active Directory (AD) domain is not supported. If you plan to join a computer to an AD domain, during setup, you should select the link **Set up Windows with a local account**. You can then join the domain from the settings on your computer.
-
-In this tutorial, you learn how to join a device to Azure AD during FRX:
- > [!div class="checklist"]
-> * Prerequisites
-> * Joining a device
-> * Verification
+This functionality pairs well with mobile device management platforms like [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) and tools like [Windows Autopilot](/mem/autopilot/windows-autopilot) to ensure devices are configured according to your standards.
## Prerequisites
-To join a Windows 10 device, the device registration service must be configured to enable you to register devices. In addition to having permission to joining devices in your Azure AD tenant, you must have fewer devices registered than the configured maximum. For more information, see [configure device settings](device-management-azure-portal.md#configure-device-settings).
-
-In addition, if your tenant is federated, your Identity provider MUST support WS-Fed and WS-Trust username/password endpoint. This can be version 1.3 or 2005. This protocol support is required to both join the device to Azure AD and sign in to the device with a password.
-
-## Joining a device
-
-**To join a Windows 10 device to Azure AD during FRX:**
-
-1. When you turn on your new device and start the setup process, you should see the **Getting Ready** message. Follow the prompts to set up your device.
-1. Start by customizing your region and language. Then accept the Microsoft Software License Terms.
-
- <!--![Customize for your region](./media/azuread-joined-devices-frx/01.png)-->
+To Azure AD join a Windows device, the device registration service must be configured to enable you to register devices. For more information about prerequisites, see the article [How to: Plan your Azure AD join implementation](azureadjoin-plan.md).
-1. Select the network you want to use for connecting to the Internet.
-1. Click **This device belongs to my organization**.
+> [!TIP]
+> Windows Home Editions do not support Azure AD join. These editions can still access many of the benefits by using [Azure AD registration](concept-azure-ad-register.md).
+>
+> For information about how to complete Azure AD registration on a Windows device, see the support article [Register your personal device on your work or school network](https://support.microsoft.com/account-billing/register-your-personal-device-on-your-work-or-school-network-8803dd61-a613-45e3-ae6c-bd1ab25bf8a8).
- <!--![Who owns this PC screen](./media/azuread-joined-devices-frx/02.png)-->
+## Join a new Windows 11 device to Azure AD
-1. Enter the credentials that were provided to you by your organization, and then click **Sign in**.
+Your device may restart several times as part of the setup process. Your device must be connected to the Internet to complete Azure AD join.
- <!--![Sign-in screen](./media/azuread-joined-devices-frx/03.png)-->
+1. Turn on your new device and start the setup process. Follow the prompts to set up your device.
+1. When prompted **How would you like to set up this device?**, select **Set up for work or school**.
+ :::image type="content" source="media/azuread-joined-devices-frx/windows-11-first-run-experience-work-or-school.png" alt-text="Screenshot of Windows 11 out-of-box experience showing the option to set up for work or school.":::
+1. On the **Let's set things up for your work or school** page, provide the credentials that your organization provided.
+ 1. Optionally you can choose to **Sign in with a security key** if one was provided to you.
+ 1. If your organization requires it, you may be prompted to perform multifactor authentication.
+ :::image type="content" source="media/azuread-joined-devices-frx/windows-11-first-run-experience-device-sign-in-info.png" alt-text="Screenshot of Windows 11 out-of-box experience showing the sign-in experience.":::
+1. Continue to follow the prompts to set up your device.
+1. Azure AD checks if an enrollment in mobile device management is required and starts the process.
+    1. Windows registers the device in the organization's directory in Azure AD and enrolls it in mobile device management, if applicable.
+1. If you sign in with a managed user account, Windows takes you to the desktop through the automatic sign-in process. Federated users are directed to the Windows sign-in screen to enter their credentials.
+ :::image type="content" source="media/azuread-joined-devices-frx/windows-11-first-run-experience-complete-automatic-sign-in-desktop.png" alt-text="Screenshot of Windows 11 at the desktop after first run experience Azure AD joined.":::
-1. Your device locates a matching tenant in Azure AD. If you are in a federated domain, you are redirected to your on-premises Secure Token Service (STS) server, for example, Active Directory Federation Services (AD FS).
-1. If you are a user in a non-federated domain, enter your credentials directly on the Azure AD-hosted page.
-1. You are prompted for a multi-factor authentication challenge.
-1. Azure AD checks whether an enrollment in mobile device management is required.
-1. Windows registers the device in the organization's directory in Azure AD and enrolls it in mobile device management, if applicable.
-1. If you are:
- - A managed user, Windows takes you to the desktop through the automatic sign-in process.
- - A federated user, you are directed to the Windows sign-in screen to enter your credentials.
+For more information about the out-of-box experience, see the support article [Join your work device to your work or school network](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973).
## Verification
-To verify whether a device is joined to your Azure AD, review the **Access work or school** dialog on your Windows device. The dialog should indicate that you are connected to your Azure AD directory.
+To verify whether a device is joined to your Azure AD, review the **Access work or school** dialog on your Windows device found in **Settings** > **Accounts**. The dialog should indicate that you're connected to Azure AD, and provides information about areas managed by your IT staff.
-![Access work or school](./media/azuread-joined-devices-frx/13.png)
## Next steps

-- For more information, see the [introduction to device management in Azure Active Directory](overview.md).
- For more information about managing devices in the Azure AD portal, see [managing devices using the Azure portal](device-management-azure-portal.md).
+- [What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)
+- [Overview of Windows Autopilot](/mem/autopilot/windows-autopilot)
+- [Passwordless authentication options for Azure Active Directory](../authentication/concept-authentication-passwordless.md)
active-directory Troubleshoot Device Dsregcmd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-device-dsregcmd.md
Previously updated : 11/21/2019 Last updated : 08/31/2022
- # Troubleshoot devices by using the dsregcmd command This article covers how to use the output from the `dsregcmd` command to understand the state of devices in Azure Active Directory (Azure AD). The `dsregcmd /status` utility must be run as a domain user account.
This section lists the device join state parameters. The criteria that are requi
| NO | NO | YES | Domain Joined |
| YES | NO | YES | Hybrid AD Joined |
| NO | YES | YES | On-premises DRS Joined |
-| | |
> [!NOTE]
> The Workplace Joined (Azure AD registered) state is displayed in the ["User state"](#user-state) section.
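`dsregcmd /status` prints its fields as `Name : Value` lines under section banners, so the join state in the table above can be read programmatically. A small parsing sketch (the sample output is illustrative, not captured from a real device):

```python
def parse_dsregcmd(output: str) -> dict:
    """Parse 'Name : Value' lines from dsregcmd /status output into a dict."""
    fields = {}
    for line in output.splitlines():
        if " : " in line:
            name, _, value = line.partition(" : ")
            fields[name.strip()] = value.strip()
    return fields

# Illustrative sample of a hybrid Azure AD joined device's status output.
sample = """
+----------------------------------------------------------------------+
| Device State                                                         |
+----------------------------------------------------------------------+
             AzureAdJoined : YES
          EnterpriseJoined : NO
              DomainJoined : YES
"""

state = parse_dsregcmd(sample)
# Per the join state table: AzureAdJoined YES + DomainJoined YES = Hybrid AD Joined.
hybrid_joined = state["AzureAdJoined"] == "YES" and state["DomainJoined"] == "YES"
```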
Active Directory Federation Services (AD FS). For hybrid Azure AD-joined devices
This field is skipped if no diagnostics information is available. The diagnostics information fields are the same as **AcquirePrtDiagnostics**.
-
### Sample SSO state output

```
The following example shows that diagnostics tests are passing but the registrat
This diagnostics section displays the output of sanity checks performed on a device that's joined to the cloud.

- **AadRecoveryEnabled**: If the value is *YES*, the keys stored in the device aren't usable, and the device is marked for recovery. The next sign-in will trigger the recovery flow and re-register the device.
-- **KeySignTest**: If the value is *PASSED*, the device keys are in good health. If KeySignTest fails, the device is usually marked for recovery. The next sign-in will trigger the recovery flow and re-register the device. For hybrid Azure AD-joined devices, the recovery is silent. While the devices are Azure AD-joined or Azure AD registered, they will prompt for user authentication to recover and re-register the device, if necessary.
+- **KeySignTest**: If the value is *PASSED*, the device keys are in good health. If KeySignTest fails, the device is usually marked for recovery. The next sign-in will trigger the recovery flow and re-register the device. For hybrid Azure AD-joined devices, the recovery is silent. While the devices are Azure AD-joined or Azure AD registered, they'll prompt for user authentication to recover and re-register the device, if necessary.
> [!NOTE]
> The KeySignTest requires elevated privileges.
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 07/21/2022 Last updated : 09/22/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on July 21st, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on September 22nd, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Azure Multi-Factor Authentication | MFA_STANDALONE | cb2020b1-d8f6-41c0-9acd-8ff3d6d7831b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0) |
| Microsoft Defender for Office 365 (Plan 2) | THREAT_INTELLIGENCE | 3dd6cf57-d688-4eed-ba52-9e40b5468c3e | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70) |
| Microsoft 365 A1 | M365EDU_A1 | b17653a4-2443-4e8c-a550-18249dda78bb | AAD_EDU (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | Azure Active Directory for Education (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Windows Store Service (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) |
-| Microsoft 365 A3 for Faculty | M365EDU_A3_FACULTY | 4b590615-0888-425a-a965-b3bf7789848d | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 
(31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/> Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics ΓÇô Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft 
Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
-| Microsoft 365 A3 for Students | M365EDU_A3_STUDENT | 7cfd9a2b-e110-4c39-bf20-c6a3f36a3121 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 
(94a54592-cd8b-425e-87c6-97868b000b91)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) 
(e03c7e47-402c-463c-ab25-949079bedb21)<br/>PowerApps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
-| Microsoft 365 A3 for Students use benefit | M365EDU_A3_STUUSEBNFT | 18250162-5d87-4436-a834-d795c15c80f3 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING 
(b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate 
for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
-| Microsoft 365 A3 - Unattended License for students use benefit | M365EDU_A3_STUUSEBNFT_RPA1 | 1aa94593-ca12-4254-a738-81a5972958e8 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 
(c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management 
(8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Microsoft 365 A3 for faculty | M365EDU_A3_FACULTY | 4b590615-0888-425a-a965-b3bf7789848d | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 
(94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner 
(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education 
(da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) |
+| Microsoft 365 A3 for students | M365EDU_A3_STUDENT | 7cfd9a2b-e110-4c39-bf20-c6a3f36a3121 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_01 
(795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub 
(8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) |
+| Microsoft 365 A3 student use benefits | M365EDU_A3_STUUSEBNFT | 18250162-5d87-4436-a834-d795c15c80f3 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM 
(8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway 
(a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9) |
+| Microsoft 365 A3 - Unattended License for students use benefit | M365EDU_A3_STUUSEBNFT_RPA1 | 1aa94593-ca12-4254-a738-81a5972958e8 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 
(4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Apps for Enterprise (Unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) 
(0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9) |
| Microsoft 365 A5 for Faculty | M365EDU_A5_FACULTY | e97c048c-37a4-45fb-ab50-922fbf07a370 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT 
(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE 
(7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 
365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web for 
Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity 
(14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) | | Microsoft 365 A5 for Students | M365EDU_A5_STUDENT | 46c119d4-0379-4a9d-85e4-97c66d3f909e | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT 
(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 
(5689bec4-755d-4753-8b61-40975025187c)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery 
(4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Microsoft 365 A5 for students use benefit | M365EDU_A5_STUUSEBNFT | 31d57bc7-3a05-4867-ab53-97a17835a411 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU 
(da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | 
Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device 
Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/2-secure-access-current-state.md
Previously updated : 12/18/2020 Last updated : 09/02/2022
active-directory Active Directory Accessmanagement Managing Group Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners.md
- Title: Add or remove group owners - Azure Active Directory | Microsoft Docs
-description: Instructions about how to add or remove group owners using Azure Active Directory.
-Previously updated : 08/17/2022
-# Add or remove group owners in Azure Active Directory
-
-Azure Active Directory (Azure AD) groups are owned and managed by group owners. Group owners can be users or service principals, and they can manage the group, including its membership. Only existing group owners or group-managing administrators can assign group owners. Group owners aren't required to be members of the group.
-
-When a group has no owner, group-managing administrators are still able to manage the group. We recommend that every group have at least one owner. After owners are assigned to a group, the last owner can't be removed. Make sure to select another owner before removing the last owner from the group.
-
-## Add an owner to a group
-Below are instructions for adding a user as an owner to a group using the Azure AD portal. To add a service principal as an owner of a group, use [PowerShell](/powershell/module/Azuread/Add-AzureADGroupOwner).
-
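The portal steps below cover adding a user as an owner; for a service principal, the same assignment can be sketched with the AzureAD PowerShell module. The group and app display names here are placeholders:

```powershell
# Requires an authenticated session: Connect-AzureAD
# Look up the group and the service principal by display name (example names).
$group = Get-AzureADGroup -Filter "DisplayName eq 'MDM policy - West'"
$sp    = Get-AzureADServicePrincipal -Filter "DisplayName eq 'My automation app'"

# Add the service principal as an owner of the group.
Add-AzureADGroupOwner -ObjectId $group.ObjectId -RefObjectId $sp.ObjectId
```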
-### To add a group owner
-1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory.
-
-2. Select **Azure Active Directory**, select **Groups**, and then select the group for which you want to add an owner (for this example, *MDM policy - West*).
-
-3. On the **MDM policy - West Overview** page, select **Owners**.
-
- ![MDM policy - West Overview page with Owners option highlighted](media/active-directory-accessmanagement-managing-group-owners/add-owners-option-overview-blade.png)
-
-4. On the **MDM policy - West - Owners** page, select **Add owners**, search for and select the user who will be the new group owner, and then choose **Select**.
-
- ![MDM policy - West - Owners page with Add owners option highlighted](media/active-directory-accessmanagement-managing-group-owners/add-owners-owners-blade.png)
-
- After you select the new owner, you can refresh the **Owners** page and see the name added to the list of owners.
-
-## Remove an owner from a group
-Remove an owner from a group using Azure AD.
-
-### To remove an owner
-1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory.
-
-2. Select **Azure Active Directory**, select **Groups**, and then select the group for which you want to remove an owner (for this example, *MDM policy - West*).
-
-3. On the **MDM policy - West Overview** page, select **Owners**.
-
- ![MDM policy - West Overview page with Remove Owners option highlighted](media/active-directory-accessmanagement-managing-group-owners/remove-owners-option-overview-blade.png)
-
-4. On the **MDM policy - West - Owners** page, select the user you want to remove as a group owner, choose **Remove** from the user's information page, and select **Yes** to confirm your decision.
-
- ![User's information page with Remove option highlighted](media/active-directory-accessmanagement-managing-group-owners/remove-owner-info-blade.png)
-
- After you remove the owner, you can return to the **Owners** page, and see the name has been removed from the list of owners.
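Owner removal can also be scripted. A minimal sketch with the AzureAD module (the user and group names are placeholders; remember that the last owner of a group can't be removed):

```powershell
# Requires an authenticated session: Connect-AzureAD
$group = Get-AzureADGroup -Filter "DisplayName eq 'MDM policy - West'"
$owner = Get-AzureADUser -Filter "userPrincipalName eq 'alice@contoso.com'"

# Remove the user from the group's list of owners.
Remove-AzureADGroupOwner -ObjectId $group.ObjectId -OwnerId $owner.ObjectId
```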
-
-## Next steps
-- [Managing access to resources with Azure Active Directory groups](active-directory-manage-groups.md)
-
-- [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md)
-
-- [Use groups to assign access to an integrated SaaS app](../enterprise-users/groups-saasapps.md)
-
-- [Integrating your on-premises identities with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)
-
-- [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-v2-cmdlets.md)
active-directory Active Directory Groups Create Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-create-azure-portal.md
- Title: Create a basic group and add members - Azure Active Directory | Microsoft Docs
-description: Instructions about how to create a basic group using Azure Active Directory.
-Previously updated : 08/17/2022
-# Create a basic group and add members using Azure Active Directory
-
-You can create a basic group using the Azure Active Directory (Azure AD) portal. For the purposes of this article, a basic group is added to a single resource by the resource owner (administrator) and includes specific members (employees) that need to access that resource. For more complex scenarios, including dynamic memberships and rule creation, see the [Azure Active Directory user management documentation](../enterprise-users/index.yml).
-
-## Group and membership types
-
-There are several group and membership types. The following information explains each type and why it's used, to help you decide which options to choose when you create a group.
-
-### Group types:
-- **Security**. Used to manage member and computer access to shared resources for a group of users. For example, you can create a security group for a specific security policy. By doing it this way, you can give a set of permissions to all the members at once, instead of having to add permissions to each member individually. A security group can have users, devices, groups, and service principals as its members, and users and service principals as its owners. For more info about managing access to resources, see [Manage access to resources with Azure Active Directory groups](active-directory-manage-groups.md).
-- **Microsoft 365**. Provides collaboration opportunities by giving members access to a shared mailbox, calendar, files, SharePoint site, and more. This option also lets you give people outside of your organization access to the group. A Microsoft 365 group can have only users as its members. Both users and service principals can be owners of a Microsoft 365 group. For more info about Microsoft 365 Groups, see [Learn about Microsoft 365 Groups](https://support.office.com/article/learn-about-office-365-groups-b565caa1-5c40-40ef-9915-60fdb2d97fa2).
-
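For comparison with the portal flow, a security group of the kind described above can also be created with the AzureAD PowerShell module. A hedged sketch (all names are examples):

```powershell
# Requires an authenticated session: Connect-AzureAD
# Create a security group; MailNickName is required by the cmdlet but unused for security groups.
New-AzureADGroup -DisplayName "MDM policy - West" `
    -Description "Security group for the West-region MDM policy" `
    -MailEnabled $false -MailNickName "NotSet" -SecurityEnabled $true
```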
-### Membership types:
-- **Assigned.** Lets you add specific users to be members of this group and to have unique permissions. For the purposes of this article, we're using this option.
-- **Dynamic user.** Lets you use dynamic membership rules to automatically add and remove members. If a member's attributes change, the system looks at your directory's dynamic group rules to see if the member meets the rule requirements (is added) or no longer meets the rule requirements (is removed).
-- **Dynamic device.** Lets you use dynamic group rules to automatically add and remove devices. If a device's attributes change, the system looks at your dynamic group rules for the directory to see if the device meets the rule requirements (is added) or no longer meets the rule requirements (is removed).
-
- > [!IMPORTANT]
- > You can create a dynamic group for either devices or users, but not for both. You also can't create a device group based on the device owners' attributes. Device membership rules can only reference device attributes. For more info about creating a dynamic group for users and devices, see [Create a dynamic group and check status](../enterprise-users/groups-create-rule.md).
-
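A dynamic-membership group of the kind described above can be sketched with the AzureAD module's `New-AzureADMSGroup` cmdlet; the department rule and names below are purely illustrative:

```powershell
# Requires an authenticated session: Connect-AzureAD
# Membership is maintained automatically from the rule; no manual adds or removes.
New-AzureADMSGroup -DisplayName "Sales team (dynamic)" `
    -MailEnabled $false -MailNickname "salesdynamic" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule "(user.department -eq ""Sales"")" `
    -MembershipRuleProcessingState "On"
```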
-## Create a basic group and add members
-You can create a basic group and add your members at the same time. To create a basic group and add members, use the following procedure:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory.
-
-1. Search for and select **Azure Active Directory**.
-
-1. On the **Active Directory** page, select **Groups** and then select **New group**.
-
- ![Azure AD page, with Groups showing](media/active-directory-groups-create-azure-portal/group-full-screen.png)
-
-1. The **New Group** pane appears, where you fill out the required information.
-
- ![New group page, filled out with example info](media/active-directory-groups-create-azure-portal/new-group-blade.png)
-
-1. Select a pre-defined **Group type**. For more information on group types, see [Group and membership types](#group-types).
-
-1. Create and add a **Group name.** Choose a name that you'll remember and that makes sense for the group. A check is performed to determine whether the name is already in use by another group. If it is, you'll be asked to change the name of your group to avoid a duplicate.
-
-1. Add a **Group email address** for the group, or keep the email address that is filled in automatically.
-
-1. **Group description.** Add an optional description to your group.
-
-1. Select a pre-defined **Membership type (required).** For more information on membership types, see [Group and membership types](#membership-types).
-
-1. Select **Create**. Your group is created and ready for you to add members.
-
-1. Select the **Members** area from the **Group** page, and then begin searching for the members to add to your group from the **Select members** page.
-
- ![Selecting members for your group during the group creation process](media/active-directory-groups-create-azure-portal/select-members-create-group.png)
-
-1. When you're done adding members, choose **Select**.
-
- The **Group Overview** page updates to show the number of members who are now added to the group.
-
- ![Group Overview page with number of members highlighted](media/active-directory-groups-create-azure-portal/group-overview-blade-number-highlight.png)
-
-## Turn off group welcome email
-
-When any new Microsoft 365 group is created, whether with dynamic or static membership, a welcome notification is sent to all users who are added to the group. When any user or device attributes change, all dynamic group rules in the organization are processed for potential membership changes. Users who are added then also receive the welcome notification. You can turn off this behavior in [Exchange PowerShell](/powershell/module/exchange/users-and-groups/Set-UnifiedGroup).
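As a sketch, the welcome message described above can be disabled per group in Exchange Online PowerShell; the group name is a placeholder:

```powershell
# Requires an Exchange Online session: Connect-ExchangeOnline
Set-UnifiedGroup -Identity "Sales team" -UnifiedGroupWelcomeMessageEnabled:$false
```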
-
-## Next steps
-- [Manage access to SaaS apps using groups](../enterprise-users/groups-saasapps.md)
-- [Manage groups using PowerShell commands](../enterprise-users/groups-settings-v2-cmdlets.md)
active-directory Active Directory Groups Delete Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-delete-group.md
- Title: Delete a group - Azure Active Directory | Microsoft Docs
-description: Instructions about how to delete a group using Azure Active Directory.
-Previously updated : 08/17/2022
-# Delete a group using Azure Active Directory
-
-You can delete an Azure Active Directory (Azure AD) group for any number of reasons, but typically it will be because you:
-- Set the **Group type** to the wrong option.
-
-- Created the wrong or a duplicate group by mistake.
-
-- No longer need the group.
-
-## To delete a group
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory.
-
-2. Select **Azure Active Directory**, and then select **Groups**.
-
-3. From the **Groups - All groups** page, search for and select the group you want to delete. For these steps, we'll use **MDM policy - East**.
-
- ![Groups-All groups page, group name highlighted](media/active-directory-groups-delete-group/group-all-groups-screen.png)
-
-4. On the **MDM policy - East Overview** page, select **Delete**.
-
- The group is deleted from your Azure Active Directory tenant.
-
- ![MDM policy - East Overview page, delete option highlighted](media/active-directory-groups-delete-group/group-overview-blade.png)
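The same deletion can be scripted; a minimal sketch with the AzureAD module:

```powershell
# Requires an authenticated session: Connect-AzureAD
$group = Get-AzureADGroup -Filter "DisplayName eq 'MDM policy - East'"

# Delete the group from the tenant.
Remove-AzureADGroup -ObjectId $group.ObjectId
```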
-
-## Next steps
-- If you delete a group by mistake, you can create it again. For more information, see [How to create a basic group and add members](active-directory-groups-create-azure-portal.md).
-
-- If you delete a Microsoft 365 group by mistake, you might be able to restore it. For more information, see [Restore a deleted Office 365 group](../enterprise-users/groups-restore-deleted.md).
active-directory Active Directory Groups Members Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-members-azure-portal.md
- Title: Add or remove group members - Azure Active Directory | Microsoft Docs
-description: Instructions about how to add or remove members from a group using Azure Active Directory.
-Previously updated : 08/17/2022
-# Add or remove group members using Azure Active Directory
-Using Azure Active Directory, you can add and remove group members at any time.
-
-## To add group members
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory.
-
-2. Select **Azure Active Directory**, and then select **Groups**.
-
-3. From the **Groups - All groups** page, search for and select the group you want to add the member to. In this case, use our previously created group, **MDM policy - West**.
-
- ![Groups-All groups page, group name highlighted](media/active-directory-groups-members-azure-portal/group-all-groups-screen.png)
-
-4. From the **MDM policy - West Overview** page, select **Members** from the **Manage** area.
-
- ![MDM policy - West Overview page, with Members option highlighted](media/active-directory-groups-members-azure-portal/group-overview-blade.png)
-
-5. Select **Add members**, search for and select each of the members you want to add to the group, and then choose **Select**.
-
- You'll get a message that says the members were added successfully.
-
- ![Add members page, with searched for member shown](media/active-directory-groups-members-azure-portal/update-members.png)
-
-6. Refresh the screen to see all of the member names added to the group.
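Membership changes can also be made with the AzureAD PowerShell module; a hedged sketch (the user and group names are placeholders):

```powershell
# Requires an authenticated session: Connect-AzureAD
$group  = Get-AzureADGroup -Filter "DisplayName eq 'MDM policy - West'"
$member = Get-AzureADUser -Filter "userPrincipalName eq 'alice@contoso.com'"

# Add the user to the group.
Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $member.ObjectId
```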
-
-## To remove group members
-
-1. From the **Groups - All groups** page, search for and select the group you want to remove the member from. Again, we'll use **MDM policy - West**.
-
-2. Select **Members** from the **Manage** area, search for and select the name of the member to remove, and then select **Remove**.
-
- ![Member info page, with Remove option](media/active-directory-groups-members-azure-portal/remove-members-from-group.png)
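Removal is symmetric in PowerShell, though the parameter name differs (`-MemberId` rather than `-RefObjectId`); the names below are placeholders:

```powershell
# Requires an authenticated session: Connect-AzureAD
$group  = Get-AzureADGroup -Filter "DisplayName eq 'MDM policy - West'"
$member = Get-AzureADUser -Filter "userPrincipalName eq 'alice@contoso.com'"

# Remove the user from the group.
Remove-AzureADGroupMember -ObjectId $group.ObjectId -MemberId $member.ObjectId
```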
-
-## Next steps
-- [View your groups and members](active-directory-groups-view-azure-portal.md)
-
-- [Edit your group settings](active-directory-groups-settings-azure-portal.md)
-
-- [Manage access to resources using groups](active-directory-manage-groups.md)
-
-- [Manage dynamic rules for users in a group](../enterprise-users/groups-create-rule.md)
-
-- [Associate or add an Azure subscription to Azure Active Directory](active-directory-how-subscriptions-associated-directory.md)
active-directory Active Directory Groups Membership Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-membership-azure-portal.md
- Title: Add or remove a group from another group - Azure AD
-description: Instructions about how to add or remove a group from another group using Azure Active Directory.
- Previously updated : 08/17/2022
-# Add or remove a group from another group using Azure Active Directory
-
-This article helps you to add and remove a group from another group using Azure Active Directory.
-
->[!Note]
->If you're trying to delete the parent group, see [How to update or delete a group and its members](active-directory-groups-delete-group.md).
-
-## Add a group to another group
-
-You can add an existing Security group to another existing Security group (also known as nested groups), creating a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time.
-
->[!IMPORTANT]
->We don't currently support:<ul><li>Adding groups to a group synced with on-premises Active Directory.</li><li>Adding Security groups to Microsoft 365 groups.</li><li>Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.</li><li>Assigning apps to nested groups.</li><li>Applying licenses to nested groups.</li><li>Adding distribution groups in nesting scenarios.</li><li>Adding security groups as members of mail-enabled security groups</li><li> Adding groups as members of a role-assignable group.</li></ul>
-
-### To add a group as a member of another group
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory.
-
-2. Select **Azure Active Directory**, and then select **Groups**.
-
-3. On the **Groups - All groups** page, search for and select the group that's to become a member of another group. For this exercise, we're using the **MDM policy - West** group.
-
- >[!NOTE]
- >You can add your group as a member to only one group at a time. Additionally, the **Select Group** box filters the display based on matching your entry to any part of a user or device name. However, wildcard characters aren't supported.
-
- ![Groups - All groups page with MDM policy - West group selected](media/active-directory-groups-membership-azure-portal/group-all-groups-screen.png)
-
-4. On the **MDM policy - West - Group memberships** page, select **Group memberships**, select **Add**, locate the group you want your group to be a member of, and then choose **Select**. For this exercise, we're using the **MDM policy - All org** group.
-
- The **MDM policy - West** group is now a member of the **MDM policy - All org** group, inheriting all the properties and configuration of the MDM policy - All org group.
-
- ![Create a group membership by adding group to another group](media/active-directory-groups-membership-azure-portal/group-add-group-membership.png)
-
-5. Review the **MDM policy - West - Group memberships** page to see the group and member relationship.
-
-6. For a more detailed view of the group and member relationship, select the group name (**MDM policy - All org**) and take a look at the **MDM policy - West** page details.
-
-## Remove a group from another group
-
-You can remove an existing Security group from another Security group. However, removing the group also removes any inherited attributes and properties for its members.
-
-### To remove a member group from another group
-
-1. On the **Groups - All groups** page, search for and select the group that's to be removed as a member of another group. For this exercise, we're again using the **MDM policy - West** group.
-
-2. On the **MDM policy - West overview** page, select **Group memberships**.
-
-3. Select the **MDM policy - All org** group from the **MDM policy - West - Group memberships** page, and then select **Remove** from the **MDM policy - West** page details.
-
- ![Group membership page showing both the member and the group details](media/active-directory-groups-membership-azure-portal/group-membership-remove.png)
-
-## Additional information
-
-These articles provide additional information on Azure Active Directory.
-- [View your groups and members](active-directory-groups-view-azure-portal.md)
-- [Create a basic group and add members](active-directory-groups-create-azure-portal.md)
-- [Add or remove members from a group](active-directory-groups-members-azure-portal.md)
-- [Edit your group settings](active-directory-groups-settings-azure-portal.md)
-- [Using a group to manage access to SaaS applications](../enterprise-users/groups-saasapps.md)
-- [Scenarios, limitations, and known issues using groups to manage licensing in Azure Active Directory](../enterprise-users/licensing-group-advanced.md#limitations-and-known-issues)
active-directory Active Directory Groups Settings Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-settings-azure-portal.md
- Title: Edit your group information - Azure Active Directory | Microsoft Docs
-description: Instructions about how to edit your group's information using Azure Active Directory.
- Previously updated : 08/17/2022
-# Edit your group information using Azure Active Directory
-
-Using Azure Active Directory (Azure AD), you can edit a group's settings, including updating its name, description, or membership type.
-
-## To edit your group settings
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory.
-
-2. Select **Azure Active Directory**, and then select **Groups**.
-
- The **Groups - All groups** page appears, showing all of your active groups.
-
-3. From the **Groups - All groups** page, type as much of the group name as you can into the **Search** box. For the purposes of this article, we're searching for the **MDM policy - West** group.
-
- The search results appear under the **Search** box, updating as you type more characters.
-
- ![All groups page, with search text in the Search box](media/active-directory-groups-settings-azure-portal/search-for-specific-group.png)
-
-4. Select the group **MDM policy - West**, and then select **Properties** from the **Manage** area.
-
- ![Group Overview page, with Member option and information highlighted](media/active-directory-groups-settings-azure-portal/group-overview-blade.png)
-
-5. Update the **General settings** information as needed, including:
-
- ![Properties settings for a group](media/active-directory-groups-settings-azure-portal/group-properties-settings.png)
-
- - **Group name.** Edit the existing group name.
-
- - **Group description.** Edit the existing group description.
-
- - **Group type.** You can't change the type of group after it's been created. To change the **Group type**, you must delete the group and create a new one.
-
- - **Membership type.** Change the membership type. For more info about the various available membership types, see [How to: Create a basic group and add members using the Azure Active Directory portal](active-directory-groups-create-azure-portal.md).
-
- - **Object ID.** You can't change the Object ID, but you can copy it to use in your PowerShell commands for the group. For more info about using PowerShell cmdlets, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-v2-cmdlets.md).
-
-## Next steps
-
-These articles provide additional information on Azure Active Directory.
-- [View your groups and members](active-directory-groups-view-azure-portal.md)
-- [Create a basic group and add members](active-directory-groups-create-azure-portal.md)
-- [How to add or remove members from a group](active-directory-groups-members-azure-portal.md)
-- [Manage dynamic rules for users in a group](../enterprise-users/groups-create-rule.md)
-- [Manage memberships of a group](active-directory-groups-membership-azure-portal.md)
-- [Manage access to resources using groups](active-directory-manage-groups.md)
-- [Associate or add an Azure subscription to Azure Active Directory](active-directory-how-subscriptions-associated-directory.md)
active-directory Active Directory Groups View Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-view-azure-portal.md
#Customer intent: As a brand-new Azure AD administrator, I need to view my organization's groups along with the assigned members, so I can manage permissions to apps and services for people in my organization.
-# Quickstart: View your organization's groups and members in Azure Active Directory
+# Quickstart: Create a group with members and view all groups and members in Azure Active Directory
+You can view your organization's existing groups and group members using the Azure portal. Groups are used to manage users that all need the same access and permissions for potentially restricted apps and services.
-You can view your organization's existing groups and group members using the Azure portal. Groups are used to manage users (members) that all need the same access and permissions for potentially restricted apps and services.
-
-In this quickstart, you'll view all of your organization's existing groups and view the assigned members.
+In this quickstart, you'll set up a new group and assign members to the group. Then you'll view your organization's group and assigned members. Throughout this guide, you'll create a user and group that you can use in other Azure AD Fundamentals quickstarts and tutorials.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
You must sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory.
Create a new group, named _MDM policy - West_. For more information about creating a group, see [How to create a basic group and add members](active-directory-groups-create-azure-portal.md).
-1. Select **Azure Active Directory**, **Groups**, and then select **New group**.
+1. Go to **Azure Active Directory** > **Groups**.
+
+1. Select **New group**.
-2. Complete the **Group** page:
+1. Complete the **Group** page:
- **Group type:** Select **Security**
Create a new group, named _MDM policy - West_. For more information about creating a group, see [How to create a basic group and add members](active-directory-groups-create-azure-portal.md).
- **Membership type:** Select **Assigned**.
-3. Select **Create**.
+1. Select **Create**.
## Create a new user
-Create a new user, named _Alain Charon_. A user must exist before being added as a group member. Check the "Custom domain names" tab first to get the verified domain name in which to create users. For more information about creating a user, see [How to add or delete users](add-users-azure-active-directory.md).
+A user must exist before being added as a group member, so you'll need to create a new user. For this quickstart, we've added a user named _Alain Charon_. Check the "Custom domain names" tab first to get the verified domain name in which to create users. For more information about creating a user, see [How to add or delete users](add-users-azure-active-directory.md).
-1. Select **Azure Active Directory**, **Users**, and then select **New user**.
+1. Go to **Azure Active Directory** > **Users**.
-2. Complete the **User** page:
+1. Select **New user**.
+
+1. Complete the **User** page:
- **Name:** Type _Alain Charon_.
- **User name:** Type *alain\@contoso.com*.
-3. Copy the auto-generated password provided in the **Password** box, and then select **Create**.
+1. Copy the auto-generated password provided in the **Password** box and select **Create**.
## Add a group member
+Now that you have a group and a user, you can add _Alain Charon_ as a member to the _MDM policy - West_ group. For more information about adding group members, see the [Manage groups](how-to-manage-groups.md) article.
-Now that you have a group and a user, you can add _Alain Charon_ as a member to the _MDM policy - West_ group. For more information about adding group members, see [How to add or remove group members](active-directory-groups-members-azure-portal.md).
-
-1. Select **Azure Active Directory** > **Groups**.
+1. Go to **Azure Active Directory** > **Groups**.
2. From the **Groups - All groups** page, search for and select the **MDM policy - West** group.
Now that you have a group and a user, you can add _Alain Charon_ as a member to the _MDM policy - West_ group.
## View all groups

You can see all the groups for your organization in the **Groups - All groups** page of the Azure portal.

-- Select Azure **Active Directory** > **Groups**.
+- Go to **Azure Active Directory** > **Groups**.
The **Groups - All groups** page appears, showing all your active groups.
- ![Groups-All groups page, showing all existing groups](media/active-directory-groups-view-azure-portal/groups-all-groups-blade-with-all-groups.png)
+ ![Screenshot of the 'Groups-All groups' page, showing all existing groups.](media/active-directory-groups-view-azure-portal/groups-search.png)
-## Search for the group
+## Search for a group
Search the **Groups – All groups** page to find the **MDM policy – West** group.

1. From the **Groups - All groups** page, type _MDM_ into the **Search** box. The search results appear under the **Search** box, including the _MDM policy - West_ group.
- ![Groups – All groups page with search box filled out](media/active-directory-groups-view-azure-portal/search-for-specific-group.png)
+ ![Screenshot of the 'Groups' search page showing matching search results.](media/active-directory-groups-view-azure-portal/groups-search-group-name.png)
-3. Select the group **MDM policy – West**.
+1. Select the group **MDM policy – West**.
-4. View the group info on the **MDM policy - West Overview** page, including the number of members of that group.
+1. View the group info on the **MDM policy - West Overview** page, including the number of members of that group.
- ![MDM policy – West Overview page with member info](media/active-directory-groups-view-azure-portal/group-overview-blade.png)
+ ![Screenshot of MDM policy – West Overview page with member info.](media/active-directory-groups-view-azure-portal/groups-overview.png)
## View group members

Now that you've found the group, you can view all the assigned members.

-- Select **Members** from the **Manage** area, and then review the complete list of member names assigned to that specific group, including _Alain Charon_.
+Select **Members** from the **Manage** area, and then review the complete list of member names assigned to that specific group, including _Alain Charon_.
- ![List of members assigned to the MDM policy – West group](media/active-directory-groups-view-azure-portal/groups-all-members.png)
+![Screenshot of the list of members assigned to the MDM policy – West group.](media/active-directory-groups-view-azure-portal/groups-all-members.png)
## Clean up resources
-This group is used in several of the how-to processes that are available in the **How-to guides** section of this documentation. However, if you'd rather not use this group, you can delete it and its assigned members using the following steps:
+The group you just created is used in other articles in the Azure AD Fundamentals documentation. If you'd rather not use this group, you can delete it and its assigned members using the following steps:
1. On the **Groups - All groups** page, search for the **MDM policy - West** group.
-2. Select the **MDM policy - West** group.
+1. Select the **MDM policy - West** group.
The **MDM policy - West Overview** page appears.
-3. Select **Delete**.
+1. Select **Delete**.
The group and its associated members are deleted.
- ![MDM policy – West Overview page with Delete link highlighted](media/active-directory-groups-view-azure-portal/group-overview-blade-delete.png)
+ ![Screenshot of the MDM policy – West Overview page with Delete link highlighted.](media/active-directory-groups-view-azure-portal/groups-delete.png)
>[!Important]
>This doesn't delete the user Alain Charon, just his membership in the deleted group.
active-directory Active Directory Licensing Whatis Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-licensing-whatis-azure-portal.md
# What is group-based licensing in Azure Active Directory?
-Microsoft paid cloud services, such as Microsoft 365, Enterprise Mobility + Security, Dynamics 365, and other similar products, require licenses. These licenses are assigned to each user who needs access to these services. To manage licenses, administrators use one of the management portals (Office or Azure) and PowerShell cmdlets. Azure Active Directory (Azure AD) is the underlying infrastructure that supports identity management for all Microsoft cloud services. Azure AD stores information about license assignment states for users.
+Microsoft paid cloud services, such as Microsoft 365, Enterprise Mobility + Security, Dynamics 365, and other similar products, require licenses. These licenses are assigned to each user who needs access to these services. To manage licenses, administrators use one of the management portals (Office or Azure) and PowerShell cmdlets. Azure AD is the underlying infrastructure that supports identity management for all Microsoft cloud services. Azure AD stores information about license assignment states for users.
-Until now, licenses could only be assigned at the individual user level, which can make large-scale management difficult. For example, to add or remove user licenses based on organizational changes, such as users joining or leaving the organization or a department, an administrator often must write a complex PowerShell script. This script makes individual calls to the cloud service.
-
-To address those challenges, Azure AD now includes group-based licensing. You can assign one or more product licenses to a group. Azure AD ensures that the licenses are assigned to all members of the group. Any new members who join the group are assigned the appropriate licenses. When they leave the group, those licenses are removed. This licensing management eliminates the need for automating license management via PowerShell to reflect changes in the organization and departmental structure on a per-user basis.
+Azure AD includes group-based licensing, which allows you to assign one or more product licenses to a group. Azure AD ensures that the licenses are assigned to all members of the group. Any new members who join the group are assigned the appropriate licenses. When they leave the group, those licenses are removed. This licensing management eliminates the need for automating license management via PowerShell to reflect changes in the organization and departmental structure on a per-user basis.
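+
+The group assignment described above can also be scripted. The following sketch uses the Microsoft Graph PowerShell SDK, which this article doesn't cover; the group name and SKU part number are hypothetical examples, so substitute values from your own tenant:
+
+```powershell
+# Sketch: assign a product license to a group so members inherit it.
+# Assumes the Microsoft.Graph module is installed; the group name and
+# SKU part number below are illustrative examples only.
+Connect-MgGraph -Scopes "Group.ReadWrite.All", "Organization.Read.All"
+
+# Look up the target group and the SKU to assign.
+$group = Get-MgGroup -Filter "displayName eq 'MDM policy - West'"
+$sku   = Get-MgSubscribedSku | Where-Object SkuPartNumber -eq 'SPE_E5'
+
+# Assign the license; Azure AD propagates it to current and future members.
+Set-MgGroupLicense -GroupId $group.Id `
+  -AddLicenses @(@{ SkuId = $sku.SkuId }) `
+  -RemoveLicenses @()
+```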
## Licensing requirements

You must have one of the following licenses **for every user who benefits from** group-based licensing:
For any groups assigned a license, you must also have a license for each unique
Here are the main features of group-based licensing:

-- Licenses can be assigned to any security group in Azure AD. Security groups can be synced from on-premises, by using Azure AD Connect. You can also create security groups directly in Azure AD (also called cloud-only groups), or automatically via the Azure AD dynamic group feature.
+- Licenses can be assigned to any security group in Azure AD. Security groups can be synced from on-premises, by using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md). You can also create security groups directly in Azure AD (also called cloud-only groups), or automatically via the [Azure AD dynamic group feature](../enterprise-users/groups-create-rule.md).
- When a product license is assigned to a group, the administrator can disable one or more service plans in the product. Typically, this assignment is done when the organization is not yet ready to start using a service included in a product. For example, the administrator might assign Microsoft 365 to a department, but temporarily disable the Yammer service.
To learn more about other scenarios for license management through group-based l
* [How to migrate individual licensed users to group-based licensing in Azure Active Directory](../enterprise-users/licensing-groups-migrate-users.md)
* [How to migrate users between product licenses using group-based licensing in Azure Active Directory](../enterprise-users/licensing-groups-change-licenses.md)
* [Azure Active Directory group-based licensing additional scenarios](../enterprise-users/licensing-group-advanced.md)
-* [PowerShell examples for group-based licensing in Azure Active Directory](../enterprise-users/licensing-ps-examples.md)
+* [PowerShell examples for group-based licensing in Azure Active Directory](../enterprise-users/licensing-ps-examples.md)
active-directory Active Directory Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-manage-groups.md
- Title: Manage app & resource access using groups - Azure AD
-description: Learn about how to manage access to your organization's cloud-based apps, on-premises apps, and resources using Azure Active Directory groups.
- Previously updated : 08/17/2022
-# Manage app and resource access using Azure Active Directory groups
-Azure Active Directory (Azure AD) lets you use groups to manage access to your cloud-based apps, on-premises apps, and your resources. Your resources can be part of the Azure AD organization, such as permissions to manage objects through roles in Azure AD, or external to the organization, such as for Software as a Service (SaaS) apps, Azure services, SharePoint sites, and on-premises resources.
-
->[!NOTE]
-> In the Azure portal, you can see some groups whose membership and group details you can't manage in the portal:
->
-> - Groups synced from on-premises Active Directory can be managed only in on-premises Active Directory.
-> - Other group types such as distribution lists and mail-enabled security groups are managed only in Exchange admin center or Microsoft 365 admin center. You must sign in to Exchange admin center or Microsoft 365 admin center to manage these groups.
-
-## How access management in Azure AD works
-
-Azure AD helps you give access to your organization's resources by providing access rights to a single user or to an entire Azure AD group. Using groups lets the resource owner (or Azure AD directory owner), assign a set of access permissions to all the members of the group, instead of having to provide the rights one-by-one. The resource or directory owner can also give management rights for the member list to someone else, such as a department manager or a Helpdesk administrator, letting that person add and remove members, as needed. For more information about how to manage group owners, see [Manage group owners](active-directory-accessmanagement-managing-group-owners.md)
-
-![Azure Active Directory access management diagram](./media/active-directory-manage-groups/active-directory-access-management-works.png)
-
-## Ways to assign access rights
-
-There are four ways to assign resource access rights to your users:
-- **Direct assignment.** The resource owner directly assigns the user to the resource.
-
-- **Group assignment.** The resource owner assigns an Azure AD group to the resource, which automatically gives all of the group members access to the resource. Group membership is managed by both the group owner and the resource owner, letting either owner add or remove members from the group. For more information about adding or removing group membership, see [How to: Add or remove a group from another group using the Azure Active Directory portal](active-directory-groups-membership-azure-portal.md).
-
-- **Rule-based assignment.** The resource owner creates a group and uses a rule to define which users are assigned to a specific resource. The rule is based on attributes that are assigned to individual users. The resource owner manages the rule, determining which attributes and values are required to allow access to the resource. For more information, see [Create a dynamic group and check status](../enterprise-users/groups-create-rule.md).
-
-- **External authority assignment.** Access comes from an external source, such as an on-premises directory or a SaaS app. In this situation, the resource owner assigns a group to provide access to the resource and then the external source manages the group members.
- ![Overview of access management diagram](./media/active-directory-manage-groups/access-management-overview.png)
-
-## Can users join groups without being assigned?
-The group owner can let users find their own groups to join, instead of assigning them. The owner can also set up the group to automatically accept all users that join or to require approval.
-
-After a user requests to join a group, the request is forwarded to the group owner. If it's required, the owner can approve the request, and the user is notified of the group membership. However, if you have multiple owners and one of them disapproves, the user is notified, but isn't added to the group. For more information and instructions about how to let your users request to join groups, see [Set up Azure AD so users can request to join groups](../enterprise-users/groups-self-service-management.md)
-
-## Next steps
-Now that you have a bit of an introduction to access management using groups, you start to manage your resources and apps.
-- [Create a new group using Azure Active Directory](active-directory-groups-create-azure-portal.md) or [Create and manage a new group using PowerShell cmdlets](../enterprise-users/groups-settings-v2-cmdlets.md)
-- [Use groups to assign access to an integrated SaaS app](../enterprise-users/groups-saasapps.md)
-- [Sync an on-premises group to Azure using Azure AD Connect](../hybrid/whatis-hybrid-identity.md)
active-directory Concept Learn About Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-learn-about-groups.md
+
+ Title: Learn about groups and group membership - Azure Active Directory | Microsoft Docs
+description: Information about Azure Active Directory groups and access rights
+ Last updated : 08/29/2022
+# Learn about groups and access rights in Azure Active Directory
+
+Azure Active Directory (Azure AD) provides several ways to manage access to resources, applications, and tasks. With Azure AD groups, you can grant access and permissions to a group of users instead of to each individual user. Limiting access to Azure AD resources to only those users who need it is one of the core security principles of [Zero Trust](/security/zero-trust/zero-trust-overview). This article provides an overview of how groups and access rights can be used together to make managing your Azure AD users easier while also applying security best practices.
+
+Azure AD lets you use groups to manage access to applications, data, and resources. Resources can be:
+
+- Part of the Azure AD organization, such as permissions to manage objects through roles in Azure AD
+- External to the organization, such as for Software as a Service (SaaS) apps
+- Azure services
+- SharePoint sites
+- On-premises resources
+
+Some groups can't be managed in the Azure AD portal:
+
+- Groups synced from on-premises Active Directory can be managed only in on-premises Active Directory.
+- Distribution lists and mail-enabled security groups are managed only in Exchange admin center or Microsoft 365 admin center. You must sign in to Exchange admin center or Microsoft 365 admin center to manage these groups.
+
+## What to know before creating a group
+
+There are two group types and three group membership types. Review the options to find the right combination for your scenario.
+
+### Group types:
+
+**Security:** Used to manage user and computer access to shared resources.
+
+For example, you can create a security group so that all group members have the same set of security permissions. Members of a security group can include users, devices, other groups, and [service principals](../fundamentals/service-accounts-principal.md), which define access policy and permissions. Owners of a security group can include users and service principals.
+
+**Microsoft 365:** Provides collaboration opportunities by giving group members access to a shared mailbox, calendar, files, SharePoint sites, and more.
+
+This option also lets you give people outside of your organization access to the group. Members of a Microsoft 365 group can only include users. Owners of a Microsoft 365 group can include users and service principals. For more info about Microsoft 365 Groups, see [Learn about Microsoft 365 Groups](https://support.office.com/article/learn-about-office-365-groups-b565caa1-5c40-40ef-9915-60fdb2d97fa2).
+
+### Membership types:
+- **Assigned:** Lets you add specific users as members of a group and have unique permissions.
+- **Dynamic user:** Lets you use dynamic membership rules to automatically add and remove members. If a member's attributes change, the system looks at your dynamic group rules for the directory to see if the member meets the rule requirements (is added) or no longer meets the rule requirements (is removed).
+- **Dynamic device:** Lets you use dynamic group rules to automatically add and remove devices. If a device's attributes change, the system looks at your dynamic group rules for the directory to see if the device meets the rule requirements (is added) or no longer meets the rule requirements (is removed).
+
+ > [!IMPORTANT]
+ > You can create a dynamic group for either devices or users, but not for both. You can't create a device group based on the device owners' attributes. Device membership rules can only reference device attributes. For more info about creating a dynamic group for users and devices, see [Create a dynamic group and check status](../enterprise-users/groups-create-rule.md).
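+
+To make the membership types above concrete, a dynamic user rule is a boolean expression over user attributes that Azure AD re-evaluates whenever an attribute changes. The attribute values below are hypothetical examples:
+
+```
+(user.department -eq "Sales") -and (user.accountEnabled -eq true)
+```
+
+A dynamic device rule has the same shape but may reference only device properties, for example `(device.deviceOSType -eq "Windows")`.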
++
+## What to know before adding access rights to a group
+
+After creating an Azure AD group, you need to grant it the appropriate access. Each application, resource, and service that requires access permissions needs to be managed separately because the permissions for one may not be the same as another. Grant access using the [principle of least privilege](../develop/secure-least-privileged-access.md) to help reduce the risk of attack or a security breach.
+
+### How access management in Azure AD works
+
+Azure AD helps you give access to your organization's resources by providing access rights to a single user or to an entire Azure AD group. Using groups lets the resource owner or Azure AD directory owner assign a set of access permissions to all the members of the group. The resource or directory owner can also give management rights to someone such as a department manager or a help desk administrator, letting that person add and remove members. For more information about how to manage group owners, see the [Manage groups](how-to-manage-groups.md) article.
+
+![Diagram of Azure Active Directory access management.](./media/concept-learn-about-groups/access-management-overview.png)
+
+### Ways to assign access rights
+
+After creating a group, you need to decide how to assign access rights. Explore the ways to assign access rights to determine the best process for your scenario.
+
+- **Direct assignment.** The resource owner directly assigns the user to the resource.
+
+- **Group assignment.** The resource owner assigns an Azure AD group to the resource, which automatically gives all of the group members access to the resource. Group membership is managed by both the group owner and the resource owner, letting either owner add or remove members from the group. For more information about managing group membership, see the [Manage groups](how-to-manage-groups.md) article.
+
+- **Rule-based assignment.** The resource owner creates a group and uses a rule to define which users are assigned to a specific resource. The rule is based on attributes that are assigned to individual users. The resource owner manages the rule, determining which attributes and values are required to allow access to the resource. For more information, see [Create a dynamic group and check status](../enterprise-users/groups-create-rule.md).
+
+- **External authority assignment.** Access comes from an external source, such as an on-premises directory or a SaaS app. In this situation, the resource owner assigns a group to provide access to the resource and then the external source manages the group members.
+
+ ![Diagram of access management overview.](./media/concept-learn-about-groups/access-management-process.png)
+
+### Can users join groups without being assigned?
+The group owner can let users find their own groups to join, instead of assigning them. The owner can also set up the group to automatically accept all users that join or to require approval.
+
+After a user requests to join a group, the request is forwarded to the group owner. If approval is required, the owner can approve the request and the user is notified of the group membership. If the group has multiple owners and one of them declines the request, the user is notified but isn't added to the group. For more information and instructions about how to let your users request to join groups, see [Set up Azure AD so users can request to join groups](../enterprise-users/groups-self-service-management.md).
+
+## Next steps
+
+- [Create and manage Azure AD groups and group membership](how-to-manage-groups.md)
+
+- [Learn about group-based licensing in Azure AD](active-directory-licensing-whatis-azure-portal.md)
+
+- [Manage access to SaaS apps using groups](../enterprise-users/groups-saasapps.md)
+
+- [Manage dynamic rules for users in a group](../enterprise-users/groups-create-rule.md)
+
+- [Learn about Privileged Identity Management for Azure AD roles](../../active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)
active-directory How To Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-groups.md
+
+ Title: How to manage groups - Azure Active Directory | Microsoft Docs
+description: Instructions about how to manage Azure AD groups and group membership.
+Last updated : 08/29/2022
+# Manage Azure Active Directory groups and group membership
+
+Azure Active Directory (Azure AD) groups are used to manage users that all need the same access and permissions to resources, such as potentially restricted apps and services. Instead of adding special permissions to individual users, you create a group that applies the special permissions to every member of that group.
+
+This article covers basic group scenarios where a single group is added to a single resource and users are added as members to that group. For more complex scenarios like dynamic memberships and rule creation, see the [Azure Active Directory user management documentation](../enterprise-users/index.yml).
+
+Before adding groups and members, [learn about groups and membership types](concept-learn-about-groups.md) to help you decide which options to use when you create a group.
+
+## Create a basic group and add members
+You can create a basic group and add your members at the same time using the Azure Active Directory (Azure AD) portal. Azure AD roles that can manage groups include **Groups Administrator**, **User Administrator**, **Privileged Role Administrator**, and **Global Administrator**. Review the [appropriate Azure AD roles for managing groups](../roles/delegate-by-task.md#groups).
+
+To create a basic group and add members:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Go to **Azure Active Directory** > **Groups** > **New group**.
+
+ ![Screenshot of the 'Azure AD Groups' page with 'New group' option highlighted.](media/how-to-manage-groups/new-group.png)
+
+1. Select a **Group type**. For more information on group types, see the [learn about groups and membership types](concept-learn-about-groups.md) article.
+
+ - Selecting the **Microsoft 365** Group type enables the **Group email address** option.
+
+1. Enter a **Group name.** Choose a name that you'll remember and that makes sense for the group. A check will be performed to determine if the name is already in use. If the name is already in use, you'll be asked to change the name of your group.
+
+1. **Group email address**: Only available for Microsoft 365 group types. Enter an email address manually or use the email address built from the Group name you provided.
+
+1. **Group description.** Add an optional description to your group.
+
+1. Switch the **Azure AD roles can be assigned to the group** setting to **Yes** to use this group to assign Azure AD roles to members.
+ - This option is only available with Premium P1 or P2 licenses.
+ - You must have the **Privileged Role Administrator** or **Global Administrator** role.
+ - Enabling this option automatically selects **Assigned** as the Membership type.
+ - The ability to add roles while creating the group is added to the process.
+ - [Learn more about role-assignable groups](../roles/groups-create-eligible.md).
+
+1. Select a **Membership type.** For more information on membership types, see the [learn about groups and membership types](concept-learn-about-groups.md) article.
+
+1. Optionally add **Owners** or **Members**. Members and owners can be added after creating your group.
+ 1. Select the link under **Owners** or **Members** to populate a list of every user in your directory.
+ 1. Choose users from the list and then select the **Select** button at the bottom of the window.
+
+ ![Screenshot of selecting members for your group during the group creation process.](media/how-to-manage-groups/add-members.png)
+
+1. Select **Create**. Your group is created and ready for you to manage other settings.
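
The portal steps above can also be scripted. The following is a hedged sketch using the AzureAD PowerShell module; the group name, description, and mail nickname are placeholder values, not part of this article's procedure:

```powershell
# Sketch: create a security group with Assigned membership using the
# AzureAD PowerShell module. All values below are placeholders.
Connect-AzureAD

New-AzureADGroup -DisplayName "Marketing Team" `
    -Description "Users in the marketing department" `
    -MailEnabled $false `
    -MailNickName "NotSet" `
    -SecurityEnabled $true
```

The cmdlet returns the new group object, including the Object ID used by later member and owner operations.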
+
+### Turn off group welcome email
+
+A welcome notification is sent to all users when they're added to a new Microsoft 365 group, regardless of the membership type. When an attribute of a user or device changes, all dynamic group rules in the organization are processed for potential membership changes. Users who are added then also receive the welcome notification. You can turn off this behavior in [Exchange PowerShell](/powershell/module/exchange/users-and-groups/Set-UnifiedGroup).
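
As a hedged sketch, the Exchange Online cmdlet linked above exposes a switch for the welcome message; the group identity below is a placeholder:

```powershell
# Sketch: turn off the welcome message for a Microsoft 365 group via
# Exchange Online PowerShell. The group identity is a placeholder.
Connect-ExchangeOnline

Set-UnifiedGroup -Identity "Marketing Team" -UnifiedGroupWelcomeMessageEnabled:$false
```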
+
+## Add or remove members and owners
+Members and owners can be added to and removed from existing Azure AD groups. The process is the same for members and owners. You'll need the **Groups Administrator** or **User Administrator** role to add and remove members and owners.
+
+Need to add multiple members at one time? Learn about the [add members in bulk](../enterprise-users/groups-bulk-import-members.md) option.
+
+### Add members or owners of a group:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Go to **Azure Active Directory** > **Groups**.
+
+1. Select the group you need to manage.
+
+1. Select either **Members** or **Owners**.
+
+ ![Screenshot of the 'Group overview' page with Members and Owners menu options highlighted.](media/how-to-manage-groups/groups-members-owners.png)
+
+1. Select **+ Add** (members or owners).
+
+1. Scroll through the list or enter a name in the search box. You can choose multiple names at one time. When you're ready, select the **Select** button.
+
+ The **Group Overview** page updates to show the number of members who are now added to the group.
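
These steps can also be performed with the AzureAD PowerShell module. A minimal sketch, assuming placeholder group and user names:

```powershell
# Sketch: add a member and an owner to an existing group with the
# AzureAD module. The group name and user UPN are placeholders.
$group = Get-AzureADGroup -SearchString "Marketing Team"
$user  = Get-AzureADUser -ObjectId "user@contoso.com"

Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $user.ObjectId
Add-AzureADGroupOwner  -ObjectId $group.ObjectId -RefObjectId $user.ObjectId
```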
+
+### Remove members or owners of a group:
+
+1. Go to **Azure Active Directory** > **Groups**.
+
+1. Select the group you need to manage.
+
+1. Select either **Members** or **Owners**.
+
+1. Check the box next to a name from the list and select the **Remove** button.
+
+ ![Screenshot of group members with a name selected and the Remove button highlighted.](media/how-to-manage-groups/groups-remove-member.png)
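
Removal can likewise be scripted; a hedged sketch with placeholder names (note that owners are removed with a separate cmdlet):

```powershell
# Sketch: remove a member, and separately an owner, from a group with
# the AzureAD module. The group name and user UPN are placeholders.
$group = Get-AzureADGroup -SearchString "Marketing Team"
$user  = Get-AzureADUser -ObjectId "user@contoso.com"

Remove-AzureADGroupMember -ObjectId $group.ObjectId -MemberId $user.ObjectId
Remove-AzureADGroupOwner  -ObjectId $group.ObjectId -OwnerId  $user.ObjectId
```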
+
+## Edit group settings
+Using Azure AD, you can edit a group's name, description, or membership type. You'll need the **Groups Administrator** or **User Administrator** role to edit a group's settings.
+
+To edit your group settings:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Go to **Azure Active Directory** > **Groups**. The **Groups - All groups** page appears, showing all of your active groups.
+
+1. Scroll through the list or enter a group name in the search box. Select the group you need to manage.
+
+1. Select **Properties** from the side menu.
+
+ ![Screenshot of the 'Group overview' page with Properties menu option highlighted.](media/how-to-manage-groups/groups-overview.png)
+
+1. Update the **General settings** information as needed, including:
+
+ - **Group name.** Edit the existing group name.
+
+ - **Group description.** Edit the existing group description.
+
+ - **Group type.** You can't change the type of group after it's been created. To change the **Group type**, you must delete the group and create a new one.
+
+ - **Membership type.** Change the membership type. If you enabled the **Azure AD roles can be assigned to the group** option, you can't change the membership type. For more info about the available membership types, see the [learn about groups and membership types](concept-learn-about-groups.md) article.
+
+ - **Object ID.** You can't change the Object ID, but you can copy it to use in your PowerShell commands for the group. For more info about using PowerShell cmdlets, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-v2-cmdlets.md).
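
Because the Object ID never changes, it's the safest handle for scripted edits. A minimal sketch using the AzureAD module (the ID and new values are placeholders):

```powershell
# Sketch: update a group's display name and description by Object ID
# with the AzureAD module. The ID and new values are placeholders.
Set-AzureADGroup -ObjectId "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" `
    -DisplayName "Marketing Team (EMEA)" `
    -Description "Marketing users in the EMEA region"
```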
+
+## Add or remove a group from another group
+You can add an existing Security group to another Security group (also known as nested groups), creating a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time. You'll need the **Groups Administrator** or **User Administrator** role to edit group membership.
+
+We currently don't support:
+- Adding groups to a group synced with on-premises Active Directory.
+- Adding Security groups to Microsoft 365 groups.
+- Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.
+- Assigning apps to nested groups.
+- Applying licenses to nested groups.
+- Adding distribution groups in nesting scenarios.
+- Adding security groups as members of mail-enabled security groups.
+- Adding groups as members of a role-assignable group.
+
+### Add a group to another group
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Go to **Azure Active Directory** > **Groups**.
+
+1. On the **Groups - All groups** page, search for and select the group you want to become a member of another group.
+
+ >[!Note]
+    >You can only add your group as a member to one other group at a time. Wildcard characters aren't supported in the **Select Group** search box.
+
+1. On the group Overview page, select **Group memberships** from the side menu.
+
+1. Select **+ Add memberships**.
+
+1. Locate the group you want your group to be a member of and choose **Select**.
+
+    For this exercise, we're adding "MDM policy - West" to the "MDM policy - All org" group, so "MDM policy - West" inherits all the properties and configurations of the "MDM policy - All org" group.
+
+ ![Screenshot of making a group the member of another group with 'Group membership' from the side menu and 'Add membership' option highlighted.](media/how-to-manage-groups/nested-groups-selected.png)
+
+Now you can review the "MDM policy - West - Group memberships" page to see the group and member relationship.
+
+For a more detailed view of the group and member relationship, select the parent group name (MDM policy - All org) and take a look at the "MDM policy - West" page details.
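
Nesting can also be done with the AzureAD module: the member group's Object ID is passed as `-RefObjectId`, just as for a user. A hedged sketch reusing the example group names from this section:

```powershell
# Sketch: add the "MDM policy - West" group as a member of the
# "MDM policy - All org" group using the AzureAD module.
$parent = Get-AzureADGroup -SearchString "MDM policy - All org"
$child  = Get-AzureADGroup -SearchString "MDM policy - West"

Add-AzureADGroupMember -ObjectId $parent.ObjectId -RefObjectId $child.ObjectId
```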
+
+### Remove a group from another group
+You can remove an existing Security group from another Security group; however, removing the group also removes any inherited attributes and properties for its members.
+
+1. On the **Groups - All groups** page, search for and select the group you need to remove as a member of another group.
+
+1. On the group Overview page, select **Group memberships**.
+
+1. Select the parent group from the **Group memberships** page.
+
+1. Select **Remove**.
+
+ For this exercise, we're now going to remove "MDM policy - West" from the "MDM policy - All org" group.
+
+ ![Screenshot of the 'Group membership' page showing both the member and the group details with 'Remove membership' option highlighted.](media/how-to-manage-groups/remove-nested-group.png)
+
+## Delete a group
+You can delete an Azure AD group for any number of reasons, but typically it will be because you:
+
+- Chose the incorrect **Group type** option.
+
+- Created a duplicate group by mistake.
+
+- No longer need the group.
+
+To delete a group, you'll need the **Groups Administrator** or **User Administrator** role.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Go to **Azure Active Directory** > **Groups**.
+
+3. Search for and select the group you want to delete.
+
+4. Select **Delete**.
+
+ The group is deleted from your Azure Active Directory tenant.
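
Deletion can also be scripted; a minimal sketch with a placeholder Object ID:

```powershell
# Sketch: delete a group by Object ID with the AzureAD module.
# The Object ID below is a placeholder.
Remove-AzureADGroup -ObjectId "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"
```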
+
+## Next steps
+
+- [Learn about groups and assigning access rights to groups](concept-learn-about-groups.md)
+
+- [Manage groups using PowerShell commands](../enterprise-users/groups-settings-v2-cmdlets.md)
+
+- [Manage dynamic rules for users in a group](../enterprise-users/groups-create-rule.md)
+
+- [Scenarios, limitations, and known issues using groups to manage licensing in Azure Active Directory](../enterprise-users/licensing-group-advanced.md#limitations-and-known-issues)
+
+- [Associate or add an Azure subscription to Azure Active Directory](active-directory-how-subscriptions-associated-directory.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
- Deprecated functionality
- Plans for changes
+## February 2022
+
++
+
+
+### General Availability - France digital accessibility requirement
+
+**Type:** Plan for change
+**Service category:** Other
+**Product capability:** End User Experiences
+
+
+This change provides users who are signing into Azure Active Directory on iOS, Android, and Web UI flavors information about the accessibility of Microsoft's online services via a link on the sign-in page. This ensures that the France digital accessibility compliance requirements are met. The change will only be available for French language experiences. [Learn more](https://www.microsoft.com/fr-fr/accessibility/accessibilite/accessibility-statement)
+
++
+
+
+### General Availability - Downloadable access review history report
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+
+With Azure Active Directory (Azure AD) Access Reviews, you can create a downloadable review history to help your organization gain more insight. The report pulls the decisions that were taken by reviewers when a report is created. These reports can be constructed to include specific access reviews, for a specific time frame, and can be filtered to include different review types and review results. [Learn more](../governance/access-reviews-downloadable-review-history.md)
+
++++
+
+
+### Public Preview of Identity Protection for Workload Identities
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+Azure AD Identity Protection is extending its core capabilities of detecting, investigating, and remediating identity-based risk to workload identities. This allows organizations to better protect their applications, service principals, and managed identities. We are also extending Conditional Access so you can block at-risk workload identities. [Learn more](../identity-protection/concept-workload-identity-risk.md)
+
++
+
+
+### Public Preview - Cross-tenant access settings for B2B collaboration
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** Collaboration
+
+
+
+Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. [Learn more](../external-identities/cross-tenant-access-overview.md)
+
++
+
+
+### Public preview - Create Azure AD access reviews with multiple stages of reviewers
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+
+Use multi-stage reviews to create Azure AD access reviews in sequential stages, each with its own set of reviewers and configurations. Supports multiple stages of reviewers to satisfy scenarios such as: independent groups of reviewers reaching quorum, escalations to other reviewers, and reducing burden by allowing for later stage reviewers to see a filtered-down list. For public preview, multi-stage reviews are only supported on reviews of groups and applications. [Learn more](../governance/create-access-review.md)
+
++
+
+
+### New Federated Apps available in Azure AD Application gallery - February 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Third Party Integration
+
+
+In February 2022 we added the following 20 new applications in our App gallery with Federation support:
+
+[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md), [Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [Salus](https://salus.com/login), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
+
+You can also find the documentation of all the applications here: [https://aka.ms/AppsTutorial](../saas-apps/tutorial-list.md).
+
+For listing your application in the Azure AD app gallery, please read the details here: [https://aka.ms/AzureADAppRequest](../manage-apps/v2-howto-app-gallery-listing.md)
+
+
++
+
+
+### Two new MDA detections in Identity Protection
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+Identity Protection has added two new detections from Microsoft Defender for Cloud Apps (formerly MCAS). The Mass Access to Sensitive Files detection detects anomalous user activity, and the Unusual Addition of Credentials to an OAuth app detects suspicious service principal activity. [Learn more](../identity-protection/concept-identity-protection-risks.md)
+
++
+
+
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - February 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [BullseyeTDP](../saas-apps/bullseyetdp-provisioning-tutorial.md)
+- [GitHub Enterprise Managed User (OIDC)](../saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md)
+- [Gong](../saas-apps/gong-provisioning-tutorial.md)
+- [LanSchool Air](../saas-apps/lanschool-air-provisioning-tutorial.md)
+- [ProdPad](../saas-apps/prodpad-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
++
+
+
+### General Availability - Privileged Identity Management (PIM) role activation for SharePoint Online enhancements
+
+**Type:** Changed feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+
+We've improved the Privileged Identity Management (PIM) time to role activation for SharePoint Online. Now, when activating a role in PIM for SharePoint Online, you should be able to use your permissions right away in SharePoint Online. This change will roll out in stages, so you might not yet see these improvements in your organization. [Learn more](../privileged-identity-management/pim-how-to-activate-role.md)
+
+ ## January 2022
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For listing your application in the Azure AD app gallery, please read the detail
-
-
-## February 2022
-
--
-
-
-### General Availability - France digital accessibility requirement
-
-**Type:** Plan for change
-**Service category:** Other
-**Product capability:** End User Experiences
-
-
-This change provides users who are signing into Azure Active Directory on iOS, Android, and Web UI flavors information about the accessibility of Microsoft's online services via a link on the sign-in page. This ensures that the France digital accessibility compliance requirements are met. The change will only be available for French language experiences.[Learn more](https://www.microsoft.com/fr-fr/accessibility/accessibilite/accessibility-statement)
-
--
-
-
-### General Availability - Downloadable access review history report
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-
-With Azure Active Directory (Azure AD) Access Reviews, you can create a downloadable review history to help your organization gain more insight. The report pulls the decisions that were taken by reviewers when a report is created. These reports can be constructed to include specific access reviews, for a specific time frame, and can be filtered to include different review types and review results.[Learn more](../governance/access-reviews-downloadable-review-history.md)
-
----
-
-
-### Public Preview of Identity Protection for Workload Identities
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-
-Azure AD Identity Protection is extending its core capabilities of detecting, investigating, and remediating identity-based risk to workload identities. This allows organizations to better protect their applications, service principals, and managed identities. We are also extending Conditional Access so you can block at-risk workload identities. [Learn more](../identity-protection/concept-workload-identity-risk.md)
-
--
-
-
-### Public Preview - Cross-tenant access settings for B2B collaboration
-
-**Type:** New feature
-**Service category:** B2B
-**Product capability:** Collaboration
-
-
-
-Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. [Learn more](../external-identities/cross-tenant-access-overview.md)
-
--
-
-
-### Public preview - Create Azure AD access reviews with multiple stages of reviewers
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-
-Use multi-stage reviews to create Azure AD access reviews in sequential stages, each with its own set of reviewers and configurations. Supports multiple stages of reviewers to satisfy scenarios such as: independent groups of reviewers reaching quorum, escalations to other reviewers, and reducing burden by allowing for later stage reviewers to see a filtered-down list. For public preview, multi-stage reviews are only supported on reviews of groups and applications. [Learn more](../governance/create-access-review.md)
-
--
-
-
-### New Federated Apps available in Azure AD Application gallery - February 2022
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Third Party Integration
-
-
-In February 2022 we added the following 20 new applications in our App gallery with Federation support:
-
-[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md),[Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [Salus](https://salus.com/login), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
-
-You can also find the documentation of all the applications from here: [https://aka.ms/AppsTutorial](../saas-apps/tutorial-list.md),
-
-For listing your application in the Azure AD app gallery, please read the details here: [https://aka.ms/AzureADAppRequest](../manage-apps/v2-howto-app-gallery-listing.md)
-
-
--
-
-
-### Two new MDA detections in Identity Protection
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-
-Identity Protection has added two new detections from Microsoft Defender for Cloud Apps, (formerly MCAS). The Mass Access to Sensitive Files detection detects anomalous user activity, and the Unusual Addition of Credentials to an OAuth app detects suspicious service principal activity.[Learn more](../identity-protection/concept-identity-protection-risks.md)
-
--
-
-
-### Public preview - New provisioning connectors in the Azure AD Application Gallery - February 2022
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [BullseyeTDP](../saas-apps/bullseyetdp-provisioning-tutorial.md)
-- [GitHub Enterprise Managed User (OIDC)](../saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md)
-- [Gong](../saas-apps/gong-provisioning-tutorial.md)
-- [LanSchool Air](../saas-apps/lanschool-air-provisioning-tutorial.md)
-- [ProdPad](../saas-apps/prodpad-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
-
--
-
-
-### General Availability - Privileged Identity Management (PIM) role activation for SharePoint Online enhancements
-
-**Type:** Changed feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-
-We've improved the Privileged Identity management (PIM) time to role activation for SharePoint Online. Now, when activating a role in PIM for SharePoint Online, you should be able to use your permissions right away in SharePoint Online. This change will roll out in stages, so you might not yet see these improvements in your organization. [Learn more](../privileged-identity-management/pim-how-to-activate-role.md)
-
--
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
For further details on task definitions and their parameters, see [Lifecycle Wor
## Next steps
+- [Create workflow (lifecycle workflow)](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows?view=graph-rest-beta)
- [Manage a workflow's properties](manage-workflow-properties.md)
- [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/delet
> Permanently deleted workflows are not able to be restored.

## Next steps
+- [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta)
- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md)
- [Manage Lifecycle Workflow Versions](manage-workflow-tasks.md)
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following table shows the scheduling (trigger) relevant attributes and the m
These attributes **are not** automatically populated using synchronization methods such as Azure AD Connect or Azure AD Connect cloud sync.

> [!NOTE]
-> Currently, automatic synchronization of the employeeLeaveDateTime attribute for HR Inbound scenarios is not available. To take advantaged of leaver scenarios, you can set the employeeLeaveDateTime manually. Manually setting the attribute can be done in the portal or with Graph. For more information see [User profile in Azure](../fundamentals/active-directory-users-profile-azure-portal.md) and [Update user](/graph/api/user-update?view=graph-rest-1.0&tabs=http).
+> Currently, automatic synchronization of the employeeLeaveDateTime attribute for HR Inbound scenarios is not available. To take advantage of leaver scenarios, you can set the employeeLeaveDateTime manually. Manually setting the attribute can be done in the portal or with Graph. For more information, see [User profile in Azure](../fundamentals/active-directory-users-profile-azure-portal.md) and [Update user](/graph/api/user-update?view=graph-rest-beta&tabs=http).
This document explains how to set up synchronization from on-premises Azure AD Connect cloud sync and Azure AD Connect for the required attributes.
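
As one hedged way to set the attribute manually through Graph, the Microsoft Graph PowerShell SDK can send the PATCH directly. The user ID, timestamp, and permission scope below are assumptions to verify against the linked Update user reference:

```powershell
# Sketch: manually set employeeLeaveDateTime on a user through the
# Microsoft Graph beta endpoint. The user ID and date are placeholders,
# and the required permission scope should be verified for your tenant.
Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"

$userId = "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/users/$userId" `
    -Body @{ employeeLeaveDateTime = "2022-09-30T23:59:59Z" }
```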
active-directory On Demand Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md
Content-type: application/json
## Next steps
+- [workflow: activate (run a workflow on-demand)](/graph/api/identitygovernance-workflow-activate?view=graph-rest-beta)
- [Customize the schedule of workflows](customize-workflow-schedule.md)
- [Delete a Lifecycle workflow](delete-lifecycle-workflow.md)
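The on-demand run that this article describes maps to the Graph beta `activate` action linked above, which takes a `subjects` collection of user IDs in its JSON body. A minimal sketch, with hypothetical workflow and user IDs, that only assembles the request:

```python
import json

# Hypothetical IDs for illustration only.
workflow_id = "11111111-1111-1111-1111-111111111111"
body = {
    "subjects": [
        {"id": "22222222-2222-2222-2222-222222222222"},
    ]
}

# POST target for running the workflow on demand.
endpoint = (
    "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows"
    f"/workflows/{workflow_id}/activate"
)
print("POST", endpoint)
print(json.dumps(body))
```

The request is sent with `Content-type: application/json`, as shown earlier in this article; see the `workflow: activate` reference for the full permission requirements.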
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
The following reference document provides an overview of a workflow created usin
[!INCLUDE [Azure AD Premium P2 license](../../../includes/active-directory-p2-license.md)]
-|Column1 |Display String |Description |Admin Consent Required |
+|Parameter |Display String |Description |Admin Consent Required |
|||||
|LifecycleWorkflows.Read.All | Read all Lifecycle workflows, tasks, user states| Allows the app to list and read all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes |
|LifecycleWorkflows.ReadWrite.All | Read and write all lifecycle workflows, tasks, user states.| Allows the app to create, update, list, read and delete all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes |
active-directory Asignet Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asignet-sso-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with AsignetSSOIntegration'
+description: Learn how to configure single sign-on between Azure Active Directory and AsignetSSOIntegration.
+Last updated: 08/23/2022
+# Tutorial: Azure AD SSO integration with AsignetSSOIntegration
+
+In this tutorial, you'll learn how to integrate AsignetSSOIntegration with Azure Active Directory (Azure AD). When you integrate AsignetSSOIntegration with Azure AD, you can:
+
+* Control in Azure AD who has access to AsignetSSOIntegration.
+* Enable your users to be automatically signed-in to AsignetSSOIntegration with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* AsignetSSOIntegration single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* AsignetSSOIntegration supports **SP** and **IDP** initiated SSO.
+
+## Add AsignetSSOIntegration from the gallery
+
+To configure the integration of AsignetSSOIntegration into Azure AD, you need to add AsignetSSOIntegration from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **AsignetSSOIntegration** in the search box.
+1. Select **AsignetSSOIntegration** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for AsignetSSOIntegration
+
+Configure and test Azure AD SSO with AsignetSSOIntegration using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AsignetSSOIntegration.
+
+To configure and test Azure AD SSO with AsignetSSOIntegration, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure AsignetSSOIntegration SSO](#configure-asignetssointegration-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AsignetSSOIntegration test user](#create-asignetssointegration-test-user)** - to have a counterpart of B.Simon in AsignetSSOIntegration that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **AsignetSSOIntegration** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://trim.corp.microsoft.com/sso.ashx`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up AsignetSSOIntegration** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Attributes")
+
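The Federation Metadata XML downloaded above embeds the tenant's token-signing certificate. As an illustrative sketch (the inline XML below is a minimal stand-in for a real metadata file), the certificate can be extracted with the standard library:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for the downloaded Federation Metadata XML; a real file
# contains the full EntityDescriptor for your Azure AD tenant.
metadata = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/contoso/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIC...base64...</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

ns = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}
root = ET.fromstring(metadata)
# Locate the base64-encoded signing certificate anywhere in the document.
cert = root.find(".//ds:X509Certificate", ns)
print(cert.text)
```

This kind of check is useful for confirming the file you hand to the application's support team actually contains the signing certificate.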
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AsignetSSOIntegration.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **AsignetSSOIntegration**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure AsignetSSOIntegration SSO
+
+To configure single sign-on on the **AsignetSSOIntegration** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [AsignetSSOIntegration support team](mailto:us@asignet.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create AsignetSSOIntegration test user
+
+In this section, you create a user called Britta Simon in AsignetSSOIntegration. Work with [AsignetSSOIntegration support team](mailto:us@asignet.com) to add the users in the AsignetSSOIntegration platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to AsignetSSOIntegration Sign-on URL where you can initiate the login flow.
+
+* Go to AsignetSSOIntegration Sign-on URL directly and initiate the login flow from there.
+
+### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the AsignetSSOIntegration for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the AsignetSSOIntegration tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the AsignetSSOIntegration for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure AsignetSSOIntegration, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Fourkites Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fourkites-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with FourKites SAML2.0 SSO for Tracking'
+description: Learn how to configure single sign-on between Azure Active Directory and FourKites SAML2.0 SSO for Tracking.
+Last updated: 08/26/2022
+# Tutorial: Azure AD SSO integration with FourKites SAML2.0 SSO for Tracking
+
+In this tutorial, you'll learn how to integrate FourKites SAML2.0 SSO for Tracking with Azure Active Directory (Azure AD). When you integrate FourKites SAML2.0 SSO for Tracking with Azure AD, you can:
+
+* Control in Azure AD who has access to FourKites SAML2.0 SSO for Tracking.
+* Enable your users to be automatically signed-in to FourKites SAML2.0 SSO for Tracking with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* FourKites SAML2.0 SSO for Tracking single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* FourKites SAML2.0 SSO for Tracking supports **SP** and **IDP** initiated SSO.
+* FourKites SAML2.0 SSO for Tracking supports **Just In Time** user provisioning.
+
+## Add FourKites SAML2.0 SSO for Tracking from the gallery
+
+To configure the integration of FourKites SAML2.0 SSO for Tracking into Azure AD, you need to add FourKites SAML2.0 SSO for Tracking from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **FourKites SAML2.0 SSO for Tracking** in the search box.
+1. Select **FourKites SAML2.0 SSO for Tracking** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for FourKites SAML2.0 SSO for Tracking
+
+Configure and test Azure AD SSO with FourKites SAML2.0 SSO for Tracking using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in FourKites SAML2.0 SSO for Tracking.
+
+To configure and test Azure AD SSO with FourKites SAML2.0 SSO for Tracking, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure FourKites SAML2.0 SSO for Tracking SSO](#configure-fourkites-saml20-sso-for-tracking-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create FourKites SAML2.0 SSO for Tracking test user](#create-fourkites-saml20-sso-for-tracking-test-user)** - to have a counterpart of B.Simon in FourKites SAML2.0 SSO for Tracking that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **FourKites SAML2.0 SSO for Tracking** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type one of the following URLs:
+
+ | **Sign-on URL**|
+ |-|
+ | `https://upsgff.fourkites.com` |
+ | `https://upsgff-staging.fourkites.com` |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to FourKites SAML2.0 SSO for Tracking.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **FourKites SAML2.0 SSO for Tracking**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure FourKites SAML2.0 SSO for Tracking SSO
+
+To configure single sign-on on the **FourKites SAML2.0 SSO for Tracking** side, you need to send the **App Federation Metadata Url** to the [FourKites SAML2.0 SSO for Tracking support team](mailto:support@fourkites.com). They use this value to configure the SAML SSO connection properly on both sides.
+
+### Create FourKites SAML2.0 SSO for Tracking test user
+
+In this section, a user called B.Simon is created in FourKites SAML2.0 SSO for Tracking. FourKites SAML2.0 SSO for Tracking supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in FourKites SAML2.0 SSO for Tracking, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to FourKites SAML2.0 SSO for Tracking Sign-on URL where you can initiate the login flow.
+
+* Go to FourKites SAML2.0 SSO for Tracking Sign-on URL directly and initiate the login flow from there.
+
+### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the FourKites SAML2.0 SSO for Tracking for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the FourKites SAML2.0 SSO for Tracking tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the FourKites SAML2.0 SSO for Tracking for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure FourKites SAML2.0 SSO for Tracking, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Ideagen Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideagen-cloud-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|name.familyName|String||&check;|
|externalId|String||&check;|
+ >[!NOTE]
+ >All the required fields (for example, first name, last name, and email) must be populated in Azure AD for auto-provisioning to work without any issues.
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To enable the Azure AD provisioning service for Ideagen Cloud, change the **Provisioning Status** to **On** in the **Settings** section.
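The note above about required fields suggests a simple pre-flight check before turning provisioning on. A hedged sketch (the attribute names below are illustrative, not the literal mapping names):

```python
# Hypothetical pre-check: verify the attributes the provisioning mappings
# mark as required are populated on a user before syncing.
REQUIRED = ("givenName", "familyName", "email")

def missing_required(user: dict) -> list:
    """Return the required attribute names that are absent or empty."""
    return [attr for attr in REQUIRED if not user.get(attr)]

# A user missing an email would fail auto-provisioning per the note above.
user = {"givenName": "B.", "familyName": "Simon", "email": ""}
print(missing_required(user))  # -> ['email']
```

Running such a check over the users in scope surfaces incomplete profiles before the provisioning service reports them as errors.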
active-directory Nice Cxone Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nice-cxone-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with NICE CXone'
+description: Learn how to configure single sign-on between Azure Active Directory and NICE CXone.
++++++++ Last updated : 08/26/2022++++
+# Tutorial: Azure AD SSO integration with NICE CXone
+
+In this tutorial, you'll learn how to integrate NICE CXone with Azure Active Directory (Azure AD). When you integrate NICE CXone with Azure AD, you can:
+
+* Control in Azure AD who has access to NICE CXone.
+* Enable your users to be automatically signed-in to NICE CXone with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* NICE CXone single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* NICE CXone supports **SP** initiated SSO.
+
+## Add NICE CXone from the gallery
+
+To configure the integration of NICE CXone into Azure AD, you need to add NICE CXone from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **NICE CXone** in the search box.
+1. Select **NICE CXone** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for NICE CXone
+
+Configure and test Azure AD SSO with NICE CXone using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at NICE CXone.
+
+To configure and test Azure AD SSO with NICE CXone, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure NICE CXone SSO](#configure-nice-cxone-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create NICE CXone test user](#create-nice-cxone-test-user)** - to have a counterpart of B.Simon in NICE CXone that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **NICE CXone** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |-|
+    | `https://cxone.niceincontact.com/<guid>` |
+ | `https://cxone-gov.niceincontact.com/<guid>` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://cxone.niceincontact.com/auth/authorize?tenantId=<guid>` |
+ | `https://cxone-gov.niceincontact.com/auth/authorize?tenantId=<guid>` |
+
+ c. In the **Sign-on URL** text box, type one of the following URLs:
+
+ | **Sign-on URL** |
+ |-|
+ | `https://cxone.niceincontact.com` |
+ | `https://cxone-gov.niceincontact.com` |
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [NICE CXone support team](https://www.nice.com/services/customer-support) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up NICE CXone** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
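The Identifier and Reply URL patterns above differ only in the tenant GUID that NICE CXone support supplies. As an illustrative sketch with a placeholder GUID, substituting it into the commercial-cloud patterns looks like:

```python
# Hypothetical tenant GUID -- NICE CXone support provides the real value.
tenant_id = "33333333-3333-3333-3333-333333333333"

# Commercial-cloud patterns from the Basic SAML Configuration above;
# the -gov host follows the same shape for the government cloud.
identifier = f"https://cxone.niceincontact.com/{tenant_id}"
reply_url = f"https://cxone.niceincontact.com/auth/authorize?tenantId={tenant_id}"

print(identifier)
print(reply_url)
```

Double-checking the assembled values against the patterns in the portal helps catch transcription errors before saving the configuration.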
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to NICE CXone.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **NICE CXone**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure NICE CXone SSO
+
+To configure single sign-on on the **NICE CXone** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [NICE CXone support team](https://www.nice.com/services/customer-support). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create NICE CXone test user
+
+In this section, you create a user called Britta Simon at NICE CXone. Work with [NICE CXone support team](https://www.nice.com/services/customer-support) to add the users in the NICE CXone platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to NICE CXone Sign-on URL where you can initiate the login flow.
+
+* Go to NICE CXone Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the NICE CXone tile in the My Apps, this will redirect to NICE CXone Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure NICE CXone, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
Before you can continue with the steps below you need to meet the following requ
## Scenario description
-When onboarding users you can remove the need for error prone manual onboarding steps by using Verified ID with A10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access.
+When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access. Learn more about [account onboarding](https://docs.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
As a developer you can share these steps with your tenant administrator to obtai
1. Go to QuickStart > Verification Request > [Start](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/QuickStartVerifierBlade).
1. Choose **Select Issuer**.
1. Look for AU10TIX in the **Search/select issuers** drop-down.
- :::image type="content" source="media/verified-id-partner-au10tix/select-issuers.png" alt-text="Screenshot of the portal section used to choose issuers.":::
+
+ [ ![Screenshot of the portal section used to choose issuers.](./media/verified-id-partner-au10tix/select-issuers.png)](./media/verified-id-partner-au10tix/select-issuers.png#lightbox)
1. Check the **Government Issued ID – Global** or other credential type.
1. Select **Add** and then select **Review**.
1. Download the request body and copy/paste the POST API request URL.
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
Previously updated : 08/26/2022
Last updated : 09/01/2022
# Customer intent: As a developer, I'm looking for information about the open standards that are supported by Microsoft Entra Verified ID.
You can use Entra Verified ID with LexisNexis Risk Solutions to enable faster on
## Scenario description
-Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access.
+Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access. Learn more about [account onboarding](https://docs.microsoft.com/azure/active-directory/verifiable-credentials/plan-verification-solution#account-onboarding).
:::image type="content" source="media/verified-id-partner-au10tix/vc-solution-architecture-diagram.png" alt-text="Diagram of the verifiable credential solution.":::
As a developer you'll provide the steps below to your tenant administrator. The
1. Select **Select Issuer**.
1. Look for LexisNexis in the **Search/select issuers** drop-down.
- ![Screenshot of the select issuer section of the portal showing LexisNexis as the choice.](media/verified-id-partner-lexisnexis/select-issuer.png)
+ [ ![Screenshot of the select issuer section of the portal showing LexisNexis as the choice.](./media/verified-id-partner-lexisnexis/select-issuer.png)](./media/verified-id-partner-lexisnexis/select-issuer.png#lightbox)
1. Check the credential type you've discussed with your LexisNexis customer success manager for your specific needs.
1. Choose **Add** and then choose **Review**.
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
The Request Service REST API issuance request requires the following HTTP header
|`Authorization`| Attach the access token as a bearer token to the authorization header in an HTTP request. For example, `Authorization: Bearer <token>`.|
|`Content-Type`| `Application/json`|
-Construct an HTTP POST request to the Request Service REST API. Replace the `{tenantID}` with your tenant ID or tenant name.
+Construct an HTTP POST request to the Request Service REST API.
```http
https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/createIssuanceRequest
```
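Putting the endpoint and headers together, the request might be sent as in the following sketch (`$ACCESS_TOKEN` and `request.json` are placeholders for the bearer token and the issuance request payload you've prepared, not values defined by this article):

```bash
# Sketch: POST the issuance request with the required headers.
# $ACCESS_TOKEN and request.json are placeholders you supply.
curl -X POST "https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/createIssuanceRequest" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d @request.json
```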
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
- Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) (Preview)
-description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.
--- Previously updated : 08/29/2022--
-# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
-
-The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires IP address planning and could lead to address exhaustion and difficulties in scaling your clusters as your application demands grow.
-
-With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
-
-> [!NOTE]
-> Azure CNI Overlay is currently only available in the West Central US region.
-
-## Overview of overlay networking
-
-In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
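As a back-of-the-envelope check (illustrative arithmetic, not an AKS command), a pod CIDR with prefix length *P* yields 2^(24 − P) node-sized `/24` blocks:

```bash
# Each node consumes one /24 from the pod CIDR, so a CIDR with prefix
# length P supports 2^(24 - P) nodes' worth of address blocks.
podCidrPrefix=16                         # e.g. a 10.244.0.0/16 pod CIDR
echo $(( 1 << (24 - podCidrPrefix) ))    # prints 256
```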
-
-A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.
--
-Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.
-
-Outbound (egress) connectivity to the internet for overlay pods can be provided using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).
-
-Ingress connectivity to the cluster can be achieved using an ingress controller such as Nginx or [HTTP application routing](./http-application-routing.md).
-
-## Difference between Kubenet and Azure CNI Overlay
-
-Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations. The following table provides a detailed comparison between Kubenet and Azure CNI Overlay. If you don't want to assign VNet IP addresses to pods because of an IP address shortage, Azure CNI Overlay is the recommended solution.
-
-| Area | Azure CNI Overlay | Kubenet |
-| -- | :--: | -- |
-| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
-| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
-| Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency |
-| Kubernetes Network Policies | Azure Network Policies, Calico | Calico |
-| OS platforms supported | Linux only | Linux only |
-
-## IP address planning
-
-* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so ensure that you have a subnet big enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
-
-* **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
-The following are additional factors to consider when planning pod address space:
- * Pod CIDR space must not overlap with the cluster subnet range.
- * Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
- * The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
-
-* **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range should also not overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks.
-
-* **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the kubernetes.default.svc.cluster.local address.
-
-## Maximum pods per node
-
-You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default for Azure CNI Overlay is 30. The maximum value that you can specify in Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies to the nodes in that node pool only.
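For example, a sketch of setting the value when adding a node pool (resource names are placeholders; for Azure CNI Overlay, `--max-pods` must be between 10 and 250):

```azurecli-interactive
# Cap pods per node for a new node pool (placeholder names)
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myOverlayCluster \
    --name mynodepool \
    --max-pods 100
```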
-
-## Choosing a network model to use
-
-Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
-
-Use overlay networking when:
-
-* You would like to scale to a large number of Pods but have limited IP address space in your VNet.
-* Most of the pod communication is within the cluster.
-* You don't need advanced AKS features, such as virtual nodes.
-
-Use the traditional VNet option when:
-
-* You have available IP address space.
-* Most of the pod communication is to resources outside of the cluster.
-* Resources outside the cluster need to reach pods directly.
-* You need AKS advanced features, such as virtual nodes.
-
-## Limitations with Azure CNI Overlay
-
-The overlay solution has the following limitations today:
-
-* Only available for Linux and not for Windows.
-* You can't deploy multiple overlay clusters in the same subnet.
-* Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay.
-* You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
-
-## Steps to set up overlay clusters
--
-The following example walks through the steps to create a new virtual network with a subnet for the cluster nodes and an AKS cluster that uses Azure CNI Overlay. Be sure to replace the variables with your own values.
-
-First, opt into the feature by running the following command:
-
-```azurecli-interactive
-az feature register --namespace Microsoft.ContainerService --name AzureOverlayPreview
-```
-
-Create a virtual network with a subnet for the cluster nodes.
-
-```azurecli-interactive
-resourceGroup="myResourceGroup"
-vnet="myVirtualNetwork"
-location="westcentralus"
-
-# Create the resource group
-az group create --name $resourceGroup --location $location
-
-# Create a VNet and a subnet for the cluster nodes
-az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefix 10.10.0.0/16 -o none
-```
-
-Create a cluster with Azure CNI Overlay. Use `--network-plugin-mode` to specify that this is an overlay cluster. If the pod CIDR isn't specified, AKS assigns the default space 10.244.0.0/16.
-
-```azurecli-interactive
-clusterName="myOverlayCluster"
-subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
-
-az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet
-```
-
-## Frequently asked questions
-
-* *How do pods and cluster nodes communicate with each other?*
-
- Pods and nodes talk to each other directly without any SNAT requirements.
--
-* *Can I configure the size of the address space assigned to each node?*
-
- No, this is fixed at `/24` today and can't be changed.
--
-* *Can I add more private pod CIDRs to a cluster after the cluster has been created?*
-
- No, a private pod CIDR can only be specified at the time of cluster creation.
--
-* *What are the max nodes and pods per cluster supported by Overlay?*
-
- The max scale in terms of nodes and pods per cluster is the same as the max limits supported by AKS today.
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
API Management uses a public IP address for a connection outside the VNet or a p
If your API Management service is a Consumption tier service, it doesn't have a dedicated IP address. A Consumption tier service runs on shared infrastructure without a deterministic IP address.
-For traffic restriction purposes, you can use the range of IP addresses of Azure data centers. Refer to [the Azure Functions documentation article](../azure-functions/ip-addresses.md#data-center-outbound-ip-addresses) for precise steps.
+If you need to add the outbound IP addresses used by your Consumption tier instance to an allowlist, you can add the instance's data center (Azure region) to an allowlist. You can [download a JSON file that lists IP addresses for all Azure data centers](https://www.microsoft.com/download/details.aspx?id=56519). Then find the JSON fragment that applies to the region that your instance runs in.
+
+For example, the following JSON fragment is what the allowlist for Western Europe might look like:
+
+```json
+{
+ "name": "AzureCloud.westeurope",
+ "id": "AzureCloud.westeurope",
+ "properties": {
+ "changeNumber": 9,
+ "region": "westeurope",
+ "platform": "Azure",
+ "systemService": "",
+ "addressPrefixes": [
+ "13.69.0.0/17",
+ "13.73.128.0/18",
+ ... Some IP addresses not shown here
+ "213.199.180.192/27",
+ "213.199.183.0/24"
+ ]
+ }
+}
+```
+
+For information about when this file is updated and when the IP addresses change, expand the **Details** section of the [Download Center page](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
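If you script the lookup, something like the following sketch can pull just the prefixes for your region out of the downloaded file. It assumes `jq` is installed and, for illustration, writes a trimmed stand-in for the file (the real download's filename carries a date suffix):

```bash
# Trimmed stand-in for the downloaded service tags file (illustration only)
cat > ServiceTags_Public.json <<'EOF'
{ "values": [ { "name": "AzureCloud.westeurope",
    "properties": { "region": "westeurope",
      "addressPrefixes": [ "13.69.0.0/17", "13.73.128.0/18" ] } } ] }
EOF

# Print the address prefixes for one region
jq -r '.values[] | select(.name == "AzureCloud.westeurope") | .properties.addressPrefixes[]' ServiceTags_Public.json
```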
## Changes to the IP addresses
api-management Developer Portal Basic Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-basic-authentication.md
+
+ Title: Set up basic authentication to developer portal
+
+description: Learn how to set up user accounts with username and password authentication to the developer portal in Azure API Management.
++++ Last updated : 08/30/2022+++
+# Configure users of the developer portal to authenticate using usernames and passwords
+
+In the developer portal for Azure API Management, the default authentication method for users is to provide a username and password. In this article, learn how to set up users with basic authentication credentials to the developer portal.
++
+## Prerequisites
+
+- Complete the [Create an Azure API Management instance](get-started-create-service-instance.md) quickstart.
++++
+## Confirm the username and password provider
+
+By default, the username and password *identity provider* is enabled in the developer portal. To confirm this setting:
+
+1. In the left menu of your API Management instance, under **Developer portal**, select **Identities**.
+1. In the **Provider type** list, confirm that **Username and password** appears.
+
+If the provider isn't already enabled, you can add it:
+
+1. In the left menu of your API Management instance, under **Developer portal**, select **Identities** > **+ Add**.
+1. Under **Type**, select **Username and password**, and then select **Add**.
+
+## Add a username and password
+
+There are two ways to add a username and password for authentication to the developer portal:
+
+* An API publisher can add a user through the Azure portal, or with equivalent Azure tools such as the [New-AzApiManagementUser](/powershell/module/az.apimanagement/new-azapimanagementuser) Azure PowerShell cmdlet. For steps to use the portal, see [How to manage user accounts in Azure API Management](api-management-howto-create-or-invite-developers.md).
+
+ :::image type="content" source="media/developer-portal-basic-authentication/add-user-portal.png" alt-text="Screenshot showing how to add a user in the Azure portal.":::
+
+* An API consumer (developer) can sign up directly in the developer portal, using the **Sign up** page.
+
+ :::image type="content" source="media/developer-portal-basic-authentication/developer-portal-sign-up-page.png" alt-text="Screenshot of the sign-up page in the developer portal.":::
+
+> [!NOTE]
+> API Management enforces password strength requirements including password length. When you add a user in the Azure portal, the password must be at least 6 characters long. When a developer signs up or resets a password through the developer portal, the password must be at least 8 characters long.
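Scripted alternatives exist as well; for example, an Azure CLI sketch roughly equivalent to the PowerShell cmdlet mentioned above (resource names and credentials are placeholders, and the password must meet the length requirement in the note):

```azurecli-interactive
# Create a developer portal user with basic credentials (placeholder values)
az apim user create \
    --resource-group myResourceGroup \
    --service-name myApimInstance \
    --user-id newUser1 \
    --email newuser@contoso.com \
    --first-name New \
    --last-name User \
    --password 'P@ssw0rd123'
```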
+
+## Delete the username and password provider
+
+If you've configured another identity provider for the developer portal such as [Azure AD](api-management-howto-aad.md) or [Azure AD B2C](api-management-howto-aad-b2c.md), you might want to delete the username and password provider.
+
+Deleting the identity provider prevents new users from being added with username and password authentication. Existing users configured for basic authentication are also prevented from signing in to the developer portal.
+
+1. In the left menu of your API Management instance, under **Developer portal**, select **Identities**.
+1. In the **Provider type** list, select **Username and password**. In the context menu (**...**), select **Delete**.
+
+> [!TIP]
+> If you want to disable all sign up or sign in functionality in the developer portal, see [How do I disable sign up in the developer portal?](developer-portal-faq.md#how-do-i-disable-sign-up-in-the-developer-portal)
++
+## Next steps
+
+For steps to add other identity providers for developer sign-up to the developer portal, see:
+
+- [Authorize developer accounts by using Azure Active Directory in Azure API Management](api-management-howto-aad.md)
+- [Authorize developer accounts by using Azure Active Directory B2C in Azure API Management](api-management-howto-aad-b2c.md)
api-management How To Configure Local Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md
The self-hosted gateway also supports a number of protocols including `localsysl
| Field | Default | Description |
| - | - | - |
| telemetry.logs.std | `text` | Enables logging to standard streams. Value can be `none`, `text`, `json` |
-| telemetry.logs.local | `auto` | Enables local logging. Value can be `none`, `auto`, `localsyslog`, `rfc5424`, `journal`, `json` |
+| telemetry.logs.local | `none` | Enables local logging. Value can be `none`, `auto`, `localsyslog`, `rfc5424`, `journal`, `json` |
| telemetry.logs.local.localsyslog.endpoint | n/a | Specifies localsyslog endpoint. |
| telemetry.logs.local.localsyslog.facility | n/a | Specifies localsyslog [facility code](https://en.wikipedia.org/wiki/Syslog#Facility), e.g., `7` |
| telemetry.logs.local.rfc5424.endpoint | n/a | Specifies rfc5424 endpoint. |
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
You can put your web application firewall devices, such as Azure Application Gat
Your application will use one of the default outbound addresses for egress traffic to public endpoints. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.

> [!NOTE]
-> Outbound SMTP connectivity (port 25) is supported for App Service Environment v3. However, the supportability is determined by the subscription where the virtual network is deployed. For virtual networks created before 1. August 2022, you will have to re-enable outbound SMTP connectivity support on the subscription. For more information on subscription type support and how to request support to re-enable outbound SMTP connectivity, see [Troubleshoot outbound SMTP connectivity problems in Azure](../../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
+> Outbound SMTP connectivity (port 25) is supported for App Service Environment v3. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before August 1, 2022, you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. Examples include adding a temporary subnet, temporarily associating or dissociating an NSG, or temporarily configuring a service endpoint. For more information and troubleshooting, see [Troubleshoot outbound SMTP connectivity problems in Azure](../../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
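The note above mentions a temporary configuration change; one way to make one with the Azure CLI is a sketch like the following (resource names and the address prefix are placeholders):

```azurecli-interactive
# Add and then remove a throwaway subnet to trigger the subscription
# setting sync on the virtual network (placeholder names)
az network vnet subnet create --resource-group myResourceGroup --vnet-name myVnet \
    --name tempsubnet --address-prefixes 10.0.250.0/28
az network vnet subnet delete --resource-group myResourceGroup --vnet-name myVnet \
    --name tempsubnet
```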
## Private endpoint
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
With App Service Environment v3, the pricing model varies depending on the type
- **App Service Environment v3**: If the App Service Environment is empty, there's a charge as though you have one instance of Windows I1v2. The one instance charge isn't an additive charge but is applied only if the App Service Environment is empty.
- **Zone redundant App Service Environment v3**: There's a minimum charge of nine instances. There's no added charge for availability zone support if you have nine or more App Service plan instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.
-- **Dedicated host App Service Environment v3**: With a dedicated host deployment, you're charged for two dedicated hosts per our pricing when you create the App Service Environment v3 and then, as you scale, you're charged a small percentage of the Isolated v2 rate per core.
+- **Dedicated host App Service Environment v3**: With a dedicated host deployment, you're charged for two dedicated hosts at the published pricing when you create the App Service Environment v3. Then, as you scale, you're charged a specialized Isolated v2 rate per vCore. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance.
Reserved Instance pricing for Isolated v2 is available and is described in [How reservation discounts apply to Azure App Service](../../cost-management-billing/reservations/reservation-discount-app-service.md). The pricing, along with Reserved Instance pricing, is available at [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) under the Isolated v2 plan.
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Apps in App Service are hosted on worker roles. Regional virtual network integra
When regional virtual network integration is enabled, your app makes outbound calls through your virtual network. The outbound addresses that are listed in the app properties portal are the addresses still used by your app. However, if your outbound call is to a virtual machine or private endpoint in the integration virtual network or peered virtual network, the outbound address will be an address from the integration subnet. The private IP assigned to an instance is exposed via the environment variable, WEBSITE_PRIVATE_IP.
-When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network and outbound traffic to the internet will go through the same channels as normal.
+When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network, and outbound traffic to the internet will go through the same channels as normal.
The feature supports only one virtual interface per worker. One virtual interface per worker means one regional virtual network integration per App Service plan. All the apps in the same App Service plan can only use the same virtual network integration to a specific subnet. If you need an app to connect to another virtual network or another subnet in the same virtual network, you need to create another App Service plan. The virtual interface used isn't a resource that customers have direct access to.
You must have at least the following Role-based access control permissions on th
| Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition |
| Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network |
-If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it will also automatically be registered when creating the first web app in a subscription.
+If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it will also automatically be registered when creating the first web app in a subscription.
### Routes You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and app settings with Key Vault reference. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out.
-Through application routing or configuration routing options, you can configure what traffic will be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it is sent through the virtual network integration.
+Through application routing or configuration routing options, you can configure what traffic will be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it's sent through the virtual network integration.
#### Application routing
-Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
+Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during startup. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
* Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet. * When **Route All** is enabled, outbound traffic from your app is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
Learn [how to configure application routing](./configure-vnet-integration-routin
We recommend that you use the **Route All** configuration setting to enable routing of all traffic. Using the configuration setting allows you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing `WEBSITE_VNET_ROUTE_ALL` app setting can still be used, and you can enable all traffic routing with either setting.

> [!NOTE]
-> Outbound SMTP connectivity (port 25) is supported for App Service when the SMTP traffic is routed through the virtual network integration. The supportability is determined by the subscription where the virtual network is deployed. For virtual networks created before 1. August 2022, you will have to re-enable outbound SMTP connectivity support on the subscription. For more information on subscription type support and how to request support to re-enable outbound SMTP connectivity, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
+> Outbound SMTP connectivity (port 25) is supported for App Service when the SMTP traffic is routed through the virtual network integration. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before August 1, 2022, you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. Examples include adding a temporary subnet, temporarily associating or dissociating an NSG, or temporarily configuring a service endpoint. For more information and troubleshooting, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
#### Configuration routing
-When you are using virtual network integration, you can configure how parts of the configuration traffic is managed. By default, configuration traffic will go directly over the public route, but for the mentioned individual components, you can actively configure it to be routed through the virtual network integration.
+When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic will go directly over the public route, but for the mentioned individual components, you can actively configure it to be routed through the virtual network integration.
##### Content storage
App settings using Key Vault references will attempt to get secrets over the pub
You can use route tables to route outbound traffic from your app without restriction. Common destinations can include firewall devices or gateways. You can also use a [network security group](../virtual-network/network-security-groups-overview.md) (NSG) to block outbound traffic to resources in your virtual network or the internet. An NSG that's applied to your integration subnet is in effect regardless of any route tables applied to your integration subnet.
-Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes wont affect replies to inbound app requests and inbound rules in an NSG don't apply to your app because virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the Access Restrictions feature.
+Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes won't affect replies to inbound app requests and inbound rules in an NSG don't apply to your app because virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the Access Restrictions feature.
-When configuring network security groups or route tables that affect outbound traffic, you must make sure you consider your application dependencies. Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, this could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you are using [continuous deployment in App Service](./deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you will need to allow `oryx-cdn.microsoft.io:443`.
+When configuring network security groups or route tables that affect outbound traffic, you must make sure you consider your application dependencies. Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, this could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](./deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`.
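NSG rules match on IP addresses and service tags rather than FQDNs, so one way to allow an FQDN such as `oryx-cdn.microsoft.io` is an Azure Firewall application rule on the firewall that your route table sends outbound traffic through. A minimal sketch, assuming a firewall named `vnet-fw` in resource group `rg-net` and an integration subnet of `10.0.1.0/24` (all names and addresses are illustrative, not from this article):

```azurecli
# Allow the Oryx CDN endpoint required by Linux continuous deployment.
# Firewall, resource group, collection, and subnet values are assumptions.
az network firewall application-rule create \
  --firewall-name vnet-fw \
  --resource-group rg-net \
  --collection-name AppServiceDeps \
  --name AllowOryxCdn \
  --priority 200 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses 10.0.1.0/24 \
  --target-fqdns oryx-cdn.microsoft.io
```

The same collection can hold additional rules for CRL and identity endpoints your app depends on.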
When you want to route outbound traffic on-premises, you can use a route table to send outbound traffic to your Azure ExpressRoute gateway. If you do route traffic to a gateway, set routes in the external network to send any replies back. Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. Similar to user-defined routes, BGP routes affect traffic according to your routing scope setting.
Gateway-required virtual network integration supports connecting to a virtual ne
* Enables an app to connect to only one virtual network at a time.
* Enables up to five virtual networks to be integrated within an App Service plan.
-* Allows the same virtual network to be used by multiple apps in an App Service plan without affecting the total number that can be used by an App Service plan. If you have six apps using the same virtual network in the same App Service plan, that counts as one virtual network being used.
-* SLA on the gateway can impact the overall [SLA](https://azure.microsoft.com/support/legal/sla/).
+* Allows the same virtual network to be used by multiple apps in an App Service plan without affecting the total number that can be used by an App Service plan. If you have six apps using the same virtual network in the same App Service plan, that counts as one virtual network being used.
+* SLA on the gateway can affect the overall [SLA](https://azure.microsoft.com/support/legal/sla/).
* Enables your apps to use the DNS that the virtual network is configured with.
* Requires a virtual network route-based gateway configured with an SSTP point-to-site VPN before it can be connected to an app.
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The table below compares custom template and custom neural features:
||||
|Document structure|Template, form, and structured | Structured, semi-structured, and unstructured|
|Training time | 1 to 5 minutes | 20 minutes to 1 hour |
-|Data extraction | Key-value pairs, tables, selection marks, coordinates, and signatures | Key-value pairs and selection marks |
+|Data extraction | Key-value pairs, tables, selection marks, coordinates, and signatures | Key-value pairs, selection marks, and tables |
|Document variations | Requires a model for each variation | Uses a single model for all variations |
|Language support | Multiple [language support](language-support.md#read-layout-and-custom-form-template-model) | United States English (en-US) [language support](language-support.md#custom-neural-model) |
This table compares the supported data extraction areas:
|Model| Form fields | Selection marks | Structured fields (Tables) | Signature | Region labeling |
|--|:--:|:--:|:--:|:--:|:--:|
|Custom template| ✔ | ✔ | ✔ | ✔ | ✔ |
-|Custom neural| ✔| ✔ |**n/a**| **n/a** | **n/a** |
+|Custom neural| ✔| ✔ | ✔ | **n/a** | **n/a** |
**Table symbols**: ✔—supported; **n/a**—currently unavailable
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
When you receive errors during runbook execution in Azure Automation, you can us
## Scenario: Unable to create new Automation job in West Europe region

### Issue
-When creating new Automation jobs, you might experience a delay or failure of job creation.
+When creating new Automation jobs, you might experience a delay or failure of job creation. Scheduled jobs will automatically be retried, and jobs executed through the portal can be retried if you see a failure.
### Cause
-This is because of scalability limits with the Automation service in the West Europe region.
+This is because of the high load from customers' runbooks using the Automation service in the West Europe region.
### Resolution
-Do one of the following actions if it is feasible as per your requirement and environment to reduce the chance of failure:
+Perform the following action, if feasible for your requirements and environment, to reduce the chance of failure:
-- During the peak hours of job creation, typically on the hour, and half hour, move the job start time to five minutes before or after the hour/half hour.-- Run the Automation jobs from alternate data centres until the transition work is complete.-
->[!NOTE]
-> The optimization of existing load and transitioning the load to a new design by the product group is in progress.
+- If you're using the top of the hour for job creation (at 12:00, 1:00, 2:00, and so on), typically on the hour or half hour, we recommend that you move the job start time to five minutes before or after the hour/half hour. This is because most customers use the beginning of the hour for job execution, which drastically increases the load on the service, while the load is relatively low at other time slots.
## <a name="runbook-fails-no-permission"></a>Scenario: Runbook fails with "this.Client.SubscriptionId cannot be null." error message
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
The following image shows a properly configured distributed availability group:
2. Provision the managed instance in the secondary site and configure as a disaster recovery instance. At this point, the system databases are not part of the contained availability group.
+> [!NOTE]
+> - It is important to specify `--license-type DisasterRecovery` **during** the Azure Arc SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
+
+```azurecli
+az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
+```
azure-arc Resize Persistent Volume Claim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resize-persistent-volume-claim.md
+
+ Title: Resize persistent volume claim (PVC) for Azure Arc-enabled data services volume
+description: Explains how to resize a persistent volume claim for a volume used for Azure Arc-enabled data services.
+Last updated : 08/29/2022
+# Resize persistent volume to increase size
+
+This article explains how to resize an existing persistent volume to increase its size by editing the `PersistentVolumeClaim` (PVC) object.
+
+> [!NOTE]
+> Resizing PVCs using this method only works if your `StorageClass` supports `AllowVolumeExpansion=True`.
+
+When you deploy an Azure Arc enabled SQL managed instance, you can configure the size of the persistent volume (PV) for `data`, `logs`, `datalogs`, and `backups`. The deployment creates these volumes based on the values set by parameters `--volume-size-data`, `--volume-size-logs`, `--volume-size-datalogs`, and `--volume-size-backups`. When these volumes become full, you will need to resize the `PersistentVolumes`. Azure Arc enabled SQL Managed Instance is deployed as part of a `StatefulSet` for both General Purpose and Business Critical service tiers. Kubernetes supports automatic resizing for persistent volumes but not for volumes attached to a `StatefulSet`.
+
+Following are the steps to resize persistent volumes attached to `StatefulSet`:
+
+1. Scale the `StatefulSet` replicas to 0
+2. Patch the PVC to the new size
+3. Scale the `StatefulSet` replicas back to the original size
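Using the example StatefulSet `sqlmi1` in namespace `arc` from this article, the three steps above can be sketched end to end (the PVC name below is a placeholder; real names carry a generated suffix):

```console
# 1. Scale the StatefulSet replicas down to 0.
kubectl scale statefulsets sqlmi1 --namespace arc --replicas=0

# 2. Patch the PVC to the new size (PVC name is a placeholder).
kubectl patch pvc data-sqlmi1-0 --namespace arc --type merge \
  --patch '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'

# 3. Scale the replicas back to the original count.
kubectl scale statefulsets sqlmi1 --namespace arc --replicas=3
```

Each step is elaborated in the sections that follow.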
+
+During the patching of the `PersistentVolumeClaim`, the status of the persistent volume claim will likely change from `Attached` to `Resizing` to `FileSystemResizePending` and back to `Attached`. The exact states will depend on the storage provisioner.
+
+> [!NOTE]
+> Ensure the managed instance is in a healthy state before you proceed. Run `kubectl get sqlmi -n <namespace>` and check the status of the managed instance.
+
+## 1. Scale the `StatefulSet` replicas to 0
+
+There is one `StatefulSet` deployed for each Arc SQL MI. The number of replicas in the `StatefulSet` is equal to the number of replicas in the Arc SQL MI. For the General Purpose service tier, this is 1. For the Business Critical service tier, it could be 1, 2, or 3, depending on how many replicas were specified. Run the following command to get the number of `StatefulSet` replicas if you have a Business Critical instance.
+
+```console
+kubectl get sts --namespace <namespace>
+```
+
+For example, if the namespace is `arc`, run:
+
+```console
+kubectl get sts --namespace arc
+```
+
+Notice the number of ready replicas under the `READY` column for the SQL managed instance(s).
+
+Run the below command to scale the `StatefulSet` replicas to 0:
+
+```console
+kubectl scale statefulsets <statefulset> --namespace <namespace> --replicas=<number>
+```
+
+For example:
+
+```console
+kubectl scale statefulsets sqlmi1 --namespace arc --replicas=0
+```
+
+## 2. Patch the PVC to the new size
+
+Run the below command to get the name of the `PersistentVolumeClaim` which needs to be resized:
+
+```console
+kubectl get pvc --namespace <namespace>
+```
+
+For example:
+
+```console
+kubectl get pvc --namespace arc
+```
++
+Once the `StatefulSet` replicas have completed scaling down to 0, patch the PVC. Run the following command:
+
+```console
+$newsize='{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"<newsize>Gi\"}}}}'
+kubectl patch pvc <name of PVC> --namespace <namespace> --type merge --patch $newsize
+```
+
+For example, the following command resizes the data PVC to 50Gi.
+
+```console
+$newsize='{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"50Gi\"}}}}'
+kubectl patch pvc data-a6gt3be7mrtq60eao0gmgxgd-sqlmi1-0 --namespace arcns --type merge --patch $newsize
+```
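The escaped-quote form above targets a Windows-style console. In a bash shell, single quotes make the backslash escapes unnecessary (PVC and namespace stay placeholders):

```console
# Build the same resize patch in bash; single quotes need no backslash escaping.
newsize='{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'
echo "$newsize"
# Then pass it on: kubectl patch pvc <name of PVC> --namespace <namespace> --type merge --patch "$newsize"
```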
+
+## 3. Scale the `StatefulSet` replicas to original size
+
+Once the resize completes, scale the `StatefulSet` replicas back to the original count by running the following command:
+
+```console
+kubectl scale statefulsets <statefulset> --namespace <namespace> --replicas=<number>
+```
+
+For example, the following command sets the `StatefulSet` replicas to 3.
+
+```console
+kubectl scale statefulsets sqlmi1 --namespace arc --replicas=3
+```
+Ensure the Arc enabled SQL managed instance is back to ready status by running:
+
+```console
+kubectl get sqlmi -A
+```
+
+## See also
+
+[Sizing Guidance](sizing-guidance.md)
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
If these resource providers are not already registered, you can register them us
Azure PowerShell:

```azurepowershell-interactive
-Login-AzAccount
+Connect-AzAccount
Set-AzContext -SubscriptionId [subscription you want to onboard]
Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
```
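The same registrations can be sketched with Azure CLI, if you prefer it over Azure PowerShell (subscription placeholder as above):

```azurecli
az account set --subscription "<subscription you want to onboard>"
az provider register --namespace 'Microsoft.HybridCompute'
az provider register --namespace 'Microsoft.GuestConfiguration'
```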
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 08/19/2022 Last updated : 09/02/2022

# Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Media Services](/azure/media-services/) | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; |
| [Microsoft Azure Attestation](../../attestation/index.yml)| &#x2705; | &#x2705; |
-| [Microsoft Azure Marketplace portal](https://azuremarketplace.microsoft.com/marketplace/)| &#x2705; | &#x2705; |
| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| &#x2705; | &#x2705; |
| [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) (formerly Azure Security Center) | &#x2705; | &#x2705; |
| [Microsoft Defender for Cloud Apps](/defender-cloud-apps/) (formerly Microsoft Cloud App Security) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Graph](/graph/) | &#x2705; | &#x2705; |
| [Microsoft Intune](/mem/intune/) | &#x2705; | &#x2705; |
| [Microsoft Sentinel](../../sentinel/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Microsoft Stream](/stream/) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; |
| [Migrate](../../migrate/index.yml) | &#x2705; | &#x2705; |
| [Network Watcher](../../network-watcher/index.yml) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; |
| [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; |
| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; |
| [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; |
| [Resource Mover](../../resource-mover/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | &#x2705; | &#x2705; |
| [SQL Server Stretch Database](../../sql-server-stretch-database/index.yml) | &#x2705; | &#x2705; |
| [Storage: Archive](../../storage/blobs/access-tiers-overview.md) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Storage: Blobs](../../storage/blobs/index.yml) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Storage: Disks (incl. managed disks)](../../virtual-machines/managed-disks-overview.md) | &#x2705; | &#x2705; |
| [Storage: Files](../../storage/files/index.yml) | &#x2705; | &#x2705; |
| [Storage: Queues](../../storage/queues/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Virtual Network](../../virtual-network/index.yml) | &#x2705; | &#x2705; |
| [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; |
| [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; |
| [Web Apps (App Service)](../../app-service/index.yml) | &#x2705; | &#x2705; |
| [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | &#x2705; | &#x2705; |
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Container insights is a feature designed to monitor the performance of container
Container insights supports clusters running the Linux and Windows Server 2019 operating systems. The container runtimes it supports are Moby and any CRI-compatible runtime such as CRI-O and ContainerD. Docker is no longer supported as a container runtime as of September 2022. For more information about this deprecation, see the [AKS release notes][aks-release-notes].

>[!NOTE]
-> Container insights support for Windows Server 2022 operating system is in public preview.
+> Container insights support for Windows Server 2022 operating system and AKS for ARM nodes is in public preview.
Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications.
The main differences in monitoring a Windows Server cluster compared to a Linux
To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring. <!-- LINKS - external -->
-[aks-release-notes]: https://github.com/Azure/AKS/releases
+[aks-release-notes]: https://github.com/Azure/AKS/releases
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
If the Dependency agent fails to start, check the logs for detailed error inform
Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.* the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
+>[!NOTE]
+> Dependency agent is not supported for Azure Virtual Machines with Ampere Altra ARM-based processors.
+
| Distribution | OS version | Kernel version |
|:|:|:|
| Red Hat Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_64, 4.18.0-348.\*el8.x86_64 |
Since the Dependency agent works at the kernel level, support is also dependent
## Next steps
-If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
+If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
azure-portal Capture Browser Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/capture-browser-trace.md
Google Chrome and Microsoft Edge are both based on the [Chromium open source pro
![Screenshot that shows how to Export HAR on the Network tab.](media/capture-browser-trace/chromium-network-export-har.png)
-1. Stop Steps Recorder, and save the recording.
+1. Stop the Steps Recorder and save the recording.
1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Save as...**, and save the console output to a text file. ![Screenshot that shows how to save the console output.](media/capture-browser-trace/chromium-console-select.png)
-1. You can now share the browser trace HAR file, console output, and screen recording files with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files).
+1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip.
+
+1. Share the compressed file with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files).
## Safari
The following steps show how to use the developer tools in Apple Safari on Mac.
![Screenshot that shows where you can view and copy the console output.](media/capture-browser-trace/safari-console-select.png)
-1. You can now share the browser trace HAR file, console output, and screen recording files with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files).
+1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip.
+
+1. Share the compressed file with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files).
## Firefox
The following steps show how to use the developer tools in Firefox. For more inf
![Screenshot of the "Save All As HAR" command on the Network tab.](media/capture-browser-trace/firefox-network-export-har.png)
-1. Stop Steps Recorder on Windows or the screen recording on Mac, and save the recording.
+1. Stop the Steps Recorder on Windows or the screen recording on Mac, and save the recording.
1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Save All Messages to File**, and save the console output to a text file. :::image type="content" source="media/capture-browser-trace/firefox-console-select.png" alt-text="Screenshot of the Save All Messages to File command on the Console tab.":::
-1. You can now share the browser trace HAR file, console output, and screen recording files with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files).
+1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip.
+
+1. Share the compressed file with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files).
## Next steps
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
If you're still unable to resolve the issue, continue creating your support requ
Next, we collect additional details about the problem. Providing thorough and detailed information in this step helps us route your support request to the right engineer.
-1. Complete the **problem details** so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can optionally upload one or more files, such as a log file or [browser trace](../capture-browser-trace.md). For more information on file uploads, see [File upload guidelines](how-to-manage-azure-support-request.md#file-upload-guidelines).
+1. Complete the **problem details** so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can optionally upload one file (or a compressed file such as .zip that contains multiple files), such as a log file or [browser trace](../capture-browser-trace.md). For more information on file uploads, see [File upload guidelines](how-to-manage-azure-support-request.md#file-upload-guidelines).
1. In the **Advanced diagnostic information** section, select **Yes** or **No**. Selecting **Yes** allows Azure support to gather [advanced diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources. If you prefer not to share this information, select **No**. See the [Advanced diagnostic information logs](#advanced-diagnostic-information-logs) section for more details about the types of files we might collect.
azure-portal How To Manage Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
To change your **Advanced diagnostic information** selection after the request h
## Upload files
-You can use the file upload option to upload diagnostic files such as a [browser trace](../capture-browser-trace.md) or any other files that you think are relevant to a support request.
+You can use the file upload option to upload a diagnostic file, such as a [browser trace](../capture-browser-trace.md) or any other files that you think are relevant to a support request.
1. On the **All support requests** page, select the support request.
-1. On the **Support Request** page, select the **File upload** box, then browse to find your file and select **Upload**. Repeat the process if you have multiple files.
+1. On the **Support Request** page, select the **File upload** box, then browse to find your file and select **Upload**.
### File upload guidelines
Follow these guidelines when you use the file upload option:
- To protect your privacy, don't include personal information in your upload.
- The file name must be no longer than 110 characters.
-- You can't upload more than one file.
+- You can't upload more than one file. To include multiple different files, package them together in a compressed format such as .zip.
- Files can't be larger than 4 MB.
- All files must have a valid file name extension, such as *.docx* or *.xlsx*. Most file name extensions are supported, but you can't upload files with the extensions .bat, .cmd, .exe, .ps1, .js, .vbs, .com, .lnk, .reg, .bin, .cpl, .inf, .ins, .isu, .job, .jse, .msi, .msp, .paf, .pif, .rgs, .scr, .sct, .vbe, .vb, .ws, .wsf, or .wsh.
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Applying locks can lead to unexpected results. Some operations, which don't seem
- A read-only lock on an Azure Kubernetes Service (AKS) cluster limits how you can access cluster resources through the portal. A read-only lock prevents you from using the AKS cluster's Kubernetes resources section in the Azure portal to choose a cluster resource. These operations require a POST method request for authentication.
+- A cannot-delete lock on a **Virtual Machine** that is protected by **Site Recovery** prevents certain resource links related to Site Recovery from being removed properly when you remove the protection or disable replication. If you plan to re-protect the VM later, you need to remove the lock prior to disabling protection. If you forget to remove the lock, you need to follow certain steps to clean up the stale links before you can re-protect the VM. For more information, see [Troubleshoot Azure VM replication](../../site-recovery/azure-to-azure-troubleshoot-errors.md#replication-not-enabled-on-vm-with-stale-resources-error-code-150226).
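To find and remove such a lock before disabling replication, Azure CLI offers `az lock list` and `az lock delete`; a sketch with placeholder names:

```azurecli
# List locks to find the cannot-delete lock on the protected VM's resource group.
az lock list --resource-group <resource-group> --output table

# Remove the lock, then disable replication; re-create the lock after re-protecting.
az lock delete --name <lock-name> --resource-group <resource-group>
```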
+ ## Who can create or delete locks To create or delete management locks, you need access to `Microsoft.Authorization/*` or `Microsoft.Authorization/locks/*` actions. Only the **Owner** and the **User Access Administrator** built-in roles can create and delete management locks. You can create a custom role with the required permissions.
azure-resource-manager App Service Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md
When you move a Web App across subscriptions, the following guidance applies:
- Uploaded or imported TLS/SSL certificates
- App Service Environments
- All App Service resources in the resource group must be moved together.
-- App Service Environments can't be moved to a new resource group or subscription. However, you can move a web app and app service plan to a new subscription without moving the App Service Environment. After the move, the web app is no longer hosted in the App Service Environment.
+- App Service Environments can't be moved to a new resource group or subscription. However, you can move a web app and app service plan to a new subscription without moving the App Service Environment.
- You can move a certificate bound to a web app without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group. However, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates).
- App Service apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate it after the move.
- App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section.
backup Backup Azure Reserved Pricing Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reserved-pricing-optimize-cost.md
+
+ Title: Optimize costs for Azure Backup Storage with reserved capacity
+description: This article explains about how to optimize costs for Azure Backup Storage with reserved capacity.
+Last updated : 09/03/2022
+# Optimize costs for Azure Backup Storage with reserved capacity
+
+You can save money on backup storage costs for the vault-standard tier using Azure Backup Storage reserved capacity. Azure Backup Storage reserved capacity offers you a discount on capacity for backup data stored for the vault-standard tier when you commit to a reservation for either one year or three years. A reservation provides a fixed amount of backup storage capacity for the term of the reservation.
+
+Azure Backup Storage reserved capacity can significantly reduce your capacity costs for Azure Backup data. The cost savings achieved depend on the duration of your reservation, the total capacity you choose to reserve, the vault tier, and the type of redundancy you've chosen for your vault. Reserved capacity provides a billing discount and doesn't affect the state of your Azure Backup Storage resources.
+
+For information about Azure Backup pricing, see [Azure Backup pricing page](https://azure.microsoft.com/pricing/details/backup/).
+
+## Reservation terms for Azure Storage
+
+The following sections describe the terms of an Azure Backup Storage reservation.
+
+#### Reservation capacity
+
+You can purchase Azure Backup Storage reserved capacity in units of 100 TiB and 1 PiB per month for a one-year or three-year term.
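Since units come in fixed sizes, sizing a purchase is a ceiling division. A hypothetical illustration in shell (the 240 TiB data volume is an assumption for the example, not a figure from this article):

```console
# Hypothetical sizing: 100-TiB units needed to cover 240 TiB of vault-standard data.
DATA_TIB=240        # illustrative data volume
UNIT_TIB=100        # reservation unit size
UNITS=$(( (DATA_TIB + UNIT_TIB - 1) / UNIT_TIB ))   # ceiling division
echo "$UNITS"       # prints 3
```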
+
+#### Reservation scope
+
+Azure Backup Storage reserved capacity is available for a single subscription, multiple subscriptions (shared scope), and management groups.
+
+- When scoped to a single subscription, the reservation discount is applied only to the selected subscription.
+- When scoped to multiple subscriptions, the reservation discount is shared across those subscriptions within your billing context.
+- When scoped to a management group, the reservation discount is shared across the subscriptions that are part of both the management group and the billing scope.
+
+When you purchase Azure Backup Storage reserved capacity, you can use your reservation for backup data stored in the vault-standard tier only. A reservation is applied to your usage within the purchased scope and can't be limited to a specific storage account, container, or object within the subscription.
+
+An Azure Backup Storage reservation covers only the amount of data that's stored in a subscription or shared resource group. Early deletion, operations, bandwidth, and data transfer charges aren't included in the reservation. As soon as you purchase a reservation, you're charged for the capacity charges that match the reservation attributes at the discount rates, instead of pay-as-you-go rates. For more information on Azure reservations, see [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
+
+#### Supported account types, tiers, and redundancy options
+
+Azure Backup Storage reserved capacity is available for backup data stored in the vault-standard tier.
+
+LRS, GRS, and RA-GRS redundancies are supported for reservations. For more information about redundancy options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
+
+>[!Note]
+>Azure Backup Storage reserved capacity isn't applicable for Protected Instance cost. It's also not applicable to vault-archive tier.
+
+#### Security requirements for purchase
+
+To purchase reserved capacity:
+
+- You must be in the Owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
+- For Enterprise subscriptions, the policy to add reserved instances must be enabled. For direct EA agreements, the Reserved Instances policy must be enabled in the Azure portal. For indirect EA agreements, the Add Reserved Instances policy must be enabled in the EA portal. Or, if those policy settings are disabled, you must be an EA Admin on the subscription.
+- For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can purchase Azure Backup Storage reserved capacity.
+
+## Determine required capacity before purchase
+
+When you purchase an Azure Backup Storage reservation, you must choose the reservation's region, vault tier, and redundancy option. Your reservation is valid only for data stored in that region, vault tier, and redundancy level. For example, you purchase a reservation for data in US West for the vault-standard tier using geo-redundant storage (GRS). You can't use the same reservation for data in US East, or data in locally redundant storage (LRS). However, you can purchase another reservation for your additional needs.
+
+Reservations are currently available for 100 TiB or 1 PiB blocks, with higher discounts for 1 PiB blocks. When you purchase a reservation in the Azure portal, Microsoft may provide you with recommendations based on your previous usage to help determine which reservation you should purchase.
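As an illustrative sketch only (not an official sizing tool), the mix of reservation blocks for a given usage level could be estimated as below, assuming 1 PiB = 1024 TiB and greedily favoring the larger blocks, which carry the higher discount. The function name and greedy strategy are assumptions for illustration; the portal's own recommendations account for your actual usage history.

```python
import math

PIB_IN_TIB = 1024  # 1 PiB expressed in TiB

def estimate_blocks(usage_tib: float) -> tuple[int, int]:
    """Estimate how many 1 PiB and 100 TiB reservation blocks cover the usage.

    Greedy sketch: cover as much as possible with 1 PiB blocks (higher
    discount), then cover any remainder with 100 TiB blocks.
    """
    pib_blocks = int(usage_tib // PIB_IN_TIB)
    remainder = usage_tib - pib_blocks * PIB_IN_TIB
    small_blocks = math.ceil(remainder / 100) if remainder > 0 else 0
    return pib_blocks, small_blocks
```

For example, 1300 TiB of expected usage would map to one 1 PiB block plus three 100 TiB blocks under this greedy sketch.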
+
+## Purchase Azure Backup Storage reserved capacity
+
+You can purchase Azure Backup Storage reserved capacity through the [Azure portal](https://portal.azure.com/). Pay for the reservation up front or with monthly payments. For more information about purchasing with monthly payments, see [Purchase Azure reservations with up front or monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md).
+
+For help with identifying the reservation terms that are right for your scenario, see [Understand how reservation discounts are applied to Azure Backup storage](backup-azure-reserved-pricing-overview.md).
+
+To purchase reserved capacity, follow these steps:
+
+1. Go to the [Purchase reservations](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/Browse_AddCommand) pane in the Azure portal.
+
+1. Select **Azure Backup** to purchase a new reservation.
+
+1. Enter required information as described in the following table:
+
+ :::image type="content" source="./media/backup-azure-reserved-pricing/purchase-reserved-capacity-enter-information.png" alt-text="Screenshot showing the information to enter to purchase reservation capability for Azure Backup Storage.":::
+
+ | Field | Description |
+ | | |
+ | Scope | Indicates the number of subscriptions you can use for the billing benefit associated with the reservation. It also controls how the reservation is applied to specific subscriptions. <br><br> If you select Shared, the reservation discount is applied to Azure Backup Storage capacity in any subscription within your billing context. The billing context is based on how you signed up for Azure. If you're an enterprise customer, the shared scope is the enrollment and includes all subscriptions within the enrollment. If you're a pay-as-you-go customer, the shared scope includes all individual subscriptions with pay-as-you-go rates created by the account administrator. <br><br> If you select Single subscription, the reservation discount is applied to Azure Backup Storage capacity in the selected subscription. <br><br> If you select Single resource group, the reservation discount is applied to Azure Backup Storage capacity in the selected subscription and the selected resource group within that subscription. <br><br> If you select Management group, the reservation discount is applied to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. To buy a reservation for a management group, you must have at least read permission on the management group and be a reservation owner or reservation purchaser on the billing subscription. <br><br> You can change the reservation scope after you purchase the reservation. |
+ | Subscription | The subscription that's used to pay for the Azure Backup Storage reservation. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br><br> - **Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P)**: For an Enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. <br><br> - **Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P)**: For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. <br><br> - Microsoft Customer Agreement subscriptions <br><br> - CSP subscriptions. |
+ | Region | The region where the reservation is in effect. |
+ | Vault tier | The vault tier for which the reservation is in effect. Currently, only reservations for vault-standard tier are supported. |
+ | Redundancy | The redundancy option for the reservation. Options include LRS, GRS, and RA-GRS. For more information about redundancy options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md). |
+ | Billing frequency | Indicates how often the account is billed for the reservation. Options include Monthly or Upfront. |
+ | Size | The amount of capacity to reserve. |
+ | Term | One year or three years. |
+
+1. After you select the parameters for your reservation, the Azure portal displays the cost. The portal also shows the discount percentage over pay-as-you-go billing.
+
+1. In the **Purchase reservations** pane, review the total cost of the reservation.
+
+ You can also provide a name for the reservation.
+
+ :::image type="content" source="./media/backup-azure-reserved-pricing/purchase-reserved-capacity-review-total-cost-inline.png" alt-text="Screenshot showing the Purchase reservation pane to review the total cost of the reservation." lightbox="./media/backup-azure-reserved-pricing/purchase-reserved-capacity-review-total-cost-expanded.png":::
+
+After you purchase a reservation, it's automatically applied to any existing Azure Backup Storage data that matches the terms of the reservation. If you haven't created any Azure Backup Storage data yet, the reservation will apply whenever you create a resource that matches the terms of the reservation. In either case, the term of the reservation begins immediately after a successful purchase.
+
+## Exchange or refund a reservation
+
+You can exchange or refund a reservation, with certain limitations. These limitations are described in the following sections.
+
+To exchange or refund a reservation, follow these steps:
+
+1. Go to the reservation details in the Azure portal.
+
+1. Select **Exchange or Refund**, and follow the instructions to submit a support request.
+
+You'll receive an email confirmation when the request is processed. For more information about Azure Reservations policies, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+## Exchange a reservation
+
+Exchanging a reservation enables you to receive a prorated refund based on the unused portion of the reservation. You can then apply the refund to the purchase price of a new Azure Backup Storage reservation.
+
+There's no limit on the number of exchanges you can make. Additionally, there's no fee associated with an exchange. The new reservation that you purchase must be of equal or greater value than the prorated credit from the original reservation. An Azure Backup Storage reservation can be exchanged only for another Azure Backup Storage reservation, and not for a reservation for any other Azure service.
+
+## Refund a reservation
+
+You may cancel an Azure Backup Storage reservation at any time. When you cancel, you'll receive a prorated refund based on the remaining term of the reservation. The maximum refund per year is *$50,000*.
+
+Cancelling a reservation immediately terminates the reservation and returns the remaining months to Microsoft. The remaining prorated balance, minus the fee, will be refunded to your original form of purchase.
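The proration described above can be sketched as follows. This is an illustrative estimate only: the actual refund, and any early-termination fee, are determined by Azure billing, so the `fee` parameter here is a placeholder assumption.

```python
def estimate_refund(total_price: float, months_remaining: int, term_months: int,
                    fee: float = 0.0, yearly_cap: float = 50_000.0) -> float:
    """Illustrative prorated-refund estimate for a canceled reservation.

    The prorated balance is the share of the purchase price covering the
    remaining months; the refund is that balance minus any fee, capped at
    the $50,000 yearly maximum described above.
    """
    prorated = total_price * months_remaining / term_months
    return min(max(prorated - fee, 0.0), yearly_cap)
```

For instance, canceling a $12,000 one-year reservation with six months remaining would yield roughly a $6,000 refund under this sketch, before any fee.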
+
+## Expiration of a reservation
+
+When a reservation expires, any Azure Backup Storage capacity that you've used under that reservation is billed at the pay-as-you go rate. Reservations don't renew automatically.
+
+You'll receive an email notification 30 days prior to the expiration of the reservation, and again on the expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later than the expiration date.
+
+>[!Note]
+>If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
+- [Understand how reservation discounts are applied to Azure Backup storage](backup-azure-reserved-pricing-overview.md).
+
backup Backup Azure Reserved Pricing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reserved-pricing-overview.md
+
+ Title: Reservation discounts for Azure Backup storage
+description: This article explains how reservation discounts are applied to Azure Backup storage.
+ Last updated : 09/09/2022
+# Understand how reservation discounts are applied to Azure Backup storage
+
+Azure Backup enables you to save money on backup storage costs using Reserved capacity pricing. After you purchase reserved capacity, the reservation discount is automatically applied to the backup storage that matches the terms of the reservation.
+
+>[!Note]
+>The reservation discount applies to storage capacity only.
+
+- For more information about Azure Backup Storage reserved capacity, see [Optimize costs for Azure Backup storage with reserved capacity](backup-azure-reserved-pricing-optimize-cost.md).
+- For information about Azure Backup storage pricing, see [Azure Backup pricing page](https://azure.microsoft.com/pricing/details/backup/).
+
+## How the reservation discount is applied
+
+The reserved capacity discount applies to supported backup storage resources on an hourly basis. The reserved capacity discount is a use-it-or-lose-it discount. If you don't have any backup storage that meets the terms of the reservation for a given hour, then you lose the reserved quantity for that hour. You can't carry forward unused reserved hours.
+
+When you delete the backup storage, the reservation discount automatically applies to another matching backup storage in the specified scope. If there's no matching backup storage in the specified scope, the reserved hours are lost.
+
+## Discount examples
+
+The following examples show how the reserved capacity discount applies, depending on the deployments.
+
+For example, you've purchased 100 TiB of reserved capacity in the *US West 2* region for a *1-year* term. Your reservation is for locally redundant storage (LRS) blob storage in the vault-standard tier.
+
+For the cost of the reservation, you can either pay the full amount up front or pay fixed monthly installments for the next 12 months. If you've signed up for a monthly reservation payment plan, you may encounter the following scenarios if you under-use or overuse your reserved capacity.
+
+### Underuse of your capacity
+
+As an example, in each hour within the reservation period, if you used only 80 TiB of your 100 TiB reserved capacity, the remaining 20 TiB isn't applied for that hour and it doesn't get carried forward.
+
+### Overuse of your capacity
+
+As an example, in each hour within the reservation period, if you've used 101 TiB of backup storage capacity, the reservation discount applies to 100 TiB of your data, and the remaining 1 TiB is charged at pay-as-you-go rates for that hour. If in the next hour your usage changes to 100 TiB, then all usage is covered by the reservation.
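The hour-by-hour split described in the two examples above can be sketched as a small helper. This is an illustration of the billing logic as described in this article, not an official billing calculation; the function name is a placeholder.

```python
def split_hourly_usage(used_tib: float, reserved_tib: float = 100.0):
    """Split one hour of backup-storage usage into the portion covered by
    the reservation, the overage billed at pay-as-you-go rates, and the
    reserved capacity that goes unused (and is lost for that hour)."""
    covered = min(used_tib, reserved_tib)
    overage = max(used_tib - reserved_tib, 0.0)
    unused = reserved_tib - covered
    return covered, overage, unused
```

With a 100 TiB reservation, an hour at 101 TiB yields 1 TiB of pay-as-you-go overage, while an hour at 80 TiB loses 20 TiB of reserved capacity for that hour.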
+
+>[!Note]
+>For further support, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- [Optimize costs for Azure Backup storage with reserved capacity](backup-azure-reserved-pricing-optimize-cost.md).
+- [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
+
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
You may use a private DNS zone ending with one of the names listed above (ex: pr
Azure Bastion isn't supported with Azure Private DNS Zones in national clouds.
+### <a name="dns"></a>Does Azure Bastion support Private Link?
+
+No, Azure Bastion doesn't currently support Private Link.
+
### <a name="subnet"></a>Can I have an Azure Bastion subnet of size /27 or smaller (/28, /29, etc.)?

For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work. However, we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 8/12/2022 Last updated : 9/2/2022
The following tables show the Microsoft Security Response Center (MSRC) updates
## August 2022 Guest OS
->[!NOTE]
-
->The August Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the August Guest OS. This list is subject to change.
-
-| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
-| | | | | |
-| Rel 22-08 | [5016623] | Latest Cumulative Update(LCU) | 6.45 | Aug 9, 2022 |
-| Rel 22-08 | [5016618] | IE Cumulative Updates | 2.127, 3.117, 4.105 | Aug 9, 2022 |
-| Rel 22-08 | [5016627] | Latest Cumulative Update(LCU) | 7.15 | Aug 9, 2022 |
-| Rel 22-08 | [5016622] | Latest Cumulative Update(LCU) | 5.71 | Aug 9, 2022 |
-| Rel 22-08 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | 2.127 | Aug 9, 2022 |
-| Rel 22-08 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup | 2.127 | May 10, 2022 |
-| Rel 22-08 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | 4.107 | Jun 14, 2022 |
-| Rel 22-08 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup | 4.107 | May 10, 2022 |
-| Rel 22-08 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | 3.114 | Aug 9, 2022 |
-| Rel 22-08 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup | 3.114 | May 10, 2022 |
-| Rel 22-08 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.47 | May 10, 2022 |
-| Rel 22-08 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.15 | May 10, 2022 |
-| Rel 22-08 | [5016676] | Monthly Rollup | 2.127 | Aug 9, 2022 |
-| Rel 22-08 | [5016672] | Monthly Rollup | 3.114 | Aug 9, 2022 |
-| Rel 22-08 | [5016681] | Monthly Rollup | 4.107 | Aug 9, 2022 |
-| Rel 22-08 | [5016263] | Servicing Stack update | 3.114 | Jul 12, 2022 |
-| Rel 22-08 | [5016264] | Servicing Stack update | 4.107 | Jul 12, 2022 |
-| Rel 22-08 | [4578013] | OOB Standalone Security Update | 4.107 | Aug 19, 2020 |
-| Rel 22-08 | [5017095] | Servicing Stack update | 5.71 | Aug 9, 2022 |
-| Rel 22-08 | [5016057] | Servicing Stack update | 2.127 | Jul 12, 2022 |
-| Rel 22-08 | [4494175] | Microcode | 5.71 | Sep 1, 2020 |
-| Rel 22-08 | [4494174] | Microcode | 6.47 | Sep 1, 2020 |
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-08 | [5016623] | Latest Cumulative Update(LCU) | [6.45] | Aug 9, 2022 |
+| Rel 22-08 | [5016618] | IE Cumulative Updates | [2.127], [3.117], [4.105] | Aug 9, 2022 |
+| Rel 22-08 | [5016627] | Latest Cumulative Update(LCU) | [7.15] | Aug 9, 2022 |
+| Rel 22-08 | [5016622] | Latest Cumulative Update(LCU) | [5.71] | Aug 9, 2022 |
+| Rel 22-08 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | [2.127] | Aug 9, 2022 |
+| Rel 22-08 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup | [2.127] | May 10, 2022 |
+| Rel 22-08 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | [4.107] | Jun 14, 2022 |
+| Rel 22-08 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup | [4.107] | May 10, 2022 |
+| Rel 22-08 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | [3.114] | Aug 9, 2022 |
+| Rel 22-08 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup | [3.114] | May 10, 2022 |
+| Rel 22-08 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.47] | May 10, 2022 |
+| Rel 22-08 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | [7.15] | May 10, 2022 |
+| Rel 22-08 | [5016676] | Monthly Rollup | [2.127] | Aug 9, 2022 |
+| Rel 22-08 | [5016672] | Monthly Rollup | [3.114] | Aug 9, 2022 |
+| Rel 22-08 | [5016681] | Monthly Rollup | [4.107] | Aug 9, 2022 |
+| Rel 22-08 | [5016263] | Servicing Stack update | [3.114] | Jul 12, 2022 |
+| Rel 22-08 | [5016264] | Servicing Stack update | [4.107] | Jul 12, 2022 |
+| Rel 22-08 | [4578013] | OOB Standalone Security Update | [4.107] | Aug 19, 2020 |
+| Rel 22-08 | [5017095] | Servicing Stack update | [5.71] | Aug 9, 2022 |
+| Rel 22-08 | [5016057] | Servicing Stack update | [2.127] | Jul 12, 2022 |
+| Rel 22-08 | [4494175] | Microcode | [5.71] | Sep 1, 2020 |
+| Rel 22-08 | [4494174] | Microcode | [6.47] | Sep 1, 2020 |
[5016623]: https://support.microsoft.com/kb/5016623 [5016618]: https://support.microsoft.com/kb/5016618
The following tables show the Microsoft Security Response Center (MSRC) updates
[5016057]: https://support.microsoft.com/kb/5016057 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174
+[2.127]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.114]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.107]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.71]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.47]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.15]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## July 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 8/03/2022 Last updated : 9/02/2022 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **September 2, 2022**
+The August Guest OS has released.
###### **August 3, 2022** The July Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.15_202208-01 | September 2, 2022 | Post 7.17 |
| WA-GUEST-OS-7.14_202207-01 | August 3, 2022 | Post 7.16 |
-| WA-GUEST-OS-7.13_202206-01 | July 11, 2022 | Post 7.15 |
+|~~WA-GUEST-OS-7.13_202206-01~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-7.12_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-7.11_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-7.10_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.47_202208-01 | September 2, 2022 | Post 6.49 |
| WA-GUEST-OS-6.46_202207-01 | August 3, 2022 | Post 6.48 |
-| WA-GUEST-OS-6.45_202206-01 | July 11, 2022 | Post 6.47 |
+|~~WA-GUEST-OS-6.45_202206-01~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-6.44_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-6.43_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-6.42_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.71_202208-01 | September 2, 2022 | Post 5.73 |
| WA-GUEST-OS-5.70_202207-01 | August 3, 2022 | Post 5.72 |
-| WA-GUEST-OS-5.69_202206-01 | July 11, 2022 | Post 5.71 |
+|~~WA-GUEST-OS-5.69_202206-01~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-5.68_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-5.67_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-5.66_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.107_202208-01 | September 2, 2022 | Post 4.109 |
| WA-GUEST-OS-4.106_202207-02 | August 3, 2022 | Post 4.108 |
-| WA-GUEST-OS-4.105_202206-02 | July 11, 2022 | Post 4.107 |
+|~~WA-GUEST-OS-4.105_202206-02~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-4.103_202205-01~~| May 26, 2022 | August 2, 2022 | |~~WA-GUEST-OS-4.102_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-4.101_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.114_202208-01 | September 2, 2022 | Post 3.116 |
| WA-GUEST-OS-3.113_202207-02 | August 3, 2022 | Post 3.115 |
-| WA-GUEST-OS-3.112_202206-02 | July 11, 2022 | Post 3.114 |
+|~~WA-GUEST-OS-3.112_202206-02~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-3.110_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-3.109_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-3.108_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.127_202208-01 | September 2, 2022 | Post 2.129 |
| WA-GUEST-OS-2.126_202207-02 | August 3, 2022 | Post 2.128 |
-| WA-GUEST-OS-2.125_202206-02 | July 11, 2022 | Post 2.127 |
+|~~WA-GUEST-OS-2.125_202206-02~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-2.123_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-2.122_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-2.121_202203-01~~| March 19, 2022 | May 26, 2022 |
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
Previously updated : 04/14/2022 Last updated : 09/01/2022 zone_pivot_groups: programming-languages-set-two-with-js-spx
Phrase lists are simple and lightweight:
- **Just-in-time**: A phrase list is provided just before starting the speech recognition, eliminating the need to train a custom model. - **Lightweight**: You don't need a large data set. Simply provide a word or phrase to boost its recognition.
-You can use phrase lists with the [Speech Studio](speech-studio-overview.md), [Speech SDK](quickstarts/setup-platform.md), or [Speech Command Line Interface (CLI)](spx-overview.md).
+You can use phrase lists with the [Speech Studio](speech-studio-overview.md), [Speech SDK](quickstarts/setup-platform.md), or [Speech Command Line Interface (CLI)](spx-overview.md). The [Batch transcription API](batch-transcription.md) doesn't support phrase lists.
-There are some situations where [training a custom model](custom-speech-overview.md) that includes phrases is likely the best option to improve accuracy. In these cases you would not use a phrase list:
+You can use phrase lists with both standard and [custom speech](custom-speech-overview.md). There are some situations where training a custom model that includes phrases is likely the best option to improve accuracy. For example, in the following cases you would use Custom Speech:
- If you need to use a large list of phrases. A phrase list shouldn't have more than 500 phrases.
- If you need a phrase list for languages that are not currently supported.
-- If you need to do batch transcription. The Batch transcription API does not support phrase lists.
-
-> [!TIP]
-> You can use phrase lists with both standard and custom speech.
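Given the 500-phrase limit above, a hedged pre-flight check like the following sketch can decide whether a phrase list is viable or whether a Custom Speech model is the better fit. The helper name and cleanup behavior are illustrative assumptions, not part of the Speech SDK.

```python
MAX_PHRASES = 500  # documented upper bound for a phrase list

def validate_phrase_list(phrases: list[str]) -> list[str]:
    """Hypothetical pre-flight check: trim and de-duplicate phrases, then
    verify the list stays within the 500-phrase limit. A larger vocabulary
    is a signal to train a Custom Speech model instead."""
    unique = list(dict.fromkeys(p.strip() for p in phrases if p.strip()))
    if len(unique) > MAX_PHRASES:
        raise ValueError(
            f"{len(unique)} phrases exceed the {MAX_PHRASES}-phrase limit; "
            "consider training a Custom Speech model."
        )
    return unique
```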
## Try it in Speech Studio
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech SDK supports the following languages and platforms:
[!INCLUDE [License Notice](~/articles/cognitive-services/Speech-Service/includes/cognitive-services-speech-service-license-notice.md)]
+## Speech SDK demo
+
+The following video shows how to install the [Speech SDK for C#](quickstarts/setup-platform.md) and write a simple .NET console application for speech-to-text.
+
+> [!VIDEO c20d3b0c-e96a-4154-9299-155e27db7117]
+ ## Code samples Speech SDK code samples are available in the documentation and GitHub.
cognitive-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/authentication.md
Previously updated : 07/22/2021 Last updated : 09/01/2022
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-versio
[!INCLUDE [](../../includes/cognitive-services-azure-active-directory-authentication.md)]
+## Use Azure key vault to securely access credentials
+
+You can [use Azure Key Vault](./use-key-vault.md) to securely develop Cognitive Services applications. Key Vault enables you to store your authentication credentials in the cloud, and reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application.
+
+Authentication is done via Azure Active Directory. Authorization may be done via Azure role-based access control (Azure RBAC) or a Key Vault access policy. Azure RBAC can be used both to manage vaults and to access data stored in a vault, while a Key Vault access policy can only be used to access data stored in a vault.
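As a minimal sketch of this flow, a Cognitive Services key stored as a Key Vault secret could be fetched as shown below, assuming the azure-identity and azure-keyvault-secrets packages and an identity that's been granted access via Azure RBAC or a vault access policy. The vault and secret names are placeholders.

```python
def vault_url(vault_name: str) -> str:
    """Build the vault URI for a public-cloud Key Vault from its name."""
    return f"https://{vault_name}.vault.azure.net"

def get_cognitive_services_key(vault_name: str, secret_name: str) -> str:
    """Sketch: read a Cognitive Services key stored as a Key Vault secret.

    Requires azure-identity and azure-keyvault-secrets, plus a signed-in
    identity authorized on the vault; imports are deferred so the URL
    helper above stays usable without the SDK installed.
    """
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    client = SecretClient(vault_url=vault_url(vault_name),
                          credential=DefaultAzureCredential())
    return client.get_secret(secret_name).value
```

Because the key never appears in application code or configuration, rotating it in the vault requires no redeployment.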
+ ## See also * [What is Cognitive Services?](./what-are-cognitive-services.md)
cognitive-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-guidance-integration.md
When you get ready to integrate and responsibly use AI-powered products or featu
- **Transparency**: Consider providing users with information about how the content was personalized. For example, you can give your users a button labeled Why These Suggestions? that shows which top features of the user and actions played a role in producing the Personalizer results. -- **Adversarial use**: consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place.
+- **Adversarial use**: consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place.
+
+- **Opt out**: Consider providing a control for users to opt out of receiving personalized recommendations. For these users, the Personalizer Rank API will not be called from your application. Instead, your application can use an alternative mechanism for deciding what action is taken. For example, by opting out of personalized recommendations and choosing the default or baseline action, the user would experience the action that would be taken without Personalizer's recommendation. Alternatively, your application can use recommendations based on aggregate or population-based measures (e.g., now trending, top 10 most popular, etc.).
## Your responsibility
cognitive-services Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/use-key-vault.md
+
+ Title: Develop Azure Cognitive Services applications with Key Vault
+description: Learn how to develop Cognitive Services applications securely by using Key Vault.
+ Last updated : 09/01/2022
+zone_pivot_groups: programming-languages-set-twenty-eight
++
+# Develop Azure Cognitive Services applications with Key Vault
+
+Use this article to learn how to develop Cognitive Services applications securely by using [Azure Key Vault](/azure/key-vault/general/overview).
+
+Key Vault reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application.
+
+## Prerequisites
++
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free)
+* [Visual Studio IDE](https://visualstudio.microsoft.com/vs/)
+* An [Azure Key Vault](/azure/key-vault/general/quick-create-portal)
+* [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
+++
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
+* [Python 3.7 or later](https://www.python.org/)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+* An [Azure Key Vault](/azure/key-vault/general/quick-create-portal)
+* [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
+++
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
+* [Java Development Kit (JDK) version 8 or above](/azure/developer/java/fundamentals/)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+* An [Azure Key Vault](/azure/key-vault/general/quick-create-portal)
+* [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
+++
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
+* [Current Node.js v14 LTS or later](https://nodejs.org/)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+* An [Azure Key Vault](/azure/key-vault/general/quick-create-portal)
+* [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
++
+> [!NOTE]
+> Review the documentation and quickstart articles for the Cognitive Service you're using to get an understanding of:
+> * The credentials and other information you will need to send API calls.
+> * The packages and code you will need to run your application.
+
+## Get your credentials from your Cognitive Services resource
+
+Before you add your credentials to your Azure key vault, you need to retrieve them from your Cognitive Services resource. For example, if your service needs a key and endpoint, you can find them using the following steps:
+
+1. Navigate to your Azure resource in the [Azure portal](https://portal.azure.com/).
+1. From the collapsible menu on the left, select **Keys and Endpoint**.
+
+ :::image type="content" source="language-service/custom-text-classification/media/get-endpoint-azure.png" alt-text="A screenshot showing the key and endpoint page in the Azure portal." lightbox="language-service/custom-text-classification/media/get-endpoint-azure.png":::
+
+Some Cognitive Services require different information to authenticate API calls, such as a key and region. Make sure to retrieve this information before continuing.
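As an illustration, services that take a key and region usually derive the request endpoint from the region. The exact URL pattern varies per service, so treat this hypothetical helper as a sketch only:

```python
def regional_endpoint(region: str) -> str:
    """Build a regional Cognitive Services endpoint; the pattern differs per service."""
    return f"https://{region}.api.cognitive.microsoft.com"
```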
+
+## Add your credentials to your key vault
+
+For your application to retrieve and use your credentials to authenticate API calls, you will need to add them to your [key vault secrets](/azure/key-vault/secrets/about-secrets).
+
+Repeat these steps to generate a secret for each required resource credential (for example, a key and an endpoint). These secret names will be used later to authenticate your application.
+
+1. Open a new browser tab or window. Navigate to your key vault in the <a href="https://portal.azure.com/" title="Go to Azure portal" target="_blank">Azure portal</a>.
+1. From the collapsible menu on the left, select **Objects** > **Secrets**.
+1. Select **Generate/Import**.
+
+ :::image type="content" source="media/key-vault/store-secrets.png" alt-text="A screenshot showing the key vault key page in the Azure portal." lightbox="media/key-vault/store-secrets.png":::
+
+1. On the **Create a secret** screen, enter the following values:
+
+ |Name | Value |
+ |||
+ |Upload options | Manual |
+ |Name | A secret name for your key or endpoint. For example: "CognitiveServicesKey" or "CognitiveServicesEndpoint" |
+ |Value | Your Azure Cognitive Services resource key or endpoint. |
+
+ Later your application will use the secret "Name" to securely access the "Value".
+
+1. Leave the other values as their defaults. Select **Create**.
+
+ >[!TIP]
+ > Make sure to remember the names that you set for your secrets, as you'll use them later in your application.
+
+You should now have named secrets for your resource information.
+
+## Create an environment variable for your key vault's name
+
+We recommend creating an environment variable for your Azure key vault's name. Your application will read this environment variable at runtime to retrieve your key and endpoint information.
+
+To set the environment variable, use one of the following commands. `KEY_VAULT_NAME` is the name of the environment variable. Replace `Your-Key-Vault-Name` with the name of your key vault; this value will be stored in the environment variable.
+
+# [Azure CLI](#tab/azure-cli)
+
+Create and assign a persisted environment variable. Replace `Your-Key-Vault-Name` with the name of your key vault.
+
+```CMD
+setx KEY_VAULT_NAME "Your-Key-Vault-Name"
+```
+
+In a new instance of the **Command Prompt**, read the environment variable.
+
+```CMD
+echo %KEY_VAULT_NAME%
+```
+
+# [PowerShell](#tab/powershell)
+
+Create and assign a persisted environment variable. Replace `Your-Key-Vault-Name` with the name of your key vault.
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('KEY_VAULT_NAME', 'Your-Key-Vault-Name', 'User')
+```
+
+In a new instance of **Windows PowerShell**, read the environment variable.
+
+```powershell
+[System.Environment]::GetEnvironmentVariable('KEY_VAULT_NAME')
+```
++++
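However you set it, the application reads the variable the same way at runtime. A small Python sketch (hypothetical helper name) of turning `KEY_VAULT_NAME` into the vault URI the secret client expects:

```python
import os

def key_vault_uri() -> str:
    """Read KEY_VAULT_NAME and build the vault URI used by the secret client."""
    name = os.environ.get("KEY_VAULT_NAME")
    if not name:
        raise RuntimeError("KEY_VAULT_NAME is not set")
    return f"https://{name}.vault.azure.net"
```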
+## Authenticate to Azure using Visual Studio
+
+Developers using Visual Studio 2017 or later can authenticate an Azure Active Directory account through Visual Studio. This enables you to access secrets in your key vault by signing into your Azure subscription from within the IDE.
+
+To authenticate in Visual Studio, select **Tools** from the top navigation menu, and select **Options**. Navigate to the **Azure Service Authentication** option to sign in with your user name and password.
+
+## Authenticate using the command line
++
+## Create a new C# application
+
+Using the Visual Studio IDE, create a new .NET Core console app. This will create a "Hello World" project with a single C# source file: `program.cs`.
+
+Install the following client libraries by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens, select **Browse**, search for the following libraries, and select **Install** for each:
+
+* `Azure.Security.KeyVault.Secrets`
+* `Azure.Identity`
+
+## Import the example code
+
+Copy the following example code into your `program.cs` file. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+using System.Net;
+
+namespace key_vault_console_app
+{
+ class Program
+ {
+ static async Task Main(string[] args)
+ {
+ //Name of your key vault
+ var keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
+
+ //variables for retrieving the key and endpoint from your key vault.
+ //Set these variables to the names you created for your secrets
+ const string keySecretName = "Your-Key-Secret-Name";
+ const string endpointSecretName = "Your-Endpoint-Secret-Name";
+
+ //Endpoint for accessing your key vault
+ var kvUri = $"https://{keyVaultName}.vault.azure.net";
+
+ var keyVaultClient = new SecretClient(new Uri(kvUri), new DefaultAzureCredential());
+
+ Console.WriteLine($"Retrieving your secrets from {keyVaultName}.");
+
+ //Key and endpoint secrets retrieved from your key vault
+ var keySecret = await keyVaultClient.GetSecretAsync(keySecretName);
+ var endpointSecret = await keyVaultClient.GetSecretAsync(endpointSecretName);
+ Console.WriteLine($"Your key secret value is: {keySecret.Value.Value}");
+ Console.WriteLine($"Your endpoint secret value is: {endpointSecret.Value.Value}");
+ Console.WriteLine("Secrets retrieved successfully");
+
+ }
+ }
+}
+```
+
+## Run the application
+
+Run the application by selecting the **Debug** button at the top of Visual Studio. Your key and endpoint secrets will be retrieved from your key vault.
+
+## Send a test Language service call (optional)
+
+If you're using a multi-service resource or Language resource, you can update [your application](#create-a-new-c-application) by following these steps to send an example Named Entity Recognition call that retrieves a key and endpoint from your key vault.
+
+1. Install the `Azure.AI.TextAnalytics` library by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens, select **Browse**, search for `Azure.AI.TextAnalytics`, and select **Install**.
+
+1. Add the following directive to the top of your `program.cs` file.
+
+ ```csharp
+ using Azure.AI.TextAnalytics;
+ ```
+
+1. Add the following code sample to your application.
+
+ ```csharp
+ // Example method for extracting named entities from text
+ private static void EntityRecognitionExample(string keySecret, string endpointSecret)
+ {
+ //String to be sent for Named Entity Recognition
+ var exampleString = "I had a wonderful trip to Seattle last week.";
+
+ AzureKeyCredential azureKeyCredential = new AzureKeyCredential(keySecret);
+ Uri endpoint = new Uri(endpointSecret);
+ var languageServiceClient = new TextAnalyticsClient(endpoint, azureKeyCredential);
+
+ Console.WriteLine($"Sending a Named Entity Recognition (NER) request");
+ var response = languageServiceClient.RecognizeEntities(exampleString);
+ Console.WriteLine("Named Entities:");
+ foreach (var entity in response.Value)
+ {
+ Console.WriteLine($"\tText: {entity.Text},\tCategory: {entity.Category},\tSub-Category: {entity.SubCategory}");
+ Console.WriteLine($"\t\tScore: {entity.ConfidenceScore:F2},\tLength: {entity.Length},\tOffset: {entity.Offset}\n");
+ }
+ }
+ ```
+
+1. Add the following code to call `EntityRecognitionExample()` from your main method, with your key and endpoint values.
+
+ ```csharp
+ EntityRecognitionExample(keySecret.Value.Value, endpointSecret.Value.Value);
+ ```
+
+1. Run the application.
+++
+## Authenticate your application
++
+## Create a Python application
+
+Create a new folder named `keyVaultExample`. Then use your preferred code editor to create a file named `program.py` inside the newly created folder.
+
+### Install Key Vault and Language service packages
+
+1. In a terminal or command prompt, navigate to your project folder and install the Azure Active Directory identity library:
+
+ ```terminal
+ pip install azure-identity
+ ```
+
+1. Install the Key Vault secrets library:
+
+ ```terminal
+ pip install azure-keyvault-secrets
+ ```
+
+## Import the example code
+
+Add the following code sample to the file named `program.py`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
+
+```python
+import os
+from azure.keyvault.secrets import SecretClient
+from azure.identity import DefaultAzureCredential
+from azure.core.credentials import AzureKeyCredential
+
+keyVaultName = os.environ["KEY_VAULT_NAME"]
+
+# Set these variables to the names you created for your secrets
+keySecretName = "Your-Key-Secret-Name"
+endpointSecretName = "Your-Endpoint-Secret-Name"
+
+# URI for accessing key vault
+KVUri = f"https://{keyVaultName}.vault.azure.net"
+
+# Instantiate the client and retrieve secrets
+credential = DefaultAzureCredential()
+kv_client = SecretClient(vault_url=KVUri, credential=credential)
+
+print(f"Retrieving your secrets from {keyVaultName}.")
+
+retrieved_key = kv_client.get_secret(keySecretName).value
+retrieved_endpoint = kv_client.get_secret(endpointSecretName).value
+
+print(f"Your secret key value is {retrieved_key}.")
+print(f"Your secret endpoint value is {retrieved_endpoint}.")
+
+```
+
+## Run the application
+
+Use the following command to run the application. Your key and endpoint secrets will be retrieved from your key vault.
+
+```terminal
+python ./program.py
+```
+
+## Send a test Language service call (optional)
+
+If you're using a multi-service resource or Language resource, you can update [your application](#create-a-python-application) by following these steps to send an example Named Entity Recognition call that retrieves a key and endpoint from your key vault.
+
+1. Install the Language service library:
+
+ ```console
+ pip install azure-ai-textanalytics==5.1.0
+ ```
+
+1. Add the following code to your application.
+
+ ```python
+ from azure.ai.textanalytics import TextAnalyticsClient
+ # Authenticate the key vault secrets client using your key and endpoint
+ azure_key_credential = AzureKeyCredential(retrieved_key)
+ # Now you can use key vault credentials with the Language service
+ language_service_client = TextAnalyticsClient(
+ endpoint=retrieved_endpoint,
+ credential=azure_key_credential)
+
+ # Example of recognizing entities from text
+
+ print("Sending NER request")
+
+ try:
+ documents = ["I had a wonderful trip to Seattle last week."]
+ result = language_service_client.recognize_entities(documents = documents)[0]
+ print("Named Entities:\n")
+ for entity in result.entities:
+ print("\tText: \t", entity.text, "\tCategory: \t", entity.category, "\tSubCategory: \t", entity.subcategory,
+ "\n\tConfidence Score: \t", round(entity.confidence_score, 2), "\tLength: \t", entity.length, "\tOffset: \t", entity.offset, "\n")
+
+ except Exception as err:
+ print("Encountered exception. {}".format(err))
+ ```
+
+1. Run the application.
+++
+## Authenticate your application
++
+## Create a Java application
+
+In your preferred IDE, create a new Java console application project, and create a class named `Example`.
+
+## Add dependencies
+
+In your project, add the following dependencies to your `pom.xml` file.
+
+```xml
+<dependencies>
+
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-security-keyvault-secrets</artifactId>
+ <version>4.2.3</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.2.0</version>
+ </dependency>
+ </dependencies>
+```
+
+## Import the example code
+
+Copy the following code into a file named `Example.java`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
+
+```java
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.security.keyvault.secrets.SecretClient;
+import com.azure.security.keyvault.secrets.SecretClientBuilder;
+import com.azure.core.credential.AzureKeyCredential;
+
+public class Example {
+
+ public static void main(String[] args) {
+
+ String keyVaultName = System.getenv("KEY_VAULT_NAME");
+ String keyVaultUri = "https://" + keyVaultName + ".vault.azure.net";
+
+ //variables for retrieving the key and endpoint from your key vault.
+ //Set these variables to the names you created for your secrets
+ String keySecretName = "Your-Key-Secret-Name";
+ String endpointSecretName = "Your-Endpoint-Secret-Name";
+
+ //Create key vault secrets client
+ SecretClient secretClient = new SecretClientBuilder()
+ .vaultUrl(keyVaultUri)
+ .credential(new DefaultAzureCredentialBuilder().build())
+ .buildClient();
+
+ //retrieve key and endpoint from key vault
+ String keyValue = secretClient.getSecret(keySecretName).getValue();
+ String endpointValue = secretClient.getSecret(endpointSecretName).getValue();
+ System.out.printf("Your secret key value is: %s", keyValue);
+ System.out.printf("Your secret endpoint value is: %s", endpointValue);
+ }
+}
+```
+
+## Send a test Language service call (optional)
+
+If you're using a multi-service resource or Language resource, you can update [your application](#create-a-java-application) by following these steps to send an example Named Entity Recognition call that retrieves a key and endpoint from your key vault.
+
+1. In your application, add the following dependency:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-ai-textanalytics</artifactId>
+ <version>5.1.12</version>
+ </dependency>
+ ```
+
+1. Add the following import statements to your file.
+
+ ```java
+ import com.azure.ai.textanalytics.models.*;
+ import com.azure.ai.textanalytics.TextAnalyticsClientBuilder;
+ import com.azure.ai.textanalytics.TextAnalyticsClient;
+ ```
+
+1. Add the following code to the `main()` method in your application:
+
+ ```java
+
+ TextAnalyticsClient languageClient = new TextAnalyticsClientBuilder()
+ .credential(new AzureKeyCredential(keyValue))
+ .endpoint(endpointValue)
+ .buildClient();
+
+ // Example for recognizing entities in text
+ String text = "I had a wonderful trip to Seattle last week.";
+
+ for (CategorizedEntity entity : languageClient.recognizeEntities(text)) {
+ System.out.printf(
+ "Recognized entity: %s, entity category: %s, entity sub-category: %s, score: %s, offset: %s, length: %s.%n",
+ entity.getText(),
+ entity.getCategory(),
+ entity.getSubcategory(),
+ entity.getConfidenceScore(),
+ entity.getOffset(),
+ entity.getLength());
+ }
+ ```
+
+1. Run your application.
+++
+## Authenticate your application
++
+## Create a new Node.js application
+
+Create a Node.js application that uses your key vault.
+
+In a terminal, create a folder named `key-vault-js-example` and change into that folder:
+
+```terminal
+mkdir key-vault-js-example && cd key-vault-js-example
+```
+
+Initialize the Node.js project:
+
+```terminal
+npm init -y
+```
+
+### Install Key Vault and Language service packages
+
+1. Using the terminal, install the Azure Key Vault secrets library, [@azure/keyvault-secrets](https://www.npmjs.com/package/@azure/keyvault-secrets) for Node.js.
+
+ ```terminal
+ npm install @azure/keyvault-secrets
+ ```
+
+1. Install the Azure Identity library, [@azure/identity](https://www.npmjs.com/package/@azure/identity), to authenticate to your key vault.
+
+ ```terminal
+ npm install @azure/identity
+ ```
+
+## Import the code sample
+
+Add the following code sample to a file named `index.js`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
+
+```javascript
+const { SecretClient } = require("@azure/keyvault-secrets");
+const { DefaultAzureCredential } = require("@azure/identity");
+// Load the .env file if it exists
+const dotenv = require("dotenv");
+dotenv.config();
+
+async function main() {
+ const credential = new DefaultAzureCredential();
+
+ const keyVaultName = process.env["KEY_VAULT_NAME"];
+ const url = "https://" + keyVaultName + ".vault.azure.net";
+
+ const kvClient = new SecretClient(url, credential);
+
+ // Set these variables to the names you created for your secrets
+ const keySecretName = "Your-Key-Secret-Name";
+ const endpointSecretName = "Your-Endpoint-Secret-Name";
+
+ console.log("Retrieving secrets from ", keyVaultName);
+ const retrievedKey = (await kvClient.getSecret(keySecretName)).value;
+ const retrievedEndpoint = (await kvClient.getSecret(endpointSecretName)).value;
+ console.log("Your secret key value is: ", retrievedKey);
+ console.log("Your secret endpoint value is: ", retrievedEndpoint);
+}
+
+main().catch((error) => {
+ console.error("An error occurred:", error);
+ process.exit(1);
+});
+```
+
+## Run the sample application
+
+Use the following command to run the application. Your key and endpoint secrets will be retrieved from your key vault.
+
+```terminal
+node index.js
+```
+
+## Send a test Language service call (optional)
+
+If you're using a multi-service resource or Language resource, you can update [your application](#create-a-new-nodejs-application) by following these steps to send an example Named Entity Recognition call that retrieves a key and endpoint from your key vault.
+
+1. Install the Azure Cognitive Service for Language library, [@azure/ai-text-analytics](https://www.npmjs.com/package/@azure/ai-text-analytics/) to send API requests to the [Language service](./language-service/overview.md).
+
+ ```terminal
+ npm install @azure/ai-text-analytics@5.1.0
+ ```
+
+2. Add the following code to your application:
+
+ ```javascript
+ const { TextAnalyticsClient, AzureKeyCredential } = require("@azure/ai-text-analytics");
+ // Authenticate the language client with your key and endpoint
+ const languageClient = new TextAnalyticsClient(retrievedEndpoint, new AzureKeyCredential(retrievedKey));
+
+ // Example for recognizing entities in text
+ console.log("Sending NER request")
+ const entityInputs = [
+ "I had a wonderful trip to Seattle last week."
+ ];
+ const entityResults = await languageClient.recognizeEntities(entityInputs);
+ entityResults.forEach(document => {
+ console.log(`Document ID: ${document.id}`);
+ document.entities.forEach(entity => {
+ console.log(`\tName: ${entity.text} \tCategory: ${entity.category} \tSubcategory: ${entity.subCategory ? entity.subCategory : "N/A"}`);
+ console.log(`\tScore: ${entity.confidenceScore}`);
+ });
+ });
+ ```
+
+3. Run the application.
++
+## Next steps
+
+* See [What are Cognitive Services](./what-are-cognitive-services.md) for the available features you can develop along with [Azure Key Vault](/azure/key-vault/general/overview).
+* For more information on secure application development, see:
+ * [Best practices for using Azure Key Vault](/azure/key-vault/general/best-practices)
+ * [Cognitive Services security](cognitive-services-security.md)
+ * [Azure security baseline for Cognitive Services](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=/azure/cognitive-services/TOC.json)
+
communication-services Port Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/port-phone-number.md
Record your resource's **Azure ID** and **Immutable Resource ID**:
## Initiate the port request
-Toll-free and geographic numbers based in the United States are eligible for porting. Use one of the following forms to submit your port request:
--- For toll-free numbers: [Toll-free number port request](https://aka.ms/acs-port-form-tollfree)-- For geographic numbers based in the US: [Geographic number port request](https://aka.ms/acs-port-form-geographic)-
-When completed, send your completed port request form to acsporting@microsoft.com. Please ensure that your email subject line begins with "ACS Port-In Request".
+Toll-free and geographic numbers are eligible for porting. Follow the ["New Port In Request" instructions](https://github.com/Azure/Communication/blob/master/special-order-numbers.md) to submit your port request.
## Next steps
communication-services Call Recording Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/call-recording-sample.md
This quickstart gets you started recording voice and video calls. This quickstart assumes you've already used the [Calling client SDK](get-started-with-video-calling.md) to build the end-user calling experience. Using the **Calling Server APIs and SDKs** you can enable and manage recordings. > [!NOTE]
-> **Unmixed audio-only** is still in a **Private Preview** and NOT enabled for Teams Interop meetings.
+> **Unmixed audio recording** is still in a **Private Preview**.
::: zone pivot="programming-language-csharp" [!INCLUDE [Build Call Recording server sample with C#](./includes/call-recording-samples/recording-server-csharp.md)]
container-apps Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/alerts.md
+
+ Title: Set up alerts in Azure Container Apps
+description: Set up alerts to monitor your container app.
+++++ Last updated : 08/30/2022+++
+# Set up alerts in Azure Container Apps
+
+Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define:
+
+- [Metric alerts](../azure-monitor/alerts/alerts-metric-overview.md) based on Azure Monitor metric data
+- [Log alerts](../azure-monitor/alerts/alerts-unified-log.md) based on Azure Monitor Log Analytics data
+
+You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the **Monitor** > **Alerts** page. To learn more about alerts, refer to [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
+
+The **Alerts** page in the **Monitoring** section on your container app page displays all of your app's alerts. You can filter the list by alert type, resource, time and severity. You can also modify and create new alert rules from this page.
+
+## Create metric alert rules
+
+When you create alert rules based on a metric chart in the metrics explorer, alerts are triggered when the metric data matches the alert rule conditions. For more information about creating metric charts, see [Using metrics explorer](metrics.md#using-metrics-explorer).
+
+After creating a metric chart, you can create a new alert rule.
+
+1. Select **New alert rule**. The **Create an alert rule** page is opened to the **Condition** tab. Here you'll find a *condition* that is populated with the metric chart settings.
+1. Select the condition.
+ :::image type="content" source="media/observability/metrics-alert-create-condition.png" alt-text="Screenshot of the metric explorer alert rule editor. A condition is automatically created based on the chart settings.":::
+1. Modify the **Alert logic** section to set the alert criteria. You can set the alert to trigger when the metric value is greater than, less than, or equal to a threshold value. You can also set the alert to trigger when the metric value is outside of a range of values.
+ :::image type="content" source="media/observability/screenshot-configure-alert-signal-logic.png" alt-text="Screenshot of the configure alert signal logic in Azure Container Apps.":::
+1. Select **Done**.
+1. You can add more conditions to the alert rule by selecting **Add condition** on the **Create an alert rule** page.
+1. Select the **Details** tab.
+1. Enter a name and description for the alert rule.
+1. Select **Review + create**.
+1. Select **Create**.
+ :::image type="content" source="media/observability/screenshot-alert-details-dialog.png" alt-text="Screen shot of the alert details configuration page.":::
++
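Conceptually, a metric alert aggregates samples over the evaluation window and applies the operator you chose against the threshold. A simplified Python sketch of that evaluation (illustrative only, not how Azure Monitor is implemented):

```python
def should_alert(values, threshold, operator):
    """Average the samples in the window, then apply the configured operator."""
    aggregated = sum(values) / len(values)
    if operator == "GreaterThan":
        return aggregated > threshold
    if operator == "LessThan":
        return aggregated < threshold
    if operator == "Equals":
        return aggregated == threshold
    raise ValueError(f"unsupported operator: {operator}")
```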
+### Add conditions to an alert rule
+
+To add more conditions to your alert rule:
+
+1. Select **Alerts** from the left side menu of your container app page.
+1. Select **Alert rules** from the top menu.
+1. Select an alert from the table.
+1. Select **Add condition** in the **Condition** section.
+1. Select from the metrics listed in the **Select a signal** pane.
+ :::image type="content" source="media/observability/metrics-alert-select-a-signal.png" alt-text="Screenshot of the metric explorer alert rule editor showing the Select a signal pane.":::
+1. Configure the settings for your alert condition. For more information about configuring alerts, see [Manage metric alerts](../azure-monitor/alerts/alerts-metric.md).
+
+ You can receive individual alerts for specific revisions or replicas by enabling alert splitting and selecting **Revision** or **Replica** from the **Dimension name** list.
+
+Example of selecting a dimension to split an alert.
++
+ To learn more about configuring alerts, visit [Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md).
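Splitting by a dimension means each distinct value of that dimension (each revision, for example) is evaluated and alerted on independently. A simplified Python sketch of the grouping (illustrative only; the field names are made up):

```python
from collections import defaultdict

def evaluate_per_revision(samples, threshold):
    """Group samples by their 'revision' dimension and flag each group separately."""
    grouped = defaultdict(list)
    for sample in samples:
        grouped[sample["revision"]].append(sample["value"])
    return {rev: max(vals) > threshold for rev, vals in grouped.items()}
```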
+
+## Create log alert rules
+
+You can create log alerts from queries in Log Analytics. When you create an alert rule from a query, the query is run at set intervals triggering alerts when the log data matches the alert rule conditions. To learn more about creating log alert rules, see [Manage log alerts](../azure-monitor/alerts/alerts-log.md).
+
+To create an alert rule:
+
+1. First, create and run a query to validate the query.
+1. Select **New alert rule**.
+1. The **Create an alert rule** editor is opened to the **Condition** tab, which is populated with your log query.
+ :::image type="content" source="media/observability/log-alerts-rule-editor.png" alt-text="Screenshot of the Log Analytics alert rule editor.":::
+1. Configure the settings in the **Measurement** section.
+ :::image type="content" source="media/observability/screenshot-metrics-alerts-measurements.png" alt-text="Screen shot of metrics Create an alert rule measurement section.":::
+1. Optionally, you can enable alert splitting in the alert rule to send individual alerts for each dimension you select in the **Split by dimensions** section of the editor.
+ :::image type="content" source="media/observability/log-alerts-splitting.png" alt-text="Screenshot of the Create an alert rule Split by dimensions section":::
+1. Enter the threshold criteria in the **Alert logic** section.
+ :::image type="content" source="media/observability/log-alert-alert-logic.png" alt-text="Screenshot of the Create an alert rule Alert logic section.":::
+1. Select the **Details** tab.
+1. Enter a name and description for the alert rule.
+1. Select **Review + create**.
+1. Select **Create**.
+
+> [!div class="nextstepaction"]
+> [View log streams from the Azure portal](log-streaming.md)
container-apps Container Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/container-console.md
+
+ Title: Connect to a container console in Azure Container Apps
+description: Connect to a container console in your container app.
+++++ Last updated : 08/30/2022++++
+# Connect to a container console in Azure Container Apps
+
+Connecting to a container's console is useful when you want to troubleshoot your application inside a container. Azure Container Apps lets you connect to a container's console using the Azure portal or the Azure CLI.
+
+## Azure portal
+
+To connect to a container's console in the Azure portal, follow these steps.
+
+1. Select **Console** in the **Monitoring** menu group from your container app page in the Azure portal.
+1. Select the revision, replica and container you want to connect to.
+1. Choose to access your console via bash, sh, or a custom executable. If you choose a custom executable, it must be available in the container.
++
+## Azure CLI
+
+Use the `az containerapp exec` command to connect to a container console. Select **Ctrl-D** to exit the console.
+
+For example, connect to a container console in a container app with a single container using the following command. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp exec \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup>
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp exec `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup>
+```
+++
+To connect to a container console in a container app with multiple revisions, replicas, and containers, include the following parameters in the `az containerapp exec` command.
+
+| Argument | Description |
+|-|-|
+| `--revision` | The revision name of the container to connect to. |
+| `--replica` | The replica name of the container to connect to. |
+| `--container` | The container name of the container to connect to. |
+
+You can get the revision names with the `az containerapp revision list` command. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp revision list \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --query "[].name"
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision list `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --query "[].name"
+```
+++
+Use the `az containerapp replica list` command to get the replica and container names. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp replica list \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --revision <RevisionName> \
+ --query "[].{Containers:properties.containers[].name, Name:name}"
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp replica list `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --revision <RevisionName> `
+ --query "[].{Containers:properties.containers[].name, Name:name}"
+```
+++
+Connect to the container console with the `az containerapp exec` command. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp exec \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --revision <RevisionName> \
+ --replica <ReplicaName> \
+ --container <ContainerName>
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp exec `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --revision <RevisionName> `
+ --replica <ReplicaName> `
+ --container <ContainerName>
+```
+++
+> [!div class="nextstepaction"]
+> [View log streams from the Azure portal](log-streaming.md)
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
Previously updated : 4/05/2022- Last updated : 09/01/2022+

# Tutorial: Deploy to Azure Container Apps using Visual Studio Code
In this tutorial, you'll deploy a containerized application to Azure Container Apps using Visual Studio Code.
## Clone the project
-To follow along with this tutorial, [Download the Sample Project](https://github.com/azure-samples/containerapps-albumapi-javascript/archive/refs/heads/master.zip) from [the repository](https://github.com/azure-samples/containerapps-albumapi-javascript) or clone it using the Git command below:
+1. Begin by cloning the [sample repository](https://github.com/azure-samples/containerapps-albumapi-javascript) to your machine using the following command.
-```bash
-git clone https://github.com/Azure-Samples/containerapps-albumapi-javascript.git
-cd containerapps-albumapi-javascript
-```
+ ```bash
+ git clone https://github.com/Azure-Samples/containerapps-albumapi-javascript.git
+ ```
-This tutorial uses a JavaScript project, but the steps are language agnostic. To open the project after cloning on Windows, navigate to the project's folder, and right click and choose **Open in VS Code**. For Mac or Linux, you can also use the Visual Studio Code user interface to open the sample project. Select **File -> Open Folder** and then navigate to the folder of the cloned project.
+ > [!NOTE]
+ > This tutorial uses a JavaScript project, but the steps are language agnostic.
+
+1. Open Visual Studio Code.
+
+1. Select **F1** to open the command palette.
+
+1. Select **File > Open Folder...** and select the folder where you cloned the sample project.
## Sign in to Azure
-To work with Container Apps and complete this tutorial, you'll need to be signed into Azure. Once you have the Azure Account extension installed, you can sign in using the command palette by typing **Ctrl + shift + p** on Windows and searching for the Azure Sign In command.
+1. Select **F1** to open the command palette.
+1. Select **Azure: Sign In** and follow the prompts to authenticate.
-Select **Azure: Sign in**, and Visual Studio Code will launch a browser for you to sign into Azure. Login with the account you'd like to use to work with Container Apps, and then switch back to Visual Studio Code.
+1. Once signed in, return to Visual Studio Code.
## Create the container registry and Docker image
-The sample project includes a Dockerfile that is used to build a container image for the application. Docker images contain all of the source code and dependencies necessary to run an application. You can build and publish the image for your app directly in Azure; a local Docker installation is not required. An image is required to create and run a container app.
-
-Container images are stored inside of container registries. You can easily create a container registry and upload an image of your app to it in a single workflow using Visual Studio Code.
+Docker images contain the source code and dependencies necessary to run an application. This sample project includes a Dockerfile used to build the application's container. Since you can build and publish the image for your app directly in Azure, a local Docker installation isn't required.
-1) First, right click on the Dockerfile in the explorer, and select **Build Image in Azure**. You can also begin this workflow from the command palette by entering **Ctrl + Shift + P** on Windows or **Cmd + Shift + P** on a Mac. When the command palette opens, search for *Build Image in Azure* and select **Enter** on the matching suggestion.
+Container images are stored inside container registries. You can create a container registry and upload an image of your app in a single workflow using Visual Studio Code.
- :::image type="content" source="media/visual-studio-code/visual-studio-code-build-in-azure-small.png" lightbox="media/visual-studio-code/visual-studio-code-build-in-azure.png" alt-text="A screenshot showing how to build the image in Azure.":::
+1. In the _Explorer_ window, expand the _src_ folder to reveal the Dockerfile.
-2) As the command palette opens, you are prompted to enter a tag for the container. Accept the default, which uses the project name with the `{{.Run.ID}}` replacement token as a suffix. Select **Enter** to continue.
+1. Right-click the Dockerfile, and select **Build Image in Azure**.
- :::image type="content" source="media/visual-studio-code/visual-studio-code-container-tag.png" alt-text="A screenshot showing Container Apps tagging.":::
+ This action opens the command palette and prompts you to define a container tag.
-3) Choose the subscription you would like to use to create your container registry and build your image, and then press enter to continue.
+1. Enter a tag for the container. Accept the default, which is the project name with the *latest* suffix.
-4) Select **+ Create new registry**, or if you already have a registry you'd like to use, select that item and skip to creating and deloying to the container app.
+1. Select **+ Create new registry**, or if you already have a registry you'd like to use, select that item and skip to creating and deploying to the container app.
-5) Enter a unique name for the new registry such as *msdocscapps123*, where 123 are unique numbers of your own choosing, and then press enter. Container registry names must be globally unique across all over Azure.
+1. Enter a unique name for the new registry such as `msdocscapps123`, where `123` are unique numbers of your own choosing, and then select enter.
-6) Select **Basic** as the SKU.
+    Container registry names must be globally unique across all of Azure.
-7) Choose **+ Create new resource group**, or select an existing resource group you'd like to use. For a new resource group, enter a name such as `msdocscontainerapps`, and press enter.
+1. Select **Basic** as the SKU.
-8) Finally, select the location that is nearest to you. Select **Enter** to finalize the workflow, and Azure begins creating the container registry and building the image. This may take a few moments to complete.
+1. Choose **+ Create new resource group**, or select an existing resource group you'd like to use.
-Once the registry is created and the image is built successfully, you are ready to create the container app to host the published image.
+ For a new resource group, enter a name such as `msdocscontainerapps`, and press enter.
-## Create and deploy to the container app
+1. Select the location that is nearest to you. Select **Enter** to finalize the workflow, and Azure begins creating the container registry and building the image.
-The Azure Container Apps extension for Visual Studio Code enables you to choose existing Container Apps resources, or create new ones to deploy your applications to. In this scenario you create a new Container App environment and container app to host your application. After installing the Container Apps extension, you can access its features under the Azure control panel in Visual Studio Code.
+ This process may take a few moments to complete.
-### Create the Container Apps environment
+Once the registry is created and the image is built successfully, you're ready to create the container app to host the published image.
-Every container app must be part of a Container Apps environment. An environment provides an isolated network for one or more container apps, making it possible for them to easily invoke each other. You will need to create an environment before you can create the container app itself.
+## Create and deploy to the container app
-1) In the Container Apps extension panel, right click on the subscription you would like to use and select **Create Container App Environment**.
+The Azure Container Apps extension for Visual Studio Code enables you to choose existing Container Apps resources, or create new ones to deploy your applications to. In this scenario, you create a new Container Apps environment and container app to host your application. After installing the Container Apps extension, you can access its features under the Azure control panel in Visual Studio Code.
- :::image type="content" source="media/visual-studio-code/visual-studio-code-create-app-environment.png" alt-text="A screenshot showing how to create a Container Apps environment.":::
+### Create the Container Apps environment
-2) A command palette workflow will open at the top of the screen. Enter a name for the new Container Apps environment, such as `msdocsappenvironment`, and select **Enter**.
+Every container app must be part of a Container Apps environment. An environment provides an isolated network for one or more container apps, making it possible for them to easily invoke each other. You'll need to create an environment before you can create the container app itself.
- :::image type="content" source="media/visual-studio-code/visual-studio-code-container-app-environment.png" alt-text="A screenshot showing the container app environment.":::
+1. Select <kbd>F1</kbd> to open the command palette.
-3) Select the desired location for the container app from the list of options.
+1. Enter **Azure Container Apps: Create Container Apps Environment...** and enter the following values as prompted by the extension.
- :::image type="content" source="media/visual-studio-code/visual-studio-code-container-env-location.png" alt-text="A screenshot showing the app environment location.":::
+ | Prompt | Value |
+ |--|--|
+ | Name | Enter **my-aca-environment** |
+ | Region | Select the region closest to you |
-Visual Studio Code and Azure will create the environment for you. This process may take a few moments to complete. Creating a container app environment also creates a log analytics workspace for you in Azure.
+Once you issue this command, Azure begins to create the environment for you. This process may take a few moments to complete. Creating a container app environment also creates a log analytics workspace for you in Azure.
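If you prefer scripting this step, the extension's workflow corresponds roughly to a single Azure CLI command. The following is an illustrative sketch, assuming the `containerapp` CLI extension is installed; the resource group and location are placeholders for your own values:

```azurecli
az containerapp env create \
  --name my-aca-environment \
  --resource-group <ResourceGroup> \
  --location <Location>
```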
### Create the container app and deploy the Docker image

Now that you have a container app environment in Azure, you can create a container app inside of it. You can also publish the Docker image you created earlier as part of this workflow.
-1) In the Container Apps extension panel, right click on the container environment you created previously and select **Create Container App**
-
- :::image type="content" source="media/visual-studio-code/visual-studio-code-create-container-app.png" alt-text="A screenshot showing how to create the container app.":::
-
-2) A new command palette workflow will open at the top of the screen. Enter a name for the new container app, such as `msdocscontainerapp`, and then select **Enter**.
-
- :::image type="content" source="media/visual-studio-code/visual-studio-code-container-name.png" alt-text="A screenshot showing the container app name.":::
-
-3) Next, you're prompted to choose a container registry hosting solution to pull a Docker image from. For this scenario, select **Azure Container Registries**, though Docker Hub is also supported.
-
-4) Select the container registry you created previously when publishing the Docker image.
-
-5) Select the container registry repository you published the Docker image to. Repositories allow you to store and organize your containers in logical groupings.
-
-6) Select the tag of the image you published earlier.
-
-7) When prompted for environment variables, choose **Skip for now**. This application does not require any environment variables.
-
-8) Select **Enable** on the ingress settings prompt to enable an HTTP endpoint for your application.
+1. Select <kbd>F1</kbd> to open the command palette.
-9) Choose **External** to configure the HTTP traffic that the endpoint will accept.
+1. Enter **Azure Container Apps: Create Container App...** and enter the following values as prompted by the extension.
-10) Enter a value of 3500 for the port, and then select **Enter** to complete the workflow. This value should be set to the port number that your container uses, which in the case of the sample app is 3500.
+ | Prompt | Value | Remarks |
+ |--|--|--|
+ | Environment | Select **my-aca-environment** | |
+ | Name | Enter **my-container-app** | |
+ | Container registry | Select **Azure Container Registries**, then select the registry you created as you published the container image. | |
+ | Repository | Select the container registry repository where you published the container image. | |
+ | Tag | Select **latest** | |
+ | Environment variables | Select **Skip for now** | |
+ | Ingress | Select **Enable** | |
+ | HTTP traffic type | Select **External** | |
+ | Port | Enter **3500** | You set this value to the port number that your container uses. |
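For reference, the values in the table above correspond roughly to the following Azure CLI command. This is an illustrative sketch only (registry credentials and other options are omitted), not part of the tutorial's Visual Studio Code workflow:

```azurecli
az containerapp create \
  --name my-container-app \
  --resource-group <ResourceGroup> \
  --environment my-aca-environment \
  --image <RegistryServer>/<Repository>:latest \
  --target-port 3500 \
  --ingress external
```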
-During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also be deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Click this link, and to view your app in the browser.
+During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Select this link to view your app in the browser.
:::image type="content" source="media/visual-studio-code/visual-studio-code-app-deploy.png" alt-text="A screenshot showing the deployed app.":::
-You can also append the `/albums` path at the end of the app URL to view data from a sample API request.
+You can also append the `/albums` path at the end of the app URL to view data from a sample API request.
Congratulations! You successfully created and deployed your first container app using Visual Studio code.
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
# Set up HTTPS ingress in Azure Container Apps
-Azure Container Apps allows you to expose your container app to the public web by enabling ingress. When you enable ingress, you do not need to create an Azure Load Balancer, public IP address, or any other Azure resources to enable incoming HTTPS requests.
+Azure Container Apps allows you to expose your container app to the public web by enabling ingress. When you enable ingress, you don't need to create an Azure Load Balancer, public IP address, or any other Azure resources to enable incoming HTTPS requests.
With ingress enabled, your container app features the following characteristics:
## Configuration
-Ingress is an application-wide setting. Changes to ingress settings apply to all revisions simultaneously, and do not generate new revisions.
+Ingress is an application-wide setting. Changes to ingress settings apply to all revisions simultaneously, and don't generate new revisions.
The ingress configuration section has the following form:
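The following is a minimal sketch of that section; the property values are illustrative, and the full resource template contains additional properties:

```json
{
  "ingress": {
    "external": true,
    "targetPort": 3500,
    "transport": "auto",
    "allowInsecure": false
  }
}
```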
The following settings are available when configuring ingress:
| Property | Description | Values | Required |
|---|---|---|---|
-| `external` | The ingress IP and fully qualified domain name (FQDN) can either be accessible externally from the internet or a VNET, or internally within the app environment only. | `true` for external visibility from the internet or a VNET, `false` for internal visibility within app environment only (default) | Yes |
+| `external` | When enabled, the environment is assigned a public IP and fully qualified domain name (FQDN) for external ingress and an internal IP and FQDN for internal ingress. When disabled, only an internal IP/FQDN is created. |`true` for external visibility, `false` for internal visibility (default) | Yes |
| `targetPort` | The port your container listens to for incoming requests. | Set this value to the port number that your container uses. Your application ingress endpoint is always exposed on port `443`. | Yes |
| `transport` | You can use either HTTP/1.1 or HTTP/2, or you can set it to automatically detect the transport type. | `http` for HTTP/1, `http2` for HTTP/2, `auto` to automatically detect the transport type (default) | No |
-| `allowInsecure` | Allows insecure traffic to your container app. | `false` (default), `true`<br><br>If set to `true`, HTTP requests to port 80 are not automatically redirected to port 443 using HTTPS, allowing insecure connections. | No |
+| `allowInsecure` | Allows insecure traffic to your container app. | `false` (default), `true`<br><br>If set to `true`, HTTP requests to port 80 aren't automatically redirected to port 443 using HTTPS, allowing insecure connections. | No |
> [!NOTE] > To disable ingress for your application, you can omit the `ingress` configuration property entirely.
container-apps Log Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-monitoring.md
+
+ Title: Monitor logs in Azure Container Apps with Log Analytics
+description: Monitor your container app logs with Log Analytics
+++++ Last updated : 08/30/2022+++
+# Monitor logs in Azure Container Apps with Log Analytics
+
+Azure Container Apps is integrated with Azure Monitor Log Analytics to monitor and analyze your container app's logs. Each Container Apps environment includes a Log Analytics workspace that provides a common place to store the system and application log data from all container apps running in the environment.
+
+Log entries are accessible by querying Log Analytics tables through the Azure portal or a command shell using the [Azure CLI](/cli/azure/monitor/log-analytics).
+
+There are two types of logs for Container Apps:
+
+- Console logs, which are emitted by your app.
+- System logs, which are emitted by the Container Apps service.
++
+## System Logs
+
+The Container Apps service provides system log messages at the container app level. System logs emit the following messages:
+
+| Source | Type | Message |
+||||
+| Dapr | Info | Successfully created dapr component \<component-name\> with scope \<dapr-component-scope\> |
+| Dapr | Info | Successfully updated dapr component \<component-name\> with scope \<component-type\> |
+| Dapr | Error | Error creating dapr component \<component-name\> |
+| Volume Mounts | Info | Successfully mounted volume \<volume-name\> for revision \<revision-scope\> |
+| Volume Mounts | Error | Error mounting volume \<volume-name\> |
+| Domain Binding | Info | Successfully bound Domain \<domain\> to the container app \<container app name\> |
+| Authentication | Info | Auth enabled on app. Creating authentication config |
+| Authentication | Info | Auth config created successfully |
+| Traffic weight | Info | Setting a traffic weight of \<percentage\>% for revision \<revision-name\> |
+| Revision Provisioning | Info | Creating a new revision: \<revision-name\> |
+| Revision Provisioning | Info | Successfully provisioned revision \<name\> |
+| Revision Provisioning | Info| Deactivating Old revisions since 'ActiveRevisionsMode=Single' |
+| Revision Provisioning | Error | Error provisioning revision \<revision-name\>. ErrorCode: \<[ErrImagePull]\|[Timeout]\|[ContainerCrashing]\> |
+
+The system log data is accessible by querying the `ContainerAppSystemLogs_CL` table. The most commonly used Container Apps specific columns in the table are:
+
+| Column | Description |
+|||
+| `ContainerAppName_s` | Container app name |
+| `EnvironmentName_s` | Container Apps environment name |
+| `Log_s` | Log message |
+| `RevisionName_s` | Revision name |
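As an illustration, these columns can be combined in a Kusto query. The following sketch counts system log entries per revision for the *album-api* app used in the examples later in this article:

```kusto
ContainerAppSystemLogs_CL
| where ContainerAppName_s == 'album-api'
| summarize Entries = count() by RevisionName_s
| order by Entries desc
```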
+
+## Console Logs
+
+Console logs originate from the `stderr` and `stdout` messages from the containers in your container app and Dapr sidecars. You can view console logs by querying the `ContainerAppConsoleLogs_CL` table.
+
+> [!TIP]
+> Instrumenting your code with well-defined log messages can help you to understand how your code is performing and to debug issues. To learn more about best practices refer to [Design for operations](/azure/architecture/guide/design-principles/design-for-operations).
+
+The most commonly used Container Apps specific columns in `ContainerAppConsoleLogs_CL` include:
+
+|Column |Description |
+|||
+| `ContainerAppName_s` | Container app name |
+| `ContainerGroupName_g` | Replica name |
+| `ContainerId_s` | Container identifier |
+| `ContainerImage_s` | Container image name |
+| `EnvironmentName_s` | Container Apps environment name |
+| `Log_s` | Log message |
+| `RevisionName_s` | Revision name |
+
+## Query logs with Log Analytics
+
+Log Analytics is a tool in the Azure portal that you can use to view and analyze log data. Using Log Analytics, you can write Kusto queries and then sort, filter, and visualize the results in charts to spot trends and identify issues. You can work interactively with the query results or use them with other features such as alerts, dashboards, and workbooks.
+
+### Azure portal
+
+Start Log Analytics from **Logs** in the sidebar menu on your container app page. You can also start Log Analytics from **Monitor** > **Logs**.
+
+You can query the logs using the tables listed in the **CustomLogs** category on the **Tables** tab. The tables in this category are the `ContainerAppSystemLogs_CL` and `ContainerAppConsoleLogs_CL` tables.
++
+Below is a Kusto query that displays console log entries for the container app named *album-api*.
+
+```kusto
+ContainerAppConsoleLogs_CL
+| where ContainerAppName_s == 'album-api'
+| project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s
+| take 100
+```
+
+Below is a Kusto query that displays system log entries for the container app named *album-api*.
+
+```kusto
+ContainerAppSystemLogs_CL
+| where ContainerAppName_s == 'album-api'
+| project Time=TimeGenerated, EnvName=EnvironmentName_s, AppName=ContainerAppName_s, Revision=RevisionName_s, Message=Log_s
+| take 100
+```
+
+For more information regarding Log Analytics and log queries, see the [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md).
+
+### Azure CLI/PowerShell
+
+Container Apps logs can be queried using the [Azure CLI](/cli/azure/monitor/log-analytics).
+
+These example Azure CLI queries output a table containing log records for the container app named **album-api**. The table columns are specified by the parameters after the `project` operator. The `$WORKSPACE_CUSTOMER_ID` variable contains the GUID of the Log Analytics workspace.
++
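If you haven't set `$WORKSPACE_CUSTOMER_ID` yet, one way to populate it is shown below; this sketch assumes you know the workspace's resource group and name:

```azurecli
WORKSPACE_CUSTOMER_ID=$(az monitor log-analytics workspace show \
  --resource-group <ResourceGroup> \
  --workspace-name <WorkspaceName> \
  --query customerId \
  --output tsv)
```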
+This example queries the `ContainerAppConsoleLogs_CL` table:
+
+# [Bash](#tab/bash)
+
+```azurecli
+az monitor log-analytics query --workspace $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s, LogLevel_s | take 5" --out table
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WORKSPACE_CUSTOMER_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s, LogLevel_s | take 5"
+$queryResults.Results
+```
+++
+This example queries the `ContainerAppSystemLogs_CL` table:
+
+# [Bash](#tab/bash)
+
+```azurecli
+az monitor log-analytics query --workspace $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerAppSystemLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Message=Log_s, LogLevel_s | take 5" --out table
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WORKSPACE_CUSTOMER_ID -Query "ContainerAppSystemLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Message=Log_s, LogLevel_s | take 5"
+$queryResults.Results
+```
+++
+For more information about using Azure CLI to view container app logs, see [Viewing Logs](monitor.md#viewing-logs).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [View log streams from the Azure portal](log-streaming.md)
container-apps Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-streaming.md
+
+ Title: View log streams in Azure Container Apps
+description: View your container app's log stream.
+++++ Last updated : 08/30/2022+++
+# View log streams in Azure Container Apps
+
+While developing and troubleshooting your container app, it's important to see a container's logs in real-time. Container Apps lets you view a stream of your container's `stdout` and `stderr` log messages through the Azure portal or the Azure CLI.
+
+## Azure portal
+
+View a container app's log stream in the Azure portal with these steps.
+
+1. Navigate to your container app in the Azure portal.
+1. Select **Log stream** under the *Monitoring* section on the sidebar menu.
+1. If you have multiple revisions, replicas, or containers, you can select from the pull-down menus to choose a container. If your app has only one container, you can skip this step.
+
+After a container is selected, the log stream is displayed in the viewing pane.
++
+## Azure CLI
+
+You can view a container's log stream from the Azure CLI with the `az containerapp logs show` command. Use the following arguments to:
+
+- View previous log entries with the `--tail` argument.
+- View a live stream with the `--follow` argument.
+
+Use `Ctrl/Cmd-C` to stop the live stream.
+
+For example, the following command lists the last 50 log entries from a container app with a single container. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp logs show \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --tail 50
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp logs show `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --tail 50
+```
+++
+To view the log stream for a container app with multiple revisions, replicas, and containers, include the following parameters in the `az containerapp logs show` command.
+
+| Argument | Description |
+|-|-|
+| `--revision` | The revision name of the container to connect to. |
+| `--replica` | The replica name of the container to connect to. |
+| `--container` | The container name of the container to connect to. |
+
+You can get the revision names with the `az containerapp revision list` command. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp revision list \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --query "[].name"
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision list `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --query "[].name"
+```
+++
+Use the `az containerapp replica list` command to get the replica and container names. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp replica list \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --revision <RevisionName> \
+ --query "[].{Containers:properties.containers[].name, Name:name}"
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp replica list `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --revision <RevisionName> `
+ --query "[].{Containers:properties.containers[].name, Name:name}"
+```
+++
+Stream the container logs with the `az containerapp logs show` command. Replace the \<placeholders\> with your container app's values.
++
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp logs show \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --revision <RevisionName> \
+ --replica <ReplicaName> \
+ --container <ContainerName> \
+ --follow
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp logs show `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --revision <RevisionName> `
+ --replica <ReplicaName> `
+ --container <ContainerName> `
+ --follow
+```
++++
+Enter **Ctrl-C** to stop the log stream.
+
+> [!div class="nextstepaction"]
+> [View log streams from the Azure portal](log-streaming.md)
container-apps Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/metrics.md
+
+ Title: Monitor Azure Container Apps metrics
+description: Monitor your running apps metrics
+++++ Last updated : 08/30/2022+++
+# Monitor Azure Container Apps metrics
+
+Azure Monitor collects metric data from your container app at regular intervals to help you gain insights into the performance and health of your container app.
+
+The metrics explorer in the Azure portal allows you to visualize the data. You can also retrieve raw metric data through the [Azure CLI](/cli/azure/monitor/metrics) and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
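For example, raw metric data can be pulled with the Azure CLI. The following is a sketch, assuming you have the container app's full resource ID; `Requests` is one of the metric IDs listed in the table that follows:

```azurecli
az monitor metrics list \
  --resource <ContainerAppResourceID> \
  --metric Requests \
  --interval PT1M \
  --output table
```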
+
+## Available metrics
+
+Container Apps provides these metrics.
+
+|Title | Description | Metric ID |Unit |
+|||||
+|CPU usage nanocores | CPU usage in nanocores (1,000,000,000 nanocores = 1 core) | UsageNanoCores| nanocores|
+|Memory working set bytes |Working set memory used in bytes |WorkingSetBytes |bytes|
+|Network in bytes|Network received bytes|RxBytes|bytes|
+|Network out bytes|Network transmitted bytes|TxBytes|bytes|
+|Requests|Requests processed|Requests|n/a|
+|Replica count| Number of active replicas| Replicas | n/a |
+|Replica Restart Count| Number of replica restarts | RestartCount | n/a |
+
+The metrics namespace is `microsoft.app/containerapps`.
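As a quick sanity check on the units, a `UsageNanoCores` sample converts to cores by dividing by 1,000,000,000. This shell sketch is illustrative only:

```shell
# Convert a sample UsageNanoCores value to cores
# (1 core = 1,000,000,000 nanocores)
nanocores=250000000
cores=$(awk -v n="$nanocores" 'BEGIN { printf "%.2f", n / 1000000000 }')
echo "$cores"   # prints 0.25
```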
+
+## Metrics snapshots
+
+Select the **Monitoring** tab on your app's **Overview** page to display charts showing your container app's current CPU, memory, and network utilization.
++
+From this view, you can pin one or more charts to your dashboard or select a chart to open it in the metrics explorer.
+
+## Using metrics explorer
+
+The Azure Monitor metrics explorer lets you create charts from metric data to help you analyze your container app's resource and network usage over time. You can pin charts to a dashboard or save them in a shared workbook.
+
+1. Open the metrics explorer in the Azure portal by selecting **Metrics** from the sidebar menu on your container app's page. To learn more about metrics explorer, go to [Getting started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md).
+
+1. Create a chart by selecting **Metric**. You can modify the chart by changing aggregation, adding more metrics, changing time ranges and intervals, adding filters, and applying splitting.
+
+### Add filters
+
+Optionally, you can create filters to limit the data shown based on revisions and replicas. To create a filter:
+1. Select **Add filter**.
+1. Select a revision or replica from the **Property** list.
+1. Select values from the **Value** list.
+ :::image type="content" source="media/observability/metrics-add-filter.png" alt-text="Screenshot of the metrics explorer showing the chart filter options.":::
+
+### Split metrics
+
+When your chart contains a single metric, you can split the metric information by revision or replica, with these exceptions:
+
+* The *Replica count* metric can only be split by revision.
+* The *Requests* metric can also be split by status code and status code category.
+
+To split by revision or replica:
+
+1. Select **Apply splitting**.
+1. Select **Revision** or **Replica** from the **Values** drop-down list.
+1. Optionally, set a limit on the number of revisions or replicas to display in the chart. The default is 10.
+1. Optionally, set **Sort order** to **Ascending** or **Descending**. The default is **Descending**.
++
+### Add scopes
+
+You can add more scopes to view metrics across multiple container apps.
++
+> [!div class="nextstepaction"]
+> [Set up alerts in Azure Container Apps](alerts.md)
container-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/monitor.md
Title: Monitor an app in Azure Container Apps
-description: Learn how applications are monitored and logged in Azure Container Apps.
+ Title: Write and view application logs in Azure Container Apps
+description: Learn to write and view application logs in Azure Container Apps.
Set the name of your resource group and Log Analytics workspace, and then retrie
```azurecli RESOURCE_GROUP="my-containerapps" LOG_ANALYTICS_WORKSPACE="containerapps-logs"- LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show --query customerId -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv` ``` # [PowerShell](#tab/powershell)
-```azurecli
+```powershell
$RESOURCE_GROUP="my-containerapps" $LOG_ANALYTICS_WORKSPACE="containerapps-logs"-
-$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=az monitor log-analytics workspace show --query customerId -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv
+$LOG_ANALYTICS_WORKSPACE_CLIENT_ID = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $RESOURCE_GROUP -Name $LOG_ANALYTICS_WORKSPACE).CustomerId
```
az monitor log-analytics query \
# [PowerShell](#tab/powershell)
-```azurecli
-az monitor log-analytics query `
- --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated | take 3" `
- --out table
+```powershell
+$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated | take 3"
+$queryResults.Results
```
my-container-app listening on port 80 PrimaryResult 2021-10-23T02:11:43.1
## Next steps > [!div class="nextstepaction"]
-> [Manage secrets](manage-secrets.md)
+> [Monitor logs in Azure Container Apps with Log Analytics](log-monitoring.md)
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
description: Monitor your running app in Azure Container Apps
- Previously updated : 07/05/2022 Last updated : 07/29/2022 # Observability in Azure Container Apps
-Azure Container Apps provides several built-in observability features that give you a holistic view of your container app's health throughout its application lifecycle. These features help you monitor and diagnose the state of your app to improve performance and respond to critical problems.
+Azure Container Apps provides several built-in observability features that together give you a holistic view of your container app's health throughout its application lifecycle. These features help you monitor and diagnose the state of your app to improve performance and respond to trends and critical problems.
These features include: -- [Log streaming](#log-streaming)-- [Container console](#container-console)-- [Azure Monitor metrics](#azure-monitor-metrics)-- [Azure Monitor Log Analytics](#azure-monitor-log-analytics)-- [Azure Monitor alerts](#azure-monitor-alerts)-
->[!NOTE]
-> While not a built-in feature, [Azure Monitor's Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications.
-> Although Container Apps doesn't support the Application Insights auto-instrumentation agent, you can instrument your application code using Application Insights SDKs.
-
-## Log streaming
-
-While developing and troubleshooting your container app, you often want to see a container's logs in real-time. Container Apps lets you view a stream of your container's `stdout` and `stderr` log messages using the Azure portal or the Azure CLI.
-
-### View log streams from the Azure portal
-
-Go to your container app page in the Azure portal. Select **Log stream** under the **Monitoring** section on the sidebar menu. For apps with more than one container, choose a container from the drop-down lists. When there are multiple revisions and replicas, first choose from the **Revision**, **Replica**, and then the **Container** drop-down lists.
-
-After you select the container, you can view the log stream in the viewing pane. You can stop the log stream and clear the log messages from the viewing pane. To save the log messages, you can copy and paste them into the editor of your choice.
--
-### Show log streams from Azure CLI
-
-Show a container's application logs from the Azure CLI with the `az containerapp logs show` command. You can view previous log entries using the `--tail` argument. To view a live stream, use the `--follow` argument. Select Ctrl-C to stop the live stream.
-
-For example, you can list the last 50 container log entries in a container app with a single revision, replica, and container using the following command.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az containerapp logs show \
- --name album-api \
- --resource-group album-api-rg \
- --tail 50
-```
-
-# [PowerShell](#tab/powershell)
-
-```azurecli
-az containerapp logs show `
- --name album-api `
- --resource-group album-api-rg `
- --tail 50
-```
---
-You can view a log stream from a container in a container app with multiple revisions, replicas, and containers by adding the `--revision`, `--replica`, `--container` arguments to the `az containerapp show` command.
-
-Use the `az containerapp revision list` command to get the revision, replica, and container names to use in the `az containerapp logs show` command.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az containerapp revision list \
- --name album-api \
- --resource-group album-api-rg
-```
-
-# [PowerShell](#tab/powershell)
-
-```azurecli
-az containerapp revision list `
- --name album-api `
- --resource-group album-api-rg
-```
---
-Show the streaming container logs:
-
-# [Bash](#tab/bash)
-
-```azurecli
-az containerapp logs show \
- --name album-api \
- --resource-group album-api-rg \
- --revision album-api--v2 \
- --replica album-api--v2-5fdd5b4ff5-6mblw \
- --container album-api-container \
- --follow
-```
-
-# [PowerShell](#tab/powershell)
-
-```azurecli
-az containerapp logs show `
- --name album-api `
- --resource-group album-api-rg `
- --revision album-api--v2 `
- --replica album-api--v2-5fdd5b4ff5-6mblw `
- --container album-api-container `
- --follow
-```
---
-## Container console
-
-Connecting to a container's console is useful when you want to troubleshoot and modify something inside a container. Azure Container Apps lets you connect to a container's console using the Azure portal or the Azure CLI.
-
-### Connect to a container console via the Azure portal
-
-Select **Console** in the **Monitoring** menu group from your container app page in the Azure portal. When your app has more than one container, choose a container from the drop-down list. When there are multiple revisions and replicas, first choose from the **Revision**, **Replica**, and then the **Container** drop-down lists.
-
-You can choose to access your console via bash, sh, or a custom executable. If you choose a custom executable, it must be available in the container.
--
-### Connect to a container console via the Azure CLI
-
-Use the `az containerapp exec` command to connect to a container console. Select Ctrl-D to exit the console.
-
-For example, you can connect to a container console in a container app with a single revision, replica, and container using the following command.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az containerapp exec \
- --name album-api \
- --resource-group album-api-rg
-```
-
-# [PowerShell](#tab/powershell)
-
-```azurecli
-az containerapp exec `
- --name album-api `
- --resource-group album-api-rg
-```
---
-To connect to a container console in a container app with multiple revisions, replicas, and containers include the `--revision`, `--replica`, and `--container` arguments in the `az containerapp exec` command.
-
-Use the `az containerapp revision list` command to get the revision, replica and container names to use in the `az containerapp exec` command.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az containerapp revision list \
- --name album-api \
- --resource-group album-api-rg
-```
-
-# [PowerShell](#tab/powershell)
-
-```azurecli
-az containerapp revision list `
- --name album-api `
- --resource-group album-api-rg
-```
---
-Connect to the container console.
-
-# [Bash](#tab/bash)
-
-```azurecli
-az containerapp exec \
- --name album-api \
- --resource-group album-api-rg \
- --revision album-api--v2 \
- --replica album-api--v2-5fdd5b4ff5-6mblw \
- --container album-api-container
-```
-
-# [PowerShell](#tab/powershell)
-
-```azurecli
-az containerapp exec `
- --name album-api `
- --resource-group album-api-rg `
- --revision album-api--v2 `
- --replica album-api--v2-5fdd5b4ff5-6mblw `
- --container album-api-container
-```
---
-## Azure Monitor metrics
-
-Azure Monitor collects metric data from your container app at regular intervals. These metrics help you gain insights into the performance and health of your container app. You can use metrics explorer in the Azure portal to monitor and analyze the metric data. You can also retrieve metric data through the [Azure CLI](/cli/azure/monitor/metrics) and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
-
-### Available metrics for Container Apps
-
-Container Apps provides these metrics.
-
-|Title | Description | Metric ID |Unit |
-|||||
-|CPU usage nanocores | CPU usage in nanocores (1,000,000,000 nanocores = 1 core) | UsageNanoCores| nanocores|
-|Memory working set bytes |Working set memory used in bytes |WorkingSetBytes |bytes|
-|Network in bytes|Network received bytes|RxBytes|bytes|
-|Network out bytes|Network transmitted bytes|TxBytes|bytes|
-|Requests|Requests processed|Requests|n/a|
-|Replica count| Number of active replicas| Replicas | n/a |
-|Replica Restart Count| Number of replica restarts | RestartCount | n/a |
-
-The metrics namespace is `microsoft.app/containerapps`.
-
-### View a current snapshot of your app's metrics
-
-On your container app **Overview** page in the Azure portal, select the **Monitoring** tab to display charts showing your container app's current CPU, memory, and network utilization.
--
-From this view, you can pin one or more charts to your dashboard or select a chart to open it in the metrics explorer.
-
-### View metrics with metrics explorer
-
-The Azure Monitor metrics explorer lets you create charts from metric data to help you analyze your container app's resource and network usage over time. You can pin charts to a dashboard or in a shared workbook.
-
-Open the metrics explorer in the Azure portal by selecting **Metrics** from the sidebar menu on your container app page. To learn more about metrics explorer, go to [Getting started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md).
-
-Create a chart by selecting a **Metric**. You can modify the chart by changing aggregation, adding more metrics, changing time ranges and intervals, adding filters, and applying splitting.
--
-You can filter your metrics by revision or replica. For example, to filter by a replica, select **Add filter** and select a replica from the **Value** drop-down list.
--
-When applying splitting, you can split the metric information in your chart by revision or replica (except for Replica count, which you can only split by revision). The requests metric can also be split by status code and status code category. For example, to split by revision, select **Apply splitting** and select **Revision** from the **Values** drop-down list. Splitting is only available when the chart contains a single metric.
--
-You can add more scopes to view metrics across multiple container apps.
--
-## Azure Monitor Log Analytics
-
-Azure Container Apps is integrated with Azure Monitor Log Analytics to monitor and analyze your container app's logs. Each Container Apps environment includes a Log Analytics workspace that provides a common place to store the system and application log data from all container apps running in the environment.
-
-Log entries are accessible by querying Log Analytics tables through the Azure portal or a command shell using the [Azure CLI](/cli/azure/monitor/log-analytics).
-
-<!--
-Azure Monitor collects application logs and stores them in a Log Analytics workspace. Each Container Apps environment includes a Log Analytics workspace that provides a common place to store the application log data from all containers running in the environment.
>-
-There are two types of logs for Container Apps.
-
-1. Console logs, which are emitted by your app.
-1. System logs, which are emitted by the Container Apps service.
--
-### Container Apps System Logs
-
-The Container Apps service provides system log messages at the container app level. System logs emits the following messages:
-
-| Source | Type | Message |
-||||
-| Dapr | info | Successfully created dapr component \<component-name\> with scope \<dapr-component-scope\> |
-| Dapr | info | Successfully updated dapr component \<component-name\> with scope \<component-type\> |
-| Dapr | error | Error creating dapr component \<component-name\> |
-| Volume Mounts | info | Successfully mounted volume \<volume-name\> for revision \<revision-scope\> |
-| Volume Mounts | error | Error mounting volume \<volume-name\> |
-| Domain Binding | info | Successfully bound Domain \<domain\> to the container app \<container app name\> |
-| Authentication | info | Auth enabled on app. Creating authentication config |
-| Authentication | info | Auth config created successfully |
-| Traffic weight | info | Setting a traffic weight of \<percentage>% for revision \<revision-name\\> |
-| Revision Provisioning | info | Creating a new revision: \<revision-name\> |
-| Revision Provisioning | info | Successfully provisioned revision \<name\> |
-| Revision Provisioning | info| Deactivating Old revisions since 'ActiveRevisionsMode=Single' |
-| Revision Provisioning | error | Error provisioning revision \<revision-name>. ErrorCode: \<[ErrImagePull]\|[Timeout]\|[ContainerCrashing]\> |
-
-The system log data is accessible by querying the `ContainerAppSystemlogs_CL` table. The most used Container Apps specific columns in the table are:
-
-| Column | Description |
-|||
-| `ContainerAppName_s` | Container app name |
-| `EnvironmentName_s` | Container Apps environment name |
-| `Log_s` | Log message |
-| `RevisionName_s` | Revision name |
-
-### Container Apps Console Logs
-
-Console logs originate from the `stderr` and `stdout` messages from the containers in your container app and Dapr sidecars. You can view console logs by querying the `ContainerAppConsolelogs_CL` table.
-
-> [!TIP]
-> Instrumenting your code with well-defined log messages can help you to understand how your code is performing and to debug issues. To learn more about best practices refer to [Design for operations](/azure/architecture/guide/design-principles/design-for-operations).
-
-The most commonly used Container Apps specific columns in ContainerAppConsoleLogs_CL include:
-
-|Column |Description |
+|Feature |Description |
|||
-| `ContainerAppName_s` | Container app name |
-| `ContainerGroupName_g` | Replica name |
-| `ContainerId_s` | Container identifier |
-| `ContainerImage_s` | Container image name |
-| `EnvironmentName_s` | Container Apps environment name |
-| `Log_s` | Log message |
-| `RevisionName_s` | Revision name |
-
-### Use Log Analytics to query logs
-
-Log Analytics is a tool in the Azure portal that you can use to view and analyze log data. Using Log Analytics, you can write Kusto queries and then sort, filter, and visualize the results in charts to spot trends and identify issues. You can work interactively with the query results or use them with other features such as alerts, dashboards, and workbooks.
-
-Start Log Analytics from **Logs** in the sidebar menu on your container app page. You can also start Log Analytics from **Monitor>Logs**.
-
-You can query the logs using the tables listed in the **CustomLogs** category **Tables** tab. The tables in this category are the `ContainerAppSystemlogs_CL` and `ContainerAppConsolelogs_CL` tables.
--
-Below is a Kusto query that displays console log entries for the container app named *album-api*.
-
-```kusto
-ContainerAppConsoleLogs_CL
-| where ContainerAppName_s == 'album-api'
-| project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s
-| take 100
-```
-
-Below is a Kusto query that displays system log entries for the container app named *album-api*.
-
-```kusto
-ContainerAppSystemLogs_CL
-| where ContainerAppName_s == 'album-api'
-| project Time=TimeGenerated, EnvName=EnvironmentName_s, AppName=ContainerAppName_s, Revision=RevisionName_s, Message=Log_s
-| take 100
-```
-
-For more information regarding Log Analytics and log queries, see the [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md).
-
-### Query logs via the Azure CLI
+|[Log streaming](log-streaming.md) | View streaming console logs from a container in near real-time. |
+|[Container console](container-console.md) | Connect to the Linux console in your containers to debug your application from inside the container. |
+|[Azure Monitor metrics](metrics.md)| View and analyze your application's compute and network usage through metric data. |
+|[Azure Monitor Log Analytics](log-monitoring.md) | Run queries to view and analyze your app's system and application logs. |
+|[Azure Monitor alerts](alerts.md) | Create and manage alerts to notify you of events and conditions based on metric and log data.|
-Container Apps logs can be queried using the [Azure CLI](/cli/azure/monitor/log-analytics).
-
-These example Azure CLI queries output a table containing log records for the container app name **album-api**. The table columns are specified by the parameters after the `project` operator. The `$WORKSPACE_CUSTOMER_ID` variable contains the GUID of the Log Analytics workspace.
--
-This example queries the `ContainerAppConsoleLogs_CL` table:
-
-```azurecli
-az monitor log-analytics query --workspace $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s, LogLevel_s | take 5" --out table
-```
-
-This example queries the `ContainerAppSystemLogs_CL` table:
-
-```azurecli
-az monitor log-analytics query --workspace $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerAppSystemLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Message=Log_s, LogLevel_s | take 5" --out table
-```
-
-For more information about using Azure CLI to view container app logs, see [Viewing Logs](monitor.md#viewing-logs).
-
-## Azure Monitor alerts
-
-Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define:
--- [metric alerts](../azure-monitor/alerts/alerts-metric-overview.md) based on metric data-- [log alerts](../azure-monitor/alerts/alerts-unified-log.md) based on log data-
-You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the **Monitor>Alerts** page.
-
-To learn more about alerts, refer to [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
-
-### Create metric alerts in metrics explorer
-
-When you add alert rules to a metric chart in the metrics explorer, alerts are triggered when the collected metric data matches alert rule conditions.
-
-After creating a [metric chart](#view-metrics-with-metrics-explorer), select **New alert rule** to create an alert rule based on the chart's settings. The new alert rule will include your chart's target resource, metric, splitting and filter dimensions.
--
-When you select **New alert rule**, the rule creation pane opens to the **Condition** tab. Metrics explorer automatically creates an alert condition containing the chart's metric settings. Select the alert condition to add the threshold criteria to complete the condition.
--
-Add more conditions to your alert rule by selecting the **Add condition** option in the **Create an alert rule** pane.
--
-When adding a new alert condition, select from the metrics listed in the **Select a signal** pane.
--
-After selecting the metric, you can configure the settings for your alert condition. For more information about configuring alerts, see [Manage metric alerts](../azure-monitor/alerts/alerts-metric.md).
-
-You can receive individual alerts for specific revisions or replicas by enabling alert splitting and selecting **Revision** or **Replica** from the **Dimension name** list.
-
-Example of selecting a dimension to split an alert.
--
- To learn more about configuring alerts, visit [Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md)
-
-### Create log alert rules in Log Analytics
-
-Use Log Analytics to create alert rules from a log query. When you create an alert rule from a query, the query is run at set intervals triggering alerts when the log data matches the alert rule conditions. To learn more about creating log alert rules, see [Manage log alerts](../azure-monitor/alerts/alerts-log.md).
-
-To create an alert rule, you first create and run the query to validate it. Then, select **New alert rule**.
--
-The **Create an alert rule** editor is opened to the **Condition** tab, which is populated with your log query. Configure the settings in the **Measurement** and **Alert logic** sections to complete the alert rule.
--
-Optionally, you can enable alert splitting in the alert rule to send individual alerts for each dimension you select in the **Split by dimensions** section of the editor. The dimensions for Container Apps are:
--- app name-- revision-- container-- log message-
+>[!NOTE]
+> While not a built-in feature, [Azure Monitor's Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications. Although Container Apps doesn't support the Application Insights auto-instrumentation agent, you can instrument your application code using Application Insights SDKs.
-## Observability throughout the application lifecycle
+## Application lifecycle observability
-With the Container Apps observability features, you can monitor your app throughout the development-to-production lifecycle. The following sections describe the most useful monitoring features for each phase.
+With Container Apps observability features, you can monitor your app throughout the development-to-production lifecycle. The following sections describe the most effective monitoring features for each phase.
-### Development and test phase
+### Development and test
During the development and test phase, real-time access to your containers' application logs and console is critical for debugging issues. Container Apps provides: -- [log streaming](#log-streaming) for real-time monitoring-- [container console](#container-console) access to debug your application
+- [Log streaming](log-streaming.md): View real-time log streams from your containers.
+- [Container console](container-console.md): Access the container console to debug your application.
-### Deployment phase
+### Deployment
-Once you deploy your container app, it's essential to monitor your app. Continuous monitoring helps you quickly identify problems that may occur around error rates, performance, or metrics.
+Once you deploy your container app, continuous monitoring helps you quickly identify problems that may occur around error rates, performance, and resource consumption.
-Azure Monitor features give you the ability to track your app with the following features:
+Azure Monitor gives you the ability to track your app with the following features:
-- [Azure Monitor Metrics](#azure-monitor-metrics): monitor and analyze key metrics-- [Azure Monitor Alerts](#azure-monitor-alerts): send alerts for critical conditions-- [Azure Monitor Log Analytics](#azure-monitor-log-analytics): view and analyze application logs
+- [Azure Monitor metrics](metrics.md): Monitor and analyze key metrics.
+- [Azure Monitor alerts](alerts.md): Receive alerts for critical conditions.
+- [Azure Monitor Log Analytics](log-monitoring.md): View and analyze application logs.
-### Maintenance phase
+### Maintenance
-Container Apps manages updates to your container app by creating [revisions](revisions.md). You can run multiple revisions concurrently to perform A/B testing or for blue green deployments. These observability features will help you monitor your app across revisions:
+Container Apps manages updates to your container app by creating [revisions](revisions.md). You can run multiple revisions concurrently in blue-green deployments or to perform A/B testing. These observability features help you monitor your app across revisions:
-- [Azure Monitor Metrics](#azure-monitor-metrics): monitor and compare key metrics for multiple revisions-- [Azure Monitor Alerts](#azure-monitor-alerts): send alerts individual alerts per revision-- [Azure Monitor Log Analytics](#azure-monitor-log-analytics): view, analyze and compare log data for multiple revisions
+- [Azure Monitor metrics](metrics.md): Monitor and compare key metrics for multiple revisions.
+- [Azure Monitor alerts](alerts.md): Receive individual alerts per revision.
+- [Azure Monitor Log Analytics](log-monitoring.md): View, analyze, and compare log data for multiple revisions.
## Next steps
container-apps Storage Mounts Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts-azure-files.md
The following commands help you define variables and ensure your Container Apps
-1. Register the `Microsoft.OperationalInsights` provider for the [Azure Monitor Log Analytics Workspace](./observability.md?tabs=bash#azure-monitor-log-analytics) if you haven't used it before.
+1. Register the `Microsoft.OperationalInsights` provider for the Azure Monitor Log Analytics workspace if you haven't used it before.
# [Bash](#tab/bash)
cosmos-db Troubleshoot Dot Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk.md
Title: Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK
description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using .NET SDK. Previously updated : 08/30/2022 Last updated : 09/01/2022
If your app is deployed on [Azure Virtual Machines without a public IP address](
See our [latency troubleshooting guide](troubleshoot-dot-net-sdk-slow-request.md) for details on latency troubleshooting.
+### Proxy authentication failures
+
+If you see errors that show as HTTP 407:
+
+```
+Response status code does not indicate success: ProxyAuthenticationRequired (407);
+```
+
+This error isn't generated by the SDK, nor does it come from the Azure Cosmos DB service. It indicates a networking configuration issue: a proxy in your network is most likely missing the required proxy authentication. If you aren't expecting to use a proxy, reach out to your network team. If you *are* using a proxy, make sure you're setting the right [WebProxy](/dotnet/api/system.net.webproxy) configuration on [CosmosClientOptions.WebProxy](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.webproxy) when creating the client instance.
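+
+As a minimal illustration, the sketch below constructs a [WebProxy](/dotnet/api/system.net.webproxy) that carries explicit proxy credentials. The proxy address and credentials are placeholders, and the `CosmosClientOptions` assignment is shown in comments because it requires the Azure Cosmos DB .NET SDK package.
+
+```csharp
+using System;
+using System.Net;
+
+// Sketch: a WebProxy carrying the credentials your proxy requires.
+// The address and credentials below are placeholders for your environment.
+var proxy = new WebProxy(new Uri("http://proxy.example.local:8080"))
+{
+    Credentials = new NetworkCredential("proxyUser", "proxyPassword")
+};
+
+// Pass the proxy to the client via CosmosClientOptions (requires the
+// Microsoft.Azure.Cosmos package):
+// var options = new CosmosClientOptions { WebProxy = proxy };
+// var client = new CosmosClient(accountEndpoint, authKey, options);
+```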
+ ### Common query issues The [query metrics](sql-api-query-metrics.md) will help determine where the query is spending most of the time. From the query metrics, you can see how much of it's being spent on the back-end vs the client. Learn more on the [query performance guide](performance-tips-query-sdk.md?pivots=programming-language-csharp).
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 06/01/2022 Last updated : 08/19/2022 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
If all of your source records map to the same target entity and your source data
:::image type="content" source="./media/connector-dynamics-crm-office-365/connector-dynamics-add-entity-reference-column.png" alt-text="Dynamics lookup-field adding an entity-reference column":::
+## Writing data to a lookup field via alternate keys
+
+To write data into a lookup field using alternate key columns, follow this guidance and example:
+
+1. Ensure your source contains all the lookup key columns.
+
+2. Map the alternate key columns to output columns that follow the special naming pattern `{lookup_field_name}@{alternate_key_column_name}`. This column doesn't exist in Dynamics; it indicates that the column is used to look up the record in the target entity. For example, with a hypothetical lookup field `parentcustomerid` and alternate key column `accountnumber`, the mapped column name would be `parentcustomerid@accountnumber`.
+
+3. Go to the **Mapping** tab in the sink transformation of mapping data flows. Select the alternate key as output columns under the lookup field. The value shown after each alternate key indicates its key columns.
+
+ :::image type="content" source="./media/connector-dynamics-crm-office-365/select-alternate-key-columns.png" alt-text="Screenshot shows selecting alternate key columns.":::
+
+4. Once selected, the alternate key columns automatically display below.
+
+ :::image type="content" source="./media/connector-dynamics-crm-office-365/connector-dynamics-lookup-field-column-mapping-alternate-key-1.png" alt-text="Screenshot shows mapping columns to lookup fields via alternate keys step 1.":::
+
+5. Map your input columns on the left to the output columns.
+
+ :::image type="content" source="./media/connector-dynamics-crm-office-365/connector-dynamics-lookup-field-column-mapping-alternate-key-2.png" alt-text="Screenshot shows mapping columns to lookup fields via alternate keys step 2.":::
+
+> [!Note]
+> Currently this is only supported in mapping data flows.
+ ## Mapping data flow properties When transforming data in mapping data flow, you can read from and write to tables in Dynamics. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. You can choose to use a Dynamics dataset or an [inline dataset](data-flow-source.md#inline-datasets) as source and sink type.
IncomingStream sink(allowSchemaDrift: true,
skipDuplicateMapInputs: true, skipDuplicateMapOutputs: true) ~> DynamicsSink ```+ ## Lookup activity properties To learn details about the properties, see [Lookup activity](control-flow-lookup-activity.md).
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
Title: Troubleshoot the Azure Synapse Analytics, Azure SQL Database, and SQL Server connectors
-description: Learn how to troubleshoot issues with the Azure Synapse Analytics, Azure SQL Database, and SQL Server connectors in Azure Data Factory and Azure Synapse Analytics.
+description: Learn how to troubleshoot issues with the Azure Synapse Analytics, Azure SQL Database, and SQL Server connectors in Azure Data Factory and Azure Synapse Analytics.
++ Last updated : 09/02/2022 Previously updated : 06/29/2022--+
+ - has-adal-ref
+ - synapse
# Troubleshoot the Azure Synapse Analytics, Azure SQL Database, and SQL Server connectors in Azure Data Factory and Azure Synapse
This article provides suggestions to troubleshoot common problems with the Azure
| If the error message contains the string "SqlException", the SQL Database error indicates that some specific operation failed. | For more information, search by SQL error code in [Database engine errors](/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support. | | If this is a transient issue (for example, an unstable network connection), add retry in the activity policy to mitigate. | For more information, see [Pipelines and activities](./concepts-pipelines-activities.md#activity-policy). | | If the error message contains the string "Client with IP address '...' is not allowed to access the server", and you're trying to connect to Azure SQL Database, the error is usually caused by an Azure SQL Database firewall issue. | In the Azure SQL Server firewall configuration, enable the **Allow Azure services and resources to access this server** option. For more information, see [Azure SQL Database and Azure Synapse IP firewall rules](/azure/azure-sql/database/firewall-configure). |
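
The retry mitigation for transient issues is configured through the activity's policy. A minimal sketch of such a policy fragment in a pipeline JSON definition (activity name and values are illustrative):

```json
{
    "name": "CopyFromSql",
    "type": "Copy",
    "policy": {
        "timeout": "0.12:00:00",
        "retry": 3,
        "retryIntervalInSeconds": 60
    }
}
```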
-
+ ## Error code: SqlOperationFailed - **Message**: `A database operation failed. Please search error to get more details.`
This article provides suggestions to troubleshoot common problems with the Azure
| If the error message contains the string "InvalidOperationException", it's usually caused by invalid input data. | To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity, which can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity](./copy-activity-fault-tolerance.md). | | If the error message contains "Execution Timeout Expired", it's usually caused by query timeout. | Configure **Query timeout** in the source and **Write batch timeout** in the sink to increase timeout. | - ## Error code: SqlUnauthorizedAccess - **Message**: `Cannot connect to '%connectorName;'. Detail Message: '%message;'`
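
As a sketch, the **Query timeout** and **Write batch timeout** settings used to mitigate query timeouts map to copy activity properties like the following (values are illustrative; property availability depends on the connector):

```json
"typeProperties": {
    "source": {
        "type": "AzureSqlSource",
        "queryTimeout": "02:00:00"
    },
    "sink": {
        "type": "AzureSqlSink",
        "writeBatchTimeout": "00:30:00"
    }
}
```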
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Check to ensure that the login account has sufficient permissions to access the SQL database. - ## Error code: SqlOpenConnectionTimeout - **Message**: `Open connection to database timeout after '%timeoutValue;' seconds.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Retry the operation to update the linked service connection string with a larger connection timeout value. - ## Error code: SqlAutoCreateTableTypeMapFailed - **Message**: `Type '%dataType;' in source side cannot be mapped to a type that supported by sink side(column name:'%columnName;') in autocreate table.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Update the column type in *mappings*, or manually create the sink table in the target server. - ## Error code: SqlDataTypeNotSupported - **Message**: `A database operation failed. Check the SQL errors.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Update the corresponding column type to the *datetime2* type in the sink table. - ## Error code: SqlInvalidDbStoredProcedure - **Message**: `The specified Stored Procedure is not valid. It could be caused by that the stored procedure doesn't return any data. Invalid Stored Procedure script: '%scriptName;'.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Validate the SQL query by using SQL Tools. Make sure that the query can return data. - ## Error code: SqlInvalidColumnName - **Message**: `Column '%column;' does not exist in the table '%tableName;', ServerName: '%serverName;', DatabaseName: '%dbName;'.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Verify the column in the query, *structure* in the dataset, and *mappings* in the activity. - ## Error code: SqlBatchWriteTimeout - **Message**: `Timeouts in SQL write operation.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Retry the operation. If the problem persists, contact Azure SQL support. - ## Error code: SqlBatchWriteTransactionFailed - **Message**: `SQL transaction commits failed.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Retry the activity and review the SQL database side metrics. - ## Error code: SqlBulkCopyInvalidColumnLength - **Message**: `SQL Bulk Copy failed due to receive an invalid column length from the bcp client.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity. This can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity](./copy-activity-fault-tolerance.md). - ## Error code: SqlConnectionIsClosed - **Message**: `The connection is closed by SQL Database.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Cause**: The linked service was not configured properly. -- **Recommendation**: Validate and fix the SQL server linked service.
+- **Recommendation**: Validate and fix the SQL server linked service.
## Error code: SqlParallelFailedToDetectPartitionColumn
This article provides suggestions to troubleshoot common problems with the Azure
- **Cause**: There is no primary key or unique key in the table. -- **Recommendation**: Check the table to make sure that a primary key or a unique index is created.
+- **Recommendation**: Check the table to make sure that a primary key or a unique index is created.
## Error code: SqlParallelFailedToDetectPhysicalPartitions
This article provides suggestions to troubleshoot common problems with the Azure
- **Symptoms**: When you copy data from a tabular data source (such as SQL Server) into Azure Synapse Analytics using staged copy and PolyBase, you receive the following error:
- `ErrorCode=FailedDbOperation,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
- Message=Error happened when loading data into Azure Synapse Analytics.,
- Source=Microsoft.DataTransfer.ClientLibrary,Type=System.Data.SqlClient.SqlException,
+ `ErrorCode=FailedDbOperation,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
+ Message=Error happened when loading data into Azure Synapse Analytics.,
+ Source=Microsoft.DataTransfer.ClientLibrary,Type=System.Data.SqlClient.SqlException,
Message=Conversion failed when converting from a character string to uniqueidentifier...` - **Cause**: Azure Synapse Analytics PolyBase can't convert an empty string to a GUID. - **Resolution**: In the copy activity sink, under PolyBase settings, set the **use type default** option to *false*. - ## Error message: Expected data type: DECIMAL(x,x), Offending value - **Symptoms**: When you copy data from a tabular data source (such as SQL Server) into Azure Synapse Analytics by using staged copy and PolyBase, you receive the following error:
- `ErrorCode=FailedDbOperation,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
- Message=Error happened when loading data into Azure Synapse Analytics.,
- Source=Microsoft.DataTransfer.ClientLibrary,Type=System.Data.SqlClient.SqlException,
+ `ErrorCode=FailedDbOperation,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
+ Message=Error happened when loading data into Azure Synapse Analytics.,
+ Source=Microsoft.DataTransfer.ClientLibrary,Type=System.Data.SqlClient.SqlException,
Message=Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 415 rows processed. (/file_name.txt) Column ordinal: 18, Expected data type: DECIMAL(x,x), Offending value:..`
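
Setting the PolyBase **use type default** option to *false*, as recommended for these conversion errors, corresponds to a copy activity sink fragment like the following sketch (property names assume the Azure Synapse Analytics sink schema; reject values are illustrative):

```json
"sink": {
    "type": "SqlDWSink",
    "allowPolyBase": true,
    "polyBaseSettings": {
        "rejectType": "value",
        "rejectValue": 0,
        "useTypeDefault": false
    }
}
```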
This article provides suggestions to troubleshoot common problems with the Azure
- **Resolution**: In the copy activity sink, under PolyBase settings, set the **use type default** option to false. - ## Error message: Java exception message: HdfsBridge::CreateRecordReader - **Symptoms**: You copy data into Azure Synapse Analytics by using PolyBase and receive the following error: `Message=110802;An internal DMS error occurred that caused this operation to fail.
- Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Common.ExternalAccess.HdfsAccessException,
+ Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Common.ExternalAccess.HdfsAccessException,
Message: Java exception raised on call to HdfsBridge_CreateRecordReader. Java exception message:HdfsBridge::CreateRecordReader - Unexpected error encountered creating the record reader.: Error [HdfsBridge::CreateRecordReader - Unexpected error encountered creating the record reader.] occurred while accessing external file.....`
This article provides suggestions to troubleshoot common problems with the Azure
- Time = 12 bytes - Tinyint = 1 byte -- **Resolution**:
+- **Resolution**:
- Reduce column width to less than 1 MB. - Or use a bulk insert approach by disabling PolyBase. - ## Error message: The condition specified using HTTP conditional header(s) is not met - **Symptoms**: You use SQL query to pull data from Azure Synapse Analytics and receive the following error:
This article provides suggestions to troubleshoot common problems with the Azure
- **Resolution**: Run the same query in SQL Server Management Studio (SSMS) and check to see whether you get the same result. If you do, open a support ticket to Azure Synapse Analytics and provide your Azure Synapse Analytics server and database name. - ## Performance tier is low and leads to copy failure - **Symptoms**: You copy data into Azure SQL Database and receive the following error: `Database operation failed. Error message from database execution : ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.` - **Cause**: Azure SQL Database s1 has hit input/output (I/O) limits. -- **Resolution**: Upgrade the Azure SQL Database performance tier to fix the issue. -
+- **Resolution**: Upgrade the Azure SQL Database performance tier to fix the issue.
-## SQL table can't be found
+## SQL table can't be found
- **Symptoms**: You copy data from hybrid into an on-premises SQL Server table and receive the following error:`Cannot find the object "dbo.Contoso" because it does not exist or you do not have permissions.`
This article provides suggestions to troubleshoot common problems with the Azure
- **Resolution**: Switch to a more privileged SQL account. - ## Error message: String or binary data is truncated -- **Symptoms**: An error occurs when you copy data into an on-premises Azure SQL Server table.
+- **Symptoms**: An error occurs when you copy data into an on-premises SQL Server table.
-- **Cause**: The Cx SQL table schema definition has one or more columns with less length than expected.
+- **Cause**: The SQL table schema definition has one or more columns with less length than expected.
- **Resolution**: To resolve the issue, try the following:
- 1. To troubleshoot which rows have the issue, apply SQL sink [fault tolerance](./copy-activity-fault-tolerance.md), especially "redirectIncompatibleRowSettings."
+ 1. To troubleshoot which rows have the issue, apply SQL sink [fault tolerance](./copy-activity-fault-tolerance.md), especially `redirectIncompatibleRowSettings`.
- > [!NOTE]
- > Fault tolerance might require additional execution time, which could lead to higher costs.
+ > [!NOTE]
+ > Fault tolerance might require additional execution time, which could lead to higher costs.
- 2. Double-check the redirected data against the SQL table schema column length to see which columns need to be updated.
+ 1. Double-check the redirected data against the SQL table schema column length to see which columns need to be updated.
- 3. Update the table schema accordingly.
+ 1. Update the table schema accordingly.
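
The fault-tolerance redirection described in step 1 can be enabled on the copy activity with a fragment like this sketch (the linked service name and path are hypothetical):

```json
"typeProperties": {
    "enableSkipIncompatibleRow": true,
    "redirectIncompatibleRowSettings": {
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "path": "redirectcontainer/errorlog"
    }
}
```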
## Error code: FailedDbOperation
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Make sure the user configured in the Azure Synapse Analytics connector has 'CONTROL' permission on the target database while using PolyBase to load data. For more detailed information, refer to this [document](./connector-azure-sql-data-warehouse.md#required-database-permission).
+## Error code: Msg 105208
+
+- **Symptoms**: `Error code: Msg 105208, Level 16, State 1, Line 1 COPY statement failed with the following error when validating value of option 'FROM': '105200;COPY statement failed because the value for option 'FROM' is invalid.'`
+- **Cause**: Currently, ingesting data using the COPY command into an Azure Storage account that uses the new DNS partitioning feature results in an error. The DNS partition feature enables customers to create up to 5000 storage accounts per subscription.
+- **Resolution**: Provision a storage account in a subscription that does not use the new [Azure Storage DNS partition feature](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466) (currently in Public Preview).
+ ## Next steps For more troubleshooting help, try these resources:
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-azure-function-activity.md
Previously updated : 09/09/2021 Last updated : 09/01/2022+ # Azure Function activity in Azure Data Factory+ [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] The Azure Function activity allows you to run [Azure Functions](../azure-functions/functions-overview.md) in an Azure Data Factory or Synapse pipeline. To run an Azure Function, you must create a linked service connection. Then you can use the linked service with an activity that specifies the Azure Function that you plan to execute.
The Azure Function activity allows you to run [Azure Functions](../azure-functio
To use an Azure Function activity in a pipeline, complete the following steps: 1. Expand the Azure Function section of the pipeline Activities pane, and drag an Azure Function activity to the pipeline canvas.
-1. Select the new Azure Function activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details.
+
+2. Select the new Azure Function activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details.
:::image type="content" source="media/control-flow-azure-function-activity/azure-function-activity-configuration.png" alt-text="Shows the UI for an Azure Function activity.":::
-1. If you do not already have an Azure Function linked service defined, select New to create a new one. In the new Azure Function linked service pane, choose your existing Azure Function App url and provide a Function Key.
-
+3. If you do not already have an Azure Function linked service defined, select New to create a new one. In the new Azure Function linked service pane, choose your existing Azure Function App url and provide a Function Key.
+ :::image type="content" source="media/control-flow-azure-function-activity/new-azure-function-linked-service.png" alt-text="Shows the new Azure Function linked service creation pane.":::
-1. After selecting the Azure Function linked service, provide the function name and other details to complete the configuration.
+4. After selecting the Azure Function linked service, provide the function name and other details to complete the configuration.
## Azure Function linked service
The return type of the Azure function has to be a valid `JObject`. (Keep in mind
A function key provides secure access to an individual function, with each function having its own unique keys or a master key within the function app. A managed identity provides secure access to the entire function app. The user needs to provide a key to access the function. For more details, see the function documentation about the [Function access key](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#configuration)
-| **Property** | **Description** | **Required** |
-| | | |
-| Type | The type property must be set to: **AzureFunction** | Yes |
-| Function app url | URL for the Azure Function App. Format is `https://<accountname>.azurewebsites.net`. This URL is the value under **URL** section when viewing your Function App in the Azure portal | Yes |
-| Function key | Access key for the Azure Function. Click on the **Manage** section for the respective function, and copy either the **Function Key** or the **Host key**. Find out more here: [Azure Functions HTTP triggers and bindings](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) | Yes |
-| | | |
+| **Property** | **Description** | **Required** |
+| - | | |
+| Type | The type property must be set to: **AzureFunction** | Yes |
+| Function app url | URL for the Azure Function App. Format is `https://<accountname>.azurewebsites.net`. This URL is the value under **URL** section when viewing your Function App in the Azure portal | Yes |
+| Function key | Access key for the Azure Function. Click on the **Manage** section for the respective function, and copy either the **Function Key** or the **Host key**. Find out more here: [Azure Functions HTTP triggers and bindings](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) | Yes |
+| Authentication | The authentication method used for calling the Azure Function. The supported values are 'System-assigned managed identity' or 'anonymous'. | Yes |
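
Putting the table above together, an Azure Function linked service definition might look like the following sketch (the account name and key value are placeholders):

```json
{
    "name": "AzureFunctionLinkedService",
    "properties": {
        "type": "AzureFunction",
        "typeProperties": {
            "functionAppUrl": "https://<accountname>.azurewebsites.net",
            "functionKey": {
                "type": "SecureString",
                "value": "<function key>"
            }
        }
    }
}
```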
## Azure Function activity
-| **Property** | **Description** | **Allowed values** | **Required** |
-| | | | |
-| Name | Name of the activity in the pipeline | String | Yes |
-| Type | Type of activity is 'AzureFunctionActivity' | String | Yes |
-| Linked service | The Azure Function linked service for the corresponding Azure Function App | Linked service reference | Yes |
-| Function name | Name of the function in the Azure Function App that this activity calls | String | Yes |
-| Method | REST API method for the function call | String Supported Types: "GET", "POST", "PUT" | Yes |
-| Header | Headers that are sent to the request. For example, to set the language and type on a request: "headers": { "Accept-Language": "en-us", "Content-Type": "application/json" } | String (or expression with resultType of string) | No |
-| Body | Body that is sent along with the request to the function api method | String (or expression with resultType of string) or object.  | Required for PUT/POST methods |
-| | | | |
+| **Property** | **Description** | **Allowed values** | **Required** |
+| -- | | -- | -- |
+| Name | Name of the activity in the pipeline | String | Yes |
+| Type | Type of activity is 'AzureFunctionActivity' | String | Yes |
+| Linked service | The Azure Function linked service for the corresponding Azure Function App | Linked service reference | Yes |
+| Function name | Name of the function in the Azure Function App that this activity calls | String | Yes |
+| Method | REST API method for the function call | String Supported Types: "GET", "POST", "PUT" | Yes |
+| Header | Headers that are sent to the request. For example, to set the language and type on a request: "headers": { "Accept-Language": "en-us", "Content-Type": "application/json" } | String (or expression with resultType of string) | No |
+| Body | Body that is sent along with the request to the function api method | String (or expression with resultType of string) or object. | Required for PUT/POST methods |
-See the schema of the request payload in [Request payload schema](control-flow-web-activity.md#request-payload-schema) section.
+See the schema of the request payload in [Request payload schema](control-flow-web-activity.md#request-payload-schema) section.
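
The properties in the table above combine into an activity definition like the following sketch (the activity name, linked service name, and function are illustrative):

```json
{
    "name": "MyAzureFunctionActivity",
    "type": "AzureFunctionActivity",
    "linkedServiceName": {
        "referenceName": "AzureFunctionLinkedService",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "functionName": "HttpTriggerCSharp",
        "method": "POST",
        "headers": {
            "Content-Type": "application/json"
        },
        "body": {
            "name": "hello"
        }
    }
}
```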
## Routing and queries
The Azure Function Activity supports **routing**. For example, if your Azure Fun
The Azure Function Activity also supports **queries**. A query must be included as part of the `functionName`. For example, when the function name is `HttpTriggerCSharp` and the query that you want to include is `name=hello`, then you can construct the `functionName` in the Azure Function Activity as `HttpTriggerCSharp?name=hello`. This function can be parameterized so the value can be determined at runtime.
-## Timeout and long running functions
+## Timeout and long-running functions
Azure Functions times out after 230 seconds regardless of the `functionTimeout` setting you've configured in the settings. For more information, see [this article](../azure-functions/functions-versions.md#timeout). To work around this behavior, follow an async pattern or use Durable Functions. The benefit of Durable Functions is that they offer their own state-tracking mechanism, so you don't need to implement your own state-tracking.
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
Previously updated : 08/04/2022 Last updated : 09/01/2022 # Sink transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| Connector | Format | Dataset/inline | | | | -- |
-| [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[Delta](format-delta.md) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties) | ✓/- <br>✓/- <br>-/✓ <br>✓/- <br>✓/✓<br>✓/- |
+| [Azure Blob Storage](connector-azure-blob-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[Delta](format-delta.md) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br>[Parquet](format-parquet.md#mapping-data-flow-properties) | ✓/✓ <br>✓/✓ <br>-/✓ <br>✓/✓ <br>✓/✓<br>✓/✓ |
| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | ✓/- | | [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties) | ✓/- <br>✓/- <br>✓/- <br>✓/✓<br>✓/- |
-| [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br/>[Common Data Model](format-common-data-model.md#sink-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[Delta](format-delta.md) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties) | ✓/- <br>-/✓ <br>✓/- <br>-/✓ <br>✓/-<br>✓/✓ <br>✓/- |
+| [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br/>[Common Data Model](format-common-data-model.md#sink-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[Delta](format-delta.md) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties) | ✓/✓ <br>-/✓ <br>✓/✓ <br>-/✓ <br>✓/✓<br>✓/✓ <br>✓/✓ |
| [Azure Database for MySQL](connector-azure-database-for-mysql.md) | | ✓/✓ | | [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md) | | ✓/✓ | | [Azure Data Explorer](connector-azure-data-explorer.md) | | ✓/✓ |
dedicated-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/overview.md
#Customer intent: As an IT Pro, Decision maker I am looking for key storage capability within Azure Cloud that meets FIPS 140-2 Level 3 certification and that gives me exclusive access to the hardware. + # What is Azure Dedicated HSM? Azure Dedicated HSM is an Azure service that provides cryptographic key storage in Azure. Dedicated HSM meets the most stringent security requirements. It's the ideal solution for customers who require FIPS 140-2 Level 3-validated devices and complete and exclusive control of the HSM appliance.
Microsoft recognized a specific need for a unique set of customers. It is the on
## Is Azure Dedicated HSM right for you?
-Azure Dedicated HSM is a specialized service that addresses unique requirements for a specific type of large-scale organization. As a result, it's expected that the bulk of Azure customers will not fit the profile of use for this service. Many will find the Azure Key Vault or Azure Managed HSM service to be more appropriate and cost effective. For an comparison of offerings, see [Azure key management services](../security/fundamentals/key-management.md#azure-key-management-services)
-
-To help you decide if Azure Dedicated HSM is a fit for your requirements, we've identified the following criteria.
+Azure Dedicated HSM is a specialized service that addresses unique requirements for a specific type of large-scale organization. As a result, it's expected that the bulk of Azure customers will not fit the profile of use for this service. Many will find the Azure Key Vault or Azure Managed HSM service to be more appropriate and cost effective. To help you decide if it's a fit for your requirements, we've identified the following criteria.
### Best fit
Azure Dedicated HSM is most suitable for "lift-and-shift" scenarios that req
Azure Dedicated HSM is not a good fit for the following type of scenario: Microsoft cloud services that support encryption with customer-managed keys (such as Azure Information Protection, Azure Disk Encryption, Azure Data Lake Store, Azure Storage, Azure SQL Database, and Customer Key for Office 365) that are not integrated with Azure Dedicated HSM. > [!NOTE]
-> Customers must have a assigned Microsoft Account Manager and meet the monetary requirement of five million ($5M) USD or greater in overall committed Azure revenue annually to qualify for onboarding and use of Azure Dedicated HSM.
+> Customers must have an assigned Microsoft Account Manager and meet the monetary requirement of five million ($5M) USD or greater in overall committed Azure revenue annually to qualify for onboarding and use of Azure Dedicated HSM.
### It depends
dedicated-hsm Tutorial Deploy Hsm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/tutorial-deploy-hsm-cli.md
Title: Tutorial deploy into an existing virtual network using the Azure CLI - Azure Dedicated HSM | Microsoft Docs
+ Title: Tutorial: Deploy into an existing virtual network using the Azure CLI - Azure Dedicated HSM | Microsoft Docs
description: Tutorial showing how to deploy a dedicated HSM using the CLI into an existing virtual network documentationcenter: na
# Tutorial: Deploying HSMs into an existing virtual network using the Azure CLI
-Azure Dedicated HSM provides a physical device for sole customer use, with complete administrative control and full management responsibility. The use of physical devices creates the need for Microsoft to control device allocation to ensure capacity is managed effectively. As a result, within an Azure subscription, the Dedicated HSM service will not normally be visible for resource provisioning. Any Azure customer requiring access to the Dedicated HSM service, must first contact their Microsoft account executive to request registration for the Dedicated HSM service. Only once this process completes successfully will provisioning be possible.
+Azure Dedicated HSM provides a physical device for sole customer use, with complete administrative control and full management responsibility. The use of physical devices creates the need for Microsoft to control device allocation to ensure capacity is managed effectively. As a result, within an Azure subscription, the Dedicated HSM service won't normally be visible for resource provisioning. Any Azure customer requiring access to the Dedicated HSM service must first contact their Microsoft account executive to request registration for the Dedicated HSM service. Only once this process completes successfully will provisioning be possible.
This tutorial shows a typical provisioning process where:
This tutorial focuses on a pair of HSMs and required [ExpressRoute gateway](../e
## Prerequisites
-Azure Dedicated HSM is not currently available in the Azure portal. All interaction with the service will be via command-line or using PowerShell. This tutorial will use the command-line (CLI) interface in the Azure Cloud Shell. If you are new to the Azure CLI, follow getting started instructions here: [Azure CLI 2.0 Get Started](/cli/azure/get-started-with-azure-cli).
+Azure Dedicated HSM is not currently available in the Azure portal. All interaction with the service will be via command-line or using PowerShell. This tutorial will use the command-line (CLI) interface in the Azure Cloud Shell. If you're new to the Azure CLI, follow getting started instructions here: [Azure CLI 2.0 Get Started](/cli/azure/get-started-with-azure-cli).
Assumptions: -- You completed the Azure Dedicated HSM registration process-- You have been approved for use of the service. If not, contact your Microsoft account representative for details.-- You created a Resource Group for these resources and the new ones deployed in this tutorial will join that group.-- You already created the necessary virtual network, subnet, and virtual machines as per the diagram above and now want to integrate 2 HSMs into that deployment.
+- You have an assigned Microsoft Account Manager and meet the monetary requirement of five million ($5M) USD or greater in overall committed Azure revenue annually to qualify for onboarding and use of Azure Dedicated HSM.
+- You have been through the Azure Dedicated HSM registration process and been approved for use of the service. If not, then contact your Microsoft account representative for details.
+- You have created a Resource Group for these resources and the new ones deployed in this tutorial will join that group.
+- You have already created the necessary virtual network, subnet, and virtual machines as per the diagram above and now want to integrate 2 HSMs into that deployment.
-All instructions below assume that you have already navigated to the Azure portal and you have opened the Cloud Shell (select "\>\_" towards the top right of the portal).
+All instructions below assume that you've already navigated to the Azure portal and you have opened the Cloud Shell (select "\>\_" towards the top right of the portal).
## Provisioning a Dedicated HSM
The output looks something like the following:
} ```
-You will also now be able to see the resources using the [Azure resource explorer](https://resources.azure.com/). Once in the explorer, expand "subscriptions" on the left, expand your specific subscription for Dedicated HSM, expand "resource groups", expand the resource group you used and finally select the "resources" item.
+You'll also now be able to see the resources using the [Azure resource explorer](https://resources.azure.com/). Once in the explorer, expand "subscriptions" on the left, expand your specific subscription for Dedicated HSM, expand "resource groups", expand the resource group you used and finally select the "resources" item.
## Testing the Deployment
The ssh tool is used to connect to the virtual machine. The command will be simi
`ssh adminuser@hsmlinuxvm.westus.cloudapp.azure.com`
-The IP Address of the VM could also be used in place of the DNS name in the above command. If the command is successful, it will prompt for a password and you should enter that. Once logged on to the virtual machine, you can sign in to the HSM using the private IP address found in the portal for the network interface resource associated with the HSM.
+The IP Address of the VM could also be used in place of the DNS name in the above command. If the command is successful, it will prompt for a password, and you should enter that. Once logged on to the virtual machine, you can sign in to the HSM using the private IP address found in the portal for the network interface resource associated with the HSM.
![components list](media/tutorial-deploy-hsm-cli/resources.png)
When you have the correct IP address, run the following command substituting tha
`ssh tenantadmin@10.0.2.4`
-If successful you will be prompted for a password. The default password is PASSWORD and the HSM will first ask you to change your password so set a strong password and use whatever mechanism your organization prefers to store the password and prevent loss.
+If successful you'll be prompted for a password. The default password is PASSWORD and the HSM will first ask you to change your password so set a strong password and use whatever mechanism your organization prefers to store the password and prevent loss.
>[!IMPORTANT]
>If you lose this password, the HSM will have to be reset and that means losing your keys.
-When you are connected to the HSM using ssh, run the following command to ensure the HSM is operational.
+When you're connected to the HSM using ssh, run the following command to ensure the HSM is operational.
`hsm show`
The output should look as shown on the image below:
![Screenshot shows output in PowerShell window.](media/tutorial-deploy-hsm-cli/hsm-show-output.png)
-At this point, you have allocated all resources for a highly available, two HSM deployment and validated access and operational state. Any further configuration or testing involves more work with the HSM device itself. For this, you should follow the instructions in the Thales Luna 7 HSM Administration Guide chapter 7 to initialize the HSM and create partitions. All documentation and software are available directly from Thales for download once you are registered in the [Thales customer support portal](https://supportportal.thalesgroup.com/csm) and have a Customer ID. Download Client Software version 7.2 to get all required components.
+At this point, you've allocated all resources for a highly available, two HSM deployment and validated access and operational state. Any further configuration or testing involves more work with the HSM device itself. For this, you should follow the instructions in the Thales Luna 7 HSM Administration Guide chapter 7 to initialize the HSM and create partitions. All documentation and software are available directly from Thales for download once you are registered in the [Thales customer support portal](https://supportportal.thalesgroup.com/csm) and have a Customer ID. Download Client Software version 7.2 to get all required components.
## Delete or clean up resources
-If you have finished with just the HSM device, then it can be deleted as a resource and returned to the free pool. The obvious concern when doing this is any sensitive customer data that is on the device. The best way to "zeroize" a device is to get the HSM admin password wrong 3 times (note: this is not appliance admin, it's the actual HSM admin). As a safety measure to protect key material, the device cannot be deleted as an Azure resource until it is in the zeroized state.
+If you've finished with just the HSM device, then it can be deleted as a resource and returned to the free pool. The obvious concern when doing this is any sensitive customer data that is on the device. The best way to "zeroize" a device is to get the HSM admin password wrong three times (note: this is not appliance admin, it's the actual HSM admin). As a safety measure to protect key material, the device can't be deleted as an Azure resource until it is in the zeroized state.
> [!NOTE]
> If you have an issue with any Thales device configuration, you should contact [Thales customer support](https://supportportal.thalesgroup.com/csm).
-If you have finished with all resources in this resource group, then you can remove them all with the following command:
+If you've finished with all resources in this resource group, then you can remove them all with the following command:
```azurecli
az group delete \
```
## Next steps
-After completing the steps in the tutorial, Dedicated HSM resources are provisioned and you have a virtual network with necessary HSMs and further network components to enable communication with the HSM. You are now in a position to compliment this deployment with more resources as required by your preferred deployment architecture. For more information on helping plan your deployment, see the Concepts documents.
+After completing the steps in the tutorial, Dedicated HSM resources are provisioned and you have a virtual network with necessary HSMs, and further network components to enable communication with the HSM. You're now in a position to complement this deployment with more resources as required by your preferred deployment architecture. For more information on helping plan your deployment, see the Concepts documents.
A design with two HSMs in a primary region addressing availability at the rack level, and two HSMs in a secondary region addressing regional availability is recommended.

* [High Availability](high-availability.md)
dedicated-hsm Tutorial Deploy Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/tutorial-deploy-hsm-powershell.md
Title: Tutorial deploy into an existing virtual network using PowerShell - Azure Dedicated HSM | Microsoft Docs
+ Title: Tutorial – Deploy into an existing virtual network using PowerShell - Azure Dedicated HSM | Microsoft Docs
description: Tutorial showing how to deploy a dedicated HSM using PowerShell into an existing virtual network documentationcenter: na
# Tutorial – Deploying HSMs into an existing virtual network using PowerShell
-The Azure Dedicated HSM Service provides a physical device for sole customer use, with complete administrative control and full management responsibility. Due to providing physical hardware, Microsoft must control how those devices are allocated to ensure capacity is managed effectively. As a result, within an Azure subscription, the Dedicated HSM service will not normally be visible for resource provisioning. Any Azure customer requiring access to the Dedicated HSM service, must first contact their Microsoft account executive to request registration for the Dedicated HSM service. Only once this process completes successfully will provisioning be possible.
+The Azure Dedicated HSM Service provides a physical device for sole customer use, with complete administrative control and full management responsibility. Due to providing physical hardware, Microsoft must control how those devices are allocated to ensure capacity is managed effectively. As a result, within an Azure subscription, the Dedicated HSM service won't normally be visible for resource provisioning. Any Azure customer requiring access to the Dedicated HSM service, must first contact their Microsoft account executive to request registration for the Dedicated HSM service. Only once this process completes successfully will provisioning be possible.
This tutorial aims to show a typical provisioning process where:

- A customer has a virtual network already
This tutorial focuses on a pair of HSMs and the required [ExpressRoute gateway](
## Prerequisites
-Azure Dedicated HSM is not currently available in the Azure portal, therefore all interaction with the service will be via command-line or using PowerShell. This tutorial will use PowerShell in the Azure Cloud Shell. If you are new to PowerShell, follow getting started instructions here: [Azure PowerShell Get Started](/powershell/azure/get-started-azureps).
+Azure Dedicated HSM is not currently available in the Azure portal, therefore all interaction with the service will be via command-line or using PowerShell. This tutorial will use PowerShell in the Azure Cloud Shell. If you're new to PowerShell, follow getting started instructions here: [Azure PowerShell Get Started](/powershell/azure/get-started-azureps).
Assumptions:
+- You have an assigned Microsoft Account Manager and meet the monetary requirement of five million ($5M) USD or greater in overall committed Azure revenue annually to qualify for onboarding and use of Azure Dedicated HSM.
- You have been through the Azure Dedicated HSM registration process and been approved for use of the service. If not, then contact your Microsoft account representative for details.
- You have created a Resource Group for these resources and the new ones deployed in this tutorial will join that group.
- You have already created the necessary virtual network, subnet, and virtual machines as per the diagram above and now want to integrate 2 HSMs into that deployment.
-All instructions below assume that you have already navigated to the Azure portal and you have opened the Cloud Shell (select "\>\_" towards the top right of the portal).
+All instructions below assume that you've already navigated to the Azure portal and you've opened the Cloud Shell (select "\>\_" towards the top right of the portal).
## Provisioning a Dedicated HSM
Provisioning the HSMs and integrating into an existing virtual network via [Expr
### Validating Feature Registration
-As mentioned above, any provisioning activity requires that the Dedicated HSM service is registered for your subscription. To validate that, run the following PowerShell command in the Azure portal cloud shell.
+As mentioned above, any provisioning activity requires that the Dedicated HSM service is registered for your subscription. To validate that, run the following PowerShell command in the Azure portal Cloud Shell.
```powershell
Get-AzProviderFeature -ProviderNamespace Microsoft.HardwareSecurityModules -FeatureName AzureDedicatedHsm
```
-The command should return a status of "Registered" (as shown below) before you proceed any further. If you are not registered for this service please contact your Microsoft account representative.
+The command should return a status of "Registered" (as shown below) before you proceed any further. If you're not registered for this service, please contact your Microsoft account representative.
![subscription status](media/tutorial-deploy-hsm-powershell/subscription-status.png)
An example of these changes is as follows:
} ```
-The associated Resource Manager template file will create 6 resources with this information:
+The associated Resource Manager template file will create six resources with this information:
- A subnet for the HSMs in the specified VNET
- A subnet for the virtual network gateway
The associated Resource Manager template file will create 6 resources with this
- An HSM in stamp 1
- An HSM in stamp 2
-Once parameter values are set, the files need to be uploaded to Azure portal Cloud Shell file share for use. In the Azure portal, click the "\>\_" Cloud Shell symbol top right and this will make the bottom portion of the screen a command environment. The options for this are BASH and PowerShell and you should select BASH if not already set.
+Once parameter values are set, the files need to be uploaded to Azure portal Cloud Shell file share for use. In the Azure portal, click the "\>\_" Cloud Shell symbol top right and this will make the bottom portion of the screen a command environment. The options for this are BASH and PowerShell and you should select BASH if not already set.
The command shell has an upload/download option on the toolbar and you should select this to upload the template and parameter files to your file share:

![file share](media/tutorial-deploy-hsm-powershell/file-share.png)
-Once the files are uploaded, you are ready to create resources.
+Once the files are uploaded, you're ready to create resources.
Prior to creating new HSM resources, there are some prerequisite resources you should ensure are in place. You must have a virtual network with subnet ranges for compute, HSMs, and gateway. The following commands serve as an example of what would create such a virtual network.

```powershell
Get-AzResource -Resourceid /subscriptions/$subId/resourceGroups/$resourceGroupNa
![provision status](media/tutorial-deploy-hsm-powershell/progress-status2.png)
-You will also now be able to see the resources using the [Azure resource explorer](https://resources.azure.com/). Once in the explorer, expand "subscriptions" on the left, expand your specific subscription for Dedicated HSM, expand "resource groups", expand the resource group you used and finally select the "resources" item.
+You'll also now be able to see the resources using the [Azure resource explorer](https://resources.azure.com/). Once in the explorer, expand "subscriptions" on the left, expand your specific subscription for Dedicated HSM, expand "resource groups", expand the resource group you used and finally select the "resources" item.
## Testing the Deployment
When you have the IP address, run the following command:
`ssh tenantadmin@<ip address of HSM>`
-If successful you will be prompted for a password. The default password is PASSWORD. The HSM will ask you to change your password so set a strong password and use whatever mechanism your organization prefers to store the password and prevent loss.
+If successful you'll be prompted for a password. The default password is PASSWORD. The HSM will ask you to change your password so set a strong password and use whatever mechanism your organization prefers to store the password and prevent loss.
>[!IMPORTANT]
>If you lose this password, the HSM will have to be reset and that means losing your keys.
-When you are connected to the HSM device using ssh, run the following command to ensure the HSM is operational.
+When you're connected to the HSM device using ssh, run the following command to ensure the HSM is operational.
`hsm show`
The output should look like the image shown below:
![Screenshot that shows the output from the hsm show command.](media/tutorial-deploy-hsm-powershell/output.png)
-At this point, you have allocated all resources for a highly available, two HSM deployment and validated access and operational state. Any further configuration or testing involves more work with the HSM device itself. For this, you should follow the instructions in the Thales Luna 7 HSM Administration Guide chapter 7 to initialize the HSM and create partitions. All documentation and software are available directly from Thales for download once you are registered in the [Thales customer support portal](https://supportportal.thalesgroup.com/csm) and have a Customer ID. Download Client Software version 7.2 to get all required components.
+At this point, you've allocated all resources for a highly available, two HSM deployment and validated access and operational state. Any further configuration or testing involves more work with the HSM device itself. For this, you should follow the instructions in the Thales Luna 7 HSM Administration Guide chapter 7 to initialize the HSM and create partitions. All documentation and software are available directly from Thales for download once you're registered in the [Thales customer support portal](https://supportportal.thalesgroup.com/csm) and have a Customer ID. Download Client Software version 7.2 to get all required components.
## Delete or clean up resources
-If you have finished with just the HSM device, then it can be deleted as a resource and returned to the free pool. The obvious concern when doing this is any sensitive customer data that is on the device. The best way to "zeroize" a device is to get the HSM admin password wrong 3 times (note: this is not appliance admin, it's the actual HSM admin). As a safety measure to protect key material, the device cannot be deleted as an Azure resource until it is in the zeroized state.
+If you've finished with just the HSM device, then it can be deleted as a resource and returned to the free pool. The obvious concern when doing this is any sensitive customer data that is on the device. The best way to "zeroize" a device is to get the HSM admin password wrong three times (note: this is not appliance admin, it's the actual HSM admin). As a safety measure to protect key material, the device can't be deleted as an Azure resource until it is in the zeroized state.
> [!NOTE]
> If you have an issue with any Thales device configuration, you should contact [Thales customer support](https://supportportal.thalesgroup.com/csm).
-If you want to remove the HSM resource in Azure you can use the following command replacing the "$" variables with your unique parameters:
+If you want to remove the HSM resource in Azure, you can use the following command replacing the "$" variables with your unique parameters:
```powershell
Remove-AzResource -Resourceid /subscriptions/$subId/resourceGroups/$resourceGrou
## Next steps
-After completing the steps in the tutorial, Dedicated HSM resources are provisioned and available in your virtual network. You are now in a position to compliment this deployment with more resources as required by your preferred deployment architecture. For more information on helping plan your deployment, see the Concepts documents. A design with two HSMs in a primary region addressing availability at the rack level, and two HSMs in a secondary region addressing regional availability is recommended. The template file used in this tutorial can easily be used as a basis for a two HSM deployment but needs to have its parameters modified to meet your requirements.
+After completing the steps in the tutorial, Dedicated HSM resources are provisioned and available in your virtual network. You're now in a position to complement this deployment with more resources as required by your preferred deployment architecture. For more information on helping plan your deployment, see the Concepts documents. A design with two HSMs in a primary region addressing availability at the rack level, and two HSMs in a secondary region addressing regional availability is recommended. The template file used in this tutorial can easily be used as a basis for a two HSM deployment but needs to have its parameters modified to meet your requirements.
* [High Availability](high-availability.md)
* [Physical Security](physical-security.md)
defender-for-iot How To View Information Per Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-information-per-zone.md
Last updated 06/12/2022
frontdoor How To Add Security Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-security-headers.md
Title: Configure security headers with Azure Front Door Standard/Premium (Preview) Rule Set
+ Title: Configure security headers with Azure Front Door Standard/Premium Rule Set
description: This article provides guidance on how to use rule set to configure security headers.
Last updated 02/18/2021
-# Configure security headers with Azure Front Door Standard/Premium (Preview) Rule Set
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+# Configure security headers with Azure Front Door Standard/Premium Rule Set
This article shows how to implement security headers to prevent browser-based vulnerabilities like HTTP Strict-Transport-Security (HSTS), X-XSS-Protection, Content-Security-Policy, or X-Frame-Options. Security-based attributes can also be defined with cookies. The following example shows you how to add a Content-Security-Policy header to all incoming requests that match the path in the Route. Here, we only allow scripts from our trusted site, **https://apiphany.portal.azure-api.net**, to run on our application.
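As a rough sketch of the same idea outside the portal, a response-header rule might be created with the Azure CLI. This is an illustrative example, not a procedure from this article: the resource group, profile, rule set, and rule names below are placeholders, and it assumes an existing Front Door Standard/Premium profile.

```azurecli
# Hypothetical example: append a Content-Security-Policy response header via a rule set.
# All names (myResourceGroup, myFrontDoorProfile, SecurityHeaders, AddCSPHeader) are placeholders.
az afd rule create \
  --resource-group myResourceGroup \
  --profile-name myFrontDoorProfile \
  --rule-set-name SecurityHeaders \
  --rule-name AddCSPHeader \
  --order 1 \
  --action-name ModifyResponseHeader \
  --header-action Append \
  --header-name Content-Security-Policy \
  --header-value "script-src https://apiphany.portal.azure-api.net"
```

The rule set must then be associated with a route for the header to be applied to matching requests.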
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites

* Before you can configure security headers, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](create-front-door-portal.md).
guides Azure Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/developer/azure-developer-guide.md
Title: Get started guide for developers on Azure | Microsoft Docs
description: This article provides essential information for developers looking to get started using the Microsoft Azure platform for their development needs.
Last updated 11/18/2019
hdinsight Enable Private Link On Kafka Rest Proxy Hdi Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enable-private-link-on-kafka-rest-proxy-hdi-cluster.md
As a prerequisite, complete the steps mentioned in [Enable Private Link on an Az
| Config | Value |
| -- | -- |
- | Name | hdi-privlink-cluster-1 |
+ | Name | hdi-prilink-cluster-restproxy |
| Resource type | Microsoft.Network/privatelinkServices |
| Resource | kafkamanagementnode-* (This value should match the HDI deployment ID of your cluster, for example kafkamanagementnode 4eafe3a2a67e4cd88762c22a55fe4654) |
| Virtual network | hdi-privlink-client-vnet |
As a prerequisite, complete the steps mentioned in [Enable Private Link on an Az
| Config | Value |
| -- | -- |
- | Name | YourPrivatelinkClusterName-1 |
+ | Name | hdi-prilink-cluster-restproxy |
| Type | A - Alias record to IPv4 address |
| TTL | 1 |
| TTL unit | Hours |
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
description: Learn how to use Azure Monitor logs to monitor jobs running in an H
Previously updated : 08/01/2021 Last updated : 09/02/2022

# Use Azure Monitor logs to monitor HDInsight clusters
If you don't have an Azure subscription, [create a free account](https://azure.m
#### [New Azure monitor experience](#tab/new)

> [!Important]
-> New Azure Monitor experience is only available in East US and West Europe as a preview feature.
+> The new Azure Monitor experience is available in all regions as a preview feature.
> ## Prerequisites
If you don't have an Azure subscription, [create a free account](https://azure.m
* If you want to use the Azure CLI and haven't yet installed it, see [Install the Azure CLI](/cli/azure/install-azure-cli).

> [!NOTE]
-> New Azure Monitor experience is only available in East US and West Europe as a preview feature. It is recommended to place both the HDInsight cluster and the Log Analytics workspace in the same region for better performance. Azure Monitor logs is not available in all Azure regions.
+> The new Azure Monitor experience is available in all regions as a preview feature. It is recommended to place both the HDInsight cluster and the Log Analytics workspace in the same region for better performance.
> ## Enable Azure Monitor using the portal
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
Previously updated : 07/21/2022 Last updated : 09/02/2022

# Log Analytics migration guide for Azure HDInsight clusters
If you're using a cluster created after mid-September 2020, you'll see the new p
:::image type="content" source="./media/log-analytics-migration/hdinsight-classic-integration.png" alt-text="Screenshot that shows the link to access the classic integration." border="false":::
-Creating new clusters with classic Azure Monitor integration is not available after July 1, 2021.
+Creating new clusters with classic Azure Monitor integration won't be available after January 1, 2023.
## Release and support timeline
healthcare-apis Dicom Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-register-application.md
+
+ Title: Register a client application for the DICOM service in Azure Active Directory
+description: How to register a client application for the DICOM service in Azure Active Directory.
+Last updated : 09/02/2022
+# Register a client application for the DICOM service in Azure Active Directory
+
+In this article, you'll learn how to register a client application for the DICOM service. For more information, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+
+## Register a new application
+
+1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**.
+2. Select **App registrations**.
+[ ![Screenshot of new app registration window.](media/register-application-one.png) ](media/register-application-one.png#lightbox)
+3. Select **New registration**.
+4. For Supported account types, select **Accounts in this organization directory only**. Leave the other options as is.
+[ ![Screenshot of new registration account options.](media/register-application-two.png) ](media/register-application-two.png#lightbox)
+5. Select **Register**.
+
+## Application ID (client ID)
+
+After registering a new application, you can find the application (client) ID and Directory (tenant) ID from the overview menu option. Make a note of the values for use later.
+
+[ ![Screenshot of client ID overview panel.](media/register-application-three.png) ](media/register-application-three.png#lightbox)
+
+## Authentication setting: confidential vs. public
+
+Select **Authentication** to review the settings. The default value for **Allow public client flows** is "No".
+
+If you keep this default value, the application registration is a **confidential client application** and a certificate or secret is required.
+
+[ ![Screenshot of confidential client application.](media/register-application-five.png) ](media/register-application-five.png#lightbox)
+
+If you change the default value to "Yes" for the "Allow public client flows" option in the advanced setting, the application registration is a **public client application** and a certificate or secret isn't required. The "Yes" value is useful when you want to use the client application in your mobile app or a JavaScript app where you don't want to store any secrets.
+
+For tools that require a redirect URL, select **Add a platform** to configure the platform.
+
+>[!NOTE]
+>
+>For Postman, select **Mobile and desktop applications**. Enter "https://www.getpostman.com/oauth2/callback" in the **Custom redirect URIs** section. Select the **Configure** button to save the setting.
+
+[ ![Screenshot of configure other services.](media/register-application-five-bravo.png) ](media/register-application-five-bravo.png#lightbox)
+
+## Certificates & secrets
+
+Select **Certificates & Secrets** and select **New Client Secret**.
+
+Add and then copy the secret value.
+
+[ ![Screenshot of certificates and secrets.](media/register-application-six.png) ](media/register-application-six.png#lightbox)
+
+Optionally, you can upload a certificate (public key) and use the Certificate ID, a GUID value associated with the certificate. For testing purposes, you can create a self-signed certificate using tools such as the PowerShell command line, `New-SelfSignedCertificate`, and then export the certificate from the certificate store.
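For a quick test certificate without PowerShell, OpenSSL can produce an equivalent self-signed certificate. This is an illustrative sketch, not from this article; the subject name and file names are placeholders.

```shell
# Generate a throwaway self-signed certificate for testing; the CN is a placeholder.
# -nodes leaves the private key unencrypted, which is acceptable only for tests.
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
  -subj "/CN=dicom-client-test" \
  -keyout key.pem -out cert.pem

# Inspect the subject of the generated certificate
openssl x509 -in cert.pem -noout -subject
```

Upload the public certificate (`cert.pem`, converted to `.cer` if needed) in the Certificates & secrets blade, keeping the private key out of the registration.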
+
+## API permissions
+
+The following steps are required for the DICOM service. In addition, user access permissions or role assignments for the Azure Health Data Services are managed through RBAC. For more details, visit [Configure Azure RBAC for Azure Health Data Services](./../configure-azure-rbac.md).
+
+1. Select the **API permissions** blade.
+
+ [ ![Screenshot of API permission page with Add a permission button highlighted.](./media/dicom-add-apis-permissions.png) ](./media/dicom-add-apis-permissions.png#lightbox)
+
+2. Select **Add a permission**.
+
+ Add a permission to the DICOM service by searching for **Azure API for DICOM** under **APIs my organization** uses.
+
+ [ ![Screenshot of Search API permissions page with the APIs my organization uses tab selected.](./media/dicom-search-apis-permissions.png) ](./media/dicom-search-apis-permissions.png#lightbox)
+
+    The search result for Azure API for DICOM will only appear if you've already deployed the DICOM service in the workspace.
+
+ If you're referencing a different resource application, select your DICOM API Resource Application Registration that you created previously under **APIs my organization**.
+
+3. Select scopes (permissions) that the confidential client application will ask for on behalf of a user. Select **Dicom.ReadWrite**, and then select **Add permissions**.
+
+ [ ![Screenshot of scopes (permissions) that the client application will ask for on behalf of a user.](./media/dicom-select-scopes-new.png) ](./media/dicom-select-scopes-new.png#lightbox)
+
+Your application registration is now complete.
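To illustrate how the registered values are used (this usage step is not part of the registration itself), a confidential client can request a token with the client credentials flow. The tenant ID, client ID, and secret below are the placeholders for the values noted earlier, and the scope shown is an assumption based on the DICOM service's default audience:

```shell
# Hypothetical token request; replace the <...> placeholders with your recorded values.
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<application-client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=https://dicom.healthcareapis.azure.com/.default"
```

The JSON response contains an `access_token` that can be presented as a bearer token to the DICOM service.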
+
+## Next steps
+
+In this article, you learned how to register a client application for the DICOM service in the Azure AD. Additionally, you learned how to add a secret and API permissions to Azure Health Data Services. For more information about DICOM service, see
+
+>[!div class="nextstepaction"]
+>[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
Here's a sample configuration file for FHIR R4:
For detailed information on the settings within the configuration file, visit [here](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#configuration-file-format).
-## Using the `$export` endpoint for de-identifying data
-
-The API call below demonstrates how to form a request for de-id on export from the FHIR service.
-
-```
-GET https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>>
-```
-
-You will need to create a container for the de-identified export in your ADLS Gen2 account and specify the `<<container_name>>` in the API request as shown above. Additionally, you will need to place the JSON config file with the anonymization rules inside the container and specify the `<<config file name>>` in the API request (see above).
-
+## Manage the configuration file in a storage account
+You'll need to create a container for the de-identified export in your ADLS Gen2 account and specify the `<<container_name>>` in the API request, as shown below. Additionally, you'll need to place the JSON config file with the anonymization rules inside the container and specify the `<<config file name>>` in the API request.
> [!Note]
> It is common practice to name the container `anonymization`. The JSON file within the container is often named `anonymizationConfig.json`.
+## Manage the configuration file in ACR
+It's recommended that you host the export configuration files on Azure Container Registry (ACR). The steps are similar to [hosting templates in ACR for $convert-data](convert-data.md#host-your-own-templates):
+1. Push the configuration files to your Azure Container Registry.
+2. Enable managed identity on your FHIR service instance.
+3. Grant the FHIR service's managed identity access to the ACR.
+4. Register the ACR servers in the FHIR service. You can use the portal to open the **Artifacts** blade under the **Transform and transfer data** section to add the ACR server.
+5. Optionally, configure the ACR firewall for secure access.
+
+## Using the `$export` endpoint for de-identifying data
+ `https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfigCollectionReference=<<ACR image reference>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>>`
> [!Note]
> Right now the FHIR service only supports de-identified export at the system level (`$export`).

|Query parameter | Example |Optionality| Description|
|---|---|---|---|
-| `anonymizationConfig` |`anonymizationConfig.json`|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named `anonymization` within the configured ADLS Gen2 account. |
-| `anonymizationConfigEtag`|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag from the blob property using Azure Storage Explorer.|
+| _\_container_|exportContainer|Required|Name of container within the configured storage account where the data will be exported. |
+| _\_anonymizationConfigCollectionReference_|"myacr.azurecr.io/deidconfigs:default"|Optional|Reference to an OCI image on ACR containing de-identification configuration files for de-identified export (such as stu3-config.json, r4-config.json). The ACR server of the image should be registered within the FHIR service. (Format: `<RegistryServer>/<imageName>@<imageDigest>` or `<RegistryServer>/<imageName>:<imageTag>`) |
+| _\_anonymizationConfig_ |`anonymizationConfig.json`|Required|Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). If _\_anonymizationConfigCollectionReference_ is provided, the service searches for and uses this file from the specified image. Otherwise, it searches for and uses this file inside a container named **anonymization** within the configured ADLS Gen2 account.|
+| _\_anonymizationConfigEtag_|"0x8D8494A069489EC"|Optional|Etag of the configuration file, which can be obtained from the blob property in Azure Storage Explorer. Specify this parameter only if the configuration file is stored in an Azure storage account. If you use ACR to host the configuration file, don't include this parameter.|
+ > [!IMPORTANT]
+ > Both the raw export and de-identified export operations write to the same Azure storage account specified in the export configuration for the FHIR service. If you need multiple de-identification configurations, it's recommended that you create a different container for each configuration and manage user access at the container level.
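Putting the query parameters together, here's a minimal Python sketch (standard library only) of how a de-identified export request URL is assembled. The base URL, container, and config file names are placeholders, not values from an actual deployment; the real request also needs a bearer token and, per the FHIR bulk data pattern, `Accept: application/fhir+json` and `Prefer: respond-async` headers.

```python
from urllib.parse import urlencode

# Placeholder values; substitute your FHIR service URL, container, and config file.
base_url = "https://example.fhir.azurehealthcareapis.com"
params = {
    "_container": "exportContainer",
    "_anonymizationConfig": "anonymizationConfig.json",
}

# Assemble the system-level $export request URL with the de-id parameters.
export_url = f"{base_url}/$export?{urlencode(params)}"
print(export_url)
```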
In this article, you've learned how to set up and use the de-identified export f
>[!div class="nextstepaction"] >[Export data](export-data.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Previously updated : 08/16/2022 Last updated : 08/30/2022
Many functions are available when using **JmesPath** as the expression language. Besides the functions available as part of the JmesPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-use-device-mappings.md) during the device message [normalization](iot-data-flow.md#normalize) process.
-> [!NOTE]
->
+> [!TIP]
> For more information on JmesPath functions, see the JmesPath [specification](https://jmespath.org/specification.html#built-in-functions).
->[!TIP]
->
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
- ## Function signature Each function has a signature that follows the JmesPath specification. This signature can be represented as:
return_type function_name(type $argname)
The signature indicates the valid types for the arguments. If an invalid type is passed in for an argument, an error will occur. > [!NOTE]
->
> When math-related functions are performed, the end result **must** be able to fit within a C# [long](/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#characteristics-of-the-integral-types) value. If the end result is unable to fit within a C# long value, then a mathematical error will occur.

## Exception handling
healthcare-apis How To Use Iot Jsonpath Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-jsonpath-content-mappings.md
Title: IotJsonPathContentTemplate mappings in MedTech service Device mappings - Azure Health Data Services
-description: This article describes how to use IotJsonPathContentTemplate mappings with MedTech service Device mappings templates.
+ Title: IotJsonPathContentTemplate mappings in MedTech service device mapping - Azure Health Data Services
+description: This article describes how to use IotJsonPathContentTemplate mappings with MedTech service device mapping.
Previously updated : 03/22/2022 Last updated : 09/02/2022 # How to use IotJsonPathContentTemplate mappings
+This article describes how to use IotJsonPathContentTemplate mappings with the MedTech service [device mapping](how-to-use-device-mappings.md).
+ > [!TIP] > Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-This article describes how to use IoTJsonPathContentTemplate mappings with the MedTech service Device mappings templates.
- ## IotJsonPathContentTemplate The IotJsonPathContentTemplate is similar to the JsonPathContentTemplate except the `DeviceIdExpression` and `TimestampExpression` aren't required.
When you're using these SDKs, the device identity and the timestamp of the messa
>[!IMPORTANT] >Make sure that you're using a device identifier from Azure Iot Hub or Azure IoT Central that is registered as an identifier for a device resource on the destination FHIR service.
-If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContentTemplate, assuming that you're using custom properties in the message body for the device identity or measurement timestamp
+If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContentTemplate, assuming that you're using custom properties in the message body for the device identity or measurement timestamp.
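To make this concrete: with the IoT Hub Device SDKs, the device identity and timestamp travel in message metadata rather than in the body. A rough Python sketch follows; the `Properties`/`SystemProperties` nesting mirrors the example messages in this article, and the exact property names should be treated as illustrative.

```python
# Sample device message shape, following the examples in this article.
message = {
    "Body": {"heartRate": "78"},
    "Properties": {"iothub-creation-time-utc": "2021-02-01T22:46:01.8750000Z"},
    "SystemProperties": {"iothub-connection-device-id": "device123"},
}

# With IotJsonPathContentTemplate, identity and timestamp come from metadata,
# so DeviceIdExpression and TimestampExpression aren't required in the template.
device_id = message["SystemProperties"]["iothub-connection-device-id"]
timestamp = message["Properties"]["iothub-creation-time-utc"]
print(device_id, timestamp)
```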
> [!NOTE] > When using `IotJsonPathContentTemplate`, the `TypeMatchExpression` should resolve to the entire message as a JToken. For more information, see the following examples:
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
*Message* ```json+ { "Body": { "heartRate": "78"
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
"iothub-connection-device-id" : "device123" } }+ ``` *Template* ```json
- "templateType": "JsonPathContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@Body.heartRate)]",
- "deviceIdExpression": "$.deviceId",
- "timestampExpression": "$.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.Body.heartRate",
- "valueName": "hr"
- }
- ]
- }
-}
+{
+ "templateType": "IotJsonPathContentTemplate",
+ "template": {
+ "typeName": "heartrate",
+    "typeMatchExpression": "$..[?(@Body.heartRate)]",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.Body.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+}
+ ``` **Blood pressure**
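The heart rate template above reads its value from the path `$.Body.heartRate`. As a rough illustration of what that extraction yields against the sample message, here's a few lines of Python; this isn't the MedTech normalization engine, and the `resolve` helper is hypothetical, handling only simple dotted paths.

```python
# Sample message shape from the heart rate example above (trimmed).
message = {"Body": {"heartRate": "78"}}

def resolve(doc, path):
    """Resolve a simple dotted JSONPath such as '$.Body.heartRate'.

    Handles only the '$.a.b' form; real JSONPath supports far more.
    """
    node = doc
    for part in path.lstrip("$.").split("."):
        node = node[part]
    return node

print(resolve(message, "$.Body.heartRate"))  # prints: 78
```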
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
*Message* ```json+ { "Body": { "systolic": "123",
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
"iothub-connection-device-id" : "device123" } }+ ``` *Template* ```json+ {
+ "templateType": "IotJsonPathContentTemplate",
+ "template": {
"typeName": "bloodpressure", "typeMatchExpression": "$..[?(@Body.systolic && @Body.diastolic)]", "values": [
- {
+ {
"required": "true", "valueExpression": "$.Body.systolic", "valueName": "systolic"
- },
- {
+ },
+ {
"required": "true", "valueExpression": "$.Body.diastolic", "valueName": "diastolic"
- }
+ }
]
+ }
}+ ``` > [!TIP]
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
## Next steps
-In this article, you learned how to use Device mappings. To learn how to use FHIR destination mappings, see
+In this article, you learned how to use IotJsonPathContentTemplate mappings with the MedTech service device mapping. To learn how to use MedTech service FHIR destination mapping, see
>[!div class="nextstepaction"]
->[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
+>[How to use FHIR destination mapping](how-to-use-fhir-mappings.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application.md
Title: Register a client application in Azure Active Directory for the Azure Health Data Services description: How to register a client application in the Azure AD and how to add a secret and API permissions to the Azure Health Data Services--+ + Previously updated : 06/06/2022- Last updated : 09/02/2022+ # Register a client application in Azure Active Directory
The following steps are required for the DICOM service, but optional for the FHI
1. Select the **API permissions** blade.
- [ ![Add API permissions](dicom/media/dicom-add-apis-permissions.png) ](dicom/media/dicom-add-apis-permissions.png#lightbox)
+ [ ![Screenshot of API permission page with Add a permission button highlighted.](dicom/media/dicom-add-apis-permissions.png) ](dicom/media/dicom-add-apis-permissions.png#lightbox)
2. Select **Add a permission**.
- If you're using Azure Health Data Services, you'll add a permission to the DICOM service by searching for **Azure Healthcare APIs** under **APIs my organization** uses.
+   If you're using Azure Health Data Services, you'll add a permission to the DICOM service by searching for **Azure API for DICOM** under the **APIs my organization uses** tab.
- [ ![Search API permissions](dicom/media/dicom-search-apis-permissions.png) ](dicom/media/dicom-search-apis-permissions.png#lightbox)
+ [ ![Screenshot of Search API permissions page with the APIs my organization uses tab selected.](dicom/media/dicom-search-apis-permissions.png) ](dicom/media/dicom-search-apis-permissions.png#lightbox)
- The search result for Azure Healthcare APIs will only return if you've already deployed the DICOM service in the workspace.
+   The search result for Azure API for DICOM appears only if you've already deployed the DICOM service in the workspace.
If you're referencing a different resource application, select your DICOM API Resource Application Registration that you created previously under **APIs my organization**. 3. Select scopes (permissions) that the confidential client application will ask for on behalf of a user. Select **user_impersonation**, and then select **Add permissions**.
- [ ![Select permissions scopes.](dicom/media/dicom-select-scopes.png) ](dicom/media/dicom-select-scopes.png#lightbox)
+ [ ![Screenshot of scopes (permissions) that the client application will ask for on behalf of a user.](dicom/media/dicom-select-scopes.png) ](dicom/media/dicom-select-scopes.png#lightbox)
>[!NOTE] >Use grant_type of client_credentials when trying to obtain an access token for the FHIR service using tools such as Postman or REST Client. For more details, visit [Access using Postman](./fhir/use-postman.md) and [Accessing Azure Health Data Services using the REST Client Extension in Visual Studio Code](./fhir/using-rest-client.md).
->>Use grant_type of client_credentials or authentication_doe when trying to obtain an access token for the DICOM service. For more details, visit [Using DICOM with cURL](dicom/dicomweb-standard-apis-curl.md).
+>>Use grant_type of client_credentials or authorization_code when trying to obtain an access token for the DICOM service. For more details, visit [Using DICOM with cURL](dicom/dicomweb-standard-apis-curl.md).
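As an illustrative sketch of the client_credentials token request those tools send under the hood, the form body can be assembled with the standard library as below. The tenant, client ID, secret, and scope values are placeholders; actually acquiring a token requires a real tenant and app registration.

```python
from urllib.parse import urlencode

tenant_id = "contoso.onmicrosoft.com"  # placeholder tenant
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

# Form-encoded body for the OAuth 2.0 client_credentials grant.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder app ID
    "client_secret": "<client-secret>",  # never hard-code a real secret
    "scope": "https://example.fhir.azurehealthcareapis.com/.default",
})
print(token_url)
```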
Your application registration is now complete.
In this article, you learned how to register a client application in the Azure A
>[!div class="nextstepaction"] >[Overview of Azure Health Data Services](healthcare-apis-overview.md)-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate telemetry from your devices. To get started, select **Data explorer** on the left pane.
+> [!NOTE]
+> Only users in a role that have the necessary permissions can view, create, edit, and delete queries. To learn more, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).
+ To learn how to query devices by using the IoT Central REST API, see [How to use the IoT Central REST API to query devices.](../core/howto-query-with-rest-api.md) ## Understand the data explorer UI
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
When you define a custom role, you choose the set of permissions that a user is
| Delete | View | | Full Control | View, Update, Create, Delete |
+**Data explorer permissions**
+
+| Name | Dependencies |
+| - | -- |
+| View | None <br/> Other dependencies: View device groups, device templates, device instances |
+| Update | View <br/> Other dependencies: View device groups, device templates, device instances |
+| Create | View, Update <br/> Other dependencies: View device groups, device templates, device instances |
+| Delete | View <br/> Other dependencies: View device groups, device templates, device instances |
+| Full Control | View, Update, Create, Delete <br/> Other dependencies: View device groups, device templates, device instances |
+ **Branding, favicon, and colors permissions** | Name | Dependencies |
iot-edge How To Access Host Storage From Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-host-storage-from-module.md
Or, you can configure the local storage directly in the deployment manifest. For
"type": "docker", "env": { "storageFolder": {
- "value": "<ModuleStoragePath>/edgeAgent"
+ "value": "<ModuleStoragePath>"
} } },
To enable a link from module storage to the storage on the host system, create a
For example, if you wanted to enable the IoT Edge hub to store messages in your device's local storage and retrieve them later, you can configure the environment variables and the create options in the Azure portal in the **Runtime Settings** section. 1. For both IoT Edge hub and IoT Edge agent, add an environment variable called **storageFolder** that points to a directory in the module.
-1. For both IoT Edge hub and IoT Edge agent, add binds to connect a local directory on the host machine to a directory in the module. For example:
+1. For both IoT Edge hub and IoT Edge agent, add binds to connect a local directory on the host machine to a directory in the module. For example, for version 1.1:
![Add create options and environment variables for local storage](./media/how-to-access-host-storage-from-module/offline-storage.png)
-Or, you can configure the local storage directly in the deployment manifest. For example:
+Or, you can configure the local storage directly in the deployment manifest. For example, for version 1.1:
```json "systemModules": {
Or, you can configure the local storage directly in the deployment manifest. For
} ```
-Replace `<HostStoragePath>` and `<ModuleStoragePath>` with your host and module storage path; both values must be an absolute path.
+Replace `<HostStoragePath>` and `<ModuleStoragePath>` with your host and module storage paths; both values must be absolute paths. If you're using version 1.3, update each image tag to `1.3`. For example, `mcr.microsoft.com/azureiotedge-agent:1.3`.
For example, on a Linux system, `"Binds":["/etc/iotedge/storage/:/iotedge/storage/"]` means the directory **/etc/iotedge/storage** on your host system is mapped to the directory **/iotedge/storage/** in the container. On a Windows system, as another example, `"Binds":["C:\\temp:C:\\contemp"]` means the directory **C:\\temp** on your host system is mapped to the directory **C:\\contemp** in the container.
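To make the absolute-path requirement concrete, here's a small illustrative check; the `parse_bind` helper is hypothetical and handles only Linux-style binds.

```python
import posixpath

def parse_bind(bind: str):
    """Split a Docker-style 'host:module' bind and verify both sides are absolute."""
    host, module = bind.split(":", 1)
    if not (posixpath.isabs(host) and posixpath.isabs(module)):
        raise ValueError("both host and module storage paths must be absolute")
    return host, module

print(parse_bind("/etc/iotedge/storage/:/iotedge/storage/"))
```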
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Title: Connect downstream IoT Edge devices - Azure IoT Edge | Microsoft Docs
-description: How to configure an IoT Edge device to connect to Azure IoT Edge gateway devices.
+ Title: How to create nested Azure IoT Edge device hierarchies
+description: Step by step adaptable manual instructions on how to create a hierarchy of IoT Edge devices.
Previously updated : 05/03/2022 Last updated : 09/01/2022
monikerRange: ">=iotedge-2020-11"
-# Connect a downstream IoT Edge device to an Azure IoT Edge gateway
+# Connect Azure IoT Edge devices together to create a hierarchy (nested edge)
[!INCLUDE [iot-edge-version-202011](../../includes/iot-edge-version-202011.md)]
-This article provides instructions for establishing a trusted connection between an IoT Edge gateway and a downstream IoT Edge device.
+This article provides instructions for establishing a trusted connection between an IoT Edge gateway and a downstream IoT Edge device. This setup is also known as "nested edge".
In a gateway scenario, an IoT Edge device can be both a gateway and a downstream device. Multiple IoT Edge gateways can be layered to create a hierarchy of devices. The downstream (child) devices can authenticate and send or receive messages through their gateway (parent) device.
iot-edge How To Install Iot Edge Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-kubernetes.md
IoT Edge can be installed on Kubernetes by using [KubeVirt](https://www.cncf.io/
A functional sample for running IoT Edge on Azure Kubernetes Service (AKS) using KubeVirt is available at [https://aka.ms/iotedge-kubevirt](https://aka.ms/iotedge-kubevirt). > [!NOTE]
-> Based on feedback, the prior translation-based preview of IoT Edge integration with Kubernetes has been discontinued and will not be made generally available. An exception being Azure Stack Edge devices where tranlation-based Kubernetes integration will be supported until IoT Edge v1.1 is maintained (Dec 2022).
+> Based on feedback, the prior translation-based preview of IoT Edge integration with Kubernetes has been discontinued and will not be made generally available. The exception is Azure Stack Edge devices, where translation-based Kubernetes integration will be supported for as long as IoT Edge v1.1 is maintained (through December 2022).
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
description: IoT Edge module log retrieval and upload to Azure Blob Storage.
Previously updated : 11/12/2020 Last updated : 09/01/2022
This method accepts a JSON payload with the following schema:
| id | string | A regular expression that supplies the module name. It can match multiple modules on an edge device. [.NET Regular Expressions](/dotnet/standard/base-types/regular-expressions) format is expected. In case there are multiple items whose ID matches the same module, only the filter options of the first matching ID will be applied to that module. | | filter | JSON section | Log filters to apply to the modules matching the `id` regular expression in the tuple. | | tail | integer | Number of log lines in the past to retrieve starting from the latest. OPTIONAL. |
-| since | string | Only return logs since this time, as a duration (1 d, 90 m, 2 days 3 hours 2 minutes), rfc3339 timestamp, or UNIX timestamp. If both `tail` and `since` are specified, the logs are retrieved using the `since` value first. Then, the `tail` value is applied to the result, and the final result is returned. OPTIONAL. |
-| until | string | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (1 d, 90 m, 2 days 3 hours 2 minutes). OPTIONAL. |
+| since | string | Only return logs since this time, as an rfc3339 timestamp, UNIX timestamp, or a duration (days (d) hours (h) minutes (m)). For example, a duration for one day, 12 hours, and 30 minutes can be specified as *1 day 12 hours 30 minutes* or *1d 12h 30m*. If both `tail` and `since` are specified, the logs are retrieved using the `since` value first. Then, the `tail` value is applied to the result, and the final result is returned. OPTIONAL. |
+| until | string | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (days (d) hours (h) minutes (m)). For example, a duration of 90 minutes can be specified as *90 minutes* or *90m*. If both `tail` and `since` are specified, the logs are retrieved using the `since` value first. Then, the `tail` value is applied to the result, and the final result is returned. OPTIONAL. |
| loglevel | integer | Filter log lines equal to specified log level. Log lines should follow recommended logging format and use [Syslog severity level](https://en.wikipedia.org/wiki/Syslog#Severity_level) standard. Should you need to filter by multiple log level severity values, then rely on regex matching, provided the module follows some consistent format when logging different severity levels. OPTIONAL. | | regex | string | Filter log lines that have content that match the specified regular expression using [.NET Regular Expressions](/dotnet/standard/base-types/regular-expressions) format. OPTIONAL. | | encoding | string | Either `gzip` or `none`. Default is `none`. |
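For illustration, a request payload using the filter fields from the table above might be assembled as follows. The `schemaVersion`/`items` wrapper and the example values are assumptions for this sketch, not taken from the table itself.

```python
import json

# Example filter values drawn from the fields described above; the
# schemaVersion/items wrapper is an assumption for this sketch.
payload = {
    "schemaVersion": "1.0",
    "items": [
        {
            "id": "edgeAgent",          # regex matched against module names
            "filter": {
                "tail": 10,             # last 10 log lines
                "since": "1d 12h 30m",  # duration form from the table
                "loglevel": 6,          # syslog "informational" severity
            },
        }
    ],
    "encoding": "none",
}
print(json.dumps(payload, indent=2))
```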
This method accepts a JSON payload with the following schema:
|-|-|-| | schemaVersion | string | Set to `1.0` | | sasURL | string (URI) | [Shared Access Signature URL with write access to Azure Blob Storage container](/archive/blogs/jpsanders/easily-create-a-sas-to-download-a-file-from-azure-storage-using-azure-storage-explorer) |
-| since | string | Only return logs since this time, as a duration (1 d, 90 m, 2 days 3 hours 2 minutes), rfc3339 timestamp, or UNIX timestamp. OPTIONAL. |
-| until | string | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (1 d, 90 m, 2 days 3 hours 2 minutes). OPTIONAL. |
+| since | string | Only return logs since this time, as an rfc3339 timestamp, UNIX timestamp, or a duration (days (d) hours (h) minutes (m)). For example, a duration for one day, 12 hours, and 30 minutes can be specified as *1 day 12 hours 30 minutes* or *1d 12h 30m*. OPTIONAL. |
+| until | string | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (days (d) hours (h) minutes (m)). For example, a duration of 90 minutes can be specified as *90 minutes* or *90m*. OPTIONAL. |
| edgeRuntimeOnly | boolean | If true, only return logs from Edge Agent, Edge Hub, and the Edge Security Daemon. Default: false. OPTIONAL. | > [!IMPORTANT]
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-visual-studio-develop-module.md
Typically, you'll want to test and debug each module before running it within an
* If developing in C#, set a breakpoint in the `PipeMessage()` function in **Program.cs**. * If using C, set a breakpoint in the `InputQueue1Callback()` function in **main.c**.
-1. Test the module by sending a message by running the following command in a **Git Bash** or **WSL Bash** shell. (You cannot run the `curl` command from a PowerShell or command prompt.)
-
- ```bash
- curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages
- ```
+1. Test the module by sending a message by running the following command in a **Git Bash** or **WSL Bash** shell. You cannot run the `curl` command from a PowerShell or command prompt.
+ ```bash
+ curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages
+ ```
+ If you get the error *unmatched close brace/bracket in URL*, try the following command instead:
+
+ ```bash
+    curl --header "Content-Type: application/json" --request POST --data "{\"inputName\": \"input1\", \"data\": \"hello world\"}" http://localhost:53000/api/v1/messages
+ ```
+
:::image type="content" source="./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png" alt-text="Screenshot of the output console, Visual Studio project, and Bash window." lightbox="./media/how-to-visual-studio-develop-csharp-module/debug-single-module.png"::: The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window, found when the debugger is running. Go to Debug > Windows > Locals.
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
You can use the [IoT Hub resource provider REST API](/rest/api/iothub/iothubreso
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-To complete this tutorial, you need the following:
+## Prerequisites
* Visual Studio.
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
-
-* [Azure PowerShell 1.0](/powershell/azure/install-Az-ps) or later.
+* [Azure Az PowerShell module](/powershell/azure/install-Az-ps).
[!INCLUDE [iot-hub-prepare-resource-manager](../../includes/iot-hub-prepare-resource-manager.md)]
To complete this tutorial, you need the following:
4. In NuGet Package Manager, search for **Microsoft.IdentityModel.Clients.ActiveDirectory**. Click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the license. > [!IMPORTANT]
- > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade. For more information, see the [migration guide](../active-directory/develop/msal-migration.md).
5. In Program.cs, replace the existing **using** statements with the following code:
You can now complete the application by calling the **CreateIoTHub** method befo
## Next steps
-Now you have deployed an IoT hub using the resource provider REST API, you may want to explore further:
+Since you've deployed an IoT hub using the resource provider REST API, you may want to explore further:
* Read about the capabilities of the [IoT Hub resource provider REST API](/rest/api/iothub/iothubresource).
kinect-dk Access Data Body Frame https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/access-data-body-frame.md
description: Learn about the data in a body frame and the functions to access th
+ Last updated 06/26/2019 keywords: body, frame, azure, kinect, body, tracking, tips
kinect-dk Body Index Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-index-map.md
description: Understand how to query a body tracking index map in the Azure Kine
+ Last updated 06/26/2019 keywords: kinect, porting, body, tracking, index, segmentation, map
kinect-dk Body Joints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-joints.md
description: Understand the body frame, joints, joint coordinates, and joint hie
+ Last updated 06/26/2019 keywords: kinect, porting, body, tracking, joint, hierarchy, bone, connection
kinect-dk Body Sdk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-sdk-download.md
Title: Azure Kinect Body Tracking SDK download
description: Understand how to download each version of the Azure Kinect Sensor SDK on Windows or Linux. + Last updated 03/21/2022 keywords: azure, kinect, sdk, download update, latest, available, install, body, tracking
kinect-dk Body Sdk Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-sdk-setup.md
Title: Quickstart - Set up Azure Kinect body tracking
description: In this quickstart, you will set up the body tracking SDK for Azure Kinect. + Last updated 03/15/2022 keywords: kinect, azure, sensor, access, depth, sdk, body, tracking, joint, setup, onnx, directml, cuda, trt, nvidia
kinect-dk Build First Body App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/build-first-body-app.md
description: Step by step instructions to build your first Azure Kinect body tra
+ Last updated 06/26/2019 keywords: kinect, azure, sensor, sdk, body, tracking, joint, application, first
kinect-dk Get Body Tracking Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/get-body-tracking-results.md
Title: Azure Kinect get body tracking results
description: Learn how to get body tracking results using the Azure Kinect Body Tracking SDK. + Last updated 06/26/2019
kinect-dk Hardware Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/hardware-specification.md
Title: Azure Kinect DK hardware specifications
description: Understand the components, specifications, and capabilities of the Azure Kinect DK. + Last updated 03/18/2021 keywords: azure, kinect, specs, hardware, DK, capabilities, depth, color, RGB, IMU, microphone, array, depth
kinect-dk System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/system-requirements.md
+ Last updated 03/05/2021 keywords: azure, kinect, system requirements, CPU, GPU, USB, set up, setup, minimum, requirements
kinect-dk Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/troubleshooting.md
Title: Azure Kinect known issues and troubleshooting
description: Learn about some of the known issues and troubleshooting tips when using the Sensor SDK with Azure Kinect DK. + Last updated 03/15/2022 keywords: troubleshooting, update, bug, kinect, feedback, recovery, logging, tips
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
description: This quickstart shows how to create an internal load balancer using
Previously updated : 03/24/2022 Last updated : 09/02/2022 #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
New-AzResourceGroup -Name 'CreateIntLBQS-rg' -Location 'eastus'
When you create an internal load balancer, a virtual network is configured as the network for the load balancer. Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
-Create a virtual network for the backend virtual machines
+- Create a public IP for the NAT gateway
-Create a network security group to define inbound connections to your virtual network
+- Create a virtual network for the backend virtual machines
-Create an Azure Bastion host to securely manage the virtual machines in the backend pool
+- Create a network security group to define inbound connections to your virtual network
+
+- Create an Azure Bastion host to securely manage the virtual machines in the backend pool
+
+## Create a public IP address
+
+Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a public IP address for the NAT gateway.
+
+```azurepowershell-interactive
+## Create public IP address for NAT gateway and place IP in variable ##
+$gwpublicip = @{
+ Name = 'myNATgatewayIP'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ Zone = 1,2,3
+}
+$gwpublicip = New-AzPublicIpAddress @gwpublicip
+```
+
+To create a zonal public IP address in zone 1, use the following command:
+
+```azurepowershell-interactive
+## Create a zonal public IP address for NAT gateway and place IP in variable ##
+$gwpublicip = @{
+ Name = 'myNATgatewayIP'
+ ResourceGroupName = 'CreateIntLBQS-rg'
+ Location = 'eastus'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ Zone = 1
+}
+$gwpublicip = New-AzPublicIpAddress @gwpublicip
+```
### Create virtual network, network security group, bastion host, and NAT gateway
Create an Azure Bastion host to securely manage the virtual machines in the back
* Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway with the subnet of the virtual network

```azurepowershell-interactive
-## Create public IP address for NAT gateway ##
-$ip = @{
- Name = 'myNATgatewayIP'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- Sku = 'Standard'
- AllocationMethod = 'Static'
-}
-$publicIP = New-AzPublicIpAddress @ip
## Create NAT gateway resource ##
$nat = @{
    IdleTimeoutInMinutes = '10'
    Sku = 'Standard'
    Location = 'eastus'
- PublicIpAddress = $publicIP
+ PublicIpAddress = $gwpublicip
}
$natGateway = New-AzNatGateway @nat
$net = @{
$vnet = New-AzVirtualNetwork @net

## Create public IP address for bastion host. ##
-$ip = @{
+$bastionip = @{
    Name = 'myBastionIP'
    ResourceGroupName = 'CreateIntLBQS-rg'
    Location = 'eastus'
    Sku = 'Standard'
    AllocationMethod = 'Static'
}
-$publicip = New-AzPublicIpAddress @ip
+$bastionip = New-AzPublicIpAddress @bastionip
## Create bastion host ##
$bastion = @{
    ResourceGroupName = 'CreateIntLBQS-rg'
    Name = 'myBastion'
- PublicIpAddress = $publicip
+ PublicIpAddress = $bastionip
    VirtualNetwork = $vnet
}
New-AzBastion @bastion -AsJob
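The bullet list earlier mentions using [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway with the virtual network subnet, but that step isn't shown in full here. One possible sketch, assuming the `$vnet` and `$natGateway` variables from the preceding commands and an illustrative subnet name of `myBackendSubnet`, attaches the gateway through the subnet's `NatGateway` property:

```azurepowershell-interactive
## Sketch: attach the NAT gateway to the backend subnet ##
## Assumes $vnet and $natGateway from the earlier commands; the subnet name is illustrative ##
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'myBackendSubnet' -VirtualNetwork $vnet
$subnet.NatGateway = $natGateway
$vnet | Set-AzVirtualNetwork
```

Adjust the subnet name and variables for your environment before running this sketch.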
logic-apps Logic Apps Enterprise Integration Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-create-integration-account.md
For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az-
--name integration_account_01 --location westus --sku name=Standard
```
- Your integration account name can contain only letters, numbers, hyphens (-), underscores (_), parentheses ((, )), and periods (.).
+ Your integration account name can contain only letters, numbers, hyphens (-), underscores (_), parentheses (()), and periods (.).
To view a specific integration account, use the [az logic integration-account show](/cli/azure/logic/integration-account#az-logic-integration-account-show) command:
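If you work with Azure PowerShell rather than the Azure CLI, a roughly equivalent check uses the [Get-AzIntegrationAccount](/powershell/module/az.logicapp/get-azintegrationaccount) cmdlet. This is a sketch only; the resource group name is illustrative:

```azurepowershell-interactive
## Sketch: view the integration account by using Az PowerShell ##
## The resource group name is illustrative; substitute your own ##
Get-AzIntegrationAccount -ResourceGroupName 'myresourcegroup' -Name 'integration_account_01'
```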
Before you can link your integration account to a Standard logic app resource, y
|-|-|
| **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
| **Value** | <*integration-account-callback-URL*> |
- |||
1. When you're done, select **OK**. When you return to the **Configuration** pane, make sure to save your changes. On the **Configuration** pane toolbar, select **Save**.
Before you can link your integration account to a Standard logic app resource, y
|-|-|
| **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
| **Value** | <*integration-account-callback-URL*> |
- |||
This example shows how a sample app setting might appear:
If you want to link your logic app to another integration account, or no longer
|-|-|
| **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
| **Value** | <*integration-account-callback-URL*> |
- |||
1. When you're done, save your changes.
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
Previously updated : 08/23/2022 Last updated : 09/01/2022 # Encode and decode flat files in Azure Logic Apps [!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. By building a logic app workflow, you can encode and decode flat files by using the [built-in](../connectors/built-in.md#integration-account-built-in) **Flat File** actions.
-
-Although no **Flat File** triggers are available, you can use a different trigger or action to get or feed the XML content from various sources into your workflow for encoding or decoding. For example, you can use the Request trigger, another app, or other [connectors supported by Azure Logic Apps](../connectors/apis-list.md). You can use **Flat File** actions with workflows in the [**Logic App (Consumption)** and **Logic App (Standard)** resource types](single-tenant-overview-compare.md).
+Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. Likewise, if you receive encoded content, you need to decode that content before you can use it. When you're building a logic app workflow in Azure Logic Apps, you can encode and decode flat files by using the **Flat File** built-in connector actions along with a flat file schema. You can use **Flat File** actions in multi-tenant Consumption logic app workflows and single-tenant Standard logic app workflows.
> [!NOTE]
-> For **Logic App (Standard)**, the **Flat File** actions are currently in preview.
+>
+> In Standard logic app workflows, the **Flat File** actions are currently in preview.
+
+While no **Flat File** triggers are available, you can use any trigger or action to feed the source XML content into your workflow. For example, you can use a built-in connector trigger, a managed or Azure-hosted connector trigger available for Azure Logic Apps, or even another app.
+
+This article shows how to complete the following tasks:
-This article shows how to add the Flat File encoding and decoding actions to an existing logic app workflow. If you're new to logic apps, review the following documentation:
+* Add a **Flat File** encoding or decoding action to your workflow.
+* Select the schema that you want to use.
-* [What is Azure Logic Apps](logic-apps-overview.md)
-* [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md)
+For more information, review the following documentation:
+
+* [Consumption versus Standard logic apps](logic-apps-overview.md#resource-type-and-host-environment-differences)
+* [Integration account built-in connectors](../connectors/built-in.md#integration-account-built-in)
+* [Built-in connectors overview for Azure Logic Apps](../connectors/built-in.md)
+* [Managed or Azure-hosted connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
## Prerequisites * An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
+* The logic app workflow, blank or existing, where you want to use the **Flat File** action.
- * Is associated with the same Azure subscription as your logic app resource.
+ If you have a blank workflow, use any trigger that you want to start the workflow. This example uses the Request trigger.
- * Exists in the same location or Azure region as your logic app resource.
+* Your logic app resource and workflow. Flat file operations don't have any triggers available, so your workflow must include at least a trigger. For more information, review the following documentation:
- * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account requires the following items:
+ * [Quickstart: Create your first Consumption logic app workflow with multi-tenant Azure Logic Apps](quickstart-create-first-logic-app-workflow.md)
- * The flat file [schema](logic-apps-enterprise-integration-schemas.md) to use for encoding or decoding the XML content.
+ * [Create a Standard logic app workflow with single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
- * A [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account).
+* A flat file schema for encoding and decoding the XML content. For more information, see [Add schemas to use with workflows in Azure Logic Apps](logic-apps-enterprise-integration-schemas.md).
- * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you don't store schemas in your integration account. Instead, you can [directly add schemas to your logic app resource](logic-apps-enterprise-integration-schemas.md) using either the Azure portal or Visual Studio Code. You can then use these schemas across multiple workflows within the *same logic app resource*.
+* Depending on whether you're working on a Consumption or Standard logic app workflow, you'll need an [integration account resource](logic-apps-enterprise-integration-create-integration-account.md). Usually, you need this resource when you want to define and store artifacts for use in enterprise integration and B2B workflows.
- You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ > [!IMPORTANT]
+ >
+ > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
- > [!NOTE]
- > Currently, only the **Logic App (Consumption)** resource type supports [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
- > The **Logic App (Standard)** resource type doesn't include [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
+ * If you're working on a Consumption logic app workflow, your logic app resource requires a [link to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account).
-* The logic app workflow, blank or existing, where you want to use the **Flat File** action.
+ * If you're working on a Standard logic app workflow, you can link your logic app resource to your integration account, upload schemas directly to your logic app resource, or both, based on the following scenarios:
- If you have a blank workflow, use any trigger that you want to start the workflow. This example uses the Request trigger.
+ * If you already have an integration account with the artifacts that you need or want to use, you can link your integration account to multiple Standard logic app resources where you want to use the artifacts. That way, you don't have to upload schemas to each individual logic app. For more information, review [Link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account).
-## Limits
+ * The **Flat File** built-in connector lets you select a schema that you previously uploaded to your logic app resource or to a linked integration account, but not both. You can then use this artifact across all child workflows within the same logic app resource.
-Make sure that the contained XML groups in the flat file schema that you generate doesn't have excessive numbers of the `max count` property set to a value *greater than 1*. Avoid nesting an XML group with a `max count` property value greater than 1 inside another XML group with a `max count` property greater than 1.
+ So, if you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option. Either way, you can use these artifacts across all child workflows within the same logic app resource.
-Each time that the flat file schema allows the choice of the next fragment, the Azure Logic Apps engine that parses the schema generates a *symbol* and a *prediction* for that fragment. If the schema allows too many such constructs, for example, more than 100,000, the schema expansion becomes excessively large, which consumes too much resources and time.
+## Limitations
-## Add Flat File Encoding action
+* In your flat file schema, avoid setting the `max count` property to a value *greater than 1* for an excessive number of the contained XML groups. In particular, avoid nesting an XML group with a `max count` property value greater than 1 inside another XML group with a `max count` property greater than 1.
-### [Consumption](#tab/consumption)
+* When Azure Logic Apps parses the flat file schema, each place where the schema allows a choice of the next fragment generates a *symbol* and a *prediction* for that fragment. If the schema allows too many such constructs, for example, more than 100,000, the schema expansion becomes excessively large, which consumes excessive resources and time.
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+## Upload schema
-1. If you have a blank workflow that doesn't have a trigger, add any trigger you want. Otherwise, continue to the next step.
+After you create your schema, upload the schema based on the following scenario:
- This example uses the Request trigger, which is named **When a HTTP request is received**, and handles inbound requests from outside the logic app workflow. To add the Request trigger, follow these steps:
+* If you're working on a Consumption logic app workflow, [add your schema to your integration account](logic-apps-enterprise-integration-schemas.md?tabs=consumption#add-schema).
- 1. Under the designer search box, select **Built-in**. In the designer search box, enter `HTTP request`.
+* If you're working on a Standard logic app workflow, you can [add your schema to your integration account](logic-apps-enterprise-integration-schemas.md?tabs=consumption#add-schema), or [add your schema to your logic app resource](logic-apps-enterprise-integration-schemas.md?tabs=standard#add-schema).
- 1. From the triggers list, select the Request trigger named **When an HTTP request is received**.
+## Add a Flat File encoding action
+
+### [Consumption](#tab/consumption)
- > [!TIP]
- > Providing a JSON schema is optional. If you have a sample payload from the inbound request,
- > select **Use sample payload to generate schema**, enter the sample payload, and select **Done**.
- > The schema appears in the **Request Body JSON Schema** box.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer, if not already open.
-1. Under the step in your workflow where you want to add the **Flat File Encoding** action, choose an option:
+1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Flat File operations don't have any triggers available.
- * To add the **Flat File Encoding** action at the end of your workflow, select **New step**.
+ This example continues with the Request trigger named **When a HTTP request is received**.
- * To add the **Flat File Encoding** action between existing steps, move your pointer over the arrow that connects those steps so that the plus sign (**+**) appears. Select that plus sign, and then select **Add an action**.
+1. On the workflow designer, under the step where you want to add the Flat File action, select **New step**.
-1. In the **Choose an operation** search box, enter `flat file`. From the actions list, select the action named **Flat File Encoding**.
+1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **flat file**.
- ![Screenshot showing the Azure portal and Consumption designer with "flat file" in search box and the "Flat File Encoding" action selected.](./media/logic-apps-enterprise-integration-flatfile/flat-file-encoding-consumption.png)
+1. From the actions list, select the action named **Flat File Encoding**.
-1. Click inside the **Content** box so that the dynamic content list appears. From the list, in the **When a HTTP request is received** section, select the **Body** property, which contains the request body output from the trigger and the content to encode.
+ ![Screenshot showing Azure portal and Consumption workflow designer with "flat file" in search box and "Flat File Encoding" action selected.](./media/logic-apps-enterprise-integration-flatfile/flat-file-encoding-consumption.png)
- ![Screenshot showing the Consumption designer and the "Content" property with dynamic content list and content selected for encoding.](./media/logic-apps-enterprise-integration-flatfile/select-content-to-encode-consumption.png)
+1. In the action's **Content** property, provide the output from the trigger or a previous action that you want to encode by following these steps:
- > [!TIP]
+ 1. Click inside the **Content** box so that the dynamic content list appears.
+
+ 1. From the dynamic content list, select the flat file content that you want to encode.
+
+ For this example, from the dynamic content list, under **When a HTTP request is received**, select the **Body** token, which represents the body content output from the trigger.
+
+ ![Screenshot showing Consumption workflow designer and "Content" property with dynamic content list and content selected for encoding.](./media/logic-apps-enterprise-integration-flatfile/select-content-to-encode-consumption.png)
+
+ > [!NOTE]
+ >
> If the **Body** property doesn't appear in the dynamic content list, > select **See more** next to the **When a HTTP request is received** section label.
- > You can also directly enter the content to decode in the **Content** box.
+ > You can also directly enter the content to encode in the **Content** box.
-1. From the **Schema Name** list, select the schema that's in your linked integration account to use for encoding, for example:
+1. From the **Schema Name** list, select your schema.
- ![Screenshot showing the Consumption designer and the opened "Schema Name" list with selected schema to use for encoding.](./media/logic-apps-enterprise-integration-flatfile/select-encoding-schema-consumption.png)
+ ![Screenshot showing Consumption workflow designer and opened "Schema Name" list with selected schema for encoding.](./media/logic-apps-enterprise-integration-flatfile/select-encoding-schema-consumption.png)
> [!NOTE]
- > If no schema appears in the list, your integration account doesn't contain any schema files
- > to use for encoding. Upload the schema that you want to use to your integration account.
+ >
+ > If the schema list is empty, either your logic app resource isn't linked to your
+ > integration account or your integration account doesn't contain any schema files.
-1. Save your workflow. On the designer toolbar, select **Save**.
+ When you're done, your action looks similar to the following:
-1. To test your workflow, send a request to the HTTPS endpoint, which appears in the Request trigger's **HTTP POST URL** property, and include the XML content that you want to encode in the request body.
+ ![Screenshot showing Consumption workflow with finished "Flat File Encoding" action.](./media/logic-apps-enterprise-integration-flatfile/finished-flat-file-encoding-action-consumption.png)
-You're now done with setting up your flat file encoding action. In a real world app, you might want to store the encoded data in a line-of-business (LOB) app, such as Salesforce. Or, you can send the encoded data to a trading partner. To send the output from the encoding action to Salesforce or to your trading partner, use the other [connectors available in Azure Logic Apps](../connectors/apis-list.md).
+1. To add other optional parameters to the action, select those parameters from the **Add new parameter** list.
-### [Standard](#tab/standard)
+ | Parameter | Value | Description |
+ |--|-|-|
+ | **Mode of empty node generation** | **ForcedDisabled** or **HonorSchemaNodeProperty** or **ForcedEnabled** | The mode to use for empty node generation in flat file encoding |
+ | **XML Normalization** | **Yes** or **No** | The setting to enable or disable XML normalization in flat file encoding |
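In the workflow's code view, a configured Consumption **Flat File Encoding** action resembles the following sketch. The action name and the schema name `MyFlatFileSchema` are placeholders, and `@triggerBody()` assumes that the Request trigger output is the content to encode:

```json
"Flat_File_Encoding": {
    "type": "FlatFileEncoding",
    "inputs": {
        "content": "@triggerBody()",
        "integrationAccount": {
            "schema": {
                "name": "MyFlatFileSchema"
            }
        }
    },
    "runAfter": {}
}
```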
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+1. Save your workflow. On the designer toolbar, select **Save**.
-1. If you have a blank workflow that doesn't have a trigger, add any trigger you want. Otherwise, continue to the next step.
+### [Standard](#tab/standard)
- This example uses the Request trigger, which is named **When a HTTP request is received**, and handles inbound requests from outside the logic app workflow. To add the Request trigger, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer, if not already open.
- 1. On the designer, select **Choose an operation**. In the **Choose an operation** pane that opens, under the search box, select **Built-in**.
+1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Flat File operations don't have any triggers available.
- 1. In the search box, enter `HTTP request`. From the triggers list, select the Request trigger named **When an HTTP request is received**.
+ This example continues with the Request trigger named **When a HTTP request is received**.
- > [!TIP]
- > Providing a JSON schema is optional. If you have a sample payload from the inbound request,
- > select **Use sample payload to generate schema**, enter the sample payload, and select **Done**.
- > The schema appears in the **Request Body JSON Schema** box.
+1. On the designer, under the step where you want to add the Flat File action, select the plus sign (**+**), and then select **Add an action**.
-1. Under the step in your workflow where you want to add the **Flat File Encoding** action, choose an option:
+1. On the **Add an action** pane that appears, under the search box, select **Built-in**.
- * To add the **Flat File Encoding** action at the end of your workflow, select the plus sign (**+**), and then select **Add an action**.
+1. In the search box, enter **flat file**. From the actions list, select the action named **Flat File Encoding**.
- * To add the **Flat File Encoding** action between existing steps, select the plus sign (**+**) that appears between those steps, and then select **Insert a new step**.
+ ![Screenshot showing Azure portal and Standard workflow designer with "flat file" in search box and "Flat File Encoding" action selected.](./media/logic-apps-enterprise-integration-flatfile/flat-file-encoding-standard.png)
-1. In the **Choose an operation** pane that opens, under the search box, select **Built-in**.
+1. In the action's **Content** property, provide the output from the trigger or a previous action that you want to encode by following these steps:
-1. In the search box, enter `flat file`. From the actions list, select the action named **Flat File Encoding**.
+ 1. Click inside the **Content** box so that the dynamic content list appears.
- ![Screenshot showing the Azure portal and Standard workflow designer with "flat file" in search box and the "Flat File Encoding" action selected.](./media/logic-apps-enterprise-integration-flatfile/flat-file-encoding-standard.png)
+ 1. From the dynamic content list, select the flat file content that you want to encode.
+
+ For this example, from the dynamic content list, under **When a HTTP request is received**, select the **Body** token, which represents the body content output from the trigger.
-1. Click inside the **Content** box so that the dynamic content list appears. From the list, in the **When a HTTP request is received** section, select the **Body** property, which contains the request body output from the trigger and the content to encode.
+ ![Screenshot showing Standard workflow designer and the "Content" property with dynamic content list and content selected for encoding.](./media/logic-apps-enterprise-integration-flatfile/select-content-to-encode-standard.png)
- ![Screenshot showing the Standard workflow designer and the "Content" property with dynamic content list and content selected for encoding.](./media/logic-apps-enterprise-integration-flatfile/select-content-to-encode-standard.png)
+1. From the **Source** list, select either **LogicApp** or **IntegrationAccount** as your schema source.
- > [!TIP]
- > If the **Body** property doesn't appear in the dynamic content list,
- > select **See more** next to the **When a HTTP request is received** section label.
- > You can also directly enter the content to encode in the **Content** box.
+ This example continues by selecting **IntegrationAccount**.
+
+ ![Screenshot showing Standard workflow with "Source" property and "IntegrationAccount" selected.](./media/logic-apps-enterprise-integration-flatfile/select-logic-app-integration-account.png)
1. From the **Name** list, select the schema that you previously uploaded to your logic app resource for encoding, for example:

   ![Screenshot showing the Standard workflow designer and the opened "Name" list with selected schema to use for encoding.](./media/logic-apps-enterprise-integration-flatfile/select-encoding-schema-standard.png)

   > [!NOTE]
- > If no schema appears in the list, your Standard logic app resource doesn't contain any schema files to use for encoding.
- > Learn how to [upload the schema that you want to use to your Standard logic app resource](logic-apps-enterprise-integration-schemas.md).
+ >
+ > If the schema list is empty, either your logic app resource isn't linked to your
+ > integration account, your integration account doesn't contain any schema files,
+ > or your logic app resource doesn't contain any schema files.
1. Save your workflow. On the designer toolbar, select **Save**.
-1. To test your workflow, send a request to the HTTPS endpoint, which appears in the Request trigger's **HTTP POST URL** property, and include the XML content that you want to encode in the request body.
-
-You're now done with setting up your flat file encoding action. In a real world app, you might want to store the encoded data in a line-of-business (LOB) app, such as Salesforce. Or, you can send the encoded data to a trading partner. To send the output from the encoding action to Salesforce or to your trading partner, use the other [connectors available in Azure Logic Apps](../connectors/apis-list.md).
-
-## Add Flat File Decoding action
+## Add a Flat File decoding action
### [Consumption](#tab/consumption)
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer, if not already open.
-1. If you have a blank workflow that doesn't have a trigger, add any trigger you want. Otherwise, continue to the next step.
+1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Flat File operations don't have any triggers available.
- This example uses the Request trigger, which is named **When a HTTP request is received**, and handles inbound requests from outside the logic app workflow. To add the Request trigger, follow these steps:
+ This example continues with the Request trigger named **When a HTTP request is received**.
- 1. Under the designer search box, select **Built-in**. In the designer search box, enter `HTTP request`.
+1. On the workflow designer, under the step where you want to add the Flat File action, select **New step**.
- 1. From the triggers list, select the Request trigger named **When an HTTP request is received**.
+1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **flat file**.
- > [!TIP]
- > Providing a JSON schema is optional. If you have a sample payload from the inbound request,
- > select **Use sample payload to generate schema**, enter the sample payload, and select **Done**.
- > The schema appears in the **Request Body JSON Schema** box.
+1. From the actions list, select the action named **Flat File Decoding**.
-1. Under the step in your workflow where you want to add the **Flat File Decoding** action, choose an option:
+ ![Screenshot showing Azure portal and Consumption workflow designer with "flat file" in search box and "Flat File Decoding" action selected.](./media/logic-apps-enterprise-integration-flatfile/flat-file-decoding-consumption.png)
- * To add the **Flat File Decoding** action at the end of your workflow, select **New step**.
+1. In the action's **Content** property, provide the output from the trigger or a previous action that you want to decode by following these steps:
- * To add the **Flat File Decoding** action between existing steps, move your pointer over the arrow that connects those steps so that the plus sign (**+**) appears. Select that plus sign, and then select **Add an action**.
+ 1. Click inside the **Content** box so that the dynamic content list appears.
-1. In the **Choose an operation** search box, enter `flat file`. From the actions list, select the action named **Flat File Decoding**.
+ 1. From the dynamic content list, select the flat file content that you want to decode.
+
+ For this example, from the dynamic content list, under **When a HTTP request is received**, select the **Body** token, which represents the body content output from the trigger.
- ![Screenshot showing the Azure portal and the Consumption designer with "flat file" in search box and the "Flat File Decoding" action selected.](./media/logic-apps-enterprise-integration-flatfile/flat-file-decoding-consumption.png)
+ ![Screenshot showing the Consumption workflow designer and "Content" property with dynamic content list and content selected for decoding.](./media/logic-apps-enterprise-integration-flatfile/select-content-to-decode-consumption.png)
-1. Click inside the **Content** box so that the dynamic content list appears. From the list, in the **When a HTTP request is received** section, select the **Body** property, which contains the request body output from the trigger and the content to decode.
-
- ![Screenshot showing the Consumption designer and the "Content" property with dynamic content list and content selected for decoding.](./media/logic-apps-enterprise-integration-flatfile/select-content-to-decode-consumption.png)
-
- > [!TIP]
+ > [!NOTE]
+ >
> If the **Body** property doesn't appear in the dynamic content list,
- > select **See more** next to the **When a HTTP request is received** section label.
- > You can also directly enter the content to decode in the **Content** box.
+ > select **See more** next to the **When a HTTP request is received** section label.
+ > You can also directly enter the content to decode in the **Content** box.
-1. From the **Schema Name** list, select the schema that's in your linked integration account to use for decoding, for example:
+1. From the **Schema Name** list, select your schema.
- ![Screenshot showing the Consumption designer and the opened "Schema Name" list with selected schema to use for decoding.](./media/logic-apps-enterprise-integration-flatfile/select-decoding-schema-consumption.png)
+ ![Screenshot showing Consumption workflow designer and opened "Schema Name" list with selected schema for decoding.](./media/logic-apps-enterprise-integration-flatfile/select-decoding-schema-consumption.png)
> [!NOTE]
- > If no schema appears in the list, your integration account doesn't contain any schema files
- > to use for decoding. Upload the schema that you want to use to your integration account.
+ >
+ > If the schema list is empty, either your logic app resource isn't linked to your
+ > integration account or your integration account doesn't contain any schema files.
-1. Save your workflow. On the designer toolbar, select **Save**.
+ When you're done, your action looks similar to the following:
-1. To test your workflow, send a request to the HTTPS endpoint, which appears in the Request trigger's **HTTP POST URL** property, and include the XML content that you want to decode in the request body.
+ ![Screenshot showing Consumption workflow with finished "Flat File Decoding" action.](./media/logic-apps-enterprise-integration-flatfile/finished-flat-file-decoding-action-consumption.png)
-You're now done with setting up your flat file decoding action. In a real world app, you might want to store the decoded data in a line-of-business (LOB) app, such as Salesforce. Or, you can send the decoded data to a trading partner. To send the output from the decoding action to Salesforce or to your trading partner, use the other [connectors available in Azure Logic Apps](../connectors/apis-list.md).
+1. Save your workflow. On the designer toolbar, select **Save**.
### [Standard](#tab/standard)
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer, if not already open.
-1. If you have a blank workflow that doesn't have a trigger, add any trigger you want. Otherwise, continue to the next step.
+1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Flat File operations don't have any triggers available.
- This example uses the Request trigger, which is named **When a HTTP request is received**, and handles inbound requests from outside the logic app workflow. To add the Request trigger, follow these steps:
+ This example continues with the Request trigger named **When a HTTP request is received**.
- 1. On the designer, select **Choose an operation**. In the **Choose an operation** pane that opens, under the search box, select **Built-in**.
+1. On the designer, under the step where you want to add the Flat File action, select the plus sign (**+**), and then select **Add an action**.
- 1. In the search box, enter `HTTP request`. From the triggers list, select the Request trigger named **When an HTTP request is received**.
+1. On the **Add an action** pane that appears, under the search box, select **Built-in**.
- > [!TIP]
- > Providing a JSON schema is optional. If you have a sample payload from the inbound request,
- > select **Use sample payload to generate schema**, enter the sample payload, and select **Done**.
- > The schema appears in the **Request Body JSON Schema** box.
+1. In the search box, enter **flat file**. From the actions list, select the action named **Flat File Decoding**.
-1. Under the step in your workflow where you want to add the **Flat File Decoding** action, choose an option:
+ ![Screenshot showing Azure portal and Standard workflow designer with "flat file" in search box and "Flat File Decoding" action selected.](./media/logic-apps-enterprise-integration-flatfile/flat-file-decoding-standard.png)
- * To add the **Flat File Decoding** action at the end of your workflow, select the plus sign (**+**), and then select **Add an action**.
+1. In the action's **Content** property, provide the output from the trigger or a previous action that you want to decode by following these steps:
- * To add the **Flat File Decoding** action between existing steps, select the plus sign (**+**) that appears between those steps, and then select **Insert a new step**.
+ 1. Click inside the **Content** box so that the dynamic content list appears.
-1. In the **Choose an operation** pane that opens, under the search box, select **Built-in**.
+ 1. From the dynamic content list, select the flat file content that you want to decode.
+
+ For this example, from the dynamic content list, under **When a HTTP request is received**, select the **Body** token, which represents the body content output from the trigger.
-1. In the search box, enter `flat file`. From the actions list, select the action named **Flat File Decoding**.
+ ![Screenshot showing Standard workflow designer and the "Content" property with dynamic content list and content selected for decoding.](./media/logic-apps-enterprise-integration-flatfile/select-content-to-decode-standard.png)
- ![Screenshot showing the Azure portal and Standard workflow designer with "flat file" in search box and the "Flat File Decoding" action selected.](./media/logic-apps-enterprise-integration-flatfile/flat-file-decoding-standard.png)
+1. From the **Source** list, select either **LogicApp** or **IntegrationAccount** as your schema source.
-1. Click inside the **Content** box so that the dynamic content list appears. From the list, in the **When a HTTP request is received** section, select the **Body** property, which contains the request body output from the trigger and the content to decode.
+ This example continues by selecting **IntegrationAccount**.
- ![Screenshot showing the Standard workflow designer and the "Content" property with dynamic content list and content selected for decoding.](./media/logic-apps-enterprise-integration-flatfile/select-content-to-decode-standard.png)
-
- > [!TIP]
- > If the **Body** property doesn't appear in the dynamic content list,
- > select **See more** next to the **When a HTTP request is received** section label.
- > You can also directly enter the content to decode in the **Content** box.
+ ![Screenshot showing Standard workflow with "Source" property and "IntegrationAccount" selected.](./media/logic-apps-enterprise-integration-flatfile/select-logic-app-integration-account.png)
1. From the **Name** list, select the schema that you previously uploaded to your logic app resource for decoding, for example:

   ![Screenshot showing the Standard workflow designer and the opened "Name" list with selected schema to use for decoding.](./media/logic-apps-enterprise-integration-flatfile/select-decoding-schema-standard.png)

   > [!NOTE]
- > If no schema appears in the list, your Standard logic app resource doesn't contain any schema files to use for decoding.
- > Learn how to [upload the schema that you want to use to your Standard logic app resource](logic-apps-enterprise-integration-schemas.md).
+ >
+ > If the schema list is empty, either your logic app resource isn't linked to your
+ > integration account, your integration account doesn't contain any schema files,
+ > or your logic app resource doesn't contain any schema files.
1. Save your workflow. On the designer toolbar, select **Save**.
-1. To test your workflow, send a request to the HTTPS endpoint, which appears in the Request trigger's **HTTP POST URL** property, and include the XML content that you want to decode in the request body.
+ You're now done with setting up your flat file decoding action. In a real-world app, you might want to store the decoded data in a line-of-business (LOB) app, such as Salesforce. Or, you can send the decoded data to a trading partner. To send the output from the decoding action to Salesforce or to your trading partner, use the other [connectors available in Azure Logic Apps](../connectors/apis-list.md).
+## Test your workflow
+
+1. By using [Postman](https://www.getpostman.com/postman) or a similar tool and the `POST` method, send a call to the Request trigger's URL, which appears in the trigger's **HTTP POST URL** property, and include the XML content that you want to encode or decode in the request body.
+
+1. After your workflow finishes running, go to the workflow's run history, and examine the Flat File action's inputs and outputs.
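The Flat File actions themselves run only in Azure, but the following standalone Python sketch illustrates the general idea behind flat file decoding: turning delimited text into XML elements. The `Orders` element, field names, and comma delimiter are hypothetical, not taken from any real schema.

```python
import xml.etree.ElementTree as ET

def decode_flat_file(text, fields, delimiter=","):
    """Turn delimited flat-file lines into a simple XML document.

    Each input line becomes an <Order> element whose children are
    named after the entries in `fields` (a hypothetical layout).
    """
    root = ET.Element("Orders")
    for line in text.strip().splitlines():
        record = ET.SubElement(root, "Order")
        for name, value in zip(fields, line.split(delimiter)):
            ET.SubElement(record, name).text = value
    return ET.tostring(root, encoding="unicode")

flat = "1001,Widget,5\n1002,Gadget,2"
print(decode_flat_file(flat, ["Id", "Product", "Quantity"]))
```

In the real action, the field layout comes from the XSD schema you select, not from a hard-coded list as in this sketch.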
## Next steps
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
Last updated 08/15/2022
When you want to perform basic JSON transformations in your logic app workflows, you can use built-in data operations, such as the **Compose** action or **Parse JSON** action. However, some scenarios might require advanced and complex transformations that include elements such as iterations, control flows, and variables. For transformations from JSON to JSON, JSON to text, XML to JSON, or XML to text, you can create a template that describes the required mapping or transformation using the Liquid open-source template language. You can select this template when you add a **Liquid** built-in action to your workflow. You can use **Liquid** actions in multi-tenant Consumption logic app workflows and single-tenant Standard logic app workflows.
-Although no **Liquid** triggers are available, you can use any trigger or action to get or feed the source JSON or XML content into your workflow for transformation. For example, you can use a built-in connector trigger, a managed or Azure-hosted connector trigger available for Azure Logic Apps, or even another app.
+While no **Liquid** triggers are available, you can use any trigger or action to feed the source JSON or XML content into your workflow. For example, you can use a built-in connector trigger, a managed or Azure-hosted connector trigger available for Azure Logic Apps, or even another app.
This article shows how to complete the following tasks:
For more information, review the following documentation:
* If you already have an integration account with the artifacts that you need or want to use, you can link the integration account to multiple Standard logic app resources where you want to use the artifacts. That way, you don't have to upload maps to each individual logic app. For more information, review [Link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account).
- * Some Azure-hosted integration account connectors, such as **AS2**, **EDIFACT**, and **X12**, let you create a connection to your integration account. If you're just using these connectors, you don't need the link.
-
- * The built-in connectors named **Liquid** and **Flat File** let you select maps and schemas that you previously uploaded to your logic app resource or to a linked integration account, but not both. You can then use these artifacts across all child workflows within the *same logic app resource*.
+ * The **Liquid** built-in connector lets you select a map that you previously uploaded to your logic app resource or to a linked integration account, but not both. You can then use these artifacts across all child workflows within the *same logic app resource*.
So, if you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option. Either way, you can use these artifacts across all child workflows within the same logic app resource.
The following steps show how to add a Liquid transformation action for Consumpti
![Screenshot showing Consumption workflow, Liquid action's "Content" property, an open dynamic content list, and "Body" token selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-body-consumption.png)
-1. For the **Map** property, open the **Map** list, and select your Liquid template.
+1. From the **Map** list, select your Liquid template.
This example continues with the template named **JsonToJsonTemplate**.
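For illustration only, a JSON-to-JSON map like the hypothetical **JsonToJsonTemplate** might contain Liquid markup along these lines; the `content` property and the field names are assumptions for this sketch, not the actual template from the article:

```liquid
{
  "fullName": "{{content.firstName}} {{content.lastName}}",
  "status": "{{content.status | upcase}}"
}
```

The template receives the action's input under `content` and emits the transformed JSON as its output.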
The following steps show how to add a Liquid transformation action for Consumpti
> [!NOTE]
>
- > If the maps list is empty, most likely your logic app resource isn't linked to your integration account.
- > Make sure to [link your logic app resource to the integration account that has the Liquid template or map](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account).
+ > If the maps list is empty, either your logic app resource isn't linked to your
+ > integration account or your integration account doesn't contain any map files.
When you're done, the action looks similar to the following example:

   ![Screenshot showing Consumption workflow with finished "Transform JSON to JSON" action.](./media/logic-apps-enterprise-integration-liquid-transform/finished-transform-action-consumption.png)
+1. Save your workflow. On the designer toolbar, select **Save**.
+
### [Standard](#tab/standard)

1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer, if not already open.
The following steps show how to add a Liquid transformation action for Consumpti
![Screenshot showing Standard workflow, Liquid action's "Content" property with dynamic content list opened, and "Body" token selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-body-standard.png)
+ > [!NOTE]
+ >
+ > If the **Body** property doesn't appear in the dynamic content list,
+ > select **See more** next to the **When a HTTP request is received** section label.
+ > You can also directly enter the content to transform in the **Content** box.
+ 1. From the **Source** list, select either **LogicApp** or **IntegrationAccount** as your Liquid template source. This example continues by selecting **IntegrationAccount**.
The following steps show how to add a Liquid transformation action for Consumpti
![Screenshot showing Standard workflow with finished "Transform JSON to JSON" action.](./media/logic-apps-enterprise-integration-liquid-transform/finished-transform-action-standard.png)
+1. Save your workflow. On the designer toolbar, select **Save**.
+ ## Test your workflow
-1. By using [Postman](https://www.getpostman.com/postman) or a similar tool and the `POST` method, send a call to the Request trigger's URL and include the JSON input to transform, for example:
+1. By using [Postman](https://www.getpostman.com/postman) or a similar tool and the `POST` method, send a call to the Request trigger's URL, which appears in the trigger's **HTTP POST URL** property, and include the JSON input to transform, for example:
```json {
logic-apps Logic Apps Enterprise Integration Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-schemas.md
Title: Add schemas to validate XML in workflows
-description: Add schemas to validate XML documents in workflows with Azure Logic Apps and the Enterprise Integration Pack.
+ Title: Add schemas to use with workflows
+description: Add schemas for workflows in Azure Logic Apps.
ms.suite: integration
Last updated 08/30/2022
-# Add schemas to validate XML in workflows with Azure Logic Apps
+# Add schemas to use with workflows in Azure Logic Apps
-To check that documents use valid XML and have the expected data in the predefined format, your logic app workflow can use XML schemas with the **XML Validation** action. An XML schema describes a business document that's represented in XML using the [XML Schema Definition (XSD)](https://www.w3.org/TR/xmlschema11-1/).
+Workflow actions such as **Flat File** and **XML Validation** require a schema to perform their tasks. For example, the **XML Validation** action requires an XML schema to check that documents use valid XML and have the expected data in the predefined format. This schema describes a business document that's represented in XML using the [XML Schema Definition (XSD)](https://www.w3.org/TR/xmlschema11-1/) and uses the .xsd file name extension. The **Flat File** actions use a schema to encode and decode XML content.
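As a minimal illustration, an XSD that such actions could consume might look like the following sketch; the `Order.xsd` name and its elements are hypothetical, not an artifact from this article:

```xml
<!-- Hypothetical Order.xsd: describes an Order document with two fields -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Id" type="xs:string"/>
        <xs:element name="Quantity" type="xs:int"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```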
-If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overview.md)? For more information about B2B enterprise integration, review [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md).
+This article shows how to add a schema to your integration account. If you're working with a Standard logic app workflow, you can also add a schema directly to your logic app resource.
## Prerequisites

* An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* To create schemas, you can use the following tools:
+* The schema file that you want to add. To create schemas, you can use the following tools:
* Visual Studio 2019 and the [Microsoft Azure Logic Apps Enterprise Integration Tools Extension](https://aka.ms/vsenterpriseintegrationtools).
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
> in Visual Studio. To resolve this display problem, either [restart Visual Studio in DPI-unaware mode](/visualstudio/designers/disable-dpi-awareness#restart-visual-studio-as-a-dpi-unaware-process),
> or add the [DPIUNAWARE registry value](/visualstudio/designers/disable-dpi-awareness#add-a-registry-entry).
-* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
+* Based on whether you're working on a Consumption or Standard logic app workflow, you'll need an [integration account resource](logic-apps-enterprise-integration-create-integration-account.md). Usually, you need this resource when you want to define and store artifacts for use in enterprise integration and B2B workflows.
- * Is associated with the same Azure subscription as your logic app resource.
+ > [!IMPORTANT]
+ >
+ > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
- * Exists in the same location or Azure region as your logic app resource where you plan to use the **XML Validation** action.
+ * If you're working on a Consumption logic app workflow, you'll need an [integration account that's linked to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account).
- * If you use the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you have to [link your integration account to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use your artifacts in your workflow.
+ * If you're working on a Standard logic app workflow, you can link your integration account to your logic app resource, upload schemas directly to your logic app resource, or both, based on the following scenarios:
- To create and add schemas for use in **Logic App (Consumption)** workflows, you don't need a logic app resource yet. However, when you're ready to use those schemas in your workflows, your logic app resource requires a linked integration account that stores those schemas.
+ * If you already have an integration account with the artifacts that you need or want to use, you can link your integration account to multiple Standard logic app resources where you want to use the artifacts. That way, you don't have to upload schemas to each individual logic app. For more information, review [Link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account).
- * If you use the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you need an existing logic app resource because you don't store schemas in your integration account. Instead, you can directly add schemas to your logic app resource using either the Azure portal or Visual Studio Code. You can then use these schemas across multiple workflows within the *same logic app resource*.
+ * The **Flat File** built-in connector lets you select a schema that you previously uploaded to your logic app resource or to a linked integration account, but not both. You can then use this artifact across all child workflows within the same logic app resource.
- You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ So, if you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option. Either way, you can use these artifacts across all child workflows within the same logic app resource.
- > [!NOTE]
- > Currently, only the **Logic App (Consumption)** resource type supports [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
- > The **Logic App (Standard)** resource type doesn't include [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
+## Limitations
-* If your schema is [2 MB or smaller](#smaller-schema), you can add your schema to your integration account *directly* from the Azure portal. However, if your schema is bigger than 2 MB but not bigger than the [size limit for schemas](logic-apps-limits-and-config.md#artifact-capacity-limits), you can upload your schema to an Azure storage account. To add that schema to your integration account, you can then link to your storage account from your integration account. For this task, here are the items you need:
+* Limits apply to the number of artifacts, such as schemas, per integration account. For more information, review [Limits and configuration information for Azure Logic Apps](logic-apps-limits-and-config.md#integration-account-limits).
- | Item | Description |
- ||-|
- | [Azure storage account](../storage/common/storage-account-overview.md) | In this account, create an Azure blob container for your schema. Learn [how to create a storage account](../storage/common/storage-account-create.md). |
- | Blob container | In this container, you can upload your schema. You also need this container's content URI later when you add the schema to your integration account. Learn how to [create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md). |
- | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, choose a step: <p>- In the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. <p>- For the desktop version, [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md). |
- |||
+* Based on whether you're working on a Consumption or Standard logic app workflow, schema file size limits might apply.
- To add larger schemas for the **Logic App (Consumption)** resource type, you can also use the [Azure Logic Apps REST API - Schemas](/rest/api/logic/schemas/create-or-update). However, for the **Logic App (Standard)** resource type, the Azure Logic Apps REST API is currently unavailable.
+ * If you're working with Standard workflows, no limits apply to schema file sizes.
-## Limits
+ * If you're working with Consumption workflows, the following limits apply:
-* For **Logic App (Standard)**, no limits exist for schema file sizes.
+ * If your schema is [2 MB or smaller](#smaller-schema), you can add your schema to your integration account *directly* from the Azure portal.
-* For **Logic App (Consumption)**, limits exist for integration accounts and artifacts such as schemas. For more information, review [Limits and configuration information for Azure Logic Apps](logic-apps-limits-and-config.md#integration-account-limits).
+ * If your schema is bigger than 2 MB but not bigger than the [size limit for schemas](logic-apps-limits-and-config.md#artifact-capacity-limits), you'll need an Azure storage account where you can upload your schema. Then, to add that schema to your integration account, link to your storage account from your integration account. For this task, the following table describes the items you need:
- Usually, when you're using an integration account with your workflow and you want to validate XML, you add or upload the schema to that account. If you're referencing or importing a schema that's not in your integration account, you might receive the following error when you use the element `xsd:redefine`:
+ | Item | Description |
+ ||-|
+ | [Azure storage account](../storage/common/storage-account-overview.md) | In this account, create an Azure blob container for your schema. Learn [how to create a storage account](../storage/common/storage-account-create.md). |
+ | Blob container | In this container, you can upload your schema. You also need this container's content URI later when you add the schema to your integration account. Learn how to [create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md). |
+ | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, choose a step: <br><br>- In the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. <br><br>- For the desktop version, [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md). |
+
+ To add larger schemas, you can also use the [Azure Logic Apps REST API - Schemas](/rest/api/logic/schemas/create-or-update). However, for Standard workflows, the Azure Logic Apps REST API is currently unavailable.
+
+* Usually, when you're using an integration account with your workflow, you add the schema to that account. However, if you're referencing or importing a schema that's not in your integration account, you might receive the following error when you use the element `xsd:redefine`:
`An error occurred while processing the XML schemas: ''SchemaLocation' must successfully resolve if <redefine> contains any child other than <annotation>.'.`
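For illustration only, a hypothetical schema fragment that depends on this resolution requirement might look like the following; `BaseOrder.xsd` and `OrderType` are made-up names, and the `schemaLocation` must resolve to a schema your integration account can reach:

```xml
<!-- Hypothetical redefining schema: extends OrderType from BaseOrder.xsd -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:redefine schemaLocation="BaseOrder.xsd">
    <xs:complexType name="OrderType">
      <xs:complexContent>
        <xs:extension base="OrderType">
          <xs:sequence>
            <xs:element name="Priority" type="xs:string"/>
          </xs:sequence>
        </xs:extension>
      </xs:complexContent>
    </xs:complexType>
  </xs:redefine>
</xs:schema>
```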
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
## Add schemas
-### [Consumption](#tab/consumption)
+* If you're working with a Consumption workflow, you must add your schema to a linked integration account.
+
+* If you're working with a Standard workflow, you have the following options:
+
+ * Add your schema to a linked integration account. You can share the schema and integration account across multiple Standard logic app resources and their child workflows.
+
+ * Add your schema directly to your logic app resource. However, you can only share that schema across child workflows in the same logic app resource.
+
+<a name="add-schema-integration-account"></a>
+
+### Add schema to integration account
1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
-1. In the main Azure search box, enter `integration accounts`, and select **Integration accounts**.
+1. In the main Azure search box, enter **integration accounts**, and select **Integration accounts**.
1. Select the integration account where you want to add your schema.
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
1. On the **Schemas** pane toolbar, select **Add**.
-Based on your schema (.xsd) file's size, follow the steps for uploading a schema that's either [up to 2 MB](#smaller-schema) or [more than 2 MB, up to 8 MB](#larger-schema).
+For Consumption workflows, based on your schema's file size, now follow the steps for uploading a schema that's either [up to 2 MB](#smaller-schema) or [more than 2 MB, up to 8 MB](#larger-schema).
<a name="smaller-schema"></a>
Based on your schema (.xsd) file's size, follow the steps for uploading a schema
### Add schemas more than 2 MB
-To add larger schemas, you can upload your schema to an Azure blob container in your Azure storage account. Your steps for adding schemas differ based whether your blob container has public read access. So first, check whether or not your blob container has public read access by following these steps: [Set public access level for blob container](../vs-azure-tools-storage-explorer-blobs.md#set-the-public-access-level-for-a-blob-container)
+To add larger schemas for Consumption workflows to use, you can upload your schema to an Azure blob container in your Azure storage account. Your steps for adding schemas differ based on whether your blob container has public read access. So first, check whether your blob container has public read access by following these steps: [Set public access level for blob container](../vs-azure-tools-storage-explorer-blobs.md#set-the-public-access-level-for-a-blob-container).
#### Check container access level
After your schema finishes uploading, the schema appears in the **Schemas** list
After your schema finishes uploading, the schema appears in the **Schemas** list. On your integration account's **Overview** page, under **Artifacts**, your uploaded schema appears.
-### [Standard](#tab/standard)
++
+### Add schema to Standard logic app resource
+
+These steps apply only if you want to add a schema directly to your Standard logic app resource. Otherwise, [add the schema to your integration account](#add-schema-integration-account).
#### Azure portal
After your schema finishes uploading, the schema appears in the **Schemas** list
1. In the **Schemas** folder, add your schema.
--
<a name="edit-schema"></a>
++
## Edit a schema

To update an existing schema, you have to upload a new schema file that has the changes you want. However, you can first download the existing schema for editing.
To update an existing schema, you have to upload a new schema file that has the
### [Standard](#tab/standard)
+The following steps apply only if you're updating a schema that you added to your logic app resource. Otherwise, follow the Consumption steps for updating a schema in your integration account.
+
1. In the [Azure portal](https://portal.azure.com), open your logic app resource, if not already open.
1. On your logic app resource's menu, under **Settings**, select **Schemas**.
To update an existing schema, you have to upload a new schema file that has the
### [Standard](#tab/standard)
+The following steps apply only if you're deleting a schema that you added to your logic app resource. Otherwise, follow the Consumption steps for deleting a schema from your integration account.
+
1. In the [Azure portal](https://portal.azure.com), open your logic app resource, if not already open.
1. On your logic app resource's menu, under **Settings**, select **Schemas**.
migrate Concepts Migration Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md
Title: Support matrix for web apps migration
description: Support matrix for web apps migration -+ Last updated 06/22/2022
migrate Troubleshoot Webapps Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-webapps-migration.md
Title: Troubleshoot web apps migration issues
description: Troubleshoot web apps migration issues -+ Last updated 6/22/2022
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
Select VMs for migration.
17. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**.
    - You can exclude disks from replication.
- - If you exclude disks, won't be present on the Azure VM after migration.
+ - If you exclude disks, they won't be present on the Azure VM after migration.
:::image type="content" source="./media/tutorial-migrate-vmware-agent/disks-inline.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-vmware-agent/disks-expanded.png":::
+ - You can exclude disks if the mobility agent is already installed on that server. [Learn more](../site-recovery/exclude-disks-replication.md#exclude-limitations).
18. In **Tags**, choose to add tags to your Virtual machines, Disks, and NICs.
migrate Tutorial Migrate Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-webapps.md
Title: Modernize ASP.NET web apps to Azure App Service code
description: At-scale migration of ASP.NET web apps to Azure App Service using Azure Migrate -+ Last updated 08/09/2022
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
The estimated time for the recovery of the server depends on several factors:
- The network bandwidth if the restore is to a different region
- The number of concurrent restore requests being processed in the target region
- The presence of a primary key in the tables in the database. For faster recovery, consider adding a primary key for all the tables in your database.
-
- **Will modifying session level database variables impact restoration?**
+
+ - **Will modifying session level database variables impact restoration?**
+ Modifying session-level variables and running DML statements in a MySQL client session can impact the PITR (point-in-time restore) operation, because these modifications aren't recorded in the binary log that's used for the backup and restore operation. For example, [foreign_key_checks](http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_foreign_key_checks) is one such session-level variable. If it's disabled to run a DML statement that violates a foreign key constraint, the PITR operation fails. The only workaround in such a scenario is to select a PITR time earlier than the time at which [foreign_key_checks](http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_foreign_key_checks) was disabled. We recommend that you not modify any session variables, to ensure a successful PITR operation.

## Next steps
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Japan West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Korea Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| Korea Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Korea South | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| North Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
sentinel Basic Logs Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/basic-logs-use-cases.md
The primary log sources used for detection often contain the metadata and contex
Event log data in Basic Logs can't be used as the primary log source for security incidents and alerts. But Basic Log event data is useful to correlate and draw conclusions when you investigate an incident or perform threat hunting.
-This topic highlights log sources to consider configuring for Basic Logs when they're stored in Log Analytics tables.
+This topic highlights log sources to consider configuring for Basic Logs when they're stored in Log Analytics tables. Before configuring tables as Basic Logs, [compare log data plans (preview)](../azure-monitor/logs/log-analytics-workspace-overview.md#log-data-plans-preview).
> [!IMPORTANT] > The Basic Logs feature is currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
sentinel Ci Cd Custom Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-content.md
+
+ Title: Manage custom content with repository connections
+
+description: This article explains custom Sentinel content like GitHub or Azure DevOps repositories that can utilize source control features.
++++ Last updated : 8/24/2022+
+#Customer intent: As a SOC collaborator or MSSP analyst, I want to manage dynamic Sentinel workspace content based on source control repositories for continuous integration and continuous delivery (CI/CD). Specifically as an MSSP content manager, I want to deploy one solution to many customer workspaces and still be able to tailor custom content for their environments.
++
+# Manage custom content with Microsoft Sentinel repositories (public preview)
+
+The Microsoft Sentinel repositories feature provides a central experience for the deployment and management of Sentinel content as code. Repositories allow connections to an external source control for continuous integration / continuous delivery (CI/CD). This automation removes the burden of manual processes to update and deploy your custom content across workspaces. For more information on Sentinel content, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md).
+
+> [!IMPORTANT]
+> The Microsoft Sentinel **Repositories** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Plan your repository connection
+
+Microsoft Sentinel repositories require careful planning to ensure you have the proper permissions from your workspace to the repository (repo) you want connected. Only connections to GitHub and Azure DevOps repositories with contributor access are currently supported. The Microsoft Sentinel application needs authorization to your repo, with Actions enabled for GitHub or Pipelines enabled for Azure DevOps.
+
+Repositories require an **Owner** role in the resource group that contains your Microsoft Sentinel workspace. This role is required to create the connection between Microsoft Sentinel and your source control repository. If you're unable to use the Owner role in your environment, you can instead use the combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection.
+
+If you find content in a public repository where you *aren't* a contributor, you'll need to get that content into your repo first. You can do that with an import, fork, or clone of the content to a repo where you're a contributor. Then you can connect your repo to your Sentinel workspace. For more information, see [Deploy custom content from your repository](ci-cd.md).
++
+### Validate your content
+
+The following Microsoft Sentinel content types can be deployed through a repository connection:
+- Analytics rules
+- Automation rules
+- Hunting queries
+- Parsers
+- Playbooks
+- Workbooks
+
+> [!TIP]
+> This article does *not* describe how to create these types of content from scratch. For more information, see the relevant [Microsoft Sentinel GitHub wiki](https://github.com/Azure/Azure-Sentinel/wiki#get-started) for each content type.
+>
+
+ Repository content must be stored as [ARM templates](/azure/azure-resource-manager/templates/overview). The repositories deployment pipeline doesn't validate the content beyond confirming that it's in the correct JSON format.
+
+The first step to validate your content is to test it within Microsoft Sentinel. You can also apply the [Microsoft Sentinel GitHub validation process](https://github.com/Azure/Azure-Sentinel/wiki#test-your-contribution) and tools to complement your validation process.
+
+A sample repository is available with ARM templates for each of the content types listed above. The repo also demonstrates how to use advanced features of repository connections. For more information, see [Sentinel CICD repositories sample](https://github.com/SentinelCICD/RepositoriesSampleContent).
++++
+### Maximum connections and deployments
+
+- Each Microsoft Sentinel workspace is currently limited to **five repository connections**.
+
+- Each Azure resource group is limited to **800 deployments** in its deployment history. If you have a high volume of ARM template deployments in your resource group(s), you may see the `Deployment QuotaExceeded` error. For more information, see [DeploymentQuotaExceeded](/azure/azure-resource-manager/templates/deployment-quota-exceeded) in the Azure Resource Manager templates documentation.
+++
+## Improve performance with smart deployments
+
+The **smart deployments** feature is a back-end capability that improves performance by actively tracking modifications made to the content files of a connected repository. It uses a CSV file within the '.sentinel' folder in your repository to audit each commit. The workflow avoids redeploying content that hasn't been modified since the last deployment. This process improves your deployment performance and prevents tampering with unchanged content in your workspace, such as resetting dynamic schedules of your analytics rules.
+
+Smart deployments are enabled by default on newly created connections. If you prefer all source control content to be deployed every time a deployment is triggered, regardless of whether that content was modified or not, you can modify your workflow to disable smart deployments. For more information, see [Customize the deployment workflow](ci-cd.md#customize-the-deployment-workflow).
+
+ > [!NOTE]
+ > This capability was launched in public preview on April 20th, 2022. Connections created prior to launch would need to be updated or recreated for smart deployments to be turned on.
+ >
++
+## Consider deployment customization options
+
+Even with smart deployments enabled, the default behavior is to push all the updated content from the connected repository branch. If the default configuration for your content deployment from GitHub or Azure DevOps doesn't meet all your requirements, you can modify the experience to fit your needs.
+
+For example, you may want to:
+- turn off smart deployments
+- configure different deployment triggers
+- deploy content only from a specific root folder for a given workspace
+- schedule the workflow to run periodically
+- combine different workflow events together
+- prioritize content to be evaluated before the entire repo is enumerated for valid ARM templates
+
+For more details on how to implement these customizations, see [Customize the deployment workflow](ci-cd.md#customize-the-deployment-workflow).
++
+## Next steps
+
+Get more examples and step-by-step instructions on deploying Microsoft Sentinel repositories.
+
+- [Sentinel CICD sample repository](https://github.com/SentinelCICD/RepositoriesSampleContent)
+- [Deploy custom content from your repository](ci-cd.md)
+- [Automate Sentinel integration with DevOps](/azure/architecture/example-scenario/devops/automate-sentinel-integration#microsoft-sentinel-repositories)
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
Title: Deploy custom content from your repository
-description: This article describes how to create connections with a GitHub or Azure DevOps repository where you can save your custom content and deploy it to Microsoft Sentinel.
-
+description: This article describes how to create connections with a GitHub or Azure DevOps repository where you can manage your custom content and deploy it to Microsoft Sentinel.
+ Previously updated : 11/09/2021-- Last updated : 8/25/2022+
+#Customer intent: As a SOC collaborator or MSSP analyst, I want to know how to connect my source control repositories for continuous integration and continuous delivery (CI/CD). Specifically as an MSSP content manager, I want to know how to deploy one solution to many customer workspaces and still be able to tailor custom content for their environments.
# Deploy custom content from your repository (Public preview)
-> [!IMPORTANT]
->
-> The Microsoft Sentinel **Repositories** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-Microsoft Sentinel *content* is Security Information and Event Management (SIEM) or Security Orchestration, Automation, and Response (SOAR) resources that assist customers with ingesting, monitoring, alerting, hunting, automating response, and more in Microsoft Sentinel. For example, Microsoft Sentinel content includes data connectors, parsers, workbooks, and analytics rules. For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md).
-
-You can use the out-of-the-box (built-in) content provided in the Microsoft Sentinel Content hub and customize it for your own needs, or create your own custom content from scratch.
+When creating custom content, you can manage it from your own Microsoft Sentinel workspaces, or an external source control repository. This article describes how to create and manage connections between Microsoft Sentinel and GitHub or Azure DevOps repositories. Managing your content in an external repository allows you to make updates to that content outside of Microsoft Sentinel, and have it automatically deployed to your workspaces. For more information, see [Update custom content with repository connections](ci-cd-custom-content.md).
-When creating custom content, you can store and manage it in your own Microsoft Sentinel workspaces, or an external source control repository, including GitHub and Azure DevOps repositories. This article describes how to create and manage the connections between Microsoft Sentinel and external source control repositories. Managing your content in an external repository allows you to make updates to that content outside of Microsoft Sentinel, and have it automatically deployed to your workspaces.
-
-> [!TIP]
-> This article does *not* describe how to create specific types of content from scratch. For more information, see the relevant [Microsoft Sentinel GitHub wiki](https://github.com/Azure/Azure-Sentinel/wiki#get-started) for each content type.
+> [!IMPORTANT]
>
+> The Microsoft Sentinel **Repositories** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Prerequisites and scope
-Before connecting your Microsoft Sentinel workspace to an external source control repository, make sure that you have:
--- Access to a GitHub or Azure DevOps repository, with any custom content files you want to deploy to your workspaces, in relevant [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/index.yml).
+Microsoft Sentinel currently supports connections to GitHub and Azure DevOps repositories. Before connecting your Microsoft Sentinel workspace to your source control repository, make sure that you have:
- Microsoft Sentinel currently supports connections only with GitHub and Azure DevOps repositories.
+- An **Owner** role in the resource group that contains your Microsoft Sentinel workspace *or* a combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection
+- Contributor access to your GitHub or Azure DevOps repository
+- Actions enabled for GitHub and Pipelines enabled for Azure DevOps
+- Custom content files you want to deploy to your workspaces, in relevant [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/index.yml)
-- An **Owner** role in the resource group that contains your Microsoft Sentinel workspace. This role is required to create the connection between Microsoft Sentinel and your source control repository. If you are unable to use the Owner role in your environment, you can instead use the combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection.-
-### Maximum connections and deployments
--- Each Microsoft Sentinel workspace is currently limited to **five connections**.--- Each Azure resource group is limited to **800 deployments** in its deployment history. If you have a high volume of ARM template deployments in your resource group(s), you may see an `Deployment QuotaExceeded` error. For more information, see [DeploymentQuotaExceeded](/azure/azure-resource-manager/templates/deployment-quota-exceeded) in the Azure Resource Manager templates documentation.-
-### Validate your content
-
-Deploying content to Microsoft Sentinel via a repository connection does not validate that content other than verifying that the data is in the correct ARM template format.
-
-We recommend that you validate your content templates using your regular validation process. You can leverage the [Microsoft Sentinel GitHub validation process](https://github.com/Azure/Azure-Sentinel/wiki#test-your-contribution) and tools to set up your own validation process.
+For more information, see [Validate your content](ci-cd-custom-content.md#validate-your-content).
## Connect a repository
After the deployment is complete:
- The connection details on the **Repositories** page are updated with the link to the connection's deployment logs and the status and time of the last deployment. For example: :::image type="content" source="media/ci-cd/deployment-logs-status.png" alt-text="Screenshot of a GitHub repository connection's deployment logs.":::
-
-### Improve deployment performance with smart deployments
-
-Smart deployments is a back-end capability that improves the performance of deployments by actively tracking modifications made to the content files of a connected repository/branch using a csv file within the '.sentinel' folder in your repository. By actively tracking modifications made to content in each commit, your Microsoft Sentinel repositories will avoid redeploying any content that has not been modified since the last deployment into your Microsoft Sentinel workspace(s). This will improve your deployment performance and avoid unintentionally tampering with unchanged content in your workspace, such as resetting the dynamic schedules of your analytics rules by redeploying them.
-
-While smart deployments is enabled by default on newly created connections, we understand that some customers would prefer all their source control content to be deployed every time a deployment is triggered, regardless of whether that content was modified or not. You can modify your workflow to disable smart deployments to have your connection deploy all content regardless of its modification status. See [Customize the deployment workflow](#customize-the-deployment-workflow) for more details.
-
- > [!NOTE]
- > This capability was launched in public preview on April 20th, 2022. Connections created prior to launch would need to be updated or recreated for smart deployments to be turned on.
- >
### Customize the deployment workflow
-The default content deployment deploys all of the relevant custom content from the connected repository branch whenever a push is made to anything in that branch.
-
-If the default configuration for your content deployment from GitHub or Azure DevOps does not meet all your requirements, you can modify the experience to fit your needs.
-
-For example, you may want to turn off smart deployments, configure different deployment triggers, or deploy content only from a specific root folder.
+The default workflow only deploys content that has been modified since the last deployment based on commits to the repository. But you may want to turn off smart deployments or perform other customizations. For example, you can configure different deployment triggers, or deploy content exclusively from a specific root folder.
Select one of the following tabs depending on your connection type:
Select one of the following tabs depending on your connection type:
1. In GitHub, go to your repository and find your workflow in the `.github/workflows` directory.
- The workflow name is shown in the first line of the workflow file, and has the following default naming convention: `Deploy Content to <workspace-name> [<deployment-id>]`.
+ The workflow file is the YAML file starting with `sentinel-deploy-xxxxx.yml`. Open that file; the workflow name appears in the first line and uses the following default naming convention: `Deploy Content to <workspace-name> [<deployment-id>]`.
For example: `name: Deploy Content to repositories-demo [xxxxx-dk5d-3s94-4829-9xvnc7391v83a]`
Select one of the following tabs depending on your connection type:
For more information, see the [GitHub documentation](https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows#configuring-workflow-events) on configuring workflow events.
+ - **To disable smart deployments**:
+ The smart deployments behavior is separate from the deployment trigger discussed above. Navigate to the `jobs` section of your workflow and switch the `smartDeployment` default value from `true` to `false`. Once this change is committed, smart deployments is turned off, and all future deployments for this connection redeploy all of the repository's relevant content files to the connected workspace(s).
+ - **To modify the deployment path**: In the default configuration shown above for the `on` section, the wildcards (`**`) in the first line in the `paths` section indicate that the entire branch is in the path for the deployment triggers.
- This default configuration means that a deployment workflow is triggered any time that content is pushed to any part of the branch.
+ This default configuration means that a deployment workflow is triggered anytime that content is pushed to any part of the branch.
Later on in the file, the `jobs` section includes the following default configuration: `directory: '${{ github.workspace }}'`. This line indicates that the entire GitHub branch is in the path for the content deployment, without filtering for any folder paths.
Select one of the following tabs depending on your connection type:
... directory: '${{ github.workspace }}/SentinelContent' ```
- - **To disable smart deployments**:
- Navigate to the `jobs` section of your workflow. Switch the `smartDeployment` default value (typically on line 33) from `true` to `false`. This will turn off the smart deployments functionality and all future deployments for this connection will redeploy all the repository's relevant content files to the connected workspace(s) once this change is committed.
For more information, see the [GitHub documentation](https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#onpushpull_requestpaths) on GitHub Actions and editing GitHub workflows.
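   Taken together, these GitHub customizations touch only a few lines of the generated workflow file. The following YAML is a hypothetical, simplified fragment showing a scoped trigger path, a scoped deployment directory, and smart deployments turned off. The folder name `SentinelContent`, the branch `main`, and the job and step layout are illustrative; your generated workflow contains additional steps and its own values, so edit the corresponding lines in place rather than replacing the file.

   ```yaml
   # Illustrative fragment only -- edit the matching lines in your generated
   # sentinel-deploy workflow file; don't replace the whole file.
   on:
     push:
       branches:
         - main
       paths:
         - 'SentinelContent/**'   # default is '**' (the entire branch)

   jobs:
     main:
       steps:
         - name: Deploy content to the connected workspace
           with:
             directory: '${{ github.workspace }}/SentinelContent'  # default: '${{ github.workspace }}'
             smartDeployment: 'false'   # default 'true'; 'false' redeploys all content
   ```

   After you commit the change to the branch, the next push that matches the trigger path runs the updated workflow.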
For more information, see the [GitHub documentation](https://docs.github.com/en/
Modify this trigger to any available Azure DevOps Triggers, such as to scheduling or pull request triggers. For more information, see the [Azure DevOps trigger documentation](/azure/devops/pipelines/yaml-schema).
+ - **To disable smart deployments**:
+ The smart deployments behavior is separate from the deployment trigger discussed above. Navigate to the `ScriptArguments` section of your pipeline and switch the `smartDeployment` default value from `true` to `false`. Once this change is committed, smart deployments is turned off, and all future deployments for this connection redeploy all of the repository's relevant content files to the connected workspace(s).
+ - **To modify the deployment path**: The default configuration for the `trigger` section has the following code, which indicates that the `main` branch is in the path for the deployment triggers:
For more information, see the [GitHub documentation](https://docs.github.com/en/
- main ```
- This default configuration means that a deployment pipeline is triggered any time that content is pushed to any part of the `main` branch.
+ This default configuration means that a deployment pipeline is triggered anytime that content is pushed to any part of the `main` branch.
To deploy content from a specific folder path only, add the folder name to the `include` section, for the trigger, and the `steps` section, for the deployment path, below as needed.
For more information, see the [GitHub documentation](https://docs.github.com/en/
azureSubscription: `Sentinel_Deploy_ServiceConnection_0000000000000000` workingDirectory: `SentinelContent` ```
-
- - **To disable smart deployments**:
- Navigate to the `ScriptArguments` section of your pipeline. Switch the `smartDeployment` default value (typically on line 33) from `true` to `false`. This will turn off the smart deployments functionality and all future deployments for this connection will redeploy all the repository's relevant content files to the connected workspace(s) once this change is committed.
For more information, see the [Azure DevOps documentation](/azure/devops/pipelines/yaml-schema) on the Azure DevOps YAML schema.
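   For Azure DevOps, the equivalent edits land in the pipeline YAML. The following is a hypothetical fragment; the task name, the `SentinelContent` folder, and the argument syntax are illustrative, so keep the generated values from your own pipeline file and change only the trigger path, working directory, and `smartDeployment` value.

   ```yaml
   # Illustrative fragment only -- adjust the generated pipeline; don't replace it.
   trigger:
     branches:
       include:
         - main
     paths:
       include:
         - SentinelContent        # deploy only when this folder changes

   steps:
     - task: AzurePowerShell@5    # the generated deployment task
       inputs:
         azureSubscription: 'Sentinel_Deploy_ServiceConnection_0000000000000000'
         workingDirectory: 'SentinelContent'
         # In the generated ScriptArguments, switch smartDeployment from true to false:
         ScriptArguments: '... -smartDeployment "false" ...'
   ```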
For more information, see the [Azure DevOps documentation](/azure/devops/pipelin
> [!IMPORTANT] > In both GitHub and Azure DevOps, make sure that you keep the trigger path and deployment path directories consistent. >
-## Edit or delete content in your repository
-After you've successfully created a connection to your source control repository, anytime that content in that repository is modified or added, the deployment workflow runs again and deploys all content in the repository to all connected Microsoft Sentinel workspaces.
+## Edit content
+
+After you've successfully created a connection to your source control repository, anytime content in that repository is modified or added, the modified content is deployed to all connected Microsoft Sentinel workspaces.
We recommend that you edit any content stored in a connected repository *only* in the repository, and not in Microsoft Sentinel. For example, to make changes to your analytics rules, do so directly in GitHub or Azure DevOps. If you have edited the content in Microsoft Sentinel, make sure to export it to your source control repository to prevent your changes from being overwritten the next time the repository content is deployed to your workspace.
-If you are deleting content, make sure to delete it from both your repository and the Azure portal. Deleting content from your repository does not delete it from your Microsoft Sentinel workspace.
+## Delete content
+
+Deleting content from your repository doesn't delete it from your Microsoft Sentinel workspace. If you want to remove content that was deployed through repositories, make sure to delete it from both your repository and Microsoft Sentinel. For example, set a filter for the content based on source name to make it easier to identify content that came from repositories.
+ ## Remove a repository connection
sentinel Sentinel Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-service-limits.md
Last updated 04/27/2022 + # Service limits for Microsoft Sentinel
This article lists the most common service limits you might encounter as you use
[!INCLUDE [sentinel-service-limits](../../includes/sentinel-limits-notebooks.md)]
-## Threat intelligence limits
+## Repositories limits
-## Watchlist limits
+## Threat intelligence limits
## User and Entity Behavior Analytics (UEBA) limits [!INCLUDE [sentinel-service-limits](../../includes/sentinel-limits-ueba.md)]
+## Watchlist limits
++ ## Next steps - [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md)
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
To configure directory and file level permissions through Windows File explorer,
To configure directory and file-level permissions, follow the instructions in [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
-## Configure the clients
+## Configure the clients to retrieve Kerberos tickets
Enable the Azure AD Kerberos functionality on the client machine(s) you want to mount/use Azure File shares from. You must do this on every client on which Azure Files will be used. Use one of the following three methods: -- Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the session host: [Kerberos/CloudKerberosTicketRetrievalEnabled](/windows/client-management/mdm/policy-csp-kerberos#kerberos-cloudkerberosticketretrievalenabled)-- Configure this Group policy on the session host: `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon`-- Create the following registry value on the session host: `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1`
+- Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the client(s): [Kerberos/CloudKerberosTicketRetrievalEnabled](/windows/client-management/mdm/policy-csp-kerberos#kerberos-cloudkerberosticketretrievalenabled)
+- Configure this group policy on the client(s): `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon`
+- Create the following registry value on the client(s): `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1`
## Disable Azure AD authentication on your storage account
stream-analytics Quick Create Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-visual-studio-code.md
This quickstart shows you how to create and run an Azure Stream Analytics job by
> [!NOTE]
> Visual Studio and Visual Studio Code tools don't support jobs in the China East, China North, Germany Central, and Germany NorthEast regions.
-## Before you begin
+## Prerequisites
+Here are the prerequisites for the quickstart:
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-
-* Sign in to the [Azure portal](https://portal.azure.com/).
-
-* Install [Visual Studio Code](https://code.visualstudio.com/).
+- Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
+- [Visual Studio Code](https://code.visualstudio.com/).
## Install the Azure Stream Analytics Tools extension 1. Open Visual Studio Code.
+2. From **Extensions** on the left pane, search for **Azure Stream Analytics** and select **Install** on the **Azure Stream Analytics Tools** extension.
-2. From **Extensions** on the left pane, search for **Stream Analytics** and select **Install** on the **Azure Stream Analytics Tools** extension.
-
+ :::image type="content" source="./media/quick-create-visual-studio-code/install-extension.png" alt-text="Screenshot showing the Extensions page of Visual Studio Code with an option to install Stream Analytics extension.":::
3. After the extension is installed, verify that **Azure Stream Analytics Tools** is visible in **Enabled Extensions**.
- ![Azure Stream Analytics Tools under enabled extensions in Visual Studio Code](./media/quick-create-visual-studio-code/enabled-extensions.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/enabled-extensions.png" alt-text="Screenshot showing the Azure Stream Analytics extension in the list of enabled extensions.":::
## Activate the Azure Stream Analytics Tools extension 1. Select the **Azure** icon on the Visual Studio Code activity bar. Under **Stream Analytics** on the side bar, select **Sign in to Azure**.
- ![Sign in to Azure in Visual Studio Code](./media/quick-create-visual-studio-code/azure-sign-in.png)
-2. When you're signed in, your Azure account name appears on the status bar in the lower-left corner of the Visual Studio Code window.
+ :::image type="content" source="./media/quick-create-visual-studio-code/azure-sign-in.png" alt-text="Screenshot showing how to sign in to Azure.":::
+2. You may need to select a subscription as shown in the following image:
-> [!NOTE]
-> The Azure Stream Analytics Tools extension will automatically sign you in the next time if you don't sign out.
-> If your account has two-factor authentication, we recommend that you use phone authentication rather than using a PIN.
-> If you have issues with listing resources, signing out and signing in again usually helps. To sign out, enter the command `Azure: Sign Out`.
+ :::image type="content" source="./media/quick-create-visual-studio-code/select-subscription.png" alt-text="Screenshot showing the selection of an Azure subscription.":::
+3. Keep Visual Studio Code open.
+
+ > [!NOTE]
+ > The Azure Stream Analytics Tools extension will automatically sign you in the next time if you don't sign out.
+ > If your account has two-factor authentication, we recommend that you use phone authentication rather than using a PIN.
+ > If you have issues with listing resources, signing out and signing in again usually helps. To sign out, enter the command `Azure: Sign Out`.
## Prepare the input data Before you define the Stream Analytics job, you should prepare the data that's later configured as the job input. To prepare the input data that the job requires, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com/).- 2. Select **Create a resource** > **Internet of Things** > **IoT Hub**.
+ :::image type="content" source="./media/quick-create-visual-studio-code/create-resource-iot-hub-menu.png" alt-text="Screenshot showing the Create Resource page for IoT Hub.":::
3. In the **IoT Hub** pane, enter the following information: |**Setting** |**Suggested value** |**Description** |
Before you define the Stream Analytics job, you should prepare the data that's l
|Region | \<Select the region that is closest to your users\> | Select a geographic location where you can host your IoT hub. Use the location that's closest to your users. | |IoT Hub Name | MyASAIoTHub | Select a name for your IoT hub. |
- ![Create an IoT hub](./media/quick-create-visual-studio-code/create-iot-hub.png)
-
-4. Select **Next: Set size and scale**.
-
-5. Make a selection for **Pricing and scale tier**. For this quickstart, select the **F1 - Free** tier if it's still available on your subscription. If the free tier is unavailable, choose the lowest tier available. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
-
- ![Size and scale your IoT hub](./media/quick-create-visual-studio-code/iot-hub-size-and-scale.png)
-
+ :::image type="content" source="./media/quick-create-visual-studio-code/create-iot-hub.png" alt-text="Screenshot showing the IoT Hub page for creation.":::
+4. Select **Next: Networking** at the bottom of the page to move to the **Networking** page of the creation wizard.
+1. On the **Networking** page, select **Next: Management** at the bottom of the page.
+1. On the **Management** page, for **Pricing and scale tier**, select **F1: Free tier**, if it's still available on your subscription. If the free tier is unavailable, choose the lowest pricing tier available. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
6. Select **Review + create**. Review your IoT hub information and select **Create**. Your IoT hub might take a few minutes to create. You can monitor the progress on the **Notifications** pane.
+1. After the creation is successful, select **Go to resource** to navigate to the **IoT Hub** page for your IoT hub.
+1. On the **IoT Hub** page, select **Devices** under **Device management** on the left menu, and then select **Add Device** as shown in the image.
-7. On your IoT hub's navigation menu, select **Add** under **IoT devices**. Add an ID for **Device ID**, and select **Save**.
+ :::image type="content" source="./media/quick-create-visual-studio-code/add-device-menu.png" alt-text="Screenshot showing the Add Device button on the Devices page.":::
+1. Enter an ID for **Device ID**, and select **Save**.
- ![Add a device to your IoT hub](./media/quick-create-visual-studio-code/add-device-iot-hub.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/add-device-iot-hub.png" alt-text="Screenshot showing the Add Device page.":::
+1. After the device is saved, select the device from the list. If it doesn't show up in the list, move to another page and switch back to the **Devices** page.
-8. After the device is created, open the device from the **IoT devices** list. Copy the string in **Connection string (primary key)** and save it to a notepad to use later.
+ :::image type="content" source="./media/quick-create-visual-studio-code/select-device.png" alt-text="Screenshot showing the selection of the device on the Devices page.":::
+8. Copy the string in **Connection string (primary key)** and save it to a notepad to use later.
- ![Copy IoT hub device connection string](./media/quick-create-visual-studio-code/save-iot-device-connection-string.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/save-iot-device-connection-string.png" alt-text="Screenshot showing the primary connection string of the device you created.":::
## Run the IoT simulator- 1. Open the [Raspberry Pi Azure IoT Online Simulator](https://azure-samples.github.io/raspberry-pi-web-simulator/) in a new browser tab or window.- 2. Replace the placeholder in line 15 with the IoT hub device connection string that you saved earlier.- 3. Select **Run**. The output should show the sensor data and messages that are being sent to your IoT hub.
- ![Raspberry Pi Azure IoT Online Simulator with output](./media/quick-create-visual-studio-code/ras-pi-connection-string.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/ras-pi-connection-string.png" lightbox="./media/quick-create-visual-studio-code/ras-pi-connection-string.png" alt-text="Screenshot showing the Raspberry Pi Azure IoT Online Simulator with output.":::
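If you prefer a script over the browser simulator, the telemetry the job expects has the fields shown in the job output later in this article (`messageId`, `deviceId`, `temperature`, `humidity`). The following minimal Python sketch builds a payload with that shape; the value ranges are assumptions based on the sample output, and sending the payload to your IoT hub (for example, with a device SDK) is left out.

```python
import json
import random

def make_telemetry(message_id: int, device_id: str = "Raspberry Pi Web Client") -> str:
    """Build a JSON payload with the fields the simulator sends:
    messageId, deviceId, temperature, humidity."""
    return json.dumps({
        "messageId": message_id,
        "deviceId": device_id,
        # Assumed ranges, roughly matching the sample output shown later:
        "temperature": 20 + random.random() * 15,
        "humidity": 60 + random.random() * 20,
    })

print(make_telemetry(1))
```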
## Create blob storage 1. From the upper-left corner of the Azure portal, select **Create a resource** > **Storage** > **Storage account**.
-2. In the **Create storage account** pane, enter a storage account name, location, and resource group. Choose the same location and resource group as the IoT hub that you created. Then select **Review + create** to create the account.
-
- ![Create storage account](./media/quick-create-visual-studio-code/create-storage-account.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/create-storage-account-menu.png" alt-text="Screenshot showing the Create storage account menu.":::
+2. In the **Create storage account** pane, enter a storage account name, location, and resource group. Choose the same location and resource group as the IoT hub that you created. Then select **Review**, review your settings, and select **Create** to create the storage account. After the resource is created, select **Go to resource** to navigate to the **Storage account** page.
-3. After your storage account is created, select the **Blobs** tile on the **Overview** pane.
+ :::image type="content" source="./media/quick-create-visual-studio-code/create-storage-account.png" alt-text="Screenshot showing the Create storage account page.":::
+3. On the **Storage account** page, select **Containers** on the left menu, and then select **+ Container** on the command bar.
- ![Storage account overview](./media/quick-create-visual-studio-code/blob-storage.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/add-blob-container-menu.png" alt-text="Screenshot showing the Containers page.":::
+4. From the **New container** page, provide a **name** for your container, leave **Public access level** as **Private (no anonymous access)**, and select **OK**.
-4. From the **Blob Service** page, select **Container** and provide a name for your container, such as **container1**. Leave **Public access level** as **Private (no anonymous access)** and select **OK**.
-
- ![Create a blob container](./media/quick-create-visual-studio-code/create-blob-container.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/create-blob-container.png" alt-text="Screenshot showing the creation of a blob container page.":::
## Create a Stream Analytics project
-1. In Visual Studio Code, select **Ctrl+Shift+P** to open the command palette. Then enter **ASA** and select **ASA: Create New Project**.
+1. In Visual Studio Code, select **View** -> **Command palette** on the menu to open the command palette.
- ![Create a new project](./media/quick-create-visual-studio-code/create-new-project.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/view-command-palette.png" alt-text="Screenshot showing the View -> Command palette menu.":::
+1. Then enter **ASA** and select **ASA: Create New Project**.
+ :::image type="content" source="./media/quick-create-visual-studio-code/create-new-project.png" alt-text="Screenshot showing the selection of ASA: Create New Project in the command palette.":::
2. Enter your project name, like **myASAproj**, and select a folder for your project.
- ![Create a project name](./media/quick-create-visual-studio-code/create-project-name.png)
-
-3. The new project is added to your workspace. A Stream Analytics project consists of three folders: **Inputs**, **Outputs**, and **Functions**. It also has the query script **(*.asaql)**, a **JobConfig.json** file, and an **asaproj.json** configuration file.
+ :::image type="content" source="./media/quick-create-visual-studio-code/create-project-name.png" alt-text="Screenshot showing entering an ASA project name.":::
+3. The new project is added to your workspace. A Stream Analytics project consists of three folders: **Inputs**, **Outputs**, and **Functions**. It also has the query script **(*.asaql)**, a **JobConfig.json** file, and an **asaproj.json** configuration file. You may need to select the **Explorer** button on the left menu of Visual Studio Code to see the explorer.
The **asaproj.json** configuration file contains the inputs, outputs, and job configuration file information needed for submitting the Stream Analytics job to Azure.
- ![Stream Analytics project files in Visual Studio Code](./media/quick-create-visual-studio-code/asa-project-files.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/asa-project-files.png" alt-text="Screenshot showing Stream Analytics project files in Visual Studio Code.":::
-> [!Note]
-> When you're adding inputs and outputs from the command palette, the corresponding paths are added to **asaproj.json** automatically. If you add or remove inputs or outputs on disk directly, you need to manually add or remove them from **asaproj.json**. You can choose to put the inputs and outputs in one place and then reference them in different jobs by specifying the paths in each **asaproj.json** file.
+ > [!Note]
+ > When you're adding inputs and outputs from the command palette, the corresponding paths are added to **asaproj.json** automatically. If you add or remove inputs or outputs on disk directly, you need to manually add or remove them from **asaproj.json**. You can choose to put the inputs and outputs in one place and then reference them in different jobs by specifying the paths in each **asaproj.json** file.
## Define the transformation query 1. Open **myASAproj.asaql** from your project folder.- 2. Add the following query: ```sql
Before you define the Stream Analytics job, you should prepare the data that's l
WHERE Temperature > 27 ```
+ :::image type="content" source="./media/quick-create-visual-studio-code/query.png" lightbox="./media/quick-create-visual-studio-code/query.png" alt-text="Screenshot showing the transformation query.":::
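The query above passes through only events whose temperature exceeds 27. The filter can be sketched in Python over dicts shaped like the simulator's telemetry (the sketch uses the lowercase field names from the payload; Stream Analytics itself matches `Temperature` case-insensitively):

```python
def filter_hot_events(events):
    """Mirror the job's WHERE Temperature > 27 filter on dicts shaped
    like the simulator's telemetry."""
    return [e for e in events if e["temperature"] > 27]

sample = [
    {"messageId": 1, "temperature": 26.5},
    {"messageId": 2, "temperature": 28.1},
]
print(filter_hot_events(sample))  # only the 28.1-degree event passes
```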
+ ## Define a live input 1. Right-click the **Inputs** folder in your Stream Analytics project. Then select **ASA: Add Input** from the context menu.
- ![Add input from the Inputs folder](./media/quick-create-visual-studio-code/add-input-from-inputs-folder.png)
-
- Or select **Ctrl+Shift+P** to open the command palette and enter **ASA: Add Input**.
+ :::image type="content" source="./media/quick-create-visual-studio-code/add-input-from-inputs-folder.png" lightbox="./media/quick-create-visual-studio-code/add-input-from-inputs-folder.png" alt-text="Screenshot showing the ASA: Add input menu in Visual Studio Code.":::
- ![Add Stream Analytics input in Visual Studio Code](./media/quick-create-visual-studio-code/add-input.png)
+ Or select **Ctrl+Shift+P** (or **View** -> **Command palette** menu) to open the command palette and enter **ASA: Add Input**.
+ :::image type="content" source="./media/quick-create-visual-studio-code/add-input.png" lightbox="./media/quick-create-visual-studio-code/add-input.png" alt-text="Screenshot showing the ASA: Add input in the command palette of Visual Studio Code.":::
2. Choose **IoT Hub** for the input type.-
- ![Select IoT hub as the input option](./media/quick-create-visual-studio-code/iot-hub.png)
-
+
+ :::image type="content" source="./media/quick-create-visual-studio-code/iot-hub.png" lightbox="./media/quick-create-visual-studio-code/iot-hub.png" alt-text="Screenshot showing the selection of your IoT hub in VS Code command palette.":::
3. If you added the input from the command palette, choose the Stream Analytics query script that will use the input. It should be automatically populated with the file path to **myASAproj.asaql**.
- ![Select a Stream Analytics script in Visual Studio Code](./media/quick-create-visual-studio-code/asa-script.png)
-
-4. Choose **Select from your Azure Subscriptions** from the drop-down menu.
-
- ![Select from subscriptions](./media/quick-create-visual-studio-code/add-input-select-subscription.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/asa-script.png" lightbox="./media/quick-create-visual-studio-code/asa-script.png" alt-text="Screenshot showing the selection of your Stream Analytics script in VS Code command palette.":::
+4. Choose **Select from your Azure Subscriptions** from the drop-down menu, and then press **ENTER**.
+ :::image type="content" source="./media/quick-create-visual-studio-code/add-input-select-subscription.png" lightbox="./media/quick-create-visual-studio-code/add-input-select-subscription.png" alt-text="Screenshot showing the selection of your Azure subscription in VS Code command palette.":::
5. Edit the newly generated **IoTHub1.json** file with the following values. Keep default values for fields not mentioned here. |Setting|Suggested value|Description|
Before you define the Stream Analytics job, you should prepare the data that's l
You can use the CodeLens feature to help you enter a string, select from a drop-down list, or change the text directly in the file. The following screenshot shows **Select from your Subscriptions** as an example. The credentials are auto-listed and saved in local credential manager.
- ![Configure input in Visual Studio Code](./media/quick-create-visual-studio-code/configure-input.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/configure-input.png" lightbox="./media/quick-create-visual-studio-code/configure-input.png" alt-text="Screenshot showing the launch of CodeLens feature in VS Code.":::
+
+ After you select a subscription, **select an IoT hub** if you have multiple hubs in that subscription.
+
+ :::image type="content" source="./media/quick-create-visual-studio-code/select-iot-hub.png" lightbox="./media/quick-create-visual-studio-code/select-iot-hub.png" alt-text="Screenshot showing the selection of your IoT hub in VS Code.":::
- ![Configure input value in Visual Studio Code](./media/quick-create-visual-studio-code/configure-input-value.png)
+ > [!IMPORTANT]
+ > Make sure that the name of the input is **Input**, as the query expects it.
## Preview input Select **Preview data** in **IoTHub1.json** from the top line. Some input data will be fetched from the IoT hub and shown in the preview window. This process might take a while.
- ![Preview live input](./media/quick-create-visual-studio-code/preview-live-input.png)
## Define an output 1. Select **Ctrl+Shift+P** to open the command palette. Then, enter **ASA: Add Output**.-
- ![Add Stream Analytics output in Visual Studio Code](./media/quick-create-visual-studio-code/add-output.png)
-
-2. Choose **Blob Storage** for the sink type.
-
+2. Choose **Data Lake Storage Gen2/Blob Storage** for the sink type.
3. Choose the Stream Analytics query script that will use this input.- 4. Enter the output file name as **BlobStorage**.-
-5. Edit **BlobStorage** by using the following values. Keep default values for fields not mentioned here. Use the CodeLens feature to help you select from a drop-down list or enter a string.
+5. Edit **BlobStorage** by using the following values. Keep default values for fields not mentioned here. Use the **CodeLens** feature to help you select an Azure subscription and storage account name from a drop-down list or manually enter values.
|Setting|Suggested value|Description| |-||--| |Name|Output| Enter a name to identify the job's output.|
- |Storage Account|asaquickstartstorage|Choose or enter the name of your storage account. Storage account names are automatically detected if they're created in the same subscription.|
+ |Storage Account| &lt;Name of your storage account&gt; |Choose or enter the name of your storage account. Storage account names are automatically detected if they're created in the same subscription.|
|Container|container1|Select the existing container that you created in your storage account.| |Path Pattern|output|Enter the name of a file path to be created within the container.|
- ![Configure output in Visual Studio Code](./media/quick-create-visual-studio-code/configure-output.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/configure-output.png" lightbox="./media/quick-create-visual-studio-code/configure-output.png" alt-text="Screenshot showing the configuration of output for the Stream Analytics job.":::
-## Compile the script
+ > [!IMPORTANT]
+ > Make sure that the name of the output is **Output**, as the query expects it.
-Script compilation checks syntax and generates the Azure Resource Manager templates for automatic deployment.
+## Compile the script
-There are two ways to trigger script compilation:
+Script compilation checks syntax and generates the Azure Resource Manager templates for automatic deployment. There are two ways to trigger script compilation:
- Select the script from the workspace and then compile from the command palette.
- ![Use the Visual Studio Code command palette to compile the script](./media/quick-create-visual-studio-code/compile-script1.png)
-
+ :::image type="content" source="./media/quick-create-visual-studio-code/compile-script-1.png" lightbox="./media/quick-create-visual-studio-code/compile-script-1.png" alt-text="Screenshot showing the compilation of script option from the command palette.":::
- Right-click the script and select **ASA: Compile Script**.
- ![Right-click the Stream Analytics script to compile](./media/quick-create-visual-studio-code/compile-script2.png)
+ :::image type="content" source="./media/quick-create-visual-studio-code/compile-script-2.png" lightbox="./media/quick-create-visual-studio-code/compile-script-2.png" alt-text="Screenshot showing the compilation of script option from the Stream Analytics explorer in VS Code.":::
-After compilation, you can find the two generated Azure Resource Manager templates in the **Deploy** folder of your project. These two files are used for automatic deployment.
+After compilation, you can see results in the **Output** window. You can find the two generated Azure Resource Manager templates in the **Deploy** subfolder in your project folder. These two files are used for automatic deployment.
-![Stream Analytics deployment templates in File Explorer](./media/quick-create-visual-studio-code/deployment-templates.png)
## Submit a Stream Analytics job to Azure 1. In the script editor window of your query script, select **Submit to Azure**.
- ![Select from your subscriptions text in the script editor](./media/quick-create-visual-studio-code/submit-job.png)
- 2. Select your subscription from the pop-up list.- 3. Choose **Select a job**. Then choose **Create New Job**.- 4. Enter your job name, **myASAjob**. Then follow the instructions to choose the resource group and location.-
-5. Select **Submit to Azure**. You can find the logs in the output window.
-
-6. When your job is created, you can see it in **Stream Analytics Explorer**.
-
- ![Listed job in Stream Analytics Explorer](./media/quick-create-visual-studio-code/list-job.png)
+5. Select **Publish to Azure**. You can find the logs in the output window.
+6. When your job is created, you can see it in **Stream Analytics Explorer**. See the image in the next section.
## Start the Stream Analytics job and check output 1. Open **Stream Analytics Explorer** in Visual Studio Code and find your job, **myASAJob**.
+2. Select **Start** on the **Cloud view** page, or right-click the job name in Stream Analytics explorer and select **Start** from the context menu.
-2. Right-click the job name. Then, select **Start** from the context menu.
-
- ![Start the Stream Analytics job in Visual Studio Code](./media/quick-create-visual-studio-code/start-asa-job-vs-code.png)
-
-3. Choose **Now** in the pop-up window to start the job.
-
+ :::image type="content" source="./media/quick-create-visual-studio-code/start-asa-job-vs-code.png" lightbox="./media/quick-create-visual-studio-code/start-asa-job-vs-code.png" alt-text="Screenshot showing the Start job button in the Cloud view page.":::
4. Note that the job status has changed to **Running**. Right-click the job name and select **Open Job View in Portal** to see the input and output event metrics. This action might take a few minutes.- 5. To view the results, open the blob storage in the Visual Studio Code extension or in the Azure portal.
+ :::image type="content" source="./media/quick-create-visual-studio-code/output-files.png" lightbox="./media/quick-create-visual-studio-code/output-files.png" alt-text="Screenshot showing the output file in the Blob container.":::
+
+ Download and open the file to see the output.
+
+ ```json
+ {"messageId":11,"deviceId":"Raspberry Pi Web Client","temperature":28.165519323167562,"humidity":76.875393581654379,"EventProcessedUtcTime":"2022-09-01T22:53:58.1015921Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:52:57.6250000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:52:57.6290000Z"}}
+ {"messageId":14,"deviceId":"Raspberry Pi Web Client","temperature":29.014941877871451,"humidity":64.93477299527828,"EventProcessedUtcTime":"2022-09-01T22:53:58.2421545Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:53:03.6100000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:53:03.6140000Z"}}
+ {"messageId":17,"deviceId":"Raspberry Pi Web Client","temperature":28.032846241745975,"humidity":66.146114343897338,"EventProcessedUtcTime":"2022-09-01T22:53:58.2421545Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:53:19.5960000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:53:19.5830000Z"}}
+ {"messageId":18,"deviceId":"Raspberry Pi Web Client","temperature":30.176185593576143,"humidity":72.697359909427419,"EventProcessedUtcTime":"2022-09-01T22:53:58.2421545Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:53:21.6120000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:53:21.6140000Z"}}
+ {"messageId":20,"deviceId":"Raspberry Pi Web Client","temperature":27.851894248213021,"humidity":71.610229530268214,"EventProcessedUtcTime":"2022-09-01T22:53:58.2421545Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:53:25.6270000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:53:25.6140000Z"}}
+ {"messageId":21,"deviceId":"Raspberry Pi Web Client","temperature":27.718624694772238,"humidity":66.540445035685153,"EventProcessedUtcTime":"2022-09-01T22:53:58.2421545Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:53:48.0820000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:53:48.0830000Z"}}
+ {"messageId":22,"deviceId":"Raspberry Pi Web Client","temperature":27.7849054424326,"humidity":74.300662748167085,"EventProcessedUtcTime":"2022-09-01T22:54:09.3393532Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:54:09.2390000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:54:09.2400000Z"}}
+ {"messageId":28,"deviceId":"Raspberry Pi Web Client","temperature":30.839892925680324,"humidity":76.237611741451786,"EventProcessedUtcTime":"2022-09-01T22:54:47.8053253Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:54:47.6180000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:54:47.6150000Z"}}
+ {"messageId":29,"deviceId":"Raspberry Pi Web Client","temperature":30.561040300759053,"humidity":78.3845172058103,"EventProcessedUtcTime":"2022-09-01T22:54:49.8070489Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:54:49.6030000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:54:49.5990000Z"}}
+ {"messageId":31,"deviceId":"Raspberry Pi Web Client","temperature":28.163585438418679,"humidity":60.0511571297096,"EventProcessedUtcTime":"2022-09-01T22:55:25.1528729Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:55:24.9050000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:55:24.9120000Z"}}
+ {"messageId":32,"deviceId":"Raspberry Pi Web Client","temperature":31.00503387156985,"humidity":78.68821066044552,"EventProcessedUtcTime":"2022-09-01T22:55:43.2652127Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:55:43.0480000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:55:43.0520000Z"}}
+ ```
+
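Each record in the output file shown above is a self-contained JSON object on its own line. The following Python sketch (assuming this line-delimited JSON layout) shows how you could parse such records and confirm they satisfy the job's temperature filter; the `sample` string is abbreviated from the first record above:

```python
import json

# One line from the job output shown above (line-delimited JSON), abbreviated.
sample = (
    '{"messageId":21,"deviceId":"Raspberry Pi Web Client",'
    '"temperature":27.718624694772238,"humidity":66.540445035685153}'
)

def parse_output(lines):
    """Parse line-delimited JSON records emitted by the Stream Analytics job."""
    return [json.loads(line) for line in lines if line.strip()]

records = parse_output([sample])
# Every record in the output should satisfy the job's filter (temperature > 27).
assert all(r["temperature"] > 27 for r in records)
print(records[0]["messageId"])  # -> 21
```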
+ ## Clean up resources When they're no longer needed, delete the resource group, the streaming job, and all related resources. Deleting the job avoids billing the streaming units that the job consumes.
stream-analytics Stream Analytics Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-portal.md
Title: Quickstart - Create a Stream Analytics job by using the Azure portal
description: This quickstart shows you how to get started by creating a Stream Analytics job, configuring inputs and outputs, and defining a query. Previously updated : 03/30/2021 Last updated : 09/02/2022
# Quickstart: Create a Stream Analytics job by using the Azure portal-
-This quickstart shows you how to get started with creating a Stream Analytics job. In this quickstart, you define a Stream Analytics job that reads real-time streaming data and filters messages with a temperature greater than 27. Your Stream Analytics job will read data from IoT Hub, transform the data, and write the data back to a container in blob storage. The input data used in this quickstart is generated by a Raspberry Pi online simulator.
+This quickstart shows you how to create a Stream Analytics job in the Azure portal. In this quickstart, you define a Stream Analytics job that reads real-time streaming data and filters messages with a temperature greater than 27. Your Stream Analytics job will read data from IoT Hub, transform the data, and write the output data to a container in blob storage. The input data used in this quickstart is generated by a Raspberry Pi online simulator.
## Before you begin-
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-
-* Sign in to the [Azure portal](https://portal.azure.com/).
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
## Prepare the input data- Before defining the Stream Analytics job, you should prepare the input data. The real-time sensor data is ingested to IoT Hub, which is later configured as the job input. To prepare the input data required by the job, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com/).-
-2. Select **Create a resource** > **Internet of Things** > **IoT Hub**.
-
-3. In the **IoT Hub** pane, enter the following information:
-
- |**Setting** |**Suggested value** |**Description** |
- ||||
- |Subscription | \<Your subscription\> | Select the Azure subscription that you want to use. |
- |Resource group | asaquickstart-resourcegroup | Select **Create New** and enter a new resource-group name for your account. |
- |Region | \<Select the region that is closest to your users\> | Select a geographic location where you can host your IoT Hub. Use the location that's closest to your users. |
- |IoT Hub Name | MyASAIoTHub | Select a name for your IoT Hub. |
-
- ![Create an IoT Hub](./media/stream-analytics-quick-create-portal/create-iot-hub.png)
-
-4. Select **Next: Set size and scale**.
-
-5. Choose your **Pricing and scale tier**. For this quickstart, select the **F1 - Free** tier if it's still available on your subscription. For more information, see [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
-
- ![Size and scale your IoT Hub](./media/stream-analytics-quick-create-portal/iot-hub-size-and-scale.png)
-
+2. Select **Create a resource**.
+
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/create-resource-menu.png" alt-text="Screenshot showing the Create a resource menu.":::
+1. On the **Create a resource** page, select **Internet of Things** > **IoT Hub**.
+
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/iot-hub-menu.png" alt-text="Screenshot showing the IoT Hub menu on the Create a resource page.":::
+1. On the **IoT Hub** page, follow these steps:
+ 1. For **Subscription**, select your Azure subscription.
+ 1. For **Resource group**, select an existing resource group or create a new resource group.
+ 1. For **IoT hub name**, enter a name for your IoT hub.
+ 1. For **Region**, select the region that's closest to you.
+ 1. Select **Next: Networking** at the bottom of the page.
+
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/create-iot-hub.png" alt-text="Screenshot showing the IoT Hub page for creation.":::
+4. On the **Networking** page, select **Next: Management** at the bottom of the page.
+1. On the **Management** page, for **Pricing and scale tier**, select **F1: Free tier** if it's still available on your subscription. For more information, see [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
6. Select **Review + create**. Review your IoT Hub information and click **Create**. Your IoT Hub might take a few minutes to create. You can monitor the progress in the **Notifications** pane.
+1. After the resource (IoT hub) is created, select **Go to resource** to navigate to the IoT Hub page.
+1. On the **IoT Hub** page, select **Devices** on the left menu, and then select **+ Add device**.
-7. In your IoT Hub navigation menu, click **Add** under **IoT devices**. Add a **Device ID** and click **Save**.
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/add-device-button.png" lightbox="./media/stream-analytics-quick-create-portal/add-device-button.png" alt-text="Screenshot showing the Add device button on the Devices page.":::
+7. Enter a **Device ID** and click **Save**.
- ![Add a device to your IoT Hub](./media/stream-analytics-quick-create-portal/add-device-iot-hub.png)
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/add-device-iot-hub.png" alt-text="Screenshot showing the Create a device page.":::
+8. Once the device is created, you should see it in the **IoT devices** list. Select the **Refresh** button on the page if you don't see it.
-8. Once the device is created, open the device from the **IoT devices** list. Copy the **Connection string -- primary key** and save it to a notepad to use later.
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/device-list.png" lightbox="./media/stream-analytics-quick-create-portal/device-list.png" alt-text="Screenshot showing the list of devices.":::
+1. Select your device from the list.
+1. On the device page, select the copy button next to **Connection string - primary key**, and save it to a notepad to use later.
- ![Copy IoT Hub device connection string](./media/stream-analytics-quick-create-portal/save-iot-device-connection-string.png)
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/save-iot-device-connection-string.png" lightbox="./media/stream-analytics-quick-create-portal/save-iot-device-connection-string.png" alt-text="Screenshot showing the copy button next to device connection string.":::
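The connection string you just copied is what a device client uses to authenticate to IoT Hub. The simulator you'll run later sends telemetry messages shaped like the records shown elsewhere in this article; the following is a minimal sketch of such a payload (field names taken from the sample output, values randomized here), not the simulator's actual code:

```python
import json
import random

def build_telemetry(message_id, device_id="Raspberry Pi Web Client"):
    """Build a telemetry message in the shape the online simulator sends to IoT Hub."""
    return {
        "messageId": message_id,
        "deviceId": device_id,
        # Simulated sensor readings; the online simulator produces similar ranges.
        "temperature": 20 + random.random() * 15,
        "humidity": 60 + random.random() * 20,
    }

msg = build_telemetry(1)
payload = json.dumps(msg)  # the JSON string a device would send as the message body
print(payload)
```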
## Create blob storage 1. From the upper left-hand corner of the Azure portal, select **Create a resource** > **Storage** > **Storage account**.
+2. In the **Create storage account** pane, enter a storage account name, location, and resource group. Choose the same location and resource group as the IoT Hub you created. Then click **Review** at the bottom of the page.
-2. In the **Create storage account** pane, enter a storage account name, location, and resource group. Choose the same location and resource group as the IoT Hub you created. Then click **Review + create** to create the account.
-
- ![Create storage account](./media/stream-analytics-quick-create-portal/create-storage-account.png)
-
-3. Once your storage account is created, select the **Blobs** tile on the **Overview** pane.
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/create-storage-account.png" alt-text="Screenshot showing the Create a storage account page.":::
+3. On the **Review** page, review your settings, and select **Create** to create the account.
+1. After the resource is created, select **Go to resource** to navigate to the **Storage account** page.
+1. On the **Storage account** page, select **Containers** on the left menu, and then select **+ Container**.
- ![Storage account overview](./media/stream-analytics-quick-create-portal/blob-storage.png)
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/add-container-menu.png" alt-text="Screenshot showing the Add container menu on the Containers page.":::
+4. On the **New container** page, provide a name for your container, such as *container1*, and select **Create**.
-4. From the **Blob Service** page, select **Container** and provide a name for your container, such as *container1*. Leave the **Public access level** as **Private (no anonymous access)** and select **OK**.
-
- ![Create blob container](./media/stream-analytics-quick-create-portal/create-blob-container.png)
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/create-blob-container.png" alt-text="Screenshot showing the **Add container** page.":::
## Create a Stream Analytics job
-1. Sign in to the Azure portal.
-
+1. On a separate tab of the same browser window or in a separate browser window, sign in to the [Azure portal](https://portal.azure.com).
2. Select **Create a resource** in the upper left-hand corner of the Azure portal. - 3. Select **Analytics** > **Stream Analytics job** from the results list. -
-4. Fill out the Stream Analytics job page with the following information:
-
- |**Setting** |**Suggested value** |**Description** |
- ||||
- |Job name | MyASAJob | Enter a name to identify your Stream Analytics job. Stream Analytics job name can contain alphanumeric characters, hyphens, and underscores only and it must be between 3 and 63 characters long. |
- |Subscription | \<Your subscription\> | Select the Azure subscription that you want to use for this job. |
- |Resource group | asaquickstart-resourcegroup | Select the same resource group as your IoT Hub. |
- |Location | \<Select the region that is closest to your users\> | Select geographic location where you can host your Stream Analytics job. Use the location that's closest to your users for better performance and to reduce the data transfer cost. |
- |Streaming units | 1 | Streaming units represent the computing resources that are required to execute a job. By default, this value is set to 1. To learn about scaling streaming units, refer to [understanding and adjusting streaming units](stream-analytics-streaming-unit-consumption.md) article. |
- |Hosting environment | Cloud | Stream Analytics jobs can be deployed to cloud or edge. Cloud allows you to deploy to Azure Cloud, and Edge allows you to deploy to an IoT Edge device. |
-
- ![Create job](./media/stream-analytics-quick-create-portal/create-asa-job.png)
-
-5. Check the **Pin to dashboard** box to place your job on your dashboard and then select **Create**.
-
-6. You should see a *Deployment in progress...* notification displayed in the top right of your browser window.
+1. On the **New Stream Analytics job** page, follow these steps:
+ 1. For **Subscription**, select your Azure subscription.
+ 1. For **Resource group**, select the same resource that you used earlier in this quickstart.
+ 1. For **Name**, enter a name for the job. A Stream Analytics job name can contain only alphanumeric characters, hyphens, and underscores, and it must be between 3 and 63 characters long.
+ 1. For **Hosting environment**, confirm that **Cloud** is selected. Stream Analytics jobs can be deployed to cloud or edge. Cloud allows you to deploy to Azure cloud, and the **Edge** option allows you to deploy to an IoT Edge device.
+ 1. For **Streaming units**, select **1**. Streaming units represent the computing resources that are required to execute a job. To learn about scaling streaming units, see the [understanding and adjusting streaming units](stream-analytics-streaming-unit-consumption.md) article.
+ 1. Select **Review + create** at the bottom of the page.
+
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/create-asa-job.png" alt-text="Screenshot showing the **New Stream Analytics job** page.":::
+5. On the **Review + create** page, review your settings, and select **Create** to create the Stream Analytics job.
+1. On the deployment page, select **Go to resource** to navigate to the **Stream Analytics job** page.
## Configure job input
-In this section, you will configure an IoT Hub device input to the Stream Analytics job. Use the IoT Hub you created in the previous section of the quickstart.
+In this section, you'll configure an IoT Hub device input to the Stream Analytics job. Use the IoT Hub you created in the previous section of the quickstart.
-1. Navigate to your Stream Analytics job.
+1. On the **Stream Analytics job** page, select **Inputs** under **Job topology** on the left menu.
+1. On the **Inputs** page, select **Add stream input** > **IoT Hub**.
-2. Select **Inputs** > **Add Stream input** > **IoT Hub**.
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/add-input-menu.png" lightbox="./media/stream-analytics-quick-create-portal/add-input-menu.png" alt-text="Screenshot showing the **Inputs** page with **Add stream input** > **IoT Hub** menu selected.":::
+3. On the **IoT Hub** page, follow these steps:
+ 1. For **Input alias**, enter **IoTHubInput**.
+ 1. For **Subscription**, select the subscription that has the IoT hub you created earlier. This quickstart assumes that you've created the IoT hub in the same subscription.
+ 1. For **IoT Hub**, select your IoT hub.
+ 1. Select **Save** to save the input settings for the Stream Analytics job.
+
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/configure-asa-input.png" alt-text="Screenshot showing the New input page to enter input IoT hub information.":::
-3. Fill out the **IoT Hub** page with the following values:
-
- |**Setting** |**Suggested value** |**Description** |
- ||||
- |Input alias | IoTHubInput | Enter a name to identify the job's input. |
- |Subscription | \<Your subscription\> | Select the Azure subscription that has the storage account you created. The storage account can be in the same or in a different subscription. This example assumes that you have created storage account in the same subscription. |
- |IoT Hub | MyASAIoTHub | Enter the name of the IoT Hub you created in the previous section. |
-
-4. Leave other options to default values and select **Save** to save the settings.
-
- ![Configure input data](./media/stream-analytics-quick-create-portal/configure-asa-input.png)
-
## Configure job output
-1. Navigate to the Stream Analytics job that you created earlier.
+1. Now, select **Outputs** under **Job topology** on the left menu.
+1. On the **Outputs** page, select **Add** > **Blob storage/ADLS Gen2**.
-2. Select **Outputs** > **Add** > **Blob storage**.
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/add-output-menu.png" alt-text="Screenshot showing the **Outputs** page with **Add** -> **Blob storage** option selected on the menu.":::
+1. On the **New output** page for **Blob storage/ADLS Gen2**, follow these steps:
+ 1. For **Output alias**, enter **BlobOutput**.
+ 1. For **Subscription**, select the subscription that has the Azure storage account you created earlier. This quickstart assumes that you've created the Storage account in the same subscription.
+ 1. For **Storage account**, select your Storage account.
+ 1. For **Container**, select your blob container if it isn't already selected.
+ 1. For **Authentication mode**, select **Connection string**.
+ 1. Select **Save** at the bottom of the page to save the output settings.
-3. Fill out the **Blob storage** page with the following values:
-
- |**Setting** |**Suggested value** |**Description** |
- ||||
- |Output alias | BlobOutput | Enter a name to identify the job's output. |
- |Subscription | \<Your subscription\> | Select the Azure subscription that has the storage account you created. The storage account can be in the same or in a different subscription. This example assumes that you have created storage account in the same subscription. |
- |Storage account | asaquickstartstorage | Choose or enter the name of the storage account. Storage account names are automatically detected if they are created in the same subscription. |
- |Container | container1 | Select the existing container that you created in your storage account. |
-
-4. Leave other options to default values and select **Save** to save the settings.
-
- ![Configure output](./media/stream-analytics-quick-create-portal/configure-asa-output.png)
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/configure-asa-output.png" alt-text="Screenshot showing the **New output** page to enter input Azure storage account information.":::
## Define the transformation query
-1. Navigate to the Stream Analytics job that you created earlier.
-
-2. Select **Query** and update the query as follows:
+1. Now, select **Query** under **Job topology** on the left menu.
+1. Enter the following query into the query window. In this example, the query reads the data from IoT Hub and copies it to a new file in the blob.
```sql
SELECT *
INTO BlobOutput
FROM IoTHubInput
WHERE Temperature > 27
```
+1. Select **Save query** on the toolbar.
-3. In this example, the query reads the data from IoT Hub and copies it to a new file in the blob. Select **Save**.
-
- ![Configure job transformation](./media/stream-analytics-quick-create-portal/add-asa-query.png)
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/add-asa-query.png" lightbox="./media/stream-analytics-quick-create-portal/add-asa-query.png" alt-text="Screenshot showing the **Query** page with the sample query.":::
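The query passes through only events whose temperature exceeds 27. As a rough illustration of that predicate's behavior (a Python sketch of the filter, not how Stream Analytics actually executes the query):

```python
def filter_events(events, threshold=27):
    """Equivalent of the query's predicate: WHERE Temperature > threshold."""
    return [e for e in events if e["temperature"] > threshold]

events = [
    {"messageId": 1, "temperature": 26.5, "humidity": 70.1},
    {"messageId": 2, "temperature": 27.8, "humidity": 66.5},
    {"messageId": 3, "temperature": 30.8, "humidity": 76.2},
]
passed = filter_events(events)
print([e["messageId"] for e in passed])  # -> [2, 3]
```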
## Run the IoT simulator 1. Open the [Raspberry Pi Azure IoT Online Simulator](https://azure-samples.github.io/raspberry-pi-web-simulator/).- 2. Replace the placeholder in Line 15 with the Azure IoT Hub device connection string you saved in a previous section.- 3. Click **Run**. The output should show the sensor data and messages that are being sent to your IoT Hub.
- ![Raspberry Pi Azure IoT Online Simulator](./media/stream-analytics-quick-create-portal/ras-pi-connection-string.png)
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/ras-pi-connection-string.png" lightbox="./media/stream-analytics-quick-create-portal/ras-pi-connection-string.png" alt-text="Screenshot showing the **Raspberry Pi Azure IoT Online Simulator** page with the connection string placeholder.":::
## Start the Stream Analytics job and check the output
-1. Return to the job overview page and select **Start**.
+1. Return to the job overview page in the Azure portal, and select **Start**.
+
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/start-job-menu.png" alt-text="Screenshot showing the **Overview** page with **Start** button selected.":::
+1. On the **Start job** page, confirm that **Now** is selected for **Job output start time**, and then select **Start** at the bottom of the page.
-2. Under **Start job**, select **Now**, for the **Job output start time** field. Then, select **Start** to start your job.
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/start-job-page.png" alt-text="Screenshot showing the **Start job** page.":::
+1. After a few minutes, in the portal, find the storage account and the container that you've configured as output for the job. You can now see the output file in the container. The job takes a few minutes to start for the first time; after it's started, it continues to run as the data arrives.
-3. After few minutes, in the portal, find the storage account & the container that you have configured as output for the job. You can now see the output file in the container. The job takes a few minutes to start for the first time, after it is started, it will continue to run as the data arrives.
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/output-file-blob-container.png" lightbox="./media/stream-analytics-quick-create-portal/output-file-blob-container.png" alt-text="Screenshot showing the **Container** page with the sample output file.":::
+1. Select the file, and then on the **Blob** page, select **Edit** to view contents in the file.
- ![Transformed output](./media/stream-analytics-quick-create-portal/check-asa-results.png)
+ :::image type="content" source="./media/stream-analytics-quick-create-portal/check-asa-results.png" lightbox="./media/stream-analytics-quick-create-portal/check-asa-results.png" alt-text="Screenshot showing the sample output file.":::
## Clean up resources
-When no longer needed, delete the resource group, the Stream Analytics job, and all related resources. Deleting the job avoids billing the streaming units consumed by the job. If you're planning to use the job in future, you can stop it and restart it later when you need. If you are not going to continue to use this job, delete all resources created by this quickstart by using the following steps:
+When no longer needed, delete the resource group, the Stream Analytics job, and all related resources. Deleting the job avoids billing the streaming units consumed by the job. If you're planning to use the job in the future, you can stop it and restart it later when you need it. If you aren't going to continue to use this job, delete all resources created by this quickstart by using the following steps:
1. From the left-hand menu in the Azure portal, select **Resource groups** and then select the name of the resource you created.
synapse-analytics Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/database.md
The Azure Synapse Analytics workspace enables you to create two types of databas
- **Lake databases** where you can define tables on top of lake data using Apache Spark notebooks, database templates, or Microsoft Dataverse (previously Common Data Service). These tables will be available for querying with the T-SQL (Transact-SQL) language through the serverless SQL pool. - **SQL databases** where you can define your own databases and tables directly using the serverless SQL pools. You can use the T-SQL CREATE DATABASE and CREATE EXTERNAL TABLE statements to define the objects and add additional SQL views, procedures, and inline table-valued functions on top of the tables.
+![Diagram that shows Lake and SQL databases that are created on top of Data Lake files.](../media/metadata/shared-databases.png)
+ This article focuses on [lake databases](../database-designer/concepts-lake-database.md) in a serverless SQL pool in Azure Synapse Analytics. Azure Synapse Analytics allows you to create lake databases and tables using Spark or database designer, and then analyze data in the lake databases using the serverless SQL pool. The lake databases and the tables (Parquet or CSV-backed) that are created on the Apache Spark pools, [database templates](../database-designer/concepts-database-templates.md), or Dataverse are automatically available for querying with the serverless SQL pool engine. Lake databases and tables that are modified become available in the serverless SQL pool after a short delay; changes made in Spark or the database designer take some time to appear in serverless.
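Lake databases are queried through the workspace's built-in serverless SQL endpoint, which follows the `<workspace>-ondemand.sql.azuresynapse.net` pattern. A small sketch deriving that endpoint from a workspace name (the workspace name here is hypothetical, for illustration only):

```python
def serverless_endpoint(workspace_name: str) -> str:
    """Return the built-in (serverless) SQL endpoint for a Synapse workspace."""
    return f"{workspace_name}-ondemand.sql.azuresynapse.net"

# Hypothetical workspace name used for illustration only.
print(serverless_endpoint("contoso-synapse"))
# -> contoso-synapse-ondemand.sql.azuresynapse.net
```

You would point a SQL client (for example, SSMS or `pyodbc`) at this endpoint to query lake database tables with T-SQL.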
To manage Spark-created lake databases, you can use Apache Spark pools or [Database designer](../database-designer/create-empty-lake-database.md). For example, create or delete a lake database through a Spark pool job. You can't create a lake database or the objects in the lake databases using the serverless SQL pool.
To manage Spark created lake databases, you can use Apache Spark pools or [Database designer](../database-designer/create-empty-lake-database.md). For example, create or delete a lake database through a Spark pool job. You can't create a lake database or the objects in the lake databases using the serverless SQL pool.
-The Spark default database is available in the serverless SQL pool context as a lake database called `default`.
+The Spark `default` database is available in the serverless SQL pool context as a lake database called `default`.
>[!NOTE] > You cannot create a lake and a SQL database in the serverless SQL pool with the same name.
synapse-analytics How To Grant Workspace Managed Identity Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions.md
Title: Grant permissions to managed identity in Synapse workspace
-description: An article that explains how to configure permissions for managed identity in Azure Synapse workspace.
+ Title: Grant permissions to managed identity in Synapse workspace
+description: An article that explains how to configure permissions for managed identity in Azure Synapse workspace.
--- Previously updated : 04/15/2020 Last updated : 09/01/2022+++ - # Grant permissions to workspace managed identity This article teaches you how to grant permissions to the managed identity in an Azure Synapse workspace. Permissions, in turn, allow access to dedicated SQL pools in the workspace and the ADLS Gen2 storage account through the Azure portal.
->[!NOTE]
->This workspace managed identity will be referred to as managed identity through the rest of this document.
+> [!NOTE]
+> This workspace managed identity will be referred to as managed identity through the rest of this document.
## Grant the managed identity permissions to ADLS Gen2 storage account
-An ADLS Gen2 storage account is required to create an Azure Synapse workspace. To successfully launch Spark pools in Azure Synapse workspace, the Azure Synapse managed identity needs the *Storage Blob Data Contributor* role on this storage account . Pipeline orchestration in Azure Synapse also benefits from this role.
+An ADLS Gen2 storage account is required to create an Azure Synapse workspace. To successfully launch Spark pools in Azure Synapse workspace, the Azure Synapse managed identity needs the *Storage Blob Data Contributor* role on this storage account. Pipeline orchestration in Azure Synapse also benefits from this role.
### Grant permissions to managed identity during workspace creation Azure Synapse will attempt to grant the Storage Blob Data Contributor role to the managed identity after you create the Azure Synapse workspace using Azure portal. You provide the ADLS Gen2 storage account details in the **Basics** tab.
-![Basics tab in workspace creation flow](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-1.png)
Choose the ADLS Gen2 storage account and filesystem in **Account name** and **File system name**.
-![Providing an ADLS Gen2 storage account details](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-2.png)
If the workspace creator is also **Owner** of the ADLS Gen2 storage account, then Azure Synapse will assign the *Storage Blob Data Contributor* role to the managed identity. You'll see the following message below the storage account details that you entered.
-![Successful Storage Blob Data Contributor assignment](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-3.png)
If the workspace creator isn't the owner of the ADLS Gen2 storage account, then Azure Synapse doesn't assign the *Storage Blob Data Contributor* role to the managed identity. The message appearing below the storage account details notifies the workspace creator that they don't have sufficient permissions to grant the *Storage Blob Data Contributor* role to the managed identity.
-![Unsuccessful Storage Blob Data Contributor assignment](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-4.png)
As the message states, you can't create Spark pools unless the *Storage Blob Data Contributor* is assigned to the managed identity.
During workspace creation, if you don't assign the *Storage Blob Data contributo
#### Step 1: Navigate to the ADLS Gen2 storage account in Azure portal
-In Azure portal, open the ADLS Gen2 storage account and select **Overview** from the left navigation. You'll only need to assign The *Storage Blob Data Contributor* role at the container or filesystem level. Select **Containers**.
-![ADLS Gen2 storage account overview](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-5.png)
+In the Azure portal, open the ADLS Gen2 storage account and select **Overview** from the left navigation. You'll only need to assign the *Storage Blob Data Contributor* role at the container or filesystem level. Select **Containers**.
+
#### Step 2: Select the container The managed identity should have data access to the container (file system) that was provided when the workspace was created. You can find this container or file system in Azure portal. Open the Azure Synapse workspace in Azure portal and select the **Overview** tab from the left navigation.
-![ADLS Gen2 storage account container](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-7.png)
Select that same container or file system to grant the *Storage Blob Data Contributor* role to the managed identity.
-![Screenshot that shows the container or file system that you should select.](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-6.png)
+ #### Step 3: Open Access control and add role assignment
1. Select **Add** > **Add role assignment** to open the Add role assignment page. 1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
+    | Setting | Value |
+    | --- | --- |
+    | Role | Storage Blob Data Contributor |
+    | Assign access to | MANAGEDIDENTITY |
+    | Members | managed identity name |
- > [!NOTE]
+ > [!NOTE]
> The managed identity name is also the workspace name.
- ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the add role assignment page in the Azure portal.":::
1. Select **Save** to add the role assignment.
Select **Access Control(IAM)** and then select **Role assignments**.
-![Verify role assignment](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-14.png)
+
+You should see your managed identity listed under the **Storage Blob Data Contributor** section with the *Storage Blob Data Contributor* role assigned to it.
+
+#### Alternative to Storage Blob Data Contributor role
+
+Instead of granting yourself a Storage Blob Data Contributor role, you can also grant more granular permissions on a subset of files.
+
+All users who need access to some data in this container must also have EXECUTE permission on all parent folders up to the root (the container).
+
+Learn more about how to [set ACLs in Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-explorer-acl.md).
-You should see your managed identity listed under the **Storage Blob Data Contributor** section with the *Storage Blob Data Contributor* role assigned to it.
-![ADLS Gen2 storage account container selection](./media/how-to-grant-workspace-managed-identity-permissions/configure-workspace-managed-identity-15.png)
> [!NOTE]
> Execute permission on the container level must be set within Data Lake Storage Gen2.
> Permissions on the folder can be set within Azure Synapse.

If you want to query data2.csv in this example, the following permissions are needed:

- Execute permission on container
- Execute permission on folder1
- Read permission on data2.csv
1. Sign in to Azure Synapse with an admin user that has full permissions on the data you want to access.
1. In the data pane, right-click the file and select **Manage access**.

    :::image type="content" source="../sql/media/resources-self-help-sql-on-demand/manage-access.png" alt-text="Screenshot that shows the manage access option.":::

1. Select at least **Read** permission. Enter the user's UPN or object ID, for example, user@contoso.com. Select **Add**.
1. Grant read permission for this user.

    :::image type="content" source="../sql/media/resources-self-help-sql-on-demand/grant-permission.png" alt-text="Screenshot that shows granting read permissions.":::

> [!NOTE]
> For guest users, this step needs to be done directly with Azure Data Lake because it can't be done directly through Azure Synapse.
## Next steps

Learn more about [Workspace managed identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics).
- [Best practices for dedicated SQL pools](../sql/best-practices-dedicated-sql-pool.md)
- [Troubleshoot serverless SQL pool in Azure Synapse Analytics](../sql/resources-self-help-sql-on-demand.md)
- [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
synapse-analytics Sql Data Warehouse Load From Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store.md
Title: "Tutorial load data from Azure Data Lake Storage"
description: Use the COPY statement to load data from Azure Data Lake Storage for dedicated SQL pools.
Last updated: 09/02/2022
Before you begin this tutorial, download and install the newest version of [SQL
To run this tutorial, you need:

* A dedicated SQL pool. See [Create a dedicated SQL pool and query data](create-data-warehouse-portal.md).
* A Data Lake Storage account. See [Get started with Azure Data Lake Storage](../../data-lake-store/data-lake-store-get-started-portal.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). For this storage account, you will need to configure or specify one of the following credentials to load: a storage account key, a shared access signature (SAS) key, an Azure AD application user, or an Azure AD user that has the appropriate Azure role to the storage account.
* Currently, ingesting data using the COPY command into an Azure Storage account that is using the new [Azure Storage DNS partition feature](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466) results in an error. Provision a storage account in a subscription that does not use DNS partitioning for this tutorial.
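As a hedged sketch of how these credential options appear in the COPY statement (the storage URL, table name, and secrets below are placeholders, not values from this tutorial):

```sql
-- Storage account key (all names and secrets are placeholders):
COPY INTO [dbo].[DimProduct]
FROM 'https://mystorageaccount.blob.core.windows.net/container/directory/'
WITH (
    CREDENTIAL = (IDENTITY = 'Storage Account Key', SECRET = '<storage_account_key>')
);

-- Shared access signature (SAS):
COPY INTO [dbo].[DimProduct]
FROM 'https://mystorageaccount.blob.core.windows.net/container/directory/'
WITH (
    CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas_token>')
);
```

When you connect as an Azure AD user that already has the appropriate role on the storage account, the CREDENTIAL clause can be omitted entirely.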
## Create the target table
Connect to your dedicated SQL pool and create the target table you will load to. In this example, we are creating a product dimension table.
```sql
-- A: Create the target table
WITH
);
```

## Create the COPY statement

Connect to your dedicated SQL pool and run the COPY statement. For a complete list of examples, visit the following documentation: [Securely load data using dedicated SQL pools](./quickstart-bulk-load-copy-tsql-examples.md).
Connect to your SQL dedicated pool and run the COPY statement. For a complete li
```sql
-- B: Create and execute the COPY statement
COPY INTO [dbo].[DimProduct]
--The column list allows you to map, omit, or reorder input file columns to target table columns.
--You can also specify the default value when there is a NULL value in the file.
--When the column list is not specified, columns will be mapped based on source and target ordinality.
(
    ProductKey default -1 1,
    ProductLabel default 'myStringDefaultWhenNull' 2,
    ProductName default 'myStringDefaultWhenNull' 3
)
--The storage account location where your data is staged
FROM 'https://storageaccount.blob.core.windows.net/container/directory/'
WITH
(
    --CREDENTIAL: Specifies the authentication method and credential to access your storage account
    CREDENTIAL = (IDENTITY = '', SECRET = ''),
The following example is a good starting point for creating statistics. It creat
You have successfully loaded data into your data warehouse. Great job!

## Next steps

Loading data is the first step to developing a data warehouse solution using Azure Synapse Analytics. Check out our development resources.

> [!div class="nextstepaction"]
Loading data is the first step to developing a data warehouse solution using Azu
For more loading examples and references, view the following documentation:

- [COPY statement reference documentation](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true#syntax)
- [COPY examples for each authentication method](./quickstart-bulk-load-copy-tsql-examples.md)
- [COPY quickstart for a single table](./quickstart-bulk-load-copy-tsql.md)
synapse-analytics Sql Data Warehouse Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot.md
Title: Troubleshooting dedicated SQL pool (formerly SQL DW)
description: Troubleshooting dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
Last updated: 09/02/2022
This article lists common troubleshooting issues in dedicated SQL pool (formerly
| Issue | Resolution |
| :-- | :-- |
| Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. (Microsoft SQL Server, Error: 18456) | This error occurs when an Azure AD user tries to connect to the `master` database, but does not have a user in `master`. To correct this issue, either specify the dedicated SQL pool (formerly SQL DW) you wish to connect to at connection time or add the user to the `master` database. For more information, see [Security overview](sql-data-warehouse-overview-manage-security.md). |
| The server principal "MyUserName" is not able to access the database `master` under the current security context. Cannot open user default database. Login failed. Login failed for user 'MyUserName'. (Microsoft SQL Server, Error: 916) | This error occurs when an Azure AD user tries to connect to the `master` database, but does not have a user in `master`. To correct this issue, either specify the dedicated SQL pool (formerly SQL DW) you wish to connect to at connection time or add the user to the `master` database. For more information, see [Security overview](sql-data-warehouse-overview-manage-security.md). |
| CTAIP error | This error can occur when a login has been created on the SQL Database `master` database, but not in the specific SQL database. If you encounter this error, take a look at the [Security overview](sql-data-warehouse-overview-manage-security.md) article. This article explains how to create a login and user in the `master` database, and then how to create a user in a SQL database. |
| Blocked by Firewall | Dedicated SQL pools (formerly SQL DW) are protected by firewalls to ensure only known IP addresses have access to a database. The firewalls are secure by default, which means that you must explicitly enable an IP address or range of addresses before you can connect. To configure your firewall for access, follow the steps in [Configure server firewall access for your client IP](create-data-warehouse-portal.md) in the [Provisioning instructions](create-data-warehouse-portal.md). |
| Cannot connect with tool or driver | Dedicated SQL pool (formerly SQL DW) recommends using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), [SSDT for Visual Studio](sql-data-warehouse-install-visual-studio.md), or [sqlcmd](sql-data-warehouse-get-started-connect-sqlcmd.md) to query your data. For more information on drivers and connecting to Azure Synapse, see [Drivers for Azure Synapse](sql-data-warehouse-connection-strings.md) and [Connect to Azure Synapse](sql-data-warehouse-connect-overview.md) articles. |

## Tools
This article lists common troubleshooting issues in dedicated SQL pool (formerly
| :-- | :-- |
| Exporting empty strings using CETAS will result in NULL values in Parquet and ORC files. Note if you are exporting empty strings from columns with NOT NULL constraints, CETAS will result in rejected records and the export can potentially fail. | Remove empty strings or the offending column in the SELECT statement of your CETAS. |
| Loading a value outside the range of 0-127 into a tinyint column for Parquet and ORC file format is not supported. | Specify a larger data type for the target column. |
| Msg 105208, Level 16, State 1, Line 1 COPY statement failed with the following error when validating value of option 'FROM': '105200;COPY statement failed because the value for option 'FROM' is invalid.' | Currently, ingesting data using the COPY command into an Azure Storage account that is using the new DNS partitioning feature results in an error. The DNS partition feature enables customers to create up to 5000 storage accounts per subscription. To resolve, provision a storage account in a subscription that does not use the new [Azure Storage DNS partition feature](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466) (currently in Public Preview). |
## Performance

| Issue | Resolution |
| :-- | :-- |
| Query performance troubleshooting | If you are trying to troubleshoot a particular query, start with [Learning how to monitor your queries](sql-data-warehouse-manage-monitor.md#monitor-query-execution). |
| `tempdb` space issues | [Monitor TempDB](sql-data-warehouse-manage-monitor.md#monitor-tempdb) space usage. Common causes for running out of `tempdb` space are:<br>- Not enough resources allocated to the query causing data to spill to `tempdb`. See [Workload management](resource-classes-for-workload-management.md)<br>- Statistics are missing or out of date causing excessive data movement. See [Maintaining table statistics](sql-data-warehouse-tables-statistics.md) for details on how to create statistics<br>- `tempdb` space is allocated per service level. [Scaling your dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-manage-compute-overview.md#scaling-compute) to a higher DWU setting allocates more `tempdb` space. |
| Poor query performance and plans often is a result of missing statistics | The most common cause of poor performance is lack of statistics on your tables. See [Maintaining table statistics](sql-data-warehouse-tables-statistics.md) for details on how to create statistics and why they are critical to your performance. |
| Low concurrency / queries queued | Understanding [Workload management](resource-classes-for-workload-management.md) is important in order to understand how to balance memory allocation with concurrency. |
| How to implement best practices | The best place to start to learn ways to improve query performance is the [dedicated SQL pool (formerly SQL DW) best practices](../sql/best-practices-dedicated-sql-pool.md) article. |
| How to improve performance with scaling | Sometimes the solution to improving performance is to simply add more compute power to your queries by [Scaling your dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-manage-compute-overview.md). |
| Poor query performance as a result of poor index quality | Sometimes queries can slow down because of [poor columnstore index quality](sql-data-warehouse-tables-index.md#causes-of-poor-columnstore-index-quality). For more information, see [Rebuild indexes to improve segment quality](sql-data-warehouse-tables-index.md#rebuild-indexes-to-improve-segment-quality). |
## System management
This article lists common troubleshooting issues in dedicated SQL pool (formerly
| :-- | :-- |
| Msg 40847: Could not perform the operation because server would exceed the allowed Database Transaction Unit quota of 45000. | Either reduce the [DWU](what-is-a-data-warehouse-unit-dwu-cdwu.md) of the database you are trying to create or [request a quota increase](sql-data-warehouse-get-started-create-support-ticket.md). |
| Investigating space utilization | See [Table sizes](sql-data-warehouse-tables-overview.md#table-size-queries) to understand the space utilization of your system. |
| Help with managing tables | See the [Table overview](sql-data-warehouse-tables-overview.md) article for help with managing your tables. For more information, see [Table data types](sql-data-warehouse-tables-data-types.md), [Distributing a table](sql-data-warehouse-tables-distribute.md), [Indexing a table](sql-data-warehouse-tables-index.md), [Partitioning a table](sql-data-warehouse-tables-partition.md), [Maintaining table statistics](sql-data-warehouse-tables-statistics.md), and [Temporary tables](sql-data-warehouse-tables-temporary.md). |
| Transparent data encryption (TDE) progress bar is not updating in the Azure portal | You can view the state of TDE via [PowerShell](/powershell/module/az.sql/get-azsqldatabasetransparentdataencryption?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). |

## Differences from SQL Database
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
Title: Best practices for serverless SQL pool
description: Recommendations and best practices for working with serverless SQL pool.
Last updated: 09/01/2022

# Best practices for serverless SQL pool in Azure Synapse Analytics
Multiple applications and services might access your storage account. Storage th
When throttling is detected, serverless SQL pool has built-in handling to resolve it. Serverless SQL pool makes requests to storage at a slower pace until throttling is resolved.
> [!TIP]
> For optimal query execution, don't stress the storage account with other workloads during query execution.

### Prepare files for querying
If possible, you can prepare files for better performance:
### Colocate your Azure Cosmos DB analytical storage and serverless SQL pool
Make sure your Azure Cosmos DB analytical storage is placed in the same region as an Azure Synapse workspace. Cross-region queries might cause huge latencies. Use the region property in the connection string to explicitly specify the region where the analytical store is placed (see [Query Azure Cosmos DB by using serverless SQL pool](query-cosmos-db-analytical-store.md#overview)): `'account=<database account name>;database=<database name>;region=<region name>'`
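For illustration, a sketch of the region property inside an OPENROWSET query; the account, database, container, and credential names here are hypothetical:

```sql
-- Hypothetical names throughout; 'region=westus2' pins reads to the
-- analytical store replica in the same region as the workspace.
SELECT TOP 10 *
FROM OPENROWSET(
    PROVIDER = 'CosmosDB',
    CONNECTION = 'account=MyCosmosDbAccount;database=covid;region=westus2',
    OBJECT = 'Ecdc',
    SERVER_CREDENTIAL = 'MyCosmosDbCredential'
) AS documents;
```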
## CSV optimizations
Here are best practices for using data types in serverless SQL pool.
### Use appropriate data types
The data types you use in your query affect performance and concurrency. You can get better performance if you follow these guidelines:
- Use the smallest data size that can accommodate the largest possible value.
- If the maximum character value length is 30 characters, use a character data type of length 30.
You can use [sp_describe_first_results_set](/sql/relational-databases/system-sto
The following example shows how you can optimize inferred data types. This procedure is used to show the inferred data types:
```sql
EXEC sp_describe_first_result_set N'
    SELECT
        vendor_id, pickup_datetime, passenger_count
    FROM
        OPENROWSET(
            BULK ''https://sqlondemandstorage.blob.core.windows.net/parquet/taxi/*/*/*'',
            FORMAT=''PARQUET''
        ) AS nyc';
```
Here's the result set:
After you know the inferred data types for the query, you can specify appropriate data types:
```sql
SELECT
    vendorID, tpepPickupDateTime, passengerCount
FROM
    OPENROWSET(
        BULK 'https://azureopendatastorage.blob.core.windows.net/nyctlc/yellow/puYear=2018/puMonth=*/*.snappy.parquet',
        FORMAT='PARQUET'
    )
    WITH (
        vendorID varchar(4), -- we used length of 4 instead of the inferred 8000
        tpepPickupDateTime datetime2,
        passengerCount int
    ) AS nyc;
```

## Filter optimization
Data is often organized in partitions. You can instruct serverless SQL pool to q
For more information, read about the [filename](query-data-storage.md#filename-function) and [filepath](query-data-storage.md#filepath-function) functions and see the examples for [querying specific files](query-specific-files.md).
> [!TIP]
> Always cast the results of the filepath and filename functions to appropriate data types. If you use character data types, be sure to use the appropriate length.

Functions used for partition elimination, filepath and filename, aren't currently supported for external tables, other than those created automatically for each table created in Apache Spark for Azure Synapse Analytics.
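As a sketch, the filepath function can limit a query over a hypothetical `year=*/month=*` folder layout to a single year, using the cast recommended in the tip above (the storage path is an assumption):

```sql
-- Hypothetical storage path; filepath(1) returns the text matched by the
-- first wildcard (the year folder), so files outside 2017 are never read.
SELECT
    nyc.filepath(1) AS [year],
    COUNT_BIG(*) AS ride_count
FROM OPENROWSET(
    BULK 'https://sqlondemandstorage.blob.core.windows.net/parquet/taxi/year=*/month=*/*.parquet',
    FORMAT = 'PARQUET'
) AS nyc
WHERE CAST(nyc.filepath(1) AS int) = 2017
GROUP BY nyc.filepath(1);
```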
You can use CETAS to materialize frequently used parts of queries, like joined r
As CETAS generates Parquet files, statistics are automatically created when the first query targets this external table. The result is improved performance for subsequent queries targeting the table generated with CETAS.
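A minimal CETAS sketch, assuming an external data source and a Parquet file format already exist; all object names below are hypothetical:

```sql
-- Materialize a frequently joined result as Parquet files; later queries
-- read the external table instead of recomputing the join.
CREATE EXTERNAL TABLE aggregated_sales
WITH (
    LOCATION = 'aggregated-sales/',          -- folder within the data source
    DATA_SOURCE = my_external_data_source,   -- existing EXTERNAL DATA SOURCE
    FILE_FORMAT = my_parquet_format          -- existing EXTERNAL FILE FORMAT (PARQUET)
) AS
SELECT
    s.region,
    SUM(s.amount) AS total_amount
FROM dbo.sales AS s
INNER JOIN dbo.regions AS r ON s.region_id = r.id
GROUP BY s.region;
```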
## Query Azure data

Serverless SQL pools enable you to query data in Azure Storage or Azure Cosmos DB by using [external tables and the OPENROWSET function](develop-storage-files-overview.md). Make sure that you have proper [permissions set up](develop-storage-files-overview.md#permissions) on your storage.

### Query CSV data

Learn how to [query a single CSV file](query-single-csv-file.md) or [folders and multiple CSV files](query-folders-multiple-csv-files.md). You can also [query partitioned files](query-specific-files.md).
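For illustration, a hedged OPENROWSET sketch over a hypothetical CSV folder; the path and options are assumptions, and PARSER_VERSION '2.0' selects the faster CSV parser:

```sql
-- Hypothetical path; HEADER_ROW = TRUE takes column names from the first row.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://sqlondemandstorage.blob.core.windows.net/csv/population/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS rows;
```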
### Query Parquet data

Learn how to [query Parquet files](query-parquet-files.md) with [nested types](query-parquet-nested-types.md). You can also [query partitioned files](query-specific-files.md).

### Query Delta Lake

Learn how to [query Delta Lake files](query-delta-lake-format.md) with [nested types](query-parquet-nested-types.md).

### Query Azure Cosmos DB data

Learn how to [query Azure Cosmos DB analytical store](query-cosmos-db-analytical-store.md). You can use an [online generator](https://htmlpreview.github.io/?https://github.com/Azure-Samples/Synapse/blob/main/SQL/tools/cosmosdb/generate-openrowset.html) to generate the WITH clause based on a sample Azure Cosmos DB document. You can [create views](create-use-views.md#cosmosdb-view) on top of Azure Cosmos DB containers.

### Query JSON data

Learn how to [query JSON files](query-json-files.md). You can also [query partitioned files](query-specific-files.md).
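As a sketch, line-delimited JSON can be read one document per row and parsed with JSON_VALUE; the path and property names below are hypothetical:

```sql
-- Hypothetical path; FORMAT='CSV' with 0x0b field delimiters reads each line
-- into a single column, which JSON_VALUE then parses.
SELECT
    JSON_VALUE(doc, '$.title') AS title,
    JSON_VALUE(doc, '$.publish_year') AS publish_year
FROM OPENROWSET(
    BULK 'https://sqlondemandstorage.blob.core.windows.net/json/books/*.json',
    FORMAT = 'CSV',
    FIELDQUOTE = '0x0b',
    FIELDTERMINATOR = '0x0b',
    ROWTERMINATOR = '0x0a'
) WITH (doc nvarchar(max)) AS rows;
```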
### Create views, tables, and other database objects

Learn how to create and use [views](create-use-views.md) and [external tables](create-use-external-tables.md) or set up [row-level security](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759). If you have [partitioned files](query-specific-files.md), make sure you use [partitioned views](create-use-views.md#partitioned-views).
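A partitioned view sketch over a hypothetical yearly folder layout; exposing filepath(1) as a column lets queries that filter on it skip non-matching folders (the path and view name are assumptions):

```sql
-- Hypothetical path; the view's [year] column comes from the folder name.
CREATE VIEW dbo.TaxiRides AS
SELECT
    nyc.filepath(1) AS [year],
    *
FROM OPENROWSET(
    BULK 'https://sqlondemandstorage.blob.core.windows.net/parquet/taxi/year=*/*.parquet',
    FORMAT = 'PARQUET'
) AS nyc;
```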
### Copy and transform data (CETAS)

Learn how to [store query results to storage](create-external-table-as-select.md) by using the CETAS command.

## Next steps
- Review the [troubleshooting serverless SQL pools](resources-self-help-sql-on-demand.md) article for solutions to common problems.
- If you're working with a dedicated SQL pool rather than serverless SQL pool, see [Best practices for dedicated SQL pools](best-practices-dedicated-sql-pool.md) for specific guidance.
- [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
- [Grant permissions to workspace managed identity](../security/how-to-grant-workspace-managed-identity-permissions.md)
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Title: Serverless SQL pool self-help
description: This article contains information that can help you troubleshoot problems with serverless SQL pool.
Last updated: 09/01/2022
# Troubleshoot serverless SQL pool in Azure Synapse Analytics
This article contains information about how to troubleshoot the most frequent problems with serverless SQL pool in Azure Synapse Analytics.
For more information, see:
#### Alternative to Storage Blob Data Contributor role
Instead of [granting yourself a Storage Blob Data Contributor role](../security/how-to-grant-workspace-managed-identity-permissions.md), you can also grant more granular permissions on a subset of files.
All users who need access to some data in this container also must have EXECUTE permission on all parent folders up to the root (the container).

Learn more about how to [set ACLs in Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-explorer-acl.md).
> [!NOTE]
> Execute permission on the container level must be set within Data Lake Storage Gen2.
> Permissions on the folder can be set within Azure Synapse.
If you want to query data2.csv in this example, the following permissions are ne
- Execute permission on folder1
- Read permission on data2.csv
1. Sign in to Azure Synapse with an admin user that has full permissions on the data you want to access.
1. In the data pane, right-click the file and select **Manage access**.
    :::image type="content" source="./media/resources-self-help-sql-on-demand/manage-access.png" alt-text="Screenshot that shows the manage access option.":::
1. Select at least **Read** permission. Enter the user's UPN or object ID, for example, user@contoso.com. Select **Add**.
1. Grant read permission for this user.
    :::image type="content" source="./media/resources-self-help-sql-on-demand/grant-permission.png" alt-text="Screenshot that shows granting read permissions.":::
> [!NOTE]
> For guest users, this step needs to be done directly with Azure Data Lake because it can't be done directly through Azure Synapse.

### Content of directory on the path can't be listed
For more information, see:
#### Content of Dataverse table can't be listed
If you are using the Azure Synapse Link for Dataverse to read the linked Dataverse tables, you need to use an Azure AD account to access the linked data using the serverless SQL pool. For more information, see [Azure Synapse Link for Dataverse with Azure Data Lake](/powerapps/maker/data-platform/azure-synapse-link-data-lake).
If you try to use a SQL login to read an external table that is referencing the Dataverse table, you will get the following error: `External table '???' is not accessible because content of directory cannot be listed.`
Dataverse external tables always use Azure AD passthrough authentication. You *can't* configure them to use a [shared access signature key](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) or [workspace managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity).
Make sure the `_delta_log` folder exists. Maybe you're querying plain Parquet fi
```sql
select top 10 *
from openrowset(BULK 'https://.....core.windows.net/.../_delta_log/*.json',FORMAT='csv', FIELDQUOTE = '0x0b', FIELDTERMINATOR ='0x0b',ROWTERMINATOR = '0x0b')
with (line varchar(max)) as logs
```
If this query fails, the caller doesn't have permission to read the underlying storage files.
## Query execution
Your query might fail with the error message "This query cannot be executed due
- Make sure data types of reasonable sizes are used.
- If your query targets Parquet files, consider defining explicit types for string columns because they'll be VARCHAR(8000) by default. [Check inferred data types](./best-practices-serverless-sql-pool.md#check-inferred-data-types).
- If your query targets CSV files, consider [creating statistics](develop-tables-statistics.md#statistics-in-serverless-sql-pool).
- To optimize your query, see [Performance best practices for serverless SQL pool](./best-practices-serverless-sql-pool.md).
### Query timeout expired
In rare cases, where you use the LIKE operator on a string column or some compar
```
Msg 105, Level 15, State 1, Line 88
-Unclosed quotation mark after the character string
+Unclosed quotation mark after the character string
```

This error might happen if you use the `Latin1_General_100_BIN2_UTF8` collation on the column. Try to set `Latin1_General_100_CI_AS_SC_UTF8` collation on the column instead of the `Latin1_General_100_BIN2_UTF8` collation to resolve the issue. If the error is still returned, raise a support request through the Azure portal.

### Couldn't allocate tempdb space while transferring data from one distribution to another
-The error "Could not allocate tempdb space while transferring data from one distribution to another" is returned when the query execution engine can't process data and transfer it between the nodes that are executing the query.
-It's a special case of the generic [query fails because it cannot be executed due to current resource constraints](#query-fails-because-it-cant-be-executed-due-to-current-resource-constraints) error. This error is returned when the resources allocated to the `tempdb` database are insufficient to run the query.
+The error "Could not allocate tempdb space while transferring data from one distribution to another" is returned when the query execution engine can't process data and transfer it between the nodes that are executing the query. It's a special case of the generic [query fails because it cannot be executed due to current resource constraints](#query-fails-because-it-cant-be-executed-due-to-current-resource-constraints) error. This error is returned when the resources allocated to the `tempdb` database are insufficient to run the query.
Apply best practices before you file a support ticket.
If you want to query the file `names.csv` with this Query 1, Azure Synapse serve
names.csv

```csv
-Id,first name,
+Id,first name,
1, Adam
2,Bob
3,Charles
FROM
        PARSER_VERSION='2.0', FIELDTERMINATOR =';', FIRSTROW = 2
- )ΓÇ»
+ )
WITH (
- [ID] SMALLINT,
- [Text] VARCHAR (1) COLLATE Latin1_General_BIN2
+ [ID] SMALLINT,
+ [Text] VARCHAR (1) COLLATE Latin1_General_BIN2
) AS [result]
As soon as the parser version is changed from version 2.0 to 1.0, the error mess
Truncation tells you that your column type is too small to fit your data. The longest first name in this names.csv file has seven characters. The according data type to be used should be at least VARCHAR(7). The error is caused by this line of code:
-```sql
+```sql
[Text] VARCHAR (1) COLLATE Latin1_General_BIN2
```
Changing the query accordingly resolves the error. After debugging, change the p
For more information about when to use which parser version, see [Use OPENROWSET using serverless SQL pool in Synapse Analytics](develop-openrowset.md).
-```sql
+```sql
SELECT     TOP 100 * FROM
FROM
        PARSER_VERSION='2.0', FIELDTERMINATOR =';', FIRSTROW = 2
- )ΓÇ»
+ )
WITH (
- [ID] SMALLINT,
- [Text] VARCHAR (7) COLLATE Latin1_General_BIN2
+ [ID] SMALLINT,
+ [Text] VARCHAR (7) COLLATE Latin1_General_BIN2
) AS [result]
The error "Cannot bulk load because the file could not be opened" is returned if
The serverless SQL pools can't read files that are being modified while the query is running. The query can't take a lock on the files. If you know that the modification operation is *append*, you can try to set the following option:
- `{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}`.
+`{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}`.
For more information, see how to [query append-only files](query-single-csv-file.md#querying-appendable-files) or [create tables on append-only files](create-use-external-tables.md#external-table-on-appendable-files).
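As a sketch, assuming a CSV file that is only ever appended to, the option can be supplied through the `ROWSET_OPTIONS` argument of `OPENROWSET` (the storage URL and file name here are placeholders):

```sql
-- Sketch: tolerate files that are appended to while the query runs.
-- The storage account, container, and file name are placeholders.
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/log.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0',
        ROWSET_OPTIONS = '{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'
     ) AS [rows];
```

With this option, a row written mid-query may be read partially, so use it only when an inconsistent read is acceptable.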
If you want to query the file names.csv:
names.csv

```csv
-Id, first name,
+Id, first name,
1,Adam
2,Bob
3,Charles
five,Eva
with the following Query 1:

Query 1:
-```sql
+```sql
SELECT     TOP 100 * FROM
FROM
PARSER_VERSION='1.0', FIELDTERMINATOR =',', FIRSTROW = 2
- )ΓÇ»
+ )
WITH (
- [ID] SMALLINT,
- [Firstname] VARCHAR (25) COLLATE Latin1_General_BIN2
+ [ID] SMALLINT,
+ [Firstname] VARCHAR (25) COLLATE Latin1_General_BIN2
) AS [result]
FROM
        PARSER_VERSION='1.0', FIELDTERMINATOR =',', FIRSTROW = 2
- )ΓÇ»
+ )
WITH (
- [ID] VARCHAR(100),
- [Firstname] VARCHAR (25) COLLATE Latin1_General_BIN2
+ [ID] VARCHAR(100),
+ [Firstname] VARCHAR (25) COLLATE Latin1_General_BIN2
) AS [result]
FROM
names.csv

```csv
-Id, first name,
+Id, first name,
1, Adam
2, Bob
3, Charles
five, Eva
You might observe that the data has unexpected values for ID in the fifth row. In such circumstances, it's important to align with the business owner of the data to agree on how corrupt data like this example can be avoided. If prevention isn't possible at the application level, reasonable-sized VARCHAR might be the only option here.
-> [!Tip]
+> [!TIP]
> Try to make VARCHAR() as short as possible. Avoid VARCHAR(MAX) if possible because it can impair performance.

### The query result doesn't look as expected
If you want to query the file `names.csv` with the query in Query 1, Azure Synap
names.csv

```csv
-Id,first name,
+Id,first name,
1, Adam
2, Bob
3, Charles
FROM
PARSER_VERSION='1.0', FIELDTERMINATOR =';', FIRSTROW = 2
- )ΓÇ»
+ )
WITH (
- [ID] VARCHAR(100),
- [Firstname] VARCHAR (25) COLLATE Latin1_General_BIN2
+ [ID] VARCHAR(100),
+ [Firstname] VARCHAR (25) COLLATE Latin1_General_BIN2
) AS [result]
```
-| ID | Firstname |
-| - |- |
-| 1,Adam | NULL |
-| 2,Bob | NULL |
-| 3,Charles | NULL |
-| 4,David | NULL |
-| 5,Eva | NULL |
+| ID | Firstname |
+| - |- |
+| 1,Adam | NULL |
+| 2,Bob | NULL |
+| 3,Charles | NULL |
+| 4,David | NULL |
+| 5,Eva | NULL |
There seems to be no value in the column `Firstname`. Instead, all values ended up being in the `ID` column. Those values are separated by a comma. The problem was caused by this line of code because it's necessary to choose the comma instead of the semicolon symbol as field terminator:
FROM
PARSER_VERSION='1.0', FIELDTERMINATOR =',', FIRSTROW = 2
- )ΓÇ»
+ )
WITH (
- [ID] VARCHAR(100),
- [Firstname] VARCHAR (25) COLLATE Latin1_General_BIN2
+ [ID] VARCHAR(100),
+ [Firstname] VARCHAR (25) COLLATE Latin1_General_BIN2
) ASΓÇ»[result]
-```
+```
returns
-| ID | Firstname |
-| - |- |
-| 1 | Adam |
-| 2 | Bob |
-| 3 | Charles |
-| 4 | David |
-| 5 | Eva |
+| ID | Firstname |
+| - |- |
+| 1 | Adam |
+| 2 | Bob |
+| 3 | Charles |
+| 4 | David |
+| 5 | Eva |
### Column of type isn't compatible with external data type
FROM
FORMAT='PARQUET' ) WITH (
- PassengerCount INT,
- SumTripDistance INT,
+ PassengerCount INT,
+ SumTripDistance INT,
AVGTripDistance FLOAT )
FROM
This error message tells you that data types aren't compatible and comes with the suggestion to use FLOAT instead of INT. The error is caused by this line of code:

```sql
-SumTripDistance INT,
+SumTripDistance INT,
```

With this slightly changed Query 2, the data can now be processed and shows all three columns:
FROM
FORMAT='PARQUET' ) WITH (
- PassengerCount INT,
- SumTripDistance FLOAT,
+ PassengerCount INT,
+ SumTripDistance FLOAT,
AVGTripDistance FLOAT )
There are reasons why this error code can happen:
- The file was deleted by another application. - In this common scenario, the query execution starts, it enumerates the files, and the files are found. Later, during the query execution, a file is deleted. For example, it could be deleted by Databricks, Spark, or Azure Data Factory. The query fails because the file isn't found.
+- This issue can also occur with the Delta format. The query might succeed on retry because there's a new version of the table and the deleted file isn't queried again.
- An invalid execution plan is cached. - As a temporary mitigation, run the command `DBCC FREEPROCCACHE`. If the problem persists, create a support ticket.
When the file format is Parquet, the query won't recover automatically. It needs
### Synapse Link for Dataverse
-This error can occur when reading data from Synapse Link for Dataverse, when Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group has a goal to improve this behavior.
+This error can occur when reading data from Azure Synapse Link for Dataverse, when Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group has a goal to improve this behavior.
### [0x800700A1](#tab/x800700A1)
If the issue persists, create a support ticket.
The error "Incorrect syntax near 'NOT'" indicates there are some external tables with columns that contain the NOT NULL constraint in the column definition. Update the table to remove NOT NULL from the column definition. This error can sometimes also occur transiently with tables created from a CETAS statement. If the problem doesn't resolve, you can try dropping and re-creating the external table.
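Because external tables can't be altered in place, the usual fix is to drop and re-create the table without the constraint. A minimal sketch, assuming hypothetical table, data source, and file format names:

```sql
-- Sketch: re-create the external table without NOT NULL.
-- Table, data source, and file format names are illustrative.
DROP EXTERNAL TABLE dbo.MyExternalTable;

CREATE EXTERNAL TABLE dbo.MyExternalTable (
    [Id]   INT,            -- previously: [Id] INT NOT NULL
    [Name] VARCHAR(100)
)
WITH (
    LOCATION = 'folder/**',
    DATA_SOURCE = MyDataSource,
    FILE_FORMAT = SynapseParquetFormat
);
```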
-### Partitioning column returns NULL values
+### Partitioning column returns NULL values
If your query returns NULL values instead of partitioning columns or can't find the partition columns, you have a few possible troubleshooting steps:
If your query returns NULL values instead of partitioning columns or can't find
- If you use the [partitioned views](create-use-views.md#partitioned-views) with the OPENROWSET that [queries partitioned files by using the FILEPATH() function](query-specific-files.md), make sure you correctly specified the wildcard pattern in the location and used the proper index for referencing the wildcard. - If you're querying the files directly in the partitioned folder, be aware that the partitioning columns aren't the parts of the file columns. The partitioning values are placed in the folder paths and not the files. For this reason, the files don't contain the partitioning values.
-### Inserting value to batch for column type DATETIME2 failed
+### <a id="inserting-value-to-batch-for-column-type-datetime2-failed"></a>Inserting value to batch for column type DATETIME2 failed
The error "Inserting value to batch for column type DATETIME2 failed" indicates that the serverless pool can't read the date values from the underlying files. The datetime value stored in the Parquet or Delta Lake file can't be represented as a `DATETIME2` column.
-Inspect the minimum value in the file by using Spark, and check that some dates are less than 0001-01-03. If you stored the files by using Spark 2.4, the datetime values before are written by using the Julian calendar that isn't aligned with the proleptic Gregorian calendar used in serverless SQL pools.
+Inspect the minimum value in the file by using Spark, and check that some dates are less than 0001-01-03. If you stored the files by using Spark 2.4, the datetime values before this date are written by using the Julian calendar, which isn't aligned with the proleptic Gregorian calendar used in serverless SQL pools.
There might be a two-day difference between the Julian calendar used to write the values in Parquet (in some Spark versions) and the proleptic Gregorian calendar used in serverless SQL pool. This difference might cause conversion to a negative date value, which is invalid.
Try to use Spark to update these values because they're treated as invalid date
from delta.tables import * from pyspark.sql.functions import *
-deltaTable = DeltaTable.forPath(spark,
+deltaTable = DeltaTable.forPath(spark,
             "abfss://my-container@myaccount.dfs.core.windows.net/delta-lake-data-set")
deltaTable.update(col("MyDateTimeColumn") < '0001-02-02', { "MyDateTimeColumn": None } )
```
Describe anything that might be unusual compared to the regular workload. For ex
Serverless SQL pools enable you to use T-SQL to configure database objects. There are some constraints: -- You can't create objects in master and lakehouse or Spark databases.
+- You can't create objects in `master` and `lakehouse` or Spark databases.
- You must have a master key to create credentials. - You must have permission to reference data that's used in the objects.
To resolve this problem, create a master key with the following query:
CREATE MASTER KEY [ ENCRYPTION BY PASSWORD ='password' ];
```
-> [!NOTE]
+> [!NOTE]
> Replace `'password'` with a different secret here.

### CREATE statement isn't supported in the master database
-If your query fails with the error message "Failed to execute query. Error: CREATE EXTERNAL TABLE/DATA SOURCE/DATABASE SCOPED CREDENTIAL/FILE FORMAT is not supported in master database," it means that the master database in serverless SQL pool doesn't support the creation of:
+If your query fails with the error message "Failed to execute query. Error: CREATE EXTERNAL TABLE/DATA SOURCE/DATABASE SCOPED CREDENTIAL/FILE FORMAT is not supported in master database," it means that the `master` database in serverless SQL pool doesn't support the creation of:
- External tables.
- External data sources.
Here's the solution:
CREATE DATABASE <DATABASE_NAME>
```
- 1. Execute a CREATE statement in the context of <DATABASE_NAME>, which failed earlier for the master database.
-
+ 1. Execute a CREATE statement in the context of <DATABASE_NAME>, which failed earlier for the `master` database.
+ Here's an example of the creation of an external file format:
-
+    ```sql
+    USE <DATABASE_NAME>
- CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat]
+ CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat]
WITH ( FORMAT_TYPE = PARQUET)
```

### Operation isn't allowed for a replicated database
-If you're trying to create SQL objects, users, or change permissions in a database, you might get errors like "Operation is not allowed for a replicated database." This error might be returned when you try to modify a Lake database that's [shared with Spark pool](../metadat). The Lake databases that are replicated from the Apache Spark pool are managed by Synapse and you cannot create objects like in SQL Databases by using T-SQL.
+If you're trying to create SQL objects, users, or change permissions in a database, you might get errors like "Operation is not allowed for a replicated database." This error might be returned when you try to modify a Lake database that's [shared with Spark pool](../metadat). The Lake databases that are replicated from the Apache Spark pool are managed by Synapse and you cannot create objects like in SQL Databases by using T-SQL.
Only the following operations are allowed in the Lake databases:

- Creating, dropping, or altering views, procedures, and inline table-value functions (iTVF) in the schemas other than `dbo`. If you are creating a SQL object in the `dbo` schema (or omitting the schema and using the default one, which is usually `dbo`), you will get the error message.
- Creating and dropping the database users from Azure Active Directory.
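For example, the following statements are expected to succeed in a Lake database because they stay within those allowed operations (the schema, view, table, and user names here are illustrative):

```sql
-- Sketch: allowed operations in a replicated Lake database.
-- Schema, view, source table, and user names are illustrative.
CREATE SCHEMA reporting;
GO
CREATE VIEW reporting.TopCustomers AS
SELECT TOP 100 * FROM dbo.customers;
GO
CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER;
```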
Azure Synapse SQL returns NULL instead of the values that you see in the transac
The error "Column `column name` of the type `type name` is not compatible with the external data type `type name`" is returned if the specified column type in the WITH clause doesn't match the type in the Azure Cosmos DB container. Try to change the column type as it's described in the section [Azure Cosmos DB to SQL type mappings](query-cosmos-db-analytical-store.md#azure-cosmos-db-to-sql-type-mappings) or use the VARCHAR type.
-### Resolving Azure Cosmos DB path has failed with error
+### <a id="resolving-azure-cosmos-db-path-has-failed-with-error"></a>Resolving Azure Cosmos DB path has failed with error
If you get the error "Resolving CosmosDB path has failed with error 'This request is not authorized to perform this operation'," check to see if you used private endpoints in Azure Cosmos DB. To allow serverless SQL pool to access an analytical store with private endpoints, you must [configure private endpoints for the Azure Cosmos DB analytical store](../../cosmos-db/analytical-store-private-endpoints.md#using-synapse-serverless-sql-pools).
If the dataset is valid, [create a support ticket](../../azure-portal/supportabi
Now you can continue using the Delta Lake folder with Spark pool. You'll provide copied data to Microsoft support if you're allowed to share this information. The Azure team will investigate the content of the `delta_log` file and provide more information about possible errors and workarounds.
-### Resolving Delta logs failed
+### <a id="resolving-delta-logs-failed"></a>Resolving Delta logs failed
+
+The following error indicates that serverless SQL pool cannot resolve Delta logs: `Resolving Delta logs on path '%ls' failed with error: Cannot parse json object from log folder.`
+The most common cause is that `last_checkpoint_file` in `_delta_log` folder is larger than 200 bytes due to the `checkpointSchema` field added in Spark 3.3.
-The following error indicates that serverless SQL pool cannot resolve Delta logs:
-```
-Resolving Delta logs on path '%ls' failed with error: Cannot parse json object from log folder.
-```
-The most common cause is that `last_checkpoint_file` in `_delta_log` folder is larger than 200 bytes due to the `checkpointSchema` field added in Spark 3.3.
-
There are two options available to circumvent this error:

* Modify appropriate config in Spark notebook and generate a new checkpoint, so that `last_checkpoint_file` gets re-created. In case you are using Azure Databricks, the config modification is the following: `spark.conf.set("spark.databricks.delta.checkpointSchema.writeThresholdLength", 0);`
* Downgrade to Spark 3.2.1.
Serverless SQL pool enables you to connect by using the TDS protocol and by usin
### SQL pool is warming up
-Following a longer period of inactivity, serverless SQL pool will be deactivated. The activation happens automatically on the first next activity, such as the first connection attempt. The activation process might take a bit longer than a single connection attempt interval, so the error message is displayed. Retrying the connection attempt should be enough.
+Following a longer period of inactivity, serverless SQL pool is deactivated. Activation happens automatically with the next activity, such as the first connection attempt. The activation process might take a bit longer than a single connection attempt interval, so the error message is displayed. Retrying the connection attempt should be enough.
As a best practice, for the clients that support it, use the `ConnectRetryCount` and `ConnectRetryInterval` connection string keywords to control the reconnect behavior.
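As a sketch, for a client that uses Microsoft SqlClient, the retry behavior can be set directly in the connection string (the workspace name is a placeholder, and the retry values are illustrative):

```
Server=tcp:<workspace-name>-ondemand.sql.azuresynapse.net,1433;Database=master;ConnectRetryCount=3;ConnectRetryInterval=10;
```

A higher `ConnectRetryCount` gives the pool more time to warm up before the client gives up on the connection.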
Dataverse tables access storage by using the caller's Azure AD identity. A SQL u
### Azure AD service principal sign-in failures when SPI creates a role assignment
-If you want to create a role assignment for a service principal identifier (SPI) or Azure AD app by using another SPI, or you've already created one and it fails to sign in, you'll probably receive the following error:
+If you want to create a role assignment for a service principal identifier (SPI) or Azure AD app by using another SPI, or you've already created one and it fails to sign in, you'll probably receive the following error: `Login error: Login failed for user '<token-identified principal>'.`
-```
-Login error: Login failed for user '<token-identified principal>'.
-```
-
-For service principals, sign-in should be created with an application ID as a security ID (SID) not with an object ID. There's a known limitation for service principals, which prevents Azure Synapse from fetching the application ID from Microsoft Graph when it creates a role assignment for another SPI or app.
+For service principals, sign-in should be created with an application ID as a security ID (SID) not with an object ID. There's a known limitation for service principals, which prevents Azure Synapse from fetching the application ID from Microsoft Graph when it creates a role assignment for another SPI or app.
**Solution 1**
go
You can also set up a service principal Azure Synapse admin by using PowerShell. You must have the [Az.Synapse module](/powershell/module/az.synapse) installed.
-The solution is to use the cmdlet New-AzSynapseRoleAssignment with `-ObjectId "parameter"`. In that parameter field, provide the application ID instead of the object ID by using the workspace admin Azure service principal credentials.
+The solution is to use the cmdlet `New-AzSynapseRoleAssignment` with `-ObjectId "parameter"`. In that parameter field, provide the application ID instead of the object ID by using the workspace admin Azure service principal credentials.
PowerShell script:
If you get the error "CREATE DATABASE failed. User database limit has been alrea
You don't need to use separate databases to isolate data for different tenants. All data is stored externally on a data lake and Azure Cosmos DB. The metadata like table, views, and function definitions can be successfully isolated by using schemas. Schema-based isolation is also used in Spark where databases and schemas are the same concepts.
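A minimal sketch of schema-based tenant isolation, where each tenant gets its own schema instead of its own database (the schema name and storage path are illustrative):

```sql
-- Sketch: isolate tenants with schemas instead of separate databases.
-- The schema name and the storage path are illustrative.
CREATE SCHEMA tenant_contoso;
GO
CREATE VIEW tenant_contoso.Sales AS
SELECT *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/contoso/sales/*.parquet',
        FORMAT = 'PARQUET'
     ) AS [rows];
```

Permissions can then be granted per schema, so each tenant's users see only their own objects.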
-## Query Azure data
-
-Serverless SQL pools enable you to query data in Azure Storage or Azure Cosmos DB by using [external tables and the OPENROWSET function](develop-storage-files-overview.md). Make sure that you have proper [permission set up](develop-storage-files-overview.md#permissions) on your storage.
-
-### Query CSV data
-
-Learn how to [query a single CSV file](query-single-csv-file.md) or [folders and multiple CSV files](query-folders-multiple-csv-files.md). You can also [query partitioned files](query-specific-files.md)
-
-### Query Parquet data
-
-Learn how to [query Parquet files](query-parquet-files.md) with [nested types](query-parquet-nested-types.md). You can also [query partitioned files](query-specific-files.md).
-
-### Query Delta Lake
-
-Learn how to [query Delta Lake files](query-delta-lake-format.md) with [nested types](query-parquet-nested-types.md).
-
-### Query Azure Cosmos DB data
-
-Learn how to [query Azure Cosmos DB analytical store](query-cosmos-db-analytical-store.md). You can use an [online generator](https://htmlpreview.github.io/?https://github.com/Azure-Samples/Synapse/blob/main/SQL/tools/cosmosdb/generate-openrowset.html) to generate the WITH clause based on a sample Azure Cosmos DB document. You can [create views](create-use-views.md#cosmosdb-view) on top of Azure Cosmos DB containers.
-
-### Query JSON data
-
-Learn how to [query JSON files](query-json-files.md). You can also [query partitioned files](query-specific-files.md).
-
-### Create views, tables, and other database objects
-
-Learn how to create and use [views](create-use-views.md) and [external tables](create-use-external-tables.md) or set up [row-level security](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759).
-If you have [partitioned files](query-specific-files.md), make sure you use [partitioned views](create-use-views.md#partitioned-views).
-
-### Copy and transform data (CETAS)
+## Next steps
-Learn how to [store query results to storage](create-external-table-as-select.md) by using the CETAS command.
+- [Best practices for serverless SQL pool in Azure Synapse Analytics](best-practices-serverless-sql-pool.md)
+- [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
+- [Store query results to storage using serverless SQL pool in Azure Synapse Analytics](create-external-table-as-select.md)
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/third-party-notices.md
Title: Legal notices
description: Legal notices for Azure documentation -+ Last updated 03/08/2019
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
Title: Deploy SAP S/4HANA or BW/4HANA on an Azure VM | Microsoft Docs
description: Deploy SAP S/4HANA or BW/4HANA on an Azure VM documentationcenter: ''-+ editor: '' tags: azure-resource-manager
The online library is continuously updated with Appliances for demo, proof of co
|This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | | **SAP BW/4HANA 2021 including BW/4HANA Content 2.0 SP08 - Dev Edition** May 11 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=06725b24-b024-4757-860d-ac2db7b49577&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |This solution offers you an insight of SAP BW/4HANA. SAP BW/4HANA is the next generation Data Warehouse optimized for HANA. Beside the basic BW/4HANA options the solution offers a bunch of HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. As the system is pre-configured you can start directly implementing your scenarios. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/06725b24-b024-4757-860d-ac2db7b49577) |
-| **System Conversion for SAP S/4HANA ΓÇô SAP S/4HANA 2021 FPS01 after technical conversion** July 27 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=93895065-7267-4d51-945b-9300836f6a80&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|Third solution after performing a technical system conversion from SAP ERP to SAP S/4HANA before additional configuration. It has been tested and prepared as converted from SAP EHP7 for SAP ERP 6.0 to SAP S/4HANA 2020 FPS01. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/93895065-7267-4d51-945b-9300836f6a80) |
+| **SAP S/4HANA 2021, Fully-Activated Appliance** December 20 2021 | [Create Appliance](https://cal.sap.com/registration?sguid=b8a9077c-f0f7-47bd-977c-70aa6a6a2aa7&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|This appliance contains SAP S/4HANA 2021 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Transportation Mgmt. (TM), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/b8a9077c-f0f7-47bd-977c-70aa6a6a2aa7) |
| **System Conversion for SAP S/4HANA – Source system SAP ERP6.0 before running SUM** July 05 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=b28b67f3-ebab-4b03-bee9-1cd57ddb41b6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
|Second solution for performing a system conversion from SAP ERP to SAP S/4HANA after preparation steps before running Software Update Manager. It has been tested and prepared to be converted from SAP EHP7 for SAP ERP 6.0 to SAP S/4HANA 2021 FPS01 | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/b28b67f3-ebab-4b03-bee9-1cd57ddb41b6) |
| **SAP NetWeaver 7.5 SP15 on SAP ASE** January 20 2020 | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
virtual-machines Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-rhel.md
vm-linux Previously updated : 03/24/2021 Last updated : 09/02/2022
Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux in Azure](
## Implement the Python system replication hook SAPHanaSR
-This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.
+This is an important step to optimize the integration with the cluster and improve detection of when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.
-1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes.
+1. **[A]** Install the SAP HANA resource agents on **all nodes**. Make sure to enable a repository that contains the package. You don't need to enable additional repositories, if using RHEL 8.x HA-enabled image.
+
+ ```bash
+ # Enable repository that contains SAP HANA resource agents
+ sudo subscription-manager repos --enable="rhel-sap-hana-for-rhel-7-server-rpms"
+
+ sudo yum install -y resource-agents-sap-hana
+ ```
+
+2. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes.
> [!TIP]
- > The Python hook can only be implemented for HANA 2.0.
+ > The Python hook can only be implemented for HANA 2.0.
1. Prepare the hook as `root`.
This is important step to optimize the integration with the cluster and improve
ha_dr_saphanasr = info
```
-2. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root`.
+3. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root`.
```bash
sudo visudo -f /etc/sudoers.d/20-saphana
# Insert the following lines and then save
This is important step to optimize the integration with the cluster and improve
Defaults!SITE1_SOK, SITE1_SFAIL, SITE2_SOK, SITE2_SFAIL !requiretty ```
-3. **[A]** Start SAP HANA on both nodes. Execute as <sid\>adm.
+4. **[A]** Start SAP HANA on both nodes. Execute as <sid\>adm.
```bash
sapcontrol -nr 03 -function StartSystem
```
-4. **[1]** Verify the hook installation. Execute as <sid\>adm on the active HANA system replication site.
+5. **[1]** Verify the hook installation. Execute as <sid\>adm on the active HANA system replication site.
```bash cdtrace
This is important step to optimize the integration with the cluster and improve
# 2021-04-12 21:36:16.911343 ha_dr_SAPHanaSR SFAIL
# 2021-04-12 21:36:29.147808 ha_dr_SAPHanaSR SFAIL
# 2021-04-12 21:37:04.898680 ha_dr_SAPHanaSR SOK
```

For more details on the implementation of the SAP HANA system replication hook see [Enable the SAP HA/DR provider hook](https://access.redhat.com/articles/3004101#enable-srhook).
-
-## Create SAP HANA cluster resources
-
-Install the SAP HANA resource agents on **all nodes**. Make sure to enable a repository that contains the package. You don't need to enable additional repositories, if using RHEL 8.x HA-enabled image.
-<pre><code># Enable repository that contains SAP HANA resource agents
-sudo subscription-manager repos --enable="rhel-sap-hana-for-rhel-7-server-rpms"
-
-sudo yum install -y resource-agents-sap-hana
-</code></pre>
+## Create SAP HANA cluster resources
-Next, create the HANA topology. Run the following commands on one of the Pacemaker cluster nodes:
+Create the HANA topology. Run the following commands on one of the Pacemaker cluster nodes:
<pre><code>sudo pcs property set maintenance-mode=true
Next, create the HANA resources.
If building a cluster on **RHEL 7.x**, use the following commands:

<pre><code># Replace the bold string with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer.
-#
+ sudo pcs resource create SAPHana_<b>HN1</b>_<b>03</b> SAPHana SID=<b>HN1</b> InstanceNumber=<b>03</b> PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \ op start timeout=3600 op stop timeout=3600 \ op monitor interval=61 role="Slave" timeout=700 \
sudo pcs property set maintenance-mode=false
If building a cluster on **RHEL 8.x**, use the following commands:
<pre><code># Replace the bold string with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer.
-#
+ sudo pcs resource create SAPHana_<b>HN1</b>_<b>03</b> SAPHana SID=<b>HN1</b> InstanceNumber=<b>03</b> PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
+ op start timeout=3600 op stop timeout=3600 \
+ op monitor interval=61 role="Slave" timeout=700 \
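After the resources are created and maintenance mode is cleared, a generic Pacemaker sanity check (a sketch of standard `pcs` verification commands, not part of this article's script) is to inspect the cluster state:

```shell
# Show overall cluster health, node membership, and resource status.
sudo pcs status

# List only the configured resources and their current roles
# (the SAPHanaTopology clone and the SAPHana promotable resource
# should report a started primary/secondary pair).
sudo pcs status resources
```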
virtual-network Troubleshoot Nat Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-connectivity.md
# Troubleshoot Azure Virtual Network NAT connectivity
-This article provides guidance on how to troubleshoot and resolve common outbound connectivity issues with your NAT gateway resource. This article also provides guidance on best practices for designing applications to use outbound connections efficiently.
+This article provides guidance on how to troubleshoot and resolve common outbound connectivity issues with your NAT gateway resource, as well as best practices on how to design applications to use outbound connections efficiently.
## SNAT exhaustion due to NAT gateway configuration
-Common SNAT exhaustion issues with NAT gateway typically have to do with the configurations on the NAT gateway. Common SNAT exhaustion issues include:
+Common SNAT exhaustion issues with NAT gateway typically have to do with the configurations on the NAT gateway, such as:
* Outbound connectivity on NAT gateway not scaled out with enough public IP addresses.
Common SNAT exhaustion issues with NAT gateway typically have to do with the con
### Outbound connectivity not scaled out enough
-Each public IP address provides 64,512 SNAT ports to subnets attached to NAT gateway. From those available SNAT ports, NAT gateway can support up to 50,000 concurrent connections to the same destination endpoint. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. More public IP addresses may need to be added to NAT gateway in order to provide more SNAT ports for outbound connectivity.
+Each public IP address provides 64,512 SNAT ports for connecting outbound with NAT gateway. From those available SNAT ports, NAT gateway can support up to 50,000 concurrent connections to the same destination endpoint. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. Additional public IP addresses on NAT gateway may be required in order to provide more SNAT ports for outbound connectivity.
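As a back-of-the-envelope sizing check (a sketch using only the figures stated above: 64,512 SNAT ports per public IP and a maximum of 16 IPs per gateway; the IP count is a hypothetical value), you can estimate the total SNAT port inventory:

```shell
# Estimate SNAT port inventory for a NAT gateway.
ports_per_ip=64512        # SNAT ports provided per public IP (from the text above)
ips=4                     # hypothetical number of public IPs assigned to the gateway

total=$((ips * ports_per_ip))
max=$((16 * ports_per_ip))   # 16 IPs is the stated per-gateway limit

echo "Total SNAT ports with ${ips} IPs: ${total}"
echo "Maximum possible with 16 IPs: ${max}"
```

With 4 public IPs this yields 258,048 SNAT ports; the 16-IP ceiling caps the inventory at 1,032,192 ports, after which the mitigation is to split subnets across multiple NAT gateways as described in the table below.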
-The table below describes two common scenarios in which outbound connectivity may not be scaled out enough and how to validate and mitigate these issues:
+The table below describes two common outbound connectivity failure scenarios due to scalability issues as well as how to validate and mitigate these issues:
| Scenario | Evidence | Mitigation |
|---|---|---|
-| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | Determine if you can add more public IP addresses or public IP prefixes. This addition will allow for up to 16 IP addresses in total to your NAT gateway. This addition will provide more inventory for available SNAT ports (64,000 per IP address) and allow you to scale your scenario further. |
-| You've already given 16 IP addresses and still are experiencing SNAT port exhaustion. | Attempt to add more IP addresses fails. Total number of IP addresses from public IP address resources or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
+| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | Add more public IP addresses or public IP prefixes as needed (assign up to 16 IP addresses in total to your NAT gateway). This addition will provide more SNAT port inventory and allow you to scale your scenario further. |
+| You've already assigned 16 IP addresses to your NAT gateway and still are experiencing SNAT port exhaustion. | Attempt to add more IP addresses fails. Total number of IP addresses from public IP address or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
>[!NOTE]
->It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports to the inventory by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns.
+>It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns. See [best practices for efficient use of outbound connections](#best-practices-for-efficient-use-of-outbound-connections) for additional guidance.
### TCP idle timeout timers set higher than the default value
-The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If this setting is changed to a higher value than the default, NAT gateway will hold on to flows longer, and can create [extra pressure on SNAT port inventory](/azure/virtual-network/nat-gateway/nat-gateway-resource#timers). The table below describes a common scenario in which a high TCP idle timeout may be causing SNAT exhaustion and provides possible mitigation steps to take:
+The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If the timer is set to a higher value than the default, NAT gateway will hold on to flows longer, and can create [extra pressure on SNAT port inventory](/azure/virtual-network/nat-gateway/nat-gateway-resource#timers). The table below describes a scenario where a long TCP idle timeout timer is causing SNAT exhaustion and provides possible mitigation steps to take:
| Scenario | Evidence | Mitigation |
|---|---|---|
-| You want to ensure that TCP connections stay active for long periods of time without idle time-out. You increase the TCP idle timeout timer setting. After a period of time, you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | You have a few possible mitigation steps that you can take to resolve SNAT port exhaustion: </br></br> **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer can't be set lower than 4 minutes. </br></br> Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. </br></br> **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). </br></br> For connections to Azure PaaS services, use **[Private Link](../../private-link/private-link-overview.md)**. Private Link eliminates the need to use public IPs of your NAT gateway, which frees up more SNAT ports for outbound connections to the internet. |
+| You want to ensure that TCP connections stay active for long periods of time without going idle and timing out. You increase the TCP idle timeout timer setting. After a period of time, you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | You have a few possible mitigation steps that you can take to resolve SNAT port exhaustion: </br></br> **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer can't be set lower than 4 minutes. </br></br> Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. </br></br> **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). </br></br> For connections to Azure PaaS services, use **[Private Link](../../private-link/private-link-overview.md)**. Private Link eliminates the need to use public IPs of your NAT gateway, which frees up more SNAT ports for outbound connections to the internet. |
## Connection failures due to idle timeouts

### TCP idle timeout
-As described in the [TCP timers](#tcp-idle-timeout-timers-set-higher-than-the-default-value) section above, TCP keepalives should be used instead to refresh idle flows and reset the idle timeout. TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection. To learn more, see [Timer considerations](/azure/virtual-network/nat-gateway/nat-gateway-resource#timer-considerations).
+As described in the [TCP timers](#tcp-idle-timeout-timers-set-higher-than-the-default-value) section above, TCP keepalives should be used to refresh idle flows and reset the idle timeout. TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection. To learn more, see [Timer considerations](/azure/virtual-network/nat-gateway/nat-gateway-resource#timer-considerations).
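On Linux clients behind the NAT gateway, kernel keepalive settings control when probes are sent on sockets that enable `SO_KEEPALIVE`. As an illustrative sketch (these are host-level Linux settings, not NAT gateway parameters; the values shown are common kernel defaults and may differ on your image), you can inspect them:

```shell
# Seconds a connection may sit idle before the first keepalive probe is sent
# (applies only to sockets that set SO_KEEPALIVE).
cat /proc/sys/net/ipv4/tcp_keepalive_time

# Seconds between subsequent probes, and how many unanswered probes
# are sent before the connection is declared dead.
cat /proc/sys/net/ipv4/tcp_keepalive_intvl
cat /proc/sys/net/ipv4/tcp_keepalive_probes
```

To keep flows alive through the NAT gateway's 4-minute default idle timeout, the probe interval would need to be tuned below that timeout, either at this kernel level or per-socket in the application.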
>[!Note]
>Increasing the TCP idle timeout is a last resort and may not resolve the root cause. A long timeout can cause low-rate failures when timeout expires and introduce delay and unnecessary failures.
UDP idle timeout timers are set to 4 minutes. Unlike TCP idle timeout timers for
| Scenario | Evidence | Mitigation |
|---|---|---|
-| You notice that UDP traffic is dropping connections that need to be maintained for long periods of time. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor, **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | A few possible mitigation steps that can be taken: - **Enable UDP keepalives**. Keep in mind that when a UDP keepalive is enabled, it's only active for one direction in a connection. This behavior means that the connection can still time out from going idle on the other side of a connection. To prevent a UDP connection from idle time-out, UDP keepalives should be enabled for both directions in a connection flow. - **Application layer keepalives** can also be used to refresh idle flows and reset the idle timeout. Check the server side for what options exist for application specific keepalives. |
+| You notice that UDP traffic is dropping connections that need to be maintained for long periods of time. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor, **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | A few possible mitigation steps that can be taken: - **Enable UDP keepalives**. Keep in mind that when a UDP keepalive is enabled, it's only active for one direction in a connection, so the connection can still time out from going idle on the other side of a connection. To prevent a UDP connection from idle time-out, UDP keepalives should be enabled for both directions in a connection flow. - **Application layer keepalives** can also be used to refresh idle flows and reset the idle timeout. Check the server side for what options exist for application specific keepalives. |
## NAT gateway public IP not being used for outbound traffic
Connection failures at the internet destination endpoint could be due to multipl
* Volumetric DDoS mitigations or transport layer traffic shaping.
-Use NAT gateway [metrics]((nat-metrics.md) in Azure monitor to diagnose connection issues:
+Use NAT gateway [metrics](nat-metrics.md) in Azure Monitor to diagnose connection issues:
* Look at packet count at the source and the destination (if available) to determine how many connection attempts were made.
Use NAT gateway [metrics]((nat-metrics.md) in Azure monitor to diagnose connecti
What else to check for:
-* Check for [SNAT exhaustion]((#snat-exhaustion-due-to-nat-gateway-configuration).
+* Check for [SNAT exhaustion](#snat-exhaustion-due-to-nat-gateway-configuration).
* Validate connectivity to an endpoint in the same region or elsewhere for comparison.
When possible, Private Link should be used to connect directly from your virtual
To create a Private Link, see the following Quickstart guides to get started:

* [Create a Private Endpoint](/azure/private-link/create-private-endpoint-portal?tabs=dynamic-ip)
* [Create a Private Link](/azure/private-link/create-private-link-service-portal)

## Next steps
We're always looking to improve the experience of our customers. If you're exper
To learn more about NAT gateway, see:

* [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview)
* [NAT gateway resource](/azure/virtual-network/nat-gateway/nat-gateway-resource)
* [Metrics and alerts for NAT gateway resources](/azure/virtual-network/nat-gateway/nat-metrics)