Updates from: 09/19/2022 05:43:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md
The following errors can be returned by the Azure Active Directory B2C service.
| `AADB2C99059` | The supplied request must present a code_challenge. Required for single-page apps using the authorization code flow.| [Authorization code flow](authorization-code-flow.md) |
| `AADB2C90067` | The post logout redirect URI '{0}' has an invalid format. Specify an https based URL such as 'https://example.com/return' or for native clients use the IETF native client URI 'urn:ietf:wg:oauth:2.0:oob'. | [Send a sign-out request](openid-connect.md#send-a-sign-out-request) |
| `AADB2C90068` | The provided application with ID '{0}' is not valid against this service. Please use an application created via the B2C portal and try again. | [Register a web application in Azure AD B2C](tutorial-register-applications.md) |
+| `AADB2C90073` | KeyContainer with 'id': '{0}' cannot be found in the directory '{1}' |
| `AADB2C90075` | The claims exchange '{0}' specified in step '{1}' returned HTTP error response with Code '{2}' and Reason '{3}'. |
| `AADB2C90077` | User does not have an existing session and request prompt parameter has a value of '{0}'. |
| `AADB2C90079` | Clients must send a client_secret when redeeming a confidential grant. | [Create a web app client secret](configure-authentication-sample-web-app-with-api.md#step-24-create-a-web-app-client-secret) |
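These codes typically surface in the `error_description` field of a standard OAuth 2.0 error response from the B2C endpoints. The following is an illustrative sketch only; the `error` value shown is a plausible placeholder, and the exact fields vary by endpoint and flow:

```http
HTTP/1.1 400 Bad Request
Content-Type: application/json

{
  "error": "invalid_client",
  "error_description": "AADB2C90079: Clients must send a client_secret when redeeming a confidential grant."
}
```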
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
The following are the IDs for a content definition with an ID of `api.localaccou
| **months** | January, February, March, April, May, June, July, August, September, October, November, December |
| **ver_fail_server** | We are having trouble verifying your email address. Please enter a valid email address and try again. |
| **error_requiredFieldMissing** | A required field is missing. Please fill out all required fields and try again. |
+| **heading** | User Details |
| **initial_intro** | Please provide the following details. |
| **ver_but_resend** | Send new code |
| **button_continue** | Create |
The following example shows the use of some of the user interface elements in th
<LocalizedString ElementType="UxElement" StringId="error_passwordEntryMismatch">The password entry fields do not match. Please enter the same password in both fields and try again.</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="error_requiredFieldMissing">A required field is missing. Please fill out all required fields and try again.</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="helplink_text">What is this?</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="heading">User Details</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="initial_intro">Please provide the following details.</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="preloader_alt">Please wait</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="required_field">This information is required.</LocalizedString>
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/string-transformations.md
Extracts parts of a string claim type, beginning at the character at the specifi
| InputClaim | inputClaim | string | The claim type, which contains the string. |
| InputParameter | startIndex | int | The zero-based starting character position of a substring in this instance. |
| InputParameter | length | int | The number of characters in the substring. |
-| OutputClaim | outputClaim | boolean | A string that is equivalent to the substring of length that begins at startIndex in this instance, or Empty if startIndex is equal to the length of this instance and length is zero. |
+| OutputClaim | outputClaim | string | A string that is equivalent to the substring of length that begins at startIndex in this instance, or Empty if startIndex is equal to the length of this instance and length is zero. |
### Example of StringSubstring
active-directory Concept Certificate Based Authentication Smartcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-smartcard.md
Users will get a primary refresh token (PRT) from Azure Active Directory after t
## Restrictions and caveats
- The Windows login only works with the latest preview build of Windows 11. We are working to backport the functionality to Windows 10 and Windows Server.
-- Only Windows machines that are joined to either or a hybrid environment can test SmartCard logon.
+- Only Windows machines that are joined to either Azure AD or a hybrid environment can test SmartCard logon.
- Like in the other Azure AD CBA scenarios, the user must be on a managed domain or using staged rollout and cannot use a federated authentication model.

## Next steps
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
Title: Use additional context in Microsoft Authenticator notifications (Preview) - Azure Active Directory
+ Title: Use additional context in Microsoft Authenticator notifications - Azure Active Directory
description: Learn how to use additional context in MFA notifications
Previously updated : 09/09/2022
Last updated : 09/15/2022
# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use additional context in Microsoft Authenticator app notifications (Preview) - Authentication Methods Policy
+# How to use additional context in Microsoft Authenticator notifications - Authentication methods policy
-This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator push and passwordless notifications. The schema for the API to enable application name and geographic location is currently being updated. **While the API is updated over the next two weeks, you should only use the Azure AD portal to enable application name and geographic location.**
+This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator passwordless and push notifications.
## Prerequisites
-Your organization will need to enable Microsoft Authenticator push notifications for some users or groups by using the Azure AD portal. The new Authentication Methods Policy API will soon be ready as another configuration option.
+- Your organization needs to enable Microsoft Authenticator passwordless and push notifications for some users or groups by using the new Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API.
->[!NOTE]
->Additional context can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication Method Policy.
+ >[!NOTE]
+ >The policy schema for Microsoft Graph APIs has been improved. The older policy schema is now deprecated. Make sure you use the new schema to help prevent errors.
+
+- Additional context can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication methods policy.
## Passwordless phone sign-in and multifactor authentication
-When a user receives a passwordless phone sign-in or MFA push notification in the Authenticator app, they'll see the name of the application that requests the approval and the location based on the IP address where the sign-in originated from.
+When a user receives a passwordless phone sign-in or MFA push notification in Microsoft Authenticator, they'll see the name of the application that requests the approval and the location based on the IP address where the sign-in originated from.
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location.png" alt-text="Screenshot of additional context in the MFA push notification.":::
The additional context can be combined with [number matching](how-to-mfa-number-
:::image type="content" border="false" source="./media/howto-authentication-passwordless-phone/location-with-number-match.png" alt-text="Screenshot of additional context with number matching in the MFA push notification.":::
-## Enable additional context
+### Policy schema changes
+
+You can enable and disable application name and geographic location separately. Under featureSettings, you can use the following name mapping for each feature:
+
+- Application name: displayAppInformationRequiredState
+- Geographic location: displayLocationInformationRequiredState
+
+>[!NOTE]
+>Make sure you use the new policy schema for Microsoft Graph APIs. In Graph Explorer, you'll need to consent to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+
+Identify your single target group for each of the features. Then use the following API endpoint to change the displayAppInformationRequiredState or displayLocationInformationRequiredState properties under featureSettings to **enabled** and include or exclude the groups you want:
+
+```http
+https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
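+
+For reference, here's a minimal sketch of the request framing, with an abbreviated body for illustration only. As described below, GET the existing policy first and PATCH the full configuration so you don't overwrite other settings:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+Content-Type: application/json
+
+{
+  "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+  "featureSettings": {
+    "displayAppInformationRequiredState": {
+      "state": "enabled",
+      "includeTarget": { "targetType": "group", "id": "all_users" }
+    }
+  }
+}
+```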
+
+#### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|||-|
+| id | String | The Authentication method policy identifier. |
+| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
+
+**RELATIONSHIPS**
+
+| Relationship | Type | Description |
+|--||-|
+| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of users or groups who are enabled to use the authentication method. |
+| featureSettings | [microsoftAuthenticatorFeatureSettings](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of Microsoft Authenticator features. |
+
+#### MicrosoftAuthenticator includeTarget properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
+| id | String | Object ID of an Azure AD user or group. |
+| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.|
+
+#### MicrosoftAuthenticator featureSettings properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| numberMatchingRequiredState | authenticationMethodFeatureConfiguration | Require number matching for MFA notifications. Value is ignored for phone sign-in notifications. |
+| displayAppInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown the application name in the Microsoft Authenticator notification. |
+| displayLocationInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown the geographic location context in the Microsoft Authenticator notification. |
+
+#### Authentication method feature configuration properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group for each feature.|
+| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can only include one group for each feature.|
+| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+
+#### Feature target properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| id | String | ID of the entity targeted. |
+| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. |
+
+#### Example of how to enable additional context for all users
+
+In **featureSettings**, change **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled**.
+
+The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
-To enable application name or geographic location, complete the following steps:
+You might need to PATCH the entire schema to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example shows how to update **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+//Retrieve your existing policy via a GET.
+//Use the response body as the starting point for your request body, then update it as shown in the request body below.
+//Change the query to PATCH and run the query.
+
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "all_users"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "all_users"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
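+
+As with the examples that follow, you can confirm the change by running GET against the same endpoint and checking **featureSettings** in the response:
+
+```http
+GET https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```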
+
+
+#### Example of how to enable application name and geographic location for separate groups
+
+In **featureSettings**, change **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled.**
+Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+
+You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "a229e768-961a-4401-aadb-11d836885c11"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+To verify the update, run GET again and check the ObjectID:
+
+```http
+GET https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
+
+#### Example of how to disable application name and only enable geographic location
+
+In **featureSettings**, change the state of **displayAppInformationRequiredState** to **default** or **disabled** and **displayLocationInformationRequiredState** to **enabled.**
+Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+
+You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "disabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "a229e768-961a-4401-aadb-11d836885c11"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+#### Example of how to exclude a group from application name and geographic location
+
+In **featureSettings**, change the states of **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **default** to **enabled.**
+Inside the **includeTarget** for each featureSetting, change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+
+In addition, for each of the features, you'll change the id of the excludeTarget to the ObjectID of the group from the Azure AD portal. This will exclude that group from seeing application name or geographic location.
+
+You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "5af8a0da-5420-4d69-bf3c-8b129f3449ce"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "a229e768-961a-4401-aadb-11d836885c11"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "b6bab067-5f28-4dac-ab30-7169311d69e8"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
+#### Example of removing the excluded group
+
+In **featureSettings**, change the state of **displayAppInformationRequiredState** from **default** to **enabled**.
+You need to change the **id** of the **excludeTarget** to `00000000-0000-0000-0000-000000000000`.
+
+You need to PATCH the entire schema to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The following example shows an update to **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the application name or geographic location. Users who aren't enabled for Microsoft Authenticator won't see these features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+    "displayAppInformationRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+        "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+### Turn off additional context
+
+To turn off additional context, you'll need to PATCH **displayAppInformationRequiredState** and **displayLocationInformationRequiredState** from **enabled** to **disabled**/**default**. You can also turn off just one of the features.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "displayAppInformationRequiredState": {
+ "state": "disabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "44561710-f0cb-4ac9-ab9c-e6c394370823"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ },
+ "displayLocationInformationRequiredState": {
+ "state": "disabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "a229e768-961a-4401-aadb-11d836885c11"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+## Enable additional context in the portal
+
+To enable application name or geographic location in the Azure AD portal, complete the following steps:
1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Any**.
To enable application name or geographic location, complete the following steps:
:::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-settings-additional-context.png" alt-text="Screenshot of how to enable Microsoft Authenticator settings for Any authentication mode.":::
-1. On the **Configure** tab, for **Show application name in push and passwordless notifications (Preview)**, change **Status** to **Enabled**, choose who to include or exclude from the policy, and click **Save**.
+1. On the **Configure** tab, for **Show application name in push and passwordless notifications**, change **Status** to **Enabled**, choose who to include or exclude from the policy, and click **Save**.
:::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-app-name.png" alt-text="Screenshot of how to enable application name.":::
- Then do the same for **Show geographic location in push and passwordless notifications (Preview)**.
+ Then do the same for **Show geographic location in push and passwordless notifications**.
:::image type="content" border="true" source="./media/how-to-mfa-additional-context/enable-geolocation.png" alt-text="Screenshot of how to enable geographic location.":::
To enable application name or geographic location, complete the following steps:
## Known issues
-Additional context is not supported for Network Policy Server (NPS) or Active Directory Federation Services (AD FS).
+Additional context isn't supported for Network Policy Server (NPS) or Active Directory Federation Services (AD FS).
## Next steps

[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
Title: Use number matching in multifactor authentication (MFA) notifications (Preview) - Azure Active Directory
+ Title: Use number matching in multifactor authentication (MFA) notifications - Azure Active Directory
description: Learn how to use number matching in MFA notifications
Previously updated : 09/09/2022
Last updated : 09/15/2022
# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
+# How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy
-This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. The schema for the API to enable number match is currently being updated. **While the API is updated over the next two weeks, you should only use the Azure AD portal to enable number match.**
+This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
>[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator that will be enabled by default for all tenants a few months after general availability (GA).<br>
+>Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled by default for all tenants a few months after general availability (GA).<br>
>We highly recommend enabling number matching in the near-term for improved sign-in security.

## Prerequisites
-Your organization will need to enable Authenticator (traditional second factor) push notifications for some users or groups only by using the Azure AD portal. The new Authentication Methods Policy API will soon be ready as another configuration option. If your organization is using ADFS adapter or NPS extensions, please upgrade to the latest versions for a consistent experience.
+- Your organization needs to enable Microsoft Authenticator (traditional second factor) push notifications for some users or groups by using the new Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API.
-## Number matching
+ >[!NOTE]
+ >The policy schema for Microsoft Graph APIs has been improved. The older policy schema is now deprecated. Make sure you use the new schema to help prevent errors.
+
+- If your organization is using the AD FS adapter or NPS extension, upgrade to the latest versions for a consistent experience.
-<!check below with Mayur. The bit about the policy came from the number match FAQ at the end.>
+## Number matching
-Number matching can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication Method Policy.
+Number matching can be targeted to only a single group, which can be dynamic or nested. On-premises synchronized security groups and cloud-only security groups are supported for the Authentication methods policy.
Number matching is available for the following scenarios. When enabled, all scenarios support number matching.
Number matching is available for the following scenarios. When enabled, all scen
>[!NOTE]
>For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-Number matching is not supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
+Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
### Multifactor authentication
To create the registry key that overrides push notifications:
Value = TRUE
1. Restart the NPS Service.
-## Enable number matching
+### Policy schema changes
+
+Identify your single target group for the schema configuration. Then use the following API endpoint to change the numberMatchingRequiredState property under featureSettings to **enabled**, and include or exclude groups:
+
+```http
+https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
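+
+For reference, here's a minimal sketch of the request framing, with an abbreviated body for illustration only. As described below, GET the existing policy first and PATCH the full configuration so you don't overwrite other settings:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+Content-Type: application/json
+
+{
+  "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+  "featureSettings": {
+    "numberMatchingRequiredState": {
+      "state": "enabled",
+      "includeTarget": { "targetType": "group", "id": "all_users" }
+    }
+  }
+}
+```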
+
+>[!NOTE]
+>Make sure you use the new policy schema for Microsoft Graph APIs. In Graph Explorer, you'll need to consent to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
++
+#### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|||-|
+| id | String | The authentication method policy identifier. |
+| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |
+
+**RELATIONSHIPS**
+
+| Relationship | Type | Description |
+|--||-|
+| includeTargets | [microsoftAuthenticatorAuthenticationMethodTarget](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of users or groups who are enabled to use the authentication method |
+| featureSettings | [microsoftAuthenticatorFeatureSettings](/graph/api/resources/passwordlessmicrosoftauthenticatorauthenticationmethodtarget) collection | A collection of Microsoft Authenticator features. |
+
+#### MicrosoftAuthenticator includeTarget properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
+| id | String | Object ID of an Azure AD user or group. |
+| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.|
+++
+#### MicrosoftAuthenticator featureSettings properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| numberMatchingRequiredState | authenticationMethodFeatureConfiguration | Require number matching for MFA notifications. Value is ignored for phone sign-in notifications. |
+| displayAppInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown the application name in the Microsoft Authenticator notification. |
+| displayLocationInformationRequiredState | authenticationMethodFeatureConfiguration | Determines whether the user is shown the geographic location context in the Microsoft Authenticator notification. |
+
+#### Authentication method feature configuration properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group for number matching. |
+| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can only include one group for number matching.|
+| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+
+#### Feature target properties
+
+**PROPERTIES**
+
+| Property | Type | Description |
+|-||-|
+| id | String | ID of the entity targeted. |
+| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. |
+
+>[!NOTE]
+>Number matching can be enabled only for a single group.
+
+#### Example of how to enable number matching for all users
+
+In **featureSettings**, you'll need to change the **numberMatchingRequiredState** from **default** to **enabled**.
+
+The value of Authentication Mode can be either **any** or **push**, depending on whether you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
+
+>[!NOTE]
+>For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-To enable number matching, complete the following steps:
+You might need to PATCH the entire schema to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **numberMatchingRequiredState** under **featureSettings**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
+
+```json
+//Retrieve your existing policy via a GET.
+//Use the response body as the starting point for your request body, then update it as shown in the request body below.
+//Change the query to PATCH and run the query.
+
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "numberMatchingRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "all_users"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+      "authenticationMode": "any"
+ }
+ ]
+}
+
+```
+
+To confirm the update has applied, run a GET request against the following endpoint:
+
+```http
+GET https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
+
+#### Example of how to enable number matching for a single group
+
+In **featureSettings**, you'll need to change the **numberMatchingRequiredState** value from **default** to **enabled.**
+Inside the **includeTarget**, you'll need to change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+
+You need to PATCH the entire configuration to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will see the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "numberMatchingRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+ "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+To verify the update, run GET again and check the ObjectID:
+
+```http
+GET https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+```
+
+#### Example of removing the excluded group from number matching
+
+In **featureSettings**, you'll need to change the **numberMatchingRequiredState** value from **default** to **enabled.**
+You need to change the **id** of the **excludeTarget** to `00000000-0000-0000-0000-000000000000`.
+
+You need to PATCH the entire configuration to prevent overwriting any previous configuration. We recommend that you do a GET first, and then update only the relevant fields and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
+
+Only users who are enabled for Microsoft Authenticator under Microsoft Authenticator's **includeTargets** will be excluded from the number match requirement. Users who aren't enabled for Microsoft Authenticator won't see the feature.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "numberMatchingRequiredState": {
+ "state": "enabled",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+        "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+### Turn off number matching
+
+To turn off number matching, you'll need to PATCH **numberMatchingRequiredState** from **enabled** to **disabled** or **default**.
+
+```json
+{
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
+ "id": "MicrosoftAuthenticator",
+ "state": "enabled",
+ "featureSettings": {
+ "numberMatchingRequiredState": {
+ "state": "default",
+ "includeTarget": {
+ "targetType": "group",
+ "id": "1ca44590-e896-4dbe-98ed-b140b1e7a53a"
+ },
+ "excludeTarget": {
+ "targetType": "group",
+        "id": "00000000-0000-0000-0000-000000000000"
+ }
+ }
+ },
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets": [
+ {
+ "targetType": "group",
+ "id": "all_users",
+ "isRegistrationRequired": false,
+ "authenticationMode": "any"
+ }
+ ]
+}
+```
+
+## Enable number matching in the portal
+
+To enable number matching in the Azure AD portal, complete the following steps:
1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
-1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone, and change **Authentication mode** to **Push**.
+1. On the **Basics** tab, click **Yes** and **All users** to enable the policy for everyone or add selected users and groups. Set the **Authentication mode** for these users/groups to **Any**/**Push**.
- Only users who are enabled for Microsoft Authenticator here can be included in the policy to require number matching for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see a number match.
+ Only users who are enabled for Microsoft Authenticator here can be included in the policy to require number matching for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature.
:::image type="content" border="true" source="./media/how-to-mfa-number-match/enable-settings-number-match.png" alt-text="Screenshot of how to enable Microsoft Authenticator settings for Push authentication mode.":::
To enable number matching, complete the following steps:
## Next steps
-[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md
const msal = require('@azure/msal-node');
const pca = new msal.PublicClientApplication({ auth: {
- clientId = "YOUR_CLIENT_ID"
+ clientId: "YOUR_CLIENT_ID"
} }); ```
const msal = require('@azure/msal-node');
const cca = new msal.ConfidentialClientApplication({ auth: {
- clientId = "YOUR_CLIENT_ID",
- clientSecret = "YOUR_CLIENT_SECRET"
+ clientId: "YOUR_CLIENT_ID",
+ clientSecret: "YOUR_CLIENT_SECRET"
} }); ```
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
-# Azure Active Directory security operations guide for Applications
+# Azure Active Directory security operations guide for applications
Applications have an attack surface for security breaches and must be monitored. While not targeted as often as user accounts, breaches can occur. Because applications often run without human intervention, the attacks may be harder to detect.
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level with security information and event management (SIEM) capabilities.
-* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where there are Sigma templates for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where there are Sigma templates for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)** – automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
Many applications use credentials to authenticate in Azure AD. Any other credent
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| -|-|-|-|-|
-| Added credentials to existing applications| High| Azure AD Audit logs| Service-Core Directory, Category-ApplicationManagement <br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application| Alert when credentials are: added outside of normal business hours or workflows, of types not used in your environment, or added to a non-SAML flow supporting service principal.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Added credentials to existing applications| High| Azure AD Audit logs| Service-Core Directory, Category-ApplicationManagement <br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application| Alert when credentials are: added outside of normal business hours or workflows, of types not used in your environment, or added to a non-SAML flow supporting service principal.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Credentials with a lifetime longer than your policies allow.| Medium| Microsoft Graph| State and end date of Application Key credentials<br>-and-<br>Application password credentials| You can use MS Graph API to find the start and end date of credentials, and evaluate longer-than-allowed lifetimes. See PowerShell script following this table. |

The following pre-built monitoring and alerts are available:
Like an administrator account, applications can be assigned privileged roles. Ap
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| App assigned to Azure RBAC role, or Azure AD Role| High to Medium| Azure AD Audit logs| Type: service principal<br>Activity: "Add member to role" or "Add eligible member to role"<br>-or-<br>"Add scoped member to role."| For highly privileged roles such as Global Administrator, risk is high. For lower privileged roles risk is medium. Alert anytime an application is assigned to an Azure role or Azure AD role outside of normal change management or configuration procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedPrivilegedRole.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| App assigned to Azure RBAC role, or Azure AD Role| High to Medium| Azure AD Audit logs| Type: service principal<br>Activity: "Add member to role" or "Add eligible member to role"<br>-or-<br>"Add scoped member to role."| For highly privileged roles such as Global Administrator, risk is high. For lower privileged roles risk is medium. Alert anytime an application is assigned to an Azure role or Azure AD role outside of normal change management or configuration procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedPrivilegedRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
### Application granted highly privileged permissions
Applications should follow the principle of least privilege. Investigate applica
| What to monitor|Risk Level|Where| Filter/sub-filter| Notes|
|-|-|-|-|-|
-| App granted highly privileged permissions, such as permissions with "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)| High |Azure AD Audit logs| "Add app role assignment to service principal", <br>- where-<br> Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>AppRole.Value identifies a highly privileged application permission (app role).| Apps granted broad permissions such as "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Administrator granting either application permissions (app roles) or highly privileged delegated permissions |High| Microsoft 365 portal| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>"Add delegated permission grant", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions.| Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/MailPermissionsAddedToApplication.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. |High| Azure AD Audit logs| "Add delegated permission grant" <br>-or-<br>"Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on)| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Application permissions (app roles) for other APIs are granted |Medium| Azure AD Audit logs| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies any other API.| Alert as in the preceding row.<br>[Link to Sigma repo](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Highly privileged delegated permissions are granted on behalf of all users |High| Azure AD Audit logs| "Add delegated permission grant", where Target(s) identifies an API with sensitive data (such as Microsoft Graph), <br> DelegatedPermissionGrant.Scope includes high-privilege permissions, <br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals".| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/SuspiciousOAuthApp_OfflineAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| App granted highly privileged permissions, such as permissions with "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)| High |Azure AD Audit logs| "Add app role assignment to service principal", <br>- where-<br> Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>AppRole.Value identifies a highly privileged application permission (app role).| Apps granted broad permissions such as "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Administrator granting either application permissions (app roles) or highly privileged delegated permissions |High| Microsoft 365 portal| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>"Add delegated permission grant", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions.| Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/MailPermissionsAddedToApplication.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. |High| Azure AD Audit logs| "Add delegated permission grant" <br>-or-<br>"Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on)| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Application permissions (app roles) for other APIs are granted |Medium| Azure AD Audit logs| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies any other API.| Alert as in the preceding row.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Highly privileged delegated permissions are granted on behalf of all users |High| Azure AD Audit logs| "Add delegated permission grant", where Target(s) identifies an API with sensitive data (such as Microsoft Graph), <br> DelegatedPermissionGrant.Scope includes high-privilege permissions, <br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals".| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/SuspiciousOAuthApp_OfflineAccess.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
For more information on monitoring app permissions, see this tutorial: [Investigate and remediate risky OAuth apps](/cloud-app-security/investigate-risky-oauth).
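If these audit events are routed to a Log Analytics workspace, a query along the following lines can surface new application permission (app role) grants that name a broad permission. This is a minimal KQL sketch rather than one of the linked Sentinel templates; the permission watch list and the `.All` check are illustrative and should be tuned to your environment.

```kusto
// Sketch: app role assignments granted to service principals where the granted
// permission name looks broad. The watch list below is illustrative only.
let highPrivilegePermissions = dynamic(["Directory.ReadWrite.All", "RoleManagement.ReadWrite.Directory", "Mail.ReadWrite", "Mail.Send"]);
AuditLogs
| where OperationName =~ "Add app role assignment to service principal"
| mv-expand TargetResource = TargetResources
| mv-expand Property = TargetResource.modifiedProperties
| where tostring(Property.displayName) == "AppRole.Value"
| extend GrantedPermission = tostring(Property.newValue)
| where GrantedPermission has_any (highPrivilegePermissions) or GrantedPermission contains ".All"
| project TimeGenerated, GrantedPermission,
          GrantingActor = tostring(InitiatedBy.user.userPrincipalName),
          TargetServicePrincipal = tostring(TargetResource.displayName)
```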
Use Azure Key Vault to store your tenant's secrets. We recommend you pay atten
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)| Resource type: Key Vaults| Look for: any access to Key Vault outside regular processes and hours, any changes to Key Vault ACL.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AzureDiagnostics/AzureKeyVaultAccessManipulation.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)| Resource type: Key Vaults| Look for: any access to Key Vault outside regular processes and hours, any changes to Key Vault ACL.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AzureDiagnostics/AzureKeyVaultAccessManipulation.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
After you set up Azure Key Vault, [enable logging](../../key-vault/general/howto-logging.md?tabs=azure-cli). See [how and when your Key Vaults are accessed](../../key-vault/general/logging.md?tabs=Vault), and [configure alerts](../../key-vault/general/alert.md) on Key Vault to notify assigned users or distribution lists via email, phone, text, or [Event Grid](../../key-vault/general/event-grid-overview.md) notification, if health is affected. In addition, setting up [monitoring](../../key-vault/general/alert.md) with Key Vault insights gives you a snapshot of Key Vault requests, performance, failures, and latency. [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) also has some [example queries](../../azure-monitor/logs/queries.md) for Azure Key Vault that can be accessed after selecting your Key Vault and then under "Monitoring" selecting "Logs".
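Where Key Vault diagnostic logs land in the classic AzureDiagnostics table, a query along these lines can highlight access outside normal hours. It's a hedged sketch: the operation names and the 06:00-20:00 window are assumptions to adjust to your own baseline.

```kusto
// Sketch: Key Vault data-plane reads and configuration changes outside a nominal
// 06:00-20:00 window. Operation names and hours are placeholders to tune.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT" and ResourceType == "VAULTS"
| where OperationName in ("SecretGet", "KeyGet", "CertificateGet", "VaultPatch")
| extend AccessHour = hourofday(TimeGenerated)
| where AccessHour < 6 or AccessHour > 20
| project TimeGenerated, Resource, OperationName, CallerIPAddress, ResultSignature
```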
After you set up Azure Key Vault, [enable logging](../../key-vault/general/howto
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: high profile or highly privileged accounts, app requests high-risk permissions, apps with suspicious names, for example generic, misspelled, etc.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/ConsentToApplicationDiscovery.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: high profile or highly privileged accounts, app requests high-risk permissions, apps with suspicious names, for example generic, misspelled, etc.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/ConsentToApplicationDiscovery.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
The act of consenting to an application isn't malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](../../security/fundamentals/steps-secure-identity.md).
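As a starting point for that investigation, a KQL sketch such as the following lists non-admin consent grants from the audit log, assuming the logs stream to Log Analytics. Property names inside TargetResources can vary, so verify them against a sample event.

```kusto
// Sketch: end-user (non-admin) consent to applications, with the consenting user
// and the application that received consent.
AuditLogs
| where OperationName =~ "Consent to application"
| mv-expand TargetResource = TargetResources
| mv-expand Property = TargetResource.modifiedProperties
| where tostring(Property.displayName) == "ConsentContext.IsAdminConsent"
      and tostring(Property.newValue) has "False"
| project TimeGenerated,
          ConsentingUser = tostring(InitiatedBy.user.userPrincipalName),
          Application = tostring(TargetResource.displayName)
```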
For more information on consent operations, see the following resources:
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| End-user consent stopped due to risk-based consent| Medium| Azure AD Audit logs| Core Directory / ApplicationManagement / Consent to application<br> Failure status reason = Microsoft.online.Security.userConsent<br>BlockedForRiskyAppsExceptions| Monitor and analyze any time consent is stopped due to risk. Look for: high profile or highly privileged accounts, app requests high-risk permissions, or apps with suspicious names, for example generic, misspelled, etc.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/End-userconsentstoppedduetorisk-basedconsent.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| End-user consent stopped due to risk-based consent| Medium| Azure AD Audit logs| Core Directory / ApplicationManagement / Consent to application<br> Failure status reason = Microsoft.online.Security.userConsent<br>BlockedForRiskyAppsExceptions| Monitor and analyze any time consent is stopped due to risk. Look for: high profile or highly privileged accounts, app requests high-risk permissions, or apps with suspicious names, for example generic, misspelled, etc.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/End-userconsentstoppedduetorisk-basedconsent.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
## Application authentication flows
Monitor application authentication using the following information:
| What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |
-| Applications that are using the ROPC authentication flow|Medium | Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-ROPC| A high level of trust is being placed in this application because the credentials can be cached or stored. Move to a more secure authentication flow if possible. ROPC should only be used in automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
-|Applications using the Device code flow |Low to medium|Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-Device Code|Device code flows are used for input-constrained devices, which might not be present in all environments. If successful device code flows appear without a need for them, investigate their validity. For more information, see [Microsoft identity platform and the OAuth 2.0 device authorization grant flow](../develop/v2-oauth2-device-code.md)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Applications that are using the ROPC authentication flow|Medium | Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-ROPC| A high level of trust is being placed in this application because the credentials can be cached or stored. Move to a more secure authentication flow if possible. ROPC should only be used in automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Applications using the Device code flow |Low to medium|Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-Device Code|Device code flows are used for input-constrained devices, which might not be present in all environments. If successful device code flows appear without a need for them, investigate their validity. For more information, see [Microsoft identity platform and the OAuth 2.0 device authorization grant flow](../develop/v2-oauth2-device-code.md)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
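A Log Analytics sketch for both flows, assuming the AuthenticationProtocol column is populated in your SigninLogs schema (value casing can differ between tenants):

```kusto
// Sketch: successful sign-ins that used the ROPC or device code grant,
// summarized per application and day.
SigninLogs
| where ResultType == "0"
| where AuthenticationProtocol in~ ("ropc", "deviceCode")
| summarize SignInCount = count(), Users = make_set(UserPrincipalName, 50)
    by AppDisplayName, AuthenticationProtocol, bin(TimeGenerated, 1d)
| order by SignInCount desc
```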
## Application configuration changes
Monitor changes to application configuration. Specifically, configuration change
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-| | Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress| For example, look for dangling URIs that point to a domain name that no longer exists or one that you don't explicitly own.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/URLAddedtoApplicationfromUnknownDomain.yaml)<br><br>[Link to Sigma repo](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or in the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you don't control.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationRedirectURLUpdate.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or in the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you don't control.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationRedirectURLUpdate.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
Alert when these changes are detected.
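To hunt for those patterns in Log Analytics, a sketch along these lines inspects the AppAddress property on application updates. Treat the property-name match and the HTTP/wildcard checks as assumptions to validate against a sample audit event.

```kusto
// Sketch: application updates that changed redirect URIs (AppAddress), flagging
// new values that are not HTTPS or that contain a wildcard.
AuditLogs
| where OperationName =~ "Update application"
| mv-expand TargetResource = TargetResources
| mv-expand Property = TargetResource.modifiedProperties
| where tostring(Property.displayName) has "AppAddress"
| extend NewAddresses = tostring(Property.newValue)
| where NewAddresses contains "http://" or NewAddresses contains "*"
| project TimeGenerated, Application = tostring(TargetResource.displayName),
          Actor = tostring(InitiatedBy.user.userPrincipalName), NewAddresses
```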
Alert when these changes are detected.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Changes to AppID URI| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal| Look for any AppID URI modifications, such as adding, modifying, or removing the URI.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationIDURIChanged.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to AppID URI| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal| Look for any AppID URI modifications, such as adding, modifying, or removing the URI.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationIDURIChanged.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
Alert when these changes are detected outside approved change management procedures.
Alert when these changes are detected outside approved change management procedu
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Changes to application ownership| Medium| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Add owner to application| Look for any instance of a user being added as an application owner outside of normal change management activities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationOwnership.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to application ownership| Medium| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Add owner to application| Look for any instance of a user being added as an application owner outside of normal change management activities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationOwnership.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
### Log-out URL modified or removed | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Changes to log-out URL| Low| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal| Look for any modifications to a sign-out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationLogoutURL.yaml) |
+| Changes to log-out URL| Low| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal| Look for any modifications to a sign-out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationLogoutURL.yaml) <br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
## Resources
active-directory Security Operations Consumer Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-consumer-accounts.md
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
Azure AD registered and Azure AD joined devices possess primary refresh tokens (
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Device registration or join completed without MFA| Medium| Sign-in logs| Activity: successful authentication to Device Registration Service. <br>And<br>No MFA required| Alert when: Any device registered or joined without MFA<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Changes to the Device Registration MFA toggle in Azure AD| High| Audit log| Activity: Set device registration policies| Look for: The toggle being set to off. There isn't an audit log entry. Schedule periodic checks.<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Device registration or join completed without MFA| Medium| Sign-in logs| Activity: successful authentication to Device Registration Service. <br>And<br>No MFA required| Alert when: Any device registered or joined without MFA<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to the Device Registration MFA toggle in Azure AD| High| Audit log| Activity: Set device registration policies| Look for: The toggle being set to off. There isn't an audit log entry. Schedule periodic checks.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Changes to Conditional Access policies requiring a domain-joined or compliant device.| High| Audit log| Changes to CA policies<br>| Alert when: Change to any policy requiring domain-joined or compliant devices, changes to trusted locations, or accounts or devices added to MFA policy exceptions. | You can create an alert that notifies appropriate administrators when a device is registered or joined without MFA by using Microsoft Sentinel.
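A minimal Log Analytics sketch for that alert, assuming the device registration endpoint appears in sign-in logs under the resource display name used below; confirm the exact name in your tenant before alerting on it.

```kusto
// Sketch: successful sign-ins to the device registration service that satisfied
// only single-factor authentication. The display name is an assumption to verify.
SigninLogs
| where ResultType == "0"
| where ResourceDisplayName =~ "Device Registration Service"
| where AuthenticationRequirement =~ "singleFactorAuthentication"
| project TimeGenerated, UserPrincipalName, IPAddress,
          DeviceName = tostring(DeviceDetail.displayName)
```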
It might not be possible to block access to all cloud and software-as-a-service
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when: any sign in by non-compliant devices, or any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuccessfulSigninFromNon-CompliantDevice.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Sign-ins by unknown devices| Low| Sign-in logs| DeviceDetail is empty, single factor authentication, or from a non-trusted location| Look for: any access from out of compliance devices, any access without MFA or trusted location<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AnomolousSingleFactorSignin.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when: any sign in by non-compliant devices, or any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuccessfulSigninFromNon-CompliantDevice.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Sign-ins by unknown devices| Low| Sign-in logs| DeviceDetail is empty, single factor authentication, or from a non-trusted location| Look for: any access from out of compliance devices, any access without MFA or trusted location<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AnomolousSingleFactorSignin.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
### Use LogAnalytics to query
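For example, a sketch like the following returns successful sign-ins from devices that aren't marked compliant; it treats missing device detail the same as non-compliant, which you may want to separate out.

```kusto
// Sketch: successful sign-ins from devices that are not marked compliant,
// including sign-ins with no device detail at all.
SigninLogs
| where ResultType == "0"
| where coalesce(tobool(DeviceDetail.isCompliant), false) == false
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress,
          DeviceName = tostring(DeviceDetail.displayName),
          TrustType = tostring(DeviceDetail.trustType)
```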
Attackers who have compromised a user's device may retrieve the [BitLocker](/w
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for: key retrieval, other anomalous behavior by users retrieving keys.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/BitLockerKeyRetrieval.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for: key retrieval, other anomalous behavior by users retrieving keys.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/BitLockerKeyRetrieval.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
In LogAnalytics create a query such as
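A hedged sketch that matches the filter in the preceding table and summarizes key reads per actor (column names follow the standard AuditLogs schema; adjust if your workspace differs):

```kusto
// Sketch: BitLocker recovery key reads, summarized per actor to spot unusual volume.
AuditLogs
| where OperationName == "Read BitLocker key"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| summarize Retrievals = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated) by Actor
| order by Retrievals desc
```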
Global administrators and cloud Device Administrators automatically get local ad
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Users added to global or device admin roles| High| Audit logs| Activity type = Add member to role.| Look for: new users added to these Azure AD roles, subsequent anomalous behavior by machines or users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Users added to global or device admin roles| High| Audit logs| Activity type = Add member to role.| Look for: new users added to these Azure AD roles, subsequent anomalous behavior by machines or users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
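If these audit events flow to Log Analytics, a sketch such as the following isolates additions to the two roles called out above. The role display names are assumptions; align them with the roles you actually monitor.

```kusto
// Sketch: additions to the Global Administrator or Cloud Device Administrator roles.
// Role display names are assumptions; adjust to the roles you monitor.
let watchedRoles = dynamic(["Global Administrator", "Cloud Device Administrator"]);
AuditLogs
| where OperationName =~ "Add member to role"
| mv-expand TargetResource = TargetResources
| mv-expand Property = TargetResource.modifiedProperties
| where tostring(Property.displayName) == "Role.DisplayName"
| extend RoleName = tostring(Property.newValue)
| where RoleName has_any (watchedRoles)
| project TimeGenerated, RoleName,
          AddedUser = tostring(TargetResource.userPrincipalName),
          Actor = tostring(InitiatedBy.user.userPrincipalName)
```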
## Non-Azure AD sign-ins to virtual machines
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-infrastructure.md
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **[Microsoft Sentinel](../../sentinel/overview.md)** – Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)** – Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
Monitor changes to Conditional Access policies using the following information:
| What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |
-| New Conditional Access Policy created by non-approved actors|Medium | Azure AD Audit logs|Activity: Add conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
-|Conditional Access Policy removed by non-approved actors|Medium|Azure AD Audit logs|Activity: Delete conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-|Conditional Access Policy updated by non-approved actors|Medium|Azure AD Audit logs|Activity: Update conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br><br>Review Modified Properties and compare "old" vs "new" values<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-|Removal of a user from a group used to scope critical Conditional Access policies|Medium|Azure AD Audit logs|Activity: Remove member from group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert on groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been removed.<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
-|Addition of a user to a group used to scope critical Conditional Access policies|Low|Azure AD Audit logs|Activity: Add member to group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert on groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been added.<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| New Conditional Access Policy created by non-approved actors|Medium | Azure AD Audit logs|Activity: Add conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Conditional Access Policy removed by non-approved actors|Medium|Azure AD Audit logs|Activity: Delete conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Conditional Access Policy updated by non-approved actors|Medium|Azure AD Audit logs|Activity: Update conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br><br>Review Modified Properties and compare "old" vs "new" values<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Removal of a user from a group used to scope critical Conditional Access policies|Medium|Azure AD Audit logs|Activity: Remove member from group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert on groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been removed.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Addition of a user to a group used to scope critical Conditional Access policies|Low|Azure AD Audit logs|Activity: Add member to group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert on groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been added.<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
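A Log Analytics sketch that combines the first three rows above into one check; the approved-actor UPNs are placeholders for your own change-management list.

```kusto
// Sketch: Conditional Access policy create/update/delete events initiated by accounts
// outside an approved-actor list. The UPNs below are placeholders.
let approvedActors = dynamic(["caadmin1@contoso.com", "caadmin2@contoso.com"]);
AuditLogs
| where OperationName in~ ("Add conditional access policy", "Update conditional access policy", "Delete conditional access policy")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| where Actor !in~ (approvedActors)
| project TimeGenerated, OperationName, Actor,
          PolicyName = tostring(TargetResources[0].displayName)
```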
## Next steps
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
From the Azure portal, you can view the Azure AD Audit logs. Download logs as co
* **[Microsoft Sentinel](../../sentinel/overview.md)** - Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)** - Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
You can monitor privileged account sign-in events in the Azure AD Sign-in logs.
| What to monitor | Risk level | Where | Filter/subfilter | Notes | | - | - | - | - | - |
-| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Failure because of Conditional Access requirement |High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Failure because of Conditional Access requirement |High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
-| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multi-factor authentication challenge.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multi-factor authentication challenge.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Privileged accounts that don't follow naming policy| High | Azure AD directory | [List Azure AD role assignments](../roles/view-assignments.md)| List role assignments for Azure AD roles and alert where the UPN doesn't match your organization's format. An example is the use of ADM_ as a prefix. | | Discover privileged accounts not registered for multi-factor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |
-| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| MFA fraud alert or block | High | Azure AD Audit log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Privileged account sign-ins outside of expected controls | | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| MFA fraud alert or block | High | Azure AD Audit log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Privileged account sign-ins outside of expected controls | | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Identity protection risk | High | Identity Protection logs | Risk state = At risk<br>-and-<br>Risk level = Low, medium, high<br>-and-<br>Activity = Unfamiliar sign-in/TOR, and so on | This event indicates there's some abnormality detected with the sign-in for the account and should be alerted on. |
-| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountPasswordChanges.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/17ead56ae30b1a8e46bb0f95a458bdeb2d30ba9b/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID isn't equal to Home Tenant ID<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/AdministratorsAuthenticatingtoAnotherAzureADTenant.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountPasswordChanges.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/17ead56ae30b1a8e46bb0f95a458bdeb2d30ba9b/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID isn't equal to Home Tenant ID<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/AdministratorsAuthenticatingtoAnotherAzureADTenant.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
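A consolidated Log Analytics sketch for the failure-related rows above, scoped to a watch list of privileged accounts. The UPNs are placeholders; a Sentinel watchlist or group lookup would normally replace them.

```kusto
// Sketch: failed sign-ins for a watch list of privileged accounts, broken out by
// the error codes called out above (bad password, Conditional Access block, lockout).
let privilegedAccounts = dynamic(["breakglass@contoso.com", "admin-ops@contoso.com"]);
SigninLogs
| where UserPrincipalName in~ (privilegedAccounts)
| where ResultType in ("50126", "53003", "50053")
| summarize Failures = count() by UserPrincipalName, ResultType, bin(TimeGenerated, 1h)
| order by Failures desc
```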
## Changes by privileged accounts
Investigate changes to privileged accounts' authentication rules and privileges,
| What to monitor| Risk level| Where| Filter/subfilter| Notes | | - | - | - | - | - |
-| Privileged account creation| Medium| Azure AD Audit logs| Service = Core Directory<br>-and-<br>Category = User management<br>-and-<br>Activity type = Add user<br>-correlate with-<br>Category type = Role management<br>-and-<br>Activity type = Add member to role<br>-and-<br>Modified properties = Role.DisplayName| Monitor creation of any privileged accounts. Look for accounts that are created and then deleted within a short time span.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Changes to authentication methods| High| Azure AD Audit logs| Service = Authentication Method<br>-and-<br>Activity type = User registered security information<br>-and-<br>Category = User management| This change could be an indication of an attacker adding an auth method to the account so they can have continued access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/AuthenticationMethodsChangedforPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Alert on changes to privileged account permissions| High| Azure AD Audit logs| Category = Role management<br>-and-<br>Activity type = Add eligible member (permanent)<br>-or-<br>Activity type = Add eligible member (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| This alert is especially for accounts being assigned roles that aren't known or are outside of their normal responsibilities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivilegedAccountPermissionsChanged.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Unused privileged accounts| Medium| Azure AD Access Reviews| | Perform a monthly review for inactive privileged user accounts.<br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Privileged account creation| Medium| Azure AD Audit logs| Service = Core Directory<br>-and-<br>Category = User management<br>-and-<br>Activity type = Add user<br>-correlate with-<br>Category type = Role management<br>-and-<br>Activity type = Add member to role<br>-and-<br>Modified properties = Role.DisplayName| Monitor creation of any privileged accounts. Look for a short time span between the creation and deletion of an account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to authentication methods| High| Azure AD Audit logs| Service = Authentication Method<br>-and-<br>Activity type = User registered security information<br>-and-<br>Category = User management| This change could be an indication of an attacker adding an auth method to the account so they can have continued access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/AuthenticationMethodsChangedforPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert on changes to privileged account permissions| High| Azure AD Audit logs| Category = Role management<br>-and-<br>Activity type = Add eligible member (permanent)<br>-or-<br>Activity type = Add eligible member (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| This alert is especially for accounts being assigned roles that aren't known or are outside of their normal responsibilities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivilegedAccountPermissionsChanged.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Unused privileged accounts| Medium| Azure AD Access Reviews| | Perform a monthly review for inactive privileged user accounts.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Accounts exempt from Conditional Access| High| Azure Monitor Logs<br>-or-<br>Access Reviews| Conditional Access = Insights and reporting| Any account exempt from Conditional Access is most likely bypassing security controls and is more vulnerable to compromise. Break-glass accounts are exempt. See information on how to monitor break-glass accounts later in this article.|
-| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target: User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AdditionofaTemporaryAccessPasstoaPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target: User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AdditionofaTemporaryAccessPasstoaPrivilegedAccount.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
For more information on how to monitor for exceptions to Conditional Access policies, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
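To make the Temporary Access Pass row above concrete, here is a minimal Microsoft Graph PowerShell sketch that looks for the audit activity the table references. The activity name string is taken from the filter column, and the `$privilegedUsers` list is a placeholder; validate both against the audit entries your tenant actually emits.

```powershell
# Sketch: find recent "Admin registered security info" audit events and keep only
# those that target accounts you consider privileged.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$privilegedUsers = @("admin1@contoso.com", "admin2@contoso.com")   # placeholder list

Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Admin registered security info'" -All |
    Where-Object {
        $_.TargetResources.UserPrincipalName | Where-Object { $privilegedUsers -contains $_ }
    } |
    Select-Object ActivityDateTime, ActivityDisplayName,
                  @{n = 'Target'; e = { $_.TargetResources[0].UserPrincipalName } }
```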
You can monitor privileged account changes by using Azure AD Audit logs and Azur
| What to monitor| Risk level| Where| Filter/subfilter| Notes | | - | - | - | - | - |
-| Added to eligible privileged role| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role managementΓÇï<br>-and-<br>Activity type = Add member to role completed (eligible)<br>-and-<br>Status = Success or failureΓÇï<br>-and-<br>Modified properties = Role.DisplayName| Any account eligible for a role is now being given privileged access. If the assignment is unexpected or into a role that isn't the responsibility of the account holder, investigate.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Roles assigned out of PIM| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role managementΓÇï<br>-and-<br>Activity type = Add member to role (permanent)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| These roles should be closely monitored and alerted. Users shouldn't be assigned roles outside of PIM where possible.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivlegedRoleAssignedOutsidePIM.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Added to eligible privileged role| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| Any account eligible for a role is now being given privileged access. If the assignment is unexpected or into a role that isn't the responsibility of the account holder, investigate.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Roles assigned out of PIM| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role (permanent)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| These roles should be closely monitored and alerted. Users shouldn't be assigned roles outside of PIM where possible.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivlegedRoleAssignedOutsidePIM.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Elevations| Medium| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure <br>-and-<br>Modified properties = Role.DisplayName| After a privileged account is elevated, it can now make changes that could affect the security of your tenant. All elevations should be logged and, if happening outside of the standard pattern for that user, should be alerted and investigated if not planned.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AccountElevatedtoNewRole.yaml) |
-| Approvals and deny elevation| Low| Azure AD Audit Logs| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity type = Request approved or denied<br>-and-<br>Initiated actor = UPN| Monitor all elevations because it could give a clear indication of the timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Changes to PIM settings| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Update role setting in PIM<br>-and-<br>Status reason = MFA on activation disabled (example)| One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Elevation not occurring on SAW/PAW| High| Azure AD Sign In logs| Device ID <br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>Correlate with:<br>Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| If this change is configured, any attempt to elevate on a non-PAW/SAW device should be investigated immediately because it could indicate an attacker is trying to use the account.<br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Approvals and deny elevation| Low| Azure AD Audit Logs| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity type = Request approved or denied<br>-and-<br>Initiated actor = UPN| Monitor all elevations because it could give a clear indication of the timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to PIM settings| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Update role setting in PIM<br>-and-<br>Status reason = MFA on activation disabled (example)| One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Elevation not occurring on SAW/PAW| High| Azure AD Sign In logs| Device ID <br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>Correlate with:<br>Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| If this change is configured, any attempt to elevate on a non-PAW/SAW device should be investigated immediately because it could indicate an attacker is trying to use the account.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Elevation to manage all Azure subscriptions| High| Azure Monitor| Activity Log tab <br>Directory Activity tab <br> Operations Name = Assigns the caller to user access admin <br> -and- <br> Event category = Administrative <br> -and-<br>Status = Succeeded, start, fail<br>-and-<br>Event initiated by| This change should be investigated immediately if it isn't planned. This setting could allow an attacker access to Azure subscriptions in your environment. | For more information about managing elevation, see [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). For information on monitoring elevations by using information available in the Azure AD logs, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md), which is part of the Azure Monitor documentation.
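The elevation rows above all key off PIM entries in the Azure AD Audit log. As a rough starting point, and not a substitute for the linked Sentinel templates, the service and category names below are copied from the filter column and may need adjusting for your tenant:

```powershell
# Sketch: list recent audit events written by PIM for role management, such as the
# activations and permanent assignments described in the table above.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogDirectoryAudit -Filter "loggedByService eq 'PIM' and category eq 'RoleManagement'" -Top 200 |
    Select-Object ActivityDateTime, ActivityDisplayName, Result,
                  @{n = 'Initiator'; e = { $_.InitiatedBy.User.UserPrincipalName } }
```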
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-identity-management.md
In the Azure portal, view the Azure AD Audit logs and download them as comma-sep
* [**Microsoft Sentinel**](../../sentinel/overview.md) – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* [**Azure Monitor**](../../azure-monitor/overview.md) – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
A privileged role administrator can customize PIM in their Azure AD organization
| What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Alert on Add changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type ΓÇô Add eligible member (permanent) <br>-and-<br>Activity Type ΓÇô Add eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Monitor and always alert for any changes to privileged role administrator and global administrator. This can be an indication an attacker is trying to gain privilege to modify role assignment settings. If you donΓÇÖt have a defined threshold, alert on 4 in 60 minutes for users and 2 in 60 minutes for privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Alert on bulk deletion changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type ΓÇô Remove eligible member (permanent) <br>-and-<br>Activity Type ΓÇô Remove eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Investigate immediately if not a planned change. This setting could enable an attacker access to Azure subscriptions in your environment.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/BulkChangestoPrivilegedAccountPermissions.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Changes to PIM settings| High| Azure AD Audit Log| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Update role setting in PIM<br>-and-<br>Status Reason = MFA on activation disabled (example)| Monitor and always alert for any changes to Privileged Role Administrator and Global Administrator. This can be an indication an attacker has access to modify role assignment settings. One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Approvals and deny elevation| High| Azure AD Audit Log| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity Type = Request Approved/Denied<br>-and-<br>Initiated actor = UPN| All elevations should be monitored. Log all elevations to give a clear indication of timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Alert setting changes to disabled.| High| Azure AD Audit logs| Service =PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Disable PIM Alert<br>-and-<br>Status = Success /Failure| Always alert. Helps detect bad actor removing alerts associated with Azure AD Multi-Factor Authentication requirements to activate privileged access. Helps detect suspicious or unsafe activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert on Add changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type – Add eligible member (permanent) <br>-and-<br>Activity Type – Add eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Monitor and always alert for any changes to privileged role administrator and global administrator. This can be an indication an attacker is trying to gain privilege to modify role assignment settings. If you don't have a defined threshold, alert on 4 in 60 minutes for users and 2 in 60 minutes for privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert on bulk deletion changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type – Remove eligible member (permanent) <br>-and-<br>Activity Type – Remove eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Investigate immediately if not a planned change. This setting could give an attacker access to Azure subscriptions in your environment.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/BulkChangestoPrivilegedAccountPermissions.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to PIM settings| High| Azure AD Audit Log| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Update role setting in PIM<br>-and-<br>Status Reason = MFA on activation disabled (example)| Monitor and always alert for any changes to Privileged Role Administrator and Global Administrator. This can be an indication an attacker has access to modify role assignment settings. One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Approvals and deny elevation| High| Azure AD Audit Log| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity Type = Request Approved/Denied<br>-and-<br>Initiated actor = UPN| All elevations should be monitored. Log all elevations to give a clear indication of timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert setting changes to disabled.| High| Azure AD Audit logs| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Disable PIM Alert<br>-and-<br>Status = Success/Failure| Always alert. Helps detect a bad actor removing alerts associated with Azure AD Multi-Factor Authentication requirements to activate privileged access. Helps detect suspicious or unsafe activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
For more information on identifying role setting changes in the Azure AD Audit log, see [View audit history for Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-how-to-use-audit-log.md).
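For the two PIM-hardening rows above (role setting updates and PIM alerts being disabled), a hedged audit-query sketch follows. The activity names are copied from the filter column and should be checked against what your tenant actually emits before you alert on them.

```powershell
# Sketch: surface audit events where PIM role settings were changed or a PIM alert
# was disabled, both of which the table above flags as high risk.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$activities = "Update role setting in PIM", "Disable PIM alert"   # names taken from the table

Get-MgAuditLogDirectoryAudit -Filter "loggedByService eq 'PIM'" -Top 500 |
    Where-Object { $activities -contains $_.ActivityDisplayName } |
    Select-Object ActivityDateTime, ActivityDisplayName, Result,
                  @{n = 'Initiator'; e = { $_.InitiatedBy.User.UserPrincipalName } }
```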
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
From the Azure portal, you can view the Azure AD Audit logs and download as comm
* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
If the data trail for account creation and deletion is not discovered quickly, t
| Account creation and deletion events within a close time frame. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br> | Search for user principal name (UPN) events. Look for accounts created and then deleted in under 24 hours.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedandDeletedinShortTimeframe.yaml) | | Accounts created and deleted by non-approved users or processes. | Medium| Azure AD Audit logs | Initiated by (actor) – USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>and-or<br>Activity: Delete user<br>Status = success | If the actors are non-approved users, configure to send an alert. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) | | Accounts from non-approved sources. | Medium | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Target(s) = USER PRINCIPAL NAME | If the entry isn't from an approved domain or is a known blocked domain, configure to send an alert.<br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Accountcreatedfromnon-approvedsources.yaml) |
-| Accounts assigned to a privileged role.| High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
+| Accounts assigned to a privileged role.| High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
Both privileged and non-privileged accounts should be monitored and alerted. However, since privileged accounts have administrative permissions, they should have higher priority in your monitor, alert, and respond processes.
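As an illustrative sketch of the first row in the table above (creation and deletion within a close time frame), the following Microsoft Graph PowerShell matches "Add user" and "Delete user" audit events on the target object ID and flags pairs less than 24 hours apart. The activity names come from the table; treat everything else as an assumption to adapt to your own tooling.

```powershell
# Sketch: flag accounts that were created and then deleted within 24 hours.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$adds    = Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add user'" -All
$deletes = Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Delete user'" -All

foreach ($delete in $deletes) {
    # Match the deletion to a creation event for the same directory object.
    $add = $adds |
        Where-Object { $_.TargetResources[0].Id -eq $delete.TargetResources[0].Id } |
        Select-Object -First 1
    if ($add -and
        $delete.ActivityDateTime -gt $add.ActivityDateTime -and
        ($delete.ActivityDateTime - $add.ActivityDateTime) -lt (New-TimeSpan -Hours 24)) {
        [pscustomobject]@{
            User    = $add.TargetResources[0].UserPrincipalName
            Created = $add.ActivityDateTime
            Deleted = $delete.ActivityDateTime
        }
    }
}
```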
The following are listed in order of importance based on the effect and severity
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID isn't equal to Home Tenant ID <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/UsersAuthenticatingtoOtherAzureADTenants.yaml) |
-|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)
-|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)
+| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID isn't equal to Home Tenant ID <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/UsersAuthenticatingtoOtherAzureADTenants.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
### Monitoring for failed unusual sign ins | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Failed sign-in attempts.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
-| Smart lock-out events.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code = 50053 ΓÇô IdsLocked| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml) |
-| Interrupts| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge wasn't satisfied<br>-or-<br>53003 and Failure reason = blocked by CA| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
+| Failed sign-in attempts.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Smart lock-out events.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code = 50053 – IdsLocked| Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Interrupts| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge wasn't satisfied<br>-or-<br>53003 and Failure reason = blocked by CA| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
The following are listed in order of importance based on the effect and severity of the entries. | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Multi-factor authentication (MFA) fraud alerts.| High| Azure AD Sign-ins log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
-| Failed authentications from countries you don't operate out of.| Medium| Azure AD Sign-ins log| Location = \<unapproved location\>| Monitor and alert on any entries. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationAttemptfromNewCountry.yaml) |
-| Failed authentications for legacy protocols or protocols that aren't used.| Medium| Azure AD Sign-ins log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml) |
-| Failures blocked by CA.| Medium| Azure AD Sign-ins log| Error code = 53003 <br>-and-<br>Failure reason = blocked by CA| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
+| Multi-factor authentication (MFA) fraud alerts.| High| Azure AD Sign-ins log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Failed authentications from countries you don't operate out of.| Medium| Azure AD Sign-ins log| Location = \<unapproved location\>| Monitor and alert on any entries. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationAttemptfromNewCountry.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Failed authentications for legacy protocols or protocols that aren't used.| Medium| Azure AD Sign-ins log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Failures blocked by CA.| Medium| Azure AD Sign-ins log| Error code = 53003 <br>-and-<br>Failure reason = blocked by CA| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Increased failed authentications of any type.| Medium| Azure AD Sign-ins log| Capture increases in failures across the board. That is, the failure total for today is >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if failures increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) | | Authentication occurring at times and days of the week when countries don't conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>-and-<br>Location = \<location\><br>-and-<br>Day\Time = \<not normal working hours\>| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) |
-| Account disabled/blocked for sign-ins| Low| Azure AD Sign-ins log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. Although the account is blocked, it is important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml) |
+| Account disabled/blocked for sign-ins| Low| Azure AD Sign-ins log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. Although the account is blocked, it is important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
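As a rough sketch of the failed-sign-in rows above, the following counts recent sign-in failures with error code 50126 per user so you can compare against whatever baseline threshold you settle on. The 24-hour window and the threshold of 10 are arbitrary placeholders, not recommendations.

```powershell
# Sketch: count 50126 (invalid username or password) failures per user over the last
# 24 hours and report users above a placeholder threshold.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$since = (Get-Date).AddDays(-1).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
$threshold = 10   # placeholder: tune to your own baseline

Get-MgAuditLogSignIn -Filter "createdDateTime ge $since" -All |
    Where-Object { $_.Status.ErrorCode -eq 50126 } |
    Group-Object UserPrincipalName |
    Where-Object { $_.Count -ge $threshold } |
    Select-Object Name, Count
```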
### Monitoring for successful unusual sign ins | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Authentications of privileged accounts outside of expected controls.| High| Azure AD Sign-ins log| Status = success<br>-and-<br>UserPricipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| When only single-factor authentication is required.| Low| Azure AD Sign-ins log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Authentications of privileged accounts outside of expected controls.| High| Azure AD Sign-ins log| Status = success<br>-and-<br>UserPrincipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| When only single-factor authentication is required.| Low| Azure AD Sign-ins log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Discover privileged accounts not registered for MFA.| High| Azure Graph API| Query for IsMFARegistered eq false for administrator accounts. <br>[List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http)| Audit and investigate to determine if intentional or an oversight. |
-| Successful authentications from countries your organization doesn't operate out of.| Medium| Azure AD Sign-ins log| Status = success<br>Location = \<unapproved country\>| Monitor and alert on any entries not equal to the city names you provide.<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Successful authentication, session blocked by CA.| Medium| Azure AD Sign-ins log| Status = success<br>-and-<br>error code = 53003 ΓÇô Failure reason, blocked by CA| Monitor and investigate when authentication is successful, but session is blocked by CA.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Successful authentication after you have disabled legacy authentication.| Medium| Azure AD Sign-ins log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentications from countries your organization doesn't operate out of.| Medium| Azure AD Sign-ins log| Status = success<br>Location = \<unapproved country\>| Monitor and alert on any entries not equal to the city names you provide.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentication, session blocked by CA.| Medium| Azure AD Sign-ins log| Status = success<br>-and-<br>error code = 53003 – Failure reason, blocked by CA| Monitor and investigate when authentication is successful, but session is blocked by CA.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentication after you have disabled legacy authentication.| Medium| Azure AD Sign-ins log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
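One row in the table above points at the Graph beta registration-details report for discovering privileged accounts not registered for MFA. A minimal sketch of that call with the Microsoft Graph PowerShell SDK follows; the beta endpoint and the `isMfaRegistered` filter are taken from the linked reference, and you should cross-check them (and filter further to your administrator accounts) before relying on the output.

```powershell
# Sketch: list users the beta registration-details report marks as not MFA registered.
Connect-MgGraph -Scopes "Reports.Read.All"

$uri = "https://graph.microsoft.com/beta/reports/credentialUserRegistrationDetails?`$filter=isMfaRegistered eq false"
(Invoke-MgGraphRequest -Method GET -Uri $uri).value |
    ForEach-Object {
        [pscustomobject]@{
            UserPrincipalName = $_.userPrincipalName
            IsMfaRegistered   = $_.isMfaRegistered
        }
    }
```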
We recommend you periodically review authentications to medium business impact (MBI) and high business impact (HBI) applications where only single-factor authentication is required. For each, you want to determine if single-factor authentication was expected or not. In addition, review for successful authentication increases or at unexpected times, based on the location. | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - | - |- |- |- |
-| Authentications to MBI and HBI application using single-factor authentication.| Low| Azure AD Sign-ins log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Authentications at days and times of the week or year that countries do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications days and times of the week or year that countries do not conduct normal business operations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-UnusualLogonTimes.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Measurable increase of successful sign ins.| Low| Azure AD Sign-ins log| Capture increases in successful authentication across the board. That is, success totals for today are >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Authentications to MBI and HBI application using single-factor authentication.| Low| Azure AD Sign-ins log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Authentications at days and times of the week or year that countries do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications at days and times of the week or year when countries do not conduct normal business operations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-UnusualLogonTimes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Measurable increase of successful sign ins.| Low| Azure AD Sign-ins log| Capture increases in successful authentication across the board. That is, success totals for today are >10% on the same day, the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
## Next steps
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Currently, users can self-service leave for an organization without the visibili
With this feature, IT administrators can now allow or restrict external identities' ability to leave an organization by using Microsoft-provided self-service controls via Azure Active Directory in the Microsoft Entra portal. To restrict users from leaving an organization, customers need to include "Global privacy contact" and "Privacy statement URL" under tenant properties. A new policy API is available for administrators to control the tenant-wide policy:
-[externalIdentitiesPolicy resource type](/graph/api/resources/externalidentitiespolicy?view=graph-rest-beta)
+[externalIdentitiesPolicy resource type](/graph/api/resources/externalidentitiespolicy?view=graph-rest-beta&preserve-view=true)
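If you want to inspect the current tenant-wide setting programmatically, a hedged sketch using the Microsoft Graph PowerShell SDK against the beta endpoint follows. The singleton path and the permission scope are assumptions based on the resource reference linked above; verify both against the beta documentation before use.

```powershell
# Sketch: read the tenant-wide external identities policy from the beta endpoint.
# Policy.Read.All is assumed; check the resource reference for the exact permission.
Connect-MgGraph -Scopes "Policy.Read.All"

Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/policies/externalIdentitiesPolicy"
```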
For more information, see:
For more information, see: [Block users from viewing their BitLocker keys (previ
**Service category:** Identity Protection **Product capability:** Identity Security & Protection
-Identity Protection risk detections (alerts) are now also available in Microsoft 365 Defender to provide a unified investigation experience for security professionals. For more information, see: [Investigate alerts in Microsoft 365 Defender](/microsoft-365/security/defender/investigate-alerts?view=o365-worldwide#alert-sources)
+Identity Protection risk detections (alerts) are now also available in Microsoft 365 Defender to provide a unified investigation experience for security professionals. For more information, see: [Investigate alerts in Microsoft 365 Defender](/microsoft-365/security/defender/investigate-alerts?view=o365-worldwide&preserve-view=true#alert-sources)
Pick a group of up to five members and provision them into your third-party appl
**Product capability:** Identity Security & Protection
-We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values).
+We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta&preserve-view=true#federatedidpmfabehavior-values).
We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multi factor authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
For listing your application in the Azure AD app gallery, see the details here h
-We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values).
+We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0&preserve-view=true#federatedidpmfabehavior-values).
We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multi factor authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
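As a hedged sketch only (confirm the endpoint shape and the permissible values in the internalDomainFederation reference linked above), the setting can be updated with a Graph PATCH against the domain's federation configuration:

```powershell
# Sketch: set federatedIdpMfaBehavior on a federated domain's federation configuration.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

$domainId = "contoso.com"   # placeholder: your federated domain name

# Read the existing federation configuration to get its ID.
$config = (Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/domains/$domainId/federationConfiguration").value[0]

# Apply the protection; rejectMfaByFederatedIdp is one of the documented values.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/domains/$domainId/federationConfiguration/$($config.id)" `
    -Body @{ federatedIdpMfaBehavior = "rejectMfaByFederatedIdp" }
```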
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md
na
ms.devlang: na Previously updated : 1/20/2022 Last updated : 9/20/2022
By default, Azure Automation does not have any PowerShell modules preloaded for
1. If you are using the cmdlets for Azure AD identity governance features, such as entitlement management, then repeat the import process for the module **Microsoft.Graph.Identity.Governance**.
-1. Import other modules that your script may require. For example, if you are using Identity Protection, then you may wish to import the **Microsoft.Graph.Identity.SignIns** module.
+1. Import other modules that your script may require, such as **Microsoft.Graph.Users**. For example, if you are using Identity Protection, then you may wish to import the **Microsoft.Graph.Identity.SignIns** module.
## Create an app registration and assign permissions
$ap | Select-Object -Property Id,DisplayName | ConvertTo-Json
3. If the run was successful, the output will be a JSON array instead of the welcome message. The JSON array includes the ID and display name of each access package returned from the query.
+## Provide parameters to the runbook (optional)
+
+You can also add input parameters to your runbook by adding a `Param` section at the top of the PowerShell script. For instance:
+
+```powershell
+Param
+(
+  [String]$AccessPackageAssignmentId
+)
+```
+
+The format of the allowed parameters depends upon the calling service. If your runbook does take parameters from the caller, then you will need to add validation logic to your runbook to ensure that the parameter values supplied are appropriate for how the runbook could be started. For example, if your runbook is started by a [webhook](../../automation/automation-webhooks.md), Azure Automation doesn't perform any authentication on a webhook request as long as it's made to the correct URL, so you will need an alternate means of validating the request.
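For example, if the `$AccessPackageAssignmentId` parameter shown earlier is expected to be a GUID, a simple guard at the top of the runbook could reject anything else. This is only a sketch; adapt the check to whatever your caller actually sends.

```powershell
# Sketch: stop the run early if the supplied parameter is not a valid GUID.
$parsed = [Guid]::Empty
if (-not [Guid]::TryParse($AccessPackageAssignmentId, [ref]$parsed)) {
    throw "AccessPackageAssignmentId '$AccessPackageAssignmentId' is not a valid GUID."
}
```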
+
+After you [configure runbook input parameters](../../automation/runbook-input-parameters.md), you can provide values through the Test page when you test your runbook. Later, when the runbook is published, you can provide parameters when starting the runbook from PowerShell, the REST API, or a Logic App.
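For instance, once the runbook is published, you might start it and pass a parameter value from PowerShell with the Az.Automation module. The resource group, account, runbook name, and GUID below are placeholders.

```powershell
# Sketch: start a published runbook and pass the AccessPackageAssignmentId parameter.
$job = Start-AzAutomationRunbook -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "MyRunbook" `
    -Parameters @{ AccessPackageAssignmentId = "00000000-0000-0000-0000-000000000000" }

# Later, retrieve the output stream for that job.
Get-AzAutomationJobOutput -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" `
    -Id $job.JobId -Stream Output
```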
+ ## Parse the output of an Azure Automation account in Logic Apps (optional)
-Once your runbook is published, your can create a schedule in Azure Automation, and link your runbook to that schedule to run automatically. Scheduling runbooks from Azure Automation is suitable for runbooks that do not need to interact with other Azure or Office 365 services.
+Once your runbook is published, you can create a schedule in Azure Automation, and link your runbook to that schedule to run automatically. Scheduling runbooks from Azure Automation is suitable for runbooks that do not need to interact with other Azure or Office 365 services that do not have PowerShell interfaces.
If you wish to send the output of your runbook to another service, then you may wish to consider using [Azure Logic Apps](../../logic-apps/logic-apps-overview.md) to start your Azure Automation runbook, as Logic Apps can also parse the results.
If you wish to send the output of your runbook to another service, then you may
1. Add the operation **Create job** from **Azure Automation**. Authenticate to Azure AD, and select the Subscription, Resource Group, Automation Account created earlier. Select **Wait for Job**.
-1. Add the parameter **Runbook name** and type the name of the runbook to be started.
+1. Add the parameter **Runbook name** and type the name of the runbook to be started. If the runbook has input parameters, you can provide values for them here.
1. Select **New step** and add the operation **Get job output**. Select the same Subscription, Resource Group, Automation Account as the previous step, and select the Dynamic value of the **Job ID** from the previous step.
-1. You can then add more operations to the Logic App, such as the [**Parse JSON** action](../../logic-apps/logic-apps-perform-data-operations.md#parse-json-action), that use the **Content** returned when the runbook completes.
+1. You can then add more operations to the Logic App, such as the [**Parse JSON** action](../../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) that uses the **Content** returned when the runbook completes. (If you're auto-generating the **Parse JSON** schema from a sample payload, be sure to account for the PowerShell script potentially returning null; you might need to change some instances of `"type": "string"` to `"type": ["string", "null"]` in the schema.)
Note that in Azure Automation, a PowerShell runbook can fail to complete if it tries to write a large amount of data to the output stream at once. You can typically work around this issue by having the runbook output just the information needed by the Logic App, such as by using the `Select-Object` cmdlet with the `-Property` parameter to exclude unneeded properties.
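A minimal sketch of that workaround, assuming a hypothetical `$assignments` variable that holds objects returned by an earlier Microsoft Graph call:

```powershell
# Emit only the properties the Logic App actually consumes, rather than the full
# Graph objects, to keep the runbook's output stream small. The property names
# shown here are illustrative; use whichever ones your Logic App needs.
$assignments | Select-Object -Property Id,State,TargetId | ConvertTo-Json
```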
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
Before you install Azure AD Connect, there are a few things that you need.
* Review [optional sync features you can enable in Azure AD](how-to-connect-syncservice-features.md), and evaluate which features you should enable. ### On-premises Active Directory
-* The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met.
+* The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met. You may need [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for domain controllers running Windows Server 2016 or older.
* The domain controller used by Azure AD must be writable. Using a read-only domain controller (RODC) *isn't supported*, and Azure AD Connect doesn't follow any write redirects. * Using on-premises forests or domains by using "dotted" (name contains a period ".") NetBIOS names *isn't supported*. * We recommend that you [enable the Active Directory recycle bin](how-to-connect-sync-recycle-bin.md).
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later - note that Windows Server 2022 is not yet supported.
+- Azure AD Connect must be installed on a domain-joined Windows Server 2019 or later - note that Windows Server 2022 is not yet supported. You can deploy Azure AD Connect on Windows Server 2016, but because Windows Server 2016 is in extended support, you may need [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for this configuration.
- The minimum .Net Framework version required is 4.6.2, and newer versions of .Net are also supported. - Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported. - The Azure AD Connect server must not have PowerShell Transcription Group Policy enabled if you use the Azure AD Connect wizard to manage Active Directory Federation Services (AD FS) configuration. You can enable PowerShell transcription if you use the Azure AD Connect wizard to manage sync configuration. - If AD FS is being deployed:
- - The servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation.
+  - The servers where AD FS or Web Application Proxy are installed must be Windows Server 2012 R2 or later. Windows remote management must be enabled on these servers for remote installation. You may need [a paid support program](https://docs.microsoft.com/lifecycle/policies/fixed#extended-support) if you require support for Windows Server 2016 or older.
- You must configure TLS/SSL certificates. For more information, see [Managing SSL/TLS protocols and cipher suites for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs) and [Managing SSL certificates in AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap). - You must configure name resolution. - It is not supported to break and analyze traffic between Azure AD Connect and Azure AD. Doing so may disrupt the service.
active-directory How To Connect Sync Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-whatis.md
The Azure Active Directory Connect synchronization services (Azure AD Connect sy
This topic is the home for **Azure AD Connect sync** (also called **sync engine**) and lists links to all other topics related to it. For links to Azure AD Connect, see [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md). The sync service consists of two components, the on-premises **Azure AD Connect sync** component and the service side in Azure AD called **Azure AD Connect sync service**.
+>[!IMPORTANT]
+>Azure AD Connect Cloud Sync is a new offering from Microsoft designed to meet and accomplish your hybrid identity goals for synchronization of users, groups, and contacts to Azure AD. It accomplishes this by using the Azure AD Cloud provisioning agent instead of the Azure AD Connect application. Azure AD Cloud Sync is replacing Azure AD Connect sync, which will be retired after Cloud Sync has full functional parity with Connect sync. The remainder of this article is about AADConnect sync, but we encourage customers to review the features and advantages of Cloud Sync before deploying AADConnect sync.
+>
+>To find out if you are already eligible for Cloud Sync, please verify your requirements in [this wizard](https://admin.microsoft.com/adminportal/home?Q=setupguidance#/modernonboarding/identitywizard).
+>
+>To learn more about Cloud Sync please read [this article](https://docs.microsoft.com/azure/active-directory/cloud-sync/what-is-cloud-sync), or watch this [short video](https://www.microsoft.com/en-us/videoplayer/embed/RWJ8l5).
+>
+ ## Azure AD Connect sync topics | Topic | What it covers and when to read |
active-directory Salesforce Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-tutorial.md
To configure the integration of Salesforce into Azure AD, you need to add Salesf
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide)
- ## Configure and test Azure AD SSO for Salesforce Configure and test Azure AD SSO with Salesforce using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Salesforce.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-After you configure Salesforce, you can enforce Session Control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+After you configure Salesforce, you can enforce Session Control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Net
> [!NOTE] > - Azure CNI Overlay is currently only available in the West Central US region.
-> - Azure CNI Overlay does not currently support _v5 VM SKUs.
## Overview of overlay networking
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AK
description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster Previously updated : 07/21/2022 Last updated : 09/18/2022
The CSI storage driver support on AKS allows you to natively use:
> [!NOTE] > Azure Disks CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure Disks CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> [!NOTE]
-> AKS provides the option to enable and disable the CSI drivers (preview) on new and existing clusters. CSI drivers are enabled by default on new clusters. You should verify that there are no existing Persistent Volumes created by Azure Disks and Azure Files CSI drivers and that there is not any existing VolumeSnapshot, VolumeSnapshotClass or VolumeSnapshotContent resources before running this command on existing cluster. This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* You also need the Azure CLI installed and configured. Run az --version to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-### Install the `aks-preview` Azure CLI
-
-You also need the *aks-preview* Azure CLI extension version 0.5.78 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
+You need the Azure CLI version 2.40 installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Disable CSI storage drivers on a new cluster
az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable
## Enable CSI storage drivers on an existing cluster
-`--enable-disk-driver` allows you enable the [Azure Disks CSI driver][azure-disk-csi]. `--enable-file-driver` allows you to enable the [Azure Files CSI driver][azure-files-csi]. `--enable-snapshot-controller` allows you to enable the [snapshot controller][snapshot-controller].
+`--enable-disk-driver` allows you to enable the [Azure Disks CSI driver][azure-disk-csi]. `--enable-file-driver` allows you to enable the [Azure Files CSI driver][azure-files-csi]. `--enable-snapshot-controller` allows you to enable the [snapshot controller][snapshot-controller].
To enable CSI storage drivers on an existing cluster with CSI storage drivers disabled, use `--enable-disk-driver`, `--enable-file-driver`, and `--enable-snapshot-controller`.
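For example, mirroring the disable command shown earlier (the cluster and resource group names are placeholders), enabling all three on an existing cluster might look like this:

```azurecli-interactive
az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-snapshot-controller
```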
aks Howto Deploy Java Liberty App With Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app-with-postgresql.md
- Title: Deploy a Java application with Azure Database for PostgreSQL server to Open Liberty/WebSphere Liberty on an Azure Kubernetes Service(AKS) cluster
-recommendations: false
-description: Deploy a Java application with Azure Database for PostgreSQL server to Open Liberty/WebSphere Liberty on an Azure Kubernetes Service(AKS) cluster
---- Previously updated : 11/19/2021
-keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
---
-# Deploy a Java application with Azure Database for PostgreSQL server to Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
-
-This article demonstrates how to:
-
-* Run your Java, Java EE, Jakarta EE, or MicroProfile application on the Open Liberty or WebSphere Liberty runtime with a PostgreSQL DB connection.
-* Build the application Docker image using Open Liberty or WebSphere Liberty container images.
-* Deploy the containerized application to an AKS cluster using the Open Liberty Operator.
-
-The Open Liberty Operator simplifies the deployment and management of applications running on Kubernetes clusters. With Open Liberty Operator, you can also perform more advanced operations, such as gathering traces and dumps.
-
-For more information on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more information on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
---
-* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-* If running the commands in this guide locally (instead of Azure Cloud Shell):
- * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, macOS, Windows Subsystem for Linux).
- * Install a Java SE implementation (for example, [AdoptOpenJDK OpenJDK 8 LTS/OpenJ9](https://adoptopenjdk.net/?variant=openjdk8&jvmVariant=openj9)).
- * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
- * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
- * Create a user-assigned managed identity and assign `Owner` role or `Contributor` and `User Access Administrator` roles to that identity by following the steps in [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Assign `Directory readers` role to the identity in Azure AD by following [Assign Azure AD roles to users](../active-directory/roles/manage-roles-portal.md). Return to this document after creating the identity and assigning it the necessary roles.
-
-## Create a Jakarta EE runtime using the portal
-
-The steps in this section guide you to create a Jakarta EE runtime on AKS. After completing these steps, you will have an Azure Container Registry and an Azure Kubernetes Service cluster for the sample application.
-
-1. Visit the [Azure portal](https://portal.azure.com/). In the search box at the top of the page, type **IBM WebSphere Liberty and Open Liberty on Azure Kubernetes Service**. When the suggestions start appearing, select the one and only match that appears in the **Marketplace** section.
-1. Select **Create** to start.
-1. In the **Basics** tab, create a new resource group called *java-liberty-project-rg*.
-1. Select *East US* as **Region**.
-1. Select the user-assigned managed identity you created above.
-1. Leave all other values at the defaults and start creating the cluster by selecting **Review + create**.
-1. When the validation completes, select **Create**. This may take up to ten minutes.
-1. After the deployment is complete, select the resource group into which you deployed the resources.
- 1. In the list of resources in the resource group, select the resource with **Type** of **Container registry**.
- 1. Save aside the values for **Registry name**, **Login server**, **Username**, and **password**. You may use the copy icon at the right of each field to copy the value of that field to the system clipboard.
-1. Navigate again to the resource group into which you deployed the resources.
-1. In the **Settings** section, select **Deployments**.
-1. Select the bottom most deployment. The **Deployment name** will match the publisher ID of the offer. It will contain the string **ibm**.
-1. In the left pane, select **Outputs**.
-1. Using the same copy technique as with the preceding values, save aside the values for the following outputs:
-
- - **clusterName**
- - **appDeploymentTemplateYamlEncoded**
- - **cmdToConnectToCluster**
-
- These values will be used later in this article. Note that several other useful commands are listed in the outputs.
-
-## Create an Azure Database for PostgreSQL server
-
-The steps in this section guide you through creating an Azure Database for PostgreSQL server using the Azure CLI for use with your app.
-
-1. Create a resource group
-
- An Azure resource group is a logical group in which Azure resources are deployed and managed.
-
- Create a resource group called *java-liberty-project-postgresql* using the [az group create](/cli/azure/group#az-group-create) command in the *eastus* location.
-
- ```bash
- RESOURCE_GROUP_NAME=java-liberty-project-postgresql
- az group create --name $RESOURCE_GROUP_NAME --location eastus
- ```
-
-1. Create the PostgreSQL server
-
- Use the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command to create the DB server. The following example creates a DB server named *youruniquedbname*. Make sure *youruniqueacrname* is unique within Azure.
-
- > [!TIP]
- > To help ensure a globally unique name, prepend a disambiguation string such as your initials and the MMDD of today's date.
--
- ```bash
- export DB_NAME=youruniquedbname
- export DB_ADMIN_USERNAME=myadmin
- export DB_ADMIN_PASSWORD=<server_admin_password>
- az postgres server create --resource-group $RESOURCE_GROUP_NAME --name $DB_NAME --location eastus --admin-user $DB_ADMIN_USERNAME --admin-password $DB_ADMIN_PASSWORD --sku-name GP_Gen5_2
- ```
-
-1. Allow Azure Services, such as our Open Liberty and WebSphere Liberty application, to access the Azure PostgreSQL server.
-
- ```bash
- az postgres server firewall-rule create --resource-group $RESOURCE_GROUP_NAME \
- --server-name $DB_NAME \
- --name "AllowAllWindowsAzureIps" \
- --start-ip-address "0.0.0.0" \
- --end-ip-address "0.0.0.0"
- ```
-
-1. Allow your local IP address to access the Azure PostgreSQL server. This is necessary to allow the `liberty:devc` to access the database.
-
- ```bash
- az postgres server firewall-rule create --resource-group $RESOURCE_GROUP_NAME \
- --server-name $DB_NAME \
- --name "AllowMyIp" \
- --start-ip-address YOUR_IP_ADDRESS \
- --end-ip-address YOUR_IP_ADDRESS
- ```
-
-If you don't want to use the CLI, you may use the Azure portal by following the steps in [Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal](../postgresql/quickstart-create-server-database-portal.md). You must also grant access to Azure services by following the steps in [Firewall rules in Azure Database for PostgreSQL - Single Server](../postgresql/concepts-firewall-rules.md#connecting-from-azure). Return to this document after creating and configuring the database server.
-
-## Configure and deploy the sample application
-
-Follow the steps in this section to deploy the sample application on the Jakarta EE runtime. These steps use Maven and the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin` see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html).
-
-### Check out the application
-
-Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks).
-There are three samples in the repository. We will use *javaee-app-db-using-actions/postgres*. Here is the file structure of the application.
-
-```
-javaee-app-db-using-actions/postgres
-├─ src/main/
-│ ├─ aks/
-│ │ ├─ db-secret.yaml
-│ │ ├─ openlibertyapplication.yaml
-│ ├─ docker/
-│ │ ├─ Dockerfile
-│ │ ├─ Dockerfile-local
-│ │ ├─ Dockerfile-wlp
-│ │ ├─ Dockerfile-wlp-local
-│ ├─ liberty/config/
-│ │ ├─ server.xml
-│ ├─ java/
-│ ├─ resources/
-│ ├─ webapp/
-├─ pom.xml
-```
-
-The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
-
-In the *aks* directory, we placed two deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
-
-In the *docker* directory, we placed four Dockerfiles. *Dockerfile-local* is used for local debugging, and *Dockerfile* is used to build the image for an AKS deployment. These two files work with Open Liberty. *Dockerfile-wlp-local* and *Dockerfile-wlp* are also used for local debugging and to build the image for an AKS deployment respectively, but instead work with WebSphere Liberty.
-
-In directory *liberty/config*, the *server.xml* is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
-
-### Acquire necessary variables from AKS deployment
-
-After the offer is successfully deployed, an AKS cluster will be generated automatically. The AKS cluster is configured to connect to the ACR. Before we get started with the application, we need to extract the namespace configured for the AKS.
-
-1. Run the following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` you saved above. The output contains all the variables we need.
-
- ```bash
- echo <appDeploymentTemplateYamlEncoded> | base64 -d
- ```
-
-1. Save the `metadata.namespace` from this yaml output aside for later use in this article.
-
-### Build the project
-
-Now that you have gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment.
-
-```bash
-cd <path-to-your-repo>/javaee-app-db-using-actions/postgres
-
-# The following variables will be used for deployment file generation
-export LOGIN_SERVER=<Azure_Container_Registery_Login_Server_URL>
-export REGISTRY_NAME=<Azure_Container_Registery_Name>
-export USER_NAME=<Azure_Container_Registery_Username>
-export PASSWORD=<Azure_Container_Registery_Password>
-export DB_SERVER_NAME=${DB_NAME}.postgres.database.azure.com
-export DB_PORT_NUMBER=5432
-export DB_TYPE=postgres
-export DB_USER=${DB_ADMIN_USERNAME}@${DB_NAME}
-export DB_PASSWORD=${DB_ADMIN_PASSWORD}
-export NAMESPACE=<metadata.namespace>
-
-mvn clean install
-```
-
-### Test your project locally
-
-Use the `liberty:devc` command to run and test the project locally before dealing with any Azure complexity. For more information on `liberty:devc`, see the [Liberty Plugin documentation](https://github.com/OpenLiberty/ci.maven/blob/main/docs/dev.md#devc-container-mode).
-In the sample application, we've prepared *Dockerfile-local* and *Dockerfile-wlp-local* for use with `liberty:devc`.
-
-1. Start your local docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system.
-
-1. Start the application in `liberty:devc` mode
-
- ```bash
- cd <path-to-your-repo>/javaee-app-db-using-actions/postgres
-
- # If you are running with Open Liberty
- mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_TYPE} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-local
-
- # If you are running with WebSphere Liberty
- mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_TYPE} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-wlp-local
- ```
-
-1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working.
-
-1. Press `Ctrl+C` to stop `liberty:devc` mode.
-
-### Build image for AKS deployment
-
-After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
-
-```bash
-cd <path-to-your-repo>/javaee-app-db-using-actions/postgres
-
-# Fetch maven artifactId as image name, maven build version as image version
-IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
-IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
-
-cd <path-to-your-repo>/javaee-app-db-using-actions/postgres/target
-
-# If you are running with Open Liberty
-docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile .
-
-# If you are running with WebSphere Liberty
-docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
-```
-
-### Upload image to ACR
-
-Now, we upload the built image to the ACR created in the offer.
-
-```bash
-docker tag ${IMAGE_NAME}:${IMAGE_VERSION} ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
-docker login -u ${USER_NAME} -p ${PASSWORD} ${LOGIN_SERVER}
-docker push ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
-```
-
-### Deploy and test the application
-
-The steps in this section deploy and test the application.
-
-1. Connect to the AKS cluster
-
- Paste the value of **cmdToConnectToCluster** into a bash shell.
-
-1. Apply the DB secret
-
- ```bash
- kubectl apply -f <path-to-your-repo>/javaee-app-db-using-actions/postgres/target/db-secret.yaml
- ```
-
- You will see the output `secret/db-secret-postgres created`.
-
-1. Apply the deployment file
-
- ```bash
- kubectl apply -f <path-to-your-repo>/javaee-app-db-using-actions/postgres/target/openlibertyapplication.yaml
- ```
-
-1. Wait for the pods to be restarted
-
- Wait until all pods are restarted successfully using the following command.
-
- ```bash
- kubectl get pods -n $NAMESPACE --watch
- ```
-
- You should see output similar to the following to indicate that all the pods are running.
-
- ```bash
- NAME READY STATUS RESTARTS AGE
- javaee-cafe-cluster-67cdc95bc-2j2gr 1/1 Running 0 29s
- javaee-cafe-cluster-67cdc95bc-fgtt8 1/1 Running 0 29s
- javaee-cafe-cluster-67cdc95bc-h47qm 1/1 Running 0 29s
- ```
-
-1. Verify the results
-
- 1. Get endpoint of the deployed service
-
- ```bash
- kubectl get service -n $NAMESPACE
- ```
-
- 1. Go to `EXTERNAL-IP:9080` to test the application.
-
-## Clean up resources
-
-To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, container service, container registry, and all related resources.
-
-```azurecli-interactive
-az group delete --name <RESOURCE_GROUP_NAME> --yes --no-wait
-```
-
-## Next steps
-
-* [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/)
-* [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/)
-* [Open Liberty](https://openliberty.io/)
-* [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator)
-* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
Title: Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
+ Title: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
recommendations: false
-description: Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster.
--
+description: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
++ - Previously updated : 02/01/2021+ Last updated : 09/13/2022 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes # Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
-This article demonstrates how to:
+This article demonstrates how to:
+ * Run your Java, Java EE, Jakarta EE, or MicroProfile application on the Open Liberty or WebSphere Liberty runtime.
-* Build the application Docker image using Open Liberty container images.
-* Deploy the containerized application to an AKS cluster using the Open Liberty Operator.
+* Build the application Docker image using Open Liberty or WebSphere Liberty container images.
+* Deploy the containerized application to an AKS cluster using the Open Liberty Operator.
+
+The Open Liberty Operator simplifies the deployment and management of applications running on Kubernetes clusters. With the Open Liberty Operator, you can also perform more advanced operations, such as gathering traces and dumps.
-The Open Liberty Operator simplifies the deployment and management of applications running on Kubernetes clusters. With Open Liberty Operator, you can also perform more advanced operations, such as gathering traces and dumps.
+For more information on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more information on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
-For more details on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more details on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
+This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to AKS. The offer automatically provisions a number of Azure resources including an Azure Container Registry (ACR) instance, an AKS cluster, an Azure App Gateway Ingress Controller (AGIC) instance, the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aks). If you prefer manual step-by-step guidance for running Liberty on AKS that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/howto-deploy-java-liberty-app-manual).
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
-* This article requires the latest version of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
* If running the commands in this guide locally (instead of Azure Cloud Shell): * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, macOS, Windows Subsystem for Linux).
- * Install a Java SE implementation (for example, [AdoptOpenJDK OpenJDK 8 LTS/OpenJ9](https://adoptopenjdk.net/?variant=openjdk8&jvmVariant=openj9)).
+ * Install a Java SE implementation (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
* Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher. * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
-* Please make sure you have been assigned either `Owner` role or `Contributor` and `User Access Administrator` roles of the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group)
-
-## Create a resource group
-
-An Azure resource group is a logical group in which Azure resources are deployed and managed.
-
-Create a resource group called *java-liberty-project* using the [az group create](/cli/azure/group#az-group-create) command in the *eastus* location. This resource group will be used later for creating the Azure Container Registry (ACR) instance and the AKS cluster.
-
-```azurecli-interactive
-RESOURCE_GROUP_NAME=java-liberty-project
-az group create --name $RESOURCE_GROUP_NAME --location eastus
-```
-
-## Create an ACR instance
-
-Use the [az acr create](/cli/azure/acr#az-acr-create) command to create the ACR instance. The following example creates an ACR instance named *youruniqueacrname*. Make sure *youruniqueacrname* is unique within Azure.
-
-```azurecli-interactive
-export REGISTRY_NAME=youruniqueacrname
-az acr create --resource-group $RESOURCE_GROUP_NAME --name $REGISTRY_NAME --sku Basic --admin-enabled
-```
-
-After a short time, you should see a JSON output that contains:
-
-```output
- "provisioningState": "Succeeded",
- "publicNetworkAccess": "Enabled",
- "resourceGroup": "java-liberty-project",
-```
-
-### Connect to the ACR instance
-
-You will need to sign in to the ACR instance before you can push an image to it. Run the following commands to verify the connection:
-
-```azurecli-interactive
-export LOGIN_SERVER=$(az acr show -n $REGISTRY_NAME --query 'loginServer' -o tsv)
-export USER_NAME=$(az acr credential show -n $REGISTRY_NAME --query 'username' -o tsv)
-export PASSWORD=$(az acr credential show -n $REGISTRY_NAME --query 'passwords[0].value' -o tsv)
-
-docker login $LOGIN_SERVER -u $USER_NAME -p $PASSWORD
-```
-
-You should see `Login Succeeded` at the end of command output if you have logged into the ACR instance successfully.
-
-## Create an AKS cluster
-
-Use the [az aks create](/cli/azure/aks#az-aks-create) command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This will take several minutes to complete.
-
-```azurecli-interactive
-CLUSTER_NAME=myAKSCluster
-az aks create --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME --node-count 1 --generate-ssh-keys --enable-managed-identity
-```
-
-After a few minutes, the command completes and returns JSON-formatted information about the cluster, including the following:
-
-```output
- "nodeResourceGroup": "MC_java-liberty-project_myAKSCluster_eastus",
- "privateFqdn": null,
- "provisioningState": "Succeeded",
- "resourceGroup": "java-liberty-project",
-```
-
-### Connect to the AKS cluster
-
-To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command:
-
-```azurecli-interactive
-az aks install-cli
-```
-
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az-aks-get-credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
-
-```azurecli-interactive
-az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME --overwrite-existing
-```
-
-> [!NOTE]
-> The above command uses the default location for the [Kubernetes configuration file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), which is `~/.kube/config`. You can specify a different location for your Kubernetes configuration file using *--file*.
-
-To verify the connection to your cluster, use the [kubectl get]( https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command to return a list of the cluster nodes.
-
-```azurecli-interactive
-kubectl get nodes
-```
-
-The following example output shows the single node created in the previous steps. Make sure that the status of the node is *Ready*:
-
-```output
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-xxxxxxxx-yyyyyyyyyy Ready agent 76s v1.18.10
-```
+* Make sure you have been assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify this by following the steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group).
+
+## Create a Liberty on AKS deployment using the portal
+
+The following steps guide you to create a Liberty runtime on AKS. After completing these steps, you'll have an Azure Container Registry and an Azure Kubernetes Service cluster for the sample application.
+
+1. Visit the [Azure portal](https://portal.azure.com/). In the search box at the top of the page, type *IBM WebSphere Liberty and Open Liberty on Azure Kubernetes Service*. When the suggestions start appearing, select the one and only match that appears in the **Marketplace** section. If you prefer, you can go directly to the offer with this shortcut link: [https://aka.ms/liberty-aks](https://aka.ms/liberty-aks).
+1. Select **Create**.
+1. In the **Basics** pane, create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`.
+1. Select *East US* as **Region**.
+1. Select **Next: Configure cluster**.
+1. This section allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to leverage the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults and select **Next: Networking**.
+1. Next to **Connect to Azure Application Gateway?** select **Yes**. This pane lets you customize the following deployment options.
+ 1. You can customize the virtual network and subnet into which the deployment will place the resources. Leave these values at their defaults.
+ 1. You can provide the TLS/SSL certificate presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Do not go to production using a self-signed certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](/azure/active-directory/develop/howto-create-self-signed-certificate).
+ 1. You can enable cookie based affinity, also known as sticky sessions. We want this enabled for this article, so ensure this option is selected.
+ ![Screenshot of the enable cookie-based affinity checkbox.](./media/howto-deploy-java-liberty-app/enable-cookie-based-affinity.png)
+1. Select **Review + create** to validate your selected options.
+1. When you see the message **Validation Passed**, select **Create**. The deployment may take up to 20 minutes.
+
+## Capture selected information from the deployment
+
+If you navigated away from the **Deployment is in progress** page, the following steps will show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to the third step.
+
+1. In the upper left of any portal page, select the hamburger menu and select **Resource groups**.
+1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group.
+1. In the list of resources in the resource group, select the resource with **Type** of **Container registry**.
+1. In the navigation pane, under **Settings** select **Access keys**.
+1. Save aside the values for **Login server**, **Registry name**, **Username**, and **password**. You may use the copy icon at the right of each field to copy the value of that field to the system clipboard.
+1. Navigate again to the resource group into which you deployed the resources.
+1. In the **Settings** section, select **Deployments**.
+1. Select the bottom-most deployment in the list. The **Deployment name** will match the publisher ID of the offer. It will contain the string **ibm**.
+1. In the left pane, select **Outputs**.
+1. Using the same copy technique as with the preceding values, save aside the values for the following outputs:
+
+ * **appDeploymentTemplateYamlEncoded**
+ * **cmdToConnectToCluster**
+
+ These values will be used later in this article. Note that several other useful commands are listed in the outputs.
## Create an Azure SQL Database
-The steps in this section guide you through creating an Azure SQL Database single database for use with your app. If your application doesn't require a database, you can skip this section.
-
-1. Create a single database in Azure SQL Database by following the steps in: [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart). Return to this document after creating and configuring the database server.
- > [!NOTE]
- >
- > * At the **Basics** step, write down **Database name**, ***Server name**.database.windows.net*, **Server admin login** and **Password**.
- > * At the **Networking** step, set **Connectivity method** to **Public endpoint**, **Allow Azure services and resources to access this server** to **Yes**, and **Add current client IP address** to **Yes**.
- >
- > ![Screenshot of configuring SQL database networking](./media/howto-deploy-java-liberty-app/create-sql-database-networking.png)
-
-2. Once your database is created, open **your SQL server** > **Firewalls and virtual networks**. Set **Minimal TLS Version** to **> 1.0** and select **Save**.
+The following steps guide you through creating an Azure SQL Database single database for use with your app.
- ![Screenshot of configuring SQL database minimum TLS version](./media/howto-deploy-java-liberty-app/sql-database-minimum-TLS-version.png)
-
-3. Open **your SQL database** > **Connection strings** > Select **JDBC**. Write down the **Port number** following sql server address. For example, **1433** is the port number in the example below.
-
- ![Screenshot of getting SQL server jdbc connection string](./media/howto-deploy-java-liberty-app/sql-server-jdbc-connection-string.png)
--
-## Install Open Liberty Operator
-
-After creating and connecting to the cluster, install the [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator/tree/main/deploy/releases/0.8.0#option-2-install-using-kustomize) by running the following commands.
-
-```azurecli-interactive
-# Install Open Liberty Operator
-OPERATOR_VERSION=0.8.2
-mkdir -p overlays/watch-all-namespaces
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/overlays/watch-all-namespaces/olo-all-namespaces.yaml -q -P ./overlays/watch-all-namespaces
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/overlays/watch-all-namespaces/cluster-roles.yaml -q -P ./overlays/watch-all-namespaces
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/overlays/watch-all-namespaces/kustomization.yaml -q -P ./overlays/watch-all-namespaces
-mkdir base
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/base/kustomization.yaml -q -P ./base
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/base/open-liberty-crd.yaml -q -P ./base
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/base/open-liberty-operator.yaml -q -P ./base
-kubectl apply -k overlays/watch-all-namespaces
-```
+1. Create a single database in Azure SQL Database by following the steps in [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart), carefully noting the differences in the box below. Return to this article after creating and configuring the database server.
-## Configure and build the application image
+ > [!NOTE]
+ > At the **Basics** step, write down **Resource group**, **Database name**, **_\<server-name>_.database.windows.net**, **Server admin login**, and **Password**. The database **Resource group** will be referred to as `<db-resource-group>` later in this article.
+ >
+ > At the **Networking** step, set **Connectivity method** to **Public endpoint**, **Allow Azure services and resources to access this server** to **Yes**, and **Add current client IP address** to **Yes**.
+ >
+ > ![Screenshot of configuring SQL database networking.](./media/howto-deploy-java-liberty-app/create-sql-database-networking.png)
+ >
+ > Also at the **Networking** step, under **Encrypted connections**, set the **Minimum TLS version** to **TLS 1.0**.
+ >
+ > ![Screenshot of configuring SQL database networking TLS 1.0.](./media/howto-deploy-java-liberty-app/sql-database-minimum-TLS-version.png)
-To deploy and run your Liberty application on the AKS cluster, containerize your application as a Docker image using [Open Liberty container images](https://github.com/OpenLiberty/ci.docker) or [WebSphere Liberty container images](https://github.com/WASdev/ci.docker).
+Now that the database and AKS cluster have been created, we can proceed to preparing AKS to host your Open Liberty application.
-# [with DB connection](#tab/with-sql)
+## Configure and deploy the sample application
-Follow the steps in this section to deploy the sample application on the Jakarta EE runtime. These steps use Maven and the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin`, see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html).
+Follow the steps in this section to deploy the sample application on the Liberty runtime. These steps use Maven and the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin`, see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html).
### Check out the application
-Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks).
-There are three samples in the repository. We will use *javaee-app-db-using-actions/mssql*. Here is the file structure of the application.
+Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks).
+
+There are a few samples in the repository. We'll use *java-app/*. Here's the file structure of the application.
```
-javaee-app-db-using-actions/mssql
+java-app
├─ src/main/ │ ├─ aks/ │ │ ├─ db-secret.yaml
The directories *java*, *resources*, and *webapp* contain the source code of the
In the *aks* directory, we placed two deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
-In the *docker* directory, we place four Dockerfiles. *Dockerfile-local* is used for local debugging, and *Dockerfile* is used to build the image for an AKS deployment. These two files work with Open Liberty. *Dockerfile-wlp-local* and *Dockerfile-wlp* are also used for local debugging and to build the image for an AKS deployment respectively, but instead work with WebSphere Liberty.
+In the *docker* directory, we placed four Dockerfiles. *Dockerfile-local* is used for local debugging, and *Dockerfile* is used to build the image for an AKS deployment. These two files work with Open Liberty. *Dockerfile-wlp-local* and *Dockerfile-wlp* are also used for local debugging and to build the image for an AKS deployment respectively, but instead work with WebSphere Liberty.
+
+In the *liberty/config* directory, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
+
+### Acquire necessary variables from AKS deployment
-In the *liberty/config* directory, the *server.xml* is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
+After the offer is successfully deployed, an AKS cluster will be generated automatically. The AKS cluster is configured to connect to a generated ACR instance. Before we get started with the application, we need to extract the namespace configured for AKS.
+
+1. Run the following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` you saved above. The output contains all the variables we need.
+
+ ```bash
+ echo <appDeploymentTemplateYamlEncoded> | base64 -d
+ ```
-### Build project
+1. Save aside the `metadata.namespace` from this yaml output for later use in this article.
-Now that you have gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment.
+### Build the project
+
+Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment. The reason for this parameterization is to avoid having to hard-code values such as database server names, passwords, and other identifiers into the example source code, which makes the sample easier to use in a wider variety of contexts.
```bash
-cd <path-to-your-repo>/javaee-app-db-using-actions/mssql
+cd <path-to-your-repo>/java-app
# The following variables will be used for deployment file generation
-export LOGIN_SERVER=${LOGIN_SERVER}
-export REGISTRY_NAME=${REGISTRY_NAME}
-export USER_NAME=${USER_NAME}
-export PASSWORD=${PASSWORD}
+export LOGIN_SERVER=<Azure_Container_Registery_Login_Server_URL>
+export REGISTRY_NAME=<Azure_Container_Registery_Name>
+export USER_NAME=<Azure_Container_Registery_Username>
+export PASSWORD=<Azure_Container_Registery_Password>
export DB_SERVER_NAME=<Server name>.database.windows.net export DB_PORT_NUMBER=1433 export DB_NAME=<Database name> export DB_USER=<Server admin login>@<Server name> export DB_PASSWORD=<Server admin password>
-export NAMESPACE=default
+export NAMESPACE=<metadata.namespace>
mvn clean install ```+ ### Test your project locally
-Use the `liberty:devc` command to run and test the project locally before dealing with any Azure complexity. For more information on `liberty:devc`, see the [Liberty Plugin documentation](https://github.com/OpenLiberty/ci.maven/blob/main/docs/dev.md#devc-container-mode).
+
+Use the `liberty:devc` command to run and test the project locally before deploying to Azure. For more information on `liberty:devc`, see the [Liberty Plugin documentation](https://github.com/OpenLiberty/ci.maven/blob/main/docs/dev.md#devc-container-mode).
In the sample application, we've prepared *Dockerfile-local* and *Dockerfile-wlp-local* for use with `liberty:devc`. 1. Start your local docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system. 1. Start the application in `liberty:devc` mode
- ```bash
- cd <path-to-your-repo>/javaee-app-db-using-actions/mssql
-
- # If you are running with Open Liberty
- mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-local
+ ```bash
+ cd <path-to-your-repo>/java-app
+
+ # If you're running with Open Liberty
+ mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-local
- # If you are running with WebSphere Liberty
- mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-wlp-local
- ```
+ # If you're running with WebSphere Liberty
+ mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-wlp-local
+ ```
-1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser to verify the application is accessible and all functions are working.
+1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working.
1. Press `Ctrl+C` to stop `liberty:devc` mode.
In the sample application, we've prepared *Dockerfile-local* and *Dockerfile-wlp
After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image. ```bash
-cd <path-to-your-repo>/javaee-app-db-using-actions/mssql
+cd <path-to-your-repo>/java-app
# Fetch maven artifactId as image name, maven build version as image version
-IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
-IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
+export IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
+export IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
-cd <path-to-your-repo>/javaee-app-db-using-actions/mssql/target
+cd <path-to-your-repo>/java-app/target
# If you are running with Open Liberty docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile .
docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
### Upload image to ACR
-Now, we upload the built image to the ACR created in the previous steps.
+Now, we upload the built image to the ACR instance created by the offer.
```bash docker tag ${IMAGE_NAME}:${IMAGE_VERSION} ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
docker login -u ${USER_NAME} -p ${PASSWORD} ${LOGIN_SERVER}
docker push ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION} ```
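
If you want to confirm the push before moving on, one option (a sketch, not part of the original guide) is to list the tags now stored for the repository. It assumes `LOGIN_SERVER` and `IMAGE_NAME` are still set from the previous steps and derives the registry name from the login server.

```bash
# Derive the registry name from the login server, for example myregistry.azurecr.io -> myregistry.
REGISTRY_NAME=${LOGIN_SERVER%%.*}

# List the tags stored for the pushed repository.
az acr repository show-tags --name ${REGISTRY_NAME} --repository ${IMAGE_NAME} --output table
```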
-# [without DB connection](#tab/without-sql)
-
-1. Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks).
-1. Change directory to `javaee-app-simple-cluster` of your local clone.
-1. Run `mvn clean package` to package the application.
-1. Run `mvn liberty:dev` to test the application. You should see `The defaultServer server is ready to run a smarter planet.` in the command output if successful. Use `CTRL-C` to stop the application.
-1. Retrieve values for properties `artifactId` and `version` defined in the `pom.xml`.
-
- ```azurecli-interactive
- artifactId=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
- version=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
- ```
-1. Run `cd target` to change directory to the build of the sample.
-1. Run one of the following commands to build the application image and push it to the ACR instance.
- * Build with Open Liberty base image if you prefer to use Open Liberty as a lightweight open source Java™ runtime:
-
- ```azurecli-interactive
- # Build and tag application image. This will cause the ACR instance to pull the necessary Open Liberty base images.
- az acr build -t ${artifactId}:${version} -r $REGISTRY_NAME .
- ```
-
- * Build with WebSphere Liberty base image if you prefer to use a commercial version of Open Liberty:
-
- ```azurecli-interactive
- # Build and tag application image. This will cause the ACR instance to pull the necessary WebSphere Liberty base images.
- az acr build -t ${artifactId}:${version} -r $REGISTRY_NAME --file=Dockerfile-wlp .
- ```
+### Deploy and test the application
--
-## Deploy application on the AKS cluster
+The following steps deploy and test the application.
-The steps in this section deploy the application.
+1. Connect to the AKS cluster.
-# [with DB connection](#tab/with-sql)
+ Paste the value of **cmdToConnectToCluster** into a bash shell.
-Follow steps below to deploy the Liberty application on the AKS cluster.
-
-1. Attach the ACR instance to the AKS cluster so that the AKS cluster is authenticated to pull image from the ACR instance.
-
- ```azurecli-interactive
- az aks update -n $CLUSTER_NAME -g $RESOURCE_GROUP_NAME --attach-acr $REGISTRY_NAME
- ```
-
-1. Retrieve the value for `artifactId` defined in `pom.xml`.
+1. Apply the DB secret.
```bash
- cd <path-to-your-repo>/javaee-app-db-using-actions/mssql
- artifactId=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
+ cd <path-to-your-repo>/java-app/target
+ kubectl apply -f db-secret.yaml
```
-1. Apply the DB secret and deployment file by running the following command:
+ You'll see the output `secret/db-secret-postgres created`.
- ```bash
- cd <path-to-your-repo>/javaee-app-db-using-actions/mssql/target
-
- # Apply DB secret
- kubectl apply -f <path-to-your-repo>/javaee-app-db-using-actions/mssql/target/db-secret.yaml
-
- # Apply deployment file
- kubectl apply -f <path-to-your-repo>/javaee-app-db-using-actions/mssql/target/openlibertyapplication.yaml
-
- # Check if OpenLibertyApplication instance is created
- kubectl get openlibertyapplication ${artifactId}-cluster
-
- NAME IMAGE EXPOSED RECONCILED AGE
- javaee-cafe-cluster youruniqueacrname.azurecr.io/javaee-cafe:1.0.25 True 59s
+1. Apply the deployment file.
- # Check if deployment created by Operator is ready
- kubectl get deployment ${artifactId}-cluster --watch
-
- NAME READY UP-TO-DATE AVAILABLE AGE
- javaee-cafe-cluster 0/3 3 0 20s
+ ```bash
+ kubectl apply -f openlibertyapplication.yaml
```
-1. Wait until you see `3/3` under the `READY` column and `3` under the `AVAILABLE` column, then use `CTRL-C` to stop the `kubectl` watch process.
+1. Wait for the pods to be restarted.
-# [without DB connection](#tab/without-sql)
+ Wait until all pods are restarted successfully using the following command.
-Follow steps below to deploy the Liberty application on the AKS cluster.
-
-1. Attach the ACR instance to the AKS cluster so that the AKS cluster is authenticated to pull image from the ACR instance.
-
- ```azurecli-interactive
- az aks update -n $CLUSTER_NAME -g $RESOURCE_GROUP_NAME --attach-acr $REGISTRY_NAME
+ ```bash
+ kubectl get pods -n $NAMESPACE --watch
```
-1. Verify the current working directory is `javaee-app-simple-cluster/target` of your local clone.
-1. Run the following commands to deploy your Liberty application with 3 replicas to the AKS cluster. Command output is also shown inline.
-
- ```azurecli-interactive
- # Create OpenLibertyApplication "javaee-cafe-cluster"
- kubectl apply -f openlibertyapplication.yaml
-
- openlibertyapplication.openliberty.io/javaee-cafe-cluster created
-
- # Check if OpenLibertyApplication instance is created
- kubectl get openlibertyapplication ${artifactId}-cluster
+ You should see output similar to the following to indicate that all the pods are running.
- NAME IMAGE EXPOSED RECONCILED AGE
- javaee-cafe-cluster youruniqueacrname.azurecr.io/javaee-cafe:1.0.25 True 59s
-
- # Check if deployment created by Operator is ready
- kubectl get deployment ${artifactId}-cluster --watch
-
- NAME READY UP-TO-DATE AVAILABLE AGE
- javaee-cafe-cluster 0/3 3 0 20s
+ ```bash
+ NAME READY STATUS RESTARTS AGE
+ javaee-cafe-cluster-67cdc95bc-2j2gr 1/1 Running 0 29s
+ javaee-cafe-cluster-67cdc95bc-fgtt8 1/1 Running 0 29s
+ javaee-cafe-cluster-67cdc95bc-h47qm 1/1 Running 0 29s
```
-1. Wait until you see `3/3` under the `READY` column and `3` under the `AVAILABLE` column, use `CTRL-C` to stop the `kubectl` watch process.
---
-### Test the application
-
-When the application runs, a Kubernetes load balancer service exposes the application front end to the internet. This process can take a while to complete.
+1. Verify the results.
-To monitor progress, use the [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument.
+   1. Get the endpoint of the deployed service.
-```azurecli-interactive
-kubectl get service ${artifactId}-cluster --watch
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-javaee-cafe-cluster LoadBalancer 10.0.251.169 52.152.189.57 80:31732/TCP 68s
-```
-
-Once the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
+ ```bash
+ kubectl get service -n $NAMESPACE
+ ```
-Open a web browser to the external IP address of your service (`52.152.189.57` for the above example) to see the application home page. You should see the pod name of your application replicas displayed at the top-left of the page. Wait for a few minutes and refresh the page to see a different pod name displayed due to load balancing provided by the AKS cluster.
+ 1. Go to `http://EXTERNAL-IP` to test the application.
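
    If you prefer a scripted check, the following sketch (not part of the original steps) captures the external IP and probes the home page. It assumes the app's service is the only `LoadBalancer` service in `$NAMESPACE`.

    ```bash
    # Capture the external IP of the LoadBalancer service and print the response status line.
    EXTERNAL_IP=$(kubectl get service -n $NAMESPACE \
      -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].ip}')
    curl -sSI "http://${EXTERNAL_IP}/" | head -n 1
    ```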
-
->[!NOTE]
-> - Currently the application is not using HTTPS. It is recommended to [ENABLE TLS with your own certificates](ingress-own-tls.md).
-
-## Clean up the resources
+## Clean up resources
To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, container service, container registry, and all related resources. ```azurecli-interactive az group delete --name $RESOURCE_GROUP_NAME --yes --no-wait
+az group delete --name <db-resource-group> --yes --no-wait
``` ## Next steps
-You can learn more from references used in this guide:
- * [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/) * [Open Liberty](https://openliberty.io/) * [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator) * [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
-* [Liberty Maven Plugin](https://github.com/OpenLiberty/ci.maven#liberty-maven-plugin)
-* [Open Liberty Container Images](https://github.com/OpenLiberty/ci.docker)
-* [WebSphere Liberty Container Images](https://github.com/WASdev/ci.docker)
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Previously updated : 09/09/2022 Last updated : 09/16/2022 # Use ImageCleaner to clean up stale images on your Azure Kubernetes Service cluster (preview)
-It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which ImageCleaner can mitigate via automatic image identification and removal.
+It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which ImageCleaner can mitigate via automatic image identification and removal.
+
+ImageCleaner is based on Eraser, an open-source project for cleaning up images on Kubernetes nodes. For more information, see the [Eraser repository](https://github.com/Azure/eraser) on GitHub.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
Title: Use a managed identity in Azure Kubernetes Service description: Learn how to use a system-assigned or user-assigned managed identity in Azure Kubernetes Service (AKS) Previously updated : 06/14/2022 Last updated : 09/16/2022 # Use a managed identity in Azure Kubernetes Service
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
If you use custom domain names for the API Management endpoints, especially if y
In this scenario, if the SSL certificate that's used by the Management endpoint isn't signed by a well-known CA certificate, you must make sure that the CA certificate is trusted by the pod of the self-hosted gateway.
+> [!NOTE]
+> With the self-hosted gateway v2, API Management provides a new configuration endpoint: `<apim-service-name>.configuration.azure-api.net`. Currently, API Management doesn't enable configuring a custom domain name for the v2 configuration endpoint. If you need custom hostname mapping for this endpoint, you may be able to configure an override in the container's local hosts file, for example, using a [`hostAliases`](https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/#adding-additional-entries-with-hostaliases) element in a Kubernetes container spec.
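
For example, a `hostAliases` override could be applied to an existing self-hosted gateway deployment with a patch similar to the following sketch. This is one possible approach rather than a documented procedure, and the deployment name, namespace, IP address, and hostname are placeholders for your environment.

```bash
# Add a hostAliases entry to the gateway pod template (all values are placeholders).
kubectl patch deployment <gateway-deployment-name> -n <namespace> --type merge \
  -p '{"spec":{"template":{"spec":{"hostAliases":[{"ip":"<config-endpoint-ip>","hostnames":["<apim-service-name>.configuration.azure-api.net"]}]}}}}'
```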
+ ## Configuration backup Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the storage volume can use the backup copy upon restart. The volume mount path must be `/apim/config` and must be owned by group ID `1001`. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
Below are the current restrictions of WebSocket support in API Management:
* WebSocket APIs are not supported yet in the Consumption tier. * WebSocket APIs are not supported yet in the [self-hosted gateway](./self-hosted-gateway-overview.md).
-* Azure CLI, PowerShell, and SDK currently do not support management operations of WebSocket APIs.
* 200 active connections limit per unit. * Websockets APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message.
-* Currently, the set header policy doesn't support changing certain well-known headers, including `Host` headers, in onHandshake requests.
+* Currently, the [set-header](api-management-transformation-policies.md#SetHTTPheader) policy doesn't support changing certain well-known headers, including `Host` headers, in onHandshake requests.
+* During the TLS handshake with a WebSocket backend, API Management validates that the server certificate is trusted and that its subject name matches the hostname. With HTTP APIs, API Management validates that the certificate is trusted but doesn't validate that hostname and subject match.
### Unsupported policies
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
ms.assetid: 3be1f4bd-8a81-4565-8a56-528c037b24bd Previously updated : 03/21/2022 Last updated : 09/01/2022
On the **Add Access Restriction** pane, when you create a rule, do the following
1. Optionally, enter a name and description of the rule. 1. In the **Priority** box, enter a priority value.
-1. In the **Type** drop-down list, select the type of rule.
-
-The different types of rules are described in the following sections.
+1. In the **Type** drop-down list, select the type of rule. The different types of rules are described in the following sections.
+1. After entering the rule-specific input, select **Save** to save the changes.
> [!NOTE] > - There is a limit of 512 access restriction rules. If you require more than 512 access restriction rules, we suggest that you consider installing a standalone security product, such as Azure Front Door, Azure App Gateway, or an alternative WAF.
All available service tags are supported in access restriction rules. Each servi
1. To begin editing an existing access restriction rule, on the **Access Restrictions** page, select the rule you want to edit.
-1. On the **Edit Access Restriction** pane, make your changes, and then select **Update rule**. Edits are effective immediately, including changes in priority ordering.
+1. On the **Edit Access Restriction** pane, make your changes, and then select **Update rule**.
+
+1. Select **Save** to save the changes.
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-ip-edit.png?v2" alt-text="Screenshot of the 'Edit Access Restriction' pane in the Azure portal, showing the fields for an existing access restriction rule.":::
All available service tags are supported in access restriction rules. Each servi
### Delete a rule
-To delete a rule, on the **Access Restrictions** page, select the ellipsis (**...**) next to the rule you want to delete, and then select **Remove**.
+1. To delete a rule, on the **Access Restrictions** page, check the rule or rules you want to delete, and then select **Delete**.
+
+1. Select **Save** to save the changes.
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-delete.png" alt-text="Screenshot of the 'Access Restrictions' page, showing the 'Remove' ellipsis next to the access restriction rule to be deleted.":::
PowerShell example:
### Block a single IP address
-When you add your first access restriction rule, the service adds an explicit *Deny all* rule with a priority of 2147483647. In practice, the explicit *Deny all* rule is the final rule to be executed, and it blocks access to any IP address that's not explicitly allowed by an *Allow* rule.
-
-For a scenario where you want to explicitly block a single IP address or a block of IP addresses, but allow access to everything else, add an explicit *Allow All* rule.
+For a scenario where you want to explicitly block a single IP address or a block of IP addresses, but allow access to everything else, add a **Deny** rule for the specific IP address and configure the unmatched rule action to **Allow**.
:::image type="content" source="media/app-service-ip-restrictions/block-single-address.png" alt-text="Screenshot of the 'Access Restrictions' page in the Azure portal, showing a single blocked IP address."::: ### Restrict access to an SCM site
-In addition to being able to control access to your app, you can restrict access to the SCM site that's used by your app. The SCM site is both the web deploy endpoint and the Kudu console. You can assign access restrictions to the SCM site from the app separately or use the same set of restrictions for both the app and the SCM site. When you select the **Same restrictions as \<app name>** check box, everything is blanked out. If you clear the check box, your SCM site settings are reapplied.
+In addition to being able to control access to your app, you can restrict access to the SCM (Advanced Tools) site that's used by your app. The SCM site is both the web deploy endpoint and the Kudu console. You can assign access restrictions to the SCM site from the app separately or use the same set of restrictions for both the app and the SCM site. When you select the **Use main site rules** check box, the rules list is hidden and the SCM site uses the rules from the main site. If you clear the check box, your SCM site settings appear again.
### Restrict access to a specific Azure Front Door instance Traffic from Azure Front Door to your application originates from a well known set of IP ranges defined in the AzureFrontDoor.Backend service tag. Using a service tag restriction rule, you can restrict traffic to only originate from Azure Front Door. To ensure traffic only originates from your specific instance, you will need to further filter the incoming requests based on the unique http header that Azure Front Door sends.
You can add access restrictions programmatically by doing either of the followin
--rule-name 'IP example rule' --action Allow --ip-address 122.133.144.0/24 --priority 100 ```
- > [!NOTE]
- > Working with service tags, http headers or multi-source rules in Azure CLI requires at least version 2.23.0. You can verify the version of the installed module with: ```az version```
- * Use [Azure PowerShell](/powershell/module/Az.Websites/Add-AzWebAppAccessRestrictionRule). For example:
You can add access restrictions programmatically by doing either of the followin
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" -Name "Ip example rule" -Priority 100 -Action Allow -IpAddress 122.133.144.0/24 ```
- > [!NOTE]
- > Working with service tags, http headers or multi-source rules in Azure PowerShell requires at least version 5.7.0. You can verify the version of the installed module with: ```Get-InstalledModule -Name Az```
You can also set values manually by doing either of the following:
app-service Networking Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking-features.md
ms.assetid: 5c61eed1-1ad1-4191-9f71-906d610ee5b7 Previously updated : 09/20/2021 Last updated : 09/01/2022
Azure App Service scale units support many customers in each deployment. The Fre
The worker VMs are broken down in large part by the App Service plans. The Free, Shared, Basic, Standard, and Premium plans all use the same worker VM type. The PremiumV2 plan uses another VM type. PremiumV3 uses yet another VM type. When you change the VM family, you get a different set of outbound addresses. If you scale from Standard to PremiumV2, your outbound addresses will change. If you scale from PremiumV2 to PremiumV3, your outbound addresses will change. In some older scale units, both the inbound and outbound addresses will change when you scale from Standard to PremiumV2.
-There are a number of addresses that are used for outbound calls. The outbound addresses used by your app for making outbound calls are listed in the properties for your app. These addresses are shared by all the apps running on the same worker VM family in the App Service deployment. If you want to see all the addresses that your app might use in a scale unit, there's property called `possibleOutboundAddresses` that will list them.
+There are many addresses that are used for outbound calls. The outbound addresses used by your app for making outbound calls are listed in the properties for your app. These addresses are shared by all the apps running on the same worker VM family in the App Service deployment. If you want to see all the addresses that your app might use in a scale unit, there's property called `possibleOutboundAddresses` that will list them.
![Screenshot that shows app properties.](media/networking-features/app-properties.png)
-App Service has a number of endpoints that are used to manage the service. Those addresses are published in a separate document and are also in the `AppServiceManagement` IP service tag. The `AppServiceManagement` tag is used only in App Service Environments where you need to allow such traffic. The App Service inbound addresses are tracked in the `AppService` IP service tag. There's no IP service tag that contains the outbound addresses used by App Service.
+App Service has many endpoints that are used to manage the service. Those addresses are published in a separate document and are also in the `AppServiceManagement` IP service tag. The `AppServiceManagement` tag is used only in App Service Environments where you need to allow such traffic. The App Service inbound addresses are tracked in the `AppService` IP service tag. There's no IP service tag that contains the outbound addresses used by App Service.
![Diagram that shows App Service inbound and outbound traffic.](media/networking-features/default-behavior.png)
To learn how to set an address on your app, see [Add a TLS/SSL certificate in Az
### Access restrictions
-Access restrictions let you filter *inbound* requests. The filtering action takes place on the front-end roles that are upstream from the worker roles where your apps are running. Because the front-end roles are upstream from the workers, you can think of access restrictions as network-level protection for your apps.
+Access restrictions let you filter *inbound* requests. The filtering action takes place on the front-end roles that are upstream from the worker roles where your apps are running. Because the front-end roles are upstream from the workers, you can think of access restrictions as network-level protection for your apps. For more information about access restrictions, see [Access restrictions overview](./overview-access-restrictions.md).
+
+This feature allows you to build a list of allow and deny rules that are evaluated in priority order. It's similar to the network security group (NSG) feature in Azure networking. You can use this feature in an ASE or in the multi-tenant service. When you use it with an ILB ASE, you can restrict access from private address blocks. To learn how to enable this feature, see [Configuring access restrictions](./app-service-ip-restrictions.md).
-This feature allows you to build a list of allow and deny rules that are evaluated in priority order. It's similar to the network security group (NSG) feature in Azure networking. You can use this feature in an ASE or in the multi-tenant service. When you use it with an ILB ASE, you can restrict access from private address blocks.
> [!NOTE] > Up to 512 access restriction rules can be configured per app. ![Diagram that illustrates access restrictions.](media/networking-features/access-restrictions.png)
-#### IP-based access restriction rules
-
-The IP-based access restrictions feature helps when you want to restrict the IP addresses that can be used to reach your app. Both IPv4 and IPv6 are supported. Some use cases for this feature:
-* Restrict access to your app from a set of well-defined addresses.
-* Restrict access to traffic coming through an external load-balancing service or other network appliances with known egress IP addresses.
-
-To learn how to enable this feature, see [Configuring access restrictions][iprestrictions].
-
-> [!NOTE]
-> IP-based access restriction rules only handle virtual network address ranges when your app is in an App Service Environment. If your app is in the multi-tenant service, you need to use [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to restrict traffic to select subnets in your virtual network.
-
-#### Access restriction rules based on service endpoints
-
-Service endpoints allow you to lock down *inbound* access to your app so that the source address must come from a set of subnets that you select. This feature works together with IP access restrictions. Service endpoints aren't compatible with remote debugging. If you want to use remote debugging with your app, your client can't be in a subnet that has service endpoints enabled. The process for setting service endpoints is similar to the process for setting IP access restrictions. You can build an allow/deny list of access rules that includes public addresses and subnets in your virtual networks.
-
-> [!NOTE]
-> Access restriction rules based on service endpoints are not supported on apps that use IP-based SSL ([App-assigned address](#app-assigned-address)).
-
-Some use cases for this feature:
-
-* Set up an application gateway with your app to lock down inbound traffic to your app.
-* Restrict access to your app to resources in your virtual network. These resources can include VMs, ASEs, or even other apps that use virtual network integration.
-
-![Diagram that illustrates the use of service endpoints with Application Gateway.](media/networking-features/service-endpoints-appgw.png)
-
-To learn more about configuring service endpoints with your app, see [Azure App Service access restrictions][serviceendpoints].
-
-#### Access restriction rules based on service tags
-
-[Azure service tags][servicetags] are well defined sets of IP addresses for Azure services. Service tags group the IP ranges used in various Azure services and is often also further scoped to specific regions. This allows you to filter *inbound* traffic from specific Azure services.
-
-For a full list of tags and more information, visit the service tag link above.
-To learn how to enable this feature, see [Configuring access restrictions][iprestrictions].
-
-#### Http header filtering for access restriction rules
-
-For each access restriction rule, you can add additional http header filtering. This allows you to further inspect the incoming request and filter based on specific http header values. Each header can have up to eight values per rule. The following list of http headers is currently supported:
-* X-Forwarded-For
-* X-Forwarded-Host
-* X-Azure-FDID
-* X-FD-HealthProbe
-
-Some use cases for http header filtering are:
-* Restrict access to traffic from proxy servers forwarding the host name
-* Restrict access to a specific Azure Front Door instance with a service tag rule and X-Azure-FDID header restriction
- ### Private endpoint Private endpoint is a network interface that connects you privately and securely to your Web App by Azure private link. Private endpoint uses a private IP address from your virtual network, effectively bringing the web app into your virtual network. This feature is only for *inbound* flows to your web app.
This feature is commonly used to:
Because this feature enables access to on-premises resources without an inbound firewall hole, it's popular with developers. The other outbound App Service networking features are related to Azure Virtual Network. Hybrid Connections doesn't depend on going through a virtual network. It can be used for a wider variety of networking needs.
-Note that App Service Hybrid Connections is unaware of what you're doing on top of it. So you can use it to access a database, a web service, or an arbitrary TCP socket on a mainframe. The feature essentially tunnels TCP packets.
+App Service Hybrid Connections is unaware of what you're doing on top of it. So you can use it to access a database, a web service, or an arbitrary TCP socket on a mainframe. The feature essentially tunnels TCP packets.
Hybrid Connections is popular for development, but it's also used in production applications. It's great for accessing a web service or database, but it's not appropriate for situations that involve creating many connections.
Gateway-required App Service virtual network integration enables your app to mak
![Diagram that illustrates gateway-required virtual network integration.](media/networking-features/gw-vnet-integration.png)
-This feature solves the problem of accessing resources in other virtual networks. It can even be used to connect through a virtual network to either other virtual networks or on-premises. It doesn't work with ExpressRoute-connected virtual networks, but it does work with site-to-site VPN-connected networks. It's usually inappropriate to use this feature from an app in an App Service Environment (ASE) because the ASE is already in your virtual network. Use cases for this feature:
+This feature solves the problem of accessing resources in other virtual networks. It can even be used to connect through a virtual network to either other virtual networks or on-premises. It doesn't work with ExpressRoute-connected virtual networks, but it does work with site-to-site VPN-connected networks. It's inappropriate to use this feature from an app in an App Service Environment (ASE) because the ASE is already in your virtual network. Use cases for this feature:
* Access resources in cross region virtual networks that aren't peered to a virtual network in the region.
An App Service Environment (ASE) is a single-tenant deployment of the Azure App
* Access resources across service endpoints. * Access resources across private endpoints.
-With an ASE, you don't need to use virtual network integration because the ASE is already in your virtual network. If you want to access resources like SQL or Azure Storage over service endpoints, enable service endpoints on the ASE subnet. If you want to access resources in the virtual network or private endpoints in the virtual network, you don't need to do any additional configuration. If you want to access resources across ExpressRoute, you're already in the virtual network and don't need to configure anything on the ASE or the apps in it.
+With an ASE, you don't need to use virtual network integration because the ASE is already in your virtual network. If you want to access resources like SQL or Azure Storage over service endpoints, enable service endpoints on the ASE subnet. If you want to access resources in the virtual network or private endpoints in the virtual network, you don't need to do any extra configuration. If you want to access resources across ExpressRoute, you're already in the virtual network, and don't need to configure anything on the ASE or the apps in it.
Because the apps in an ILB ASE can be exposed on a private IP address, you can easily add WAF devices to expose just the apps that you want to the internet and help keep the rest secure. This feature can help make the development of multi-tier applications easier.
If you're hosting both the front end and API app for a multi-tier application, y
Here are some considerations to help you decide which method to use:
-* When you use service endpoints, you only need to secure traffic to your API app to the integration subnet. Service endpoints helps to secure the API app, but you could still have data exfiltration from your front-end app to other apps in the app service.
+* When you use service endpoints, you only need to secure traffic to your API app to the integration subnet. Service endpoints help to secure the API app, but you could still have data exfiltration from your front-end app to other apps in the app service.
* When you use private endpoints, you have two subnets at play, which adds complexity. Also, the private endpoint is a top-level resource and adds management overhead. The benefit of using private endpoints is that you don't have the possibility of data exfiltration. Either method will work with multiple front ends. On a small scale, service endpoints are easier to use because you simply enable service endpoints for the API app on the front-end integration subnet. As you add more front-end apps, you need to adjust every API app to include service endpoints with the integration subnet. When you use private endpoints, there's more complexity, but you don't have to change anything on your API apps after you set a private endpoint.
If you scan App Service, you'll find several ports that are exposed for inbound
<!--Links--> [appassignedaddress]: ./configure-ssl-certificate.md
-[iprestrictions]: ./app-service-ip-restrictions.md
[serviceendpoints]: ./app-service-ip-restrictions.md [hybridconn]: ./app-service-hybrid-connections.md [vnetintegrationp2s]: ./overview-vnet-integration.md
app-service Overview Access Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md
+
+ Title: App Service Access restrictions
+description: This article provides an overview of the access restriction features in App Service
++ Last updated : 09/01/2022+++
+# Azure App Service access restrictions
+
+Access restrictions in App Service are equivalent to a firewall, allowing you to block and filter traffic. Access restrictions apply to **inbound** access only. Most App Service pricing tiers also let you add private endpoints to the app, which provide another entry point into the app. Access restrictions don't apply to traffic entering through a private endpoint. For all apps hosted on App Service, the default entry point is publicly available. The only exception is apps hosted in an ILB App Service Environment, where the default entry point is internal to the virtual network.
+
+## How it works
+
+When traffic reaches App Service, it will first evaluate if the traffic originates from a private endpoint or is coming through the default endpoint. If the traffic is sent through a private endpoint, it will be sent directly to the site without any restrictions. Restrictions to private endpoints are configured using network security groups.
+
+If the traffic is sent through the default endpoint (often a public endpoint), the traffic is first evaluated at the site access level. Here you can either enable or disable access. If site access is enabled, the traffic will be evaluated at the app access level. For any app, you'll have both the main site and the advanced tools site (also known as the scm or kudu site). You have the option of configuring a set of access restriction rules for each site. You can also specify the behavior if no rules are matched. The following sections go into more detail.
++
+## App access
+
+App access allows you to configure whether access is available through the default (public) endpoint. If the setting has never been configured, the default behavior is to enable access unless a private endpoint exists, after which access is implicitly disabled. You can explicitly configure this behavior to either enabled or disabled even if private endpoints exist.
++
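As an illustration of how app access could be set explicitly, the following sketch uses the generic `az resource update` command to set the `publicNetworkAccess` property on the `Microsoft.Web/sites` resource. The resource group and app name are placeholders, and the property path is an assumption to verify against your API version.

```azurecli-interactive
# Explicitly disable access through the default (public) endpoint; use Enabled to turn it back on.
az resource update \
  --resource-group <resource-group-name> \
  --name <app-name> \
  --resource-type "Microsoft.Web/sites" \
  --set properties.publicNetworkAccess=Disabled
```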
+## Site access
+
+Site access restrictions let you filter the incoming requests. Site access restrictions allow you to build a list of allow and deny rules that are evaluated in priority order. It's similar to the network security group (NSG) feature in Azure networking.
++
+Site access restriction has several types of rules that you can apply:
+
+### Unmatched rule
+
+You can configure the behavior when no rules are matched (the default action). It's a special rule that always appears as the last rule of the rules collection. If the setting has never been configured, the unmatched rule behavior is to allow all access unless one or more rules exist, after which it's implicitly changed to deny all access. You can explicitly configure this behavior to either allow or deny access regardless of defined rules.
+
+### IP-based access restriction rules
+
+The IP-based access restrictions feature helps when you want to restrict the IP addresses that can be used to reach your app. Both IPv4 and IPv6 are supported. Some use cases for this feature:
+
+* Restrict access to your app from a set of well-defined addresses.
+* Restrict access to traffic coming through an external load-balancing service or other network appliances with known egress IP addresses.
+
+To learn how to enable this feature, see [Configuring access restrictions](./app-service-ip-restrictions.md).
+
+> [!NOTE]
+> IP-based access restriction rules only handle virtual network address ranges when your app is in an App Service Environment. If your app is in the multi-tenant service, you need to use [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to restrict traffic to select subnets in your virtual network.
+
+### Access restriction rules based on service endpoints
+
+Service endpoints allow you to lock down *inbound* access to your app so that the source address must come from a set of subnets that you select. This feature works together with IP access restrictions. Service endpoints aren't compatible with remote debugging. If you want to use remote debugging with your app, your client can't be in a subnet that has service endpoints enabled. The process for setting service endpoints is similar to the process for setting IP access restrictions. You can build an allow/deny list of access rules that includes public addresses and subnets in your virtual networks.
+
+> [!NOTE]
+> Access restriction rules based on service endpoints are not supported on apps that use IP-based SSL ([App-assigned address](./networking-features.md#app-assigned-address)).
+
+To learn more about configuring service endpoints with your app, see [Virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
+
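A minimal Azure CLI sketch for this rule type is shown below. It assumes the subnet already has the `Microsoft.Web` service endpoint enabled; the resource names are placeholders.

```azurecli-interactive
# Allow traffic only from a specific service-endpoint-enabled subnet.
az webapp config access-restriction add \
  --resource-group <resource-group-name> --name <app-name> \
  --rule-name "Front-end subnet" --action Allow --priority 100 \
  --vnet-name <vnet-name> --subnet <subnet-name>
```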
+#### Any service endpoint source
+
+For testing or in specific scenarios, you may want to allow traffic from any service endpoint enabled subnet. You can do that by defining an IP-based rule with the text "AnyVnets" instead of an IP range. You can't create these rules in the portal, but you can modify an existing IP-based rule and replace the IP address with the "AnyVnets" string.
+
+### Access restriction rules based on service tags
+
+[Azure service tags](../virtual-network/service-tags-overview.md) are well-defined sets of IP addresses for Azure services. Service tags group the IP ranges used in various Azure services and are often further scoped to specific regions. This type of rule allows you to filter *inbound* traffic from specific Azure services.
+
+For a full list of tags and more information, visit the service tag link above.
+To learn how to enable this feature, see [Configuring access restrictions](./app-service-ip-restrictions.md).
+
+### Multi-source rules
+
+Multi-source rules allow you to combine up to eight IP ranges or eight service tags in a single rule. You might use multi-source rules if you have more than 512 IP ranges, or if you want to create logical rules where multiple IP ranges are combined with a single http header filter.
+
+Multi-source rules are defined the same way you define single-source rules, but with each range separated by a comma.
+
+You can't create these rules in the portal, but you can modify an existing service tag or IP-based rule and add more sources to the rule.
+
+### Http header filtering for site access restriction rules
+
+For any rule, regardless of type, you can add http header filtering. Http header filters allow you to further inspect the incoming request and filter based on specific http header values. Each header can have up to eight values per rule. The following lists the supported http headers:
+
+* **X-Forwarded-For**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-For) for identifying the originating IP address of a client connecting through a proxy server. Accepts valid CIDR values.
+* **X-Forwarded-Host**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-Host) for identifying the original host requested by the client. Accepts any string up to 64 characters in length.
+* **X-Azure-FDID**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) for identifying the reverse proxy instance. Azure Front Door will send a GUID identifying the instance, but it can also be used by third-party proxies to identify the specific instance. Accepts any string up to 64 characters in length.
+* **X-FD-HealthProbe**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) for identifying the health probe of the reverse proxy. Azure Front Door will send "1" to uniquely identify a health probe request. The header can also be used by third-party proxies to identify health probes. Accepts any string up to 64 characters in length.
+
+Some use cases for http header filtering are:
+* Restrict access to traffic from proxy servers forwarding the host name
+* Restrict access to a specific Azure Front Door instance with a service tag rule and X-Azure-FDID header restriction
+
+## Advanced use cases
+
+Combining the above features allows you to solve some specific use cases that are described in the following sections.
+
+### Block a single IP address
+
+If you want to deny/block one or more specific IP addresses, you can add the IP addresses as deny rules and configure the unmatched rule to allow all unmatched traffic.
+
+### Restrict access to the advanced tools site
+
+The advanced tools site, which is also known as scm or kudu, has an individual rules collection that you can configure. You can also configure the unmatched rule for this site. A setting will also allow you to use the rules configured for the main site.
+
+### Deploy through a private endpoint
+
+You might have a site that is publicly accessible, but your deployment system is in a virtual network. You can keep the deployment traffic private by adding a private endpoint. You then need to ensure that public app access is enabled. Finally, you need to set the unmatched rule for the advanced tools site to deny, which blocks all public traffic to that endpoint.
+
+### Allow external partner access to private endpoint protected site
+
+In this scenario, you're accessing your site through a private endpoint and are deploying through a private endpoint. You may want to temporarily invite an external partner to test the site. You can do that by enabling public app access and adding an IP-based rule to identify the partner's client. Configure the unmatched rule action to deny for both the main and advanced tools sites.
+
+### Restrict access to a specific Azure Front Door instance
+
+Traffic from Azure Front Door to your application originates from a well-known set of IP ranges defined in the AzureFrontDoor.Backend service tag. Using a service tag restriction rule, you can restrict traffic to only originate from Azure Front Door. To ensure traffic only originates from your specific instance, you'll need to further filter the incoming requests based on the unique `X-Azure-FDID` http header that Azure Front Door sends. You can find the Front Door ID in the portal.
+
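One way to express this combination with Azure CLI is sketched below. It assumes a CLI version that supports service tags and http headers on access restriction rules; `<front-door-id>` is the Front Door ID from the portal, and the other names are placeholders.

```azurecli-interactive
# Allow only Azure Front Door traffic, further filtered to a specific Front Door instance.
az webapp config access-restriction add \
  --resource-group <resource-group-name> --name <app-name> \
  --rule-name "Front Door only" --action Allow --priority 100 \
  --service-tag AzureFrontDoor.Backend \
  --http-headers x-azure-fdid=<front-door-id>
```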
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to restrict access](app-service-ip-restrictions.md)
+
+> [!div class="nextstepaction"]
+> [Private endpoints for App Service apps](./networking/private-endpoint.md)
+
application-gateway Configure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-web-app.md
The above conditions (explained in more detail in [Architecture Center](/azure/a
The web apps deployed in these examples use public IP addresses that can be accessed directly from the Internet. This helps with troubleshooting when you are learning about a new feature and trying new things. But if you intend to deploy a feature into production, you'll want to add more restrictions. Consider the following options: -- Configure [Access restriction rules based on service endpoints](../app-service/networking-features.md#access-restriction-rules-based-on-service-endpoints). This allows you to lock down inbound access to the app making sure the source address is from Application Gateway.
+- Configure [Access restriction rules based on service endpoints](../app-service/overview-access-restrictions.md#access-restriction-rules-based-on-service-endpoints). This allows you to lock down inbound access to the app making sure the source address is from Application Gateway.
- Use [Azure App Service static IP restrictions](../app-service/app-service-ip-restrictions.md). For example, you can restrict the web app so that it only receives traffic from the application gateway. Use the app service IP restriction feature to list the application gateway VIP as the only address with access.
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
appGWVnetId=$(az network vnet show -n myVnet -g myResourceGroup -o tsv --query "
az network vnet peering create -n AKStoAppGWVnetPeering -g $nodeResourceGroup --vnet-name $aksVnetName --remote-vnet $appGWVnetId --allow-vnet-access ```
+> [!NOTE]
+> In the "Deploy a new AKS cluster" step above, the AKS cluster was created with Azure CNI. If you have an existing AKS cluster that uses [kubenet mode](../aks/configure-kubenet.md), you need to update the route table so that packets destined for a pod IP reach the node that hosts the pod.
+> A simple way to achieve this is to associate the route table created by AKS with the Application Gateway's subnet, as sketched in the example below.
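
A sketch of that association with Azure CLI follows. It reuses the `$nodeResourceGroup` variable and the `myVnet`/`myResourceGroup` names from this tutorial, assumes the AKS-managed route table is the only one in the node resource group, and uses a placeholder for the application gateway's subnet name.

```azurecli-interactive
# Look up the route table that AKS manages for a kubenet cluster.
routeTableId=$(az network route-table list -g $nodeResourceGroup --query "[0].id" -o tsv)

# Associate that route table with the subnet that hosts the application gateway.
az network vnet subnet update -g myResourceGroup --vnet-name myVnet \
  -n <application-gateway-subnet-name> --route-table $routeTableId
```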
++ ## Deploy a sample application using AGIC You'll now deploy a sample application to the AKS cluster you created that will use the AGIC add-on for Ingress and connect the application gateway to the AKS cluster. First, you'll get credentials to the AKS cluster you deployed by running the `az aks get-credentials` command.
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. You can get started by exploring the pre-trained models with sample or your own documents. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK](get-started-v3-sdk-rest-api.md) and other quickstarts.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE56n49]
## Prerequisites for new users
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
For the usage with [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api
| Adjustable | No | Yes<sup>2</sup> | | **Max document size** | 500 MB | 500 MB | | Adjustable | No | No |
-| **Max number of pages (Analysis)** | 2 | No limit |
+| **Max number of pages (Analysis)** | 2 | 2000 |
| Adjustable | No | No | | **Max size of labels file** | 10 MB | 10 MB | | Adjustable | No | No |
azure-app-configuration Concept Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md
This team would benefit from geo-replication. They can create a replica of their
- Geo-replication isn't available in the free tier. - Each replica has limits, as outlined in the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/). These limits are isolated per replica. - Azure App Configuration also supports Azure availability zones to create a resilient and highly available store within an Azure Region. Availability zone support is automatically included for a replica if the replica's region has availability zone support. The combination of availability zones for redundancy within a region, and geo-replication across multiple regions, enhances both the availability and performance of a configuration store.-- Currently, you can only authenticate with replica endpoints with [Azure AD](/azure-app-configuration/overview-managed-identity).
+- Currently, you can only authenticate with replica endpoints by using [Azure Active Directory (Azure AD)](/azure/app-service/overview-managed-identity).
<!-- To add once these links become available: - Request handling for replicas will vary by configuration provider, for further information reference [.NET Geo-replication Reference](https://azure.microsoft.com/pricing/details/app-configuration/) and [Java Geo-replication Reference](https://azure.microsoft.com/pricing/details/app-configuration/).
Each replica created will add extra charges. Reference the [App Configuration pr
> [!div class="nextstepaction"] > [How to enable Geo replication](./howto-geo-replication.md)
-> [Resiliency and Disaster Recovery](./concept-disaster-recovery.md)
+> [Resiliency and Disaster Recovery](./concept-disaster-recovery.md)
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 08/30/2022 Last updated : 09/15/2022 ms.devlang: azurecli
Helm release deployment succeeded
> [!TIP] > The above command without the location parameter specified creates the Azure Arc-enabled Kubernetes resource in the same location as the resource group. To create the Azure Arc-enabled Kubernetes resource in a different location, specify either `--location <region>` or `-l <region>` when running the `az connectedk8s connect` command.
+> [!IMPORTANT]
+> In some cases, deployment may fail due to a timeout error. Please see our [troubleshooting guide](troubleshooting.md#helm-timeout-error) for details on how to resolve this issue.
+ ### [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 06/13/2022 Last updated : 09/15/2022 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
To resolve this issue, try the following steps.
name: kube-aad-proxy-certificate ```
- If the certificate is missing, please contact support.
+ If the certificate is missing, [delete the deployment](quickstart-connect-cluster.md#clean-up-resources) and re-onboard with a different name for the cluster. If the problem continues, please contact support.
### Helm validation error
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
You need a Windows or Linux machine that can access both your vCenter Server ins
7. Select a subscription and resource group where the resource bridge will be created.
-8. Under **Region**, select an Azure location where the resource metadata will be stored. Currently, supported regions are **East US** and **West Europe**.
+8. Under **Region**, select an Azure location where the resource metadata will be stored. Currently, supported regions are **East US**, **West Europe**, **Australia East** and **Canada Central**.
9. Provide a name for **Custom location**. This is the name that you'll see when you deploy VMs. Name it for the datacenter or the physical location of your datacenter. For example: **contoso-nyc-dc**.
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Last updated 05/17/2022
[Redis persistence](https://redis.io/topics/persistence) allows you to persist data stored in Redis. You can also take snapshots and back up the data. If there's a hardware failure, you load the data. The ability to persist data is a huge advantage over the Basic or Standard tiers where all the data is stored in memory. Data loss is possible if a failure occurs where Cache nodes are down.
+> [!IMPORTANT]
+>
+> Check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete will cause very high storage costs. For more information, see [Should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete)
+>
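
One way to check, sketched with Azure CLI (the storage account and resource group names are placeholders):

```azurecli-interactive
# Shows whether blob soft delete is enabled on the storage account you plan to use for persistence.
az storage account blob-service-properties show \
  --account-name <storage-account-name> \
  --resource-group <resource-group-name> \
  --query "deleteRetentionPolicy"
```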
+ Azure Cache for Redis offers Redis persistence using the Redis database (RDB) and Append only File (AOF): - **RDB persistence** - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence.
Persistence writes Redis data into an Azure Storage account that you own and man
| Setting | Suggested value | Description | | | - | -- | | **Backup Frequency** | Drop-down and select a backup interval. Choices include **15 Minutes**, **30 minutes**, **60 minutes**, **6 hours**, **12 hours**, and **24 hours**. | This interval starts counting down after the previous backup operation successfully completes. When it elapses, a new backup starts. |
- | **Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, using the soft delete feature on the storage account could lead to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). |
+ | **Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, using the soft delete feature on the storage account is strongly discouraged as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). |
| **Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. | The first backup starts once the backup frequency interval elapses.
Persistence writes Redis data into an Azure Storage account that you own and man
| Setting | Suggested value | Description | | | - | -- |
- | **First Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, using the soft delete feature on the storage account could lead to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). |
+ | **First Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, using the soft delete feature on the storage account is strongly discouraged as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). |
| **First Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. | | **Second Storage Account** | (Optional) Drop-down and select your secondary storage account. | You can optionally configure another storage account. If a second storage account is configured, the writes to the replica cache are written to this second storage account. | | **Second Storage Key** | (Optional) Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. |
Yes, you'll be charged for the storage being used as per the pricing model of th
### How frequently does RDB and AOF persistence write to my blobs, and should I enable soft delete?
-Soft delete isn't recommended. RDB and AOF persistence can write to your blobs as frequently as every hour, every few minutes, or every second. Also, enabling soft delete on a storage account means Azure Cache for Redis can't minimize storage costs by deleting the old backup data. Soft delete can quickly become expensive with the typical data sizes of a cache and write operations every second. For more information on soft delete costs, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md).
+Enabling soft delete on a storage account that's used for Azure Cache for Redis data persistence is strongly discouraged. RDB and AOF persistence can write to your blobs as frequently as every hour, every few minutes, or every second. Also, enabling soft delete on a storage account means Azure Cache for Redis can't minimize storage costs by deleting the old backup data. Soft delete quickly becomes expensive with the typical data sizes of a cache and write operations every second. For more information on soft delete costs, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md).
### Can I change the RDB backup frequency after I create the cache?
azure-fluid-relay Deploy Fluid Static Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/deploy-fluid-static-web-apps.md
If you don't have an Azure subscription, [create a free trial account](https://a
## Connect to Azure Fluid Relay
-You can connect to Azure Fluid Relay by providing the tenant ID and key that is uniquely generated for you when creating the Azure resource. You can build your own token provider implementation or you can use the two token provider implementations that the Fluid Framework provides: [InsecureTokenProvider](https://fluidframework.com/docs/apis/test-client-utils/insecuretokenprovider) and [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider).
+You can connect to Azure Fluid Relay by providing the tenant ID and key that are uniquely generated for you when you create the Azure resource. You can build your own token provider implementation, or you can use one of the two token provider implementations that the Fluid Framework provides: [InsecureTokenProvider](https://fluidframework.com/docs/apis/test-client-utils/insecuretokenprovider-class) and [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider-class).
To learn more about using InsecureTokenProvider for local development, see [Connecting to the service](connect-fluid-azure-service.md#connecting-to-the-service) and [Authentication and authorization in your app](../concepts/authentication-authorization.md#the-token-provider).
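For orientation, here's a minimal TypeScript sketch of wiring `InsecureTokenProvider` into `AzureClient` for local prototyping. It isn't taken from the article: the placeholder tenant ID, key, endpoint, and user values are assumptions, and the exact connection-config property names can differ between `@fluidframework/azure-client` versions, so verify them against the package version you install.

```typescript
import { AzureClient } from "@fluidframework/azure-client";
import { InsecureTokenProvider } from "@fluidframework/test-client-utils";

// For local development/prototyping only: InsecureTokenProvider embeds the tenant
// key in client code. Production apps should sign tokens server-side, for example
// with AzureFunctionTokenProvider, as the links above describe.
const user = { id: "dev-user", name: "Dev User" }; // hypothetical test user

const client = new AzureClient({
    connection: {
        type: "remote",
        tenantId: "<tenant-id>",                                        // from the Azure Fluid Relay resource
        tokenProvider: new InsecureTokenProvider("<tenant-key>", user), // never ship the key in production
        endpoint: "<service-endpoint>",                                 // the relay endpoint shown in the portal
    },
});
```

From here, the usual `client.createContainer()` and `client.getContainer()` calls work the same regardless of which token provider is plugged in, which is what makes swapping in a production token provider later straightforward.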
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
zone_pivot_groups: programming-languages-set-functions
| 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. | > [!IMPORTANT]
-> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime can no longer be supported. Before that time, please test, verify, and migrate your function apps to version 4.x of the Functions runtime. For more information, see [Migrating from 3.x to 4.x](#migrating-from-3x-to-4x).
+> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime will no longer be supported. Before that date, test, verify, and migrate your function apps to version 4.x of the Functions runtime. For more information, see [Migrating from 3.x to 4.x](#migrating-from-3x-to-4x). After that date, function apps can still be created and deployed, and existing apps continue to run. However, your apps won't be eligible for new features, security patches, performance optimizations, or support until you upgrade them to version 4.x.
+>
>End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all Azure Functions runtime languages. >Functions version 1.x is still supported for C# function apps that require the .NET Framework. Preview support is now available in Functions 4.x to [run C# functions on .NET Framework 4.8](dotnet-isolated-process-guide.md#supported-versions).
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
This article explains Azure functions language runtime support policy.
## Retirement process
-Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full support coverages for function apps, Azure Functions uses a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
+Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full support coverage for function apps, Functions aligns its support with the end-of-life schedule for each language. To achieve this, Functions implements a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
### Notification phase
We'll send notification emails to function app users about upcoming language ver
### Retirement phase
-Starting on the end-of-life date for a language version, you can no longer create new function apps targeting that language version.
-
-After the language end-of-life date, function apps that use retired language versions won't be eligible for new features, security patches, and performance optimizations. However, these function apps will continue to run on the platform.
+After the language end-of-life date, function apps that use retired language versions can still be created and deployed, and they continue to run on the platform. However, your apps won't be eligible for new features, security patches, and performance optimizations until you upgrade them to a supported language version.
> [!IMPORTANT] >You're highly encouraged to upgrade the language version of your affected function apps to a supported version.
->If you're running functions apps using an unsupported language version, you'll be required to upgrade before receiving support for the function apps.
+>If you're running function apps using an unsupported language version, you'll be required to upgrade before receiving support for your function app.
## Retirement policy exceptions
There are few exceptions to the retirement policy outlined above. Here is a list
|Language Versions |EOL Date |Retirement Date| |--|--|-|
-|.NET 5|8 May 2022|TBA|
-|Node 6|30 April 2019|28 February 2022|
-|Node 8|31 December 2019|28 February 2022|
-|Node 10|30 April 2021|30 September 2022|
-|Node 12|30 Apr 2022|TBA|
|PowerShell Core 6| 4 September 2020|30 September 2022| |Python 3.6 |23 December 2021|30 September 2022| - ## Language version support timeline To learn more about specific language version support policy timeline, visit the following external resources:
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Alcala Consulting Inc.](https://www.alcalaconsulting.com/)| |[Alliance Enterprises, Inc.](https://www.allianceenterprises.com)| |[Alvarez Technology Group](https://www.alvareztg.com/)|
-|[Amalgama Technologies Inc](http://amalgamatetech.com/)|
+|Amalgama Technologies Inc|
|[Ambonare](https://redriver.com/press-release/austinacquisition)| |[American Technology Services LLC](https://networkats.com/)| |[Anautics](https://anautics.com)|
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 9/16/2022 Last updated : 9/15/2022
Azure Monitor Agent (AMA) collects monitoring data from the guest operating syst
Here's a short **introduction to Azure Monitor video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
-## Can I deploy Azure Monitor Agent?
+## Consolidating legacy agents
-Deploy Azure Monitor Agent on all new virtual machines to collect data for [supported services and features](#supported-services-and-features).
+Deploy Azure Monitor Agent on all new virtual machines, scale sets, and on-premises servers to collect data for [supported services and features](#supported-services-and-features).
-If you have virtual machines already deployed with legacy agents, we recommend you [check whether Azure Monitor Agent supports your monitoring needs](#compare-to-legacy-agents) and [migrate to Azure Monitor Agent](./azure-monitor-agent-migration.md) as soon as possible.
+If you have machines already deployed with legacy Log Analytics agents, we recommend you [migrate to Azure Monitor Agent](./azure-monitor-agent-migration.md) as soon as possible. The legacy Log Analytics agent will not be supported after August 2024.
Azure Monitor Agent replaces the Azure Monitor legacy monitoring agents: -- [Log Analytics Agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports monitoring solutions. -- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).-- [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage.
+- [Log Analytics Agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports monitoring solutions. This is fully consolidated into Azure Monitor agent.
+- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only). Only basic Telegraf plugins are supported today in Azure Monitor agent.
+- [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage. This is not consolidated yet.
## Install the agent and configure data collection
The following tables list the operating systems that Azure Monitor Agent and the
| Debian 9 | X | X | X | | Debian 8 | | X | | | Debian 7 | | | X |
+| OpenSUSE 15 | X | | |
| OpenSUSE 13.1+ | | | X | | Oracle Linux 8 | X | X | | | Oracle Linux 7 | X | X | X |
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
description: Define network settings and enable network isolation for Azure Moni
Previously updated : 06/06/2022 Last updated : 9/16/2022
The Azure Monitor Agent extensions for Windows and Linux can communicate either
![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
+ > [!NOTE]
+ > Azure Monitor agent for Linux doesn't support system proxy via environment variables such as `http_proxy` and `https_proxy`.
+ 1. After determining the `Settings` and `ProtectedSettings` parameter values, *provide these other parameters* when you deploy Azure Monitor Agent, using PowerShell commands, as shown in the following examples: # [Windows VM](#tab/PowerShellWindows)
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
An action group is a **global** service, so there's no dependency on a specific
| Option | Behavior | | | -- |
- | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](/global-infrastructure/geographies/#overview).<br></br>Voice, SMS and email actions performed as the result of [service health alerts](/azure/service-health/alerts-activity-log-service-notifications-portal) are resilient to Azure live-site-incidents. |
- | Regional | The action group is stored within the selected region. The action group is [zone-redundant](/azure/availability-zones/az-region#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](/global-infrastructure/geographies/#overview). |
+ | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview).<br></br>Voice, SMS and email actions performed as the result of [service health alerts](/azure/service-health/alerts-activity-log-service-notifications-portal) are resilient to Azure live-site-incidents. |
+ | Regional | The action group is stored within the selected region. The action group is [zone-redundant](/azure/availability-zones/az-region#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview). |
The action group is saved in the subscription, region and resource group that you select.
azure-monitor Alerts Metric Multiple Time Series Single Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-multiple-time-series-single-rule.md
For this alert rule, two metric time series are being monitored:
An AND operator is used between the conditions. The alert rule fires an alert when *all* conditions are met. The fired alert resolves if at least one of the conditions is no longer met. > [!NOTE]
-> There are restrictions when you use dimensions in an alert rule with multiple conditions. For more information, see [Restrictions when using dimensions in a metric alert rule with multiple conditions](alerts-troubleshoot-metric.md#restrictions-when-using-dimensions-in-a-metric-alert-rule-with-multiple-conditions).
+> There are restrictions when you use dimensions in an alert rule with multiple conditions. For more information, see [Restrictions when using dimensions in a metric alert rule with multiple conditions](alerts-troubleshoot-metric.md#restrictions-when-you-use-dimensions-in-a-metric-alert-rule-with-multiple-conditions).
## Multiple dimensions (multi-dimension)
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
ms.reviwer: harelbr
# Supported resources for metric alerts in Azure Monitor
-Azure Monitor now supports a [new metric alert type](./alerts-overview.md) which has significant benefits over the older [classic metric alerts](./alerts-classic.overview.md). Metrics are available for [large list of Azure services](../essentials/metrics-supported.md). The newer alerts support a (growing) subset of the resource types. This article lists that subset.
+Azure Monitor now supports a [new metric alert type](./alerts-overview.md), which has significant benefits over the older [classic metric alerts](./alerts-classic.overview.md). Metrics are available for a [large list of Azure services](../essentials/metrics-supported.md). The newer alerts support a growing subset of the resource types. This article lists that subset.
-You can also use newer metric alerts on popular log data stored in a Log Analytics workspace extracted as metrics. For more information, view [Metric Alerts for Logs](./alerts-metric-logs.md).
+You can also use newer metric alerts on popular log data stored in a Log Analytics workspace extracted as metrics. For more information, see [Metric Alerts for Logs](./alerts-metric-logs.md).
-## Portal, PowerShell, CLI, REST support
-Currently, you can create newer metric alerts only in the Azure portal, [REST API](/rest/api/monitor/metricalerts/), or [Resource Manager Templates](./alerts-metric-create-templates.md). Support for configuring newer alerts using PowerShell and Azure CLI versions 2.0 and higher is coming soon.
+## Portal, PowerShell, CLI, and REST support
-## Metrics and Dimensions Supported
-Newer metric alerts support alerting for metrics that use dimensions. You can use dimensions to filter your metric to the right level. All supported metrics along with applicable dimensions can be explored and visualized from [Azure Monitor - Metrics Explorer](../essentials/metrics-charts.md).
+Currently, you can create newer metric alerts only in the Azure portal, the [REST API](/rest/api/monitor/metricalerts/), or [Azure Resource Manager templates](./alerts-metric-create-templates.md). Support for configuring newer alerts by using PowerShell and the Azure CLI versions 2.0 and higher is coming soon.
+
+## Metrics and dimensions supported
+
+Newer metric alerts support alerting for metrics that use dimensions. You can use dimensions to filter your metric to the proper level. All supported metrics along with applicable dimensions can be explored and visualized from [Azure Monitor - Metrics explorer](../essentials/metrics-charts.md).
Here's the full list of Azure Monitor metric sources supported by the newer alerts:
-|Resource type |Dimensions Supported |Multi-resource alerts| Metrics Available|
+|Resource type |Dimensions supported |Multi-resource alerts| Metrics available|
|||--|-| |Microsoft.Aadiam/azureADMetrics | Yes | No | Azure Active Directory (metrics in private preview) |
-|Microsoft.ApiManagement/service | Yes | No | [API Management](../essentials/metrics-supported.md#microsoftapimanagementservice) |
-|Microsoft.App/containerApps | Yes | No | Container Apps |
-|Microsoft.AppConfiguration/configurationStores |Yes | No | [App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) |
+|Microsoft.ApiManagement/service | Yes | No | [Azure API Management](../essentials/metrics-supported.md#microsoftapimanagementservice) |
+|Microsoft.App/containerApps | Yes | No | Azure Container Apps |
+|Microsoft.AppConfiguration/configurationStores |Yes | No | [Azure App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) |
|Microsoft.AppPlatform/spring | Yes | No | [Azure Spring Cloud](../essentials/metrics-supported.md#microsoftappplatformspring) |
-|Microsoft.Automation/automationAccounts | Yes| No | [Automation Accounts](../essentials/metrics-supported.md#microsoftautomationautomationaccounts) |
+|Microsoft.Automation/automationAccounts | Yes| No | [Azure Automation accounts](../essentials/metrics-supported.md#microsoftautomationautomationaccounts) |
|Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](../essentials/metrics-supported.md#microsoftavsprivateclouds) |
-|Microsoft.Batch/batchAccounts | Yes | No | [Batch Accounts](../essentials/metrics-supported.md#microsoftbatchbatchaccounts) |
-|Microsoft.Bing/accounts | Yes | No | [Bing Accounts](../essentials/metrics-supported.md#microsoftbingaccounts) |
-|Microsoft.BotService/botServices | Yes | No | [Bot Services](../essentials/metrics-supported.md#microsoftbotservicebotservices) |
+|Microsoft.Batch/batchAccounts | Yes | No | [Azure Batch accounts](../essentials/metrics-supported.md#microsoftbatchbatchaccounts) |
+|Microsoft.Bing/accounts | Yes | No | [Bing accounts](../essentials/metrics-supported.md#microsoftbingaccounts) |
+|Microsoft.BotService/botServices | Yes | No | [Azure Bot Service](../essentials/metrics-supported.md#microsoftbotservicebotservices) |
|Microsoft.Cache/redis | Yes | Yes | [Azure Cache for Redis](../essentials/metrics-supported.md#microsoftcacheredis) | |Microsoft.Cache/redisEnterprise | Yes | No | [Azure Cache for Redis Enterprise](../essentials/metrics-supported.md#microsoftcacheredisenterprise) |
-|microsoft.Cdn/profiles | Yes | No | [CDN Profiles](../essentials/metrics-supported.md#microsoftcdnprofiles) |
-|Microsoft.ClassicCompute/domainNames/slots/roles | No | No | [Classic Cloud Services](../essentials/metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) |
-|Microsoft.ClassicCompute/virtualMachines | No | No | [Classic Virtual Machines](../essentials/metrics-supported.md#microsoftclassiccomputevirtualmachines) |
-|Microsoft.ClassicStorage/storageAccounts | Yes | No | [Storage Accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccounts) |
-|Microsoft.ClassicStorage/storageAccounts/blobServices | Yes | No | [Storage Accounts (classic) - Blobs](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsblobservices) |
-|Microsoft.ClassicStorage/storageAccounts/fileServices | Yes | No | [Storage Accounts (classic) - Files](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) |
-|Microsoft.ClassicStorage/storageAccounts/queueServices | Yes | No | [Storage Accounts (classic) - Queues](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) |
-|Microsoft.ClassicStorage/storageAccounts/tableServices | Yes | No | [Storage Accounts (classic) - Tables](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) |
-|Microsoft.CognitiveServices/accounts | Yes | No | [Cognitive Services](../essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) |
-|Microsoft.Compute/cloudServices | Yes | No | [Cloud Services](../essentials/metrics-supported.md#microsoftcomputecloudservices) |
-|Microsoft.Compute/cloudServices/roles | Yes | No | [Cloud Service Roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) |
-|Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) |
-|Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Virtual Machine Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) |
+|microsoft.Cdn/profiles | Yes | No | [Azure Content Delivery Network profiles](../essentials/metrics-supported.md#microsoftcdnprofiles) |
+|Microsoft.ClassicCompute/domainNames/slots/roles | No | No | [Azure Cloud Services (classic)](../essentials/metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) |
+|Microsoft.ClassicCompute/virtualMachines | No | No | [Azure Virtual Machines (classic)](../essentials/metrics-supported.md#microsoftclassiccomputevirtualmachines) |
+|Microsoft.ClassicStorage/storageAccounts | Yes | No | [Azure Storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccounts) |
+|Microsoft.ClassicStorage/storageAccounts/blobServices | Yes | No | [Azure Blob Storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsblobservices) |
+|Microsoft.ClassicStorage/storageAccounts/fileServices | Yes | No | [Azure Files storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) |
+|Microsoft.ClassicStorage/storageAccounts/queueServices | Yes | No | [Azure Queue Storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) |
+|Microsoft.ClassicStorage/storageAccounts/tableServices | Yes | No | [Azure Table Storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) |
+|Microsoft.CognitiveServices/accounts | Yes | No | [Azure Cognitive Services](../essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) |
+|Microsoft.Compute/cloudServices | Yes | No | [Azure Cloud Services](../essentials/metrics-supported.md#microsoftcomputecloudservices) |
+|Microsoft.Compute/cloudServices/roles | Yes | No | [Azure Cloud Services roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) |
+|Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Azure Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) |
+|Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Azure Virtual Machine Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) |
|Microsoft.ConnectedVehicle/platformAccounts | Yes | No |[Connected Vehicle Platform Accounts](../essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) |
-|Microsoft.ContainerInstance/containerGroups | Yes| No | [Container Groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) |
-|Microsoft.ContainerRegistry/registries | No | No | [Container Registries](../essentials/metrics-supported.md#microsoftcontainerregistryregistries) |
-|Microsoft.ContainerService/managedClusters | Yes | No | [Managed Clusters](../essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) |
-|Microsoft.DataBoxEdge/dataBoxEdgeDevices | Yes | Yes | [Data Box](../essentials/metrics-supported.md#microsoftdataboxedgedataboxedgedevices) |
-|Microsoft.DataFactory/datafactories| Yes| No | [Data Factories V1](../essentials/metrics-supported.md#microsoftdatafactorydatafactories) |
-|Microsoft.DataFactory/factories |Yes | No | [Data Factories V2](../essentials/metrics-supported.md#microsoftdatafactoryfactories) |
-|Microsoft.DataProtection/backupVaults | Yes | Yes | Backup Vaults |
-|Microsoft.DataShare/accounts | Yes | No | [Data Shares](../essentials/metrics-supported.md#microsoftdatashareaccounts) |
-|Microsoft.DBforMariaDB/servers | No | No | [DB for MariaDB](../essentials/metrics-supported.md#microsoftdbformariadbservers) |
-|Microsoft.DBforMySQL/servers | No | No |[DB for MySQL](../essentials/metrics-supported.md#microsoftdbformysqlservers)|
-|Microsoft.DBforPostgreSQL/flexibleServers | Yes | Yes | [DB for PostgreSQL (flexible servers)](../essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers)|
-|Microsoft.DBforPostgreSQL/serverGroupsv2 | Yes | No | DB for PostgreSQL (hyperscale) |
-|Microsoft.DBforPostgreSQL/servers | No | No | [DB for PostgreSQL](../essentials/metrics-supported.md#microsoftdbforpostgresqlservers)|
-|Microsoft.DBforPostgreSQL/serversv2 | No | No | [DB for PostgreSQL V2](../essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2)|
-|Microsoft.Devices/IotHubs | Yes | No |[IoT Hub](../essentials/metrics-supported.md#microsoftdevicesiothubs) |
-|Microsoft.Devices/provisioningServices| Yes | No | [Device Provisioning Services](../essentials/metrics-supported.md#microsoftdevicesprovisioningservices) |
-|Microsoft.DigitalTwins/digitalTwinsInstances | Yes | No | [Digital Twins](../essentials/metrics-supported.md#microsoftdigitaltwinsdigitaltwinsinstances) |
-|Microsoft.DocumentDB/databaseAccounts | Yes | No | [Cosmos DB](../essentials/metrics-supported.md#microsoftdocumentdbdatabaseaccounts) |
-|Microsoft.EventGrid/domains | Yes | No | [Event Grid Domains](../essentials/metrics-supported.md#microsofteventgriddomains) |
-|Microsoft.EventGrid/systemTopics | Yes | No | [Event Grid System Topics](../essentials/metrics-supported.md#microsofteventgridsystemtopics) |
-|Microsoft.EventGrid/topics |Yes | No | [Event Grid Topics](../essentials/metrics-supported.md#microsofteventgridtopics) |
-|Microsoft.EventHub/clusters |Yes| No | [Event Hubs Clusters](../essentials/metrics-supported.md#microsofteventhubclusters) |
-|Microsoft.EventHub/namespaces |Yes| No | [Event Hubs](../essentials/metrics-supported.md#microsofteventhubnamespaces) |
-|Microsoft.HDInsight/clusters | Yes | No | [HDInsight Clusters](../essentials/metrics-supported.md#microsofthdinsightclusters) |
+|Microsoft.ContainerInstance/containerGroups | Yes| No | [Container groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) |
+|Microsoft.ContainerRegistry/registries | No | No | [Azure Container Registry](../essentials/metrics-supported.md#microsoftcontainerregistryregistries) |
+|Microsoft.ContainerService/managedClusters | Yes | No | [Managed clusters](../essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) |
+|Microsoft.DataBoxEdge/dataBoxEdgeDevices | Yes | Yes | [Azure Data Box](../essentials/metrics-supported.md#microsoftdataboxedgedataboxedgedevices) |
+|Microsoft.DataFactory/datafactories| Yes| No | [Azure Data Factory V1](../essentials/metrics-supported.md#microsoftdatafactorydatafactories) |
+|Microsoft.DataFactory/factories |Yes | No | [Azure Data Factory V2](../essentials/metrics-supported.md#microsoftdatafactoryfactories) |
+|Microsoft.DataProtection/backupVaults | Yes | Yes | Azure Backup vaults |
+|Microsoft.DataShare/accounts | Yes | No | [Azure Data Share](../essentials/metrics-supported.md#microsoftdatashareaccounts) |
+|Microsoft.DBforMariaDB/servers | No | No | [Azure Database for MariaDB](../essentials/metrics-supported.md#microsoftdbformariadbservers) |
+|Microsoft.DBforMySQL/servers | No | No |[Azure Database for MySQL](../essentials/metrics-supported.md#microsoftdbformysqlservers)|
+|Microsoft.DBforPostgreSQL/flexibleServers | Yes | Yes | [Azure Database for PostgreSQL (flexible servers)](../essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers)|
+|Microsoft.DBforPostgreSQL/serverGroupsv2 | Yes | No | Azure Database for PostgreSQL (hyperscale) |
+|Microsoft.DBforPostgreSQL/servers | No | No | [Azure Database for PostgreSQL](../essentials/metrics-supported.md#microsoftdbforpostgresqlservers)|
+|Microsoft.DBforPostgreSQL/serversv2 | No | No | [Azure Database for PostgreSQL V2](../essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2)|
+|Microsoft.Devices/IotHubs | Yes | No |[Azure IoT Hub](../essentials/metrics-supported.md#microsoftdevicesiothubs) |
+|Microsoft.Devices/provisioningServices| Yes | No | [Device Provisioning Service](../essentials/metrics-supported.md#microsoftdevicesprovisioningservices) |
+|Microsoft.DigitalTwins/digitalTwinsInstances | Yes | No | [Azure Digital Twins](../essentials/metrics-supported.md#microsoftdigitaltwinsdigitaltwinsinstances) |
+|Microsoft.DocumentDB/databaseAccounts | Yes | No | [Azure Cosmos DB](../essentials/metrics-supported.md#microsoftdocumentdbdatabaseaccounts) |
+|Microsoft.EventGrid/domains | Yes | No | [Azure Event Grid domains](../essentials/metrics-supported.md#microsofteventgriddomains) |
+|Microsoft.EventGrid/systemTopics | Yes | No | [Azure Event Grid system topics](../essentials/metrics-supported.md#microsofteventgridsystemtopics) |
+|Microsoft.EventGrid/topics |Yes | No | [Azure Event Grid topics](../essentials/metrics-supported.md#microsofteventgridtopics) |
+|Microsoft.EventHub/clusters |Yes| No | [Azure Event Hubs clusters](../essentials/metrics-supported.md#microsofteventhubclusters) |
+|Microsoft.EventHub/namespaces |Yes| No | [Azure Event Hubs](../essentials/metrics-supported.md#microsofteventhubnamespaces) |
+|Microsoft.HDInsight/clusters | Yes | No | [Azure HDInsight clusters](../essentials/metrics-supported.md#microsofthdinsightclusters) |
|Microsoft.Insights/Components | Yes | No | [Application Insights](../essentials/metrics-supported.md#microsoftinsightscomponents) |
-|Microsoft.KeyVault/vaults | Yes |Yes |[Vaults](../essentials/metrics-supported.md#microsoftkeyvaultvaults)|
-|Microsoft.Kusto/Clusters | Yes |No |[Data Explorer Clusters](../essentials/metrics-supported.md#microsoftkustoclusters)|
-|Microsoft.Logic/integrationServiceEnvironments | Yes | No |[Integration Service Environments](../essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) |
-|Microsoft.Logic/workflows | No | No |[Logic Apps](../essentials/metrics-supported.md#microsoftlogicworkflows) |
-|Microsoft.MachineLearningServices/workspaces | Yes | No | [Machine Learning](../essentials/metrics-supported.md#microsoftmachinelearningservicesworkspaces) |
-|Microsoft.MachineLearningServices/workspaces/onlineEndpoints | Yes | No | Machine Learning - Endpoints |
-|Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments | Yes | No | Machine Learning - Endpoint Deployments |
-|Microsoft.Maps/accounts | Yes | No | [Maps Accounts](../essentials/metrics-supported.md#microsoftmapsaccounts) |
-|Microsoft.Medi#microsoftmediamediaservices) |
-|Microsoft.Medi#microsoftmediamediaservicesstreamingendpoints) |
-|Microsoft.NetApp/netAppAccounts/capacityPools | Yes | Yes | [Azure NetApp Capacity Pools](../essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) |
-|Microsoft.NetApp/netAppAccounts/capacityPools/volumes | Yes | Yes | [Azure NetApp Volumes](../essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) |
-|Microsoft.Network/applicationGateways | Yes | No | [Application Gateways](../essentials/metrics-supported.md#microsoftnetworkapplicationgateways) |
-|Microsoft.Network/azurefirewalls | Yes | No | [Firewalls](../essentials/metrics-supported.md#microsoftnetworkazurefirewalls) |
-|Microsoft.Network/dnsZones | No | No | [DNS Zones](../essentials/metrics-supported.md#microsoftnetworkdnszones) |
-|Microsoft.Network/expressRouteCircuits | Yes | No |[ExpressRoute Circuits](../essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) |
-|Microsoft.Network/expressRouteGateways | Yes | No |[ExpressRoute Gateways](../essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) |
-|Microsoft.Network/expressRoutePorts | Yes | No |[ExpressRoute Direct](../essentials/metrics-supported.md#microsoftnetworkexpressrouteports) |
-|Microsoft.Network/loadBalancers (only for Standard SKUs)| Yes| No | [Load Balancers](../essentials/metrics-supported.md#microsoftnetworkloadbalancers) |
-|Microsoft.Network/natGateways| No | No | [NAT Gateways](../essentials/metrics-supported.md#microsoftnetworknatgateways) |
-|Microsoft.Network/privateEndpoints| No | No | [Private Endpoints](../essentials/metrics-supported.md#microsoftnetworkprivateendpoints) |
-|Microsoft.Network/privateLinkServices| No | No | [Private Link Services](../essentials/metrics-supported.md#microsoftnetworkprivatelinkservices) |
-|Microsoft.Network/publicipaddresses | No | No | [Public IP Addresses](../essentials/metrics-supported.md#microsoftnetworkpublicipaddresses)|
-|Microsoft.Network/trafficManagerProfiles | Yes | No | [Traffic Manager Profiles](../essentials/metrics-supported.md#microsoftnetworktrafficmanagerprofiles) |
+|Microsoft.KeyVault/vaults | Yes |Yes |[Azure Key Vault](../essentials/metrics-supported.md#microsoftkeyvaultvaults)|
+|Microsoft.Kusto/Clusters | Yes |No |[Data explorer clusters](../essentials/metrics-supported.md#microsoftkustoclusters)|
+|Microsoft.Logic/integrationServiceEnvironments | Yes | No |[Azure Integration Services environments](../essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) |
+|Microsoft.Logic/workflows | No | No |[Azure Logic Apps](../essentials/metrics-supported.md#microsoftlogicworkflows) |
+|Microsoft.MachineLearningServices/workspaces | Yes | No | [Azure Machine Learning](../essentials/metrics-supported.md#microsoftmachinelearningservicesworkspaces) |
+|Microsoft.MachineLearningServices/workspaces/onlineEndpoints | Yes | No | Azure Machine Learning endpoints |
+|Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments | Yes | No | Azure Machine Learning endpoint deployments |
+|Microsoft.Maps/accounts | Yes | No | [Azure Maps accounts](../essentials/metrics-supported.md#microsoftmapsaccounts) |
+|Microsoft.Medi#microsoftmediamediaservices) |
+|Microsoft.Medi#microsoftmediamediaservicesstreamingendpoints) |
+|Microsoft.NetApp/netAppAccounts/capacityPools | Yes | Yes | [Azure NetApp Files capacity pools](../essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) |
+|Microsoft.NetApp/netAppAccounts/capacityPools/volumes | Yes | Yes | [Azure NetApp Files volumes](../essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) |
+|Microsoft.Network/applicationGateways | Yes | No | [Azure Application Gateway](../essentials/metrics-supported.md#microsoftnetworkapplicationgateways) |
+|Microsoft.Network/azurefirewalls | Yes | No | [Azure Firewall](../essentials/metrics-supported.md#microsoftnetworkazurefirewalls) |
+|Microsoft.Network/dnsZones | No | No | [Azure DNS zones](../essentials/metrics-supported.md#microsoftnetworkdnszones) |
+|Microsoft.Network/expressRouteCircuits | Yes | No |[Azure ExpressRoute circuits](../essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) |
+|Microsoft.Network/expressRouteGateways | Yes | No |[Azure ExpressRoute gateways](../essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) |
+|Microsoft.Network/expressRoutePorts | Yes | No |[Azure ExpressRoute direct](../essentials/metrics-supported.md#microsoftnetworkexpressrouteports) |
+|Microsoft.Network/loadBalancers (only for Standard SKUs)| Yes| No | [Azure Load Balancer](../essentials/metrics-supported.md#microsoftnetworkloadbalancers) |
+|Microsoft.Network/natGateways| No | No | [NAT Gateway](../essentials/metrics-supported.md#microsoftnetworknatgateways) |
+|Microsoft.Network/privateEndpoints| No | No | [Private endpoints](../essentials/metrics-supported.md#microsoftnetworkprivateendpoints) |
+|Microsoft.Network/privateLinkServices| No | No | [Azure Private Link services](../essentials/metrics-supported.md#microsoftnetworkprivatelinkservices) |
+|Microsoft.Network/publicipaddresses | No | No | [Public IP addresses](../essentials/metrics-supported.md#microsoftnetworkpublicipaddresses)|
+|Microsoft.Network/trafficManagerProfiles | Yes | No | [Azure Traffic Manager profiles](../essentials/metrics-supported.md#microsoftnetworktrafficmanagerprofiles) |
|Microsoft.OperationalInsights/workspaces| Yes | No | [Log Analytics workspaces](../essentials/metrics-supported.md#microsoftoperationalinsightsworkspaces)|
-|Microsoft.Peering/peerings | Yes | No | [Peerings](../essentials/metrics-supported.md#microsoftpeeringpeerings) |
-|Microsoft.Peering/peeringServices | Yes | No | [Peering Services](../essentials/metrics-supported.md#microsoftpeeringpeeringservices) |
-|Microsoft.PowerBIDedicated/capacities | No | No | [Capacities](../essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) |
-|Microsoft.Purview/accounts | Yes | No | [Purview Accounts](../essentials/metrics-supported.md#microsoftpurviewaccounts) |
+|Microsoft.Peering/peerings | Yes | No | [Azure Peering Service](../essentials/metrics-supported.md#microsoftpeeringpeerings) |
+|Microsoft.Peering/peeringServices | Yes | No | [Azure Peering Service](../essentials/metrics-supported.md#microsoftpeeringpeeringservices) |
+|Microsoft.PowerBIDedicated/capacities | No | No | [Power BI dedicated capacities](../essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) |
+|Microsoft.Purview/accounts | Yes | No | [Azure Purview accounts](../essentials/metrics-supported.md#microsoftpurviewaccounts) |
|Microsoft.RecoveryServices/vaults | Yes | Yes | [Recovery Services vaults](../essentials/metrics-supported.md#microsoftrecoveryservicesvaults) | |Microsoft.Relay/namespaces | Yes | No | [Relays](../essentials/metrics-supported.md#microsoftrelaynamespaces) | |Microsoft.Search/searchServices | No | No | [Search services](../essentials/metrics-supported.md#microsoftsearchsearchservices) |
-|Microsoft.ServiceBus/namespaces | Yes | No | [Service Bus](../essentials/metrics-supported.md#microsoftservicebusnamespaces) |
-|Microsoft.SignalRService/WebPubSub | Yes | No | [Web PubSub Service](../essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) |
-|Microsoft.Sql/managedInstances | No | No | [SQL Managed Instances](../essentials/metrics-supported.md#microsoftsqlmanagedinstances) |
-|Microsoft.Sql/servers/databases | No | Yes | [SQL Databases](../essentials/metrics-supported.md#microsoftsqlserversdatabases) |
-|Microsoft.Sql/servers/elasticPools | No | Yes | [SQL Elastic Pools](../essentials/metrics-supported.md#microsoftsqlserverselasticpools) |
-|Microsoft.Storage/storageAccounts |Yes | No | [Storage Accounts](../essentials/metrics-supported.md#microsoftstoragestorageaccounts)|
-|Microsoft.Storage/storageAccounts/blobServices | Yes| No | [Storage Accounts - Blobs](../essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) |
-|Microsoft.Storage/storageAccounts/fileServices | Yes| No | [Storage Accounts - Files](../essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) |
-|Microsoft.Storage/storageAccounts/queueServices | Yes| No | [Storage Accounts - Queues](../essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices) |
-|Microsoft.Storage/storageAccounts/tableServices | Yes| No | [Storage Accounts - Tables](../essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices) |
-|Microsoft.StorageCache/caches | Yes | No | [HPC Caches](../essentials/metrics-supported.md#microsoftstoragecachecaches) |
-|Microsoft.StorageSync/storageSyncServices | Yes | No | [Storage Sync Services](../essentials/metrics-supported.md#microsoftstoragesyncstoragesyncservices) |
-|Microsoft.StreamAnalytics/streamingjobs | Yes | No | [Stream Analytics](../essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs) |
-|Microsoft.Synapse/workspaces | Yes | No | [Synapse Analytics](../essentials/metrics-supported.md#microsoftsynapseworkspaces) |
-|Microsoft.Synapse/workspaces/bigDataPools | Yes | No | [Synapse Analytics Apache Spark Pools](../essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) |
-|Microsoft.Synapse/workspaces/sqlPools | Yes | No | [Synapse Analytics SQL Pools](../essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) |
-|Microsoft.VMWareCloudSimple/virtualMachines | Yes | No | [CloudSimple Virtual Machines](../essentials/metrics-supported.md#microsoftvmwarecloudsimplevirtualmachines) |
-|Microsoft.Web/containerApps | Yes | No | Container Apps |
-|Microsoft.Web/hostingEnvironments/multiRolePools | Yes | No | [App Service Environment Multi-Role Pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools)|
-|Microsoft.Web/hostingEnvironments/workerPools | Yes | No | [App Service Environment Worker Pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsworkerpools)|
-|Microsoft.Web/serverfarms | Yes | No | [App Service Plans](../essentials/metrics-supported.md#microsoftwebserverfarms)|
-|Microsoft.Web/sites | Yes | No | [App Services and Functions](../essentials/metrics-supported.md#microsoftwebsites)|
-|Microsoft.Web/sites/slots | Yes | No | [App Service slots](../essentials/metrics-supported.md#microsoftwebsitesslots)|
-
-<sup>1</sup> Not supported for virtual machine network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, Outbound Flows Maximum Creation Rate) and custom metrics.
+|Microsoft.ServiceBus/namespaces | Yes | No | [Azure Service Bus](../essentials/metrics-supported.md#microsoftservicebusnamespaces) |
+|Microsoft.SignalRService/WebPubSub | Yes | No | [Azure Web PubSub service](../essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) |
+|Microsoft.Sql/managedInstances | No | No | [Azure SQL Managed Instance](../essentials/metrics-supported.md#microsoftsqlmanagedinstances) |
+|Microsoft.Sql/servers/databases | No | Yes | [Azure SQL Database](../essentials/metrics-supported.md#microsoftsqlserversdatabases) |
+|Microsoft.Sql/servers/elasticPools | No | Yes | [Azure SQL Database elastic pools](../essentials/metrics-supported.md#microsoftsqlserverselasticpools) |
+|Microsoft.Storage/storageAccounts |Yes | No | [Azure Storage accounts](../essentials/metrics-supported.md#microsoftstoragestorageaccounts)|
+|Microsoft.Storage/storageAccounts/blobServices | Yes| No | [Azure Blob Storage accounts](../essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) |
+|Microsoft.Storage/storageAccounts/fileServices | Yes| No | [Azure Files storage accounts](../essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) |
+|Microsoft.Storage/storageAccounts/queueServices | Yes| No | [Azure Queue Storage accounts](../essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices) |
+|Microsoft.Storage/storageAccounts/tableServices | Yes| No | [Azure Table Storage accounts](../essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices) |
+|Microsoft.StorageCache/caches | Yes | No | [Azure HPC Cache](../essentials/metrics-supported.md#microsoftstoragecachecaches) |
+|Microsoft.StorageSync/storageSyncServices | Yes | No | [Storage sync services](../essentials/metrics-supported.md#microsoftstoragesyncstoragesyncservices) |
+|Microsoft.StreamAnalytics/streamingjobs | Yes | No | [Azure Stream Analytics](../essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs) |
+|Microsoft.Synapse/workspaces | Yes | No | [Azure Synapse Analytics](../essentials/metrics-supported.md#microsoftsynapseworkspaces) |
+|Microsoft.Synapse/workspaces/bigDataPools | Yes | No | [Azure Synapse Analytics Apache Spark pools](../essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) |
+|Microsoft.Synapse/workspaces/sqlPools | Yes | No | [Azure Synapse Analytics SQL pools](../essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) |
+|Microsoft.VMWareCloudSimple/virtualMachines | Yes | No | [CloudSimple virtual machines](../essentials/metrics-supported.md#microsoftvmwarecloudsimplevirtualmachines) |
+|Microsoft.Web/containerApps | Yes | No | Azure Container Apps |
+|Microsoft.Web/hostingEnvironments/multiRolePools | Yes | No | [Azure App Service environment multi-role pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools)|
+|Microsoft.Web/hostingEnvironments/workerPools | Yes | No | [Azure App Service environment worker pools](../essentials/metrics-supported.md#microsoftwebhostingenvironmentsworkerpools)|
+|Microsoft.Web/serverfarms | Yes | No | [Azure App Service plans](../essentials/metrics-supported.md#microsoftwebserverfarms)|
+|Microsoft.Web/sites | Yes | No | [Azure App Service and Azure Functions](../essentials/metrics-supported.md#microsoftwebsites)|
+|Microsoft.Web/sites/slots | Yes | No | [Azure App Service slots](../essentials/metrics-supported.md#microsoftwebsitesslots)|
+
+<sup>1</sup> Not supported for virtual machine network metrics such as Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, and Outbound Flows Maximum Creation Rate. Also not supported for custom metrics.
## Payload schema > [!NOTE]
-> You can also use the [common alert schema](./alerts-common-schema.md), which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor, for your webhook integrations. [Learn about the common alert schema definitions.](./alerts-common-schema-definitions.md)
-
+> You can also use the [common alert schema](./alerts-common-schema.md), which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor, for your webhook integrations. [Learn about the common alert schema definitions](./alerts-common-schema-definitions.md).
The POST operation contains the following JSON payload and schema for all near newer metric alerts when an appropriately configured [action group](./action-groups.md) is used:
The POST operation contains the following JSON payload and schema for all near n
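To make the webhook integration mentioned in the note above concrete, here's a small TypeScript sketch of a receiver that accepts common alert schema POSTs from an action group. It's illustrative only: the Express framework, the `/alerts` route, the port, and the narrow `CommonAlert` interface are assumptions made for this example, and the real schema carries many more fields than the few read here.

```typescript
import express from "express";

// Narrow view of the common alert schema: only the fields this sketch reads.
interface CommonAlert {
    schemaId: string;
    data: {
        essentials: {
            alertRule: string;
            severity: string;
            monitorCondition: string; // "Fired" or "Resolved"
            firedDateTime: string;
        };
    };
}

const app = express();
app.use(express.json());

// An action group webhook action POSTs the alert payload to this endpoint.
app.post("/alerts", (req, res) => {
    const alert = req.body as CommonAlert;
    if (alert.schemaId !== "azureMonitorCommonAlertSchema") {
        // Payloads that don't opt in to the common schema use service-specific shapes.
        res.status(400).send("Unexpected schema");
        return;
    }
    const { alertRule, severity, monitorCondition, firedDateTime } = alert.data.essentials;
    console.log(`${monitorCondition}: ${alertRule} (${severity}) at ${firedDateTime}`);
    res.sendStatus(200);
});

app.listen(3000);
```

An action group webhook action pointed at this endpoint would log one line per fired or resolved alert.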
## Next steps
-* Learn more about the new [Alerts experience](./alerts-overview.md).
+* Learn more about the new [alerts experience](./alerts-overview.md).
* Learn about [log alerts in Azure](./alerts-unified-log.md). * Learn about [alerts in Azure](./alerts-overview.md).
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
Title: Frequently asked questions about Azure metric alerts
+ Title: Frequently asked questions about Azure Monitor metric alerts
description: Common issues with Azure Monitor metric alerts and possible solutions.
Last updated 8/31/2022 ms:reviwer: harelbr
-# Frequently asked questions about Azure Monitor metric alerts
+# Frequently asked questions about Azure Monitor metric alerts
This article discusses common questions about Azure Monitor [metric alerts](alerts-metric-overview.md) and how to troubleshoot them. Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them. For more information on alerting, see [Overview of alerts in Microsoft Azure](./alerts-overview.md).
-## Metric alert should have fired but didn't
+## Metric alert should have fired but didn't
-If you believe a metric alert should have fired but it didn't fire and isn't found in the Azure portal, try the following steps:
+If you believe a metric alert should have fired but it didn't fire and it isn't found in the Azure portal, try the following steps:
-1. **Configuration** - Review the metric alert rule configuration to make sure it's properly configured:
- - Check that the **Aggregation type** and **Aggregation granularity (period)** are configured as expected. **Aggregation type** determines how metric values are aggregated (learn more [here](../essentials/metrics-aggregation-explained.md#aggregation-types)), and **Aggregation granularity (period)** controls how far back the evaluation aggregates the metric values each time the alert rule runs.
- - Check that the **Threshold value** or **Sensitivity** are configured as expected.
- - For an alert rule that uses Dynamic Thresholds, check if advanced settings are configured, as **Number of violations** may filter alerts and **Ignore data before** can impact how the thresholds are calculated.
+1. **Configuration:** Review the metric alert rule configuration to make sure it's properly configured:
+ - Check that **Aggregation type** and **Aggregation granularity (Period)** are configured as expected. **Aggregation type** determines how metric values are aggregated. To learn more, see [Azure Monitor Metrics aggregation and display explained](../essentials/metrics-aggregation-explained.md#aggregation-types). **Aggregation granularity (Period)** controls how far back the evaluation aggregates the metric values each time the alert rule runs.
+ - Check that **Threshold value** or **Sensitivity** are configured as expected.
+ - For an alert rule that uses Dynamic Thresholds, check if advanced settings are configured. **Number of violations** might filter alerts, and **Ignore data before** can affect how the thresholds are calculated.
- > [!NOTE]
- > Dynamic Thresholds require at least 3 days and 30 metric samples before becoming active.
+ > [!NOTE]
+ > Dynamic thresholds require at least 3 days and 30 metric samples before they become active.
-2. **Fired but no notification** - Review the [fired alerts list](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/alertsV2) to see if you can locate the fired alert. If you can see the alert in the list, but have an issue with some of its actions or notifications, see more information [here](./alerts-troubleshoot.md#action-or-notification-on-my-alert-did-not-work-as-expected).
+1. **Fired but no notification:** Review the [fired alerts list](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/alertsV2) to see if you can locate the fired alert. If you can see the alert in the list but have an issue with some of its actions or notifications, see [Troubleshooting problems in Azure Monitor alerts](./alerts-troubleshoot.md#action-or-notification-on-my-alert-did-not-work-as-expected).
-3. **Already active** - Check if there's already a fired alert on the metric time series you expected to get an alert for. Metric alerts are stateful, meaning that once an alert is fired on a specific metric time series, additional alerts on that time series will not be fired until the issue is no longer observed. This design choice reduces noise. The alert is resolved automatically when the alert condition is not met for three consecutive evaluations.
+1. **Already active:** Check if there's already a fired alert on the metric time series for which you expected to get an alert. Metric alerts are stateful, which means that once an alert is fired on a specific metric time series, more alerts on that time series won't be fired until the issue is no longer observed. This design choice reduces noise. The alert is resolved automatically when the alert condition isn't met for three consecutive evaluations. A toy sketch of this stateful behavior follows this list.
-4. **Dimensions used** - If you've selected some [dimension values for a metric](./alerts-metric-overview.md#using-dimensions), the alert rule monitors each individual metric time series (as defined by the combination of dimension values) for a threshold breach. To also monitor the aggregate metric time series (without any dimensions selected), configure an additional alert rule on the metric without selecting dimensions.
+1. **Dimensions used:** If you've selected some [dimension values for a metric](./alerts-metric-overview.md#using-dimensions), the alert rule monitors each individual metric time series (as defined by the combination of dimension values) for a threshold breach. To also monitor the aggregate metric time series, without any dimensions selected, configure another alert rule on the metric without selecting dimensions.
-5. **Aggregation and time granularity** - If you're visualizing the metric using [metrics charts](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/metrics), ensure that:
- * The selected **Aggregation** in the metric chart is the same as **Aggregation type** in your alert rule
- * The selected **Time granularity** is the same as the **Aggregation granularity (period)** in your alert rule (and not set to 'Automatic')
+1. **Aggregation and time granularity:** If you're visualizing the metric by using [metrics charts](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/metrics), ensure that:
+
+ * The selected **Aggregation** in the metric chart is the same as **Aggregation type** in your alert rule.
+ * The selected **Time granularity** is the same as **Aggregation granularity (Period)** in your alert rule, and isn't set to **Automatic**.
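The statefulness described in step 3 of this list (fire once, suppress repeats, auto-resolve after three consecutive healthy evaluations) can be pictured with a small toy model. This is an illustrative TypeScript sketch of the concept only, not Azure Monitor code; the function and type names are invented for the example.

```typescript
// Toy model of stateful metric alerts: notify only on the OK -> Fired transition,
// suppress repeat alerts, and auto-resolve after three consecutive healthy evaluations.
type State = "OK" | "Fired";

interface Evaluation {
    state: State;
    healthyStreak: number;
    notify: boolean;
}

function evaluate(state: State, healthyStreak: number, breached: boolean): Evaluation {
    if (breached) {
        // A breach resets the healthy streak; only the first breach notifies.
        return { state: "Fired", healthyStreak: 0, notify: state === "OK" };
    }
    const streak = healthyStreak + 1;
    if (state === "Fired" && streak >= 3) {
        return { state: "OK", healthyStreak: 0, notify: false }; // auto-resolved
    }
    return { state, healthyStreak: streak, notify: false };
}

// Example: two breaches produce a single notification, then three healthy
// evaluations auto-resolve the alert.
let current: Evaluation = { state: "OK", healthyStreak: 0, notify: false };
for (const breached of [true, true, false, false, false]) {
    current = evaluate(current.state, current.healthyStreak, breached);
    if (current.notify) {
        console.log("Alert fired");
    }
}
console.log(`Final state: ${current.state}`); // "OK"
```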
## Metric alert fired when it shouldn't have If you believe your metric alert shouldn't have fired but it did, the following steps might help resolve the issue.
-1. Review the [fired alerts list](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/alertsV2) to locate the fired alert, and click to view its details. Review the information provided under **Why did this alert fire?** to see the metric chart, **Metric Value**, and **Threshold value** at the time when the alert was triggered.
+1. Review the [fired alerts list](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/alertsV2) to locate the fired alert. Select the alert to view its details. Review the information provided under **Why did this alert fire?** to see the metric chart, **Metric value**, and **Threshold value** at the time when the alert was triggered.
- > [!NOTE]
- > If you're using a Dynamic Thresholds condition type and think that the thresholds used were not correct, please provide feedback using the frown icon. This feedback will impact the machine learning algorithmic research and help improve future detections.
+ > [!NOTE]
+ > If you're using a Dynamic Thresholds condition type and think that the thresholds used weren't correct, provide feedback by using the frown icon. This feedback affects the machine learning algorithmic research and will help improve future detections.
-2. If you've selected multiple dimension values for a metric, the alert will be triggered when **any** of the metric time series (as defined by the combination of dimension values) breaches the threshold. For more information about using dimensions in metric alerts, see [here](./alerts-metric-overview.md#using-dimensions).
+1. If you've selected multiple dimension values for a metric, the alert is triggered when *any* of the metric time series (as defined by the combination of dimension values) breaches the threshold. For more information about using dimensions in metric alerts, see [Using dimensions](./alerts-metric-overview.md#using-dimensions).
-3. Review the alert rule configuration to make sure it's properly configured:
- - Check that the **Aggregation type**, **Aggregation granularity (period)**, and **Threshold value** or **Sensitivity** are configured as expected
- - For an alert rule that uses Dynamic Thresholds, check if advanced settings are configured, as **Number of violations** may filter alerts and **Ignore data before** can impact how the thresholds are calculated
+1. Review the alert rule configuration to make sure it's properly configured:
+ - Check that **Aggregation type**, **Aggregation granularity (Period)**, and **Threshold value** or **Sensitivity** are configured as expected.
+ - For an alert rule that uses dynamic thresholds, check if advanced settings are configured, as **Number of violations** might filter alerts and **Ignore data before** can affect how the thresholds are calculated.
> [!NOTE]
- > Dynamic Thresholds require at least 3 days and 30 metric samples before becoming active.
+ > Dynamic thresholds require at least 3 days and 30 metric samples before becoming active.
+
+1. If you're visualizing the metric by using [Metrics chart](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/metrics), ensure that:
-4. If you're visualizing the metric using [Metrics chart](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/metrics), ensure that:
- - The selected **Aggregation** in the metric chart is the same as **Aggregation type** in your alert rule
- - The selected **Time granularity** is the same as the **Aggregation granularity (period)** in your alert rule (and not set to 'Automatic')
+ - The selected **Aggregation** in the metric chart is the same as the **Aggregation type** in your alert rule.
+ - The selected **Time granularity** is the same as the **Aggregation granularity (Period)** in your alert rule, and that it isn't set to **Automatic**.
-5. If the alert fired while there are already fired alerts that monitor the same criteria (that aren't resolved), check if the alert rule has been configured not to automatically resolve alerts. Such configuration causes the alert rule to become stateless, meaning that the alert rule does not auto-resolve fired alerts, and does not require a fired alert to be resolved before firing again on the same time-series.
- You can check if the alert rule is configured not to auto-resolve in one of the following ways:
- - By editing the alert rule in the Azure portal, and reviewing if the 'Automatically resolve alerts' checkbox is unchecked (available under the 'Alert rule details' section).
- - By reviewing the script used to deploy the alert rule, or by retrieving the alert rule definition, and checking if the *autoMitigate* property is set to **false**.
+1. If the alert fired while there are already fired alerts that monitor the same criteria that aren't resolved, check if the alert rule has been configured not to automatically resolve alerts. Such configuration causes the alert rule to become stateless, which means the alert rule doesn't auto-resolve fired alerts and doesn't require a fired alert to be resolved before firing again on the same time series.
+ To check if the alert rule is configured not to auto-resolve:
+ - Edit the alert rule in the Azure portal. See if the **Automatically resolve alerts** checkbox under the **Alert rule details** section is cleared.
+ - Review the script used to deploy the alert rule or retrieve the alert rule definition. Check if the `autoMitigate` property is set to `false`.
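
For reference, here's a minimal sketch of the fragment of an alert rule definition to look for when you retrieve the rule; all values other than `autoMitigate` are illustrative:

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "properties": {
    "enabled": true,
    "autoMitigate": false
  }
}
```

In this fragment, `autoMitigate` set to `false` indicates the rule is stateless and won't auto-resolve fired alerts.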
-## Can't find the metric to alert on - virtual machines guest metrics
+## Can't find the metric to alert on: Virtual machines guest metrics
-To alert on guest operating system metrics of virtual machines (for example: memory, disk space), ensure you've installed the required agent to collect this data to Azure Monitor Metrics:
-- [For Windows VMs](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md)
-- [For Linux VMs](../essentials/collect-custom-metrics-linux-telegraf.md)
+To alert on guest operating system metrics of virtual machines, such as memory and disk space, ensure you've installed the required agent to collect this data to Azure Monitor Metrics for:
-For more information about collecting data from the guest operating system of a virtual machine, see [here](../vm/monitor-vm-azure.md#guest-operating-system).
+- [Windows VMs](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md)
+- [Linux VMs](../essentials/collect-custom-metrics-linux-telegraf.md)
-> [!NOTE]
-> If you configured guest metrics to be sent to a Log Analytics workspace, the metrics appear under the Log Analytics workspace resource and will start showing data **only** after creating an alert rule that monitors them. To do so, follow the steps to [configure a metric alert for logs](./alerts-metric-logs.md#configuring-metric-alert-for-logs).
+For more information about collecting data from the guest operating system of a virtual machine, see the [guest operating system monitoring guidance](../vm/monitor-vm-azure.md#guest-operating-system).
+
+> [!NOTE]
+> If you configured guest metrics to be sent to a Log Analytics workspace, the metrics appear under the Log Analytics workspace resource and start showing data *only* after you create an alert rule that monitors them. To do so, follow the steps to [configure a metric alert for logs](./alerts-metric-logs.md#configuring-metric-alert-for-logs).
-> [!NOTE]
-> Monitoring a guest metric for multiple virtual machines with a single alert rule isn't supported by metric alerts currently. You can achieve this with a [log alert rule](./alerts-unified-log.md). To do so, make sure the guest metrics are collected to a Log Analytics workspace, and create a log alert rule on the workspace.
+Currently, monitoring a guest metric for multiple virtual machines with a single alert rule isn't supported by metric alerts. But you can use a [log alert rule](./alerts-unified-log.md). To do so, make sure the guest metrics are collected to a Log Analytics workspace and create a log alert rule on the workspace.
-## Can't find the metric to alert on
+## Can't find the metric to alert on
+
+If you want to alert on a specific metric but can't see it when you create an alert rule, check the following:
-If you're looking to alert on a specific metric but can't see it when creating an alert rule, check the following:
- If you can't see any metrics for the resource, [check if the resource type is supported for metric alerts](./alerts-metric-near-real-time.md).
-- If you can see some metrics for the resource, but can't find a specific metric, [check if that metric is available](../essentials/metrics-supported.md), and if so, see the metric description to check if it's only available in specific versions or editions of the resource.
-- If the metric isn't available for the resource, it might be available in the resource logs, and can be monitored using log alerts. See here for more information on how to [collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).
+- If you can see some metrics for the resource but can't find a specific metric, [check if that metric is available](../essentials/metrics-supported.md). If so, see the metric description to check if it's only available in specific versions or editions of the resource.
+- If the metric isn't available for the resource, it might be available in the resource logs and can be monitored by using log alerts. For more information, see how to [collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).
-## Can't find the metric dimension to alert on
+## Can't find the metric dimension to alert on
-If you're looking to alert on [specific dimension values of a metric](./alerts-metric-overview.md#using-dimensions), but cannot find these values, note the following:
+If you want to alert on [specific dimension values of a metric](./alerts-metric-overview.md#using-dimensions) but you can't find these values:
-1. It might take a few minutes for the dimension values to appear under the **Dimension values** list
-2. The displayed dimension values are based on metric data collected in the last day
-3. If the dimension value isn't yet emitted or isn't shown, you can use the 'Add custom value' option to add a custom dimension value
-4. If you'd like to alert on all possible values of a dimension (including future values), choose the 'Select all current and future values' option
-5. Custom metrics dimensions of Application Insights resources are turned off by default. To turn on the collection of dimensions for these custom metrics, see [here](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
+- It might take a few minutes for the dimension values to appear under the **Dimension values** list.
+- The displayed dimension values are based on metric data collected in the last day.
+- If the dimension value isn't yet emitted or isn't shown, you can use the **Add custom value** option to add a custom dimension value.
+- If you want to alert on all possible values of a dimension, including future values, choose the **Select all current and future values** option. In a Resource Manager template, this corresponds to a wildcard (`*`) dimension value, as sketched after this list.
+- Custom metrics dimensions of Application Insights resources are turned off by default. To turn on the collection of dimensions for these custom metrics, see [Log-based and pre-aggregated metrics in Application Insights](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
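
As referenced above for the **Select all current and future values** option, the following is a minimal sketch of the equivalent wildcard dimension entry in the `criteria` section of a metric alert Resource Manager template; the dimension name shown is illustrative:

```json
"dimensions": [
  {
    "name": "ApiName",
    "operator": "Include",
    "values": [ "*" ]
  }
]
```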
-## Metric alert rules still defined on a deleted resource
+## Metric alert rules still defined on a deleted resource
-When deleting an Azure resource, associated metric alert rules aren't deleted automatically. To delete alert rules associated with a resource that has been deleted:
+When you delete an Azure resource, associated metric alert rules aren't deleted automatically. To delete alert rules associated with a resource that's been deleted:
-1. Open the resource group in which the deleted resource was defined
-1. In the list displaying the resources, check the **Show hidden types** checkbox
-1. Filter the list by Type == **microsoft.insights/metricalerts**
-1. Select the relevant alert rules and select **Delete**
+1. Open the resource group in which the deleted resource was defined.
+1. In the list that displays the resources, select the **Show hidden types** checkbox.
+1. Filter the list by Type == **microsoft.insights/metricalerts**.
+1. Select the relevant alert rules and select **Delete**.
## Make metric alerts occur every time my condition is met
-Metric alerts are stateful by default, and therefore additional alerts are not fired if there's already a fired alert on a given time series. If you wish to make a specific metric alert rule stateless, and get alerted on every evaluation<sup>1</sup> in which the alert condition is met, follow one of these options:
-- If you're creating the alert rule programmatically (for example, via [Resource Manager](./alerts-metric-create-templates.md), [PowerShell](/powershell/module/az.monitor/), [REST](/rest/api/monitor/metricalerts/createorupdate), [CLI](/cli/azure/monitor/metrics/alert)), set the *autoMitigate* property to 'False'.
-- If you're creating the alert rule via the Azure portal, uncheck the 'Automatically resolve alerts' option (available under the 'Alert rule details' section).
+Metric alerts are stateful by default, so other alerts aren't fired if there's already a fired alert on a specific time series. To make a specific metric alert rule stateless and get alerted on every evaluation<sup>1</sup> in which the alert condition is met, use one of these options:
+
+- If you create the alert rule programmatically, for example, via [Azure Resource Manager](./alerts-metric-create-templates.md), [PowerShell](/powershell/module/az.monitor/), [REST](/rest/api/monitor/metricalerts/createorupdate), or the [Azure CLI](/cli/azure/monitor/metrics/alert), set the `autoMitigate` property to `False`.
+- If you create the alert rule via the Azure portal, clear the **Automatically resolve alerts** option under the **Alert rule details** section.
-<sup>1</sup> For stateless metric alert rules, an alert will trigger once every 10 minutes at a minimum, even if the frequency of evaluation is equal or less than 5 minutes and the condition is still being met.
+<sup>1</sup> For stateless metric alert rules, an alert triggers once every 10 minutes at a minimum, even if the frequency of evaluation is equal to or less than 5 minutes and the condition is still met.
-> [!NOTE]
-> Making a metric alert rule stateless prevents fired alerts from becoming resolved, so even after the condition isnΓÇÖt met anymore, the fired alerts will remain in a fired state until the 30 days retention period.
+> [!NOTE]
+> Making a metric alert rule stateless prevents fired alerts from becoming resolved. So, even after the condition isn't met anymore, the fired alerts remain in a fired state until the 30-day retention period.
## Define an alert rule on a custom metric that isn't emitted yet
-When creating a metric alert rule, the metric name is validated against the [Metric Definitions API](/rest/api/monitor/metricdefinitions/list) to make sure it exists. In some cases, you'd like to create an alert rule on a custom metric even before it's emitted. For example, when creating (using a Resource Manager template) an Application Insights resource that will emit a custom metric, along with an alert rule that monitors that metric.
+When you create a metric alert rule, the metric name is validated against the [Metric Definitions API](/rest/api/monitor/metricdefinitions/list) to make sure it exists. In some cases, you want to create an alert rule on a custom metric even before it's emitted. An example is when you use a Resource Manager template to create an Application Insights resource that will emit a custom metric, along with an alert rule that monitors that metric.
-To avoid having the deployment fail when trying to validate the custom metric's definitions, you can use the *skipMetricValidation* parameter in the criteria section of the alert rule, which will cause the metric validation to be skipped. See the example below for how to use this parameter in a Resource Manager template. For more information, see the [complete Resource Manager template samples for creating metric alert rules](./alerts-metric-create-templates.md).
+To avoid a deployment failure when you try to validate the custom metric's definitions, use the `skipMetricValidation` parameter in the `criteria` section of the alert rule. This parameter will cause the metric validation to be skipped. See the following example for how to use this parameter in a Resource Manager template. For more information, see the [complete Resource Manager template samples for creating metric alert rules](./alerts-metric-create-templates.md).
```json
"criteria": {
    ...
    ]
}
```
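
For reference, here's a minimal sketch of a complete criterion that uses `skipMetricValidation`; the criteria type, metric name, namespace, and threshold are illustrative placeholders rather than values taken from the sample above:

```json
"criteria": {
    "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
    "allOf": [
        {
            "name": "1st criterion",
            "metricName": "MyCustomMetric",
            "metricNamespace": "MyCustomMetricNamespace",
            "dimensions": [],
            "operator": "GreaterThan",
            "threshold": 10,
            "timeAggregation": "Average",
            "skipMetricValidation": true
        }
    ]
}
```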
-> [!NOTE]
-> Using the *skipMetricValidation* parameter might also be required when defining an alert rule on an existing custom metric that hasn't been emitted in several days.
+
+> [!NOTE]
+> Using the `skipMetricValidation` parameter might also be required when you define an alert rule on an existing custom metric that hasn't been emitted in several days.
## Process data for a metric alert rule in a specific region

You can make sure that an alert rule is processed in a specified region if your metric alert rule is defined with a scope of that region and if it monitors a custom metric.
-These are the currently support regions for regional processing of metric alert rules:
+The following regions are currently supported for regional processing of metric alert rules:
- North Europe
- West Europe
- Sweden Central
-- Germany West Central
+- Germany West Central
+
+To enable regional data processing in one of these regions, select the specified region in the **Details** section of the [Create an alert rule wizard](./alerts-create-new-alert-rule.md).
-To enable regional data processing in one of these regions, select the specified region in the **Details** section of the [create a new alert rule wizard](./alerts-create-new-alert-rule.md).
-
> [!NOTE]
-> We are continually adding more regions for regional data processing.
+> We're continually adding more regions for regional data processing.
+## Export the Resource Manager template of a metric alert rule via the Azure portal
-## Export the Azure Resource Manager template of a metric alert rule via the Azure portal
+You can export the Resource Manager template of a metric alert rule to help you understand its JSON syntax and properties. Then you can use the template to automate future deployments.
-Exporting the Resource Manager template of a metric alert rule helps you understand its JSON syntax and properties, and can be used to automate future deployments.
1. In the Azure portal, open the alert rule to view its details.
-2. Click **Properties**.
-3. Under **Automation**, select **Export template**.
+1. Select **Properties**.
+1. Under **Automation**, select **Export template**.
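
The exported template includes a `Microsoft.Insights/metricAlerts` resource. As a rough sketch of the shape to expect (the property values here are illustrative, and the `criteria` content is omitted; your exported file contains the actual values of the rule):

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "my-alert-rule",
  "location": "global",
  "properties": {
    "severity": 3,
    "enabled": true,
    "scopes": [ "<target resource ID>" ],
    "evaluationFrequency": "PT1M",
    "windowSize": "PT5M",
    "criteria": {},
    "actions": []
  }
}
```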
## Metric alert rules quota too small

The allowed number of metric alert rules per subscription is subject to [quota limits](../service-limits.md).
-If you've reached the quota limit, the following steps may help resolve the issue:
-1. Try deleting or disabling metric alert rules that aren't used anymore.
+If you've reached the quota limit, the following steps might help resolve the issue:
+
+1. Try deleting or disabling metric alert rules that aren't used anymore.
-2. Switch to using metric alert rules that monitor multiple resources. With this capability, a single alert rule can monitor multiple resources using only one alert rule counted against the quota. For more information about this capability and the supported resource types, see [here](./alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
+1. Switch to using metric alert rules that monitor multiple resources. With this capability, a single alert rule can monitor multiple resources by using only one alert rule counted against the quota. For more information about this capability and the supported resource types, see [Monitoring at scale using metric alerts in Azure Monitor](./alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
-3. If you need the quota limit to be increased, open a support request, and provide the following information:
+1. If you need the quota limit to be increased, open a support request and provide the following information:
- - Subscription Id(s) for which the quota limit needs to be increased
- - Resource type for the quota increase: **Metric alerts** or **Metric alerts (Classic)**
- - Requested quota limit
+ - Subscription IDs for which the quota limit needs to be increased.
+ - Resource type for the quota increase. Select **Metric alerts** or **Metric alerts (Classic)**.
+ - Requested quota limit.
-## Check total number of metric alert rules
+## Check the total number of metric alert rules
-To check the current usage of metric alert rules, follow the steps below.
+To check the current usage of metric alert rules, follow these steps.
### From the Azure portal
-1. Open the **Alerts** screen, and click **Manage alert rules**
-2. Filter to the relevant subscription, by using the **Subscription** dropdown control
-3. Make sure NOT to filter to a specific resource group, resource type, or resource
-4. In the **Signal type** dropdown control, select **Metrics**
-5. Verify that the **Status** dropdown control is set to **Enabled**
-6. The total number of metric alert rules are displayed above the alert rules list
+1. Open the **Alerts** screen and select **Manage alert rules**.
+1. Filter to the relevant subscription by using the **Subscription** dropdown box.
+1. Make sure *not* to filter to a specific resource group, resource type, or resource.
+1. In the **Signal type** dropdown box, select **Metrics**.
+1. Verify that the **Status** dropdown box is set to **Enabled**.
+1. The total number of metric alert rules is displayed above the alert rules list.
### From API

-- PowerShell - [Get-AzMetricAlertRuleV2](/powershell/module/az.monitor/get-azmetricalertrulev2)
-- REST API - [List by subscription](/rest/api/monitor/metricalerts/listbysubscription)
-- Azure CLI - [az monitor metrics alert list](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-list)
+- **PowerShell**: [Get-AzMetricAlertRuleV2](/powershell/module/az.monitor/get-azmetricalertrulev2)
+- **REST API**: [List by subscription](/rest/api/monitor/metricalerts/listbysubscription)
+- **Azure CLI**: [az monitor metrics alert list](/cli/azure/monitor/metrics/alert#az-monitor-metrics-alert-list)
-## Managing alert rules using Resource Manager templates, REST API, Azure PowerShell, or the Azure CLI
+## Manage alert rules by using Resource Manager templates, REST API, PowerShell, or the Azure CLI
-If you're running into issues creating, updating, retrieving, or deleting metric alerts using Resource Manager templates, REST API, PowerShell, or the Azure CLI, the following steps may help resolve the issue.
+You might run into an issue when you create, update, retrieve, or delete metric alerts by using Resource Manager templates, REST API, PowerShell, or the Azure CLI. The following steps might help resolve the issue.
### Resource Manager templates

-- Review [common Azure deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md) list and troubleshoot accordingly
-- Refer to the [metric alerts Azure Resource Manager template examples](./alerts-metric-create-templates.md) to ensure you're passing the all the parameters correctly
+- Review the [common Azure deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md) list and troubleshoot accordingly.
+- Refer to the [metric alerts Resource Manager template examples](./alerts-metric-create-templates.md) to ensure you're passing all the parameters correctly.
### REST API
-Review the [REST API guide](/rest/api/monitor/metricalerts/) to verify you're passing the all the parameters correctly
+Review the [REST API guide](/rest/api/monitor/metricalerts/) to verify you're passing all the parameters correctly.
### PowerShell

Make sure that you're using the right PowerShell cmdlets for metric alerts:

-- PowerShell cmdlets for metric alerts are available in the [Az.Monitor module](/powershell/module/az.monitor/)
-- Make sure to use the cmdlets ending with 'V2' for new (non-classic) metric alerts (for example, [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2))
+- PowerShell cmdlets for metric alerts are available in the [Az.Monitor module](/powershell/module/az.monitor/).
+- Make sure to use the cmdlets that end with `V2` for new (non-classic) metric alerts, for example, [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2).
### Azure CLI
-Make sure that you're using the right CLI commands for metric alerts:
+Make sure you're using the right CLI commands for metric alerts:
- CLI commands for metric alerts start with `az monitor metrics alert`. Review the [Azure CLI reference](/cli/azure/monitor/metrics/alert) to learn about the syntax.
-- You can see [sample showing how to use metric alert CLI](./alerts-metric.md#with-azure-cli)
-- To alert on a custom metric, make sure to prefix the metric name with the relevant metric namespace: NAMESPACE.METRIC
+- You can see a [sample that shows how to use the metric alert CLI](./alerts-metric.md#with-azure-cli).
+- To alert on a custom metric, make sure to prefix the metric name with the relevant metric namespace: `NAMESPACE.METRIC`.
### General

-- If you're receiving a `Metric not found` error:
- - For a platform metric: Make sure that you're using the **Metric** name from [the Azure Monitor supported metrics page](../essentials/metrics-supported.md), and not the **Metric Display Name**
-
- - For a custom metric: Make sure that the metric is already being emitted (you cannot create an alert rule on a custom metric that doesn't yet exist), and that you're providing the custom metric's namespace (see a Resource Manager template example [here](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-a-custom-metric))
-- If you're creating [metric alerts on logs](./alerts-metric-logs.md), ensure appropriate dependencies are included. See [sample template](./alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs).
+- If you receive a `Metric not found` error:
+ - **For a platform metric:** Make sure you're using the **Metric** name from [the Azure Monitor supported metrics page](../essentials/metrics-supported.md) and not the **Metric Display Name**.
+ - **For a custom metric:** Make sure that the metric is already being emitted because you can't create an alert rule on a custom metric that doesn't yet exist. Also ensure that you're providing the custom metric's namespace. For a Resource Manager template example, see [Create a metric alert with a Resource Manager template](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-a-custom-metric).
+- If you're creating [metric alerts on logs](./alerts-metric-logs.md), ensure appropriate dependencies are included. For a sample template, see [Create Metric Alerts for Logs in Azure Monitor](./alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs).
- If you're creating an alert rule that contains multiple criteria, note the following constraints:
- - You can only select one value per dimension within each criterion
- - You cannot use "\*" as a dimension value
- - When metrics that are configured in different criterions support the same dimension, then a configured dimension value must be explicitly set in the same way for all of those metrics (see a Resource Manager template example [here](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-multiple-criteria))
-
+ - You can only select one value per dimension within each criterion.
+ - You can't use an asterisk (\*) as a dimension value.
+ - When metrics that are configured in different criteria support the same dimension, a configured dimension value must be explicitly set in the same way for all those metrics. For a Resource Manager template example, see [Create a metric alert with a Resource Manager template](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-multiple-criteria).
## No permissions to create metric alert rules
-To create a metric alert rule, you'll need to have the following permissions:
+To create a metric alert rule, you must have the following permissions:
-- Read permission on the target resource of the alert rule
-- Write permission on the resource group in which the alert rule is created (if you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides)
-- Read permission on any action group associated to the alert rule (if applicable)
+- Read permission on the target resource of the alert rule.
+- Write permission on the resource group in which the alert rule is created. If you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides.
+- Read permission on any action group associated to the alert rule, if applicable.
## Subscription registration to the Microsoft.Insights resource provider

Metric alerts can only access resources in subscriptions registered to the Microsoft.Insights resource provider.
-Therefore, to create a metric alert rule, all involved subscriptions must be registered to this resource provider:
+To create a metric alert rule, all involved subscriptions must be registered to this resource provider:
-- The subscription containing the alert rule's target resource (scope)
-- The subscription containing the action groups associated with the alert rule (if defined)
-- The subscription in which the alert rule is saved
+- The subscription that contains the alert rule's target resource (scope).
+- The subscription that contains the action groups associated with the alert rule, if defined.
+- The subscription in which the alert rule is saved.
Learn more about [registering resource providers](../../azure-resource-manager/management/resource-providers-and-types.md).
Learn more about [registering resource providers](../../azure-resource-manager/m
Consider the following restrictions for metric alert rule names:

-- Metric alert rule names can't be changed (renamed) once created
-- Metric alert rule names must be unique within a resource group
-- Metric alert rule names can't contain the following characters: * # & + : < > ? @ % { } \ /
-- Metric alert rule names can't end with a space or a period
-- The combined resource group name and alert rule name can't exceed 252 characters
+- Metric alert rule names can't be changed (renamed) after they're created.
+- Metric alert rule names must be unique within a resource group.
+- Metric alert rule names can't contain the following characters: * # & + : < > ? @ % { } \ /
+- Metric alert rule names can't end with a space or a period.
+- The combined resource group name and alert rule name can't exceed 252 characters.
+
+> [!NOTE]
+> If the alert rule name contains characters that aren't alphabetic or numeric, for example, spaces, punctuation marks, or symbols, these characters might be URL-encoded when retrieved by certain clients.
-> [!NOTE]
-> If the alert rule name contains characters that aren't alphabetic or numeric (for example: spaces, punctuation marks or symbols), these characters may be URL-encoded when retrieved by certain clients.
+## Restrictions when you use dimensions in a metric alert rule with multiple conditions
-## Restrictions when using dimensions in a metric alert rule with multiple conditions
+Metric alerts support alerting on multi-dimensional metrics and support defining multiple conditions, up to five conditions per alert rule.
-Metric alerts support alerting on multi-dimensional metrics as well as support defining multiple conditions (up to 5 conditions per alert rule).
+Consider the following constraints when you use dimensions in an alert rule that contains multiple conditions:
-Consider the following constraints when using dimensions in an alert rule that contains multiple conditions:
- You can only select one value per dimension within each condition.-- You can't use the option to "Select all current and future values" (Select \*).-- When metrics that are configured in different conditions support the same dimension, then a configured dimension value must be explicitly set in the same way for all of those metrics (in the relevant conditions).
+- You can't use the **Select all current and future values** option, that is, you can't select the asterisk (\*).
+- When metrics that are configured in different conditions support the same dimension, a configured dimension value must be explicitly set in the same way for all those metrics in the relevant conditions.
For example:
- - Consider a metric alert rule that is defined on a storage account and monitors two conditions:
+ - Consider a metric alert rule that's defined on a storage account and monitors two conditions:
  * Total **Transactions** > 5
  * Average **SuccessE2ELatency** > 250 ms
- - I'd like to update the first condition, and only monitor transactions where the **ApiName** dimension equals *"GetBlob"*
- - Because both the **Transactions** and **SuccessE2ELatency** metrics support an **ApiName** dimension, I'll need to update both conditions, and have both of them specify the **ApiName** dimension with a *"GetBlob"* value.
+ - You want to update the first condition and only monitor transactions where the **ApiName** dimension equals `"GetBlob"`.
+ - Because both the **Transactions** and **SuccessE2ELatency** metrics support an **ApiName** dimension, you'll need to update both conditions and have them specify the **ApiName** dimension with a `"GetBlob"` value.
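
As an illustration, here's a sketch of how the two updated conditions might look in the `criteria` section of a Resource Manager template; the criteria type and criterion names are illustrative, and both criteria set the **ApiName** dimension the same way:

```json
"criteria": {
    "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
    "allOf": [
        {
            "name": "Transactions criterion",
            "metricName": "Transactions",
            "dimensions": [
                { "name": "ApiName", "operator": "Include", "values": [ "GetBlob" ] }
            ],
            "operator": "GreaterThan",
            "threshold": 5,
            "timeAggregation": "Total"
        },
        {
            "name": "SuccessE2ELatency criterion",
            "metricName": "SuccessE2ELatency",
            "dimensions": [
                { "name": "ApiName", "operator": "Include", "values": [ "GetBlob" ] }
            ],
            "operator": "GreaterThan",
            "threshold": 250,
            "timeAggregation": "Average"
        }
    ]
}
```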
-## Setting the alert rule's Period and Frequency
+## Set the alert rule's period and frequency
-We recommend choosing an *Aggregation granularity (Period)* that is larger than the *Frequency of evaluation*, to reduce the likelihood of missing the first evaluation of added time series in the following cases:
-- Metric alert rule that monitors multiple dimensions – When a new dimension value combination is added
-- Metric alert rule that monitors multiple resources – When a new resource is added to the scope
-- Metric alert rule that monitors a metric that isn't emitted continuously (sparse metric) – When the metric is emitted after a period longer than 24 hours in which it wasn't emitted
+Choose an **Aggregation granularity (Period)** that's larger than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation of added time series in the following cases:
+
+- **Metric alert rule that monitors multiple dimensions:** When a new dimension value combination is added.
+- **Metric alert rule that monitors multiple resources:** When a new resource is added to the scope.
+- **Metric alert rule that monitors a metric that isn't emitted continuously (sparse metric):** When the metric is emitted after a period longer than 24 hours in which it wasn't emitted.
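
In a Resource Manager template, these two settings correspond to the `windowSize` (aggregation granularity) and `evaluationFrequency` properties. A minimal sketch with illustrative ISO 8601 durations, where the window is intentionally larger than the evaluation frequency:

```json
"properties": {
  "evaluationFrequency": "PT1M",
  "windowSize": "PT15M"
}
```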
## The Dynamic Thresholds borders don't seem to fit the data
-If the behavior of a metric changed recently, the changes won't necessarily become reflected in the Dynamic Threshold borders (upper and lower bounds) immediately, as those are calculated based on metric data from the last 10 days. When viewing the Dynamic Threshold borders for a given metric, make sure to look at the metric trend in the last week, and not only for recent hours or days.
+If the behavior of a metric changed recently, the changes won't necessarily be reflected in the Dynamic Threshold borders (upper and lower bounds) immediately. The borders are calculated based on metric data from the last 10 days. When you view the Dynamic Threshold borders for a given metric, look at the metric trend in the last week and not only for recent hours or days.
## Why is weekly seasonality not detected by Dynamic Thresholds?
-To identify weekly seasonality, the Dynamic Thresholds model requires at least three weeks of historical data. Once enough historical data is available, any weekly seasonality that exists in the metric data will be identified and the model would be adjusted accordingly.
+To identify weekly seasonality, the Dynamic Thresholds model requires at least three weeks of historical data. When enough historical data is available, any weekly seasonality that exists in the metric data is identified and the model is adjusted accordingly.
## Dynamic Thresholds shows a negative lower bound for a metric even though the metric always has positive values
-When a metric exhibits large fluctuation, Dynamic Thresholds will build a wider model around the metric values, which can result in the lower border being below zero. Specifically, this can happen in the following cases:
-1. The sensitivity is set to low
-2. The median values are close to zero
-3. The metric exhibits an irregular behavior with high variance (there are spikes or dips in the data)
+When a metric exhibits large fluctuation, Dynamic Thresholds builds a wider model around the metric values. This action can result in the lower border being below zero. Specifically, this scenario can happen when:
+
+- The sensitivity is set to low.
+- The median values are close to zero.
+- The metric exhibits an irregular behavior with high variance, which appears as spikes or dips in the data.
-When the lower bound has a negative value, this means that it's plausible for the metric to reach a zero value given the metric's irregular behavior. You may consider choosing a higher sensitivity or a larger *Aggregation granularity (Period)* to make the model less sensitive, or using the *Ignore data before* option to exclude a recent irregularity from the historical data used to build the model.
+When the lower bound has a negative value, it's plausible for the metric to reach a zero value given the metric's irregular behavior. Consider choosing a higher sensitivity or a larger **Aggregation granularity (Period)** to make the model less sensitive. Or, use the **Ignore data before** option to exclude a recent irregularity from the historical data used to build the model.
## The Dynamic Thresholds alert rule is too noisy (fires too much)
+
To reduce the sensitivity of your Dynamic Thresholds alert rule, use one of the following options:
-1. Threshold sensitivity - Set the sensitivity to *Low* in order to be more tolerant for deviations.
-2. Number of violations (under *Advanced settings*) - Configure the alert rule to trigger only if a number of deviations occur within a certain period of time. This will make the rule less susceptible to transient deviations.
+- **Threshold sensitivity:** Set the sensitivity to **Low** to be more tolerant for deviations.
+- **Number of violations (under Advanced settings):** Configure the alert rule to trigger only if several deviations occur within a certain period of time. This setting makes the rule less susceptible to transient deviations.
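
For reference, here's a minimal sketch of how these settings, together with the **Ignore data before** option mentioned elsewhere in this article, might appear in a dynamic thresholds criterion of a Resource Manager template; the metric name, date, and failing-periods values are illustrative:

```json
{
  "criterionType": "DynamicThresholdCriterion",
  "name": "1st criterion",
  "metricName": "Transactions",
  "dimensions": [],
  "operator": "GreaterOrLessThan",
  "alertSensitivity": "Low",
  "failingPeriods": {
    "numberOfEvaluationPeriods": 4,
    "minFailingPeriodsToAlert": 4
  },
  "ignoreDataBefore": "2022-09-01T00:00:00Z",
  "timeAggregation": "Total"
}
```

Setting `minFailingPeriodsToAlert` equal to `numberOfEvaluationPeriods` requires every recent evaluation to breach before the alert fires, which reduces noise from transient deviations.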
## The Dynamic Thresholds alert rule is too insensitive (doesn't fire)
-Sometimes, an alert rule won't trigger even when a high sensitivity is configured. This usually happens when the metric's distribution is highly irregular.
+
+Sometimes an alert rule won't trigger, even when a high sensitivity is configured. This scenario usually happens when the metric's distribution is highly irregular.
Consider one of the following options:
-* Move to monitoring a complementary metric that's suitable for your scenario (if applicable). For example, check for changes in success rate, rather than failure rate.
-* Try selecting a different aggregation granularity (period).
-* Check if there was a drastic change in the metric behavior in the last 10 days (an outage). An abrupt change can impact the upper and lower thresholds calculated for the metric and make them broader. Wait for a few days until the outage is no longer taken into the thresholds calculation, or use the *Ignore data before* option (under *Advanced settings*).
-* If your data has weekly seasonality, but not enough history is available for the metric, the calculated thresholds can result in having broad upper and lower bounds. For example, the calculation can treat weekdays and weekends in the same way, and build wide borders that don't always fit the data. This should resolve itself once enough metric history is available, at which point the correct seasonality will be detected and the calculated thresholds will update accordingly.
+* Move to monitoring a complementary metric that's suitable for your scenario, if applicable. For example, check for changes in success rate rather than failure rate.
+* Try selecting a different value for **Aggregation granularity (Period)**.
+* Check if there was a drastic change in the metric behavior in the last 10 days, for example, an outage. An abrupt change can affect the upper and lower thresholds calculated for the metric and make them broader. Wait for a few days until the outage is no longer taken into the thresholds calculation. Or use the **Ignore data before** option under **Advanced settings**.
+* If your data has weekly seasonality, but not enough history is available for the metric, the calculated thresholds can result in having broad upper and lower bounds. For example, the calculation can treat weekdays and weekends in the same way and build wide borders that don't always fit the data. This issue should resolve itself after enough metric history is available. Then, the correct seasonality will be detected and the calculated thresholds will update accordingly.
+
+## When I configure an alert rule's condition, why is Dynamic Thresholds disabled?
-## When configuring an alert rule's condition, why is Dynamic threshold disabled?
-While dynamic thresholds are supported for the vast majority of metrics, there are some metrics that can't use dynamic thresholds.
+Dynamic thresholds are supported for most metrics, but some metrics can't use dynamic thresholds.
-The table below lists the metrics that aren't supported by dynamic thresholds.
+The following table lists the metrics that aren't supported by Dynamic Thresholds.
-| Resource Type | Metric Name |
+| Resource type | Metric name |
| --- | --- |
| Microsoft.ClassicStorage/storageAccounts | UsedCapacity |
| Microsoft.ClassicStorage/storageAccounts/blobServices | BlobCapacity |
The table below lists the metrics that aren't supported by dynamic thresholds.
| Microsoft.Storage/storageAccounts/fileServices | FileShareCapacityQuota |
| Microsoft.Storage/storageAccounts/fileServices | FileShareProvisionedIOPS |

## Next steps

-- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
+For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
azure-monitor Tutorial Asp Net Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-custom-metrics.md
In this article, you'll learn how to capture custom metrics with Application Insights in .NET and .NET Core apps.
-Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
+Insert a few lines of code in your application to find out what users are doing with it or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
## ASP.NET Core applications

### Prerequisites
-If you'd like to follow along with the guidance in this article, certain pre-requisites are needed.
+To complete this tutorial, you need:
* Visual Studio 2022
-* Visual Studio Workloads: ASP.NET and web development, Data storage and processing, and Azure development
+* The following Visual Studio Workloads:
+ * ASP.NET and web development
+ * Data storage and processing
+ * Azure development
* .NET 6.0
* Azure subscription and user account (with the ability to create and delete resources)
-* Deploy the [completed sample application (`2 - Completed Application`)](./tutorial-asp-net-core.md) or an existing ASP.NET Core application with the [Application Insights for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) NuGet package installed and [configured to gather server-side telemetry](asp-net-core.md#enable-application-insights-server-side-telemetry-visual-studio).
+* The [completed sample application](./tutorial-asp-net-core.md) deployed or an existing ASP.NET Core application with the [Application Insights for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) NuGet package installed and [configured to gather server-side telemetry](asp-net-core.md#enable-application-insights-server-side-telemetry-visual-studio).
### Custom metrics overview
-The Application Insights .NET and .NET Core SDKs have two different methods of collecting custom metrics, `TrackMetric()`, and `GetMetric()`. The key difference between these two methods is local aggregation. `TrackMetric()` lacks pre-aggregation while `GetMetric()` has pre-aggregation. The recommended approach is to use aggregation, therefore, `TrackMetric()` is no longer the preferred method of collecting custom metrics. This article will walk you through using the GetMetric() method, and some of the rationale behind how it works.
+The Application Insights .NET and .NET Core SDKs have two different methods for collecting custom metrics: `TrackMetric()` and `GetMetric()`. The key difference between these two methods is local aggregation. `TrackMetric()` lacks pre-aggregation while `GetMetric()` has pre-aggregation. The recommended approach is to use aggregation. Therefore, `TrackMetric()` is no longer the preferred method for collecting custom metrics. This article walks you through using the `GetMetric()` method and some of the rationale behind how it works.
#### Pre-aggregating vs non pre-aggregating API
-`TrackMetric()` sends raw telemetry denoting a metric. It's inefficient to send a single telemetry item for each value. `TrackMetric()` is also inefficient in terms of performance since every `TrackMetric(item)` goes through the full SDK pipeline of telemetry initializers and processors. Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then only submits an aggregated summary metric at a fixed interval of one minute. So if you need to closely monitor some custom metric at the second or even millisecond level you can do so while only incurring the storage and network traffic cost of only monitoring every minute. This behavior also greatly reduces the risk of throttling occurring since the total number of telemetry items that need to be sent for an aggregated metric are greatly reduced.
+`TrackMetric()` sends raw telemetry that denotes a metric. `TrackMetric()` is inefficient because it sends a single telemetry item for each value. `TrackMetric()` is also inefficient in terms of performance because every `TrackMetric(item)` goes through the full SDK pipeline of telemetry initializers and processors.
-In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where alerting you may have built around those metrics could become unreliable. By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert will fire. But since custom metrics aren't sampled, there are some potential concerns.
+Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then submits only an aggregated summary metric at a fixed interval of one minute. If you need to closely monitor some custom metric at the second or even millisecond level, you can use `GetMetric()` while incurring only the storage and network traffic cost of monitoring every minute. This behavior also greatly reduces the risk of throttling because the total number of telemetry items that need to be sent for an aggregated metric is greatly reduced.
-Trend tracking in a metric every second, or at an even more granular interval can result in:
+In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where the alerting you may have built around these metrics could become unreliable. By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert will fire. Because custom metrics aren't sampled, there are some potential concerns, which are described below.
+
+Trend tracking in a metric every second or at a more granular interval can result in:
- Increased data storage costs. There's a cost associated with how much data you send to Azure Monitor. (The more data you send the greater the overall cost of monitoring.)
-- Increased network traffic/performance overhead. (In some scenarios this overhead could have both a monetary and application performance cost.)
+- Increased network traffic/performance overhead. (In some scenarios, this overhead could have both a monetary and application performance cost.)
- Risk of ingestion throttling. (The Azure Monitor service drops ("throttles") data points when your app sends a high rate of telemetry in a short time interval.)
-Throttling is a concern as it can lead to missed alerts. The condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint due to too much data being sent. We don't recommend using `TrackMetric()` for .NET and .NET Core unless you've implemented your own local aggregation logic. If you're trying to track every instance an event occurs over a given time period, you may find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Though keep in mind that unlike custom metrics, custom events are subject to sampling. You can still use `TrackMetric()` even without writing your own local pre-aggregation, but if you do so be aware of the pitfalls.
+Throttling is a concern because it can lead to missed alerts. The condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint due to too much data being sent. We don't recommend using `TrackMetric()` for .NET and .NET Core unless you've implemented your own local aggregation logic. If you're trying to track every instance when an event occurs over a given time period, you may find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Keep in mind that unlike custom metrics, custom events are subject to sampling. You can still use `TrackMetric()` even without writing your own local pre-aggregation, but be aware of the pitfalls if you do so.
-In summary `GetMetric()` is the recommended approach since it does pre-aggregation, it accumulates values from all the Track() calls and sends a summary/aggregate once every minute. `GetMetric()` can significantly reduce the cost and performance overhead by sending fewer data points, while still collecting all relevant information.
+In summary, `GetMetric()` is the recommended approach because it does pre-aggregation, accumulates values from all the Track() calls, and sends a summary/aggregate once every minute. `GetMetric()` can significantly reduce the cost and performance overhead by sending fewer data points, while still collecting all of the relevant information.
## Getting a TelemetryClient instance
namespace AzureCafe.Controllers
## TrackMetric
-Application Insights can chart metrics that aren't attached to particular events. For example, you could monitor a queue length at regular intervals. With metrics, the individual measurements are of less interest than the variations and trends, and so statistical charts are useful.
+Application Insights can chart metrics that aren't attached to particular events. For example, you could monitor a queue length at regular intervals. With metrics, the individual measurements are of less interest than the variations and trends, so statistical charts are useful.
+
+To send metrics to Application Insights, you can use the `TrackMetric(..)` API.
-To send metrics to Application Insights, you can use the `TrackMetric(..)` API. We'll cover the recommended way to send a metric:
+### Aggregation
-* **Aggregation**. When you work with metrics, every single measurement is rarely of interest. Instead, a summary of what happened during a particular time period is important. Such a summary is called _aggregation_.
+Aggregation is the recommended way to send a metric.
- For example, the aggregate metric sum for that time period is `1` and the count of the metric values is `2`. When you use the aggregation approach, you invoke `TrackMetric` only once per time period and send the aggregate values. We recommend this approach because it can significantly reduce the cost and performance overhead by sending fewer data points to Application Insights, while still collecting all relevant information.
+When you work with metrics, every single measurement is rarely of interest. Instead, a summary of what happened during a particular time period is important. Such a summary is called _aggregation_.
+
+For example, the aggregate metric sum for that time period is `1` and the count of the metric values is `2`. When you use the aggregation approach, you invoke `TrackMetric` only once per time period and send the aggregate values. We recommend this approach because it can significantly reduce the cost and performance overhead by sending fewer data points to Application Insights, while still collecting all of the relevant information.
### TrackMetric example
To send metrics to Application Insights, you can use the `TrackMetric(..)` API.
} ```
-3. Immediately following the previous code, insert the following to add a custom metric.
+3. To add a custom metric, insert the following code immediately after the previous code.
```csharp
_telemetryClient.TrackMetric("ReviewPerformed", model.Rating);
```
-4. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
+4. Right-click on the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
- ![Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted.](./media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png "Publish Web App")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png":::
-5. Select **Publish** to promote the new code to the Azure App Service.
+5. To promote the new code to the Azure App Service, select **Publish**.
- ![Screenshot of the Azure Cafe publish profile screen with the Publish button highlighted.](./media/tutorial-asp-net-custom-metrics/publish-profile.png "Publish profile")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/publish-profile.png" alt-text="Screenshot of the Azure Cafe publish profile screen with the Publish button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/publish-profile.png":::
-6. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application.
+ When the Azure Cafe web application is successfully published, a new browser window opens to the Azure Cafe web application.
- ![Screenshot of the Azure Cafe web application.](./media/tutorial-asp-net-custom-metrics/azure-cafe-index.png "Azure Cafe web application")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png":::
-7. Perform various activities in the web application to generate some telemetry.
+6. To generate some telemetry, follow these steps in the web application to add a review.
- 1. Select **Details** next to a Cafe to view its menu and reviews.
+ 1. To view a cafe's menu and reviews, select **Details** next to a cafe.
- ![Screenshot of a portion of the Azure Cafe list with the Details button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-details-button.png "Azure Cafe Details")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-details-button.png":::
- 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review.
+ 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review.
- ![Screenshot of the Cafe details screen with the Add review button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png "Add review")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details screen in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png":::
- 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. Once completed, select **Add review**.
+ 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. When finished, select **Add review**.
- ![Screenshot of the Create a review dialog.](./media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png "Create a review")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png":::
- 4. Repeat adding reviews as desired to generate more telemetry.
+ 4. To generate more telemetry, add more reviews.
### View metrics in Application Insights
-1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource.
- :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="First screenshot of a resource group with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png":::
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="First screenshot of a resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png":::
-2. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows, then select **Run** to filter the results.
+2. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**.
+
+3. In the **Tables** pane, under the **Application Insights** tree, double-click on the **customMetrics** table.
+
+4. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows:
```kql customMetrics | where name == "ReviewPerformed" ```
-3. Observe the results display the rating value present in the Review.
+5. Select **Run** to filter the results.
+
+ The results display the rating value present in your review.
## GetMetric
-As referenced before, `GetMetric(..)` is the preferred method for sending metrics. In order to make use of this method, we'll be performing some changes to the existing code.
+As mentioned before, `GetMetric(..)` is the preferred method for sending metrics. To use this method, we'll make some changes to the existing code from the TrackMetric example.
When running the sample code, you'll see that no telemetry is being sent from the application right away. A single telemetry item will be sent at around the 60-second mark. > [!NOTE]
-> GetMetric does not support tracking the last value (i.e. "gauge") or tracking histograms/distributions.
+> GetMetric does not support tracking the last value (i.e. "gauge") or histograms/distributions.
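
As a rough sketch of the pattern the next example builds toward (the wrapper class below is illustrative only; `GetMetric` and `TrackValue` are the SDK calls that perform the local pre-aggregation):

```csharp
// Sketch only: GetMetric pre-aggregates measurements inside the SDK and emits
// one aggregated telemetry item roughly every 60 seconds.
using Microsoft.ApplicationInsights;

public class ReviewTelemetry
{
    private readonly TelemetryClient _telemetryClient;

    public ReviewTelemetry(TelemetryClient telemetryClient) =>
        _telemetryClient = telemetryClient;

    public void RecordRating(double rating)
    {
        // GetMetric returns a cached Metric object; repeated calls with the
        // same metric name return the same instance.
        var metric = _telemetryClient.GetMetric("ReviewPerformed");

        // TrackValue adds the measurement to the local aggregate instead of
        // sending a telemetry item immediately.
        metric.TrackValue(rating);
    }
}
```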
### GetMetric example 1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file.
-2. Locate the `CreateReview` method and the code added in the previous [TrackMetric example](#trackmetric-example).
+2. Locate the `CreateReview` method and the code you added in the previous [TrackMetric example](#trackmetric-example).
-3. Replace the previously added code in _Step 3_ with the following one.
+3. Replace the code you inserted in the previous TrackMetric example with the following code.
```csharp var metric = _telemetryClient.GetMetric("ReviewPerformed");
When running the sample code, you'll see that no telemetry is being sent from th
4. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
- ![Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted.](./media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png "Publish Web App")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png":::
-5. Select **Publish** to promote the new code to the Azure App Service.
+5. To promote the new code to the Azure App Service, select **Publish**.
- ![Screenshot of the Azure Cafe publish profile with the Publish button highlighted.](./media/tutorial-asp-net-custom-metrics/publish-profile.png "Publish profile")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/publish-profile.png" alt-text="Screenshot of the Azure Cafe publish profile with the Publish button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/publish-profile.png":::
-6. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application.
+   When the publish succeeds, a new browser window opens to the Azure Cafe web application.
- ![Screenshot of the Azure Cafe web application.](./media/tutorial-asp-net-custom-metrics/azure-cafe-index.png "Azure Cafe web application")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png":::
-7. Perform various activities in the web application to generate some telemetry.
+6. To generate some telemetry, follow these steps in the web application to add a review.
- 1. Select **Details** next to a Cafe to view its menu and reviews.
+ 1. To view a cafe's menu and reviews, select **Details** next to a cafe.
- ![Screenshot of a portion of the Azure Cafe list with the Details button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-details-button.png "Azure Cafe Details")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-details-button.png":::
- 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review.
+ 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review.
- ![Screenshot of the Cafe details with the Add review button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png "Add review")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png":::
- 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. Once completed, select **Add review**.
+ 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. When finished, select **Add review**.
- ![Screenshot of the Create a review dialog displays.](./media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png "Create a review")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png":::
- 4. Repeat adding reviews as desired to generate more telemetry.
+ 4. To generate more telemetry, add more reviews.
### View metrics in Application Insights
-1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource.
+
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="Second screenshot of a resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png":::
- ![Second screenshot of a resource group with the Application Insights resource highlighted.](./media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png "Resource Group")
+2. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**.
-2. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows, then select **Run** to filter the results.
+3. In the **Tables** pane, under the **Application Insights** tree, double-click on the **customMetrics** table.
+
+4. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows:
```kql customMetrics | where name == "ReviewPerformed" ```
-3. Observe the results display the rating value present in the Review and the aggregated values.
+5. Select **Run** to filter the results.
+
+ The results display the rating value present in your review and the aggregated values.
## Multi-dimensional metrics The examples in the previous section show zero-dimensional metrics. Metrics can also be multi-dimensional. We currently support up to 10 dimensions.
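
As an illustration of what a multi-dimensional metric looks like in code (the metric and dimension names below are made up for this sketch; the tutorial's own example follows later in this section):

```csharp
// Sketch only: a metric with two dimensions. The dimension names are declared
// in GetMetric, and each TrackValue call supplies the corresponding values.
// All names here are illustrative.
using Microsoft.ApplicationInsights;

public class OrderTelemetry
{
    private readonly TelemetryClient _telemetryClient;

    public OrderTelemetry(TelemetryClient telemetryClient) =>
        _telemetryClient = telemetryClient;

    public void RecordOrderValue(double orderTotal, string country, string paymentMethod)
    {
        // A metric supports up to 10 dimensions.
        var metric = _telemetryClient.GetMetric("OrderValue", "Country", "PaymentMethod");

        // Aggregates are kept per dimension-value combination, so the metric
        // can later be split by Country or PaymentMethod.
        metric.TrackValue(orderTotal, country, paymentMethod);
    }
}
```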
-By default multi-dimensional metrics within the Metric explorer experience aren't turned on in Application Insights resources.
+By default, multi-dimensional metrics within the Metric explorer experience aren't turned on in Application Insights resources.
>[!NOTE] > This is a preview feature and additional billing may apply in the future. ### Enable multi-dimensional metrics
-To enable multi-dimensional metrics for an Application Insights resource, Select **Usage and estimated costs** > **Custom Metrics** > **Send custom metrics to Azure Metric Store (With dimensions)** > **OK**.
+This section walks through enabling multi-dimensional metrics for an Application Insights resource.
-Once you have made that change and send new multi-dimensional telemetry, you'll be able to **Apply splitting**.
+1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource.
+1. Select **Usage and estimated costs**.
+1. Select **Custom Metrics**.
+1. Select **Send custom metrics to Azure Metric Store (With dimensions)**.
+1. Select **OK**.
+
+After you enable multi-dimensional metrics for an Application Insights resource and send new multi-dimensional telemetry, you can split a metric by dimension.
> [!NOTE]
-> Only newly sent metrics after the feature was turned on in the portal will have dimensions stored.
+> Only metrics that are sent after the feature is turned on in the portal will have dimensions stored.
### Multi-dimensional metrics example
Once you have made that change and send new multi-dimensional telemetry, you'll
2. Locate the `CreateReview` method and the code added in the previous [GetMetric example](#getmetric-example).
-3. Replace the previously added code in _Step 3_ with the following one.
+3. Replace the code you inserted in the previous GetMetric example with the following code.
```csharp var metric = _telemetryClient.GetMetric("ReviewPerformed", "IncludesPhoto"); ```
-4. Still in the `CreateReview` method, change to code to match the following one.
+4. In the `CreateReview` method, change the code to match the following code.
```csharp [HttpPost]
Once you have made that change and send new multi-dimensional telemetry, you'll
} ```
-5. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
+5. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
- ![Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted.](./media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png "Publish Web App")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png":::
-6. Select **Publish** to promote the new code to the Azure App Service.
+6. To promote the new code to the Azure App Service, select **Publish**.
- ![Screenshot of the Azure Cafe publish profile with the Publish button highlighted.](./media/tutorial-asp-net-custom-metrics/publish-profile.png "Publish profile")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/publish-profile.png" alt-text="Screenshot of the Azure Cafe publish profile with the Publish button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/publish-profile.png":::
-7. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application.
+   When the publish succeeds, a new browser window opens to the Azure Cafe web application.
- ![Screenshot of the Azure Cafe web application.](./media/tutorial-asp-net-custom-metrics/azure-cafe-index.png "Azure Cafe web application")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png":::
-8. Perform various activities in the web application to generate some telemetry.
+7. To generate some telemetry, follow these steps in the web application to add a review.
- 1. Select **Details** next to a Cafe to view its menu and reviews.
+ 1. To view a cafe's menu and reviews, select **Details** next to a cafe.
- ![Screenshot of a portion of the Azure Cafe list with the Details button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-details-button.png "Azure Cafe Details")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-details-button.png":::
- 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review.
+ 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review.
- ![Screenshot of the Cafe details screen with the Add review button highlighted.](./media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png "Add review")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details screen in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png":::
- 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. Once completed, select **Add review**.
+ 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. When finished, select **Add review**.
- ![Screenshot of the Create a review dialog.](./media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png "Create a review")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png":::
- 4. Repeat adding reviews as desired to generate more telemetry.
+ 4. To generate more telemetry, add more reviews.
### View logs in Application Insights
-1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource.
+
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="Third screenshot of a resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png":::
- ![Third screenshot of a resource group with the Application Insights resource highlighted.](./media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png "Resource Group")
+2. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**.
-2. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows, then select **Run** to filter the results.
+3. In the **Tables** pane, under the **Application Insights** tree, double-click on the **customMetrics** table.
+
+4. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows:
```kql customMetrics | where name == "ReviewPerformed" ```
-3. Observe the results display the rating value present in the Review and the aggregated values.
+5. Select **Run** to filter the results.
+
+ The results display the rating value present in your review and the aggregated values.
-4. In order to better observe the **IncludesPhoto** dimension, we can extract it into a separate variable (column) by using the following query.
+6. To better observe the **IncludesPhoto** dimension, extract it into a separate variable (column) by using the following query.
```kql customMetrics
Once you have made that change and send new multi-dimensional telemetry, you'll
| where name == "ReviewPerformed" ```
-5. Since we reused the same custom metric name has before, results with and without the custom dimension will be displayed. In order to avoid that, we'll update the query to match the following one.
+ Because we reused the same custom metric name as before, results with and without the custom dimension will be displayed.
+
+7. To display only the results that include the custom dimension, update the query as follows.
```kql customMetrics
Once you have made that change and send new multi-dimensional telemetry, you'll
### View metrics in Application Insights
-1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource.
+
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="Fourth screenshot of a resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png":::
- ![Fourth screenshot of a resource group with the Application Insights resource highlighted.](./media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png "Resource Group")
+2. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Metrics**.
-2. From the left menu of the Application Insights resource, select **Metrics** from beneath the **Monitoring** section.
+3. In the **Metric Namespace** drop-down menu, select **azure.applicationinsights**.
-3. For **Metric Namespace**, select **azure.applicationinsights**.
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/metrics-explorer-namespace.png" alt-text="Screenshot of metrics explorer in the Azure portal with the Metric Namespace highlighted." lightbox="media/tutorial-asp-net-custom-metrics/metrics-explorer-namespace.png":::
- ![Screenshot of metrics explorer with the Metric Namespace highlighted.](./media/tutorial-asp-net-custom-metrics/metrics-explorer-namespace.png "Metric Namespace")
+4. In the **Metric** drop-down menu, select **ReviewPerformed**.
-4. For **Metric**, select **ReviewPerformed**.
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/metrics-explorer-metric.png" alt-text="Screenshot of metrics explorer in the Azure portal with the Metric highlighted." lightbox="media/tutorial-asp-net-custom-metrics/metrics-explorer-metric.png":::
- ![Screenshot of metrics explorer with the Metric highlighted.](./media/tutorial-asp-net-custom-metrics/metrics-explorer-metric.png "Metric")
+   You'll notice that you can't yet split the metric by your new custom dimension or view the custom dimension in the metrics view.
-5. However, you'll notice that you aren't able to split the metric by your new custom dimension, or view your custom dimension with the metrics view. Select **Apply Splitting**.
+5. To split the metric by dimension, select **Apply Splitting**.
- ![Screenshot of the Apply Splitting button.](./media/tutorial-asp-net-custom-metrics/apply-splitting.png "Splitting")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/apply-splitting.png" alt-text="Screenshot of the Apply Splitting button in the Azure portal." lightbox="media/tutorial-asp-net-custom-metrics/apply-splitting.png":::
-6. For the custom dimension **Values** to use, select **IncludesPhoto**.
+6. To view your custom dimension, in the **Values** drop-down menu, select **IncludesPhoto**.
- ![Screenshot illustrating splitting using a custom dimension](./media/tutorial-asp-net-custom-metrics/splitting-dimension.png "Splitting dimension")
+ :::image type="content" source="media/tutorial-asp-net-custom-metrics/splitting-dimension.png" alt-text="Screenshot of the Azure portal. It illustrates splitting by using a custom dimension." lightbox="media/tutorial-asp-net-custom-metrics/splitting-dimension.png":::
## Next steps
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
This article shows you how to create an NFS volume. For SMB volumes, see [Create
## Best practice
-* Ensure that you're using the proper mount instructions for the volume. See [Mount a volume for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
+* Ensure that you're using the proper mount instructions for the volume. See [Mount a volume for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
* The NFS client should be in the same VNet or peered VNet as the Azure NetApp Files volume. Connecting from outside the VNet is supported; however, it will introduce additional latency and decrease overall performance.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
Additional configurations are required if you use Kerberos with NFSv4.1. Follow the instructions in [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md).
- * If you want to enable Active Directory LDAP users and extended groups (up to 1024 groups) to access the volume, select the **LDAP** option. Follow instructions in [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) to complete the required configurations.
+ * If you want to enable Active Directory LDAP users and extended groups (up to 1024 groups) to access the volume, select the **LDAP** option. Follow instructions in [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) to complete the required configurations.
* Customize **Unix Permissions** as needed to specify change permissions for the mount path. The setting does not apply to the files under the mount path. The default setting is `0770`. This default setting grants read, write, and execute permissions to the owner and the group, but no permissions are granted to other users. Registration requirement and considerations apply for setting **Unix Permissions**. Follow instructions in [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
This article shows you how to create an NFS volume. For SMB volumes, see [Create
![Specify NFS protocol](../media/azure-netapp-files/azure-netapp-files-protocol-nfs.png)
-4. Click **Review + Create** to review the volume details. Then click **Create** to create the volume.
+4. Click **Review + Create** to review the volume details. Then click **Create** to create the volume.
The volume you created appears in the Volumes page.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* [Configure NFSv4.1 default domain for Azure NetApp Files](azure-netapp-files-configure-nfsv41-domain.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
-* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
-* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
+* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
+* [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
* [Mount a volume for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Configure export policy for an NFS volume](azure-netapp-files-configure-export-policy.md) * [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
azure-portal Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-bicep.md
+
+ Title: Create an Azure portal dashboard by using a Bicep file
+description: Learn how to create an Azure portal dashboard by using a Bicep file.
++ Last updated : 09/15/2022++
+# Quickstart: Create a dashboard in the Azure portal by using a Bicep file
+
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This quickstart focuses on the process of deploying a Bicep file to create a dashboard. The dashboard shows the performance of a virtual machine (VM), and some static information and links.
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-portal-dashboard/). The Bicep file for this article is too long to show here. To view the Bicep file, see [main.bicep](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/main.bicep). The Bicep file defines one Azure resource, a dashboard that displays data about the VM you created:
+
+- [Microsoft.Portal/dashboards](/azure/templates/microsoft.portal/dashboards?pivots=deployment-language-bicep)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ $resourceGroupName = 'SimpleWinVmResourceGroup'
+ $location = 'eastus'
+ $adminUserName = '<admin-user-name>'
+ $adminPassword = '<admin-password>'
+ $dnsLabelPrefix = '<dns-label-prefix>'
+ $virtualMachineName = 'SimpleWinVM'
+ $vmTemplateUri = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/prereqs/prereq.azuredeploy.json'
+
+ az group create --name $resourceGroupName --location $location
+ az deployment group create --resource-group $resourceGroupName --template-uri $vmTemplateUri --parameters adminUsername=$adminUserName adminPassword=$adminPassword dnsLabelPrefix=$dnsLabelPrefix
+ az deployment group create --resource-group $resourceGroupName --template-file main.bicep --parameters virtualMachineName=$virtualMachineName virtualMachineResourceGroup=$resourceGroupName
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ $resourceGroupName = 'SimpleWinVmResourceGroup'
+ $location = 'eastus'
+ $adminUserName = '<admin-user-name>'
+ $adminPassword = '<admin-password>'
+ $dnsLabelPrefix = '<dns-label-prefix>'
+ $virtualMachineName = 'SimpleWinVM'
+ $vmTemplateUri = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/prereqs/prereq.azuredeploy.json'
+
+ $encrypted = ConvertTo-SecureString -string $adminPassword -AsPlainText
+
+ New-AzResourceGroup -Name $resourceGroupName -Location $location
+ New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $vmTemplateUri -adminUsername $adminUserName -adminPassword $encrypted -dnsLabelPrefix $dnsLabelPrefix
+ New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile ./main.bicep -virtualMachineName $virtualMachineName -virtualMachineResourceGroup $resourceGroupName
+ ```
+
+
+
+ Replace the following values in the script:
+
+ - &lt;admin-user-name>: specify an administrator username.
+ - &lt;admin-password>: specify an administrator password.
+ - &lt;dns-label-prefix>: specify a DNS prefix.
+
+ The Bicep file requires an existing virtual machine. Before deploying the Bicep file, the script deploys an ARM template located at *https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/prereqs/prereq.azuredeploy.json* for creating a virtual machine. The virtual machine name is hard-coded as **SimpleWinVM** in the ARM template.
+
+When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
++
+## Clean up resources
+
+If you want to remove the VM and associated dashboard, delete the resource group that contains them.
+
+1. In the Azure portal, search for **SimpleWinVmResourceGroup**, then select it in the search results.
+
+1. On the **SimpleWinVmResourceGroup** page, select **Delete resource group**, enter the resource group name to confirm, then select **Delete**.
+
+> [!CAUTION]
+> Deleting a resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted.
+
+## Next steps
+
+For more information about dashboards in the Azure portal, see:
+
+> [!div class="nextstepaction"]
+> [Create and share dashboards in the Azure portal](azure-portal-dashboards.md)
azure-portal Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-template.md
Title: Create an Azure portal dashboard by using an Azure Resource Manager templ
description: Learn how to create an Azure portal dashboard by using an Azure Resource Manager template. Previously updated : 01/13/2022 Last updated : 09/16/2022 # Quickstart: Create a dashboard in the Azure portal by using an ARM template
-A dashboard in the Azure portal is a focused and organized view of your cloud resources. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a dashboard. The dashboard shows the performance of a virtual machine (VM), as well as some static information and links.
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a dashboard. The dashboard shows the performance of a virtual machine (VM), and some static information and links.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- A virtual machine. The dashboard you create in the next part of this quickstart requires an existing VM. Create a VM by following these steps.
-## Create a virtual machine
+ 1. In the Azure portal, select **Cloud Shell** from the global controls at the top of the page.
-The dashboard you create in the next part of this quickstart requires an existing VM. Create a VM by following these steps.
+ :::image type="content" source="media/quick-create-template/cloud-shell.png" alt-text="Screenshot showing the Cloud Shell option in the Azure portal.":::
-1. In the Azure portal, select **Cloud Shell** from the global controls at the top of the page.
+ 1. In the **Cloud Shell** window, select **PowerShell**.
- :::image type="content" source="media/quick-create-template/cloud-shell.png" alt-text="Screenshot showing the Cloud Shell option in the Azure portal.":::
+ :::image type="content" source="media/quick-create-template/powershell.png" alt-text="Screenshot showing the PowerShell option in Cloud Shell.":::
-1. In the **Cloud Shell** window, select **PowerShell**.
+ 1. Copy the following command and enter it at the command prompt to create a resource group.
- :::image type="content" source="media/quick-create-template/powershell.png" alt-text="Screenshot showing the PowerShell option in Cloud Shell.":::
+ ```powershell
+ New-AzResourceGroup -Name SimpleWinVmResourceGroup -Location EastUS
+ ```
-1. Copy the following command and enter it at the command prompt to create a resource group.
+ 1. Next, copy the following command and enter it at the command prompt to create a VM in your new resource group.
- ```powershell
- New-AzResourceGroup -Name SimpleWinVmResourceGroup -Location EastUS
- ```
+ ```powershell
+ New-AzVm `
+ -ResourceGroupName "SimpleWinVmResourceGroup" `
+ -Name "myVM1" `
+ -Location "East US"
+ ```
-1. Next, copy the following command and enter it at the command prompt to create a VM in your new resource group.
-
- ```powershell
- New-AzVm `
- -ResourceGroupName "SimpleWinVmResourceGroup" `
- -Name "myVM1" `
- -Location "East US"
- ```
-
-1. Enter a username and password for the VM. This is a new user name and password; it's not, for example, the account you use to sign in to Azure. For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-) and [password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
+ 1. Enter a username and password for the VM. This is a new user name and password; it's not, for example, the account you use to sign in to Azure. For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-) and [password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
After the VM has been created, move on to the next section.
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Previously updated : 04/07/2022 Last updated : 09/14/2022 # Language support in Azure Video Indexer This article provides a comprehensive list of language support by service features in Azure Video Indexer. For the list and definitions of all the features, see [Overview](video-indexer-overview.md).
+> [!NOTE]
+> To make sure a language is supported by the Azure Video Indexer frontend (the website and widget), check [the frontend language support](#language-support-in-frontend-experiences) table below.
+ ## General language support This section describes language support in Azure Video Indexer.
This section describes language support in Azure Video Indexer.
| Thai | `th-TH` | ✔ | | | ✔ | ✔ | | Tongan | `to-TO` | | | | ✔ | | | Turkish | `tr-TR` | ✔ | | | ✔ | ✔ |
-| Ukrainian | `uk-UA` | | | | ✔ | |
+| Ukrainian | `uk-UA` | ✔ | | | ✔ | |
| Urdu | `ur-PK` | | | | ✔ | |
-| Vietnamese | `vi-VN` | | | | ✔ | |
+| Vietnamese | `vi-VN` | ✔ | | | ✔ | |
## Language support in frontend experiences The following table describes language support in the Azure Video Indexer frontend experiences.
-* portal - the portal column lists supported languages for the [web portal](https://aka.ms/vi-portal-link)
-* widgets - the [widgets](video-indexer-embed-widgets.md) column lists supported languages for translating the index file
+* website - the website column lists supported languages for the [Azure Video Indexer website](https://aka.ms/vi-portal-link). For more information, see [Get started](video-indexer-get-started.md).
+* widgets - the [widgets](video-indexer-embed-widgets.md) column lists supported languages for translating the index file. For more information, see [Get started](video-indexer-embed-widgets.md).
-| **Language** | **Code** | **Portal** | **Widgets** |
+| **Language** | **Code** | **Website** | **Widgets** |
|::|::|:--:|:-:| | Afrikaans | `af-ZA` | | ✔ | | Arabic (Iraq) | `ar-IQ` | | |
The following table describes language support in the Azure Video Indexer fronte
| Thai | `th-TH` | |✔ | | Tongan | `to-TO` | | ✔ | | Turkish | `tr-TR` | ✔ | ✔ |
-| Ukrainian | `uk-UA` | |✔ |
+| Ukrainian | `uk-UA` | ✔ |✔ |
| Urdu | `ur-PK` | | ✔ |
-| Vietnamese | `vi-VN` | | ✔ |
-
+| Vietnamese | `vi-VN` | ✔ | ✔ |
## Next steps
-[Overview](video-indexer-overview.md)
+- [Overview](video-indexer-overview.md)
+- [Release notes](release-notes.md)
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
+
+ Title: Logic Apps connector with ARM-based AVI accounts
+description: This article shows how to unlock new experiences and monetization opportunities Azure Video Indexer connectors with Logic App and Power Automate with AVI ARM accounts.
++ Last updated : 08/04/2022++
+# Logic Apps connector with ARM-based AVI accounts
+
+Azure Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) supports both server-to-server and client-to-server communication. The API enables you to integrate video and audio insights into your application logic. To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://preview.flow.microsoft.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure Video Indexer API.
+
+You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for the integration gives you better visibility on the health of your workflow and an easy way to debug it.
+
+To help you get started quickly with the Azure Video Indexer connectors, the example in this article creates Logic App flows. The Logic App and Power Automate capabilities and their editors are almost identical, thus the diagrams and explanations are applicable to both. The example in this article is based on the ARM AVI account. If you're working with a classic account, see [Logic App connectors with classic-based AVI accounts](logic-apps-connector-tutorial.md).
+
+The "upload and index your video automatically" scenario covered in this article is composed of two different flows that work together. The "two flow" approach is used to support async upload and indexing of larger files effectively.
+
+* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
+* The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
+
+> [!NOTE]
+> For details about the Azure Video Indexer REST ARM API and the request/response examples, see [API](https://aka.ms/avam-arm-api). For example, [Generate an Azure Video Indexer access token](/rest/api/videoindexer/generate/access-token?tabs=HTTP). Press **Try it** to get the correct values for your account.
+>
+> If you are using a classic AVI account, see [Logic Apps connector with classic-based AVI accounts]( logic-apps-connector-tutorial.md).
+
+## Prerequisites
+
+- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- Create an ARM-based [Azure Video Indexer account](create-account-portal.md).
+- Create an Azure Storage account. Keep note of the access key for your Storage account.
+
+   Create two containers: one to store the media files and a second to store the insights generated by Azure Video Indexer. In this article, the containers are `videos` and `insights`.
+
+## Set up the first flow - file upload
+
+In this section, you create the following flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
+
+The following image shows the first flow:
+
+![Screenshot of the file upload flow.](./media/logic-apps-connector-arm-accounts/first-flow-high-level.png)
+
+1. Create the [Logic App](https://ms.portal.azure.com/#create/Microsoft.LogicApp). We create a Logic App in the same region as the Azure Video Indexer account (recommended but not required). We call the logic app `UploadIndexVideosApp`.
+
+ 1. Select **Consumption** for **Plan type**.
+ 1. Press **Review + Create** -> **Create**.
+ 1. Once the Logic App deployment is complete, in the Azure portal, go to the newly created Logic App.
+ 1. Under the **Settings** section, on the left side's panel, select the **Identity** tab.
+    1. Under **System assigned**, change the **Status** from **Off** to **On** (this step is important later in this tutorial).
+ 1. Press **Save** (on the top of the page).
+ 1. Select the **Logic app designer** tab, in the pane on the left.
+ 1. Pick a **Blank Logic App** flow.
+ 1. Search for "blob".
+ 1. In the **All** tab, choose the **Azure Blob Storage** component.
+ 1. Under **Triggers**, select the **When a blob is added or modified (properties only) (V2)** trigger.
+1. Set the storage connection.
+
+ After creating a **When a blob is added or modified (properties only) (V2)** trigger, the connection needs to be set to the following values:
+
+ |Key | Value|
+ |--|--|
+ |Connection name | <*Name your connection*>. |
+ |Authentication type | Access Key|
+ |Azure Storage Account name| <*Storage account name where media files are going to be stored*>.|
+ |Azure Storage Account Access Key| To get access key of your storage account: in the Azure portal -> my-storage -> under **Security + networking** -> **Access keys** -> copy one of the keys.|
+
+ Select **Create**.
+
+ ![Screenshot of the storage connection trigger.](./media/logic-apps-connector-arm-accounts/trigger.png)
+
+   After you set the storage connection, specify the blob storage container that's monitored for changes.
+
+ |Key| Value|
+ |--|--|
+ |Storage account name | *Storage account name where media files are stored*|
+ |Container| `/videos`|
+
+ Select **Save** -> **+New step**
+
+ ![Screenshot of the storage container trigger.](./media/logic-apps-connector-arm-accounts/storage-container-trigger.png)
+1. Create SAS URI by path action.
+
+ 1. Select the **Action** tab.
+ 1. Search for and select **Create SAS URI by path (V2)**.
+
+ |Key| Value|
+ |--|--|
|Storage account name | <*The storage account name where media files are stored*>.|
+ | Blob path| Under **Dynamic content**, select **List of Files Path**|
+ | Group Policy Identifier| Leave the default value.|
+ | Permissions| **Read** |
+ | Shared Access protocol (appears after pressing **Add new parameter**)| **HttpsOnly**|
+
+ Select **Save** (at the top of the page).
+
+ ![Screenshot of the create SAS URI by path logic.](./media/logic-apps-connector-arm-accounts/create-sas.png)
+
+ Select **+New Step**.
+1. Generate an access token.
+
+ > [!NOTE]
+ > For details about the ARM API and the request/response examples, see [Generate an Azure Video Indexer access token](/rest/api/videoindexer/generate/access-token?tabs=HTTP).
+ >
+ > Press **Try it** to get the correct values for your account.
+
+ Search and create an **HTTP** action.
+
+ |Key| Value|
+ |-|-|
+ |Method | **POST**|
+ | URI| `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.VideoIndexer/accounts/{accountName}/generateAccessToken?api-version={API-version}`. |
+ | Body|`{ "permissionType": "Contributor", "scope": "Account" }` |
+ | Add new parameter | **Authentication** |
+
+ ![Screenshot of the HTTP access token.](./media/logic-apps-connector-arm-accounts/http-with-param.png)
+
+ After the **Authentication** parameter is added, fill the required parameters according to the table below:
+
+ |Key| Value|
+ |-|-|
+ | Authentication type | **Managed identity** |
+ | Managed identity | **System-assigned managed identity**|
+ | Audience | `https://management.core.windows.net` |
+
+ Select **Save**.
+
+ > [!TIP]
+    > Before moving to the next step, set up the right permissions between the Logic app and the Azure Video Indexer account.
+ >
+    > Make sure you have followed the steps to enable the system-assigned managed identity of your logic app.
+
+ ![Screenshot of the how to enable the system assigned managed identity.](./media/logic-apps-connector-arm-accounts/enable-system.png)
+    1. Set up the system-assigned managed identity to grant permissions on the Azure Video Indexer resource.
+
+ In the Azure portal, go to your Azure Video Indexer resource/account.
+
+    1. On the left side blade, select **Access control**.
+ 1. Select **Add** -> **Add role assignment** -> **Contributor** -> **Next** -> **User, group, or service principal** -> **+Select members**.
+ 1. Under **Members**, search for the Logic Apps name you created (in this case, `UploadIndexVideosApp`).
+ 1. Press **Select**.
+ 1. Press **Review + assign**.
+1. Back in your Logic App, create an **Upload video and index** action.
+
+ 1. Select **Video Indexer(V2)**.
+    1. From Video Indexer(V2), choose **Upload Video and index**.
+ 1. Set the connection to the Video Indexer account.
+
+ |Key| Value|
+ |-|-|
+ | Connection name| <*Enter a name for the connection*>, in this case `aviconnection`.|
+ | API key| This is your personal API key, which is available under **Profile** in the [developer portal](https://api-portal.videoindexer.ai/profile)|
+
+ Select **Create**.
+    1. Fill in the **Upload video and index** action parameters.
+
+ > [!TIP]
+ > If the AVI Account ID cannot be found and isn't in the drop-down, use the custom value.
+
+ |Key| Value|
+ |-|-|
+ |Location| Location of the associated the Azure Video Indexer account.|
+ | Account ID| Account ID of the associated Azure Video Indexer account. You can find the **Account ID** in the **Overview** page of your account, in the Azure portal. Or, the **Account settings** tab, left of the [Azure Video Indexer website](https://www.videoindexer.ai/).|
+ |Access Token| Select **accessToken** from the **dynamic content** of the **Parse JSON** action.|
+ | Video Name| Select **List of Files Name** from the dynamic content of **When a blob is added or modified** action. |
+ |Video URL|Select **Web Url** from the dynamic content of **Create SAS URI by path** action.|
+ | Body| Can be left as default.|
+
+ ![Screenshot of the upload and index action.](./media/logic-apps-connector-arm-accounts/upload-and-index.png)
+
+When the upload and indexing in the first flow completes, it sends an HTTP request with the correct callback URL to trigger the second flow. The second flow then retrieves the insights generated by Azure Video Indexer. In this example, it stores the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output.
+
+## Create a second flow - JSON extraction
+
+Create the second flow, Logic Apps of type consumption. The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
+
+![Screenshot of the high level flow.](./media/logic-apps-connector-arm-accounts/second-flow-high-level.png)
+
+1. Set up the trigger
+
+   Search for the **When an HTTP request is received** trigger.
+
+ ![Screenshot of the set up the trigger.](./media/logic-apps-connector-arm-accounts/serach-trigger.png)
+
+   For the trigger, you'll see an HTTP POST URL field. The URL won't be generated until after you save your flow; however, you'll need the URL eventually.
+
+ > [!TIP]
+ > We will come back to the URL created in this step.
+1. Generate an access token.
+
+ Follow all the steps from:
+
+    1. **Generate an access token**, as you did for the first flow.
+ 1. Select **Save** -> **+ New step**.
+1. Get Video Indexer insights.
+
+ 1. Search for "Video Indexer".
+    1. From **Video Indexer(V2)**, choose the **Get Video Index** action.
+
+ Set the connection name:
+
+ |Key| Value|
+ |-|-|
+ |Connection name| <*A name for connection*>. For example, `aviconnection`.|
+ | API key| This is your personal API key, which is available under **Profile** at the [developer portal](https://api-portal.videoindexer.ai/profile). For more information, see [Subscribe to the API](video-indexer-use-apis.md#subscribe-to-the-api).|
+ 1. Select **Create**.
+ 1. Fill out the required parameters according to the table:
+
+ |Key| Value|
+ |-|-|
+ |Location| The Location of the Azure Video Indexer account.|
+ | Account ID| The Video Indexer account ID can be copied from the resource/account **Overview** page in the Azure portal.|
+ | Video ID\*| For Video ID, add dynamic content of type **Expression** and put in the following expression: **triggerOutputs()['queries']['id']**. |
+ | Access Token| From the dynamic content, under the **Parse JSON** section select the **accessToken** that is the output of the parse JSON action. |
+
+   \*This expression tells the connector to get the Video ID from the output of your trigger. In this case, the output of your trigger will be the output of **Upload video and index** in your first flow.
+
+ ![Screenshot of the upload and index a video action.](./media/logic-apps-connector-arm-accounts/get-video-index.png)
+
+ Select **Save** -> **+ New step**.
+1. Create a blob and store the insights JSON.
+
+ 1. Search for "Azure blob", from the group of actions.
+ 1. Select **Create blob(V2)**.
+ 1. Set the connection to the blob storage that will store the JSON insights files.
+
+ |Key| Value|
+ |-|-|
+ | Connection name| <*Enter a connection name*>.|
+ | Authentication type |Access Key|
| Azure Storage Account name| <*The storage account name where insights will be stored*>. |
| Azure Storage Account Access key| Go to the Azure portal -> my-storage -> under **Security + networking** -> **Access keys** -> copy one of the keys. |
+
+ ![Screenshot of the create blob action.](./media/logic-apps-connector-arm-accounts/storage-connection.png)
+ 1. Select **Create**.
+ 1. Set the folder in which insights will be stored.
+
+ |Key| Value|
+ |-|-|
|Storage account name| <*Enter the storage account name that will contain the JSON output (in this tutorial, it's the same storage account as the source video)*>.|
| Folder path | From the dropdown, select `/insights`.|
| Blob name| From the dynamic content, under the **Get Video Index** section, select **Name** and append `_insights.json`. The insights file name will be the video name + `_insights.json`. |
+ | Blob content| From the dynamic content, under the **Get Video Index** section, select the **Body**. |
+
+ ![Screenshot of the store blob content action.](./media/logic-apps-connector-arm-accounts/create-blob.png)
+ 1. Select **Save flow**.
+1. Update the callback URL to get notified when an index job is finished.
+
+ Once the flow is saved, an HTTP POST URL is created in the trigger.
+
+ 1. Copy the URL from the trigger.
+
+ ![Screenshot of the save URL trigger.](./media/logic-apps-connector-arm-accounts/http-callback-url.png)
+ 1. Go back to the first flow and paste the URL in the **Upload video and index** action for the **Callback URL parameter**.
+
+Make sure both flows are saved.
+
+## Next steps
+
+Try out your newly created Logic App or Power Automate solution by adding a video to your Azure blob container, and check back a few minutes later to see that the insights appear in the destination folder.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 09/08/2022 Last updated : 09/15/2022
In order to upload a video from a URL, change your code to send nu
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null); ```
+## September 2022
+
+### General availability of Azure Resource Management (ARM)
+
+With the ARM-based [paid (unlimited)](accounts-overview.md) account, you can use:
+
+- The [Azure role-based access control (RBAC)](../role-based-access-control/overview.md).
+- Managed Identity to better secure the communication between your Azure Media Services and Azure Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs).
+- Scale and automate your [deployment with ARM template](deploy-with-arm-template.md), [Bicep](deploy-with-bicep.md), or Terraform.
+
+To create an ARM-based account, see [create an account](create-account-portal.md).
+
+### New source languages support for STT, translation, and search
+
+Azure Video Indexer now supports Ukrainian and Vietnamese as source languages for STT (speech-to-text), translation, and search. This means transcription, translation, and search features are also supported for these languages in Azure Video Indexer web applications, widgets, and APIs.
+
+For more information, see [supported languages](language-support.md).
+ ## August 2022 ### Update topic inferencing model
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
A Cognitive Insights widget includes all visual insights that were extracted fro
|`locale` | A short language code | Controls the language of the UI. The default value is `en`. <br/>Example: `locale=de`.| |`tab` | The default selected tab | Controls the **Insights** tab that's rendered by default. <br/>Example: `tab=timeline` renders the insights with the **Timeline** tab selected.| |`search` | String | Allows you to control the initial search term.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?search=azure` renders the insights filtered by the word "azure". |
-|`sort` | Strings separated by comma | Allows you to control the sorting of an insight.<br/>Each sort consist of 3 values: widget name, property and order, connected with '_' `sort=name_property_order`<br/>Available options:<br/>widgets: keywords, audioEffects, labels, sentiments, emotions, keyframes, scenes, namedEntities and spokenLanguage.<br/>property: startTime, endTime, seenDuration, name and id.<br/>order: asc and desc.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?sort=labels_id_asc,keywords_name_desc` renders the labels sorted by id in ascending order and keywords sorted by name in descending order.|
+|`sort` | Strings separated by comma | Allows you to control the sorting of an insight.<br/>Each sort consists of 3 values: widget name, property and order, connected with '_' `sort=name_property_order`<br/>Available options:<br/>widgets: keywords, audioEffects, labels, sentiments, emotions, keyframes, scenes, namedEntities and spokenLanguage.<br/>property: startTime, endTime, seenDuration, name and id.<br/>order: asc and desc.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?sort=labels_id_asc,keywords_name_desc` renders the labels sorted by id in ascending order and keywords sorted by name in descending order.|
|`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.| ### Player widget
You can use the Player widget to stream video by using adaptive bit rate. The Pl
|`autoplay` | A Boolean value | Indicates if the player should start playing the video when loaded. The default value is `true`.<br/> Example: `autoplay=false`. | |`language`/`locale` | A language code | Controls the player language. The default value is `en-US`.<br/>Example: `language=de-DE`.| |`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.|
+|`boundingBoxes`|Array of bounding box options: people (faces) and observed people. <br/>Values should be separated by a comma (",").|Controls the option to set bounding boxes on/off when embedding the player.<br/>All mentioned options will be turned on.<br/><br/>Example: `boundingBoxes= observedPeople, people`<br/>Default value is `boundingBoxes= observedPeople` (only the observed people bounding box is turned on).|
### Editor widget
azure-vmware Concepts Design Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-design-public-internet-access.md
Title: Concept - Internet connectivity design considerations (Preview)
+ Title: Concept - Internet connectivity design considerations
description: Options for Azure VMware Solution Internet Connectivity.
The option that you select depends on the following factors:
[Enable Public IP to the NSX Edge for Azure VMware Solution (Preview)](enable-public-ip-nsx-edge.md)
-[Disable Internet access or enable a default route](disable-internet-access.md)
+[Disable Internet access or enable a default route](disable-internet-access.md)
backup Backup Azure Database Postgresql Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-troubleshoot.md
The Azure Backup service uses the credentials mentioned in the key-vault to acce
The Azure Backup service uses the credentials mentioned in the key-vault to access the database as a database user. The relevant key vault and the secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). Ensure that backup vault's MSI is given access to key vault as documented [here](backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-backup).
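As a rough sketch, if your key vault uses access policies, granting the Backup vault's managed identity access to the key vault secrets with the Azure CLI can look like the following. The key vault name and object ID are placeholders, and the complete permission set is listed in the linked article.

```azurecli
# Sketch: allow the Backup vault's managed identity to read the database credentials from the key vault.
# Replace the key vault name and object ID with your own values; see the linked article for the full permission set.
az keyvault set-policy \
  --name my-key-vault \
  --object-id <backup-vault-managed-identity-object-id> \
  --secret-permissions get list
```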
-## UserErrorSSLDisabled
-
-SSL needs to be enabled for connections to the server.
- ## UserErrorDBNotFound Ensure that the database and the relevant server exist.
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-virtual-network.md
You might have requirements in your organization to redirect (force) internet-bo
To ensure that the nodes in your pool work in a VNet that has forced tunneling enabled, you must add the following [user-defined routes](../virtual-network/virtual-networks-udr-overview.md) (UDR) for that subnet: -- The Batch service needs to communicate with nodes for scheduling tasks. To enable this communication, add a UDR for each IP address used by the Batch service in the region where your Batch account exists. The IP addresses of the Batch service are found in the `BatchNodeManagement.<region>` service tag. To obtain the list of IP addresses, see [Service tags on-premises](../virtual-network/service-tags-overview.md).
+- The Batch service needs to communicate with nodes for scheduling tasks. To enable this communication, add a UDR corresponding to the `BatchNodeManagement.<region>` [service tag](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) in the region where your Batch account exists. Set the **Next hop type** to **Internet**. An example route definition is shown after this list.
-- Ensure that outbound TCP traffic to the Azure Batch service on destination port 443 is not blocked by your on-premises network. These Azure Batch service destination IP addresses are the same as found in the `BatchNodeManagement.<region>` service tag as used for routes above.
+- Ensure that outbound TCP traffic to the Azure Batch `BatchNodeManagement.<region>` service tag on destination port 443 is not blocked by your on-premises network. This is required for [Simplified Compute Node Communication](simplified-compute-node-communication.md).
- Ensure that outbound TCP traffic to Azure Storage on destination port 443 (specifically, URLs of the form `*.table.core.windows.net`, `*.queue.core.windows.net`, and `*.blob.core.windows.net`) is not blocked by your on-premises network. - If you use virtual file mounts, review the [networking requirements](virtual-file-mount.md#networking-requirements) and ensure that no required traffic is blocked.
-When you add a UDR, define the route for each related Batch IP address prefix, and set **Next hop type** to **Internet**.
- > [!WARNING]
-> Batch service IP addresses can change over time. To prevent outages due to an IP address change, create a process to refresh Batch service IP addresses automatically and keep them up to date in your route table.
+> Batch service IP addresses can change over time. To prevent outages due to Batch service IP address changes, do not directly specify IP addresses. Instead use the `BatchNodeManagement.<region>` [service tag](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes).
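+
+For example, a service-tag route like the one described above can be added with the Azure CLI. This is a minimal sketch: the resource group, route table name, and region are placeholders, and it assumes the route table already exists and is associated with the pool's subnet.
+
+```azurecli
+# Sketch: send Batch management traffic for the pool's subnet directly to the internet.
+# Replace the resource group, route table name, and region with your own values.
+az network route-table route create \
+  --resource-group my-resource-group \
+  --route-table-name my-route-table \
+  --name BatchNodeManagement \
+  --address-prefix BatchNodeManagement.eastus \
+  --next-hop-type Internet
+```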
## Next steps
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
This article discusses best practices and useful tips for using the Azure Batch
- **Business continuity during pool maintenance and failure:** There are many reasons why a pool may not grow to the size you desire, such as internal errors or capacity constraints. Make sure you can retarget jobs at a different pool (possibly with a different VM size; Batch supports this via [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid relying on a static pool ID with the expectation that it will never be deleted and never change.
+### Pool security
+
+#### Isolation boundary
+
+If your scenario requires isolating jobs or tasks from each other, place them in separate pools. A pool is the security isolation boundary in Batch, and by default, two pools are not visible or able to communicate with each other. Avoid using separate Batch accounts as a means of security isolation unless the larger environment in which the Batch account operates requires isolation.
+
+#### Batch Node Agent updates
+
+Batch node agents are not automatically upgraded for pools that have non-zero compute nodes. To ensure that your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool (an example follows below). Monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand what changed in new Batch node agent versions and when they were released, so that you can plan your upgrade to the latest agent version.
+
+Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you are experiencing issues with your Batch pool or compute nodes, as discussed in the [Nodes](#nodes) section.
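+
+For example, resizing an existing pool down to zero compute nodes can be done with the Azure CLI. This is a minimal sketch: it assumes you've already signed in to your Batch account (for example, with `az batch account login`), and the pool ID is a placeholder.
+
+```azurecli
+# Sketch: scale the pool to zero nodes so it picks up the latest node agent on the next scale-up.
+az batch pool resize \
+  --pool-id my-pool \
+  --target-dedicated-nodes 0 \
+  --target-low-priority-nodes 0
+```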
+
+> [!NOTE]
+> For general guidance about security in Azure Batch, see [Batch security and compliance best practices](security-best-practices.md).
+ ### Pool lifetime and billing Pool lifetime can vary depending upon the method of allocation and options applied to the pool configuration. Pools can have an arbitrary lifetime and a varying number of compute nodes at any point in time. It's your responsibility to manage the compute nodes in the pool either explicitly, or through features provided by the service ([autoscale](nodes-and-pools.md#automatic-scaling-policy) or [autopool](nodes-and-pools.md#autopools)). -- **Pool freshness:** Resize your pools to zero every few months to ensure you get the [latest node agent updates and bug fixes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md). Your pool won't receive node agent updates unless it's recreated (or if it's resized to 0 compute nodes). Before you recreate or resize your pool, you should download any node agent logs for debugging purposes, as discussed in the [Nodes](#nodes) section.--- **Pool recreation:** On a similar note, avoid deleting and recreating pools on a daily basis. Instead, create a new pool and then update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool.
+- **Pool recreation:** Avoid deleting and recreating pools on a daily basis. Instead, create a new pool and then update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool.
-- **Pool efficiency and billing:** Batch itself incurs no extra charges, but you do incur charges for the compute resources used. You're billed for every compute node in the pool, regardless of the state it's in. This includes any charges required for the node to run, such as storage and networking costs. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).
+- **Pool efficiency and billing:** Batch itself incurs no extra charges, but you do incur charges for Azure resources that are utilized, such as compute, storage, networking and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it is in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).
- **Ephemeral OS disks:** Virtual Machine Configuration pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks.
For user subscription mode Batch accounts, automated OS upgrades can interrupt t
For Windows pools, `enableAutomaticUpdates` is set to `true` by default. Allowing automatic updates is recommended, but you can set this value to `false` if you need to ensure that an OS update doesn't happen unexpectedly.
-## Isolation security
-
-For the purposes of isolation, if your scenario requires isolating jobs from each other, do so by having them in separate pools. A pool is the security isolation boundary in Batch, and by default, two pools are not visible or able to communicate with each other. Avoid using separate Batch accounts as a means of isolation.
- ## Connectivity Review the following guidance related to connectivity in your Batch solutions. ### Network Security Groups (NSGs) and User Defined Routes (UDRs)
-When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you are closely following the guidelines regarding the use of the `BatchNodeManagement` service tag, ports, protocols and direction of the rule. Use of the service tag is highly recommended, rather than using the underlying Batch service IP addresses. This is because the IP addresses can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools.
+When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you are closely following the guidelines regarding the use of the `BatchNodeManagement` service tag, ports, protocols and direction of the rule. Use of the service tag is highly recommended; do not use underlying Batch service IP addresses as these can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools.
-For User Defined Routes (UDRs), ensure that you have a process in place to update Batch service IP addresses periodically in your route table, since these addresses change over time. To learn how to obtain the list of Batch service IP addresses, see [Service tags on-premises](../virtual-network/service-tags-overview.md). The Batch service IP addresses will be associated with the `BatchNodeManagement` service tag (or the regional variant that matches your Batch account region).
+For User Defined Routes (UDRs), it is recommended to use `BatchNodeManagement` [service tags](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) instead of Batch service IP addresses as these can change over time.
### Honoring DNS
center-sap-solutions Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/deploy-s4hana.md
There are three deployment options that you can select for your infrastructure,
1. For **SSH public key source**, select a source for the public key. You can choose to generate a new key pair, use an existing key stored in Azure, or use an existing public key stored on your local computer. If you don't have keys already saved, it's recommended to generate a new key pair. 1. For **Key pair name**, enter a name for the key pair.
+
+    1. If you choose to use an **Existing public key stored in Azure**, select the key in the **Stored Keys** input.
+
+    1. Provide the corresponding SSH private key from a **local file** stored on your computer, or **copy and paste** the private key.
+
+    1. If you choose to use an **Existing public key**, you can either provide the SSH public key from a **local file** stored on your computer or **copy and paste** the public key.
+
+    1. Provide the corresponding SSH private key from a **local file** stored on your computer, or **copy and paste** the private key.
+
+1. Under **Configuration Details**, enter the FQDN for your SAP system.
+
+    1. For **SAP FQDN**, provide the FQDN for your system, such as `sap.contoso.com`.
1. Select **Next: Virtual machines**.
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
The following components are necessary for the SAP installation:
## Option 1: Upload software components with script
-You can use the following method to upload the SAP components to your Azure account using scripts. Then, you can [run the software installation wizard](#install-software) to install the SAP software.
+You can use the following method to upload the SAP components to your Azure account using scripts. Then, you can [run the software installation wizard](#install-software) to install the SAP software. We recommend using this method.
You also can [upload the components manually](#option-2-upload-software-components-manually) instead.
Before you can download the software, set up an Azure Storage account for storin
1. Grant the ACSS application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access on this storage account.
-### Download supporting software
-After setting up your Azure Storage account, you need an Ubuntu VM to run scripts that download the software components.
+
+### Download SAP media
+
+You can download the SAP installation media required to install the SAP software by using a script, as described in this section.
1. Create an Ubuntu 20.04 VM in Azure
After setting up your Azure Storage account, you need an Ubuntu VM to run script
az login ```
-1. Download the following shell script for the deployer VM packages.
+1. Install Ansible 2.9.27 on the Ubuntu VM.
- ```azurecli
- wget "https://raw.githubusercontent.com/Azure/Azure-Center-for-SAP-solutions-preview/main/DownloadDeployerVMPackages.sh" -O "DownloadDeployerVMPackages.sh"
+ ```bash
+ sudo pip3 install ansible==2.9.27
```
+
+1. Clone the SAP automation repository from GitHub.
-1. Update the shell script's file permissions.
-
- ```azurecli
- chmod +x DownloadDeployerVMPackages.sh
+    ```bash
+ git clone https://github.com/Azure/sap-automation.git
```
-1. Run the shell script.
+1. Change the branch to `main`.
- ```azurecli
- ./DownloadDeployerVMPackages.sh
+    ```bash
+ git checkout main
```-
-1. When asked if you have a storage account, enter `Y`.
-
-1. When asked for the base path to the software storage account, enter the container path. To find the container path:
- 1. Find the storage account that you created in the Azure portal.
+1. [Optional] Verify that the current branch is `main`.
- 1. Find the container named `sapbits`.
- 1. On the container's sidebar menu, select **Properties** under **Settings**.
+    ```bash
+ git status
+ ```
- 1. Copy down the **URL** value. The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`.
+1. Run the Ansible script **playbook_bom_download** with your own information.
+
+ - When asked if you have a storage account, enter `Y`.
+ - For `<username>`, use your SAP username.
+ - For `<password>`, use your SAP password.
+    - For `<bom_base_name>`, use the SAP version you want to install, for example **_S41909SPS03_v0011ms_**, **_S42020SPS03_v0003ms_**, or **_S4HANA_2021_ISS_v0001ms_**.
+ - For `<storageAccountAccessKey>`, use your storage account's access key. To find the storage account's key:
-1. In the Azure CLI, when asked for the access key, enter your storage account's key. To find the storage account's key:
- 1. Find the storage account in the Azure portal.
+    1. Find the storage account that you created in the Azure portal.
1. On the storage account's sidebar menu, select **Access keys** under **Security + networking**. 1. For **key1**, select **Show key and connection string**. 1. Copy the **Key** value.-
-1. Once the script completes successfully, in the Azure portal, find the container named `sapbits` in the storage account that you created.
-
-1. Make sure the deployer VM packages are now visible in `sapbits`.
+
+ - For `<containerBasePath>`, use the path to your `sapbits` container. To find the container path:
1. Find the storage account that you created in the Azure portal. 1. Find the container named `sapbits`.
- 1. On the **Overview** page for `sapbits`, look for a folder named **deployervmpackages**.
-
-### Download SAP media
-
-You can download the SAP installation media required to install the SAP software, using a script as described in this section.
-
-1. Sign in to the Ubuntu VM that you created in the [previous section](#download-supporting-software).
-
-1. Install Ansible 2.9.27 on the ubuntu VM
-
- ```bash
- sudo pip3 install ansible==2.9.27
- ```
-
-1. Clone the SAP automation repository from GitHub.
-
- ```azurecli
- git clone https://github.com/Azure/sap-automation.git
- ```
-
-1. Run the Ansible script **playbook_bom_download** with your own information.
+ 1. On the container's sidebar menu, select **Properties** under **Settings**.
- - For `<username>`, use your SAP username.
- - For `<password>`, use your SAP password.
- - For `<bom_base_name>`, use the SAP Version you want to install i.e. **_S41909SPS03_v0011ms_** or **_S42020SPS03_v0003ms_** or **_S4HANA_2021_ISS_v0001ms_**
- - For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#download-supporting-software).
- - For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#download-supporting-software).
+ 1. Copy down the **URL** value. The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`.
The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`
+
+
+    - Run the following Ansible command:
+
```azurecli
To install the SAP software on Azure, use the ACSS installation wizard.
1. For **BOM directory location**, select **Browse** and find the path to your BOM file. For example, `https://<your-storage-account>.blob.core.windows.net/sapbits/sapfiles/boms/S41909SPS03_v0010ms.yaml`.
- 1. For **SAP FQDN**, provide a fully qualified domain name (FQDN) for your SAP system. For example, `sap.contoso.com`.
-
- 1. For High Availability (HA) systems only, enter the client identifier for the Fencing Agent service principal for **Fencing client ID**.
+ 1. For High Availability (HA) systems only, enter the client identifier for the STONITH Fencing Agent service principal for **Fencing client ID**.
1. For High Availability (HA) systems only, enter the password for the Fencing Agent service principal for **Fencing client password**.
- 1. For **SSH private key**, provide the SSH private key that you created or selected as part of your infrastructure deployment.
- 1. Select **Next**. 1. On the **Review + install** tab, review the software settings.
If you encounter this problem, follow these steps:
- For `<username>`, use your SAP username. - For `<bom_base_name>`, use the SAP Version you want to install i.e. **_S41909SPS03_v0011ms_** or **_S42020SPS03_v0003ms_** or **_S4HANA_2021_ISS_v0001ms_**-- For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#download-supporting-software). -- For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#download-supporting-software).
+- For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the **Download SAP media** section.
+- For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the **Download SAP media** section.
+ The format is `https://<your-storage-account>.blob.core.windows.net/sapbits` This should resolve the problem and you can proceed with next steps as described in the section.
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
Previously updated : 07/28/2022 Last updated : 09/14/2022 ms.devlang: csharp, golang, java, javascript, python
ms.devlang: csharp, golang, java, javascript, python
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
# Quickstart: Azure Cognitive Services Translator
In this quickstart, you'll get started using the Translator service to [translat
> [!NOTE] >
-> * For this quickstart it is recommended that you use a Translator text single-service subscription.
-> * With the single-service subscription you'll include one authorization header (**Ocp-Apim-Subscription-key**) with the REST API request. The value for Ocp-Apim-Subscription-key is your Azure secret key for your Translator Text subscription.
-> * If you choose to use the multi-service Cognitive Services subscription, it requires two authentication headers (**Ocp-Api-Subscription-Key** and **Ocp-Apim-Subscription-Region**). The value for Ocp-Apim-Subscription-Region is the region associated with your subscription.
-> * For more information on how to use the Ocp-Apim-Subscription-Region header, _see_ [Use the Text Translator APIs](translator-text-apis.md).
+> * For this quickstart it is recommended that you use a Translator text single-service global resource.
+> * With a single-service global resource you'll include one authorization header (**Ocp-Apim-Subscription-key**) with the REST API request. The value for Ocp-Apim-Subscription-key is your Azure secret key for your Translator Text subscription.
+> * If you choose to use the multi-service Cognitive Services or regional Translator resource, two authentication headers will be required: (**Ocp-Api-Subscription-Key** and **Ocp-Apim-Subscription-Region**). The value for Ocp-Apim-Subscription-Region is the region associated with your subscription.
+> * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translator REST API headers](translator-text-apis.md).
## Prerequisites
To call the Translator service via the [REST API](reference/rest-api-guide.md),
For more information on Translator authentication options, _see_ the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide.
-|Header|Value| Condition |
|Header|Value| Condition |
| |: |:|
-|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|<ul><li>***Required***</li></ul> |
-|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|<ul><li>***Required***</li></ul>|
-|**Content-Length**|The **length of the request** body.|<ul><li>***Optional***</li></ul> |
-|**X-ClientTraceId**|A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named ClientTraceId.|<ul><li>***Optional***</li></ul>
-|||
+|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|&bullet; ***Required***|
+|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |&bullet; ***Required*** when using a multi-service Cognitive Services or regional (non-global) resource.<br/>&bullet; ***Optional*** when using a single-service global Translator Resource.|
+|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|&bullet; ***Required***|
+|**Content-Length**|The **length of the request** body.|&bullet; ***Optional***|
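+
+For example, a minimal REST call that sends both headers might look like the following sketch (illustrative values only; omit the **Ocp-Apim-Subscription-Region** header when using a single-service global resource):
+
+```bash
+# Replace the key and region with your own values.
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr" \
+  -H "Ocp-Apim-Subscription-Key: <your-translator-key>" \
+  -H "Ocp-Apim-Subscription-Region: <your-resource-region>" \
+  -H "Content-Type: application/json" \
+  -d '[{ "Text": "I would really like to drive your car around the block a few times!" }]'
+```
+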
> [!IMPORTANT] >
class Program
private static readonly string key = "<your-translator-key>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+ static async Task Main(string[] args) { // Input and output languages are defined as parameters.
class Program
request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json"); request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
// Send the request and get response. HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/translate?api-version=3.0"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse u, _ := url.Parse(uri) q := u.Query()
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
req.Header.Add("Content-Type", "application/json") // Call the Translator API
import okhttp3.Response;
public class TranslatorText { private static String key = "<your-translator-key>";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
// Instantiates the OkHttpClient.
public class TranslatorText {
.url("https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&to=zu") .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
.addHeader("Content-type", "application/json") .build(); Response response = client.newCall(request).execute();
After a successful call, you should see the following response:
```json [
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
- "translations": [
- {
- "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
- "to": "fr"
- },
- {
- "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
- "to": "zu"
- }
- ]
- }
+ {
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
] ```
After a successful call, you should see the following response:
let key = "<your-translator-key>"; let endpoint = "https://api.cognitive.microsofttranslator.com";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ let location = "<YOUR-RESOURCE-LOCATION>";
+ axios({ baseURL: endpoint, url: '/translate', method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString() },
After a successful call, you should see the following response:
```json [ {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
"translations": [ { "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
import requests, uuid, json
key = "<your-translator-key>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+location = "<YOUR-RESOURCE-LOCATION>"
path = '/translate' constructed_url = endpoint + path
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4()) }
After a successful call, you should see the following response:
```json [
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
- "translations": [
- {
- "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
- "to": "fr"
- },
- {
- "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
- "to": "zu"
- }
- ]
- }
+ {
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
] ```
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
Previously updated : 09/07/2022 Last updated : 09/14/2022 ms.devlang: csharp, golang, java, javascript, python
To call the Translator service via the [REST API](reference/rest-api-guide.md),
|Header|Value| Condition | | |: |:| |**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|<ul><li>***Required***</li></ul> |
-|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |<ul><li>***Required*** when using a multi-service Cognitive Services Resource.</li><li> ***Optional*** when using a single-service Translator Resource.</li></ul>|
+|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |<ul><li>***Required*** when using a multi-service Cognitive Services or regional (non-global) resource.</li><li> ***Optional*** when using a single-service Translator Resource.</li></ul>|
|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|<ul><li>***Required***</li></ul>| |**Content-Length**|The **length of the request** body.|<ul><li>***Optional***</li></ul> | |**X-ClientTraceId**|A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named ClientTraceId.|<ul><li>***Optional***</li></ul>
-|||
## Set up your application
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource and can be found in the Azure portal on the Keys and Endpoint page.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
class Program
request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json"); request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/translate?api-version=3.0"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse u, _ := url.Parse(uri) q := u.Query()
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String route = "/translate?api-version=3.0&from=en&to=sw&to=it"; public String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>"; // Instantiates the OkHttpClient.
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
const axios = require('axios').default;
let key = "<your-translator-key>"; let endpoint = "https://api.cognitive.microsofttranslator.com";
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
let location = "<YOUR-RESOURCE-LOCATION>"; axios({
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
location = "<YOUR-RESOURCE-LOCATION>" path = '/translate'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ // location required if you're using a multi-service or regional (not global) resource.
request.Headers.Add("Ocp-Apim-Subscription-Key", key); request.Headers.Add("Ocp-Apim-Subscription-Region", location);
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+ location := "<YOUR-RESOURCE-LOCATION>"
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/translate?api-version=3.0"
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String route = "/translate?api-version=3.0&to=en&to=it"; public String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>"; // Instantiates the OkHttpClient.
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class TranslatorText {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var key = "<YOUR-TRANSLATOR-KEY>";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
-var location = "<YOUR-RESOURCE-LOCATION>";
+let location = "<YOUR-RESOURCE-LOCATION>";
axios({ baseURL: endpoint,
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location = "<YOUR-RESOURCE-LOCATION>" path = '/translate'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
+ // location, also known as region.
+// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/detect?api-version=3.0"
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String route = "/detect?api-version=3.0"; public String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>"; // Instantiates the OkHttpClient.
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class TranslatorText {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var key = "<YOUR-TRANSLATOR-KEY>";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
-var location = "<YOUR-RESOURCE-LOCATION>";
+let location = "<YOUR-RESOURCE-LOCATION>";
axios({ baseURL: endpoint,
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location = "<YOUR-RESOURCE-LOCATION>" path = '/detect'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/translate?api-version=3.0"
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String route = "/translate?api-version=3.0&to=th&toScript=latn"; public String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>";
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class TranslatorText {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var key = "<YOUR-TRANSLATOR-KEY>";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
-var location = "<YOUR-RESOURCE-LOCATION>";
+let location = "<YOUR-RESOURCE-LOCATION>";
axios({ baseURL: endpoint,
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location = "<YOUR-RESOURCE-LOCATION>" path = '/translate'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
class Program
request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json"); request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/transliterate?api-version=3.0"
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String route = "/transliterate?api-version=3.0&language=th&fromScript=thai&toScript=latn"; public String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>";
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class TranslatorText {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var key = "<YOUR-TRANSLATOR-KEY>";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
-var location = "<YOUR-RESOURCE-LOCATION>";
+let location = "<YOUR-RESOURCE-LOCATION>";
axios({ baseURL: endpoint,
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location = "<YOUR-RESOURCE-LOCATION>" path = '/transliterate'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
class Program
request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json"); request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/translate?api-version=3.0"
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String route = "/translate?api-version=3.0&to=es&includeSentenceLength=true"; public static String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>"; // Instantiates the OkHttpClient.
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class TranslatorText {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var key = "<YOUR-TRANSLATOR-KEY>";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
-var location = "<YOUR-RESOURCE-LOCATION>";
+let location = "<YOUR-RESOURCE-LOCATION>";
axios({ baseURL: endpoint,
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location = "<YOUR-RESOURCE-LOCATION>" path = '/translate'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
class Program
request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json"); request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/breaksentence?api-version=3.0"
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String route = "/breaksentence?api-version=3.0"; public String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>"; // Instantiates the OkHttpClient.
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class TranslatorText {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var key = "<YOUR-TRANSLATOR-KEY>";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
-var location = "<YOUR-RESOURCE-LOCATION>";
+let location = "<YOUR-RESOURCE-LOCATION>";
axios({ baseURL: endpoint,
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location = "<YOUR-RESOURCE-LOCATION>" path = '/breaksentence'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
class Program
request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json"); request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/dictionary/lookup?api-version=3.0"
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String route = "/dictionary/lookup?api-version=3.0&from=en&to=es"; public String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>"; // Instantiates the OkHttpClient.
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class TranslatorText {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var key = "<YOUR-TRANSLATOR-KEY>";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
-var location = "<YOUR-RESOURCE-LOCATION>";
+let location = "<YOUR-RESOURCE-LOCATION>";
axios({ baseURL: endpoint,
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location = "<YOUR-RESOURCE-LOCATION>" path = '/dictionary/lookup'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
class Program
private static readonly string key = "<YOUR-TRANSLATOR-KEY>"; private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; static async Task Main(string[] args)
class Program
request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json"); request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
func main() { key := "<YOUR-TRANSLATOR-KEY>"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "<YOUR-RESOURCE-LOCATION>";
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
endpoint := "https://api.cognitive.microsofttranslator.com/" uri := endpoint + "/dictionary/examples?api-version=3.0"
func main() {
} // Add required headers to the request req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
public class TranslatorText {
public String url = endpoint.concat(route);
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
private static String location = "<YOUR-RESOURCE-LOCATION>"; // Instantiates the OkHttpClient.
public class TranslatorText {
.url(url) .post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class TranslatorText {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var key = "<YOUR-TRANSLATOR-KEY>";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
-var location = "<YOUR-RESOURCE-LOCATION>";
+let location = "<YOUR-RESOURCE-LOCATION>";
axios({ baseURL: endpoint,
axios({
method: 'post', headers: { 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
import requests, uuid, json
key = "<YOUR-TRANSLATOR-KEY>" endpoint = "https://api.cognitive.microsofttranslator.com"
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location = "<YOUR-RESOURCE-LOCATION>" path = '/dictionary/examples'
params = {
headers = { 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
communication-services Identifiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/identifiers.md
zone_pivot_groups: acs-js-csharp-java-python-ios-android-rest
-# Understand Identifier types
+# Understand identifier types
Communication Services SDKs and REST APIs use the *identifier* type to identify who is communicating with whom. For example, identifiers specify who to call, or who has sent a chat message.
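As a quick illustration (not part of the original article), a C# sketch that constructs a few identifier types from the `Azure.Communication` namespace (Azure.Communication.Common package) might look like the following; the raw ID, phone number, and Azure AD object ID values are placeholders.
```csharp
using System;
using Azure.Communication;

class IdentifierSample
{
    static void Main()
    {
        // A Communication Services user, identified by its raw ID (placeholder value).
        var user = new CommunicationUserIdentifier("8:acs:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_yyyy");

        // A phone number identifier in E.164 format (placeholder value).
        var phone = new PhoneNumberIdentifier("+14255550123");

        // A Teams user, identified by the Azure AD object ID of the user (placeholder value).
        var teamsUser = new MicrosoftTeamsUserIdentifier("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee");

        Console.WriteLine($"{user.Id}, {phone.PhoneNumber}, {teamsUser.UserId}");
    }
}
```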
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
Development of Calling and Chat applications can be accelerated by the [Azure C
| Identity | [REST](/rest/api/communication/communication-identity) | Service| Manage users, access tokens|
| Phone numbers| [REST](/rest/api/communication/phonenumbers) | Service| Acquire and manage phone numbers |
| SMS | [REST](/rest/api/communication/sms) | Service| Send and receive SMS messages|

+| Email | [REST](/rest/api/communication/Email) | Service| Send and get status on email messages|
| Chat | [REST](/rest/api/communication/) with proprietary signaling | Client & Service | Add real-time text chat to your applications |
| Calling | Proprietary transport | Client | Voice, video, screen-sharing, and other real-time communication |
| Calling Server | [REST](/rest/api/communication/callautomation/server-calls) | Service| Make and manage calls, play audio, and configure recording |
Publishing locations for individual SDK packages are detailed below.
| Phone Numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.PhoneNumbers)| [PyPi](https://pypi.org/project/azure-communication-phonenumbers/)| [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | -| -| -|
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat)| [NuGet](https://www.NuGet.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases)| [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | -|
| SMS| [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Sms)| [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | -| -| -|
+| Email| [npm](https://www.npmjs.com/package/@azure/communication-email) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Email)| [PyPi](https://pypi.org/project/azure-communication-email/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-email) | -| -| -|
| Calling| [npm](https://www.npmjs.com/package/@azure/communication-calling) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Calling) | -| - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/)| -|
|Call Automation||[NuGet](https://www.NuGet.org/packages/Azure.Communication.CallingServer/)||[Maven](https://search.maven.org/artifact/com.azure/azure-communication-callingserver)
|Network Traversal| [npm](https://www.npmjs.com/package/@azure/communication-network-traversal)|[NuGet](https://www.NuGet.org/packages/Azure.Communication.NetworkTraversal/) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | -|- | - |
For more information, see the following SDK overviews:
- [Calling SDK Overview](../concepts/voice-video-calling/calling-sdk-features.md) - [Chat SDK Overview](../concepts/chat/sdk-features.md) - [SMS SDK Overview](../concepts/sms/sdk-features.md)
+- [Email SDK Overview](../concepts/email/sdk-features.md)
To get started with Azure Communication
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
Last updated 04/15/2022
-zone_pivot_groups: acs-js-csharp
+zone_pivot_groups: acs-js-csharp-java-python
# Quickstart: How to send an email using Azure Communication Services
In this quickstart, you'll learn how to send email using our Email SDKs.
::: zone-end ::: zone pivot="programming-language-javascript"++ ::: zone-end ## Troubleshooting
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
The following application settings influence the experience:
For more detailed information, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md#register-an-application).
-When the application is registered, you'll see an identifier in the overview. This identifier, *Application (client) ID*, is used in the next steps.
+When the application is registered, you'll see an [identifier in the overview](../concepts/troubleshooting-info.md#getting-application-id). This identifier, *Application (client) ID*, is used in the next steps.
### Step 2: Allow public client flows
If you want to check roles in Azure portal, see [List Azure role assignments](..
To construct an Administrator consent URL, the Fabrikam Azure AD Administrator does the following steps:
-1. In the URL *https://login.microsoftonline.com/{Tenant_ID}/adminconsent?client_id={Application_ID}*, the Administrator replaces {Tenant_ID} with the Fabrikam tenant ID, and replaces {Application_ID} with the Contoso Application ID.
+1. In the URL *https://login.microsoftonline.com/{Tenant_ID}/adminconsent?client_id={Application_ID}*, the Administrator replaces {Tenant_ID} with the Fabrikam [Tenant ID](../concepts/troubleshooting-info.md#getting-directory-id), and replaces {Application_ID} with the Contoso [Application ID](../concepts/troubleshooting-info.md#getting-application-id).
1. The Administrator logs in and grants permissions on behalf of the organization. The service principal of the Contoso application in the Fabrikam tenant is created if consent is granted. The Fabrikam Administrator can review the consent in Azure AD by doing the following steps:
Learn about the following concepts:
- [Azure Communication Services support Teams identities](../concepts/teams-endpoint.md) - [Teams interoperability](../concepts/teams-interop.md)
+- [Single-tenant and multi-tenant authentication for Teams users](../concepts/interop/custom-teams-endpoint-authentication-overview.md)
+- [Create and manage Communication access tokens for Teams users in a single-page application (SPA)](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/manage-teams-identity-spa)
container-apps Dapr Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-github-actions.md
The [sample solution](https://github.com/Azure-Samples/container-apps-store-api-
:::image type="content" source="media/dapr-github-actions/arch.png" alt-text="Diagram demonstrating microservices app."::: > [!NOTE]
-> This tutorial focuses on the solution deployment outlined below. If you're interested in building and running the solution on your own, [follow the README instructions within the repo](https://github.com/azure-samples/container-apps-store-api-microservice/build-and-run.md).
+> This tutorial focuses on the solution deployment outlined below. If you're interested in building and running the solution on your own, [follow the README instructions within the repo](https://github.com/Azure-Samples/container-apps-store-api-microservice/blob/main/build-and-run.md).
## Prerequisites
container-registry Container Registry Oras Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md
Title: Push and pull supply chain artifacts
-description: Push and pull supply chain artifacts using Azure Registry (Preview)
+ Title: Attach, push, and pull supply chain artifacts
+description: Attach, push, and pull supply chain artifacts using Azure Registry (Preview)
ORAS Artifacts support is a preview feature and subject to [limitations](#previe
## Prerequisites
-* **ORAS CLI** - The ORAS CLI enables push, discover, pull of artifacts to an ORAS Artifacts enabled registry.
+* **ORAS CLI** - The ORAS CLI enables you to attach, copy, push, discover, and pull artifacts in an ORAS Artifacts-enabled registry.
* **Azure CLI** - To create an identity, list and delete repositories, you need a local installation of the Azure CLI. Version 2.29.1 or later is recommended. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). * **Docker (optional)** - To complete the walkthrough, a container image is referenced. You can use Docker installed locally to build and push a container image, or reference an existing container image. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system.
ORAS Artifacts support is not available in the government or China clouds, but a
## ORAS installation
-Download and install a preview ORAS release for your operating system. See [ORAS Install instructions][oras-install-docs] for how to extract and install the file for your operating system, referencing an Alpha.1 preview build from the [ORAS GitHub repo][oras-preview-install]
+Download and install a preview ORAS release for your operating system. See the [ORAS installation instructions][oras-install-docs] for how to extract and install the file. This article uses ORAS CLI 0.14.1 to demonstrate how to manage supply chain artifacts in ACR.
## Configure a registry
-Configure environment variables to easily copy/paste commands into your shell. The commands can be run in the [Azure Cloud Shell](https://shell.azure.com/)
+Configure environment variables to easily copy/paste commands into your shell. The commands can be run in the [Azure Cloud Shell](https://shell.azure.com/).
```console ACR_NAME=myregistry
az acr create \
--output jsonc ```
-In the command output, note the `zoneRedundancy` property for the registry. When enabled, the registry is zone redundant, and ORAS Artifact enabled:
+In the command output, note the `zoneRedundancy` property for the registry. When enabled, the registry is zone redundant, and ORAS Artifact enabled.
```output {
docker push $IMAGE
echo '{"artifact": "'${IMAGE}'", "signature": "pat hancock"}' > signature.json ```
-### Push a signature to the registry, as a reference to the container image
+### Attach a signature to the registry, as a reference to the container image
-The ORAS command pushes the signature to a repository, referencing another artifact through the `subject` parameter. The `--artifact-type` provides for differentiating artifacts, similar to file extensions that enable different file types. One or more files can be pushed by specifying `file:mediaType`
+The ORAS command attaches the signature to a repository, referencing another artifact. The `--artifact-type` provides for differentiating artifacts, similar to file extensions that enable different file types. One or more files can be attached by specifying `file:mediaType`.
```bash
-oras push $REGISTRY/$REPO \
- --artifact-type 'signature/example' \
- --subject $IMAGE \
- ./signature.json:application/json
+oras attach $IMAGE \
+ ./signature.json:application/json \
+ --artifact-type signature/example
```
-For more information on oras push, see [ORAS documentation][oras-push-docs].
+For more information on oras attach, see [ORAS documentation][oras-docs].
-## Push a multi-file artifact as a reference
+## Attach a multi-file artifact as a reference
-Create some documentation around an artifact
+Create some documentation around an artifact.
```bash echo 'Readme Content' > readme.md echo 'Detailed Content' > readme-details.md ```
-Push the multi-file artifact as a reference
+Attach the multi-file artifact as a reference.
```bash
-oras push $REGISTRY/$REPO \
- --artifact-type 'readme/example' \
- --subject $IMAGE \
+oras attach $IMAGE \
 ./readme.md:application/markdown \ ./readme-details.md:application/markdown \
+ --artifact-type readme/example
``` ## Discovering artifact references The ORAS Artifacts Specification defines a [referrers API][oras-artifacts-referrers] for discovering references to a `subject` artifact. The `oras discover` command can show the list of references to the container image.
-Using `oras discover`, view the graph of artifacts now stored in the registry
+Using `oras discover`, view the graph of artifacts now stored in the registry.
```bash oras discover -o tree $IMAGE ```
-The output shows the beginning of a graph of artifacts, where the signature and docs are viewed as children of the container image
+The output shows the beginning of a graph of artifacts, where the signature and docs are viewed as children of the container image.
```output myregistry.azurecr.io/net-monitor:v1
The ORAS Artifacts specification enables deep graphs, enabling signed software b
echo '{"version": "0.0.0.0", "artifact": "'${IMAGE}'", "contents": "good"}' > sbom.json ```
-### Push a sample SBoM to the registry
+### Attach a sample SBoM to the image in the registry
```bash
-oras push $REGISTRY/$REPO \
- --artifact-type 'sbom/example' \
- --subject $IMAGE \
- ./sbom.json:application/json
+oras attach $IMAGE \
+ ./sbom.json:application/json \
+ --artifact-type sbom/example
``` ### Sign the SBoM
Artifacts that are pushed as references, typically do not have tags as they are
```bash SBOM_DIGEST=$(oras discover -o json \ --artifact-type sbom/example \
- $IMAGE | jq -r ".references[0].digest")
+ $IMAGE | jq -r ".referrers[0].digest")
``` Create a signature of an SBoM ```bash
-echo '{"artifact": "'$REGISTRY/${REPO}@$SBOM_DIGEST'", "signature": "pat hancock"}' > sbom-signature.json
+echo '{"artifact": "'$IMAGE@$SBOM_DIGEST'", "signature": "pat hancock"}' > sbom-signature.json
```
-### Push the SBoM signature
+### Attach the SBoM signature
```bash
-oras push $REGISTRY/$REPO \
+oras attach $IMAGE@$SBOM_DIGEST \
--artifact-type 'signature/example' \
- --subject $REGISTRY/$REPO@$SBOM_DIGEST \
./sbom-signature.json:application/json ```
To pull a referenced type, the digest of reference is discovered with the `oras
```bash DOC_DIGEST=$(oras discover -o json \ --artifact-type 'readme/example' \
- $IMAGE | jq -r ".references[0].digest")
+ $IMAGE | jq -r ".referrers[0].digest")
``` ### Create a clean directory for downloading
mkdir ./download
### Pull the docs into the download directory ```bash
-oras pull -a -o ./download $REGISTRY/$REPO@$DOC_DIGEST
+oras pull -o ./download $REGISTRY/$REPO@$DOC_DIGEST
``` ### View the docs
A repository can have a list of manifests that are both tagged and untagged
```azurecli az acr manifest list-metadata \ --name $REPO \
- --repository $ACR_NAME \
+ --registry $ACR_NAME \
--output jsonc ```
az acr manifest list-metadata \
## Next steps
-* Learn more about [the ORAS CLI](https://oras.land)
+* Learn more about [the ORAS CLI](https://oras.land/cli/)
* Learn more about [ORAS Artifacts][oras-artifacts] for how to push, discover, pull, copy a graph of supply chain artifacts <!-- LINKS - external -->
az acr manifest list-metadata \
[docker-mac]: https://docs.docker.com/docker-for-mac/ [docker-windows]: https://docs.docker.com/docker-for-windows/ [oras-install-docs]: https://oras.land/cli/
-[oras-preview-install]: https://github.com/oras-project/oras/releases/tag/v0.2.1-alpha.1
-[oras-push-docs]: https://oras.land/cli/1_pushing/
+[oras-docs]: https://oras.land/
[oras-artifacts]: https://github.com/oras-project/artifacts-spec/ <!-- LINKS - internal --> [az-acr-repository-show]: /cli/azure/acr/repository?#az_acr_repository_show
container-registry Container Registry Soft Delete Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md
This feature is available in all the service tiers (also known as SKUs). For inf
The article gives you an overview of the soft delete policy and walks you through the step by step process to enable the soft delete policy using Azure CLI and Azure portal.
-You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli](/cli/azure/install-azure-cli).
+You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Prerequisites
cosmos-db Troubleshoot Dot Net Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-timeout.md
description: Learn how to diagnose and fix .NET SDK request timeout exceptions.
Previously updated : 09/01/2022 Last updated : 09/16/2022
The HTTP 408 error occurs if the SDK was unable to complete the request before the timeout limit occurred.
+It's important to make sure that the application design follows our [guide for designing resilient applications with Azure Cosmos DB SDKs](conceptual-resilient-sdk-applications.md) so that it correctly reacts to different network conditions. Your application should have retries in place for timeout errors, because these are expected from time to time in a distributed system.
+
+When evaluating the case for timeout errors:
+
+* What is the impact measured in volume of operations affected compared to the operations succeeding? Is it within the service SLAs?
+* Is the P99 latency / availability affected?
+* Are the failures affecting all your application instances or only a subset? When the issue is reduced to a subset of instances, it's commonly a problem related to those instances.
+ ## Customize the timeout on the Azure Cosmos DB .NET SDK The SDK has two distinct alternatives to control timeouts, each with a different scope.
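As a rough, hedged sketch (not taken from the article), the following C# shows the two scopes you would typically use with the .NET SDK v3: a client-wide `RequestTimeout` on `CosmosClientOptions`, and a per-operation `CancellationToken`. The connection string, database, container, item ID, partition key, and timeout values are placeholders.
```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class TimeoutSample
{
    static async Task Main()
    {
        // Client-wide scope: applies to every request made through this client.
        var options = new CosmosClientOptions
        {
            RequestTimeout = TimeSpan.FromSeconds(30) // placeholder value
        };
        using var client = new CosmosClient("<connection-string>", options);
        Container container = client.GetContainer("<database>", "<container>");

        // Per-operation scope: cancel just this read if it takes longer than 10 seconds.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
        ItemResponse<dynamic> item = await container.ReadItemAsync<dynamic>(
            "<item-id>", new PartitionKey("<partition-key>"), cancellationToken: cts.Token);
        Console.WriteLine(item.Resource);
    }
}
```
The per-operation token is the narrower control: it cancels only the call it's passed to, while the client option affects every request the client makes.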
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md
Wildcard examples:
* ```[]``` Matches one of the characters in the brackets
* ```/data/sales/**/*.csv``` Gets all csv files under /data/sales
-* ```/data/sales/20??/**/``` Gets all files in the 20th century
+* ```/data/sales/20??/**/``` Gets all files recursively within all matching 20xx folders
* ```/data/sales/*/*/*.csv``` Gets csv files two levels under /data/sales
-* ```/data/sales/2004/*/12/[XY]1?.csv``` Gets all csv files in 2004 in December starting with X or Y prefixed by a two-digit number
+* ```/data/sales/2004/12/[XY]1?.csv``` Gets all csv files from December 2004 starting with X or Y, followed by 1, and any single character
**Partition Root Path:** If you have partitioned folders in your file source with a ```key=value``` format (for example, year=2019), then you can assign the top level of that partition folder tree to a column name in your data flow data stream.
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 08/11/2022 Last updated : 09/15/2022 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
For different authentication types, refer to the following sections on specific
- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication) >[!TIP]
->When creating linked service for Azure Synapse **serverless** SQL pool from UI, choose "enter manually" instead of browsing from subscription.
+>When creating a linked service for a **serverless** SQL pool in Azure Synapse from the Azure portal:
+> 1. For **Account Selection Method**, choose **Enter manually**.
+> 1. Paste the **fully qualified domain name** of the serverless endpoint. You can find this in the Azure portal Overview page for your Synapse workspace, in the properties under **Serverless SQL endpoint**. For example, `myserver-ondemand.sql-azuresynapse.net`.
+> 1. For **Database name**, provide the database name in the serverless SQL pool.
>[!TIP] >If you hit error with error code as "UserErrorFailedToConnectToSqlServer" and message like "The session limit for the database is XXX and has been reached.", add `Pooling=false` to your connection string and try again.
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle.md
Previously updated : 09/09/2021 Last updated : 09/15/2022
To copy data from Oracle, set the source type in the copy activity to `OracleSou
|: |: |: | | type | The type property of the copy activity source must be set to `OracleSource`. | Yes | | oracleReaderQuery | Use the custom SQL query to read data. An example is `"SELECT * FROM MyTable"`.<br>When you enable partitioned load, you need to hook any corresponding built-in partition parameters in your query. For examples, see the [Parallel copy from Oracle](#parallel-copy-from-oracle) section. | No |
+| convertDecimalToInteger | Oracle NUMBER type with zero or unspecified scale will be converted to the corresponding integer. Allowed values are **true** and **false** (default).| No |
| partitionOptions | Specifies the data partitioning options used to load data from Oracle. <br>Allowed values are: **None** (default), **PhysicalPartitionsOfTable**, and **DynamicRange**.<br>When a partition option is enabled (that is, not `None`), the degree of parallelism to concurrently load data from an Oracle database is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. | No | | partitionSettings | Specify the group of the settings for data partitioning. <br>Apply when the partition option isn't `None`. | No | | partitionNames | The list of physical partitions that needs to be copied. <br>Apply when the partition option is `PhysicalPartitionsOfTable`. If you use a query to retrieve the source data, hook `?AdfTabularPartitionName` in the WHERE clause. For an example, see the [Parallel copy from Oracle](#parallel-copy-from-oracle) section. | No |
To copy data from Oracle, set the source type in the copy activity to `OracleSou
"typeProperties": { "source": { "type": "OracleSource",
+ "convertDecimalToInteger": false,
"oracleReaderQuery": "SELECT * FROM MyTable" }, "sink": {
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
ticks('<timestamp>')
| Return value | Type | Description | | | - | -- |
-| <*ticks-number*> | Integer | The number of ticks since the specified timestamp |
+| <*ticks-number*> | Integer | The number of ticks (100-nanosecond intervals) from 12:00:00 midnight, January 1, 0001, in the Gregorian calendar up to the input timestamp |
|||| <a name="toLower"></a>
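Because this is the same definition as .NET's `DateTime.Ticks` (100-nanosecond intervals counted from 0001-01-01T00:00:00), you can sanity-check a value with a few lines of C#; the timestamp below is illustrative.
```csharp
using System;
using System.Globalization;

class TicksSample
{
    static void Main()
    {
        // Parse an illustrative UTC timestamp and print its tick count,
        // i.e. the number of 100-nanosecond intervals since 0001-01-01T00:00:00.
        DateTime timestamp = DateTime.Parse(
            "2022-09-16T00:00:00Z",
            CultureInfo.InvariantCulture,
            DateTimeStyles.AdjustToUniversal);
        Console.WriteLine(timestamp.Ticks); // prints 637988832000000000
    }
}
```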
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-system-variables.md
These system variables can be referenced anywhere in the pipeline JSON.
| @pipeline().TriggerName|Name of the trigger that invoked the pipeline | | @pipeline().TriggerTime|Time of the trigger run that invoked the pipeline. This is the time at which the trigger **actually** fired to invoke the pipeline run, and it may differ slightly from the trigger's scheduled time. | | @pipeline().GroupId | ID of the group to which pipeline run belongs. |
-| @pipeline()?TriggeredByPipelineName | Name of the pipeline that triggers the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluate to _Null_ when used in other circumstances. Note the question mark after @pipeline() |
-| @pipeline()?TriggeredByPipelineRunId | Run ID of the pipeline that triggers the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluate to _Null_ when used in other circumstances. Note the question mark after @pipeline() |
+| @pipeline()?.TriggeredByPipelineName | Name of the pipeline that triggers the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluate to _Null_ when used in other circumstances. Note the question mark after @pipeline() |
+| @pipeline()?.TriggeredByPipelineRunId | Run ID of the pipeline that triggers the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluate to _Null_ when used in other circumstances. Note the question mark after @pipeline() |
>[!NOTE] >Trigger-related date/time system variables (in both pipeline and trigger scopes) return UTC dates in ISO 8601 format, for example, `2017-06-01T22:20:00.4061448Z`.
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
Previously updated : 08/03/2022 Last updated : 09/13/2022
If the HDI activity is stuck in preparing for cluster, follow the guidelines bel
## Web Activity
-### Error code: 2128
+### Error Code: 2001
-- **Message**: `No response from the endpoint. Possible causes: network connectivity, DNS failure, server certificate validation or timeout.`
+- **Message**: `The length of execution output is over limit (around 4MB currently).`
-- **Cause**: This issue is due to either Network connectivity, a DNS failure, a server certificate validation, or a timeout.
+- **Cause**: The execution output is greater than 4 MB in size but the maximum supported output response payload size is 4 MB.
-- **Recommendation**: Validate that the endpoint you are trying to hit is responding to requests. You may use tools like **Fiddler/Postman/Netmon/Wireshark**.
+- **Recommendation**: Make sure the execution output size does not exceed 4 MB. For more information, see [How to scale out the size of data moving using Azure Data Factory](https://docs.microsoft.com/answers/questions/700102/how-to-scale-out-the-size-of-data-moving-using-azu.html).
-### Error code: 2108
+### Error Code: 2002
-- **Message**: `Error calling the endpoint '%url;'. Response status code: '%code;'`
+- **Message**: `The payload including configurations on activity/dataSet/linked service is too large. Please check if you have settings with very large value and try to reduce its size.`
-- **Cause**: The request failed due to an underlying issue such as network connectivity, a DNS failure, a server certificate validation, or a timeout.
+- **Cause**: The payload you are attempting to send is too large.
-- **Recommendation**: Use Fiddler/Postman/Netmon/Wireshark to validate the request.
+- **Recommendation**: Refer to [Payload is too large](data-factory-troubleshoot-guide.md#payload-is-too-large).
-#### More details
-To use **Fiddler** to create an HTTP session of the monitored web application:
+### Error Code: 2003
-1. Download, install, and open [Fiddler](https://www.telerik.com/download/fiddler).
+- **Message**: `There are substantial concurrent external activity executions which is causing failures due to throttling under subscription <subscription id>, region <region code> and limitation <current limit>. Please reduce the concurrent executions. For limits, refer https://aka.ms/adflimits.`
-1. If your web application uses HTTPS, go to **Tools** > **Fiddler Options** > **HTTPS**.
+- **Cause**: Too many activities are running concurrently. This can happen when too many pipelines are triggered at once.
- 1. In the HTTPS tab, select both **Capture HTTPS CONNECTs** and **Decrypt HTTPS traffic**.
+- **Recommendation**: Reduce pipeline concurrency. You might have to distribute the trigger time of your pipelines.
- :::image type="content" source="media/data-factory-troubleshoot-guide/fiddler-options.png" alt-text="Fiddler options":::
+### Error Code: 2010
-1. If your application uses TLS/SSL certificates, add the Fiddler certificate to your device.
+- **Message**: `The Self-hosted Integration Runtime '<SHIR name>' is offline`
- Go to: **Tools** > **Fiddler Options** > **HTTPS** > **Actions** > **Export Root Certificate to Desktop**.
+- **Cause**: The self-hosted integration runtime is offline or the Azure integration runtime is expired or not registered.
-1. Turn off capturing by going to **File** > **Capture Traffic**. Or press **F12**.
+- **Recommendation**: Make sure your self-hosted integration runtime is up and running. Refer to [Troubleshoot self-hosted integration runtime](self-hosted-integration-runtime-troubleshoot-guide.md) for more information.
-1. Clear your browser's cache so that all cached items are removed and must be downloaded again.
+### Error Code: 2105
-1. Create a request:
+- **Message**: `The value type '<provided data type>', in key '<key name>' is not expected type '<expected data type>'`
-1. Select the **Composer** tab.
+- **Cause**: Data generated in the dynamic content expression doesn't match with the key and causes JSON parsing failure.
- 1. Set the HTTP method and URL.
-
- 1. If needed, add headers and a request body.
+- **Recommendation**: Look at the key field and fix the dynamic content definition.
+
+### Error code: 2108
+
+- **Message**: `Error calling the endpoint '<URL>'. Response status code: 'NA - Unknown'. More details: Exception message: 'NA - Unknown [ClientSideException] Invalid Url:<URL>. Please verify Url or integration runtime is valid and retry. Localhost URLs are allowed only with SelfHosted Integration Runtime`
+
+- **Cause**: Unable to reach the URL provided. This can occur because there was a network connection issue, the URL was unresolvable, or a localhost URL was being used on an Azure integration runtime.
+
+- **Recommendation**: Verify that the provided URL is accessible.
+
+<br/>
+
+- **Message**: `Error calling the endpoint '%url;'. Response status code: '%code;'`
+
+- **Cause**: The request failed due to an underlying issue such as network connectivity, a DNS failure, a server certificate validation, or a timeout.
+
+- **Recommendation**: Use Fiddler/Postman/Netmon/Wireshark to validate the request.
+
+ **Using Fiddler**
+
+ To use **Fiddler** to create an HTTP session of the monitored web application:
+
+ 1. Download, install, and open [Fiddler](https://www.telerik.com/download/fiddler).
+
+ 1. If your web application uses HTTPS, go to **Tools** > **Fiddler Options** > **HTTPS**.
+
+ 1. In the HTTPS tab, select both **Capture HTTPS CONNECTs** and **Decrypt HTTPS traffic**.
+
+ :::image type="content" source="media/data-factory-troubleshoot-guide/fiddler-options.png" alt-text="Fiddler options":::
+
+ 1. If your application uses TLS/SSL certificates, add the Fiddler certificate to your device.
+
+ Go to: **Tools** > **Fiddler Options** > **HTTPS** > **Actions** > **Export Root Certificate to Desktop**.
+
+ 1. Turn off capturing by going to **File** > **Capture Traffic**. Or press **F12**.
+
+ 1. Clear your browser's cache so that all cached items are removed and must be downloaded again.
+
+ 1. Create a request:
+
+ 1. Select the **Composer** tab.
+
+ 1. Set the HTTP method and URL.
+
+ 1. If needed, add headers and a request body.
+
+ 1. Select **Execute**.
+
+ 1. Turn on traffic capturing again, and complete the problematic transaction on your page.
+
+ 1. Go to: **File** > **Save** > **All Sessions**.
+
+ For more information, see [Getting started with Fiddler](https://docs.telerik.com/fiddler/Configure-Fiddler/Tasks/ConfigureFiddler).
+
+### Error Code: 2113
+
+- **Message**: `ExtractAuthorizationCertificate: Unable to generate a certificate from a Base64 string/password combination`
+
+- **Cause**: Unable to generate certificate from Base64 string/password combination.
+
+- **Recommendation**: Verify that the Base64 encoded PFX certificate and password combination you are using are correctly entered.
+
+### Error Code: 2403
- 1. Select **Execute**.
+- **Message**: `Get access token from MSI failed for Datafactory <DF mname>, region <region code>. Please verify resource url is valid and retry.`
-1. Turn on traffic capturing again, and complete the problematic transaction on your page.
+- **Cause**: Unable to acquire an access token from the resource URL provided.
-1. Go to: **File** > **Save** > **All Sessions**.
+- **Recommendation**: Verify that you have provided the correct resource URL for your managed identity.
-For more information, see [Getting started with Fiddler](https://docs.telerik.com/fiddler/Configure-Fiddler/Tasks/ConfigureFiddler).
## General
data-factory Data Flow Parse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-parse.md
Title: Parse data transformation in mapping data flow
+ Title: Parse data transformations in mapping data flow
description: Parse embedded column documents
Use the expression builder to set the source for your parsing. This can be as si
* Source JSON data: ```{"ts":1409318650332,"userId":"309","sessionId":1879,"page":"NextSong","auth":"Logged In","method":"PUT","status":200,"level":"free","itemInSession":2,"registration":1384448}``` * Expression: ```(level as string, registration as long)```
+* Source Nested JSON data: ```{"car" : {"model" : "camaro", "year" : 1989}, "color" : "white", "transmission" : "v8"}```
+* Expression: ```(car as (model as string, year as integer), color as string, transmission as string)```
+ * Source XML data: ```<Customers><Customer>122</Customer><CompanyName>Great Lakes Food Market</CompanyName></Customers>``` * Expression: ```(Customers as (Customer as integer, CompanyName as string))```
+* Source XML with Attribute data: ```<cars><car model="camaro"><year>1989</year></car></cars>```
+* Expression: ```(cars as (car as ({@model} as string, year as integer)))```
+* Note: If you run into errors extracting attributes (for example, @model) from a complex type, a workaround is to convert the complex type to a string, remove the @ symbol (for example, `replace(toString(your_xml_string_parsed_column_name.cars.car),'@','')`), and then use the parse JSON transformation activity.
+ ### Output column type
-Here is where you'll configure the target output schema from the parsing that will be written into a single column. The easiest way to set a schema for your output from parsing is to click the 'Detect Type' button on the top right of the expression builder. ADF will attempt to autodetect the schema from the string field which you are parsing and set it for you in the output expression.
+Here's where you'll configure the target output schema from the parsing that will be written into a single column. The easiest way to set a schema for your output from parsing is to select the 'Detect Type' button on the top right of the expression builder. ADF will attempt to autodetect the schema from the string field that you're parsing and set it for you in the output expression.
:::image type="content" source="media/data-flow/data-flow-parse-2.png" alt-text="Parse example":::
-In this example, we have defined parsing of the incoming field "jsonString" which is plain text, but formatted as a JSON structure. We're going to store the parsed results as JSON in a new column called "json" with this schema:
+In this example, we have defined parsing of the incoming field "jsonString", which is plain text, but formatted as a JSON structure. We're going to store the parsed results as JSON in a new column called "json" with this schema:
`(trade as boolean, customers as string[])` Refer to the inspect tab and data preview to verify your output is mapped properly.
+Use the Derived Column activity to extract hierarchical data (for example, your_complex_column_name.car.model in the expression field).
+ ## Examples ```
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Inline datasets are recommended when you use flexible schemas, one-off source in
To use an inline dataset, select the format you want in the **Source type** selector. Instead of selecting a source dataset, you select the linked service you want to connect to.
+### Schema options
+
+Because an inline dataset is defined inside the data flow, there is not a defined schema associated with the inline dataset. On the Projection tab, you can import the source data schema and store that schema as your source projection. On this tab, you will find a "Schema options" button that allows you to define the behavior of ADF's schema discovery service.
+
+* Use projected schema: This option is useful when you have a large number of source files that ADF will scan as your source. ADF's default behavior is to discover the schema of every source file. But if you have a pre-defined projection already stored in your source transformation, you can set this to true and ADF will skip auto-discovery of every schema. With this option turned on, the source transformation can read all files in a much faster manner, applying the pre-defined schema to every file.
+* Allow schema drift: Turn on schema drift so that your data flow will allow new columns that are not already defined in the source schema.
+* Validate schema: Setting this option will cause data flow to fail if any column and type defined in the projection does not match the discovered schema of the source data.
+* Infer drifted column types: When new drifted columns are identified by ADF, those new columns will be cast to the appropriate data type using ADF's automatic type inference.
+ :::image type="content" source="media/data-flow/inline-selector.png" alt-text="Screenshot that shows Inline selected."::: ## Workspace DB (Synapse workspaces only)
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 08/26/2022 Last updated : 09/15/2022 # Manage Azure Data Factory studio preview experience
UI (user interface) changes have been made to activities in the pipeline editor
#### Adding activities to the canvas
+> [!NOTE]
+> This experience will soon be available in the default ADF settings.
+ You now have the option to add an activity using the Add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add. Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas and automatically linked with the previous activity on success.
Select an activity by using the search box or scrolling through the listed activ
#### Iteration and conditionals container view
+> [!NOTE]
+> This experience will soon be available in the default ADF settings.
++ You can now view the activities contained in iteration and conditional activities. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-11.png" alt-text="Screenshot of all iteration and conditional activity containers.":::
-##### Adding Activities
+##### Adding Activities
You have two options to add activities to your iteration and conditional activities.
data-factory Sap Change Data Capture Data Partitioning Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-data-partitioning-template.md
Title: SAP change data capture solution (Preview) - data partitioning template
+ Title: Auto-generate a pipeline by using the SAP data partitioning template
-description: This topic describes how to use the SAP data partitioning template for SAP change data capture (Preview) in Azure Data Factory.
+description: Learn how to use the SAP data partitioning template for SAP change data capture (CDC) (preview) extraction in Azure Data Factory.
Last updated 06/01/2022
-# Auto-generate a pipeline from the SAP data partitioning template
+# Auto-generate a pipeline by using the SAP data partitioning template
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This topic introduces and describes auto-generation of a pipeline with the SAP data partitioning template for SAP change data capture (Preview) in Azure Data Factory.
+Learn how to use the SAP data partitioning template to auto-generate a pipeline as part of your SAP change data capture (CDC) solution (preview). Then, use the pipeline in Azure Data Factory to partition SAP CDC extracted data.
-## Steps to use the SAP data partitioning template
+## Create a data partitioning pipeline from a template
-To auto-generate ADF pipeline from SAP data partitioning template, complete the following steps.
+To auto-generate an Azure Data Factory pipeline by using the SAP data partitioning template:
-1. Create a new pipeline from template.
+1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **Pipeline from template**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-pipeline-from-template.png" alt-text="Screenshot of the Azure Data Factory resources tab with the Pipeline from template menu highlighted.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-pipeline-from-template.png" alt-text="Screenshot of the Azure Data Factory resources tab, with Pipeline from template highlighted.":::
-1. Select the **Partition SAP data to extract and load into Azure Data Lake Store Gen2 in parallel** template.
+1. Select the **Partition SAP data to extract and load into Azure Data Lake Store Gen2 in parallel** template, and then select **Continue**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-selection.png" alt-text="Screenshot of the template gallery with the SAP data partitioning template highlighted.":::
-
-1. Create SAP CDC and ADLS Gen2 linked services, if you haven't done so already, and use them as inputs to SAP data partitioning template.
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-selection.png" alt-text="Screenshot of the template gallery, with the SAP data partitioning template highlighted.":::
- For the **Connect via integration runtime** property of SAP ODP linked service, select your SHIR. For the **Connect via integration runtime** property of ADLS Gen2 linked service, select _AutoResolveIntegrationRuntime_.
+1. Create new or use existing [linked services](sap-change-data-capture-prepare-linked-service-source-dataset.md) for SAP ODP (preview), Azure Data Lake Storage Gen2, and Azure Synapse Analytics. Use the linked services as inputs in the SAP data partitioning template.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-configuration.png" alt-text="Screenshot of the SAP data partitioning template configuration page with the Inputs section highlighted.":::
+ Under **Inputs**, for the SAP ODP linked service, in **Connect via integration runtime**, select your self-hosted integration runtime. For the Data Lake Storage Gen2 linked service, in **Connect via integration runtime**, select **AutoResolveIntegrationRuntime**.
-1. Select **Use this template** button to auto-generate SAP data partitioning pipeline that can run multiple ADF copy activities to extract multiple partitions in parallel.
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-configuration.png" alt-text="Screenshot of the SAP data partitioning template configuration page, with the Inputs section highlighted.":::
- ADF copy activities run on SHIR to concurrently extract raw data (full) from your SAP system and load it into ADLS Gen2 as CSV files. The files can be found in _sapcdc_ container under _deltachange/&lt;your pipeline name&gt;/&lt;your pipeline run timestamp&gt;_ folder path. The **Extraction mode** property of ADF copy activity is set to _Full_.
+1. Select **Use this template** to auto-generate an SAP data partitioning pipeline that can run multiple Data Factory copy activities to extract multiple partitions in parallel.
- To ensure high throughput, locate your SAP system, SHIR, ADLS Gen2, and Azure IR in the same region.
+ Data Factory copy activities run on a self-hosted integration runtime to concurrently extract full raw data from your SAP system and load it into Data Lake Storage Gen2 as CSV files. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>/\<your pipeline run timestamp\>* folder path. Be sure that **Extraction mode** for the Data Factory copy activity is set to **Full**.
-1. Assign your SAP data extraction context and data source object names, as well as an array of partitions, each is defined as an array of row selection conditions, as run-time parameter values for SAP data partitioning pipeline.
+ To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Azure Synapse Analytics instance in the same region.
- For the **selectionRangeList** parameter, enter your array of partition(s), each is defined as an array of row selection condition(s). For example, here's an array of three partitions, where the first partition includes only rows where the value in _CUSTOMERID_ column is between _1_ and _1000000_ (the first million customers), the second partition includes only rows where the value in _CUSTOMERID_ column is between _1000001_ and _2000000_ (the second million customers), and the third partition includes the rest of customers:
+1. Assign your SAP data extraction context, data source object names, and an array of partitions. Define each element as an array of row selection conditions that serve as runtime parameter values for the SAP data partitioning pipeline.
- _[[{"fieldName":"CUSTOMERID","sign":"I","option":"BT","low":"1","high":"1000000"}],[{"fieldName":"CUSTOMERID","sign":"I","option":"BT","low":"1000001","high":"2000000"}],[{"fieldName":"CUSTOMERID","sign":"E","option":"BT","low":"1","high":"2000000"}]]_
+ For the `selectionRangeList` parameter, enter your array of partitions. Define each partition as an array of row selection conditions. For example, here's an array of three partitions (a sketch that generates this value programmatically follows these steps), where the first partition includes only rows where the value in the **CUSTOMERID** column is between **1** and **1000000** (the first million customers), the second partition includes only rows where the value in the **CUSTOMERID** column is between **1000001** and **2000000** (the second million customers), and the third partition includes the rest of the customers:
- These three partitions will be extracted using three ADF copy activities running in parallel.
+ `[[{"fieldName":"CUSTOMERID","sign":"I","option":"BT","low":"1","high":"1000000"}],[{"fieldName":"CUSTOMERID","sign":"I","option":"BT","low":"1000001","high":"2000000"}],[{"fieldName":"CUSTOMERID","sign":"E","option":"BT","low":"1","high":"2000000"}]]`
+
+ The three partitions are extracted by using three Data Factory copy activities that run in parallel.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-partition-extraction-configuration.png" alt-text="Screenshot of the pipeline configuration for the SAP data partitioning template with the parameters section highlighted.":::
-1. Select the **Save all** button and you can now run SAP data partitioning pipeline.
+1. Select **Save all** and run the SAP data partitioning pipeline.
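If you have many partitions, you can generate the `selectionRangeList` value rather than typing it. The following minimal Python sketch reproduces the three-partition example shown in the steps above; the field name and ranges are the same example values used there.

```python
import json

# Build the selectionRangeList runtime parameter shown above: three partitions
# on CUSTOMERID, each partition expressed as an array of row selection conditions.
def between(field, low, high, sign="I"):
    # "I" = include, "E" = exclude, "BT" = between (same convention as the example above)
    return {"fieldName": field, "sign": sign, "option": "BT", "low": str(low), "high": str(high)}

selection_range_list = [
    [between("CUSTOMERID", 1, 1_000_000)],           # first million customers
    [between("CUSTOMERID", 1_000_001, 2_000_000)],   # second million customers
    [between("CUSTOMERID", 1, 2_000_000, sign="E")]  # the rest of the customers
]

# Paste the resulting JSON into the selectionRangeList pipeline parameter.
print(json.dumps(selection_range_list))
```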
## Next steps
-[Auto-generate a pipeline from the SAP data replication template](sap-change-data-capture-data-replication-template.md).
+[Auto-generate a pipeline by using the SAP data replication template](sap-change-data-capture-data-replication-template.md)
data-factory Sap Change Data Capture Data Replication Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-data-replication-template.md
Title: SAP change data capture solution (Preview) - data replication template
+ Title: Auto-generate a pipeline by using the SAP data replication template
-description: This topic describes how to use the SAP data replication template for SAP change data capture (Preview) in Azure Data Factory.
+description: Learn how to use the SAP data replication template for SAP change data capture (CDC) (preview) extraction in Azure Data Factory.
Last updated 06/01/2022
-# Auto-generate a pipeline from the SAP data replication template
+# Auto-generate a pipeline by using the SAP data replication template
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This topic describes how to use the SAP data replication template for SAP change data capture (Preview) in Azure Data Factory.
+Learn how to use the SAP data replication template to auto-generate a pipeline as part of your SAP change data capture (CDC) solution (preview). Then, use the pipeline in Azure Data Factory for SAP CDC extraction in your datasets.
-## Steps to auto-generate a pipeline from the SAP data replication template
+## Create a data replication pipeline from a template
-1. Create a new pipeline from template.
+To auto-generate an Azure Data Factory pipeline by using the SAP data replication template:
-1. Select the **Replicate SAP data to Azure Synapse Analytics and persist raw data in Azure Data Lake Store Gen2** template.
+1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **Pipeline from template**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-template.png" alt-text="Screenshot of the template gallery with the SAP data replication template highlighted.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-pipeline.png" alt-text="Screenshot that shows creating a new pipeline in the Author hub.":::
-1. Create SAP CDC, ADLS Gen2, and Azure Synapse Analytics linked services, if you haven't done so already, and use them as inputs to SAP data replication template.
+1. Select the **Replicate SAP data to Azure Synapse Analytics and persist raw data in Azure Data Lake Storage Gen2** template, and then select **Continue**.
- For the **Connect via integration runtime** property of the SAP ODP linked service, select your SHIR. For the **Connect via integration runtime** property of ADLS Gen2/Azure Synapse Analytics linked services, select _AutoResolveIntegrationRuntime_.
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-template.png" alt-text="Screenshot of the template gallery, with the SAP data replication template highlighted.":::
+
+1. Create new or use existing [linked services](sap-change-data-capture-prepare-linked-service-source-dataset.md) for SAP ODP (preview), Azure Data Lake Storage Gen2, and Azure Synapse Analytics. Use the linked services as inputs in the SAP data replication template.
+
+ Under **Inputs**, for the SAP ODP linked service, in **Connect via integration runtime**, select your self-hosted integration runtime. For the Data Lake Storage Gen2 and Azure Synapse Analytics linked services, in **Connect via integration runtime**, select **AutoResolveIntegrationRuntime**.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-template-configuration.png" alt-text="Screenshot of the configuration page for the SAP data replication template.":::
-1. Select the **Use this template** button to auto-generate an SAP data replication pipeline that contains Azure Data Factory copy and dataflow activities.
+1. Select **Use this template** to auto-generate an SAP data replication pipeline that contains Azure Data Factory copy activities and data flow activities.
- The data factory copy activity runs on the SHIR to extract raw data (full + deltas) from SAP systems and load it into ADLS Gen2 where it's persisted as CSV files, archiving/preserving historical changes. The files can be found in the _sapcdc_ container under the _deltachange/&lt;your pipeline name\&gt;/&lt;your pipeline run timestamp&gt;_ folder path. The **Extraction mode** property of the copy activity is set to _Delta_. The **Subscriber process** property of copy activity is parameterized.
+ The Data Factory copy activity runs on the self-hosted integration runtime to extract raw data (full and deltas) from the SAP system. The copy activity loads the raw data into Data Lake Storage Gen2 as a persisted CSV file. Historical changes are archived and preserved. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>/\<your pipeline run timestamp\>* folder path. Be sure that **Extraction mode** for the Data Factory copy activity is set to **Delta**. The **Subscriber process** property of the copy activity is parameterized.
- The data factory data flow activity runs on the Azure IR to transform the raw data and merge all changes into Azure Synapse Analytics, replicating SAP data.
+ The Data Factory data flow activity runs on the Azure integration runtime to transform the raw data and merge all changes into Azure Synapse Analytics. The process replicates the SAP data.
- To ensure high throughput, locate your SAP system, SHIR, ADLS Gen2, Azure IR, and Azure Synapse Analytics in the same region.
+ To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Azure Synapse Analytics instance in the same region.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-architecture.png" alt-text="Shows a diagram of the architecture of the SAP data replication scenario.":::
-1. Assign your SAP data extraction context, data source object, key column, and subscriber process names, as well as Synapse SQL schema and table names as run-time parameter values for SAP data replication pipeline.
+1. Assign your SAP data extraction context, data source object, key column names, subscriber process names, and Synapse SQL schema and table names as runtime parameter values for the SAP data replication pipeline.
- For the **keyColumns** parameter, enter your key column name(s) as an array of string(s), such as _[“CUSTOMERID”]/[“keyColumn1”, “keyColumn2”, “keyColumn3”, … up to 10 key column names]_. The key column(s) in raw SAP data will be used by ADF data flow activity to identify changed (created/updated/deleted) rows.
+ For the `keyColumns` parameter, enter your key column names as an array of strings, like `["CUSTOMERID"]` or `["keyColumn1", "keyColumn2", "keyColumn3", …]`. Include up to 10 key column names. The Data Factory data flow activity uses key columns in raw SAP data to identify changed rows. A changed row is a row that was created, updated, or deleted.
- For the **subscriberProcess** parameter, enter a unique name for the Subscriber process property of ADF copy activity. For example, you can name it _&lt;your pipeline name&gt;\_&lt;your copy activity name&gt;_. You can rename it to start a new ODQ subscription in SAP systems.
+ For the `subscriberProcess` parameter, enter a unique name for **Subscriber process** in the Data Factory copy activity. For example, you might name it `<your pipeline name>_<your copy activity name>`. You can rename it to start a new Operational Delta Queue subscription in SAP systems. (A sketch that assembles these parameter values follows these steps.)
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-pipeline-parameters.png" alt-text="Screenshot of the SAP data replication pipeline with the parameters section highlighted.":::
-1. Select the **Save all** button and you can now run SAP data replication pipeline.
+1. Select **Save all** and run the SAP data replication pipeline.
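The following is a minimal Python sketch that assembles the runtime parameter values described above into JSON. The `keyColumns` and `subscriberProcess` names come from the template; the values, and the remaining parameters noted in the comment, are placeholders.

```python
import json

# Illustrative values only: substitute your own key columns, pipeline, and
# copy activity names when you run the SAP data replication pipeline.
run_parameters = {
    "keyColumns": ["CUSTOMERID"],                        # up to 10 key column names
    "subscriberProcess": "MySapPipeline_CopySapToAdls",  # unique per copy activity reading the same ODP object
    # ...plus the extraction context, data source object, and Synapse SQL
    # schema/table parameters that the template prompts you for.
}

print(json.dumps(run_parameters, indent=2))
```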
+
+## Create a data delta replication pipeline from a template
+
+If you want to replicate SAP data to Data Lake Storage Gen2 in delta format, complete the steps that are detailed in the preceding section, but instead use the **Replicate SAP data to Azure Data Lake Store Gen2 in Delta format and persist raw data in CSV format** template.
-1. If you want to replicate SAP data to ADLS Gen2 in Delta format, complete the same steps as above, except using the **Replicate SAP data to Azure Data Lake Store Gen2 in Delta format and persist raw data in CSV format** template.
+Like in the data replication template, in a data delta pipeline, the Data Factory copy activity runs on the self-hosted integration runtime to extract raw data (full and deltas) from the SAP system. The copy activity loads the raw data into Data Lake Storage Gen2 as a persisted CSV file. Historical changes are archived and preserved. The files are stored in the *sapcdc* container in the *deltachange/\<your pipeline name\>/\<your pipeline run timestamp\>* folder path. The **Extraction mode** property of the copy activity is set to **Delta**. The **Subscriber process** property of the copy activity is parameterized.
- ADF copy activity runs on SHIR to extract raw data (full + deltas) from SAP systems and load it into ADLS Gen2 where it's persisted as CSV files, archiving/preserving historical changes. The files can be found in the _sapcdc_ container under the _deltachange/&lt;your pipeline name&gt;/&lt;your pipeline run timestamp&gt;_ folder path. The **Extraction mode** property of ADF copy activity is set to _Delta_. The **Subscriber process** property of ADF copy activity is parameterized.
+The Data Factory data flow activity runs on the Azure integration runtime to transform the raw data and merge all changes into Data Lake Storage Gen2 as an open source Delta Lake or Lakehouse table. The process replicates the SAP data.
- ADF data flow activity runs on Azure IR to transform the raw data and merge all changes into ADLS Gen2 as Delta Lake/Lakehouse table, replicating SAP data. The table can be found in the _saptimetravel_ container under the _<your SAP table/object name>_ folder containing the _\_delta\_log_ subfolder and Parquet files. It can be queried using Synapse serverless SQL pool, see [Query Delta Lake files using serverless SQL pool in Azure Synapse Analytics](../synapse-analytics/sql/query-delta-lake-format.md), while time travel can be done using Synapse serverless Apache Spark pool, see [Quickstart: Create a serverless Apache Spark pool in Azure Synapse Analytics using web tools](../synapse-analytics/quickstart-apache-spark-notebook.md) and [Read older versions of data using Time Travel](../synapse-analytics/spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
+The table is stored in the *saptimetravel* container in the *\<your SAP table or object name\>* folder that has the *\_delta\_log* subfolder and Parquet files. You can [query the table by using an Azure Synapse Analytics serverless SQL pool](../synapse-analytics/sql/query-delta-lake-format.md). You also can use the Delta Lake Time Travel feature with an Azure Synapse Analytics serverless Apache Spark pool. For more information, see [Create a serverless Apache Spark pool in Azure Synapse Analytics by using web tools](../synapse-analytics/quickstart-apache-spark-notebook.md) and [Read older versions of data by using Time Travel](../synapse-analytics/spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
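As an example of the time travel capability, the following PySpark sketch (for example, in a Synapse notebook) reads both the latest and an older version of the replicated Delta table. The storage account name and folder path are placeholders that follow the layout described above.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Synapse notebook

# Placeholder path: saptimetravel container, folder named after your SAP table or object.
path = "abfss://saptimetravel@<yourstorageaccount>.dfs.core.windows.net/<your SAP table or object name>"

latest = spark.read.format("delta").load(path)                # current snapshot
older = (spark.read.format("delta")
         .option("timestampAsOf", "2022-06-01 00:00:00")      # or .option("versionAsOf", 3)
         .load(path))

print(latest.count(), older.count())
```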
- To ensure high throughput, locate your SAP system, SHIR, ADLS Gen2, Azure IR, and Delta Lake/Lakehouse in the same region.
+To ensure high throughput, deploy your SAP system, self-hosted integration runtime, Data Lake Storage Gen2 instance, Azure integration runtime, and Delta Lake or Lakehouse instances in the same region.
## Next steps
-[Managing your SAP change data capture solution](sap-change-data-capture-management.md).
+[Manage your SAP CDC solution](sap-change-data-capture-management.md)
data-factory Sap Change Data Capture Debug Shir Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-debug-shir-logs.md
Title: SAP change data capture solution (Preview) - Debug issues using SHIR logs
+ Title: Debug copy activity in your SAP CDC solution (preview) by sending logs
-description: This topic describes how to debug issues with Copy activity for SAP change data capture (Preview) using self-hosted integration runtime (SHIR) logs in Azure Data Factory.
+description: Learn how to debug issues with the Azure Data Factory copy activity for your SAP change data capture (CDC) solution (preview) by sending self-hosted integration runtime logs to Microsoft.
Last updated 06/01/2022
-# Debug ADF copy activity issues by sending SHIR logs
+# Debug copy activity by sending self-hosted integration runtime logs
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-If you want us to debug your ADF copy activity issues, send SHIR logs to us. To do so, complete the following steps.
+If you want Microsoft to debug Azure Data Factory copy activity issues in your SAP change data capture (CDC) solution (preview), send us your self-hosted integration runtime logs, and then contact us.
## Send logs to Microsoft
-On SHIR machine, open the Microsoft Integration Runtime Configuration Manager app, select the Diagnostics tab, select the Send logs button, and select the Send Logs button again on dialog window that pops up.
+1. On the computer running the self-hosted integration runtime, open Microsoft Integration Runtime Configuration Manager.
+1. Select the **Diagnostics** tab. Under **Logging**, select **Send logs**.
-## Contacting support
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-shir-diagnostics-send-logs.png" alt-text="Screenshot of the Integration Runtime Configuration Manager Diagnostics tab, with Send logs highlighted.":::
-When SHIR logs have been uploaded/sent to us and you are contacting support, provide the Report ID and Timestamp values displayed on the dialog window.
+1. Enter or select the information that's requested, and then select **Send logs**.
+
+## Contact Microsoft support
+
+After you've uploaded and sent your self-hosted integration runtime logs, contact Microsoft support. In your support request, include the Report ID and Timestamp values that are shown in the confirmation:
+
-
## Next steps
-[Auto-generate ADF pipeline from SAP data partitioning template](sap-change-data-capture-data-partitioning-template.md)
+[Auto-generate a pipeline by using the SAP data partitioning template](sap-change-data-capture-data-partitioning-template.md)
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
Title: SAP change data capture solution (Preview) - introduction and architecture
+ Title: Overview and architecture of the SAP CDC solution (preview)
-description: This topic introduces and describes the architecture for SAP change data capture (Preview) in Azure Data Factory.
+description: Learn about the SAP change data capture (CDC) solution (preview) in Azure Data Factory and understand its architecture.
Last updated 06/01/2022
-# SAP change data capture (CDC) solution in Azure Data Factory (Preview)
+# Overview and architecture of the SAP CDC solution (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This topic introduces and describes the architecture for SAP change data capture (Preview) in Azure Data Factory.
+Learn about the SAP change data capture (CDC) solution (preview) in Azure Data Factory and understand its architecture.
-## Introduction
+Azure Data Factory is an ETL and ELT data integration platform as a service (PaaS). For SAP data integration, Data Factory currently offers six general availability connectors:
-Azure Data Factory (ADF) is a data integration (ETL/ELT) Platform as a Service (PaaS) and for SAP data integration, ADF currently offers six connectors:
+## Data extraction needs
-These connectors can only extract data in batches, where each batch treats old and new data equally w/o identifying data changes ("batch mode"). This extraction mode isn't optimal when dealing w/ large data sets, such as tables w/ millions/billions records, that change often. To keep your copy of SAP data fresh/up-to-date, frequently extracting it in full is expensive/inefficient. There's a manual and limited workaround to extract mostly new/updated records, but this process requires a column w/ timestamp/monotonously increasing values and continuously tracking the highest value since last extraction ("watermarking"). Unfortunately, some tables have no column that can be used for watermarking and this process can't handle deleted records.
+The SAP connectors in Data Factory extract SAP source data only in batches. Each batch processes existing and new data the same. In data extraction in batch mode, changes between existing and new datasets aren't identified. This type of extraction mode isn't optimal when you have large datasets like tables that have millions or billions of records that change often.
-Consequently, our customers have been asking for a new connector that can extract only data changes (inserts/updates/deletes = "deltas"), leveraging the CDC feature that exists in most SAP systems (in "CDC mode"). After gathering their requirements, we've decided to build a new SAP ODP connector leveraging SAP Operational Data Provisioning (ODP) framework. This new connector can connect to all SAP systems that support ODP, such as R/3, ECC, S/4HANA, BW, and BW/4HANA, directly at the application layer or indirectly via SAP Landscape Transformation (SLT) replication server as a proxy. It can fully/incrementally extract SAP data that includes not only physical tables, but also logical objects created on top of those tables, such as Advanced Business Application Programming (ABAP) Core Data Services (CDS) views, w/o watermarking. Combined w/ existing ADF features, such as copy + data flow activities, pipeline templates, and tumbling window triggers, we can offer low-latency SAP CDC/replication solution w/ self-managed pipeline experience.
+You can keep your copy of SAP data fresh and up-to-date by frequently extracting the full dataset, but this approach is expensive and inefficient. You also can use a manual, limited workaround to extract mostly new or updated records. In a process called *watermarking*, extraction requires using a timestamp column, monotonously increasing values, and continuously tracking the highest value since the last extraction. But some tables don't have a column that you can use for watermarking. This process also doesn't identify a deleted record as a change in the dataset.
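To make the limitation concrete, here's a schematic Python sketch of the watermarking workaround, which is not part of the SAP CDC solution. The table and column names are hypothetical; note that rows deleted since the last run never appear in the query result.

```python
# Schematic sketch of the watermarking workaround described above (not part of
# the SAP CDC solution). It only works when the table has a monotonically
# increasing column, and it never observes deleted rows.
last_watermark = "2022-06-01T00:00:00"  # highest LAST_CHANGED value from the previous run

query = (
    "SELECT * FROM SALES_ORDERS "
    f"WHERE LAST_CHANGED > '{last_watermark}' "  # only new or updated rows
    "ORDER BY LAST_CHANGED"
)

# After loading the result, persist the new high-water mark for the next run:
# last_watermark = max(row.LAST_CHANGED for row in result)
print(query)
```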
-This document provides a high-level architecture of our SAP CDC solution in ADF, the prerequisites and step-by-step guidelines to preview it, and its current limitations.
+## The SAP CDC solution
-## Architecture
+Microsoft customers indicate that they need a connector that can extract only the delta between two sets of data. In data, a *delta* is any change in a dataset that's the result of an update, insert, or deletion in the dataset. A delta extraction connector uses the [SAP change data capture (CDC) feature](https://help.sap.com/docs/SAP_DATA_SERVICES/ec06fadc50b64b6184f835e4f0e1f52f/1752bddf523c45f18ce305ac3bcd7e08.html?q=change%20data%20capture) that exists in most SAP systems to determine the delta in a dataset. The SAP CDC solution in Data Factory uses the SAP Operational Data Provisioning (ODP) framework to replicate the delta in an SAP source dataset.
-The high-level architecture of our SAP CDC solution in ADF is divided into two sides, left-hand-side (LHS) and right-hand-side (RHS). LHS includes SAP ODP connector that invokes ODP API over standard Remote Function Call (RFC) modules to extract raw SAP data (full + deltas). RHS includes ADF copy activity that loads the raw SAP data into any destination, such as Azure Blob Storage/Azure Data Lake Store (ADLS) Gen2, in CSV/Parquet format, essentially archiving/preserving all historical changes. RHS can also include ADF data flow activity that transforms the raw SAP data, merges all changes, and loads the result into any destination, such as Azure SQL Database/Azure Synapse Analytics, essentially replicating SAP data. ADF data flow activity can also load the result into ADLS Gen2 in Delta format, enabling time-travel to produce snapshots of SAP data at any given periods in the past. LHS and RHS can be combined as SAP CDC/replication template to auto-generate ADF pipeline that can be frequently run using ADF tumbling window trigger to replicate SAP data into Azure w/ low latency and w/o watermarking.
+This article provides a high-level architecture of the SAP CDC solution in Azure Data Factory. Get more information about the SAP CDC solution:
+- [Prerequisites and setup](sap-change-data-capture-prerequisites-configuration.md)
+- [Set up a self-hosted integration runtime](sap-change-data-capture-shir-preparation.md)
+- [Set up a linked service and source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md)
+- [Use the SAP data extraction template](sap-change-data-capture-data-replication-template.md)
+- [Use the SAP data partition template](sap-change-data-capture-data-partitioning-template.md)
+- [Manage your solution](sap-change-data-capture-management.md)
-ADF copy activity w/ SAP ODP connector runs on a self-hosted integration runtime (SHIR) that you install on your on-premises/virtual machine, so it has a line of sight to your SAP source systems/SLT replication server, while ADF data flow activity runs on a serverless Databricks/Spark cluster, Azure IR. SAP ODP connector via ODP can extract various data source ("provider") types, such as:
+## How to use the SAP CDC solution
-- SAP extractors, originally built to extract data from SAP ECC and load it into SAP BW
-- ABAP CDS views, the new data extraction standard for SAP S/4HANA
-- InfoProviders and InfoObjects in SAP BW and BW/4HANA
-- SAP application tables, when using SLT replication server as a proxy
+The SAP CDC solution is a connector that you access through an SAP ODP (preview) linked service, an SAP ODP (preview) source dataset, and the SAP data replication template or the SAP data partitioning template. Choose your template when you set up a new pipeline in Azure Data Factory Studio. To access preview templates, you must [enable the preview experience in Azure Data Factory Studio](how-to-manage-studio-preview-exp.md#how-to-enabledisable-preview-experience).
-These providers run on SAP systems to produce full/incremental data in Operational Delta Queue (ODQ) that is consumed by ADF copy activity w/ SAP ODB connector in ADF pipeline ("subscriber").
+The SAP CDC solution connects to all SAP systems that support ODP, including SAP R/3, SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. The solution works either directly at the application layer or indirectly via an SAP Landscape Transformation Replication Server (SLT) as a proxy. The solution doesn't rely on watermarking to extract SAP data either fully or incrementally. The data the SAP CDC solution extracts includes not only physical tables but also logical objects that are created by using the tables. An example of a table-based object is an SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) view.
+Use the SAP CDC solution with Data Factory features like copy activities and data flow activities, pipeline templates, and tumbling window triggers for a low-latency SAP CDC replication solution in a self-managed pipeline.
-Since ODP completely decouples providers from subscribers, any SAP docs that offer provider configurations are applicable for ADF as a subscriber. For more info on ODP, see [Introduction to operational data provisioning](https://wiki.scn.sap.com/wiki/display/BI/Introduction+to+Operational+Data+Provisioning).
+## The SAP CDC solution architecture
+
+The SAP CDC solution in Azure Data Factory is a connector between SAP and Azure. The SAP side includes the SAP ODP connector that invokes the ODP API over standard Remote Function Call (RFC) modules to extract full and delta raw SAP data.
+
+The Azure side includes the Data Factory copy activity that loads the raw SAP data into a storage destination like Azure Blob Storage or Azure Data Lake Storage Gen2. The data is saved in CSV or Parquet format, essentially archiving or preserving all historical changes.
+
+The Azure side also might include a Data Factory data flow activity that transforms the raw SAP data, merges all changes, and loads the results in a destination like Azure SQL Database or Azure Synapse Analytics, essentially replicating the SAP data. The Data Factory data flow activity also can load the results in Data Lake Storage Gen2 in delta format. You can use the open source Delta Lake Time Travel feature to produce snapshots of SAP data for a specific period.
+
+In Azure Data Factory Studio, the SAP template that you use to auto-generate a Data Factory pipeline connects SAP with Azure. You can run the pipeline frequently by using a Data Factory tumbling window trigger to replicate SAP data in Azure with low latency and without using watermarking.
++
+To get started, create a Data Factory copy activity by using an SAP ODP linked service, an SAP ODP source dataset, and an SAP data replication template or SAP data partitioning template. The copy activity runs on a self-hosted integration runtime that you install on an on-premises computer or on a virtual machine (VM). An on-premises computer has a line of sight to your SAP source systems and to the SLT. The Data Factory data flow activity runs on a serverless Azure Databricks or Apache Spark cluster, or on an Azure integration runtime.
+
+The SAP CDC solution uses ODP to extract various data source types, including:
+
+- SAP extractors, originally built to extract data from ECC and load it into BW
+- ABAP CDS views, the new data extraction standard for S/4HANA
+- InfoProviders and InfoObjects datasets in BW and BW/4HANA
+- SAP application tables, when you use an SLT replication server as a proxy
+
+In this process, the SAP data sources are *providers*. The providers run on SAP systems to produce either full or incremental data in an operational delta queue (ODQ). The Data Factory copy activity is a *subscriber* of the ODQ. The copy activity consumes the ODQ through the SAP CDC solution in the Data Factory pipeline.
++
+Because ODP completely decouples providers from subscribers, any SAP documentation that offers provider configurations is applicable to Data Factory as a subscriber. For more information about ODP, see [Introduction to operational data provisioning](https://wiki.scn.sap.com/wiki/display/BI/Introduction+to+Operational+Data+Provisioning).
## Next steps
-[Prerequisites and configuration of the SAP CDC solution](sap-change-data-capture-prerequisites-configuration.md)
+[Prerequisites and setup for the SAP CDC solution](sap-change-data-capture-prerequisites-configuration.md)
data-factory Sap Change Data Capture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md
Title: SAP change data capture solution (Preview) - management
+ Title: Manage your SAP CDC solution (preview)
-description: This article describes how to manage SAP change data capture (Preview) in Azure Data Factory.
+description: Learn how to manage your SAP change data capture (CDC) solution (preview) in Azure Data Factory.
Last updated 06/01/2022
-# Management of SAP change data capture (CDC) (Preview) in Azure Data Factory
+# Manage your SAP CDC solution (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article describes how to manage SAP change data capture (Preview) in Azure Data Factory.
+After you create a pipeline in Azure Data Factory as part of your SAP change data capture (CDC) solution (preview), it's important to manage the solution.
-## Run SAP a data replication pipeline on a recurring schedule
+## Run an SAP data replication pipeline on a recurring schedule
-To run an SAP data replication pipeline on a recurring schedule with a specified frequency, complete the following steps:
+To run an SAP data replication pipeline on a recurring schedule with a specified frequency:
-1. Create a tumbling window trigger that runs SAP data replication pipeline frequently with the **Max concurrency** property set to _1_, see [Create a trigger that runs a pipeline on a tumbling window](how-to-create-tumbling-window-trigger.md?tabs=data-factory).
+1. Create a tumbling window trigger that runs the SAP data replication pipeline frequently. Set **Max concurrency** to **1**.
-1. After the tumbling window trigger is created, add a self-dependency on it, such that subsequent pipeline runs always waits until previous pipeline runs are successfully completed, see [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md).
+ For more information, see [Create a trigger that runs a pipeline on a tumbling window](how-to-create-tumbling-window-trigger.md?tabs=data-factory).
+
+1. Add a self-dependency on the tumbling window trigger so that a subsequent pipeline run always waits until earlier pipeline runs are successfully completed. (A sketch of the resulting trigger definition follows these steps.)
+
+ For more information, see [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md).
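The two steps above produce a trigger definition similar to the following sketch, shown here as a Python dictionary that mirrors the trigger JSON. Treat it as an illustration and compare it with the JSON that the trigger editor generates; the pipeline name, start time, and window size are placeholders.

```python
import json

# Hedged sketch of a tumbling window trigger with Max concurrency = 1 and a
# self-dependency on the previous window. Placeholder values are marked.
trigger = {
    "name": "SapReplicationHourlyTrigger",
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 1,
            "startTime": "2022-06-01T00:00:00Z",    # placeholder start time
            "maxConcurrency": 1,                     # step 1: one window at a time
            "dependsOn": [
                {                                    # step 2: wait on the previous window
                    "type": "SelfDependencyTumblingWindowTriggerReference",
                    "offset": "-01:00:00",
                    "size": "01:00:00",
                }
            ],
        },
        "pipeline": {
            "pipelineReference": {
                "referenceName": "<your SAP data replication pipeline>",  # placeholder
                "type": "PipelineReference",
            }
        },
    },
}

print(json.dumps(trigger, indent=2))
```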
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-tumbling-window-trigger.png" alt-text="Screenshot of the Edit trigger window with values highlighted to configure the tumbling window trigger."::: ## Recover a failed SAP data replication pipeline run
-To recover a failed SAP data replication pipeline run, complete the following steps:
+If an SAP data replication pipeline run fails, a subsequent run that's scheduled via a tumbling window trigger is suspended while it waits on the dependency.
+
-1. If any SAP data replication pipeline run fails, the subsequent run scheduled by tumbling window trigger will be suspended, waiting on dependency.
+To recover a failed SAP data replication pipeline run:
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-trigger-status.png" alt-text="Screenshot of the trigger status window with an SAP tumbling window trigger in the failed state.":::
+1. Fix the issues that caused the pipeline run failure.
-1. In that case, you can fix the issues causing pipeline run failure, switch the **Extraction mode** property of the copy activity to _Recovery_, and manually run SAP data replication pipeline in this mode.
+1. Switch the **Extraction mode** property of the copy activity to **Recovery**.
-1. If the recovery run is successfully completed, switch back the **Extraction mode** property of the copy activity to _Delta_, and select the **Rerun** button next to the failed run of tumbling window trigger.
+1. Manually run the SAP data replication pipeline.
+
+1. If the recovery run finishes successfully, change the **Extraction mode** property of the copy activity to **Delta**.
+
+1. Next to the failed run of the tumbling window trigger, select **Rerun**.
## Monitor data extractions on SAP systems
-To monitor data extractions on SAP systems, complete the following steps:
+To monitor data extractions on SAP systems:
-1. Using SAP Logon Tool on your SAP source system, run ODQMON transaction code.
+1. In the SAP Logon tool on your SAP source system, run the ODQMON transaction code.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-logon-tool.png" alt-text="Screenshot of the SAP Logon Tool.":::
-1. Enter the value for the **Subscriber name** property of your SAP ODP linked service in the **Subscriber** input field and select _All_ in the **Request Selection** dropdown menu to show all data extractions using that linked service.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-monitor-delta-queues.png" alt-text="Screenshot of the SAP ODQMON tool with all data extractions for a particular subscriber.":::
+1. In **Subscriber**, enter the value for the **Subscriber name** property of your SAP ODP (preview) linked service. In the **Request Selection** dropdown, select **All** to show all data extractions that use the linked service.
-1. You can now see all registered subscriber processes in ODQ representing data extractions from ADF copy activities that use your SAP ODP linked service. On each ODQ subscription, you can drill down to see individual full/delta extractions. On each extraction, you can drill down to see individual data packages that were consumed.
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-monitor-delta-queues.png" alt-text="Screenshot of the SAP ODQMON tool with all data extractions for a specific subscriber.":::
-1. When ADF copy activities that extract SAP data are no longer needed, their ODQ subscriptions should be deleted, so SAP systems can stop tracking their subscription states and remove the unconsumed data packages from ODQ. To do so, select the unneeded ODQ subscriptions and delete them.
+ You can see all registered subscriber processes in the operational delta queue (ODQ). Subscriber processes represent data extractions from Azure Data Factory copy activities that use your SAP ODP linked service. For each ODQ subscription, you can look at details to see all full and delta extractions. For each extraction, you can see individual data packages that were consumed.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-delete-queue-subscriptions.png" alt-text="Screenshot of the SAP ODQMON tool with the delete button highlighted for a particular queue subscription.":::
+1. When Data Factory copy activities that extract SAP data are no longer needed, you should delete their ODQ subscriptions. When you delete ODQ subscriptions, SAP systems can stop tracking their subscription states and remove the unconsumed data packages from the ODQ. To delete an ODQ subscription, select the subscription and select the Delete icon.
-## Troubleshooting delta change
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-delete-queue-subscriptions.png" alt-text="Screenshot of the SAP ODQMON tool with the delete button highlighted for a specific queue subscription.":::
-The Azure Data Factory ODP connector reads delta changes from the ODP framework, which itself provides them in tables called Operational Delta Queues (ODQs).
+## Troubleshoot delta changes
-In situations where the data movement works technically (that is, copy activities complete without errors), but doesn't appear to deliver the data correctly (for example, no data at all, or maybe just a subset of the expected data), you should first investigate if the number of records provided on the SAP side match the number of rows transferred by ADF. If so, the issue isn't related to ADF, but probably comes from incorrect or missing configuration on SAP side.
+The SAP CDC solution in Data Factory reads delta changes from the SAP ODP framework. The deltas are recorded in ODQ tables.
-### Troubleshooting in SAP using ODQMON
+In scenarios in which data movement works (copy activities finish without errors), but data isn't delivered correctly (no data at all, or maybe just a subset of the expected data), you should first investigate whether the number of records provided on the SAP side matches the number of rows transferred by Data Factory. If they match, the issue isn't related to Data Factory, but probably comes from an incorrect or missing configuration on the SAP side.
-To analyze what data the SAP system has provided for your scenario, start transaction ODQMON in your SAP backend system. If you're using the SLT scenario, with a standalone SLT server, start the transaction there.
+### Troubleshoot in SAP by using ODQMON
-To find the Operational Delta Queue(s) corresponding to your copy activities or copy activity runs, use the filter options (blurred out below). In the field "Queue" you can use wildcards to narrow down the search, for example, by table name *EKKO*, etc.
+To analyze what data the SAP system has provided for your scenario, start transaction ODQMON in your SAP back-end system. If you're using SAP Landscape Transformation Replication Server (SLT) with a standalone server, start the transaction there.
-Selecting the check box "Calculate Data Volume" provides details about the number of rows and data volume (in bytes) contained in the ODQs.
+To find the ODQs that correspond to your copy activities or copy activity runs, use the filter options. In **Queue**, you can use wildcards to narrow the search. For example, you can search by the table name **EKKO**.
+Select the **Calculate Data Volume** checkbox to see details about the number of rows and data volume (in bytes) contained in the ODQs.
-Double clicking on the queue will bring you to the subscriptions of this ODQ. Since there can be multiple subscribers to the same ODQ, check for the subscriber name (which you entered in the ADF linked service) and pick the subscription whose timestamp best fits your copy activity run. (Note that for delta subscriptions, the first run of the copy activity will be recorded on SAP side for the subscription).
+To view the ODQ subscriptions, double-click the queue. An ODQ can have multiple subscribers, so check for the subscriber name that you entered in the Data Factory linked service. Choose the subscription that has a timestamp that most closely matches the time your copy activity ran. For delta subscriptions, the first run of the copy activity for the subscription is recorded on the SAP side.
-Drilling down into the subscription, you find a list of "requests", corresponding to copy activity runs in ADF. In the screenshot below, you see the result of four copy activity runs.
+In the subscription, a list of requests corresponds to copy activity runs in Data Factory. In the following figure, you see the result of four copy activity runs:
-Based on the timestamp in the first row, find the line corresponding to the copy activity run you want to analyze. If the number of rows shown in this screen equals the number of rows read by the copy activity, you've verified that ADF has read and transferred the data as provided by the SAP system.
-In this case, we recommend consulting with the team responsible for your SAP system.
+Based on the timestamp in the first row, find the line that corresponds to the copy activity run you want to analyze. If the number of rows shown equals the number of rows read by the copy activity, you've verified that Data Factory has read and transferred the data as provided by the SAP system. In this scenario, we recommend that you consult with the team that's responsible for your SAP system.
## Current limitations
-The following are the current limitations of SAP CDC solution in ADF:
+Here are current limitations of the SAP CDC solution in Data Factory:
-- Resetting and deleting ODQ subscriptions from ADF aren't supported for now.
-- SAP hierarchies aren't supported for now.
+- You can't reset or delete ODQ subscriptions in Data Factory.
+- You can't use SAP hierarchies with the solution.
+## Next steps
+Learn more about [SAP connectors](industry-sap-connectors.md).
data-factory Sap Change Data Capture Prepare Linked Service Source Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md
Title: SAP change data capture solution (Preview) - Prepare linked service and dataset
+ Title: Set up a linked service and dataset for the SAP CDC solution (preview)
-description: This article introduces and describes preparation of the linked service and source dataset for SAP change data capture (Preview) in Azure Data Factory.
+description: Learn how to set up a linked service and source dataset to use with the SAP change data capture (CDC) solution (preview) in Azure Data Factory.
Last updated 06/01/2022
-# Prepare the SAP ODP linked service and source dataset for the SAP CDC solution in Azure Data Factory (Preview)
+# Set up a linked service and source dataset for your SAP CDC solution (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article introduces and describes preparation of the linked service and source dataset for SAP change data capture (Preview) in Azure Data Factory.
+Learn how to set up the linked service and source dataset for your SAP change data capture (CDC) solution (preview) in Azure Data Factory.
-## Prepare the SAP ODP linked service
+## Set up a linked service
-To prepare SAP ODP linked service, complete the following steps:
+To set up an SAP ODP (preview) linked service for your SAP CDC solution:
-1. On ADF Studio, go to the **Linked services** section of **Manage** hub and select the **New** button to create a new linked service.
+1. In Azure Data Factory Studio, go to the Manage hub of your data factory. In the menu under **Connections**, select **Linked services**. Select **New** to create a new linked service.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-linked-service.png" alt-text="Screenshot of the manage hub in Azure Data Factory with the New Linked Service button highlighted.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-linked-service.png" alt-text="Screenshot of the Manage hub in Azure Data Factory Studio, with the New linked service button highlighted.":::
-1. Search for _SAP_ and select _SAP CDC (Preview)_.
+1. In **New linked service**, search for **SAP**. Select **SAP ODP (Preview)**, and then select **Continue**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-selection.png" alt-text="Screenshot of the linked service source selection with SAP CDC (Preview) selected.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-selection.png" alt-text="Screenshot of the linked service source selection, with SAP ODP (Preview) selected.":::
-1. Set SAP ODP linked service properties, many of them are similar to SAP Table linked service properties, see [Linked service properties](connector-sap-table.md?tabs=data-factory#linked-service-properties).
- 1. For the **Connect via integration runtime** property, select your SHIR.
- 1. For the **Server name** property, enter the mapped server name for your SAP system.
- 1. For the **Subscriber name** property, enter a unique name to register and identify this ADF connection as a subscriber that consumes data packages produced in ODQ by your SAP system. For example, you can name it <_your ADF name_>_<_your linked service name_>.
+1. Set the linked service properties. Many of the properties are similar to SAP Table linked service properties. For more information, see [Linked service properties](connector-sap-table.md?tabs=data-factory#linked-service-properties).
- When using extraction mode "Delta", the combination of Subscriber name (maintained in the linked service) and Subscriber process has to be unique for every copy activity reading from the same ODP source object to ensure that the ODP framework can distinguish these copy activities and provide the correct chances.
+ 1. In **Name**, enter a unique name for the linked service.
+ 1. In **Connect via integration runtime**, select your self-hosted integration runtime.
+ 1. In **Server name**, enter the mapped server name for your SAP system.
+ 1. In **Subscriber name**, enter a unique name to register and identify this Data Factory connection as a subscriber that consumes data packages that are produced in the Operational Delta Queue (ODQ) by your SAP system. For example, you might name it `<your data factory name>_<your linked service name>`.
+
+ When you use delta extraction mode in SAP, the combination of subscriber name (maintained in the linked service) and subscriber process must be unique for every copy activity that reads from the same ODP source object. A unique name ensures that the ODP framework can distinguish between copy activities and provide the correct delta.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-linked-service-configuration.png" alt-text="Screenshot of the SAP ODP linked service configuration.":::
-1. Test the connection and create your new SAP ODP linked service.
+1. Select **Test connection**, and then select **Create**.
+
+## Create a copy activity
+
+To create a Data Factory copy activity that uses an SAP ODP (preview) data source, complete the steps in the following sections.
+
+### Set up the source dataset
+
+1. In Azure Data Factory Studio, go to the Author hub of your data factory. In **Factory Resources**, under **Pipelines** > **Pipelines Actions**, select **New pipeline**.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-new-pipeline.png" alt-text="Screenshot that shows creating a new pipeline in the Data Factory Studio Author hub.":::
+
+1. In **Activities**, select the **Move & transform** dropdown. Select the **Copy data** activity and drag it to the canvas of the new pipeline. Select the **Source** tab of the Data Factory copy activity, and then select **New** to create a new source dataset.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-data-source-new.png" alt-text="Screenshot of the Copy data activity Source configuration.":::
+
+1. In **New dataset**, search for **SAP**. Select **SAP ODP (Preview)**, and then select **Continue**.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-selection.png" alt-text="Screenshot of the SAP ODP (Preview) dataset type in the New dataset dialog.":::
+
+1. In **Set properties**, enter a name for the SAP ODP linked service data source. In **Linked service**, select the dropdown and select **New**.
+
+1. Select your SAP ODP linked service for the new source dataset and set the rest of the properties for the linked service:
-## Prepare the SAP ODP source dataset
+ 1. In **Connect via integration runtime**, select your self-hosted integration runtime.
-To prepare an ADF copy activity with an SAP ODP data source, complete the following steps:
+ 1. In **Context**, select the context of the ODP data extraction. Here are some examples:
-1. On ADF Studio, go to the **Pipeline** section of the **Author** hub, select the **…** button to drop down the **Pipeline Actions** menu, and select the **New pipeline** item.
-1. Drag & drop the **Copy data** activity onto the canvas of new pipeline, go to the **Source** tab of ADF copy activity, and select the **New** button to create a new source dataset.
+ - To extract ABAP CDS views from S/4HANA, select **ABAP_CDS**.
+ - To extract InfoProviders or InfoObjects from SAP BW or BW/4HANA, select **BW**.
+ - To extract SAP extractors from SAP ECC, select **SAPI**.
+ - To extract SAP application tables from SAP source systems via SAP LT replication server as a proxy, select **SLT~\<your queue alias\>**.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-configuration.png" alt-text="Screenshot of the Copy data activity's Source configuration.":::
+ If you want to extract SAP application tables, but you don't want to use SAP Landscape Transformation Replication Server (SLT) as a proxy, you can create SAP extractors by using the RSO2 transaction code or Core Data Services (CDS) views with the tables. Then, extract the tables directly from your SAP source systems by using either an **SAPI** or an **ABAP_CDS** context.
-1. Search for _SAP_ and select _SAP CDC (Preview)_.
+ 1. For **Object name**, under the selected data extraction context, select the name of the data source object to extract. If you connect to your SAP source system by using SLT as a proxy, the **Preview data** feature currently isn't supported.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-selection.png" alt-text="Screenshot of the SAP CDC (Preview) dataset type on the New dataset dialog.":::
+ To enter the selections directly, select the **Edit** checkbox.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-configuration.png" alt-text="Screenshot of the SAP ODP (Preview) dataset configuration page.":::
-1. Select your new SAP ODP linked service for the new source dataset and set the rest of its properties.
- 1. For the **Connect via integration runtime** property, select your SHIR.
- 1. For the **Context** property, select the context of data extraction via ODP, such as:
- - _ABAP_CDS_ for extracting ABAP CDS views from S/4HANA
- - _BW_ for extracting InfoProviders or InfoObjects from SAP BW or BW/4HANA
- - _SAPI_ for extracting SAP extractors from SAP ECC
- - _SLT~_<_your queue alias_> for extracting SAP application tables from SAP source systems via SLT replication server as a proxy
+1. Select **OK** to create your new SAP ODP source dataset.
- If you want to extract SAP application tables, but donΓÇÖt want to use SLT replication server as a proxy, you can create SAP extractors via RSO2 transaction code/CDS views on top of those tables and extract them directly from your SAP source systems in _SAPI/ABAP_CDS_ context, respectively.
- 1. For the **Object name** property, select the name of data source object to extract under the selected data extraction context. If you connect to your SAP source system via SLT replication server as a proxy, the **Preview data** feature isn't supported for now.
- 1. Check the **Edit** check boxes, if loading the dropdown menu selections takes too long and you want to type them yourself.
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-source-dataset-configuration.png" alt-text="Screenshot of the SAP CDC (Preview) dataset configuration page.":::
+1. In the Data Factory copy activity, in **Extraction mode**, select one of the following options:
-1. Select the **OK** button to create your new SAP ODP source dataset.
-1. For the **Extraction** mode property of ADF copy activity, select one of the following modes:
- - _Full_ for always extracting the current snapshot of selected data source object w/o registering ADF copy activity as its ΓÇ£deltaΓÇ¥ subscriber that consumes data changes produced in ODQ by your SAP system
- - _Delta_ for initially extracting the current snapshot of selected data source object, registering ADF copy activity as its ΓÇ£deltaΓÇ¥ subscriber, and subsequently extracting new data changes produced in ODQ by your SAP system since the last extraction
- - _Recovery_ for repeating the last extraction that was part of a failed pipeline run
+ - **Full**: Always extracts the current snapshot of the selected data source object. This option doesn't register the Data Factory copy activity as its delta subscriber that consumes data changes produced in the ODQ by your SAP system.
+ - **Delta**: Initially extracts the current snapshot of the selected data source object. This option registers the Data Factory copy activity as its delta subscriber and then extracts new data changes produced in the ODQ by your SAP system since the last extraction.
+ - **Recovery**: Repeats the last extraction that was part of a failed pipeline run.
-1. For the **Subscriber process** property of ADF copy activity, enter a unique name to register and identify this ADF copy activity as a ΓÇ£deltaΓÇ¥ subscriber of the selected data source object, so your SAP system can manage its subscription state to keep track of data changes produced in ODQ and consumed in consecutive extractions, eliminating the need for watermarking them yourself. For example, you can name it <_your pipeline name_>_<_your copy activity name_>.
+1. In **Subscriber process**, enter a unique name to register and identify this Data Factory copy activity as a delta subscriber of the selected data source object. Your SAP system manages its subscription state to keep track of data changes that are produced in the ODQ and consumed in consecutive extractions. You don't need to manually watermark data changes. For example, you might name the subscriber process `<your pipeline name>_<your copy activity name>`.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-configuration.png" alt-text="Screenshot of the SAP CDC (Preview) source configuration in a Copy activity.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-configuration.png" alt-text="Screenshot of the SAP CDC source configuration in a Data Factory copy activity.":::
-1. If you want to extract only data from some columns/rows, you can use the column projection/row selection features:
- 1. For the **Projection** property of ADF copy activity, select the **Refresh** button to load the dropdown menu selections w/ column names of the selected data source object.
+1. If you want to extract data from only some columns or rows, you can use the column projection or row selection features:
- If you have many columns and you want to include only a few in your data extraction, select the check boxes for those columns. If you have many columns and you want to exclude only a few in your data extraction, select the **Select all** check box first and then unselect the check boxes for those columns. If no column is selected, all will be extracted by default.
+ 1. In **Projection**, select **Refresh** to load the dropdown selections with column names of the selected data source object.
- Check the **Edit** check box, if loading the dropdown menu selections takes too long and you want to add/type them yourself.
+ If you want to include only a few columns in your data extraction, select the checkboxes for those columns. If you want to exclude only a few columns from your data extraction, select the **Select all** checkbox first, and then clear the checkboxes for columns you want to exclude. If no column is selected, all columns are extracted.
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-projection-configuration.png" alt-text="Screenshot of the SAP CDC (Preview) source configuration with the Projection, Selection, and Additional columns sections highlighted.":::
+ To enter the selections directly, select the **Edit** checkbox.
- 1. For the **Selection** property of ADF copy activity, select the **New** button to add a new row selection condition containing **Field name/Sign/Option/Low/High** arguments.
- 1. For the **Field name** argument, select the **Refresh** button to load the dropdown menu selections w/ column names of the selected data source object. if loading the dropdown menu selections takes too long, you can type it yourself.
- 1. For the **Sign** argument, select _Inclusive/Exclusive_ to respectively include/exclude only rows that meet the selection condition in your data extraction.
- 1. For the **Option** argument, select _EQ/CP/BT_ to respectively apply the following row selection conditions:
- - ΓÇ£True if the value in **Field name** column is equal to the value of **Low** argumentΓÇ¥
- - ΓÇ¥True if the value in **Field name** column contains a pattern specified in the value of **Low** argumentΓÇ¥
- - ΓÇ¥True if the value in **Field name** column is between the values of **Low** and **High** argumentsΓÇ¥
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-source-projection-configuration.png" alt-text="Screenshot of the SAP CDC source configuration with the Projection, Selection, and Additional columns sections highlighted.":::
- Please consult SAP docs/support notes to ensure that your row selection conditions can be applied to the selected data source object. For example, here are some row selection conditions and their respective arguments:
+ 1. In **Selection**, select **New** to add a new row selection condition that contains arguments.
+ 1. In **Field name**, select **Refresh** to load the dropdown selections with column names of the selected data source object. You also can enter the column names manually.
+ 1. In **Sign**, select **Inclusive** or **Exclusive** to include or exclude rows that meet the selection condition in your data extraction.
+ 1. In **Option**, select **EQ**, **CP**, or **BT** to apply the following row selection conditions:
- |**Row selection condition** |**Field name** |**Sign** |**Option** |**Low** |**High** |
+ - **EQ**: True if the value in the **Field name** column is equal to the value of the **Low** argument.
+ - **CP**: True if the value in the **Field name** column contains a pattern that's specified in the value of the **Low** argument.
+ - **BT**: True if the value in the **Field name** column is between the values of the **Low** and **High** arguments.
+
+ To ensure that your row selection conditions can be applied to the selected data source object, see SAP documentation or support notes for the data source object.
+
+ The following table shows example row selection conditions and their respective arguments; a JSON sketch of these example conditions follows this procedure:
+
+ | Row selection condition | Field name | Sign | Option | Low | High |
|||||||
- |Include only rows where the value in _COUNTRY_ column is _CHINA_ |_COUNTRY_ |_Inclusive_ |_EQ_ |_CHINA_ | |
- |Exclude only rows where the value in _COUNTRY_ column is _CHINA_ |_COUNTRY_ |_Exclusive_ |_EQ_ |_CHINA_ | |
- |Include only rows where the value in _FIRSTNAME_ column contains _JO*_ pattern |_FIRSTNAME_ |_Inclusive_ |_CP_ |_JO*_ | |
- |Include only rows where the value in _CUSTOMERID_ column is between _1_ and _999999_ |_CUSTOMERID_ |_Inclusive_ |_BT_ |_1_ |_999999_ |
-
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-selection-additional-columns.png" alt-text="Screenshot of the SAP CDC (Preview) source configuration for a Copy activity with the Selection and Additional columns sections highlighted.":::
-
- Row selections are especially useful to divide large data sets into multiple partitions, where each partition can be extracted using a single copy activity, so you can perform full extractions using multiple copy activities running in parallel. These copy activities will in turn invoke parallel processes on your SAP system to produce data packages in ODQ that can also be consumed by parallel processes in each copy activity, thus increasing throughput significantly.
-
-1. Go to the **Sink** tab of ADF copy activity and select an existing sink dataset or create a new one for any data store, such as Azure Blob Storage/ADLS Gen2.
-
- To increase throughput, you can enable ADF copy activity to concurrently extract data packages produced in ODQ by your SAP system and enforce all extraction processes to immediately write them into the sink in parallel. For example, if you use ADLS Gen2 as sink, leave the **File name** field in **File path** property of sink dataset empty, so all extracted data packages will be written as separate files.
+ | Include only rows in which the value in the **COUNTRY** column is **CHINA** | **COUNTRY** | **Inclusive** | **EQ** | **CHINA** | |
+ | Exclude only rows in which the value in the **COUNTRY** column is **CHINA** | **COUNTRY** | **Exclusive** | **EQ** | **CHINA** | |
+ | Include only rows in which the value in the **FIRSTNAME** column contains the **JO\*** pattern | **FIRSTNAME** | **Inclusive** | **CP** | **JO\*** | |
+ | Include only rows in which the value in the **CUSTOMERID** column is between **1** and **999999** | **CUSTOMERID** | **Inclusive** | **BT** | **1** | **999999** |
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-selection-additional-columns.png" alt-text="Screenshot of the SAP ODP source configuration for a copy activity with the Selection and Additional columns sections highlighted.":::
+
+ Row selections are especially useful to divide large data sets into multiple partitions. You can extract each partition by using a single copy activity. You can perform full extractions by using multiple copy activities running in parallel. These copy activities in turn invoke parallel processes on your SAP system to produce separate data packages in the ODQ. Parallel processes in each copy activity can consume packages and increase throughput significantly.
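To make the **Selection** settings more concrete, here's a rough sketch of how the example conditions from the preceding table might appear if expressed as JSON on the copy activity source. The property names and structure are inferred from the UI labels above rather than copied from a generated pipeline, so treat this as an illustration only and verify the exact schema in the **Code** view of your own pipeline.

```json
"selection": [
    { "fieldName": "COUNTRY",    "sign": "Inclusive", "option": "EQ", "low": "CHINA" },
    { "fieldName": "FIRSTNAME",  "sign": "Inclusive", "option": "CP", "low": "JO*" },
    { "fieldName": "CUSTOMERID", "sign": "Inclusive", "option": "BT", "low": "1", "high": "999999" }
]
```

Each object corresponds to one row of the example table. Defining different condition sets in different copy activities is one way to implement the partitioning approach described in the preceding paragraph.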
+
+### Set up the sink dataset
+
+- In the Data Factory copy activity, select the **Sink** tab. Select an existing sink dataset or create a new one for a data store like Azure Blob Storage or Azure Data Lake Storage Gen2.
+
+ To increase throughput, you can enable the Data Factory copy activity to concurrently extract data packages that your SAP system produces in the ODQ. You can enforce all extraction processes to immediately write them to the sink in parallel. For example, if you use Data Lake Storage Gen2 as a sink, in **File path** for the sink dataset, leave **File name** empty. All extracted data packages will be written as separate files.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-staging-dataset.png" alt-text="Screenshot of the staging dataset configuration for the solution.":::
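For reference, a minimal sketch of a Data Lake Storage Gen2 sink dataset that leaves the file name empty might look like the following example. The dataset name, linked service name, file system, and folder path are placeholders for this sketch; substitute values for your environment.

```json
{
    "name": "SapRawDataSink",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "AzureDataLakeStorageGen2LinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "fileSystem": "sap-raw-data",
                "folderPath": "ecc/extracted"
            }
        }
    }
}
```

Because no file name is specified in the location, each extracted data package is written as its own file, as described above.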
-1. Go to the **Settings** tab of ADF copy activity and increase throughput by setting the **Degree of copy parallelism** property to concurrently extract data packages produced in ODQ by your SAP system.
+### Configure copy activity settings
+
+1. To increase throughput, in the Data Factory copy activity, select the **Settings** tab. Set **Degree of copy parallelism** to concurrently extract data packages that your SAP system produces in the ODQ.
- If you use Azure Blob Storage/ADLS Gen2 as sink, the maximum number of effective parallel extractions is four/five per SHIR machine, but you can install SHIR as a cluster of up to four machines, see [High availability and scalability](create-self-hosted-integration-runtime.md?tabs=data-factory#high-availability-and-scalability).
+ If you use Azure Blob Storage or Data Lake Storage Gen2 as the sink, the maximum number of effective parallel extractions you can set is four or five per self-hosted integration runtime machine. You can install a self-hosted integration runtime as a cluster of up to four machines. For more information, see [High availability and scalability](create-self-hosted-integration-runtime.md?tabs=data-factory#high-availability-and-scalability).
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-settings-parallelism.png" alt-text="Screenshot of a Copy activity with the Degree of parallelism setting highlighted.":::
-1. Adjust the maximum size of data packages produced in ODQ to fine-tune parallel extractions. The default size is 50 MB, so 3 GB of SAP table/object will be extracted into 60 files of raw SAP data in ADLS Gen2. Lowering it to 15 MB could increase throughput, but will produce more (200) files. To do so, select the **Code** button of ADF pipeline to edit the **maxPackageSize** property of ADF copy activity.
+1. To fine-tune parallel extractions, adjust the maximum size of the data packages that are produced in the ODQ. The default size is 50 MB, so 3 GB of an SAP table or object is extracted into 60 files of raw SAP data in Data Lake Storage Gen2. Lowering the maximum size to 15 MB might increase throughput, but it produces more files (200 in this example). To lower the maximum size, in the pipeline navigation menu, select **Code**. (A sketch of the relevant copy activity source JSON appears after these steps.)
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-configuration.png" alt-text="Screenshot of a pipeline with the Code configuration button highlighted.":::
+ Then, in the JSON file, edit the `maxPackageSize` property to lower the maximum size.
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-1.png" alt-text="Screenshot of the code configuration for a pipeline with the maxPackageSize setting highlighted.":::
-1. If you set the **Extraction mode** property of ADF copy activity to _Delta_, your initial/subsequent extractions will respectively consume full data/new data changes produced in ODQ by your SAP system since the last extraction.
+1. If you set **Extraction mode** in the Data Factory copy activity to **Delta**, your initial or subsequent extractions consume full data or new data changes produced in the ODQ by your SAP system since the last extraction.
- For each extraction, you can skip the actual data production/consumption/transfer and simply initialize/advance your ΓÇ£deltaΓÇ¥ subscription state. This is especially useful when you want to perform full and delta extractions using separate copy activities w/ different partitions. To do so, select the **Code** button of ADF pipeline to add the **deltaExtensionNoData** property of ADF copy activity and set it to _true_. Remove that property when you want to resume extracting data.
+ For each extraction, you can skip the actual data production, consumption, or transfer, and instead directly initialize or advance your delta subscription state. This option is especially useful if you want to perform full and delta extractions by using separate copy activities with different partitions. To set this up, in the pipeline navigation menu, select **Code**. In the JSON file, add the `deltaExtensionNoData` property and set it to `true`. To resume extracting data, remove that property or set it to `false`. (The sketch after these steps shows where this property might sit.)
- :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-2.png" alt-text="Screenshot of the code configuration for a pipeline with the deltaExtensionNoData property highlighted.":::
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-copy-code-2.png" alt-text="Screenshot of the code configuration for a pipeline with the deltaExtensionNoData property highlighted.":::
-1. Select the **Save all** and **Debug** buttons to run your new pipeline containing ADF copy activity w/ SAP ODP source dataset.
+1. Select **Save all**, and then select **Debug** to run your new pipeline that contains the Data Factory copy activity with the SAP ODP source dataset.
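For orientation, here's a minimal sketch of how the copy activity source might look in the pipeline JSON after the preceding steps. Only the `maxPackageSize` and `deltaExtensionNoData` property names come from this article; the source type name, the other property names (which mirror the UI labels), the exact placement, and the placeholder values are assumptions, so use the JSON that Data Factory generates for your pipeline as the authoritative reference.

```json
"source": {
    "type": "SapOdpSource",
    "extractionMode": "Delta",
    "subscriberProcess": "<your pipeline name>_<your copy activity name>",
    "maxPackageSize": "<maximum package size, lowered from the 50-MB default>",
    "deltaExtensionNoData": true
}
```

Remove `deltaExtensionNoData` (or set it to `false`) when you want the pipeline to extract data again.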
-To illustrate the results of full and delta extractions from consecutively running your new pipeline, letΓÇÖs use the following simple/small custom table in SAP ECC as an example of data source object to extract.
+To illustrate the results of full and delta extractions from consecutively running your new pipeline, here's an example of a simple table in SAP ECC:
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-simple-custom-table.png" alt-text="Screenshot of a simple custom table in SAP.":::
-HereΓÇÖs the raw SAP data from initial/full extraction as CSV file in ADLS Gen2. It contains system columns/fields (ODQ_CHANGEMODE/ODQ_ENTITYCNTR/_SEQUENCENUMBER) that can be used by ADF data flow activity to merge data changes when replicating SAP data. The ODQ_CHANGEMODE column marks the type of change for each row/record: (C)reated, (U)pdated, or (D)eleted. The initial run of your pipeline w/ _Delta_ extraction mode always induces a full load that marks all rows as (C)reated.
+Here's the raw SAP data from an initial or full extraction in CSV format in Data Lake Storage Gen2:
++
+The file contains the system columns **ODQ_CHANGEMODE**, **ODQ_ENTITYCNTR**, and **SEQUENCENUMBER**. The Data Factory data flow activity uses these columns to merge data changes when it replicates SAP data.
+The **ODQ_CHANGEMODE** column marks the type of change for each row or record: **C** (created), **U** (updated), or **D** (deleted). The initial run of your pipeline in *delta* extraction mode always induces a full load that marks all rows as **C** (created).
-After creating, updating, and deleting three rows of the custom table in SAP ECC, hereΓÇÖs the raw SAP data from subsequent/delta extraction as CSV file in ADLS Gen2.
+The following example shows the delta extraction in CSV format in Data Lake Storage Gen2 after three rows of the custom table in SAP ECC are created, updated, and deleted:
## Next steps
-[Debug ADF copy activity issues by sending SHIR logs](sap-change-data-capture-debug-shir-logs.md)
+[Debug copy activity by sending self-hosted integration runtime logs](sap-change-data-capture-debug-shir-logs.md)
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
Title: SAP change data capture solution (Preview) - prerequisites and configuration
+ Title: Prerequisites and setup for the SAP CDC solution (preview)
-description: This topic introduces and describes the prerequisites and configuration of SAP change data capture (Preview) in Azure Data Factory.
+description: Learn about the prerequisites and setup for the SAP change data capture (CDC) solution (preview) in Azure Data Factory.
Last updated 06/01/2022
-# SAP change data capture (CDC) solution prerequisites and configuration in Azure Data Factory (Preview)
+# Prerequisites and setup for the SAP CDC solution (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This topic introduces and describes the prerequisites and configuration of SAP change data capture (Preview) in Azure Data Factory.
+Learn about the prerequisites for the SAP change data capture (CDC) solution (preview) in Azure Data Factory and how to set up the solution in Azure Data Factory Studio.
## Prerequisites
-To preview our new SAP CDC solution in ADF you can/should:
+To preview the SAP CDC solution in Azure Data Factory, make sure that you can complete these prerequisites:
-- Configure SAP systems for Operational Data Provisioning (ODP) framework-- Be already familiar w/ ADF concepts, such as integration runtimes, linked services, datasets, activities, data flows, pipelines, templates, and triggers-- Prepare SHIR w/ SAP ODP connector-- Prepare SAP ODP linked service-- Prepare ADF copy activity w/ SAP ODP source dataset-- Debug ADF copy activity issues by sending SHIR logs-- Auto-generate ADF pipeline from SAP data partitioning template-- Auto-generate ADF pipeline from SAP data replication template-- Run SAP data replication pipeline frequently-- Recover a failed SAP data replication pipeline run-- Monitor data extractions on SAP systems
+- In Azure Data Factory Studio, [enable the preview experience](how-to-manage-studio-preview-exp.md#how-to-enabledisable-preview-experience).
+- Set up SAP systems to use the [SAP Operational Data Provisioning (ODP) framework](https://help.sap.com/docs/SAP_LANDSCAPE_TRANSFORMATION_REPLICATION_SERVER/007c373fcacb4003b990c6fac29a26e4/b6e26f56fbdec259e10000000a441470.html?q=SAP%20Operational%20Data%20Provisioning%20%28ODP%29%20framework).
+- Be familiar with Data Factory concepts like integration runtimes, linked services, datasets, activities, data flows, pipelines, templates, and triggers.
+- Set up a self-hosted integration runtime to use for the connector.
+- Set up an SAP ODP (preview) linked service.
+- Set up the Data Factory copy activity with an SAP ODP (preview) source dataset.
+- Debug Data Factory copy activity issues by sending self-hosted integration runtime logs to Microsoft.
+- Auto-generate a Data Factory pipeline by using the SAP data partitioning template.
+- Auto-generate a Data Factory pipeline by using the SAP data replication template.
+- Be able to run an SAP data replication pipeline frequently.
+- Be able to recover a failed SAP data replication pipeline run.
+- Be familiar with monitoring data extractions on SAP systems.
-## Configure SAP systems for ODP framework
+## Set up SAP systems to use the SAP ODP framework
-To configure your SAP systems for ODP, follow these guidelines:
+To set up your SAP systems to use the SAP ODP framework, follow the guidelines that are described in the following sections.
### SAP system requirements
-ODP comes by default in software releases of most SAP systems (ECC, S/4HANA, BW, and BW/4HANA), except in very early ones. To ensure that your SAP systems come w/ ODP, please refer to the following SAP docs/support notes ΓÇô Even though they mostly mention SAP BW/DS as subscribers/consumers for data extractions via ODP, they also apply to ADF as a subscriber/consumer:
-- To support ODP, run your SAP systems on at least NetWeaver 7.0 SPS24 release, see [Transferring Data from SAP Source Systems via ODP (Extractors)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/327833022dcf42159a5bec552663dc51.html).-- To support ABAP CDS full/delta extractions via ODP, run your SAP systems on at least NetWeaver 7.4 SPS08/7.5 SPS05 release, respectively, see [Transferring Data from SAP Systems via ODP (ABAP CDS Views)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/af11a5cb6d2e4d4f90d344f58fa0fb1d.html). -- [1521883 - To use ODP API 1.0](https://launchpad.support.sap.com/#/notes/1521883).-- [1931427 - To use ODP API 2.0 that supports SAP hierarchies](https://launchpad.support.sap.com/#/notes/1931427).-- [2481315 - To use ODP for data extractions from SAP source systems into BW or BW/4HANA](https://launchpad.support.sap.com/#/notes/2481315).
+The ODP framework is available by default in most recent software releases of most SAP systems, including SAP ECC, SAP S/4HANA, SAP BW, and SAP BW/4HANA. To ensure that your SAP systems have ODP, see the following SAP documentation or support notes. Even though the guidance primarily refers to SAP BW and SAP DS as subscribers or consumers in data extraction via ODP, the guidance also applies to Data Factory as a subscriber or consumer.
-### SAP user configurations
+- To support ODP, run your SAP systems on SAP NetWeaver 7.0 SPS 24 or later. For more information, see [Transferring Data from SAP Source Systems via ODP (Extractors)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/327833022dcf42159a5bec552663dc51.html).
+- To support SAP Advanced Business Application Programming (ABAP) Core Data Services (CDS) full extractions via ODP, run your SAP systems on NetWeaver 7.4 SPS 08 or later. To support SAP ABAP CDS delta extractions, run your SAP systems on NetWeaver 7.5 SPS 05 or later. For more information, see [Transferring Data from SAP Systems via ODP (ABAP CDS Views)](https://help.sap.com/docs/SAP_BW4HANA/107a6e8a38b74ede94c833ca3b7b6f51/af11a5cb6d2e4d4f90d344f58fa0fb1d.html).
+- [1521883 - To use ODP API 1.0](https://launchpad.support.sap.com/#/notes/1521883)
+- [1931427 - To use ODP API 2.0 that supports SAP hierarchies](https://launchpad.support.sap.com/#/notes/1931427)
+- [2481315 - To use ODP for data extractions from SAP source systems into BW or BW/4HANA](https://launchpad.support.sap.com/#/notes/2481315)
-Data extractions via ODP require a properly configured user on SAP systems, which needs to be authorized for ODP API invocations over RFC modules. This user configuration is exactly the same as that required for data extractions via ODP from SAP source systems into BW or BW/4HANA:
-- [2855052 - To authorize ODP API usage](https://launchpad.support.sap.com/#/notes/2855052).-- [460089 - To authorize ODP RFC invocations](https://launchpad.support.sap.com/#/notes/460089).
+### Set up the SAP user
-### SAP data source configurations
+Data extractions via ODP require a properly configured user on SAP systems. The user must be authorized for ODP API invocations over Remote Function Call (RFC) modules. The user configuration is the same configuration that's required for data extractions via ODP from SAP source systems into BW or BW/4HANA. For more information, see these SAP support notes:
-ODP offers various data extraction ΓÇ£contextsΓÇ¥ or ΓÇ£source object typesΓÇ¥. While most data source objects are ready to extract, some need additional configurations. In SAPI context, the objects to extract are called DataSources/extractors. In order to extract DataSources, complete the following steps:
-- They must be activated on SAP source systems ΓÇô This applies only to those delivered by SAP/their partners, since those created by customers are automatically activated. If theyΓÇÖve been/are being extracted by SAP BW or BW/4HANA, theyΓÇÖre already activated. For more info on DataSources and their activations, see [Installing BW Content DataSources](https://help.sap.com/saphelp_nw73/helpdata/en/4a/1be8b7aece044fe10000000a421937/frameset.htm).-- They must be released for extractions via ODP ΓÇô This applies only to those created by customers, since those delivered by SAP/their partners are automatically released.
- - [1560241 - To release DataSources for ODP API](https://launchpad.support.sap.com/#/notes/1560241) ΓÇô This should be combined w/ running the following programs:
- - RODPS_OS_EXPOSE to release DataSources for external use.
- - BS_ANLY_DS_RELEASE_ODP to release BW extractors for ODP API.
- - [2232584 - To release SAP extractors for ODP API](https://launchpad.support.sap.com/#/notes/2232584) ΓÇô This contains a list of all SAP-delivered DataSources (7400+) that have been released.
+- [2855052 - To authorize ODP API usage](https://launchpad.support.sap.com/#/notes/2855052)
+- [460089 - To authorize ODP RFC invocations](https://launchpad.support.sap.com/#/notes/460089)
-### SLT configurations
+### Set up SAP data sources
-SLT is a database trigger-enabled CDC solution that can replicate SAP application tables and simple views in near real time from SAP source systems to various targets, including ODQ, such that it can be used as a proxy in data extractions via ODP. It can be installed on SAP source systems as Data Migration Server (DMIS) add-on or on a standalone replication server. In order to use SLT replication server as a proxy, complete the following steps:
-- Install at least NetWeaver 7.4 SPS04 release and DMIS 2011 SP05 add-on on your replication server, see [Transferring Data from SLT Using Operational Data Provisioning](https://help.sap.com/docs/SAP_NETWEAVER_750/ccc9cdbdc6cd4eceaf1e5485b1bf8f4b/6ca2eb9870c049159de25831d3269f3f.html).-- Run LTRC transaction code on your replication server to configure SLT.
- - In the Specify Source System section, specify the RFC destination representing your SAP source system.
- - In the Specify Target System section:
- - Select the RFC Connection radio button.
- - Select Operational Data Provisioning (ODP) in the Scenario for RFC Communication dropdown menu.
- - For the Queue Alias property, enter your queue alias that can be used to select the context of your data extractions via ODP in ADF as SLT~<_your queue alias_>.
+ODP offers various data extraction contexts or *source object types*. Although most data source objects are ready to extract, some require more configuration. In an SAPI context, the objects to extract are called DataSources or *extractors*. To extract DataSources, be sure to meet the following requirements:
+- Ensure that DataSources are activated on your SAP source systems. This requirement applies only to DataSources that are delivered by SAP or its partners. DataSources that are created by customers are automatically activated. If DataSources have been or are being extracted by SAP BW or BW/4HANA, the DataSources have already been activated. For more information about DataSources and their activations, see [Installing BW Content DataSources](https://help.sap.com/saphelp_nw73/helpdata/en/4a/1be8b7aece044fe10000000a421937/frameset.htm).
-For more info on SLT configurations, see [Replicating Data to SAP Business Warehouse](https://help.sap.com/docs/SAP_LANDSCAPE_TRANSFORMATION_REPLICATION_SERVER/969cf5258b964a5ba56380da648ac84e/737e69568fb4c359e10000000a441470.html).
+- Make sure that DataSources are released for extractions via ODP. This requirement applies only to DataSources that customers create. DataSources that are delivered by SAP or its partners are automatically released. For more information, see the following SAP support notes:
-### Known issues
+ - [1560241 - To release DataSources for ODP API](https://launchpad.support.sap.com/#/notes/1560241)
+
+ Combine this task with running the following programs:
+
+ - RODPS_OS_EXPOSE to release DataSources for external use
+
+ - BS_ANLY_DS_RELEASE_ODP to release BW extractors for the ODP API
+
+ - [2232584 - To release SAP extractors for ODP API](https://launchpad.support.sap.com/#/notes/2232584) for a list of all SAP-delivered DataSources (more than 7,400) that have been released
+
+### Set up the SAP replication server
+
+SAP Landscape Transformation Replication Server (SLT) is a database trigger-enabled CDC solution that can replicate SAP application tables and simple views in near real time. SLT replicates from SAP source systems to various targets, including the operational delta queue (ODQ). You can use SLT as a proxy in data extractions via ODP. You can install SLT on an SAP source system as an SAP Data Migration Server (DMIS) add-on or use it on a standalone replication server. To use SLT as a proxy, complete the following steps:
+
+1. Install NetWeaver 7.4 SPS 04 or later and the DMIS 2011 SP 05 add-on on your replication server. For more information, see [Transferring Data from SLT Using Operational Data Provisioning](https://help.sap.com/docs/SAP_NETWEAVER_750/ccc9cdbdc6cd4eceaf1e5485b1bf8f4b/6ca2eb9870c049159de25831d3269f3f.html).
+
+1. Run the SAP Landscape Transformation Replication Server Cockpit (LTRC) transaction code on your replication server to configure SLT:
+
+ 1. Under **Specify Source System**, enter the RFC destination that represents your SAP source system.
-Here are SAP support notes to resolve known issues on SAP systems:
-- [1660374 - To extend timeout when fetching large data sets via ODP](https://launchpad.support.sap.com/#/notes/1660374).-- [2321589 - To resolve non-existing Business Add-In (BAdI) for RSODP_ODATA subscriber type](https://launchpad.support.sap.com/#/notes/2321589).-- [2636663 - To resolve inconsistent database trigger status in SLT when extracting and replicating the same SAP application table](https://launchpad.support.sap.com/#/notes/2636663).-- [3038236 - To resolve CDS view extractions that fail to populate ODQ](https://launchpad.support.sap.com/#/notes/3038236).-- [3076927 - To remove unsupported callbacks when extracting from SAP BW or BW/4HANA](https://launchpad.support.sap.com/#/notes/3076927).
+ 1. Under **Specify Target System**, complete these steps:
+
+ 1. Select **RFC Connection**.
+
+ 1. In **Scenario for RFC Communication**, select **Operational Data Provisioning (ODP)**.
+
+ 1. In **Queue Alias**, enter the queue alias that you'll use to select the context of your data extractions via ODP in Data Factory. The context appears in the format `SLT~<your queue alias>`.
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-slt-configurations.png" alt-text="Screenshot of the SAP SLT configuration dialog.":::
+
+For more information about SLT configurations, see [Replicating Data to SAP Business Warehouse](https://help.sap.com/docs/SAP_LANDSCAPE_TRANSFORMATION_REPLICATION_SERVER/969cf5258b964a5ba56380da648ac84e/737e69568fb4c359e10000000a441470.html).
+
+### Validate your setup
+
+To validate your SAP system configurations for ODP, you can run the RODPS_REPL_TEST program to test extraction, including SAPI extractors, CDS views, and BW objects. For more information, see [Replication test with RODPS_REPL_TEST](https://wiki.scn.sap.com/wiki/display/BI/Replication+test+with+RODPS_REPL_TEST).
+
+### Known issues
-### Validation
+The following SAP support notes resolve known issues on SAP systems:
-To validate your SAP system configurations for ODP, you can run RODPS_REPL_TEST program to test the extraction of your SAPI extractors, CDS views, BW objects, etc., see [Replication test with RODPS_REPL_TEST](https://wiki.scn.sap.com/wiki/display/BI/Replication+test+with+RODPS_REPL_TEST).
+- [1660374 - To extend timeout when fetching large data sets via ODP](https://launchpad.support.sap.com/#/notes/1660374)
+- [2321589 - To resolve non-existing Business Add-In (BAdI) for RSODP_ODATA subscriber type](https://launchpad.support.sap.com/#/notes/2321589)
+- [2636663 - To resolve inconsistent database trigger status in SLT when extracting and replicating the same SAP application table](https://launchpad.support.sap.com/#/notes/2636663)
+- [3038236 - To resolve CDS view extractions that fail to populate ODQ](https://launchpad.support.sap.com/#/notes/3038236)
+- [3076927 - To remove unsupported callbacks when extracting from SAP BW or BW/4HANA](https://launchpad.support.sap.com/#/notes/3076927)
## Next steps
-[Prepare the SHIR with the SAP ODP connector](sap-change-data-capture-shir-preparation.md).
+[Set up a self-hosted integration runtime for your SAP CDC solution](sap-change-data-capture-shir-preparation.md)
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
Title: SAP change data capture solution (Preview) - SHIR preparation
+ Title: Set up a self-hosted integration runtime for the SAP CDC solution (preview)
-description: This article introduces and describes preparation of the self-hosted integration runtime (SHIR) for SAP change data capture (Preview) in Azure Data Factory.
+description: Learn how to create and set up a self-hosted integration runtime for your SAP change data capture (CDC) solution (preview) in Azure Data Factory.
Last updated 06/01/2022
-# Self-hosted integration runtime (SHIR) preparation for the SAP change data capture (CDC) solution in Azure Data Factory (Preview)
+# Set up a self-hosted integration runtime for the SAP CDC solution (preview)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article introduces and describes preparation of the self-hosted integration runtime (SHIR) for SAP change data capture (Preview) in Azure Data Factory.
+Learn how to create and set up a self-hosted integration runtime for the SAP change data capture (CDC) solution (preview) in Azure Data Factory.
-To prepare SHIR w/ SAP ODP connector, complete the following steps:
+To prepare a self-hosted integration runtime to use with the SAP ODP (preview) linked service and the SAP data extraction template or the SAP data partition template, complete the steps that are described in the following sections.
-## Create and configure SHIR
+## Create and set up a self-hosted integration runtime
-On ADF Studio, create and configure SHIR, see [Create and configure a self-hosted integration runtime](create-self-hosted-integration-runtime.md?tabs=data-factory). You can download our latest private SHIR version w/ improved performance and detailed error messages from [SHIR installation download](https://www.microsoft.com/download/details.aspx?id=39717) and install it on your on-premises/virtual machine.
+In Azure Data Factory Studio, [create and configure a self-hosted integration runtime](create-self-hosted-integration-runtime.md?tabs=data-factory). You can download the latest version of the private [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). The download version has improved performance and detailed error messages. Install the runtime on your on-premises computer or on a virtual machine (VM).
-The more CPU cores you have on your SHIR machine, the higher your data extraction throughput. For example, our internal test achieved +12 MB/s throughput from running parallel extractions on SHIR machine w/ 16 CPU cores.
+The more CPU cores you have on the computer running the self-hosted integration runtime, the higher your data extraction throughput is. For example, an internal test achieved throughput of more than 12 MB/s when running parallel extractions on a self-hosted integration runtime computer that has 16 CPU cores.
-## Download and install the latest 64-bit SAP .NET Connector (SAP NCo 3.0)
+## Download and install the SAP .NET connector
-Download the latest [64-bit SAP .NET Connector (SAP NCo 3.0)](https://support.sap.com/en/product/connectors/msnet.html) and install it on your SHIR machine. During installation, select the **Install Assemblies to GAC** option in the **Optional setup steps** window.
+Download the latest [64-bit SAP .NET Connector (SAP NCo 3.0)](https://support.sap.com/en/product/connectors/msnet.html) and install it on the computer running the self-hosted integration runtime. During installation, in the **Optional setup steps** dialog, select **Install assemblies to GAC**, and then select **Next**.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-net-connector-installation.png" alt-text="Screenshot of the SAP .NET Connector 3.0 installation dialog.":::
-
-## Add required network security rule on your SAP systems
-Add a network security rule on your SAP systems, so SHIR machine can connect to them. If your SAP system is on Azure virtual machine (VM), add the rule by setting the **Source IP addresses/CIDR ranges** property to your SHIR machine IP address and the **Destination port ranges** property to 3200,3300. For example:
+## Add a network security rule
-
-## Run PowerShell cmdlet allowing your SHIR to connect to your SAP systems
+Add a network security rule on your SAP systems so that your self-hosted integration runtime computer can connect to them. If your SAP system is on an Azure VM, to add the rule:
-On your SHIR machine, run the following PowerShell cmdlet to ensure that it can connect to your SAP systems: Test-NetConnection _&lt;SAP system IP address&gt;_ -port 3300
+1. Set **Source IP addresses/CIDR ranges** to your self-hosted integration runtime machine IP address.
+
+1. Set **Destination port ranges** to **3200,3300**. For example:
+
+ :::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-add-network-security-rule.png" alt-text="Screenshot of the Azure portal networking configuration to add network security rules for your runtime to connect to your SAP systems.":::
+
+## Test connectivity
+
+On the computer running your self-hosted integration runtime, run the following PowerShell cmdlet to ensure that it can connect to your SAP systems:
+
+```powershell
+Test-NetConnection <SAP system IP address> -port 3300
+```
:::image type="content" source="medilet to test the connection to your SAP systems.":::
-## Edit hosts file on SHIR machine to add SAP IP addresses to server names
+## Edit hosts files
+
+Edit the hosts file on the computer running your self-hosted integration runtime to add your SAP IP addresses to your server names.
-On your SHIR machine, edit _C:\Windows\System32\drivers\etc\hosts_ to add mappings of SAP system IP addresses to server names. For example:
+On the computer running your self-hosted integration runtime, edit *C:\Windows\System32\drivers\etc\hosts* to add mappings of your SAP system IP addresses to your server names. For example:
```ini # SAP ECC
On your SHIR machine, edit _C:\Windows\System32\drivers\etc\hosts_ to add mappin
## Next steps
-[Prepare the SAP ODP linked service and source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md).
+[Set up an SAP ODP linked service and source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md)
databox-online Azure Stack Edge Add Hardware Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-add-hardware-terms.md
Title: Azure Stack Edge hardware additional terms | Microsoft Docs
+ Title: Azure Stack Edge Hardware Additional Terms | Microsoft Docs
description: Describes additional terms for Azure Stack Edge hardware. Previously updated : 08/10/2022 Last updated : 09/16/2022
-# Azure Stack Edge hardware additional terms
+# Azure Stack Edge Hardware Additional Terms
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)] This article documents additional terms for Azure Stack Edge hardware.
-## Availability of the Azure Stack Edge device
+## Availability of the Azure Stack Edge Device
-Microsoft isn't obligated to continue to offer the Azure Stack Edge device or any other hardware product in connection with the Service. The Azure Stack Edge device may not be offered in all regions or jurisdictions, and even where it is offered, it may be subject to availability. Microsoft reserves the right to refuse to offer the Azure Stack Edge device to anyone in its sole discretion and judgment.
+Microsoft is not obligated to continue to offer the Azure Stack Edge Device or any other hardware product in connection with the Service. The Azure Stack Edge Device may not be offered in all regions or jurisdictions, and even where it is offered, it may be subject to availability. Microsoft reserves the right to refuse to offer the Azure Stack Edge Device to anyone in its sole discretion and judgment.
-## Use of the Azure Stack Edge device
-As part of the Service, Microsoft allows Customer to use the Azure Stack Edge device for as long as Customer has an active subscription to the Service. If Customer no longer has an active subscription and fails to return the Azure Stack Edge device, Microsoft may deem the Azure Stack Edge device as lost as described in the ΓÇ£Title and risk of loss; shipment and return responsibilitiesΓÇ¥ section.
+## Use of the Azure Stack Edge Device
+As part of the Service, Microsoft allows Customer to use the Azure Stack Edge Device for as long as Customer has an active subscription to the Service. If Customer no longer has an active subscription and fails to return the Azure Stack Edge Device, Microsoft may deem the Azure Stack Edge Device as lost as described in the "Title and Risk of Loss; Shipment and Return Responsibilities" Section.
-## Title and risk of loss; shipment and return responsibilities
+## Title and Risk of Loss; Shipment and Return Responsibilities
-### Title and risk of loss
+### Title and Risk of Loss
-All right, title and interest in each Azure Stack Edge device is and shall remain the property of Microsoft, and except as described in these Additional Terms, no rights are granted to any Azure Stack Edge device (including under any patent, copyright, trade secret, trademark or other proprietary rights). Customer will compensate Microsoft for any loss, damage or destruction to or of any Azure Stack Edge device once it has been delivered by the carrier to CustomerΓÇÖs designated address until the Microsoft-designated carrier accepts the Azure Stack Edge device for return delivery, including while it is at any of CustomerΓÇÖs locations (other than expected wear and tear that don't compromise the structure or functionality) or such circumstances as described in the ΓÇ£Responsibilities if a government customer moves an Azure Stack Edge device between customerΓÇÖs locationsΓÇ¥ section. Customer is responsible for inspecting the Azure Stack Edge device upon receipt from the carrier and for promptly reporting any damages to Microsoft Support at [adbeops@microsoft.com](mailto:adbeops@microsoft.com).
+All right, title and interest in each Azure Stack Edge Device is and shall remain the property of Microsoft, and except as described in these Additional Terms, no rights are granted to any Azure Stack Edge Device (including under any patent, copyright, trade secret, trademark or other proprietary rights). Customer will compensate Microsoft for any loss, damage or destruction to or of any Azure Stack Edge Device once it has been delivered by the carrier to Customer's designated address until the Microsoft-designated carrier accepts the Azure Stack Edge Device for return delivery, including while it is at any of Customer's locations (other than expected wear and tear that do not compromise the structure or functionality) or such circumstances as described in the "Responsibilities if a Government Customer Moves an Azure Stack Edge Device between Customer's Locations" Section. Customer is responsible for inspecting the Azure Stack Edge Device upon receipt from the carrier and for promptly reporting any damages to Microsoft Support at [adbeops@microsoft.com](mailto:adbeops@microsoft.com).
-If Customer prefers to arrange CustomerΓÇÖs own pick-up and/or return of the Azure Stack Edge device pursuant to the ΓÇ£Shipment and return of Azure Stack Edge deviceΓÇ¥ section below, Customer is responsible for the entire risk of loss of, or any damage to the Azure Stack Edge device until it has been returned to and accepted by Microsoft.
-Microsoft may charge Customer for a lost device fee for the Azure Stack Edge device (or equivalent) as described on the pricing pages for the specific Azure Stack Edge device models under the **FAQ** section at https://azure.microsoft.com/pricing/details/azure-stack/edge/ for the following reasons: (i) the Azure Stack Edge device is lost or materially damaged while it is CustomerΓÇÖs responsibility as described above, (ii) Customer does not provide the Azure Stack Edge device to the Microsoft-designated carrier for return or return the Azure Stack Edge device pursuant to the ΓÇ£Shipment and return of Azure Stack Edge deviceΓÇ¥ section below within 30 days from the end of CustomerΓÇÖs use of the Service. Microsoft reserves the right to change the fee charged for lost or damaged devices, including charging different amounts for different device form factors.
+If Customer prefers to arrange Customer's own pick-up and/or return of the Azure Stack Edge Device pursuant to the "Shipment and Return of Azure Stack Edge Device" Section below, Customer is responsible for the entire risk of loss of, or any damage to the Azure Stack Edge Device until it has been returned to and accepted by Microsoft. Microsoft may charge Customer for a lost device fee for the Azure Stack Edge Device (or equivalent) as described on the pricing pages for the specific Azure Stack Edge Device models under the **FAQ** section at https://azure.microsoft.com/pricing/details/azure-stack/edge/ for the following reasons: (i) the Azure Stack Edge Device is lost or materially damaged while it is Customer's responsibility as described above, (ii) Customer does not provide the Azure Stack Edge Device to the Microsoft-designated carrier for return or return the Azure Stack Edge Device pursuant to the "Shipment and Return of Azure Stack Edge Device" Section below within 30 days from the end of Customer's use of the Service. Microsoft reserves the right to change the fee charged for lost or damaged devices, including charging different amounts for different device form factors.
-### Shipment and return of Azure Stack Edge device
+### Shipment and Return of Azure Stack Edge Device
-Customer will be responsible for a one-time metered shipping fee for the shipment of the Azure Stack Edge device from Microsoft to Customer and return shipping of the same, in addition to any metered amounts for carrier charges, any taxes, or applicable customs fees. When returning an Azure Stack Edge device to Microsoft, Customer will package and ship the same in accordance with MicrosoftΓÇÖs instructions, including using a carrier designated by Microsoft and the packaging materials provided by Microsoft. If Customer prefers to arrange CustomerΓÇÖs own pick-up and/or return of the same, then Customer is responsible for the costs of shipping the Azure Stack Edge device, including protections against any loss or damage of the Azure Stack Edge device (for example, insurance coverage) while in transit. Customer will package and ship the Azure Stack Edge device in accordance with MicrosoftΓÇÖs packaging instructions. Customer is also responsible to ensure that it removes all CustomerΓÇÖs data from the Azure Stack Edge device prior to returning it to Microsoft, including following any Microsoft-issued processes for wiping or clearing the Azure Stack Edge device.
+Customer will be responsible for a one-time metered shipping fee for the shipment of the Azure Stack Edge Device from Microsoft to Customer and return shipping of the same, in addition to any metered amounts for carrier charges, any taxes, or applicable customs fees. When returning an Azure Stack Edge Device to Microsoft, Customer will package and ship the same in accordance with Microsoft's instructions, including using a carrier designated by Microsoft and the packaging materials provided by Microsoft. If Customer prefers to arrange Customer's own pick-up and/or return of the same, then Customer is responsible for the costs of shipping the Azure Stack Edge Device, including protections against any loss or damage of the Azure Stack Edge Device (e.g., insurance coverage) while in transit. Customer will package and ship the Azure Stack Edge Device in accordance with Microsoft's packaging instructions. Customer is also responsible to ensure that it removes all Customer's data from the Azure Stack Edge Device prior to returning it to Microsoft, including following any Microsoft-issued processes for wiping or clearing the Azure Stack Edge Device.
-### Responsibilities if a government customer moves an Azure Stack Edge device between customerΓÇÖs locations
+### Responsibilities if a Government Customer Moves an Azure Stack Edge Device between CustomerΓÇÖs Locations
-Government Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Azure Stack Edge device beyond the country border in which Customer receives the Azure Stack Edge device. For clarity, but not limited to, if a government Customer is in possession of an Azure Stack Edge device, only the government Customer may, at government CustomerΓÇÖs sole risk and expense, transport the Azure Stack Edge device to its different locations in accordance with this section and the requirements of the Additional Terms. Customer is responsible for obtaining at CustomerΓÇÖs own risk and expense any export license, import license and other official authorization for the exportation and importation of the Azure Stack Edge device and CustomerΓÇÖs data to any different Customer location. Customer shall also be responsible for customs clearance to any different Customer location, and will bear all duties, taxes, and other official charges payable upon importation as well as all costs and risks of carrying out customs formalities in a timely manner.
+Government Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Azure Stack Edge Device beyond the country border in which Customer receives the Azure Stack Edge Device. For clarity, but not limited to, if a government Customer is in possession of an Azure Stack Edge Device, only the government Customer may, at government Customer's sole risk and expense, transport the Azure Stack Edge Device to its different locations in accordance with this section and the requirements of the Additional Terms. Customer is responsible for obtaining at Customer's own risk and expense any export license, import license and other official authorization for the exportation and importation of the Azure Stack Edge Device and Customer's data to any different Customer location. Customer shall also be responsible for customs clearance to any different Customer location, and will bear all duties, taxes, and other official charges payable upon importation as well as all costs and risks of carrying out customs formalities in a timely manner.
-If Customer transports the Azure Stack Edge device to a different location, Customer agrees to return the Azure Stack Edge device to the country location where Customer received it initially, prior to shipping the Azure Stack Edge device back to Microsoft. Customer acknowledges that there are inherent risks in shipping data on and in connection with the Azure Stack Edge device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to an Azure Stack Edge device or any data stored on one, including during transit. It's CustomerΓÇÖs responsibility to obtain the appropriate support agreement from Microsoft to meet CustomerΓÇÖs operating objectives for the Azure Stack Edge device; however, depending on the location to which Customer intends to move the Azure Stack Edge device, MicrosoftΓÇÖs ability to provide hardware servicing and support may be delayed, or may not be available.
+If Customer transports the Azure Stack Edge Device to a different location, Customer agrees to return the Azure Stack Edge Device to the country location where Customer received it initially, prior to shipping the Azure Stack Edge Device back to Microsoft. Customer acknowledges that there are inherent risks in shipping data on and in connection with the Azure Stack Edge Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to an Azure Stack Edge Device or any data stored on one, including during transit. It is Customer's responsibility to obtain the appropriate support agreement from Microsoft to meet Customer's operating objectives for the Azure Stack Edge Device; however, depending on the location to which Customer intends to move the Azure Stack Edge Device, Microsoft's ability to provide hardware servicing and support may be delayed, or may not be available.
## Fees
-Microsoft will charge Customer specified fees in connection with Customer’s use of the Azure Stack Edge device as part of the Service, with [the current schedule of fees for each Azure Stack Edge model](https://azure.microsoft.com/pricing/details/azure-stack/edge/). Customer may use other Azure services in connection with Customer’s use of the Service, and Microsoft deems such services as separate services that may be subject to separate metered fees and costs. By way of example only, Azure Storage, Azure Compute, and Azure IoT Hub are separate Azure services, and if used (even in connection with its use of the Service), separate Azure metered services will apply.
+Microsoft will charge Customer specified fees in connection with Customer’s use of the Azure Stack Edge Device as part of the Service, with [the current schedule of fees for each Azure Stack Edge model](https://azure.microsoft.com/pricing/details/azure-stack/edge/). Customer may use other Azure services in connection with Customer’s use of the Service, and Microsoft deems such services as separate services that may be subject to separate metered fees and costs. By way of example only, Azure Storage, Azure Compute, and Azure IoT Hub are separate Azure services, and if used (even in connection with its use of the Service), separate Azure metered services will apply.
## Next steps
defender-for-cloud Episode Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-two.md
Last updated 05/29/2022
-# Integrate Azure Purview with Microsoft Defender for Cloud
+# Integrate Microsoft Purview with Microsoft Defender for Cloud
-**Episode description**: In this episode of Defender for Cloud in the field, David Trigano joins Yuri Diogenes to share the new integration of Microsoft Defender for Cloud with Azure Purview, which was released at Ignite 2021.
+**Episode description**: In this episode of Defender for Cloud in the field, David Trigano joins Yuri Diogenes to share the new integration of Microsoft Defender for Cloud with Microsoft Purview, which was released at Ignite 2021.
-David explains the use case scenarios for this integration and how the data classification is done by Azure Purview can help prioritize recommendations and alerts in Defender for Cloud. David also demonstrates the overall experience of data enrichment based on the information that flows from Azure Purview to Defender for Cloud.
+David explains the use case scenarios for this integration and how the data classification is done by Microsoft Purview can help prioritize recommendations and alerts in Defender for Cloud. David also demonstrates the overall experience of data enrichment based on the information that flows from Microsoft Purview to Defender for Cloud.
<br> <br> <iframe src="https://aka.ms/docs/player?id=9b911e9c-e933-4b7b-908a-5fd614f822c7" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe> -- [1:36](/shows/mdc-in-the-field/integrate-with-purview) - Overview of Azure Purview
+- [1:36](/shows/mdc-in-the-field/integrate-with-purview) - Overview of Microsoft Purview
- [2:40](/shows/mdc-in-the-field/integrate-with-purview) - Integration with Microsoft Defender for Cloud -- [3:48](/shows/mdc-in-the-field/integrate-with-purview) - How the integration with Azure Purview helps to prioritize Recommendations in Microsoft Defender for Cloud
+- [3:48](/shows/mdc-in-the-field/integrate-with-purview) - How the integration with Microsoft Purview helps to prioritize Recommendations in Microsoft Defender for Cloud
-- [5:26](/shows/mdc-in-the-field/integrate-with-purview) - How the integration with Azure Purview helps to prioritize Alerts in Microsoft Defender for Cloud
+- [5:26](/shows/mdc-in-the-field/integrate-with-purview) - How the integration with Microsoft Purview helps to prioritize Alerts in Microsoft Defender for Cloud
- [8:54](/shows/mdc-in-the-field/integrate-with-purview) - Demonstration
David explains the use case scenarios for this integration and how the data clas
## Recommended resources
-Learn more about the [integration with Azure Purview](information-protection.md).
+Learn more about the [integration with Microsoft Purview](information-protection.md).
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
Optional modules for port expansion include:
|Location |Type|Specifications| |--|--|--|
-| **PCI Slot 1 (Low profile)**| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
-| **PCI Slot 1 (Low profile)** | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
-|**PCI Slot 2 (High profile)**| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
-| **PCI Slot 2 (High profile)**|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
-| **PCI Slot 2 (High profile)**|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
-| **SFPs for Fiber Optic NICs**|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
+| **PCI Slot 1 (Low profile)**| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI (FW 1.52)|
+| **PCI Slot 1 (Low profile)** | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter (FW 10.57.3)|
+|**PCI Slot 2 (High profile)**| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI (FW 1.52)|
+|**PCI Slot 2 (High profile)**| Quad Port Ethernet NIC|647594-B21 - HPE 1 GbE 4p BASE-T BCM5719 Adapter (FW 5719-v1.45 NCSI v1.3.12.0 )|
+| **PCI Slot 2 (High profile)**|DP F/O NIC| 727055-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter (FW 10.57.3)|
+| **PCI Slot 2 (High profile)**|DP F/O NIC| P08421-B21 - HPE Ethernet 10Gb 2-port SFP+ BCM57414 Adapter (FW 214.4.9.6/pkg 214.0.286012)|
+| **PCI Slot 2 (High profile)**|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI (FW 10.57.3)|
+| **SFPs for Fiber Optic NICs**|MultiMode, Short Range| 455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
|**SFPs for Fiber Optic NICs**|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
+> [!IMPORTANT]
+> Verify that the NIC cards run the firmware versions listed above or later.
+> As described in the procedure below, we also recommend disabling the LLDP Agent in the BIOS for each installed NIC.
+ ## HPE ProLiant DL360 installation This section describes how to install OT sensor software on the HPE ProLiant DL360 appliance and includes adjusting the appliance's BIOS configuration.
Use the following procedure to set up network options and update the default pas
This procedure describes how to update the HPE BIOS configuration for your OT sensor deployment. **To configure the HPE BIOS**:
+> [!IMPORTANT]
+> Make sure your server is running HPE SPP 2022.03.1 (BIOS version U32 v2.6.2) or later.
1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+1. In the **BIOS/Ethernet Adapter/NIC Configuration**, disable LLDP Agent for all NIC cards.
+ 1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**. 1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
This procedure describes how to update the HPE BIOS configuration for your OT se
This procedure describes how to install iLO software remotely from a virtual drive.
-**To install iLO software**:
+**To install sensor software with iLO**:
1. Sign in to the iLO console, and then right-click the servers' screen.
This procedure describes how to install iLO software remotely from a virtual dri
1. Select **Local ISO file**.
-1. In the dialog box, choose the relevant ISO file.
+1. In the dialog box, choose the D4IoT sensor installation ISO file.
1. Go to the left icon, select **Power**, and the select **Reset**.
defender-for-iot How To Connect Sensor By Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
Last updated 02/06/2022
-# Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (legacy)
+# Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (version 10.x)
-This article describes how to connect Microsoft Defender for IoT sensors to Defender for IoT via a proxy, with no direct internet access. This article is only relevant if you are using a legacy connection method via your own IoT Hub.
+This article describes how to connect Microsoft Defender for IoT sensors to Defender for IoT via a proxy, with no direct internet access.
+> [!NOTE]
+> This article is only relevant if you are using an OT sensor version 10.x via a private IoT Hub.
+> Starting with sensor software versions 22.1.x, updated connection methods are supported that don't require customers to have their own IoT Hub. For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
-Starting with sensor software versions 22.1.x, updated connection methods are supported that don't require customers to have their own IoT Hub. For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
-
-We recommend that you use the procedures in this article only if you are using a legacy sensor version lower than 22.1.x.
## Overview
The following diagram shows data going from Microsoft Defender for IoT to the Io
## Set up your system
-For this scenario we'll be installing, and configuring the latest version of [Squid](http://www.squid-cache.org/) on an Ubuntu 18 server.
+For this scenario, we'll be installing and configuring the latest version of [Squid](http://www.squid-cache.org/) on an Ubuntu 18 server (in addition to the OT sensor).
> [!Note]
-> Microsoft Defender for IoT does not offer support for Squid or any other proxy service.
+> Microsoft Defender for IoT does not offer support for configuring Squid or any other proxy server. We recommend following the up-to-date instructions for the proxy software in use on your network.
**To install Squid proxy on an Ubuntu 18 server**:
dev-box How To Manage Dev Box Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md
+
+ Title: How to manage a dev box pool
+
+description: This article describes how to create and delete Microsoft Dev Box dev box pools.
++++ Last updated : 09/16/2022+++
+<!-- Intent: As a dev infrastructure manager, I want to be able to manage dev box pools so that I can provide appropriate dev boxes to my users. -->
+
+# Manage a dev box pool
+To enable developers to self-serve dev boxes from projects, you must configure dev box pools that specify the dev box definitions and network connections used when dev boxes are created. Dev box users create dev boxes from the dev box pools they have access to through their project memberships.
+
+## Permissions
+To manage a dev box pool, you need the following permissions:
+
+|Action|Permission required|
+|--|--|
+|Create, delete, or update dev box pool|Owner or Contributor permissions on an Azure Subscription or a specific resource group. </br> DevCenter Project Admin for the project.|
+
+## Create a dev box pool
+A dev box pool is a collection of dev boxes that you manage together. You must have a pool before users can create a dev box.
+
+The following steps show you how to create a dev box pool associated with a project. You'll use an existing dev box definition and network connection in the dev center to configure a dev box pool.
+
+<!-- how many dev box pools can you create -->
+
+If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure the Microsoft Dev Box service](quickstart-configure-dev-box-service.md) to create them.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, type *Projects* and then select **Projects** from the list.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/discover-projects.png" alt-text="Screenshot showing a search for projects from the Azure portal search box.":::
+
+1. Open the project with which you want to associate the new dev box pool.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
+
+1. Select **Dev box pools** and then select **+ Create**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-grid-empty.png" alt-text="Screenshot of the list of dev box pools within a project. The list is empty.":::
+
+1. On the **Create a dev box pool** page, enter the following values:
+
+ |Name|Value|
+ |-|-|
+ |**Name**|Enter a name for the pool. The pool name is visible to developers to select when they're creating dev boxes, and must be unique within a project.|
+ |**Dev box definition**|Select an existing dev box definition. The definition determines the base image and size for the dev boxes created within this pool.|
+ |**Network connection**|Select an existing network connection. The network connection determines the region of the dev boxes created within this pool.|
+ |**Dev Box Creator Privileges**|Select Local Administrator or Standard User.|
+ |**Licensing**| Select this check box if your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. |
+
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-create.png" alt-text="Screenshot of the Create dev box pool dialog.":::
+
+1. Select **Add**.
+
+1. Verify that the new dev box pool appears in the list. You may need to refresh the screen.
+
+The dev box pool will be deployed and health checks will be run to ensure the image and network pass the validation criteria to be used for dev boxes. The screenshot below shows four dev box pools, each with a different status.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-grid-populated.png" alt-text="Screenshot showing a list of existing pools.":::
++
+## Delete a dev box pool
+You can delete a dev box pool when you're no longer using it.
+
+> [!CAUTION]
+> When you delete a dev box pool, all existing dev boxes within the pool will be permanently deleted.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, type *Projects* and then select **Projects** from the list.
+
+1. Open the project from which you want to delete the dev box pool.
+
+1. Select the dev box pool you want to delete and then select **Delete**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-delete.png" alt-text="Screenshot of the list of existing dev box pools, with the one to be deleted selected.":::
+
+1. In the confirmation message, select **Confirm**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-delete-confirm.png" alt-text="Screenshot of the Delete dev box pool confirmation message.":::
+
+## Next steps
+
+- [Provide access to projects for project admins](./how-to-project-admin.md)
+- [Create dev box definitions](./quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
Previously updated : 09/07/2022 Last updated : 09/17/2022
# Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal
-You can migrate an instance of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we’ll perform an offline migration of a sample database from an Azure Database for MySQL single server to a MySQL flexible server (both running version 5.7) using a DMS migration activity.
+You can migrate an instance of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we’ll perform an offline migration of a sample database from an Azure Database for MySQL single server to a MySQL flexible server (both running version 5.7) using a DMS migration activity.
> [!NOTE]
-> DMS supports migrating from lower version MySQL servers (v5.6 and above) to higher versions. In addition, DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you can select a different region, resource group, and subscription for the target server than that specified for your source server.
+> DMS supports migrating from lower version MySQL servers (v5.6 and above) to higher versions. In addition, DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you can select a region, resource group, and subscription for the target server that is different than what is specified for your source server.
> [!IMPORTANT] > For online migrations, you can use the Enable Transactional Consistency feature supported by DMS together with [Data-in replication](./../mysql/single-server/concepts-data-in-replication.md) or [replicate changes](https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/azure-dms-mysql-replicate-changes-now-in-preview/ba-p/3601564). Additionally, you can use the online migration scenario to migrate by following the tutorial [here](./tutorial-mysql-azure-single-to-flex-offline-portal.md).
-In this tutorial, you will learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"] >
To complete this tutorial, you need to:
As you prepare for the migration, be sure to consider the following limitations.
-* When migrating non-table objects, DMS does not support renaming databases.
+* When migrating non-table objects, DMS doesn't support renaming databases.
* When migrating to a target server with bin_log enabled, be sure to enable log_bin_trust_function_creators to allow for creation of routines and triggers.
-* When migrating the schema, DMS does not support creating a database on the target server.
-* Currently, DMS does not support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration the default definer for tables will be set to the login used to run the migration.
-* Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration will not occur. Note that selecting a table for schema migration also selects it for data movement.
+* When migrating the schema, DMS doesn't support creating a database on the target server.
+* Currently, DMS doesn't support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration the default definer for tables will be set to the login used to run the migration.
+* Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration won't occur. Note that selecting a table for schema migration also selects it for data movement.
## Best practices for creating a flexible server for faster data loads using DMS
-DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you are free to select appropriate region, resource group and subscription for your target flexible server. Before you create your target flexible server, consider the following configuration guidance to help ensure faster data loads using DMS.
+DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you're free to select appropriate region, resource group and subscription for your target flexible server. Before you create your target flexible server, consider the following configuration guidance to help ensure faster data loads using DMS.
-* Select the compute size and compute tier for the target flexible server based on the source single server’s pricing tier and VCores as in the following table:
+* Select the compute size and compute tier for the target flexible server based on the source single server’s pricing tier and VCores, as detailed in the following table.
| Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier | | - | - |:-:|:-:|
DMS supports cross-region, cross-resource group, and cross-subscription migratio
* The MySQL version for the target flexible server must be greater than or equal to that of the source single server. * Unless you need to deploy the target flexible server in a specific zone, set the value of the Availability Zone parameter to ‘No preference’.
-* For network connectivity, on the Networking tab, if the source single server has private endpoints or private links configured, select Private Access; otherwise, select Public Access.
+* For network connectivity, on the **Networking** tab, if the source single server has private endpoints or private links configured, select **Private Access**; otherwise, select **Public Access**.
* Copy all firewall rules from the source single server to the target flexible server. * Copy all the name/value tags from the single to flex server during creation itself.
DMS supports cross-region, cross-resource group, and cross-subscription migratio
With these best practices in mind, create your target flexible server and then configure it. * Create the target flexible server. For guided steps, see the quickstart [Create an Azure Database for MySQL flexible server](./../mysql/flexible-server/quickstart-create-server-portal.md).
-* Next to configure the newly created target flexible server, proceed as follows:
+* Next to configure the newly created target flexible server, proceed as follows:
* The user performing the migration requires the following permissions (a sketch appears after this list): * To create tables on the target, the user must have the “CREATE” privilege. * If migrating a table with “DATA DIRECTORY” or “INDEX DIRECTORY” partition options, the user must have the “FILE” privilege. * If migrating to a table with a “UNION” option, the user must have the “SELECT,” “UPDATE,” and “DELETE” privileges for the tables you map to a MERGE table. * If migrating views, you must have the “CREATE VIEW” privilege.
- Keep in mind that some privileges may be necessary depending on the contents of the views. Please refer to the MySQL docs specific to your version for “CREATE VIEW STATEMENT” for details
+ Keep in mind that some privileges may be necessary depending on the contents of the views. Refer to the MySQL docs specific to your version for “CREATE VIEW STATEMENT” for details.
* If migrating events, the user must have the “EVENT” privilege. * If migrating triggers, the user must have the “TRIGGER” privilege. * If migrating routines, the user must have the “CREATE ROUTINE” privilege.
With these best practices in mind, create your target flexible server and then c
* Set the TLS version and require_secure_transport server parameter to match the values on the source server. * Configure server parameters on the target server to match any non-default values used on the source server. * To ensure faster data loads when using DMS, configure the following server parameters as described.
- * max_allowed_packet – set to 1073741824 (i.e., 1GB) to prevent any connection issues due to large rows.
+ * max_allowed_packet – set to 1073741824 (i.e., 1 GB) to prevent any connection issues due to large rows.
* slow_query_log – set to OFF to turn off the slow query log. This will eliminate the overhead caused by slow query logging during data loads. * innodb_buffer_pool_size – can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size. * innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
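A minimal sketch of the target-side grants described in the list above, assuming a hypothetical, pre-existing migration account named 'dms_user' and a hypothetical target database named 'migrated_db' (your names, hosts, and scope will differ; server parameters such as max_allowed_packet are changed from the portal's Server parameters blade rather than in SQL):

```sql
-- Sketch only: 'dms_user' and 'migrated_db' are hypothetical names, and the
-- account is assumed to already exist on the target flexible server.
-- Database-level privileges used when DMS creates and loads objects on the target.
GRANT CREATE, SELECT, UPDATE, DELETE, CREATE VIEW, EVENT, TRIGGER, CREATE ROUTINE
  ON migrated_db.* TO 'dms_user'@'%';
-- FILE is a global privilege; it's only needed for tables that use the
-- DATA DIRECTORY or INDEX DIRECTORY partition options.
GRANT FILE ON *.* TO 'dms_user'@'%';
```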
To register the Microsoft.DataMigration resource provider, perform the following
### Create a Database Migration Service (DMS) instance
-01. In the Azure portal, select + **Create a resource**, search for the term “Azure Database Migration Service”, and then select **Azure Database Migration Service** from the drop-down list.
+1. In the Azure portal, select + **Create a resource**, search for the term “Azure Database Migration Service”, and then select **Azure Database Migration Service** from the drop-down list.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/4-dms-portal-marketplace.png" alt-text="Screenshot of a Search Azure Database Migration Service.":::
-02. On the **Azure Database Migration Service** screen, select **Create**.
+2. On the **Azure Database Migration Service** screen, select **Create**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/5-dms-portal-marketplace-create.png" alt-text="Screenshot of a Create Azure Database Migration Service instance.":::
-03. On the **Select migration scenario and Database Migration Service** page, under **Migration scenario**, select **Azure Database for MySQL-Single Server** as the source server type, and then select **Azure Database for MySQL** as target server type, and then select **Select**.
+3. On the **Select migration scenario and Database Migration Service** page, under **Migration scenario**, select **Azure Database for MySQL-Single Server** as the source server type, and then select **Azure Database for MySQL** as target server type, and then select **Select**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/6-create-dms-service-scenario-offline.png" alt-text="Screenshot of a Select Migration Scenario.":::
-04. On the **Create Migration Service** page, on the **Basics** tab, under **Project details**, select the appropriate subscription, and then select an existing resource group or create a new one.
+4. On the **Create Migration Service** page, on the **Basics** tab, under **Project details**, select the appropriate subscription, and then select an existing resource group or create a new one.
-05. Under **Instance details**, specify a name for the service, select a region, and then verify that **Azure** is selected as the service mode.
+5. Under **Instance details**, specify a name for the service, select a region, and then verify that **Azure** is selected as the service mode.
-06. To the right of **Pricing tier**, select **Configure tier**.
+6. To the right of **Pricing tier**, select **Configure tier**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/7-project-details.png" alt-text="Screenshot of a Select Configure Tier.":::
-07. On the Configure page, select the pricing tier and number of vCores for your DMS instance, and then select Apply.
+7. On the **Configure** page, select the pricing tier and number of vCores for your DMS instance, and then select **Apply**.
+ For more information on DMS costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing). :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/8-configure-pricing-tier.png" alt-text="Screenshot of a Select Pricing tier."::: Next, we need to specify the VNet that will provide the DMS instance with access to the source single server and the target flexible server.
-08. On the **Create Migration Service** page, select **Next : Networking >>**.
+8. On the **Create Migration Service** page, select **Next : Networking >>**.
+
+9. On the **Networking** tab, select an existing VNet from the list or provide the name of new VNet to create, and then select **Review + Create**.
-09. On the **Networking** tab, select an existing VNet from the list or provide the name of new VNet to create, and then select **Review + Create**.
For more information, see the article [Create a virtual network using the Azure portal](./../virtual-network/quick-create-portal.md). :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/8-1-networking.png" alt-text="Screenshot of a Select Networking.":::
To register the Microsoft.DataMigration resource provider, perform the following
> Your vNet must be configured with access to both the source single server and the target flexible server, so be sure to: > > * Create a server-level firewall rule or [configure VNET service endpoints](./../mysql/single-server/how-to-manage-vnet-using-portal.md) for both the source and target Azure Database for MySQL servers to allow the VNet for Azure Database Migration Service access to the source and target databases.
- > * Ensure that your VNet Network Security Group (NSG) rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and Azure Monitor. For more details about VNet NSG traffic filtering, see [Filter network traffic with network security groups](./../virtual-network/virtual-network-vnet-plan-design-arm.md).
-
+ > * Ensure that your VNet Network Security Group (NSG) rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and Azure Monitor. For more information about VNet NSG traffic filtering, see [Filter network traffic with network security groups](./../virtual-network/virtual-network-vnet-plan-design-arm.md).
+ > [!NOTE] > If you want to add tags to the service, first select Next : Tags to advance to the Tags tab. Adding tags to the service is optional. 10. Navigate to the **Review + create** tab, review the configurations, view the terms, and then select **Create**. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/9-review-create.png" alt-text="Screenshot of a Select Review+Create.":::+ Deployment of your instance of DMS now begins. The message **Deployment is in progress** appears for a few minutes, and then the message changes to **Your deployment is complete**. 11. Select **Go to resource**.
To create a migration project, perform the following steps.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/10-dms-search.png" alt-text="Screenshot of a Locate all instances of Azure Database Migration Service.":::
-2. In the search results, select the DMS instance that you just created, and then select + **New Migration Project**.
+2. In the search results, select the DMS instance that you created, and then select + **New Migration Project**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/11-select-create.png" alt-text="Screenshot of a CSelect a new migration project."::: 3. On the **New migration project** page, specify a name for the project, in the Source server type selection box, select **Azure Database For MySQL ΓÇô Single Server**, in the Target server type selection box, select **Azure Database For MySQL**, in the **Migration activity type** selection box, select **Online migration**, and then select **Create and run activity**.+ > [!NOTE] > Selecting Create project only as the migration activity type will only create the migration project; you can then run the migration project at a later time.
To configure your DMS migration project, perform the following steps.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/13-select-source-offline.png" alt-text="Screenshot of an Add source details screen."::: 2. To proceed with the offline migration, select the **Make Source Server Read Only** check box.
-Selecting this check box prevents Write/Delete operations on the source server during migration, which ensures the data integrity of the target database as the source is migrated. When you make your source server read only as part of the migration process, all the databases on the source server, regardless of whether they are selected for migration, will be read-only.
+
+ Selecting this check box prevents Write/Delete operations on the source server during migration, which ensures the data integrity of the target database as the source is migrated. When you make your source server read only as part of the migration process, all the databases on the source server, regardless of whether they're selected for migration, will be read-only.
+ > [!NOTE] > Alternatively, if you were performing an online migration, you would select the **Enable Transactional Consistency** check box. For more information about consistent backup, see [MySQL Consistent Backup](./migrate-azure-mysql-consistent-backup.md).
Selecting this check box prevents Write/Delete operations on the source server d
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/16-select-db.png" alt-text="Screenshot of a Select database."::: 5. In the **Select databases** section, under **Source Database**, select the database(s) to migrate.+ The non-table objects in the database(s) you specified will be migrated, while the items you didnΓÇÖt select will be skipped.
-6. Select **Next : Select databases>>** to navigate to the Select tables tab.
+6. Select **Next : Select databases>>** to navigate to the **Select tables** tab.
+ Before the tab populates, DMS fetches the tables from the selected database(s) on the source and target and then determines whether the table exists and contains data. 7. Select the tables that you want to migrate.
- If you select a table in the source database that doesn’t exist on the target database, the box under **Migrate schema** is selected by default. For tables that do exist in the target database, a note indicates that the selected table already contains data and will be truncated. In addition, if the schema of a table on the target server does not match the schema on the source, the table will be dropped before the migration continues.
+
+ If you select a table in the source database that doesn’t exist on the target database, the box under **Migrate schema** is selected by default. For tables that do exist in the target database, a note indicates that the selected table already contains data and will be truncated. In addition, if the schema of a table on the target server doesn't match the schema on the source, the table will be dropped before the migration continues.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/17-select-tables.png" alt-text="Screenshot of a Select Tables.":::
- DMS validates your inputs, and if the validation passes, you will be able to start the migration.
+ DMS validates your inputs, and if the validation passes, you'll be able to start the migration.
8. After configuring for schema migration, select **Review and start migration**.+ > [!NOTE] > You only need to navigate to the Configure migration settings tab if you are trying to troubleshoot failing migrations.
When the migration is complete, be sure to complete the following post-migration
* Perform sanity testing of the application against the target database to certify the migration. * Update the connection string to point to the new target flexible server. * Delete the source single server after you have ensured application continuity.
-* If you scaled-up the target flexible server for faster migration, scale it back by selecting the compute size and compute tier for the target flexible server based on the source single server’s pricing tier and VCores as in the table below.
+* If you scaled up the target flexible server for faster migration, scale it back by selecting the compute size and compute tier for the target flexible server based on the source single server’s pricing tier and VCores, as detailed in the following table.
| Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier | | - | - |:-:|:-:|
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
Previously updated : 09/07/2022 Last updated : 09/16/2022
> [!NOTE] > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-You can migrate an instance of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we’ll perform an online migration of a sample database from an Azure Database for MySQL single server to a MySQL flexible server (both running version 5.7) using a DMS migration activity.
+You can migrate an instance of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we’ll perform an online migration of a sample database from an Azure Database for MySQL single server to a MySQL flexible server (both running version 5.7) using a DMS migration activity.
> [!NOTE]
-> DMS online migration is now in Preview. DMS supports migration for MySQL versions - 5.7 and 8.0, and also supports migration from lower version MySQL servers (v5.7 and above) to higher versions. In addition, DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you can select a different region, resource group, and subscription for the target server than that specified for your source server.
+> DMS online migration is now in preview. DMS supports migration for MySQL versions 5.7 and 8.0, and also supports migration from lower version MySQL servers (v5.7 and above) to higher version servers. In addition, DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you can select a region, resource group, and subscription for the target server that is different than what is specified for your source server.
-In this tutorial, you will learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"] >
To complete this tutorial, you need to:
* Create or use an existing instance of Azure Database for MySQL – Single Server (the source server). * To complete the replicate changes migration successfully, ensure that the following prerequisites are in place:
- * Use the MySQL command line tool of your choice to determine whether log_bin is enabled on the source server. The Binlog is not always turned on by default, so verify that it is enabled before starting the migration. To determine whether log_bin is enabled on the source server, run the command: SHOW VARIABLES LIKE 'log_bin'
- * Ensure that the user has “REPLICATION CLIENT” and “REPLICATION SLAVE” permission on source server for reading and applying the bin log.
- * If you're targeting a replicate changes migration, configure the binlog_expire_logs_seconds parameter on the source server to ensure that binlog files are not purged before the replica commits the changes. We recommend at least two days to start. After a successful cutover, the value can be reset.
+ * Use the MySQL command line tool of your choice to verify that log_bin is enabled on the source server by running the command: SHOW VARIABLES LIKE 'log_bin'. If log_bin isn't enabled, be sure to enable it before starting the migration (see the sketch after this list).
+ * Ensure that the user has “REPLICATION CLIENT” and “REPLICATION SLAVE” permissions on the source server for reading and applying the bin log.
+ * If you're targeting a replicate changes migration, configure the binlog_expire_logs_seconds parameter on the source server to ensure that binlog files aren't purged before the replica commits the changes. We recommend at least two days to start. After a successful cutover, you can reset the value.
* To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges: * “READ” privilege on the source database. * “SELECT” privilege for the ability to select objects from the database
- * If migrating views, user must have the “SHOW VIEW” privilege.
- * If migrating triggers, user must have the “TRIGGER” privilege.
+ * If migrating views, the user must have the “SHOW VIEW” privilege.
+ * If migrating triggers, the user must have the “TRIGGER” privilege.
* If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: * For 5.7, have “SELECT” access to the “mysql.proc” table. * For 8.0, have “SHOW_ROUTINE” privilege or have the “CREATE ROUTINE,” “ALTER ROUTINE,” or “EXECUTE” privilege granted at a scope that includes the routine.
- * If migrating events, the user must have the “EVENT” privilege for the database from which the event is to be shown.
+ * If migrating events, the user must have the “EVENT” privilege for the database from which the events are to be shown.
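The source-server checks and replication permissions described in the prerequisites can be run with the MySQL client of your choice. The following is a sketch only, assuming a hypothetical, pre-existing migration account named 'dms_user'; adjust the account, host, and retention values to your environment:

```sql
-- Sketch only: 'dms_user' is a hypothetical account name.
-- Confirm that binary logging is enabled on the source single server.
SHOW VARIABLES LIKE 'log_bin';
-- Check how long binlog files are retained; at least two days (172800 seconds)
-- is recommended before starting a replicate changes migration.
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
-- Grant the replication permissions needed to read and apply the bin log.
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dms_user'@'%';
```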
## Limitations As you prepare for the migration, be sure to consider the following limitations.
-* When migrating non-table objects, DMS does not support renaming databases.
+* When migrating non-table objects, DMS doesn't support renaming databases.
* When migrating to a target server with bin_log enabled, be sure to enable log_bin_trust_function_creators to allow for creation of routines and triggers.
-* When migrating the schema, DMS does not support creating a database on the target server.
-* Currently, DMS does not support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration the default definer for tables will be set to the login used to run the migration.
-* Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration will not occur. Note that selecting a table for schema migration also selects it for data movement.
+* When migrating the schema, DMS doesn't support creating a database on the target server.
+* Currently, DMS doesn't support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration, the default definer for tables will be set to the login used to run the migration.
+* Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration won't occur. Note that selecting a table for schema migration also selects it for data movement.
* Online migration support is limited to the ROW binlog format.
-* Online migration only replicates DML changes; replicating DDL changes is not supported. Do not make any schema changes to the source while replication is in progress.
+* Online migration only replicates DML changes; replicating DDL changes isn't supported. Don't make any schema changes to the source while replication is in progress.
## Best practices for creating a flexible server for faster data loads using DMS
-DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you are free to select appropriate region, resource group and subscription for your target flexible server. Before you create your target flexible server, consider the following configuration guidance to help ensure faster data loads using DMS.
+DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you're free to select appropriate region, resource group and subscription for your target flexible server. Before you create your target flexible server, consider the following configuration guidance to help ensure faster data loads using DMS.
-* Select the compute size and compute tier for the target flexible server based on the source single server’s pricing tier and VCores as in the following table:
+* Select the compute size and compute tier for the target flexible server based on the source single server’s pricing tier and VCores, as detailed in the following table.
| Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier | | - | - |:-:|:-:|
DMS supports cross-region, cross-resource group, and cross-subscription migratio
| Memory Optimized | 16 | Business Critical | Standard_E16ds_v4 | | Memory Optimized | 32 | Business Critical | Standard_E32ds_v4 |
-\* For the migration, select General Purpose 16 VCores compute for the target flexible server for faster migrations. Scale back to the desired compute size for the target server after migration is complete by following the compute size recommendation in the Performing post-migration activities section later in this article.
+\* For the migration, select General Purpose 16 vCores compute for the target flexible server for faster migrations. Scale back to the desired compute size for the target server after migration is complete by following the compute size recommendation in the Performing post-migration activities section later in this article.
* The MySQL version for the target flexible server must be greater than or equal to that of the source single server. * Unless you need to deploy the target flexible server in a specific zone, set the value of the Availability Zone parameter to ‘No preference’.
DMS supports cross-region, cross-resource group, and cross-subscription migratio
## Create and configure the target flexible server
-With these best practices in mind, create your target flexible server and then configure it.
+With these best practices in mind, create your target flexible server, and then configure it.
* Create the target flexible server. For guided steps, see the quickstart [Create an Azure Database for MySQL flexible server](./../mysql/flexible-server/quickstart-create-server-portal.md).
-* Next to configure the newly created target flexible server, proceed as follows:
+* Configure the new target flexible server as follows:
* The user performing the migration requires the following permissions (a sketch appears after this list): * Ensure that the user has “REPLICATION_APPLIER” or “BINLOG_ADMIN” permission on target server for applying the bin log. * Ensure that the user has “REPLICATION SLAVE” permission on target server.
With these best practices in mind, create your target flexible server and then c
* If migrating a table with “DATA DIRECTORY” or “INDEX DIRECTORY” partition options, the user must have the “FILE” privilege. * If migrating to a table with a “UNION” option, the user must have the “SELECT,” “UPDATE,” and “DELETE” privileges for the tables you map to a MERGE table. * If migrating views, you must have the “CREATE VIEW” privilege.
- Keep in mind that some privileges may be necessary depending on the contents of the views. Please refer to the MySQL docs specific to your version for “CREATE VIEW STATEMENT” for details
+ Keep in mind that some privileges may be necessary depending on the contents of the views. Refer to the MySQL docs specific to your version for “CREATE VIEW STATEMENT” for details.
* If migrating events, the user must have the “EVENT” privilege. * If migrating triggers, the user must have the “TRIGGER” privilege. * If migrating routines, the user must have the “CREATE ROUTINE” privilege.
- * Create a target database with the same name as that on source server, though it need not be populated with tables/views, etc.
+ * Create a target database with the same name as the database on the source server, though you need not populate it with tables/views, etc.
* Set the appropriate character, collations, and any other applicable schema settings prior to starting the migration, as this may affect the DEFAULT set in some of the object definitions. * Additionally, if migrating non-table objects, be sure to use the same name for the target schema as is used on the source. * Configure the server parameters on the target flexible server as follows: * Set the TLS version and require_secure_transport server parameter to match the values on the source server. * Configure server parameters on the target server to match any non-default values used on the source server. * To ensure faster data loads when using DMS, configure the following server parameters as described.
- * max_allowed_packet – set to 1073741824 (i.e., 1GB) to prevent any connection issues due to large rows.
+ * max_allowed_packet – set to 1073741824 (i.e., 1 GB) to prevent any connection issues due to large rows.
* slow_query_log – set to OFF to turn off the slow query log. This will eliminate the overhead caused by slow query logging during data loads. * innodb_buffer_pool_size – can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size. * innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
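As a sketch only (again using a hypothetical 'dms_user' account that already exists on the target and a hypothetical database named 'migrated_db'), the target-side preparation described in the list above might look like the following; the server parameters themselves are changed from the portal's Server parameters blade rather than in SQL:

```sql
-- Sketch only: 'dms_user' and 'migrated_db' are hypothetical names.
-- Create an empty target database whose name matches the source database.
CREATE DATABASE IF NOT EXISTS migrated_db;
-- Replication permission needed to apply bin log changes on the target.
GRANT REPLICATION SLAVE ON *.* TO 'dms_user'@'%';
-- On MySQL 8.0 targets, REPLICATION_APPLIER is a dynamic privilege granted the same way.
GRANT REPLICATION_APPLIER ON *.* TO 'dms_user'@'%';
```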
To register the Microsoft.DataMigration resource provider, perform the following
### Create a Database Migration Service (DMS) instance
-01. In the Azure portal, select + **Create a resource**, search for the term “Azure Database Migration Service”, and then select **Azure Database Migration Service** from the drop-down list.
+1. In the Azure portal, select **+ Create a resource**, search for the term “Azure Database Migration Service”, and then select **Azure Database Migration Service** from the drop-down list.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/4-dms-portal-marketplace.png" alt-text="Screenshot of a Search Azure Database Migration Service.":::
-02. On the **Azure Database Migration Service** screen, select **Create**.
+2. On the **Azure Database Migration Service** screen, select **Create**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/5-dms-portal-marketplace-create.png" alt-text="Screenshot of a Create Azure Database Migration Service instance.":::
-03. On the **Select migration scenario and Database Migration Service** page, under **Migration scenario**, select **Azure Database for MySQL-Single Server** as the source server type, and then select **Azure Database for MySQL** as target server type, and then select **Select**.
+3. On the **Select migration scenario and Database Migration Service** page, under **Migration scenario**, select **Azure Database for MySQL-Single Server** as the source server type, and then select **Azure Database for MySQL** as target server type, and then select **Select**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/6-create-dms-service-scenario-online.png" alt-text="Screenshot of a Select Migration Scenario.":::
-04. On the **Create Migration Service** page, on the **Basics** tab, under **Project details**, select the appropriate subscription, and then select an existing resource group or create a new one.
+4. On the **Create Migration Service** page, on the **Basics** tab, under **Project details**, select the appropriate subscription, and then select an existing resource group or create a new one.
-05. Under **Instance details**, specify a name for the service, select a region, and then verify that **Azure** is selected as the service mode.
+5. Under **Instance details**, specify a name for the service, select a region, and then verify that **Azure** is selected as the service mode.
-06. To the right of **Pricing tier**, select **Configure tier**.
+6. To the right of **Pricing tier**, select **Configure tier**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/7-project-details.png" alt-text="Screenshot of a Select Configure Tier.":::
-07. On the Configure page, select the pricing tier and number of vCores for your DMS instance, and then select Apply.
+7. On the **Configure** page, select the pricing tier and number of vCores for your DMS instance, and then select **Apply**.
For more information on DMS costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing). :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/8-configure-pricing-tier.png" alt-text="Screenshot of a Select Pricing tier."::: Next, we need to specify the VNet that will provide the DMS instance with access to the source single server and the target flexible server.
-08. On the **Create Migration Service** page, select **Next : Networking >>**.
+8. On the **Create Migration Service** page, select **Next : Networking >>**.
-09. On the **Networking** tab, select an existing VNet from the list or provide the name of new VNet to create, and then select **Review + Create**.
+9. On the **Networking** tab, select an existing VNet from the list or provide the name of new VNet to create, and then select **Review + Create**.
For more information, see the article [Create a virtual network using the Azure portal.](./../virtual-network/quick-create-portal.md). :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/8-1-networking.png" alt-text="Screenshot of a Select Networking."::: > [!IMPORTANT]
- > Your vNet must be configured with access to both the source single server and the target flexible server, so be sure to:
+ > Your VNet must be configured with access to both the source single server and the target flexible server, so be sure to:
> > * Create a server-level firewall rule or [configure VNET service endpoints](./../mysql/single-server/how-to-manage-vnet-using-portal.md) for both the source and target Azure Database for MySQL servers to allow the VNet for Azure Database Migration Service access to the source and target databases.
- > * Ensure that your VNet Network Security Group (NSG) rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and Azure Monitor. For more details about VNet NSG traffic filtering, see [Filter network traffic with network security groups](./../virtual-network/virtual-network-vnet-plan-design-arm.md).
-
+ > * Ensure that your VNet Network Security Group (NSG) rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and Azure Monitor. For more information about VNet NSG traffic filtering, see [Filter network traffic with network security groups](./../virtual-network/virtual-network-vnet-plan-design-arm.md).
+ > [!NOTE]
- > If you want to add tags to the service, first select Next : Tags to advance to the Tags tab first. Adding tags to the service is optional.
+ > To add tags to the service, advance to the **Tags** tab by selecting **Next : Tags**. Adding tags to the service is optional.
10. Navigate to the **Review + create** tab, review the configurations, view the terms, and then select **Create**. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/9-review-create.png" alt-text="Screenshot of a Select Review+Create.":::
To create a migration project, perform the following steps.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/10-dms-search.png" alt-text="Screenshot of a Locate all instances of Azure Database Migration Service.":::
-2. In the search results, select the DMS instance that you just created, and then select + **New Migration Project**.
+2. In the search results, select the DMS instance that you created, and then select **+ New Migration Project**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/11-select-create.png" alt-text="Screenshot of a Select a new migration project.":::
-3. On the **New migration project** page, specify a name for the project, in the Source server type selection box, select **Azure Database For MySQL – Single Server**, in the Target server type selection box, select **Azure Database For MySQL**, in the **Migration activity type** selection box, select **Online migration**, and then select **Create and run activity**.
+3. On the **New migration project** page, specify a name for the project, in the **Source server type** selection box, select **Azure Database For MySQL – Single Server**, in the **Target server type** selection box, select **Azure Database For MySQL**, in the **Migration activity type** selection box, select **Online migration**, and then select **Create and run activity**.
+ > [!NOTE]
- > Selecting Create project only as the migration activity type will only create the migration project; you can then run the migration project at a later time.
+ > Selecting **Create project only** as the migration activity type will only create the migration project; you can then run the migration project at a later time.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/12-create-project-online.png" alt-text="Screenshot of a Create a new migration project.":::
To configure your DMS migration project, perform the following steps.
2. Select **Next : Select target>>**, and then, on the **Select target** screen, specify the connection details for the target flexible server. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/select-target-online.png" alt-text="Screenshot of a Select target.":::
-3. Select **Next : Select databases>>**, and then, on the Select databases tab, under [Preview] Select server objects, select the server objects that you want to migrate.
+3. Select **Next : Select databases>>**, and then, on the **Select databases** tab, under **Preview**, select the server objects that you want to migrate.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/16-select-db.png" alt-text="Screenshot of a Select database."::: 4. In the **Select databases** section, under **Source Database**, select the database(s) to migrate. The non-table objects in the database(s) you specified will be migrated, while the items you didn't select will be skipped. You can only select source and target databases whose names match on the source and target servers.
- If you select a database on the source server that doesn't exist on the target database, you will see a warning message 'Not available at Target' and you won't be able to select the database for migration.
+ If you select a database on the source server that doesn't exist on the target database, you'll see a warning message 'Not available at Target' and you won't be able to select the database for migration.
+
+5. Select **Next : Select databases>>** to navigate to the **Select tables** tab.
-5. Select **Next : Select databases>>** to navigate to the Select tables tab.
Before the tab populates, DMS fetches the tables from the selected database(s) on the source and target and then determines whether the table exists and contains data. 6. Select the tables that you want to migrate. If the selected source table doesn't exist on the target server, the online migration process will ensure that the table schema and data are migrated to the target server. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/17-select-tables.png" alt-text="Screenshot of a Select Tables.":::
- DMS validates your inputs, and if the validation passes, you will be able to start the migration.
+ DMS validates your inputs, and if the validation passes, you'll be able to start the migration.
7. After configuring for schema migration, select **Review and start migration**. > [!NOTE]
- > You only need to navigate to the Configure migration settings tab if you are trying to troubleshoot failing migrations.
+ > You only need to navigate to the **Configure migration settings** tab if you're trying to troubleshoot failing migrations.
8. On the **Summary** tab, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/18-summary-online.png" alt-text="Screenshot of a Select Summary."::: 9. Select **Start migration**. The migration activity window appears, and the status of the activity is Initializing. The status changes to Running when the table migrations start. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/running-online-migration.png" alt-text="Screenshot of a Running status."::: ### Monitor the migration
-1. Once the **Initial Load** activity is completed, navigate to the **Initial Load** tab to view the completion status and the number of tables completed.
+1. After the **Initial Load** activity is completed, navigate to the **Initial Load** tab to view the completion status and the number of tables completed.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/completed-initial-load-online.png" alt-text="Screenshot of a completed initial load migration.":::
-2. Once the **Initial Load** activity is completed, you are navigated to the **Replicate Data Changes** tab automatically. You can monitor the migration progress as the screen is auto-refreshed every 30 seconds. Select **Refresh** to update the display and view the seconds behind source as and when needed.
+ After the **Initial Load** activity is completed, you're navigated to the **Replicate Data Changes** tab automatically. You can monitor the migration progress as the screen is auto-refreshed every 30 seconds.
+
+2. Select **Refresh** to update the display and view the seconds behind source as needed.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/running-replicate-data-changes.png" alt-text="Screenshot of a Monitoring migration.":::
-3. Monitor the **Seconds behind source** and as soon as it nears 0, proceed to start cutover by clicking on the **Start Cutover** menu tab at the top of the migration activity screen. Follow the steps in the cutover window before you are ready to perform a cutover. Once all steps are completed, click on **Confirm** and next click on **Apply**.
+3. Monitor the **Seconds behind source** value. As soon as it nears 0, start the cutover by selecting the **Start Cutover** menu tab at the top of the migration activity screen.
+
+4. Complete the steps in the cutover window when you're ready to perform the cutover.
+
+5. After completing all steps, select **Confirm**, and then select **Apply**.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/21-complete-cutover-online.png" alt-text="Screenshot of a Perform cutover."::: ## Perform post-migration activities
-When the migration is complete, be sure to complete the following post-migration activities.
+When the migration has finished, be sure to complete the following post-migration activities.
* Perform sanity testing of the application against the target database to certify the migration.
-* Update the connection string to point to the new target flexible server.
+* Update the connection string to point to the new flexible server.
* Delete the source single server after you have ensured application continuity (a PowerShell sketch for this step follows this list).
-* If you scaled-up the target flexible server for faster migration, scale it back by selecting the compute size and compute tier for the target flexible server based on the source single server's pricing tier and VCores as in the table below.
+* If you scaled up the target flexible server for faster migration, scale it back by selecting the compute size and compute tier for the flexible server based on the source single server's pricing tier and vCores, as shown in the following table.
| Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier | | - | - |:-:|:-:|
When the migration is complete, be sure to complete the following post-migration
| Basic | 2 | Burstable | Standard_B2s | | General Purpose | 4 | General Purpose | Standard_D4ds_v4 | | General Purpose | 8 | General Purpose | Standard_D8ds_v4 |
-* Clean up Data Migration Service resources:
+* To clean up the DMS resources, perform the following steps:
1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
- 2. Select your migration service instance from the search results and select **Delete service**.
- 3. On the confirmation dialog box, in the **TYPE THE DATABASE MIGRATION SERVICE NAME** textbox, specify the name of the service, and then select **Delete**.
+ 2. Select your migration service instance from the search results, and then select **Delete service**.
+ 3. In the confirmation dialog box, in the **TYPE THE DATABASE MIGRATION SERVICE NAME** textbox, specify the name of the instance, and then select **Delete**.
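As a possible alternative to the portal steps above, the following minimal Azure PowerShell sketch deletes the source single server once application continuity is confirmed. It assumes the Az.MySql module, and the server and resource group names are placeholders.

```Azure PowerShell
# Delete the source Azure Database for MySQL single server (placeholder names).
# Run this only after you've verified application continuity on the flexible server.
Remove-AzMySqlServer -Name "mysql-single-server" -ResourceGroupName "myResourceGroup"
```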
## Migration best practices
-When performing a migration, be sure to keep the following best practices in mind.
+When performing a migration, be sure to consider the following best practices.
* As part of discovery and assessment, collect critical data such as the server SKU, CPU usage, storage, database sizes, and extension usage to help with the migration. * Perform test migrations before migrating to production:
When performing a migration, be sure to keep the following best practices in min
## Next steps * For information about Azure Database for MySQL - Flexible Server, see [Overview - Azure Database for MySQL Flexible Server](./../mysql/flexible-server/overview.md).
-* For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
-* For information about known issues and limitations when performing migrations using DMS, see the article [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
-* For troubleshooting source database connectivity issues while using DMS, see the article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
+* For information about Azure Database Migration Service, see [What is Azure Database Migration Service?](./dms-overview.md).
+* For information about known issues and limitations when performing migrations using DMS, see [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
+* For troubleshooting source database connectivity issues while using DMS, see [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 08/18/2022 Last updated : 09/16/2022
Register-AzResourceProvider -ProviderNamespace Microsoft.Network
## Create a DNS resolver instance
+> [!IMPORTANT]
+> The steps that verify or confirm that resources were successfully created aren't optional. Don't skip them; they populate variables that can be used in later procedures.
+ Create a resource group to host the resources. The resource group must be in a [supported region](dns-private-resolver-overview.md). In this example, the location is westcentralus. ```Azure PowerShell
event-hubs Apache Kafka Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-configurations.md
Property | Recommended Values | Permitted Range | Notes
|:|:| `max.request.size` | 1000000 | < 1046528 | The service will close connections if requests larger than 1,046,528 bytes are sent. *This value **must** be changed and will cause issues in high-throughput produce scenarios.* `retries` | > 0 | | May require increasing delivery.timeout.ms value, see documentation.
-`request.timeout.ms` | 60000 | > 20000| Event Hubs will internally default to a minimum of 20,000 ms. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.*. <p>Make sure that your **request.timeout.ms** is at least the recommended value of 60000 and your **session.timeout.ms** is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>
+`request.timeout.ms` | 30000 .. 60000 | > 20000| Event Hubs will internally default to a minimum of 20,000 ms. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.* <p>Make sure that your **request.timeout.ms** is within the recommended range of 30000 to 60000 and your **session.timeout.ms** is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>
`metadata.max.idle.ms` | 180000 | > 5000 | Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request. `linger.ms` | > 0 | | For high throughput scenarios, linger value should be equal to the highest tolerable value to take advantage of batching. `delivery.timeout.ms` | | | Set according to the formula (`request.timeout.ms` + `linger.ms`) * `retries`.
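As a worked example of the `delivery.timeout.ms` formula above, using assumed values for `request.timeout.ms`, `linger.ms`, and `retries`:

```Azure PowerShell
# delivery.timeout.ms should be at least (request.timeout.ms + linger.ms) * retries.
$requestTimeoutMs = 60000   # assumed request timeout
$lingerMs         = 100     # assumed batching linger
$retries          = 3       # assumed retry count
($requestTimeoutMs + $lingerMs) * $retries   # 180300 ms
```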
Property | Recommended Values | Permitted Range | Notes
|:|--:| `heartbeat.interval.ms` | 3000 | | 3000 is the default value and shouldn't be changed. `session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000, increase if seeing frequent rebalancing because of missed heartbeats.<p>Make sure that your request.timeout.ms is at least the recommended value of 60000 and your session.timeout.ms is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>-
+`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for rebalance timeout, so this should not be set too low. Must be greater than session.timeout.ms.
## librdkafka configuration properties The main `librdkafka` configuration file ([link](https://github.com/edenhill/librdkafk)) contains extended descriptions for the properties below.
Property | Recommended Values | Permitted Range | Notes
|:|--:| `heartbeat.interval.ms` | 3000 || 3000 is the default value and shouldn't be changed. `session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000, increase if seeing frequent rebalancing because of missed heartbeats.
+`max.poll.interval.ms` | 300000 (default) |>session.timeout.ms| Used for rebalance timeout, so this should not be set too low. Must be greater than session.timeout.ms.
## Further notes
event-hubs Authenticate Shared Access Signature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-shared-access-signature.md
Title: Authenticate access to Azure Event Hubs with shared access signatures description: This article shows you how to authenticate access to Event Hubs resources using shared access signatures. Previously updated : 01/05/2022 Last updated : 09/16/2022 ms.devlang: csharp, java, javascript, php
Following section shows generating a SAS token using shared access signature pol
```javascript function createSharedAccessToken(uri, saName, saKey) {
- if (!uri || !saName || !saKey) {
- throw "Missing required parameter";
- }
- var encoded = encodeURIComponent(uri);
- var now = new Date();
- var week = 60*60*24*7;
- var ttl = Math.round(now.getTime() / 1000) + week;
- var signature = encoded + '\n' + ttl;
- var signatureUTF8 = utf8.encode(signature);
- var hash = crypto.createHmac('sha256', saKey).update(signatureUTF8).digest('base64');
- return 'SharedAccessSignature sr=' + encoded + '&sig=' +
- encodeURIComponent(hash) + '&se=' + ttl + '&skn=' + saName;
+ if (!uri || !saName || !saKey) {
+ throw "Missing required parameter";
+ }
+ var encoded = encodeURIComponent(uri);
+ var now = new Date();
+ var week = 60*60*24*7;
+ var ttl = Math.round(now.getTime() / 1000) + week;
+ var signature = encoded + '\n' + ttl;
+ var hash = crypto.createHmac('sha256', saKey).update(signature, 'utf8').digest('base64');
+ return 'SharedAccessSignature sr=' + encoded + '&sig=' +
+ encodeURIComponent(hash) + '&se=' + ttl + '&skn=' + saName;
+}
+```
+
+To use a policy name and a key value to connect to an event hub, use the `EventHubProducerClient` constructor that takes the `AzureNamedKeyCredential` parameter.
+
+```javascript
+const producer = new EventHubProducerClient("NAMESPACENAME.servicebus.windows.net", eventHubName, new AzureNamedKeyCredential("POLICYNAME", "KEYVALUE"));
+```
+
+You'll need to add a reference to `AzureNamedKeyCredential`.
+
+```javascript
+const { AzureNamedKeyCredential } = require("@azure/core-auth");
+```
+
+To use a SAS token that you generated using the code above, use the `EventHubProducerClient` constructor that takes the `AzureSASCredential` parameter.
+
+```javascript
+var token = createSharedAccessToken("https://NAMESPACENAME.servicebus.windows.net", "POLICYNAME", "KEYVALUE");
+const producer = new EventHubProducerClient("NAMESPACENAME.servicebus.windows.net", eventHubName, new AzureSASCredential(token));
+```
+
+You'll need to add a reference to `AzureSASCredential`.
+
+```javascript
+const { AzureSASCredential } = require("@azure/core-auth");
``` #### JAVA
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**|Supported |Supported | Amsterdam, London2, Silicon Valley, Tokyo2, Toronto, Washington DC | | **Jaguar Network** |Supported |Supported | Marseille, Paris | | **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** |Supported |Supported | London, London2, Newport(Wales) |
-| **[KINX](https://www.kinx.net/service/cloudhub/ms-expressroute/?lang=en)** |Supported |Supported | Seoul |
+| **[KINX](https://www.kinx.net/service/cloudhub/clouds/microsoft_azure_expressroute/?lang=en)** |Supported |Supported | Seoul |
| **[Kordia](https://www.kordia.co.nz/cloudconnect)** | Supported |Supported | Auckland, Sydney | | **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | Supported | Supported | Amsterdam | | **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul, Seoul2 |
The following table shows locations by service provider. If you want to view ava
| **SCSK** |Supported |Supported | Tokyo3 | | **[Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms)** |Supported |Supported | Seoul | | **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported |Supported | London2, Washington DC |
-| **[SIFY](http://telecom.sify.com/azure-expressroute.html)** |Supported |Supported | Chennai, Mumbai2 |
+| **[SIFY](https://sifytechnologies.com/)** |Supported |Supported | Chennai, Mumbai2 |
| **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported | Hong Kong2, Singapore, Singapore2 | | **[SK Telecom](http://b2b.tworld.co.kr/bizts/solution/solutionTemplate.bs?solutionId=0085)** |Supported |Supported | Seoul | | **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |Supported |Supported | Osaka, Tokyo, Tokyo2 |
firewall-manager Check Point Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/check-point-overview.md
Title: Secure Azure virtual hubs using Check Point Cloudguard Connect
-description: Learn about Check Point CloudGuard Connect to secure Azure virtual hubs
+ Title: Secure Azure virtual hubs using Check Point Harmony Connect
+description: Learn about Check Point Harmony Connect to secure Azure virtual hubs
Last updated 10/30/2020
-# Secure virtual hubs using Check Point Cloudguard Connect
+# Secure virtual hubs using Check Point Harmony Connect
-Check Point CloudGuard Connect is a Trusted Security Partner in Azure Firewall Manager. It protects globally distributed branch office to Internet (B2I) or virtual network to Internet (V2I) connections with advanced threat prevention.
+>[!NOTE]
+> This offering provides limited features compared to the [Check Point NVA integration with Virtual WAN](../virtual-wan/about-nva-hub.md#partners). We strongly recommend using this NVA integration to secure your network traffic.
+
+Check Point Harmony Connect is a Trusted Security Partner in Azure Firewall Manager. It protects globally distributed branch office to Internet (B2I) or virtual network to Internet (V2I) connections with advanced threat prevention.
-With a simple configuration in Azure Firewall Manager, you can route branch hub and virtual network connections to the Internet through the CloudGuard Connect security as a service (SECaaS). Traffic is protected in transit from your hub to the Check Point cloud service in IPsec VPN tunnels.
+With a simple configuration in Azure Firewall Manager, you can route branch hub and virtual network connections to the Internet through the Harmony Connect security as a service (SECaaS). Traffic is protected in transit from your hub to the Check Point cloud service in IPsec VPN tunnels.
When you enable auto-sync in the Check Point portal, any resource marked as *secured* in the Azure portal is automatically secured. You don't have to manage your assets twice. You simply choose to secure them once in the Azure portal.
Check Point unifies multiple security services under one umbrella. Integrated se
Threat Emulation (sandboxing) protects users from unknown and zero-day threats. Check Point SandBlast Zero-Day Protection is a cloud-hosted sand-boxing technology where files are quickly quarantined and inspected. It runs in a virtual sandbox to discover malicious behavior before it enters your network. It prevents threats before the damage is done to save staff valuable time responding to threats.
->[!NOTE]
-> This offering provides limited features compared to the [Check Point NVA integration with Virtual WAN](../virtual-wan/about-nva-hub.md#partners). We strongly recommend using this NVA integration to secure your network traffic.
## Deployment example
-Watch the following video to see how to deploy Check Point CloudGuard Connect as a trusted Azure security partner.
+Watch the following video to see how to deploy Check Point Harmony Connect as a trusted Azure security partner.
> [!VIDEO https://www.youtube.com/embed/C8AuN76DEmU]
firewall Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ip-groups.md
IP Groups allow you to group and manage IP addresses for Azure Firewall rules in
- As a source address in application rules
-An IP Group can have a single IP address, multiple IP addresses, or one or more IP address ranges.
+An IP Group can have a single IP address, multiple IP addresses, one or more IP address ranges, or a combination of addresses and ranges.
IP Groups can be reused in Azure Firewall DNAT, network, and application rules for multiple firewalls across regions and subscriptions in Azure. Group names must be unique. You can configure an IP Group in the Azure portal, Azure CLI, or REST API. A sample template is provided to help you get started.
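For illustration, the following minimal Azure PowerShell sketch creates an IP Group that combines individual addresses and an address range; the group name, resource group, location, and addresses are placeholders.

```Azure PowerShell
# Create an IP Group containing two single addresses and one CIDR range (placeholder values).
New-AzIpGroup -Name "workload-ipgroup" -ResourceGroupName "myResourceGroup" `
    -Location "westeurope" -IpAddress "10.0.0.4","10.0.0.5","192.168.1.0/24"
```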
The following Azure PowerShell cmdlets can be used to create and manage IP Group
## Next steps -- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
+- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
firewall Policy Rule Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-rule-sets.md
Even though you can't delete the default rule collection groups nor modify their
Rule collection groups contain one or multiple rule collections, which can be of type DNAT, network, or application. For example, you can group rules belonging to the same workloads or a VNet in a rule collection group.
-Rule collection groups have a maximum size of 2 MB. If you need more than 2 MB, you can split the rules into multiple rule collection groups. A Firewall Policy can contain 50 rule collection groups.
+Rule collection groups have a maximum size of 2 MB. If you need more than 2 MB, you can split the rules into multiple rule collection groups. A Firewall Policy created before July 2022 can contain up to 50 rule collection groups, and a Firewall Policy created after July 2022 can contain up to 100 rule collection groups.
## Rule collections
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
Metric category|Metric name|Metric description|
> [!TIP] >
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure/azure/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
> [!IMPORTANT] >
iot-develop Concepts Using C Sdk And Embedded C Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-using-c-sdk-and-embedded-c-sdk.md
+
+ Title: C SDK and Embedded C SDK usage scenarios
+description: Helps developers decide which C-based Azure IoT device SDK to use for device development, based on their usage scenario.
++++ Last updated : 09/16/2022+
+#Customer intent: As a device developer, I want to understand when to use the Azure IoT C SDK or the Embedded C SDK to optimize device and application performance.
++
+# C SDK and Embedded C SDK usage scenarios
+
+Microsoft provides Azure IoT device SDKs and middleware for embedded and constrained device scenarios. This article helps device developers decide which one to use for their application.
+
+The following diagram shows four common scenarios in which customers connect devices to Azure IoT, using a C-based (C99) SDK. The rest of this article provides more details on each scenario.
++
+## Scenario 1 – Azure IoT C SDK (for Linux and Windows)
+
+Starting in 2015, [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) was the first Azure SDK created to connect devices to IoT services. It's a stable platform that was built to provide the following capabilities for connecting devices to Azure IoT:
+- IoT Hub services
+- Device Provisioning Service clients
+- Three choices of communication transport (MQTT, AMQP and HTTP), which are created and maintained by Microsoft
+- Multiple choices of common TLS stacks (OpenSSL, Schannel, and Mbed TLS according to the target platform)
+- TCP sockets (Win32, Berkeley or Mbed)
+
+Providing communication transport, TLS and socket abstraction has a performance cost. Many paths require `malloc` and `memcpy` calls between the various abstraction layers. This performance cost is small compared to a desktop or a Raspberry Pi device. Yet on a truly constrained device, the cost becomes significant overhead with the possibility of memory fragmentation. The communication transport layer also requires a `doWork` function to be called at least every 100 milliseconds. These frequent calls make it harder to optimize the SDK for battery powered devices. The existence of multiple abstraction layers also makes it hard for customers to use or change to any given library.
+
+Scenario 1 is recommended for Windows or Linux devices, which are normally less sensitive to memory usage or power consumption. However, Windows and Linux-based devices can also use the Embedded C SDK as shown in Scenario 2. Other options for Windows and Linux-based devices include the other Azure IoT device SDKs: [Java SDK](https://github.com/Azure/azure-iot-sdk-java), [.NET SDK](https://github.com/Azure/azure-iot-sdk-csharp), [Node SDK](https://github.com/Azure/azure-iot-sdk-node) and [Python SDK](https://github.com/Azure/azure-iot-sdk-python).
+
+## Scenario 2 – Embedded C SDK (for Bare Metal scenarios and micro-controllers)
+
+In 2020, Microsoft released the [Azure SDK for Embedded C](https://github.com/Azure/azure-sdk-for-c/tree/main/sdk/docs/iot) (also known as the Embedded C SDK). This SDK was built based on customer feedback and a growing need to support constrained [micro-controller devices](concepts-iot-device-types.md#microcontrollers-vs-microprocessors). Typically, constrained micro-controllers have reduced memory and processing power.
+
+The Embedded C SDK has the following key characteristics:
+- No dynamic memory allocation. Customers must allocate data structures where they desire such as in global memory, a heap, or a stack. Then they must pass the address of the allocated structure into SDK functions to initialize and perform various operations.
+- MQTT only. MQTT-only usage is ideal for constrained devices because it's an efficient, lightweight network protocol. Currently only MQTT v3.1.1 is supported.
+- Bring your own network stack. The Embedded C SDK performs no I/O operations. This approach allows customers to select the MQTT, TLS and Socket clients that have the best fit to their target platform.
+- Similar [feature set](concepts-iot-device-types.md#microcontrollers-vs-microprocessors) as the C SDK. The Embedded C SDK provides similar features to the Azure IoT C SDK, except for the following features, which the Embedded C SDK doesn't provide:
+ - Upload to blob
+ - The ability to run as an IoT Edge module
+ - AMQP-based features like content message batching and device multiplexing
+- Smaller overall [footprint](https://github.com/Azure/azure-sdk-for-c/tree/main/sdk/docs/iot#size-chart). The Embedded C SDK, as seen in a sample that shows how to connect to IoT Hub, can take as little as 74 KB of ROM and 8.26 KB of RAM.
+
+The Embedded C SDK supports micro-controllers with no operating system, micro-controllers with a real-time operating system (like Azure RTOS), Linux, and Windows. Customers can implement custom platform layers to use the SDK on custom devices. The SDK also provides some platform layers such as [Arduino](https://github.com/Azure/azure-sdk-for-c-arduino), and [Swift](https://github.com/Azure-Samples/azure-sdk-for-c-swift). Microsoft encourages the community to submit other platform layers to increase the out-of-the-box supported platforms. Wind River [VxWorks](https://github.com/Azure/azure-sdk-for-c/blob/main/sdk/samples/iot/docs/how_to_iot_hub_samples_vxworks.md) is an example of a platform layer submitted by the community.
+
+The Embedded C SDK adds some programming benefits because of its flexibility compared to the Azure IoT C SDK. In particular, applications that use constrained devices will benefit from enormous resource savings and greater programmatic control. In comparison, if you use Azure RTOS or FreeRTOS, you can have these same benefits along with other features per RTOS implementation.
+
+## Scenario 3 – Azure RTOS with Azure RTOS middleware (for Azure RTOS-based projects)
+
+Scenario 3 involves using Azure RTOS and the [Azure RTOS middleware](https://github.com/azure-rtos/netxduo/tree/master/addons/azure_iot). The Azure RTOS middleware is built on top of the Embedded C SDK and adds MQTT and TLS support. The middleware for Azure RTOS exposes APIs for the application that are similar to the native Azure RTOS APIs. This approach makes it simpler for developers to use the APIs and connect their Azure RTOS-based devices to Azure IoT. Azure RTOS is a fully integrated, efficient, real-time embedded platform that provides all the networking and IoT features you need for your solution.
+
+Samples for several popular developer kits from ST, NXP, Renesas, and Microchip, are available. These samples work with Azure IoT Hub or Azure IoT Central, and are available as IAR Workbench or semiconductor IDE projects on [GitHub](https://github.com/azure-rtos/samples).
+
+Because it's based on the Embedded C SDK, the Azure IoT middleware for Azure RTOS is non-memory allocating. Customers must allocate SDK data structures in global memory, or a heap, or a stack. After customers allocate a data structure, they must pass the address of the structure into the SDK functions to initialize and perform various operations.
+
+## Scenario 4 – FreeRTOS with FreeRTOS middleware (for use with FreeRTOS-based projects)
+
+Scenario 4 brings the embedded C middleware to FreeRTOS. The embedded C middleware is built on top of the Embedded C SDK and adds MQTT support via the open source coreMQTT library. This middleware for FreeRTOS operates at the MQTT level. It establishes the MQTT connection, subscribes and unsubscribes from topics, and sends and receives messages. Disconnections are handled by the customer via middleware APIs.
+
+Customers control the TLS/TCP configuration and connection to the endpoint. This approach allows for flexibility between software or hardware implementations of either stack. No background tasks are created by the Azure IoT middleware for FreeRTOS. Messages are sent and received synchronously.
+
+The core implementation is provided in this [GitHub repository](https://github.com/Azure/azure-iot-middleware-freertos). Samples for several popular developer kits are available, including the NXP1060, STM32, and ESP32. The samples work with Azure IoT Hub, Azure IoT Central, and Azure Device Provisioning Service, and are available in this [GitHub repository](https://github.com/Azure-Samples/iot-middleware-freertos-samples).
+
+Because it's based on the Azure Embedded C SDK, the Azure IoT middleware for FreeRTOS is also non-memory allocating. Customers must allocate SDK data structures in global memory, or a heap, or a stack. After customers allocate a data structure, they must pass the address of the allocated structures into the SDK functions to initialize and perform various operations.
+
+## C-based SDK technical usage scenarios
+
+The following diagram summarizes technical options for each SDK usage scenario described in this article.
++
+## C-based SDK comparison by memory and protocols
+
+The following table compares the four device SDK development scenarios based on memory and protocol usage.
+
+| &nbsp; | **Memory <br>allocation** | **Memory <br>usage** | **Protocols <br>supported** | **Recommended for** |
+| :-- | :-- | :-- | :-- | :-- |
+| **Azure IoT C SDK** | Mostly Dynamic | Unrestricted. Can span <br>to 1 MB or more in RAM. | AMQP<br>HTTP<br>MQTT v3.1.1 | Microprocessor-based systems<br>Microsoft Windows<br>Linux<br>Apple OS X |
+| **Azure SDK for Embedded C** | Static only | Restricted by amount of <br>data application allocates. | MQTT v3.1.1 | Micro-controllers <br>Bare-metal Implementations <br>RTOS-based implementations |
+| **Azure IoT Middleware for Azure RTOS** | Static only | Restricted | MQTT v3.1.1 | Micro-controllers <br>RTOS-based implementations |
+| **Azure IoT Middleware for FreeRTOS** | Static only | Restricted | MQTT v3.1.1 | Micro-controllers <br>RTOS-based implementations |
+
+## Azure IoT features supported by each SDK
+
+The following table compares the four device SDK development scenarios based on support for Azure IoT features.
+
+| &nbsp; | **Azure IoT C SDK** | **Azure SDK for <br>Embedded C** | **Azure IoT <br>middleware for <br>Azure RTOS** | **Azure IoT <br>middleware for <br>FreeRTOS** |
+| :-- | :-- | :-- | :-- | :-- |
+| SAS Client Authentication | Yes | Yes | Yes | Yes |
+| x509 Client Authentication | Yes | Yes | Yes | Yes |
+| Device Provisioning | Yes | Yes | Yes | Yes |
+| Telemetry | Yes | Yes | Yes | Yes |
+| Cloud-to-Device Messages | Yes | Yes | Yes | Yes |
+| Direct Methods | Yes | Yes | Yes | Yes |
+| Device Twin | Yes | Yes | Yes | Yes |
+| IoT Plug-And-Play | Yes | Yes | Yes | Yes |
+| Telemetry batching <br>(AMQP, HTTP) | Yes | No | No | No |
+| Uploads to Azure Blob | Yes | No | No | No |
+| Automatic integration in <br>IoT Edge hosted containers | Yes | No | No | No |
++
+## Next steps
+
+To learn more about device development and the available SDKs for Azure IoT, see the following articles.
+- [Azure IoT Device Development](index.yml)
+- [Which SDK should I use](about-iot-sdks.md)
iot-develop Iot Device Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/iot-device-selection.md
+
+ Title: Azure IoT prototyping device selection list
+description: This document provides guidance on choosing a hardware device for prototyping IoT Azure solutions.
++++ Last updated : 08/03/2022+
+# IoT device selection list
+
+This IoT device selection list aims to give partners a starting point with IoT hardware to build prototypes and proof-of-concepts quickly and easily.[^1]
+
+All boards listed support users of all experience