Updates from: 05/29/2023 01:11:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Access Token Claims Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-token-claims-reference.md
+
+ Title: Access token claims reference
+description: Claims reference with details on the claims included in access tokens issued by the Microsoft identity platform.
+Last updated : 05/26/2023
+# Access token claims reference
+
+Access tokens are [JSON web tokens (JWT)](https://wikipedia.org/wiki/JSON_Web_Token). JWTs contain the following pieces:
+
+- **Header** - Provides information about how to validate the token, including the type of token and its signing method.
+- **Payload** - Contains all of the important data about the user or application that's attempting to call the service.
+- **Signature** - The raw material used to validate the token.
+
+The pieces are separated by a period (`.`), and each piece is separately Base64url encoded.
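+
+As a rough sketch, the following Python example builds a toy unsigned token and decodes its segments to show this structure. Real tokens are signed by Azure AD and must be validated, not just decoded.
+
+```python
+import base64
+import json
+
+def b64url(data: dict) -> str:
+    # Encode a dict as an unpadded Base64url JSON segment, the way JWTs do.
+    return base64.urlsafe_b64encode(json.dumps(data).encode()).rstrip(b"=").decode()
+
+def decode_segment(segment: str) -> dict:
+    # Restore the stripped padding before decoding.
+    padded = segment + "=" * (-len(segment) % 4)
+    return json.loads(base64.urlsafe_b64decode(padded))
+
+# A toy, unsigned token that mimics the structure; real tokens come signed from Azure AD.
+token = ".".join([b64url({"typ": "JWT", "alg": "RS256"}), b64url({"aud": "api://example"}), "sig"])
+
+header, payload = (decode_segment(s) for s in token.split(".")[:2])
+print(header, payload)  # {'typ': 'JWT', 'alg': 'RS256'} {'aud': 'api://example'}
+```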
+
+Claims are present only if a value exists to fill them. An application shouldn't take a dependency on a claim being present. Examples include `pwd_exp` (not every tenant requires passwords to expire) and `family_name` ([client credential](v2-oauth2-client-creds-grant-flow.md) flows are on behalf of applications that don't have names). Claims used for access token validation are always present.
+
+The Microsoft identity platform uses some claims to help secure tokens for reuse. The description of `Opaque` marks these claims as not being for public consumption. These claims may or may not appear in a token, and new ones may be added without notice.
+
+## Header claims
+
+| Claim | Format | Description |
+|-|--|-|
+| `typ` | String - always `JWT` | Indicates that the token is a JWT.|
+| `alg` | String | Indicates the algorithm used to sign the token, for example, `RS256`. |
+| `kid` | String | Specifies the thumbprint for the public key used for validating the signature of the token. Emitted in both v1.0 and v2.0 access tokens. |
+| `x5t` | String | Functions the same (in use and value) as `kid`. `x5t` is a legacy claim emitted only in v1.0 access tokens for compatibility purposes. |
+
+## Payload claims
+
+| Claim | Format | Description | Authorization considerations |
+|-|--|-||
+| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. | This value must be validated; reject the token if the value doesn't match the intended audience. |
+| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant of the authenticated user. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. | The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
+|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. Use the value of `iss` if the claim isn't present. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. | |
+| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. | |
+| `nbf` | int, a Unix timestamp | Specifies the time after which the JWT can be processed. | |
+| `exp` | int, a Unix timestamp | Specifies the expiration time before which the JWT can be accepted for processing. A resource may reject the token before this time as well. The rejection can occur for a required change in authentication or when a token is revoked. | |
+| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. | |
+| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. | |
+| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies the authentication method of the subject of the token. | |
+| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `appid` may be used in authorization decisions. |
+| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `azp` may be used in authorization decisions. |
+| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates the authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. | |
+| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates the authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. | |
+| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. Use the value for username hints and in human-readable UI as a username. To receive this claim, use the `profile` scope. | Since this value is mutable, don't use it to make authorization decisions. |
+| `name` | String | Provides a human-readable value that identifies the subject of the token. The value can vary, it's mutable, and is for display purposes only. To receive this claim, use the `profile` scope. | Don't use this value to make authorization decisions. |
+| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. Only included for user tokens. | The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. |
+| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. The [client credential flow](v2-oauth2-client-creds-grant-flow.md) uses this set of permission in place of user scopes for application tokens. For user tokens, this set of values contains the assigned roles of the user on the target application. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures this claim on a per-application basis. Set the claim to `All` or `DirectoryRole`. May not be present in tokens obtained through the implicit flow due to token length concerns. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures the groups claim on a per-application basis. A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. | These values can be used for managing access, such as enforcing authorization to access a resource. |
+| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. | |
+| `groups:src1` | JSON object | Includes a link to the full groups list for the user when the list is too large for the token. For JWTs, emitted as a distributed claim; for SAML, as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources": "src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` | |
+| `sub` | String | The principal associated with the token. For example, the user of an application. This value is immutable; don't reassign or reuse it. The subject is a pairwise identifier that's unique to a particular application ID. If a single user signs into two different applications using two different client IDs, those applications receive two different values for the subject claim. Whether to rely on the two different values depends on architecture and privacy requirements. See also the `oid` claim, which does remain the same across applications within a tenant. | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
+| `oid` | String, a GUID | The immutable identifier for the requestor, which is the verified identity of the user or service principal. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, to receive this claim for users, use the `profile` scope. If a single user exists in multiple tenants, the user has a different object ID in each tenant. Even though the user logs into each account with the same credentials, the accounts are different. | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
+| `tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. | This value should be considered in combination with other claims in authorization decisions. |
+| `unique_name` | String, only present in v1.0 tokens | Provides a human-readable value that identifies the subject of the token. | This value isn't guaranteed to be unique within a tenant; use it only for display purposes. |
+| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. | |
+| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. | |
+| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. | |
+| `xms_cc` | JSON array of strings | Indicates whether the client application that acquired the token is capable of handling claims challenges. This claim is commonly used in Conditional Access and Continuous Access Evaluation scenarios. The resource server that the token is issued for controls the presence of the claim in it. For example, a service application. For more information, see [Claims challenges, claims requests and client capabilities](claims-challenge.md?tabs=dotnet). | Resource servers should check this claim in access tokens received from client applications. If this claim is present, resource servers can respond back with a claims challenge. The claims challenge requests more claims in a new access token to authorize access to a protected resource. |
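+
+The following sketch illustrates the kinds of checks the table describes, assuming the claims were already extracted from a signature-validated token. The audience, tenant, and scope values are placeholders.
+
+```python
+EXPECTED_AUDIENCE = "00001111-aaaa-2222-bbbb-3333cccc4444"  # placeholder API client ID
+ALLOWED_TENANTS = {"11112222-bbbb-3333-cccc-4444dddd5555"}  # placeholder tenant allowlist
+
+def check_claims(claims: dict) -> None:
+    # aud must match the intended audience exactly; otherwise reject the token.
+    if claims.get("aud") != EXPECTED_AUDIENCE:
+        raise PermissionError("audience mismatch")
+    # tid can restrict which tenants may call the API, combined with other claims.
+    if claims.get("tid") not in ALLOWED_TENANTS:
+        raise PermissionError("tenant not allowed")
+    # scp is a space-separated list of delegated scopes in user tokens.
+    scopes = set(claims.get("scp", "").split())
+    if "Files.Read" not in scopes:  # placeholder scope name
+        raise PermissionError("required scope missing")
+```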
+
+### Groups overage claim
+
+Azure AD limits the number of object IDs that it includes in the groups claim to stay within the size limit of the HTTP header. If a user is a member of more groups than the overage limit (150 for SAML tokens, 200 for JWT tokens, and only 6 if issued by using the implicit flow), then Azure AD doesn't emit the groups claim in the token. Instead, it includes an overage claim in the token that indicates to the application to query the Microsoft Graph API to retrieve the group membership of the user.
+
+```JSON
+{
+ ...
+ "_claim_names": {
+ "groups": "src1"
+ },
+ "_claim_sources": {
+ "src1": {
+ "endpoint": "[Url to get this user's group membership from]"
+ }
+ }
+ ...
+}
+```
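+
+A sketch of how a resource might follow the overage claim, assuming a separate Microsoft Graph access token for the user is available and the third-party `requests` library is installed:
+
+```python
+import requests
+
+def get_groups(claims: dict, graph_token: str) -> list[str]:
+    # If the groups claim is present, use it directly.
+    if "groups" in claims:
+        return claims["groups"]
+    # Otherwise look for the overage claim and call the endpoint it points to.
+    source = claims.get("_claim_names", {}).get("groups")
+    if source is None:
+        return []
+    endpoint = claims["_claim_sources"][source]["endpoint"]
+    # getMemberObjects is a POST action on Microsoft Graph.
+    response = requests.post(
+        endpoint,
+        headers={"Authorization": f"Bearer {graph_token}"},
+        json={"securityEnabledOnly": True},
+    )
+    response.raise_for_status()
+    return response.json()["value"]
+```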
+
+Use the `BulkCreateGroups.ps1` provided in the [App Creation Scripts](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/5-WebApp-AuthZ/5-2-Groups/AppCreationScripts) folder to help test overage scenarios.
+
+### v1.0 basic claims
+
+The v1.0 tokens include the following claims if applicable; v2.0 tokens don't include them by default. To use these claims in v2.0 tokens, the application requests them using [optional claims](active-directory-optional-claims.md).
+
+| Claim | Format | Description |
+|-|--|-|
+| `ipaddr`| String | The IP address the user authenticated from. |
+| `onprem_sid`| String, in [SID format](/windows/desktop/SecAuthZ/sid-components) | If the user has an on-premises identity, this claim provides their SID. Use this claim for authorization in legacy applications. |
+| `pwd_exp`| int, a Unix timestamp | Indicates when the user's password expires. |
+| `pwd_url`| String | A URL where users can reset their password. |
+| `in_corp`| boolean | Signals if the client is signing in from the corporate network. |
+| `nickname`| String | Another name for the user, separate from first or last name.|
+| `family_name` | String | Provides the last name, surname, or family name of the user as defined on the user object. |
+| `given_name` | String | Provides the first or given name of the user, as set on the user object. |
+| `upn` | String | The username of the user. May be a phone number, email address, or unformatted string. Only use for display purposes and providing username hints in reauthentication scenarios. |
+
+### amr claim
+
+Identities can authenticate in different ways, which may be relevant to the application. The `amr` claim is an array that can contain multiple items, such as `["mfa", "rsa", "pwd"]`, for an authentication that used both a password and the Authenticator app.
+
+| Value | Description |
+|--|-|
+| `pwd` | Password authentication, either a user's Microsoft password or a client secret of an application. |
+| `rsa` | Authentication was based on the proof of an RSA key, for example with the [Microsoft Authenticator app](https://aka.ms/AA2kvvu). This value also indicates the use of a self-signed JWT with a service-owned X.509 certificate for authentication. |
+| `otp` | One-time passcode using an email or a text message. |
+| `fed` | Indicates the use of a federated authentication assertion (such as JWT or SAML). |
+| `wia` | Windows Integrated Authentication |
+| `mfa` | Indicates the use of [Multi-factor authentication](../authentication/concept-mfa-howitworks.md). When this claim is present, the other authentication methods used are also included. |
+| `ngcmfa` | Equivalent to `mfa`, used for provisioning of certain advanced credential types. |
+| `wiaormfa`| The user used Windows or an MFA credential to authenticate. |
+| `none` | Indicates no completed authentication. |
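+
+A minimal sketch of an MFA check based on `amr`, which applies only to v1.0 tokens:
+
+```python
+def used_mfa(claims: dict) -> bool:
+    # amr is only present in v1.0 tokens; treat a missing claim as no MFA signal.
+    methods = claims.get("amr", [])
+    return "mfa" in methods or "ngcmfa" in methods
+```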
+
+## Next steps
+
+- Learn more about the [access tokens used in Azure AD](access-tokens.md).
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Title: Microsoft identity platform access tokens
-description: Learn about access tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints.
+ Title: Access tokens in the Microsoft identity platform
+description: Learn about access tokens used in the Microsoft identity platform.
- Previously updated : 03/29/2023
+ Last updated : 05/26/2023
-# Microsoft identity platform access tokens
+# Access tokens in the Microsoft identity platform
Access tokens enable clients to securely call protected web APIs. Web APIs use access tokens to perform authentication and authorization.
The Microsoft identity platform supports issuing any token version from any vers
Resources always own their tokens using the `aud` claim and are the only applications that can change their token details.
-## Claims in access tokens
-
-The claims reference content removed here (header claims, payload claims, the groups overage claim, v1.0 basic claims, and the amr claim) is identical to the new access token claims reference article above.
-
-## Access token lifetime
+## Token lifetime
The default lifetime of an access token is variable. When issued, the Microsoft identity platform assigns a random value between 60 and 90 minutes (75 minutes on average) as the default lifetime of an access token. The variation improves service resilience by spreading access token demand over time, which prevents hourly spikes in traffic to Azure AD.
If the application needs to validate an ID token or an access token, it should f
The Azure AD middleware has built-in capabilities for validating access tokens, see [samples](sample-v2-code.md) to find one in the appropriate language. There are also several third-party open-source libraries available for JWT validation. For more information about Azure AD authentication libraries and code samples, see the [authentication libraries](reference-v2-libraries.md).
-### Validating the signature
+### Validate the signature
A JWT contains three segments separated by the `.` character. The first segment is the **header**, the second is the **body**, and the third is the **signature**. Use the signature segment to evaluate the authenticity of the token.
Doing signature validation is outside the scope of this document. There are many
If the application has custom signing keys as a result of using the [claims-mapping](active-directory-claims-mapping.md) feature, append an `appid` query parameter that contains the application ID. For validation, use `jwks_uri` that points to the signing key information of the application. For example: `https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` contains a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`.
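As one possible shape of this validation, the following sketch uses the third-party PyJWT library. The tenant and audience values are placeholders, and production code should read `jwks_uri` from the metadata document rather than hard-coding the keys URL.

```python
import jwt  # PyJWT, assumed installed with its cryptography extras
from jwt import PyJWKClient

TENANT = "contoso.onmicrosoft.com"                  # placeholder tenant
AUDIENCE = "00001111-aaaa-2222-bbbb-3333cccc4444"   # placeholder API client ID

# The keys URL would normally come from the jwks_uri in the OpenID configuration.
jwks_client = PyJWKClient(f"https://login.microsoftonline.com/{TENANT}/discovery/v2.0/keys")

def validate(token: str) -> dict:
    # Picks the key whose kid matches the token header, then checks the
    # signature, expiry, and audience in one call.
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(token, signing_key.key, algorithms=["RS256"], audience=AUDIENCE)
```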
-### Claims based authorization
-
-For more information about validating the claims in a token to ensure security, see [Secure applications and APIs by validating claims](claims-validation.md)
-
## Token revocation

Refresh tokens are invalidated or revoked at any time, for different reasons. The reasons fall into the categories of timeouts and revocations.
Organizations can use [token lifetime configuration](configurable-token-lifetime
The server possibly revokes refresh tokens due to a change in credentials, or due to use or administrative action. Refresh tokens fall into two classes: tokens issued to confidential clients and tokens issued to public clients.

| Change | Password-based cookie | Password-based token | Non-password-based cookie | Non-password-based token | Confidential client token |
-||--|-||--||
+|--|--|-||--||
| Password expires | Stays alive | Stays alive | Stays alive | Stays alive | Stays alive |
| Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
| User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
A *non-password-based* login is one where the user didn't type in a password to
- Voice
- PIN
-For more information, see [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md).
+## See also
+
+- [Access token claims reference](access-token-claims-reference.md)
+- [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md)
+- [Secure applications and APIs by validating claims](claims-validation.md)
+
## Next steps

- Learn more about the [security tokens used in Azure AD](security-tokens.md).
+
active-directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configurable-token-lifetimes.md
You can set token lifetime policies for access tokens, SAML tokens, and ID token
Clients use access tokens to access a protected resource. An access token can be used only for a specific combination of user, client, and resource. Access tokens cannot be revoked and are valid until their expiry. A malicious actor that has obtained an access token can use it for extent of its lifetime. Adjusting the lifetime of an access token is a trade-off between improving system performance and increasing the amount of time that the client retains access after the user's account is disabled. Improved system performance is achieved by reducing the number of times a client needs to acquire a fresh access token.
-The default lifetime of an access token is variable. When issued, an access token's default lifetime is assigned a random value ranging between 60-90 minutes (75 minutes on average). The default lifetime also varies depending on the client application requesting the token or if conditional access is enabled in the tenant. For more information, see [Access token lifetime](access-tokens.md#access-token-lifetime).
+The default lifetime of an access token is variable. When issued, an access token's default lifetime is assigned a random value ranging between 60-90 minutes (75 minutes on average). The default lifetime also varies depending on the client application requesting the token or if conditional access is enabled in the tenant. For more information, see [Access token lifetime](access-tokens.md#token-lifetime).
### SAML tokens
active-directory Custom Rbac For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-rbac-for-developers.md
Developers have the flexibility to provide their own implementation for how role
Azure AD allows you to [define app roles](./howto-add-app-roles-in-azure-ad-apps.md) for your application and assign those roles to users and other applications. The roles you assign to a user or application define their level of access to the resources and operations in your application.
-When Azure AD issues an access token for an authenticated user or application, it includes the names of the roles you've assigned the entity (the user or application) in the access token's [`roles`](./access-tokens.md#payload-claims) claim. An application like a web API that receives that access token in a request can then make authorization decisions based on the values in the `roles` claim.
+When Azure AD issues an access token for an authenticated user or application, it includes the names of the roles you've assigned the entity (the user or application) in the access token's [`roles`](./access-token-claims-reference.md#payload-claims) claim. An application like a web API that receives that access token in a request can then make authorization decisions based on the values in the `roles` claim.
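+
+A minimal sketch of that check in a web API, assuming the token has already been validated and its claims parsed into a dictionary; the role name is a placeholder:
+
+```python
+def require_role(claims: dict, required_role: str = "Orders.Manage") -> None:
+    # roles holds the app roles assigned to the calling user or application.
+    if required_role not in claims.get("roles", []):
+        raise PermissionError(f"caller lacks the {required_role} role")
+```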
### Groups
-Developers can also use [Azure AD groups](../fundamentals/active-directory-manage-groups.md) to implement RBAC in their applications, where the memberships of the user in specific groups are interpreted as their role memberships. When an organization uses Azure AD groups, a [groups claim](./access-tokens.md#payload-claims) is included in the token that specifies the identifiers of all of the groups to which the user is assigned within the current Azure AD tenant.
+Developers can also use [Azure AD groups](../fundamentals/active-directory-manage-groups.md) to implement RBAC in their applications, where the memberships of the user in specific groups are interpreted as their role memberships. When an organization uses groups, the token includes a [groups claim](./access-token-claims-reference.md#payload-claims). The group claim specifies the identifiers of all of the assigned groups of the user within the tenant.
> [!IMPORTANT]
-> When working with groups, developers need to be aware of the concept of an [overage claim](./access-tokens.md#payload-claims). By default, if a user is a member of more than the overage limit (150 for SAML tokens, 200 for JWT tokens, 6 if using the implicit flow), Azure AD doesn't emit a groups claim in the token. Instead, it includes an "overage claim" in the token that indicates the consumer of the token needs to query the Microsoft Graph API to retrieve the group memberships of the user. For more information about working with overage claims, see [Claims in access tokens](./access-tokens.md#claims-in-access-tokens). It's possible to only emit groups that are assigned to an application, though [group-based assignment](../manage-apps/assign-user-or-group-access-portal.md) does require Azure Active Directory Premium P1 or P2 edition.
+> When working with groups, developers need to be aware of the concept of an [overage claim](./access-token-claims-reference.md#payload-claims). By default, if a user is a member of more than the overage limit (150 for SAML tokens, 200 for JWT tokens, 6 if using the implicit flow), Azure AD doesn't emit a groups claim in the token. Instead, it includes an "overage claim" in the token that indicates the consumer of the token needs to query the Microsoft Graph API to retrieve the group memberships of the user. For more information about working with overage claims, see [Claims in access tokens](./access-token-claims-reference.md). It's possible to only emit groups that are assigned to an application, though [group-based assignment](../manage-apps/assign-user-or-group-access-portal.md) does require Azure Active Directory Premium P1 or P2 edition.
### Custom data store
-App roles and groups both store information about user assignments in the Azure AD directory. Another option for managing user role information that is available to developers is to maintain the information outside of the directory in a custom data store. For example, in a SQL database, Azure Table storage, or Azure Cosmos DB for Table.
+App roles and groups both store information about user assignments in the Azure AD directory. Another option for managing user role information that is available to developers is to maintain the information outside of the directory in a custom data store. For example, in an SQL database, Azure Table storage, or Azure Cosmos DB for Table.
-Using custom storage allows developers extra customization and control over how to assign roles to users and how to represent them. However, the extra flexibility also introduces more responsibility. For example, there's no mechanism currently available to include this information in tokens returned from Azure AD. If developers maintain role information in a custom data store, they'll need to have the applications retrieve the roles. Retrieving the roles is typically done using extensibility points defined in the middleware available to the platform that's being used to develop the application. Developers are responsible for properly securing the custom data store.
+Using custom storage allows developers extra customization and control over how to assign roles to users and how to represent them. However, the extra flexibility also introduces more responsibility. For example, there's no mechanism currently available to include this information in tokens returned from Azure AD. Applications must retrieve the roles if role information is maintained in a custom data store. Retrieving the roles is typically done using extensibility points defined in the middleware available to the platform that's being used to develop the application. Developers are responsible for properly securing the custom data store.
Using [Azure AD B2C Custom policies](../../active-directory-b2c/custom-policy-overview.md) it's possible to interact with custom data stores and to include custom claims within a token.
active-directory Whats Deprecated Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-deprecated-azure-ad.md
Use the following table to learn about changes including deprecations, retiremen
|Functionality, feature, or service|Change|Change date |
|||:|
-|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023|
|[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
|[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
+|[Microsoft Authenticator Lite for Outlook mobile](../../active-directory/authentication/how-to-mfa-authenticator-lite.md)|Feature change|Jun 9, 2023|
|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Sometime after GA|
|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023|
|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023|
Use the following table to learn about changes including deprecations, retiremen
|Functionality, feature, or service|Change|Change date |
|||:|
+|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023|
|[Azure AD Domain Services virtual network deployments](../../active-directory-domain-services/overview.md)|Retirement|Mar 1, 2023|
|[License management API, PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/migrate-your-apps-to-access-the-license-managements-apis-from/ba-p/2464366)|Retirement|*Mar 31, 2023|
active-directory Entitlement Management Group Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-group-licenses.md
Title: Manage the lifecycle of group-based licenses in Azure AD
description: This step-by-step tutorial shows how to create an access package for managing group-based licenses in entitlement management.
- Previously updated : 01/25/2023
+ Last updated : 05/25/2023
For more information, see [License requirements](entitlement-management-overview
1. Select **Next: Requests** to go to the **Requests** tab.
- On this tab, you create a request policy. A *policy* defines the rules for access to an access package. You'll create a policy that allows employees in the resource directory to request the access package.
+ On this tab, you create a request policy. A *policy* defines the rules for access to an access package. You create a policy that allows employees in the resource directory to request the access package.
3. In the **Users who can request access** section, select **For users in your directory** and then select **All members (excluding guests)**. These settings make it so that only members of your directory can request Office licenses.
For more information, see [License requirements](entitlement-management-overview
2. In the **Expiration** section, for **Access package assignments expire**, select **Number of days**.
-3. In **Assignments expire after**, enter **365**. This box specifies when members who have access to the access package will need to renew their access.
+3. In **Assignments expire after**, enter **365**. This box specifies when members who have access to the access package need to renew their access.
4. You can also configure access reviews, which allow periodic checks of whether the employee still needs access to the access package. A review can be a self-review performed by the employee. Or you can set the employee's manager or another person as the reviewer. For more information, see [Access reviews](entitlement-management-access-reviews-create.md). In this scenario, you want all employees to review whether they still need a license for Office each year.
 1. Under **Require access reviews**, select **Yes**.
- 2. You can leave **Starting on** set to the current date. This date is when the access review will start. After you create an access review, you can't update its start date.
- 3. Under **Review frequency**, select **Annually**, because the review will occur once per year. The **Review frequency** box is where you determine how often the access review runs.
- 4. Specify a **Duration (in days)**. The duration box is where you indicate how many days each occurrence of the access review series will run.
+ 2. You can leave **Starting on** set to the current date. This date is when the access review starts. After you create an access review, you can't update its start date.
+ 3. Under **Review frequency**, select **Annually**, because the review occurs once per year. The **Review frequency** box is where you determine how often the access review runs.
+ 4. Specify a **Duration (in days)**. The duration box is where you indicate how many days each occurrence of the access review series runs.
 5. Under **Reviewers**, select **Manager**.

## Step 6: Review and create your access package
active-directory Identity Governance Organizational Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-organizational-roles.md
Title: Govern access with an organizational role model
description: Microsoft Entra Identity Governance allows you to model organizational roles using access packages, so you can migrate your existing role definitions to entitlement management.
editor: markwahl-msft
- Previously updated : 12/1/2022
+ Last updated : 05/26/2023
For example, an organization may have an existing organizational role model simi
|Role Name|Permissions the role provides|Automatic assignment to the role|Request-based assignment to the role|Separation of duties checks|
|:--|-|-|-|-|
|*Salesperson*|Member of **Sales** Team|Yes|No|None|
-|*Sales Solution Manager*|The permissions of *Salesperson*, and **Solution manager** app role in the Sales application|None|A salesperson can request, requires manager approval and quarterly review|Requestor cannot be a *Sales Account Manager*|
-|*Sales Account Manager*|The permissions of *Salesperson*, and **Account manager** app role in the Sales application|None|A salesperson can request, requires manager approval and quarterly review|Request cannot be a *Sales Solution Manager*|
-|*Sales Support*|Same permissions as a *Salesperson*|None|Any non-salesperson can request, requires manager approval and quarterly review|Requestor cannot be a *Salesperson*|
+|*Sales Solution Manager*|The permissions of *Salesperson*, and **Solution manager** app role in the Sales application|None|A salesperson can request, requires manager approval and quarterly review|Requestor can't be a *Sales Account Manager*|
+|*Sales Account Manager*|The permissions of *Salesperson*, and **Account manager** app role in the Sales application|None|A salesperson can request, requires manager approval and quarterly review|Requestor can't be a *Sales Solution Manager*|
+|*Sales Support*|Same permissions as a *Salesperson*|None|Any nonsalesperson can request, requires manager approval and quarterly review|Requestor can't be a *Salesperson*|
This could be represented in Entra Identity Governance as an access package catalog containing four access packages.
The next sections outline the process for migration, creating the Azure AD and M
### Connect apps whose permissions are referenced in the organizational roles to Azure AD
-If your organizational roles are used to assign permissions that control access to non-Microsoft SaaS apps, on-premises apps or your own cloud apps, then you will need to connect your applications to Azure AD.
+If your organizational roles are used to assign permissions that control access to non-Microsoft SaaS apps, on-premises apps or your own cloud apps, then you'll need to connect your applications to Azure AD.
In order for an access package representing an organizational role to be able to refer to an application's roles as the permissions to include in the role, for an application that has multiple roles and supports modern standards such as SCIM, you should [integrate the application with Azure AD](identity-governance-applications-integrate.md) and ensure that the application's roles are listed in the application manifest.
+If the application only has a single role, then you should still [integrate the application with Azure AD](identity-governance-applications-integrate.md). For applications that don't support SCIM, Azure AD can write users into an application's existing directory or SQL database, or add AD users into an AD group.
+If the application only has a single role, then you should still [integrated the application with Azure AD](identity-governance-applications-integrate.md). For applications that don't support SCIM, Azure AD can write users into an application's existing directory or SQL database, or add AD users into an AD group.
### Populate Azure AD schema used by apps and for user scoping rules in the organizational roles
-If your role definitions include statements of the form "all users with these attribute values get assigned to the role automatically" or "users with these attribute values are allowed to request", then you will need to ensure those attributes are present in Azure AD.
+If your role definitions include statements of the form "all users with these attribute values get assigned to the role automatically" or "users with these attribute values are allowed to request", then you'll need to ensure those attributes are present in Azure AD.
You can [extend the Azure AD schema](../app-provisioning/user-provisioning-sync-attributes-for-mapping.md) and then populate those attributes either from on-premises AD, via Azure AD Connect, or from an HR system such as Workday or SuccessFactors. ### Create catalogs for delegation
-If the ongoing maintenance of roles is delegated, then you can delegate the administration of access packages by [creating a catalog](entitlement-management-catalog-create.md ) for each part of the organization you will be delegating to.
+If the ongoing maintenance of roles is delegated, then you can delegate the administration of access packages by [creating a catalog](entitlement-management-catalog-create.md) for each part of the organization you'll be delegating to.
If you have multiple catalogs to create, you can use a PowerShell script to [create each catalog](entitlement-management-catalog-create.md#create-a-catalog-with-powershell).
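A sketch of what such a script might look like, assuming the Microsoft Graph PowerShell SDK and the Graph v1.0 entitlement management endpoints (catalog names are placeholders):

```powershell
# A minimal sketch, assuming the Microsoft Graph PowerShell SDK and the
# v1.0 entitlement management endpoint; catalog names are placeholders.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

foreach ($name in @("Sales", "Engineering")) {
    Invoke-MgGraphRequest -Method POST `
        -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/catalogs" `
        -Body @{
            displayName         = $name
            description         = "Delegated catalog for $name"
            isExternallyVisible = $false
        }
}
```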
-If you are not planning to delegate the administration of the access packages, then you can keep the access packages in a single catalog.
+If you aren't planning to delegate the administration of the access packages, then you can keep the access packages in a single catalog.
### Add resources to the catalogs
-Now that you have the catalogs identified, then [add the applications, groups or sites](entitlement-management-catalog-create.md#add-resources-to-a-catalog) that will be included in the access packages representing the organization roles to the catalogs.
+Now that you've identified the catalogs, [add the applications, groups or sites](entitlement-management-catalog-create.md#add-resources-to-a-catalog) that are included in the access packages representing the organizational roles to those catalogs.
If you have many resources, you can use a PowerShell script to [add each resource to a catalog](entitlement-management-catalog-create.md#add-a-resource-to-a-catalog-with-powershell).
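A sketch of the per-resource call such a script might make, assuming the Microsoft Graph PowerShell SDK; the IDs are placeholders, and the request body shape should be verified against the linked article:

```powershell
# A minimal sketch, assuming the Microsoft Graph PowerShell SDK; the IDs are
# placeholders, and the body shape should be verified against the linked article.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

# Ask entitlement management to add an existing security group to a catalog.
# originSystem would differ for applications or SharePoint Online sites.
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/resourceRequests" `
    -Body @{
        requestType = "adminAdd"
        resource    = @{ originId = "<group-object-id>"; originSystem = "AadGroup" }
        catalog     = @{ id = "<catalog-id>" }
    }
```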
Each organizational role definition can be represented with an [access package](
You can use a PowerShell script to [create an access package in a catalog](entitlement-management-access-package-create.md#create-an-access-package-with-microsoft-powershell).
-Once you've created an access package, then you'll link one or more of the roles of the resources in the catalog to the access package. This represents the permissions of the organizational role.
+Once you've created an access package, link one or more of the roles of the resources in the catalog to the access package. These resource roles represent the permissions of the organizational role.
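A sketch of the access package creation call, assuming the Microsoft Graph PowerShell SDK; IDs and names are placeholders, and linking the resource roles is a follow-on call described in the linked article:

```powershell
# A minimal sketch, assuming the Microsoft Graph PowerShell SDK; the catalog
# ID and names are placeholders. Linking resource roles to the package is a
# follow-on call described in the linked article.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/accessPackages" `
    -Body @{
        displayName = "Salesperson"
        description = "Permissions of the Salesperson organizational role"
        catalog     = @{ id = "<catalog-id>" }
    }
```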
In addition, you'll [create a policy for direct assignment](entitlement-management-access-package-request-policy.md#none-administrator-direct-assignments-only) as part of that access package, which can be used to track the users who already have individual organizational role assignments.
### Create access package assignments for existing individual organizational role assignments
-If some of your users already have organizational role memberships, that they would not receive via automatic assignment, then you should [create direct assignments](entitlement-management-access-package-assignments.md#directly-assign-a-user) for those users to the corresponding access packages.
+If some of your users already have organizational role memberships that they wouldn't receive via automatic assignment, then you should [create direct assignments](entitlement-management-access-package-assignments.md#directly-assign-a-user) for those users to the corresponding access packages.
If you have many users who need assignments, you can use a PowerShell script to [assign each user to an access package](entitlement-management-access-package-assignments.md#assign-a-user-to-an-access-package-with-powershell). This would link the users to the direct assignment policy.
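A sketch of such an assignment script, assuming the Microsoft Graph PowerShell SDK; all IDs are placeholders:

```powershell
# A minimal sketch, assuming the Microsoft Graph PowerShell SDK; all IDs are
# placeholders. Files an "adminAdd" request for each user against the
# access package's direct assignment policy.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

foreach ($userId in @("<user-object-id-1>", "<user-object-id-2>")) {
    Invoke-MgGraphRequest -Method POST `
        -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignmentRequests" `
        -Body @{
            requestType = "adminAdd"
            assignment  = @{
                targetId           = $userId
                assignmentPolicyId = "<direct-assignment-policy-id>"
                accessPackageId    = "<access-package-id>"
            }
        }
}
```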
For each access package that is to be marked as incompatible with another, you c
### Add policies to access packages for users to be allowed to request
-If users who do not already have an organizational role are allowed to request and be approved to take on a role, then you can also configure entitlement management to allow users to request an access package. You can [add additional policies to an access package](entitlement-management-access-package-request-policy.md#choose-between-one-or-multiple-policies), and in each policy specify which users can request and who must approve.
+If users who don't already have an organizational role are allowed to request and be approved to take on a role, then you can also configure entitlement management to allow users to request an access package. You can [add additional policies to an access package](entitlement-management-access-package-request-policy.md#choose-between-one-or-multiple-policies), and in each policy specify which users can request and who must approve.
### Configure access reviews in access package assignment policies
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
Title: 'Lifecycle workflows FAQs (preview)' description: Frequently asked questions about Lifecycle workflows (preview). -+ Previously updated : 07/14/2022 Last updated : 05/26/2023 # Lifecycle workflows - FAQs (preview)
-In this article you will find questions to commonly asked questions about [Lifecycle Workflows](what-are-lifecycle-workflows.md). Please check back to this page frequently as changes happen often, and answers are continually being added.
+In this article, you'll find answers to commonly asked questions about [Lifecycle Workflows](what-are-lifecycle-workflows.md). Check back to this page frequently as changes happen often, and answers are continually being added.
## Frequently asked questions
For a small portion of our customers, Lifecycle Workflows may still be listed un
### Do I need to map employeeHireDate in provisioning apps like Workday?
-Yes, key user properties like employeeHireDate and employeeType are supported for user provisioning from HR apps like WorkDay. To use these properties in Lifecycle workflows, you will need to map them in the provisioning process to ensure the values are set. The following is an example of the mapping:
+Yes, key user properties like employeeHireDate and employeeType are supported for user provisioning from HR apps like Workday. To use these properties in Lifecycle workflows, you need to map them in the provisioning process to ensure the values are set. The following is an example of the mapping:
![Screenshot showing an example of how mapping is done in a Lifecycle Workflow.](./media/workflows-faqs/workflows-mapping.png)
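Once the mapping is in place, you can spot-check that the values actually flowed through to Azure AD; a minimal sketch, assuming the Microsoft Graph PowerShell SDK (the user is a placeholder):

```powershell
# A minimal sketch, assuming the Microsoft Graph PowerShell SDK; the user's
# UPN is a placeholder. Spot-checks that the mapped values were set.
Connect-MgGraph -Scopes "User.Read.All"

Get-MgUser -UserId "adele@contoso.com" `
    -Property "displayName,employeeHireDate,employeeType" |
    Select-Object DisplayName, EmployeeHireDate, EmployeeType
```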
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
Attribute mappings allow you to define how data should flow between the source t
| **Member** | Default. Users will be created as external members (B2B collaboration users) in the target tenant. Users will be able to function as any internal member of the target tenant. | | **Guest** | Users will be created as external guests (B2B collaboration users) in the target tenant. |
+ > [!NOTE]
+ > If the B2B user already exists in the target tenant, then the **userType** won't be changed to **Member** unless the **Apply this mapping** setting is set to **Always**.
+
The user type you choose has the following limitations for apps or services (but isn't limited to them): [!INCLUDE [user-type-workload-limitations-include](../includes/user-type-workload-limitations-include.md)]
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
Additionally, some checks can be configured through an [application setting], re
Requests that fail these built-in checks are given an HTTP `403 Forbidden` response.
-[Microsoft Identity Platform claims reference]: ../active-directory/develop/access-tokens.md#payload-claims
+[Microsoft Identity Platform claims reference]: ../active-directory/develop/access-token-claims-reference.md#payload-claims
## Configure client apps to access your App Service
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md
az network application-gateway create \
--frontend-port 80 \ --http-settings-port 80 \ --http-settings-protocol Http \
- --public-ip-address myAGPublicIPAddress
+ --public-ip-address myAGPublicIPAddress \
+ --priority 100
``` It may take several minutes for the application gateway to be created. After the application gateway is created, you can see these new features:
az network application-gateway rule create \
--http-listener backendListener \ --rule-type PathBasedRouting \ --url-path-map urlpathmap \
- --address-pool appGatewayBackendPool
+ --address-pool appGatewayBackendPool \
+ --priority 100
az network application-gateway rule create \ --gateway-name myAppGateway \
az network application-gateway rule create \
--http-listener redirectedListener \ --rule-type PathBasedRouting \ --url-path-map redirectpathmap \
- --address-pool appGatewayBackendPool
+ --address-pool appGatewayBackendPool \
+ --priority 100
``` ## Create virtual machine scale sets
az group delete --name myResourceGroupAG
## Next steps > [!div class="nextstepaction"]
-> [Learn more about what you can do with application gateway](./overview.md)
+> [Learn more about what you can do with application gateway](./overview.md)
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Global requests from clients can be processed by action group services in any re
|Notification type|Description |Fields| ||||
- |Email Azure Resource Manager role|Send an email to the subscription members, based on their role.<br>A notification email is sent only to the primary email address configured for the Azure AD user.<br>The email is only sent to Azure Active Directory **user** members of the selected role, not to Azure AD groups or service principals.<br> See [Configure the email address for the Email Azure Resource Manager role](#email).|Enter the primary email address configured for the Azure AD user. See [Configure the email address for the Email Azure Resource Manager role](#email).|
+ |Email Azure Resource Manager role|Send an email to the subscription members, based on their role.<br>A notification email is sent only to the primary email address configured for the Azure AD user.<br>The email is only sent to Azure Active Directory **user** members of the selected role, not to Azure AD groups or service principals.<br> See [Email](#email).|Enter the primary email address configured for the Azure AD user. See [Email](#email).|
|Email| Ensure that your email filtering and any malware/spam prevention services are configured appropriately. Emails are sent from the following email addresses:<br> * azure-noreply@microsoft.com<br> * azureemail-noreply@microsoft.com<br> * alerts-noreply@mail.windowsazure.com|Enter the email where the notification should be sent.| |SMS|SMS notifications support bi-directional communication. The SMS contains the following information:<br> * Shortname of the action group this alert was sent to<br> * The title of the alert.<br> A user can respond to an SMS to:<br> * Unsubscribe from all SMS alerts for all action groups or a single action group.<br> * Resubscribe to alerts<br> * Request help.<br> For more information about supported SMS replies, see [SMS replies](#sms-replies).|Enter the **Country code** and the **Phone number** for the SMS recipient. If you can't select your country/region code in the Azure portal, SMS isn't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). As a workaround until your country is supported, configure the action group to call a webhook to a third-party SMS provider that supports your country/region.| |Azure app Push notifications|Send notifications to the Azure mobile app. To enable push notifications to the Azure mobile app, provide the **Azure account email** address. For more information about the Azure mobile app, see [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).|In the **Azure account email** field, enter the email address that you use as your account ID when you configure the Azure mobile app. |
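Action groups with receivers like these can also be managed from PowerShell rather than the portal; a minimal sketch, assuming the Az.Monitor module (all receiver values are placeholders):

```powershell
# A minimal sketch, assuming the Az.Monitor PowerShell module (older-style
# cmdlets; newer module versions use *ReceiverObject cmdlets instead).
# All names, addresses, and numbers are placeholders.
$email = New-AzActionGroupReceiver -Name "oncall-email" `
    -EmailReceiver -EmailAddress "oncall@contoso.com"
$sms = New-AzActionGroupReceiver -Name "oncall-sms" `
    -SmsReceiver -CountryCode "1" -PhoneNumber "5555551212"

# Creates the action group, or updates it if it already exists.
Set-AzActionGroup -ResourceGroupName "myResourceGroup" `
    -Name "myActionGroup" -ShortName "myag" `
    -Receiver $email, $sms
```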
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 05/25/2023 Last updated : 05/28/2023
> [!NOTE] > This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
-Date list was last updated: 05/07/2023.
+Date list was last updated: 05/28/2023.
Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface).
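For example, one of the metrics documented in the tables below can be queried with the Az PowerShell module; a minimal sketch (the resource ID and metric name are placeholders to adapt):

```powershell
# A minimal sketch, assuming the Az.Monitor PowerShell module; the resource ID
# and metric name are placeholders taken from the IoT Hub table below.
$resourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Devices/IotHubs/<hub-name>"

# Pull the last hour of a metric at 5-minute grain, using the aggregation
# type listed for that metric in the tables (Total here).
Get-AzMetric -ResourceId $resourceId `
    -MetricName "d2c.endpoints.egress.storage" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -TimeGrain "00:05:00" -AggregationType Total
```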
This latest update adds a new column and reorders the metrics to be alphabetical
|\DirectoryServices(NTDS)\LDAP Searches/sec |Yes |NTDS - LDAP Searches/sec |CountPerSecond |Average |This metric indicates the average number of searches per second for the NTDS object. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit | |\DirectoryServices(NTDS)\LDAP Successful Binds/sec |Yes |NTDS - LDAP Successful Binds/sec |CountPerSecond |Average |This metric indicates the number of LDAP successful binds per second for the NTDS object. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit | |\DNS\Total Query Received/sec |Yes |DNS - Total Query Received/sec |CountPerSecond |Average |This metric indicates the average number of queries received by DNS server in each second. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit |
-|\DNS\Total Response Sent/sec |Yes |Total Response Sent/sec |CountPerSecond |Average |This metric indicates the average number of responses sent by DNS server in each second. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit |
+|\DNS\Total Response Sent/sec |Yes |Total Response Sent/sec |CountPerSecond |Average |This metric indicates the average number of responses sent by DNS server in each second. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit |
|\Memory\% Committed Bytes In Use |Yes |% Committed Bytes In Use |Percent |Average |This metric indicates the ratio of Memory\Committed Bytes to the Memory\Commit Limit. Committed memory is the physical memory in use for which space has been reserved in the paging file should it need to be written to disk. The commit limit is determined by the size of the paging file. If the paging file is enlarged, the commit limit increases, and the ratio is reduced. This counter displays the current percentage value only; it is not an average. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit | |\Process(dns)\% Processor Time |Yes |% Processor Time (dns) |Percent |Average |This metric indicates the percentage of elapsed time that all of dns process threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions are included in this count. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit | |\Process(lsass)\% Processor Time |Yes |% Processor Time (lsass) |Percent |Average |This metric indicates the percentage of elapsed time that all of lsass process threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions are included in this count. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit | |\Processor(_Total)\% Processor Time |Yes |Total Processor Time |Percent |Average |This metric indicates the percentage of elapsed time that the processor spends to execute a non-Idle thread. It is calculated by measuring the percentage of time that the processor spends executing the idle thread and then subtracting that value from 100%. (Each processor has an idle thread that consumes cycles when no other threads are ready to run). This counter is the primary indicator of processor activity, and displays the average percentage of busy time observed during the sample interval. It should be noted that the accounting calculation of whether the processor is idle is performed at an internal sampling interval of the system clock (10ms). On todays fast processors, % Processor Time can therefore underestimate the processor utilization as the processor may be spending a lot of time servicing threads between the system clock sampling interval. Workload based timer applications are one example of applications which are more likely to be measured inaccurately as timers are signaled just after the sample is taken. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit | |\Security System-Wide Statistics\Kerberos Authentications |Yes |Kerberos Authentications |CountPerSecond |Average |This metric indicates the number of times that clients use a ticket to authenticate to this computer per second. 
It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit |
-|\Security System-Wide Statistics\NTLM Authentications |Yes |NTLM Authentications |CountPerSecond |Average |This metric indicates the number of NTLM authentications processed per second for the Active Directory on this domain controller or for local accounts on this member server. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit |
+|\Security System-Wide Statistics\NTLM Authentications |Yes |NTLM Authentications |CountPerSecond |Average |This metric indicates the number of NTLM authentications processed per second for the Active Directory on this domain controller or for local accounts on this member server. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit |
## Microsoft.AnalysisServices/servers <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|connectedclients7 |Yes |Connected Clients (Shard 7) |Count |Maximum |The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions | |connectedclients8 |Yes |Connected Clients (Shard 8) |Count |Maximum |The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions | |connectedclients9 |Yes |Connected Clients (Shard 9) |Count |Maximum |The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions |
-|errors |Yes |Errors |Count |Maximum |The number errors that occurred on the cache. For more details, see https://aka.ms/redis/metrics. |ShardId, ErrorType |
+|ConnectedClientsUsingAADToken |Yes |Connected Clients using AAD Token (Instance Based) |Count |Maximum |The number of client connections to the cache using AAD Token. For more details, see https://aka.ms/redis/metrics. |ShardId, Port, Primary |
+|errors |Yes |Errors |Count |Maximum |The number of errors that occurred on the cache. For more details, see https://aka.ms/redis/metrics. |ShardId, ErrorType |
|evictedkeys |Yes |Evicted Keys |Count |Total |The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics. |ShardId | |evictedkeys0 |Yes |Evicted Keys (Shard 0) |Count |Total |The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions | |evictedkeys1 |Yes |Evicted Keys (Shard 1) |Count |Total |The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|getcommands7 |Yes |Gets (Shard 7) |Count |Total |The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions | |getcommands8 |Yes |Gets (Shard 8) |Count |Total |The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions | |getcommands9 |Yes |Gets (Shard 9) |Count |Total |The number of get operations from the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions |
+|LatencyP99 |Yes |99th percentile latency |Count |Maximum |Measures the worst-case (99th percentile) latency of server-side commands in microseconds. Measured by issuing PING commands from the load balancer to the Redis server and tracking the time to respond. |No Dimensions |
|operationsPerSecond |Yes |Operations Per Second |Count |Maximum |The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics. |ShardId | |operationsPerSecond0 |Yes |Operations Per Second (Shard 0) |Count |Maximum |The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions | |operationsPerSecond1 |Yes |Operations Per Second (Shard 1) |Count |Maximum |The number of instantaneous operations per second executed on the cache. For more details, see https://aka.ms/redis/metrics. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|cacheRead |Yes |Cache Read |BytesPerSecond |Maximum |The amount of data read from the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/enterprise/metrics. |InstanceId | |cacheWrite |Yes |Cache Write |BytesPerSecond |Maximum |The amount of data written to the cache in Megabytes per second (MB/s). For more details, see https://aka.ms/redis/enterprise/metrics. |InstanceId | |connectedclients |Yes |Connected Clients |Count |Maximum |The number of client connections to the cache. For more details, see https://aka.ms/redis/enterprise/metrics. |InstanceId |
-|errors |Yes |Errors |Count |Maximum |The number errors that occurred on the cache. For more details, see https://aka.ms/redis/enterprise/metrics. |InstanceId, ErrorType |
+|errors |Yes |Errors |Count |Maximum |The number of errors that occurred on the cache. For more details, see https://aka.ms/redis/enterprise/metrics. |InstanceId, ErrorType |
|evictedkeys |Yes |Evicted Keys |Count |Total |The number of items evicted from the cache. For more details, see https://aka.ms/redis/enterprise/metrics. |No Dimensions | |expiredkeys |Yes |Expired Keys |Count |Total |The number of items expired from the cache. For more details, see https://aka.ms/redis/enterprise/metrics. |No Dimensions | |geoReplicationHealthy |Yes |Geo Replication Healthy |Count |Maximum |The health of geo replication in an Active Geo-Replication group. 0 represents Unhealthy and 1 represents Healthy. For more details, see https://aka.ms/redis/enterprise/metrics. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |SignCompleted |Yes |SignCompleted |Count |Count |Completed Sign Request |CertType, Region, TenantId |
+## Microsoft.CognitiveSearch/indexes
+<!-- Data source : naam-->
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|IndexReadVCoreAllocationCurrent |No |Query Capacity Current vCore |Cores |Maximum |The currently allocated vCore capacity for querying an index |IndexSin |
+|IndexReadVCoreAllocationMaximum |No |Query Capacity Maximum vCore |Cores |Maximum |The upper bound of vCore usage for querying an index |IndexSin |
+|IndexReadVCoreAllocationMinimum |No |Query Capacity Minimum vCore |Cores |Maximum |The lower bound of vCore capacity for querying an index |IndexSin |
+|IndexWriteVCoreAllocationCurrent |No |Indexing Capacity Current vCore |Cores |Maximum |The currently allocated vCore consumption for indexing documents |IndexSin |
+|IndexWriteVCoreAllocationMaximum |No |Indexing Capacity Maximum vCore |Cores |Maximum |The upper bound of vCore usage for indexing documents |IndexSin |
+|IndexWriteVCoreAllocationMinimum |No |Indexing Capacity Minimum vCore |Cores |Maximum |The lower bound of vCore usage for indexing documents |IndexSin |
+ ## Microsoft.CognitiveServices/accounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |ActionFeatureIdOccurrences |Yes |Action Feature Occurrences |Count |Total |Number of times each action feature appears. |FeatureId, Mode, RunId | |ActionFeaturesPerEvent |Yes |Action Features Per Event |Count |Average |Average number of action features per event. |Mode, RunId |
-|ActionIdOccurrences |Yes |Action Occurrences |Count |Total |Number of times each action appears. |ActionId, Mode, RunId |
+|ActionIdOccurrences |Yes |Action Occurrences |Count |Total |Number of times each action appears. |ActionId, Mode, RunId |
|ActionNamespacesPerEvent |Yes |Action Namespaces Per Event |Count |Average |Average number of action namespaces per event. |Mode, RunId | |ActionsPerEvent |Yes |Actions Per Event |Count |Average |Number of actions per event. |Mode, RunId | |AudioSecondsTranscribed |Yes |Audio Seconds Transcribed |Count |Total |Number of seconds transcribed |ApiName, FeatureName, UsageChannel, Region |
This latest update adds a new column and reorders the metrics to be alphabetical
|CharactersTranslated |Yes |Characters Translated (Deprecated) |Count |Total |Total number of characters in incoming text request. |ApiName, OperationName, Region | |ClientErrors |Yes |Client Errors |Count |Total |Number of calls with client side error (HTTP response code 4xx). |ApiName, OperationName, Region, RatelimitKey | |ComputerVisionTransactions |Yes |Computer Vision Transactions |Count |Total |Number of Computer Vision Transactions |ApiName, FeatureName, UsageChannel, Region |
+|ContentSafetyImageAnalyzeRequestCount |Yes |Call Count for Image Moderation |Count |Total |Number of calls for image moderation. |ApiVersion |
+|ContentSafetyTextAnalyzeRequestCount |Yes |Call Count for Text Moderation |Count |Total |Number of calls for text moderation. |ApiVersion |
|ContextFeatureIdOccurrences |Yes |Context Feature Occurrences |Count |Total |Number of times each context feature appears. |FeatureId, Mode, RunId | |ContextFeaturesPerEvent |Yes |Context Features Per Event |Count |Average |Number of context features per event. |Mode, RunId | |ContextNamespacesPerEvent |Yes |Context Namespaces Per Event |Count |Average |Number of context namespaces per event. |Mode, RunId |
This latest update adds a new column and reorders the metrics to be alphabetical
|FeatureCardinality_Context |Yes |Feature Cardinality by Context |Count |Average |Feature Cardinality based on Context. |FeatureId, Mode, RunId | |FeatureCardinality_Slot |Yes |Feature Cardinality by Slot |Count |Average |Feature Cardinality based on Slot. |FeatureId, Mode, RunId | |FineTunedTrainingHours |Yes |Processed FineTuned Training Hours |Count |Total |Number of Training Hours Processed on an OpenAI FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|GeneratedTokens |Yes |Generated Completion Tokens |Count |Total |Number of Generated Tokens from an OpenAI Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|ImagesStored |Yes |Images Stored |Count |Total |Number of Custom Vision images stored. |ApiName, FeatureName, UsageChannel, Region | |Latency |Yes |Latency |MilliSeconds |Average |Latency in milliseconds. |ApiName, OperationName, Region, RatelimitKey | |LearnedEvents |Yes |Learned Events |Count |Total |Number of Learned Events. |IsMatchBaseline, Mode, RunId |
This latest update adds a new column and reorders the metrics to be alphabetical
|ProcessedHealthTextRecords |Yes |Processed Health Text Records |Count |Total |Number of health text records processed |ApiName, FeatureName, UsageChannel, Region | |ProcessedImages |Yes |Processed Images |Count |Total |Number of images processed |ApiName, FeatureName, UsageChannel, Region | |ProcessedPages |Yes |Processed Pages |Count |Total |Number of pages processed |ApiName, FeatureName, UsageChannel, Region |
+|ProcessedPromptTokens |Yes |Processed Prompt Tokens |Count |Total |Number of Prompt Tokens Processed on an OpenAI Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|ProcessedTextRecords |Yes |Processed Text Records |Count |Total |Count of Text Records. |ApiName, FeatureName, UsageChannel, Region | |QuestionAnsweringTextRecords |Yes |QA Text Records |Count |Total |Number of text records processed |ApiName, FeatureName, UsageChannel, Region | |Ratelimit |Yes |Ratelimit |Count |Total |The current ratelimit of the ratelimit key. |Region, RatelimitKey |
This latest update adds a new column and reorders the metrics to be alphabetical
|ApiRequestRouter |Yes |Job Router API Requests |Count |Count |Count of all requests against the Communication Services Job Router endpoint. |OperationName, StatusCode, StatusCodeSubClass, ApiVersion | |ApiRequests |Yes |Email Service API Requests |Count |Count |Email Communication Services API request metric for the data-plane API surface. |Operation, StatusCode, StatusCodeClass, StatusCodeReason | |APIRequestSMS |Yes |SMS API Requests |Count |Count |Count of all requests against the Communication Services SMS endpoint. |Operation, StatusCode, StatusCodeClass, ErrorCode, NumberType, Country, OptAction |
-|DeliveryStatusUpdate |Yes |Email Service Delivery Status Updates |Count |Count |Email Communication Services message delivery results. |MessageStatus, Result |
+|DeliveryStatusUpdate |Yes |Email Service Delivery Status Updates |Count |Count |Email Communication Services message delivery results. |MessageStatus, Result, SmtpStatusCode, EnhancedSmtpStatusCode, SenderDomain, IsHardBounce |
|UserEngagement |Yes |Email Service User Engagement |Count |Count |Email Communication Services user engagement metrics. |EngagementType | ## Microsoft.Compute/cloudservices
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |ClaimsProviderRequestLatency |Yes |Claims request execution time |Milliseconds |Average |The average execution time of requests to the customer claims provider endpoint in milliseconds. |IsSuccessful, FailureCategory | |ClaimsProviderRequests |Yes |Claims provider requests |Count |Total |Number of requests to claims provider |IsSuccessful, FailureCategory |
-|ConnectionServiceRequestRuntime |Yes |Vehicle connection service request execution time |Milliseconds |Average |Vehicle connection request execution time average in milliseconds |IsSuccessful, FailureCategory |
+|ConnectionServiceRequestRuntime |Yes |Vehicle connection service request execution time |Milliseconds |Average |Vehicle connection request execution time average in milliseconds |IsSuccessful, FailureCategory |
|ConnectionServiceRequests |Yes |Vehicle connection service requests |Count |Total |Total number of vehicle connection requests |IsSuccessful, FailureCategory | |DataPipelineMessageCount |Yes |Data pipeline message count |Count |Total |The total number of messages sent to the MCVP data pipeline for storage. |IsSuccessful, FailureCategory | |ExtensionInvocationCount |Yes |Extension invocation count |Count |Total |Total number of times an extension was called. |ExtensionName, IsSuccessful, FailureCategory |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|ActivityCancelledRuns |Yes |Canceled activity runs metrics |Count |Total |Canceled activity runs metrics |ActivityType, PipelineName, FailureType, Name |
+|ActivityCancelledRuns |Yes |Cancelled activity runs metrics |Count |Total |Cancelled activity runs metrics |ActivityType, PipelineName, FailureType, Name |
|ActivityFailedRuns |Yes |Failed activity runs metrics |Count |Total |Failed activity runs metrics |ActivityType, PipelineName, FailureType, Name | |ActivitySucceededRuns |Yes |Succeeded activity runs metrics |Count |Total |Succeeded activity runs metrics |ActivityType, PipelineName, FailureType, Name | |AirflowIntegrationRuntimeCeleryTaskTimeoutError |No |Airflow Integration Runtime Celery Task Timeout Error |Count |Total |Airflow Integration Runtime Celery Task Timeout Error |IntegrationRuntimeName |
This latest update adds a new column and reorders the metrics to be alphabetical
|IntegrationRuntimeQueueLength |Yes |Integration runtime queue length |Count |Average |Integration runtime queue length |IntegrationRuntimeName | |MaxAllowedFactorySizeInGbUnits |Yes |Maximum allowed factory size (GB unit) |Count |Maximum |Maximum allowed factory size (GB unit) |No Dimensions | |MaxAllowedResourceCount |Yes |Maximum allowed entities count |Count |Maximum |Maximum allowed entities count |No Dimensions |
-|PipelineCancelledRuns |Yes |Canceled pipeline runs metrics |Count |Total |Canceled pipeline runs metrics |FailureType, CanceledBy, Name |
+|PipelineCancelledRuns |Yes |Cancelled pipeline runs metrics |Count |Total |Cancelled pipeline runs metrics |FailureType, CancelledBy, Name |
|PipelineElapsedTimeRuns |Yes |Elapsed Time Pipeline Runs Metrics |Count |Total |Elapsed Time Pipeline Runs Metrics |RunId, Name | |PipelineFailedRuns |Yes |Failed pipeline runs metrics |Count |Total |Failed pipeline runs metrics |FailureType, Name | |PipelineSucceededRuns |Yes |Succeeded pipeline runs metrics |Count |Total |Succeeded pipeline runs metrics |FailureType, Name | |ResourceCount |Yes |Total entities count |Count |Maximum |Total entities count |No Dimensions |
-|SSISIntegrationRuntimeStartCancel |Yes |Canceled SSIS integration runtime start metrics |Count |Total |Canceled SSIS integration runtime start metrics |IntegrationRuntimeName |
+|SSISIntegrationRuntimeStartCancel |Yes |Cancelled SSIS integration runtime start metrics |Count |Total |Cancelled SSIS integration runtime start metrics |IntegrationRuntimeName |
|SSISIntegrationRuntimeStartFailed |Yes |Failed SSIS integration runtime start metrics |Count |Total |Failed SSIS integration runtime start metrics |IntegrationRuntimeName | |SSISIntegrationRuntimeStartSucceeded |Yes |Succeeded SSIS integration runtime start metrics |Count |Total |Succeeded SSIS integration runtime start metrics |IntegrationRuntimeName | |SSISIntegrationRuntimeStopStuck |Yes |Stuck SSIS integration runtime stop metrics |Count |Total |Stuck SSIS integration runtime stop metrics |IntegrationRuntimeName | |SSISIntegrationRuntimeStopSucceeded |Yes |Succeeded SSIS integration runtime stop metrics |Count |Total |Succeeded SSIS integration runtime stop metrics |IntegrationRuntimeName |
-|SSISPackageExecutionCancel |Yes |Canceled SSIS package execution metrics |Count |Total |Canceled SSIS package execution metrics |IntegrationRuntimeName |
+|SSISPackageExecutionCancel |Yes |Cancelled SSIS package execution metrics |Count |Total |Cancelled SSIS package execution metrics |IntegrationRuntimeName |
|SSISPackageExecutionFailed |Yes |Failed SSIS package execution metrics |Count |Total |Failed SSIS package execution metrics |IntegrationRuntimeName | |SSISPackageExecutionSucceeded |Yes |Succeeded SSIS package execution metrics |Count |Total |Succeeded SSIS package execution metrics |IntegrationRuntimeName |
-|TriggerCancelledRuns |Yes |Canceled trigger runs metrics |Count |Total |Canceled trigger runs metrics |Name, FailureType |
+|TriggerCancelledRuns |Yes |Cancelled trigger runs metrics |Count |Total |Cancelled trigger runs metrics |Name, FailureType |
|TriggerFailedRuns |Yes |Failed trigger runs metrics |Count |Total |Failed trigger runs metrics |Name, FailureType | |TriggerSucceededRuns |Yes |Succeeded trigger runs metrics |Count |Total |Succeeded trigger runs metrics |Name, FailureType |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|JobAUEndedCancelled |Yes |Canceled AU Time |Seconds |Total |Total AU time for cancelled jobs. |No Dimensions |
+|JobAUEndedCancelled |Yes |Cancelled AU Time |Seconds |Total |Total AU time for cancelled jobs. |No Dimensions |
|JobAUEndedFailure |Yes |Failed AU Time |Seconds |Total |Total AU time for failed jobs. |No Dimensions | |JobAUEndedSuccess |Yes |Successful AU Time |Seconds |Total |Total AU time for successful jobs. |No Dimensions |
-|JobEndedCancelled |Yes |Canceled Jobs |Count |Total |Count of cancelled jobs. |No Dimensions |
+|JobEndedCancelled |Yes |Cancelled Jobs |Count |Total |Count of cancelled jobs. |No Dimensions |
|JobEndedFailure |Yes |Failed Jobs |Count |Total |Count of failed jobs. |No Dimensions | |JobEndedSuccess |Yes |Successful Jobs |Count |Total |Count of successful jobs. |No Dimensions | |JobStage |Yes |Jobs in Stage |Count |Total |Number of jobs in each stage. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|disk_iops_consumed_percentage |Yes |Disk IOPS Consumed Percentage (Preview) |Percent |Average |Percentage of disk I/Os consumed per minute |No Dimensions | |disk_queue_depth |Yes |Disk Queue Depth |Count |Average |Number of outstanding I/O operations to the data disk |No Dimensions | |iops |Yes |IOPS |Count |Average |IO Operations per second |No Dimensions |
+|is_db_alive |Yes |Database Is Alive (Preview) |Count |Maximum |Indicates if the database is up or not |No Dimensions |
|logical_replication_delay_in_bytes |Yes |Max Logical Replication Lag (Preview) |Bytes |Maximum |Maximum lag across all logical replication slots |No Dimensions | |longest_query_time_sec |Yes |Oldest Query (Preview) |Seconds |Maximum |The age in seconds of the longest query that is currently running |No Dimensions | |longest_transaction_time_sec |Yes |Oldest Transaction (Preview) |Seconds |Maximum |The age in seconds of the longest transaction (including idle transactions) |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|configurations |Yes |Configuration Metrics |Count |Total |Metrics for Configuration Operations |No Dimensions | |connectedDeviceCount |No |Connected devices |Count |Average |Number of devices connected to your IoT hub |No Dimensions | |d2c.endpoints.egress.builtIn.events |Yes |Routing: messages delivered to messages/events |Count |Total |The number of times IoT Hub routing successfully delivered messages to the built-in endpoint (messages/events). |No Dimensions |
-|d2c.endpoints.egress.eventHubs |Yes |Routing: messages delivered to Event Hubs |Count |Total |The number of times IoT Hub routing successfully delivered messages to Event Hubs endpoints. |No Dimensions |
+|d2c.endpoints.egress.eventHubs |Yes |Routing: messages delivered to Event Hub |Count |Total |The number of times IoT Hub routing successfully delivered messages to Event Hub endpoints. |No Dimensions |
|d2c.endpoints.egress.serviceBusQueues |Yes |Routing: messages delivered to Service Bus Queue |Count |Total |The number of times IoT Hub routing successfully delivered messages to Service Bus queue endpoints. |No Dimensions | |d2c.endpoints.egress.serviceBusTopics |Yes |Routing: messages delivered to Service Bus Topic |Count |Total |The number of times IoT Hub routing successfully delivered messages to Service Bus topic endpoints. |No Dimensions | |d2c.endpoints.egress.storage |Yes |Routing: messages delivered to storage |Count |Total |The number of times IoT Hub routing successfully delivered messages to storage endpoints. |No Dimensions | |d2c.endpoints.egress.storage.blobs |Yes |Routing: blobs delivered to storage |Count |Total |The number of times IoT Hub routing delivered blobs to storage endpoints. |No Dimensions | |d2c.endpoints.egress.storage.bytes |Yes |Routing: data delivered to storage |Bytes |Total |The amount of data (bytes) IoT Hub routing delivered to storage endpoints. |No Dimensions | |d2c.endpoints.latency.builtIn.events |Yes |Routing: message latency for messages/events |MilliSeconds |Average |The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into the built-in endpoint (messages/events). |No Dimensions |
-|d2c.endpoints.latency.eventHubs |Yes |Routing: message latency for Event Hubs |MilliSeconds |Average |The average latency (milliseconds) between message ingress to IoT Hub and message ingress into an Event Hubs endpoint. |No Dimensions |
+|d2c.endpoints.latency.eventHubs |Yes |Routing: message latency for Event Hub |MilliSeconds |Average |The average latency (milliseconds) between message ingress to IoT Hub and message ingress into an Event Hub endpoint. |No Dimensions |
|d2c.endpoints.latency.serviceBusQueues |Yes |Routing: message latency for Service Bus Queue |MilliSeconds |Average |The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus queue endpoint. |No Dimensions | |d2c.endpoints.latency.serviceBusTopics |Yes |Routing: message latency for Service Bus Topic |MilliSeconds |Average |The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a Service Bus topic endpoint. |No Dimensions | |d2c.endpoints.latency.storage |Yes |Routing: message latency for storage |MilliSeconds |Average |The average latency (milliseconds) between message ingress to IoT Hub and telemetry message ingress into a storage endpoint. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|cassandra_client_request_timeouts |No |timeouts (deprecated) |Count |Total |Number of timeouts encountered. |cassandra_datacenter, cassandra_node, request_type | |cassandra_client_request_timeouts2 |No |timeouts |Count |Total |Number of timeouts encountered. |cassandra_datacenter, cassandra_node, request_type | |cassandra_client_request_unfinished_commit |No |unfinished commit |Count |Total |Number of transactions that were committed on write. |cassandra_datacenter, cassandra_node, request_type |
-|cassandra_commit_log_waiting_on_commit_latency_histogram |No |commit latency on waiting average (microseconds) |Count |Average |Average time spent waiting on CL fsync (in microseconds); for Periodic this only occurs when the sync is lagging its sync interval. |cassandra_datacenter, cassandra_node, quantile |
+|cassandra_commit_log_waiting_on_commit_latency_histogram |No |commit latency on waiting average (microseconds) |Count |Average |Average time spent waiting on CL fsync (in microseconds); for Periodic this only occurs when the sync is lagging its sync interval. |cassandra_datacenter, cassandra_node, quantile |
|cassandra_cql_prepared_statements_executed |No |prepared statements executed |Count |Total |Number of prepared statements executed. |cassandra_datacenter, cassandra_node | |cassandra_cql_regular_statements_executed |No |regular statements executed |Count |Total |Number of non prepared statements executed. |cassandra_datacenter, cassandra_node | |cassandra_jvm_gc_count |No |gc count |Count |Total |Total number of collections that have occurred. |cassandra_datacenter, cassandra_node |
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalRequests |Yes |Total Requests |Count |Count |Number of requests made |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType | |TotalRequestsPreview |No |Total Requests (Preview) |Count |Count |Number of SQL requests |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, IsExternal | |TotalRequestUnits |Yes |Total Request Units |Count |Total |SQL Request Units consumed |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType |
-|TotalRequestUnitsPreview |No |Total Request Units (Preview) |Count |Total |Request Units consumed with CapacityType |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType |
+|TotalRequestUnitsPreview |No |Total Request Units (Preview) |Count |Total |Request Units consumed with CapacityType |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType, PriorityLevel |
|UpdateAccountKeys |Yes |Account Keys Updated |Count |Count |Account Keys Updated |KeyType | |UpdateAccountNetworkSettings |Yes |Account Network Settings Updated |Count |Count |Account Network Settings Updated |No Dimensions | |UpdateAccountReplicationSettings |Yes |Account Replication Settings Updated |Count |Count |Account Replication Settings Updated |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|PublishSuccessLatencyInMs |Yes |Publish Success Latency |Milliseconds |Total |Publish success latency in milliseconds |No Dimensions | |UnmatchedEventCount |Yes |Unmatched Events |Count |Total |Total events not matching any of the event subscriptions for this topic |No Dimensions |
+## Microsoft.EventGrid/namespaces
+<!-- Data source : arm-->
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|AcknowledgeLatencyInMilliseconds |No |Acknowledge Operations Latency |Milliseconds |Total |The observed latency in milliseconds for acknowledge events operation. |Topic, EventSubscriptionName |
+|FailedAcknowledgedEvents |No |Failed Acknowledged Events |Count |Total |The number of events for which acknowledgements from clients failed. |Topic, EventSubscriptionName, Error, ErrorType |
+|FailedPublishedEvents |No |Failed Publish Events |Count |Total |The number of events that weren't accepted by Event Grid. This count excludes events that were published but failed to reach Event Grid due to a network issue, for example. |Topic, Error, ErrorType |
+|FailedReceivedEvents |No |Failed Received Events |Count |Total |The number of events that were requested by clients but weren't delivered successfully by Event Grid. |Topic, EventSubscriptionName, Error, ErrorType |
+|FailedReleasedEvents |No |Failed Released Events |Count |Total |The number of events for which release failed. |Topic, EventSubscriptionName, Error, ErrorType |
+|Mqtt.Connections |Yes |MQTT: Connections |Count |Total |The number of active connections in the namespace. |Protocol |
+|Mqtt.FailedPublishedMessages |Yes |MQTT: Failed Published Messages |Count |Total |The number of MQTT messages that failed to be published into the namespace. |QoS, Protocol, Error |
+|Mqtt.FailedSubscriptionOperations |Yes |MQTT: Failed Subscription Operations |Count |Total |The number of failed subscription operations (Subscribe, Unsubscribe). This metric is incremented for every topic filter within a subscription request. |Protocol, OperationType, Error |
+|Mqtt.RequestCount |Yes |MQTT: Request Count |Count |Total |The number of MQTT requests. |OperationType, Protocol, Error, Result |
+|Mqtt.SuccessfulDeliveredMessages |Yes |MQTT: Successful Delivered Messages |Count |Total |The number of messages delivered by the namespace. There are no failures for this operation. |QoS, Protocol |
+|Mqtt.SuccessfulPublishedMessages |Yes |MQTT: Successful Published Messages |Count |Total |The number of MQTT messages that were published successfully into the namespace. |QoS, Protocol |
+|Mqtt.SuccessfulSubscriptionOperations |Yes |MQTT: Successful Subscription Operations |Count |Total |The number of successful subscription operations (Subscribe, Unsubscribe). This metric is incremented for every topic filter within a subscription request. |Protocol, OperationType |
+|Mqtt.Throughput |Yes |MQTT: Throughput |Bytes |Total |The number of bytes published to or delivered by the namespace. |Direction |
+|PublishLatencyInMilliseconds |No |Publish Operations Latency |Milliseconds |Total |The observed latency in milliseconds for publish events operation. |Topic |
+|ReceiveLatencyInMilliseconds |No |Receive Operations Latency |Milliseconds |Total |The observed latency in milliseconds for receive events operation. |Topic, EventSubscriptionName |
+|RejectLatencyInMilliseconds |No |Reject Operations Latency |Milliseconds |Total |The observed latency in milliseconds for reject events operation. |Topic, EventSubscriptionName |
+|SuccessfulAcknowledgedEvents |No |Successful Acknowledged Events |Count |Total |The number of events for which delivery was successfully acknowledged by clients. |Topic, EventSubscriptionName |
+|SuccessfulPublishedEvents |No |Successful Publish Events |Count |Total |The number of events published successfully to a topic or topic space within a namespace. |Topic |
+|SuccessfulReceivedEvents |No |Successful Received Events |Count |Total |The total number of events that were successfully returned to (received by) clients by Event Grid. |Topic, EventSubscriptionName |
+|SuccessfulReleasedEvents |No |Successful Released Events |Count |Total |The number of events that were released successfully by queue subscriber clients. |Topic, EventSubscriptionName |
+ ## Microsoft.EventGrid/partnerNamespaces <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PublishFailCount |Yes |Publish Failed Events |Count |Total |Total events failed to publish to this topic |ErrorType, Error | |PublishSuccessCount |Yes |Published Events |Count |Total |Total events published to this topic |No Dimensions | |PublishSuccessLatencyInMs |Yes |Publish Success Latency |Milliseconds |Total |Publish success latency in milliseconds |No Dimensions |
+|ServerDeliverySuccessRate |Yes |Server Delivery Success Rate |Count |Total |Success rate of events delivered to this event subscription where failure is caused due to server errors |EventSubscriptionName |
|UnmatchedEventCount |Yes |Unmatched Events |Count |Total |Total events not matching any of the event subscriptions for this topic |No Dimensions | ## Microsoft.EventGrid/topics
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |ActiveConnections |No |ActiveConnections |Count |Average |Total Active Connections for Microsoft.EventHub. |No Dimensions |
-|AvailableMemory |No |Available Memory |Percent |Maximum |Available memory for the Event Hubs Cluster as a percentage of total memory. |Role |
+|AvailableMemory |No |Available Memory |Percent |Maximum |Available memory for the Event Hub Cluster as a percentage of total memory. |Role |
|CaptureBacklog |No |Capture Backlog. |Count |Total |Capture Backlog for Microsoft.EventHub. |No Dimensions | |CapturedBytes |No |Captured Bytes. |Bytes |Total |Captured Bytes for Microsoft.EventHub. |No Dimensions | |CapturedMessages |No |Captured Messages. |Count |Total |Captured Messages for Microsoft.EventHub. |No Dimensions | |ConnectionsClosed |No |Connections Closed. |Count |Average |Connections Closed for Microsoft.EventHub. |No Dimensions | |ConnectionsOpened |No |Connections Opened. |Count |Average |Connections Opened for Microsoft.EventHub. |No Dimensions |
-|CPU |No |CPU |Percent |Maximum |CPU utilization for the Event Hubs Cluster as a percentage |Role |
+|CPU |No |CPU |Percent |Maximum |CPU utilization for the Event Hub Cluster as a percentage |Role |
|IncomingBytes |Yes |Incoming Bytes. |Bytes |Total |Incoming Bytes for Microsoft.EventHub. |No Dimensions | |IncomingMessages |Yes |Incoming Messages |Count |Total |Incoming Messages for Microsoft.EventHub. |No Dimensions | |IncomingRequests |Yes |Incoming Requests |Count |Total |Incoming Requests for Microsoft.EventHub. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|CapturedMessages |No |Captured Messages. |Count |Total |Captured Messages for Microsoft.EventHub. |EntityName | |ConnectionsClosed |No |Connections Closed. |Count |Maximum |Connections Closed for Microsoft.EventHub. |EntityName | |ConnectionsOpened |No |Connections Opened. |Count |Maximum |Connections Opened for Microsoft.EventHub. |EntityName |
-|EHABL |Yes |Archive backlog messages (Deprecated) |Count |Total |Event Hubs archive messages in backlog for a namespace (Deprecated) |No Dimensions |
-|EHAMBS |Yes |Archive message throughput (Deprecated) |Bytes |Total |Event Hubs archived message throughput in a namespace (Deprecated) |No Dimensions |
-|EHAMSGS |Yes |Archive messages (Deprecated) |Count |Total |Event Hubs archived messages in a namespace (Deprecated) |No Dimensions |
-|EHINBYTES |Yes |Incoming bytes (Deprecated) |Bytes |Total |Event Hubs incoming message throughput for a namespace (Deprecated) |No Dimensions |
-|EHINMBS |Yes |Incoming bytes (obsolete) (Deprecated) |Bytes |Total |Event Hubs incoming message throughput for a namespace. This metric is deprecated. Please use Incoming bytes metric instead (Deprecated) |No Dimensions |
+|EHABL |Yes |Archive backlog messages (Deprecated) |Count |Total |Event Hub archive messages in backlog for a namespace (Deprecated) |No Dimensions |
+|EHAMBS |Yes |Archive message throughput (Deprecated) |Bytes |Total |Event Hub archived message throughput in a namespace (Deprecated) |No Dimensions |
+|EHAMSGS |Yes |Archive messages (Deprecated) |Count |Total |Event Hub archived messages in a namespace (Deprecated) |No Dimensions |
+|EHINBYTES |Yes |Incoming bytes (Deprecated) |Bytes |Total |Event Hub incoming message throughput for a namespace (Deprecated) |No Dimensions |
+|EHINMBS |Yes |Incoming bytes (obsolete) (Deprecated) |Bytes |Total |Event Hub incoming message throughput for a namespace. This metric is deprecated. Please use Incoming bytes metric instead (Deprecated) |No Dimensions |
|EHINMSGS |Yes |Incoming Messages (Deprecated) |Count |Total |Total incoming messages for a namespace (Deprecated) |No Dimensions |
-|EHOUTBYTES |Yes |Outgoing bytes (Deprecated) |Bytes |Total |Event Hubs outgoing message throughput for a namespace (Deprecated) |No Dimensions |
-|EHOUTMBS |Yes |Outgoing bytes (obsolete) (Deprecated) |Bytes |Total |Event Hubs outgoing message throughput for a namespace. This metric is deprecated. Please use Outgoing bytes metric instead (Deprecated) |No Dimensions |
+|EHOUTBYTES |Yes |Outgoing bytes (Deprecated) |Bytes |Total |Event Hub outgoing message throughput for a namespace (Deprecated) |No Dimensions |
+|EHOUTMBS |Yes |Outgoing bytes (obsolete) (Deprecated) |Bytes |Total |Event Hub outgoing message throughput for a namespace. This metric is deprecated. Please use Outgoing bytes metric instead (Deprecated) |No Dimensions |
|EHOUTMSGS |Yes |Outgoing Messages (Deprecated) |Count |Total |Total outgoing messages for a namespace (Deprecated) |No Dimensions | |FAILREQ |Yes |Failed Requests (Deprecated) |Count |Total |Total failed requests for a namespace (Deprecated) |No Dimensions | |IncomingBytes |Yes |Incoming Bytes. |Bytes |Total |Incoming Bytes for Microsoft.EventHub. |EntityName |
This latest update adds a new column and reorders the metrics to be alphabetical
|IoTConnectorMeasurement |Yes |Number of Measurements |Count |Sum |The number of normalized value readings received by the FHIR conversion stage of the Azure IoT Connector for FHIR. |Operation, ConnectorName | |IoTConnectorMeasurementGroup |Yes |Number of Message Groups |Count |Sum |The total number of unique groupings of measurements across type, device, patient, and configured time period generated by the FHIR conversion stage. |Operation, ConnectorName | |IoTConnectorMeasurementIngestionLatencyMs |Yes |Average Group Stage Latency |Milliseconds |Average |The time period between when the IoT Connector received the device data and when the data is processed by the FHIR conversion stage. |Operation, ConnectorName |
-|IoTConnectorNormalizedEvent |Yes |Number of Normalized Messages |Count |Sum |The total number of mapped normalized values outputted from the normalization stage of the Azure IoT Connector for FHIR. |Operation, ConnectorName |
+|IoTConnectorNormalizedEvent |Yes |Number of Normalized Messages |Count |Sum |The total number of mapped normalized values outputted from the normalization stage of the Azure IoT Connector for FHIR. |Operation, ConnectorName |
|IoTConnectorTotalErrors |Yes |Total Error Count |Count |Sum |The total number of errors logged by the Azure IoT Connector for FHIR |Name, Operation, ErrorType, ErrorSeverity, ConnectorName |
|TotalErrors |Yes |Total Errors |Count |Sum |The total number of internal server errors encountered by the service. |Protocol, StatusCode, StatusCodeClass, StatusCodeText |
|TotalLatency |Yes |Total Latency |Milliseconds |Average |The response latency of the service. |Protocol |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|ActiveSessionCount |No |Active PDU Sessions |Count |Total |Number of Active PDU Sessions |3gppGen, PccpId, SiteId |
|AuthAttempt |Yes |Authentication Attempts |Count |Total |Authentication attempts rate (per minute) |3gppGen, PccpId, SiteId |
|AuthFailure |Yes |Authentication Failures |Count |Total |Authentication failure rate (per minute) |3gppGen, PccpId, SiteId, Result |
|AuthSuccess |Yes |Authentication Successes |Count |Total |Authentication success rate (per minute) |3gppGen, PccpId, SiteId |
This latest update adds a new column and reorders the metrics to be alphabetical
|PagingFailure |Yes |Paging Failures |Count |Total |Paging failure rate (per minute) |3gppGen, PccpId, SiteId |
|ProvisionedSubscribers |No |Provisioned Subscribers |Count |Total |Number of provisioned subscribers |PccpId, SiteId |
|RanSetupFailure |Yes |RAN Setup Failures |Count |Total |RAN setup failure rate (per minute) |3gppGen, PccpId, SiteId, Cause |
-|RanSetupRequest |Yes |RAN Setup Requests |Count |Total |RAN setup requests rate (per minute) |3gppGen, PccpId, SiteId |
+|RanSetupRequest |Yes |RAN Setup Requests |Count |Total |RAN setup requests rate (per minute) |3gppGen, PccpId, SiteId |
|RanSetupResponse |Yes |RAN Setup Responses |Count |Total |RAN setup response rate (per minute) |3gppGen, PccpId, SiteId |
|RegisteredSubscribers |Yes |Registered Subscribers |Count |Total |Number of registered subscribers |3gppGen, PccpId, SiteId |
|RegisteredSubscribersConnected |Yes |Registered Subscribers Connected |Count |Total |Number of registered and connected subscribers |3gppGen, PccpId, SiteId |
This latest update adds a new column and reorders the metrics to be alphabetical
|ServiceRequestAttempt |Yes |Service Request Attempts |Count |Total |Service request attempts rate (per minute) |3gppGen, PccpId, SiteId |
|ServiceRequestFailure |Yes |Service Request Failures |Count |Total |Service request failure rate (per minute) |3gppGen, PccpId, SiteId, Result, Tai |
|ServiceRequestSuccess |Yes |Service Request Successes |Count |Total |Service request success rate (per minute) |3gppGen, PccpId, SiteId |
-|SessionEstablishmentAttempt |Yes |Session Establishment Attempts |Count |Total |PDU session establishment attempts rate (per minute) |3gppGen, PccpId, SiteId |
-|SessionEstablishmentFailure |Yes |Session Establishment Failures |Count |Total |PDU session establishment failure rate (per minute) |3gppGen, PccpId, SiteId |
-|SessionEstablishmentSuccess |Yes |Session Establishment Successes |Count |Total |PDU session establishment success rate (per minute) |3gppGen, PccpId, SiteId |
+|SessionEstablishmentAttempt |Yes |Session Establishment Attempts |Count |Total |PDU session establishment attempts rate (per minute) |3gppGen, PccpId, SiteId, Dnn |
+|SessionEstablishmentFailure |Yes |Session Establishment Failures |Count |Total |PDU session establishment failure rate (per minute) |3gppGen, PccpId, SiteId, Dnn |
+|SessionEstablishmentSuccess |Yes |Session Establishment Successes |Count |Total |PDU session establishment success rate (per minute) |3gppGen, PccpId, SiteId, Dnn |
|SessionRelease |Yes |Session Releases |Count |Total |Session release rate (per minute) |3gppGen, PccpId, SiteId |
|UeContextReleaseCommand |Yes |UE Context Release Commands |Count |Total |UE context release command message rate (per minute) |3gppGen, PccpId, SiteId |
|UeContextReleaseComplete |Yes |UE Context Release Completes |Count |Total |UE context release complete message rate (per minute) |3gppGen, PccpId, SiteId |
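This update adds a `Dnn` (data network name) dimension to the session-establishment metrics. A short, hedged sketch of splitting those counters by the new dimension, again with a placeholder resource ID:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder packet core control plane resource ID.
pccp_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/<pccp>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    pccp_id,
    metric_names=["SessionEstablishmentAttempt", "SessionEstablishmentFailure"],
    timespan=timedelta(hours=6),
    aggregations=[MetricAggregationType.TOTAL],
    filter="Dnn eq '*'",  # one time series per data network name
)
for metric in result.metrics:
    for series in metric.timeseries:
        dnn = series.metadata_values.get("Dnn")
        total = sum(p.total or 0 for p in series.data)
        print(metric.name, dnn, total)
```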
This latest update adds a new column and reorders the metrics to be alphabetical
|CacheUtilization |Yes |Cache utilization (deprecated) |Percent |Average |Utilization level in the cluster scope. The metric is deprecated and presented for backward compatibility only, you should use the 'Cache utilization factor' metric instead. |No Dimensions |
|CacheUtilizationFactor |Yes |Cache utilization factor |Percent |Average |Percentage of utilized disk space dedicated for hot cache in the cluster. 100% means that the disk space assigned to hot data is optimally utilized. No action is needed in terms of the cache size. More than 100% means that the cluster's disk space is not large enough to accommodate the hot data, as defined by your caching policies. To ensure that sufficient space is available for all the hot data, the amount of hot data needs to be reduced or the cluster needs to be scaled out. Enabling auto scale is recommended. |No Dimensions |
|ContinuousExportMaxLatenessMinutes |Yes |Continuous Export Max Lateness |Count |Maximum |The lateness (in minutes) reported by the continuous export jobs in the cluster |No Dimensions |
-|ContinuousExportNumOfRecordsExported |Yes |Continuous export – num of exported records |Count |Total |Number of records exported, fired for every storage artifact written during the export operation |ContinuousExportName, Database |
+|ContinuousExportNumOfRecordsExported |Yes |Continuous export - num of exported records |Count |Total |Number of records exported, fired for every storage artifact written during the export operation |ContinuousExportName, Database |
|ContinuousExportPendingCount |Yes |Continuous Export Pending Count |Count |Maximum |The number of pending continuous export jobs ready for execution |No Dimensions |
|ContinuousExportResult |Yes |Continuous Export Result |Count |Count |Indicates whether Continuous Export succeeded or failed |ContinuousExportName, Result, Database |
|CPU |Yes |CPU |Percent |Average |CPU utilization level |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|EventsProcessedForEventHubs |Yes |Events Processed (for Event/IoT Hubs) |Count |Total |Number of events processed by the cluster when ingesting from Event/IoT Hub |EventStatus |
|EventsReceived |Yes |Events Received |Count |Total |Number of events received by data connection. |ComponentType, ComponentName |
|ExportUtilization |Yes |Export Utilization |Percent |Maximum |Export utilization |No Dimensions |
-|FollowerLatency |Yes |FollowerLatency |MilliSeconds |Average |The follower databases synchronize changes in the leader databases. Because of the synchronization, there's a data lag of a few seconds to a few minutes in data availability. This metric measures the length of the time lag. The time lag depends on the overall size of the leader database metadata. This is a cluster level metrics: the followers catch metadata of all databases that are followed. This metric represents the latency of the process. |State, RoleInstance |
+|FollowerLatency |Yes |FollowerLatency |MilliSeconds |Average |The follower databases synchronize changes in the leader databases. Because of the synchronization, there's a data lag of a few seconds to a few minutes in data availability. This metric measures the length of the time lag. The time lag depends on the overall size of the leader database metadata. This is a cluster-level metric: the followers catch metadata of all databases that are followed. This metric represents the latency of the process. |State, RoleInstance |
|IngestionLatencyInSeconds |Yes |Ingestion Latency |Seconds |Average |Latency of data ingested, from the time the data was received in the cluster until it's ready for query. The ingestion latency period depends on the ingestion scenario. |No Dimensions |
|IngestionResult |Yes |Ingestion result |Count |Total |Total number of sources that either failed or succeeded to be ingested. Splitting the metric by status, you can get detailed information about the status of the ingestion operations. |IngestionResultDetails, FailureKind |
|IngestionUtilization |Yes |Ingestion utilization |Percent |Average |Ratio of used ingestion slots in the cluster |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|MaterializedViewHealth |Yes |Materialized View Health |Count |Average |The health of the materialized view (1 for healthy, 0 for non-healthy) |Database, MaterializedViewName |
|MaterializedViewRecordsInDelta |Yes |Materialized View Records In Delta |Count |Average |The number of records in the non-materialized part of the view |Database, MaterializedViewName |
|MaterializedViewResult |Yes |Materialized View Result |Count |Average |The result of the materialization process |Database, MaterializedViewName, Result |
-|QueryDuration |Yes |Query duration |MilliSeconds |Average |Queries' duration in seconds |QueryStatus |
+|QueryDuration |Yes |Query duration |MilliSeconds |Average |Queries' duration in seconds |QueryStatus |
|QueryResult |No |Query Result |Count |Count |Total number of queries. |QueryStatus |
|QueueLength |Yes |Queue Length |Count |Average |Number of pending messages in a component's queue. |ComponentType |
|QueueOldestMessage |Yes |Queue Oldest Message |Count |Average |Time in seconds from when the oldest message in queue was inserted. |ComponentType |
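The `CacheUtilizationFactor` description above spells out a threshold: sustained values over 100% mean the hot cache no longer fits, and the cluster should be scaled out or the caching policies trimmed. A rough sketch of checking that condition (the cluster resource ID is a placeholder, and at least one time series is assumed):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

cluster_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Kusto/clusters/<cluster>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    cluster_id,
    metric_names=["CacheUtilizationFactor"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.AVERAGE],
)
points = result.metrics[0].timeseries[0].data  # assumes one series exists
over = [p for p in points if p.average is not None and p.average > 100]
if over:
    print(f"{len(over)} hour(s) above 100% - reduce hot data or scale out")
```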
This latest update adds a new column and reorders the metrics to be alphabetical
|IntegrationServiceEnvironmentWorkflowMemoryUsage |Yes |Workflow Memory Usage for Integration Service Environment |Percent |Average |Workflow memory usage for integration service environment. |No Dimensions |
|IntegrationServiceEnvironmentWorkflowProcessorUsage |Yes |Workflow Processor Usage for Integration Service Environment |Percent |Average |Workflow processor usage for integration service environment. |No Dimensions |
|RunLatency |Yes |Run Latency |Seconds |Average |Latency of completed workflow runs. |No Dimensions |
-|RunsCancelled |Yes |Runs Canceled |Count |Total |Number of workflow runs cancelled. |No Dimensions |
+|RunsCancelled |Yes |Runs Canceled |Count |Total |Number of workflow runs canceled. |No Dimensions |
|RunsCompleted |Yes |Runs Completed |Count |Total |Number of workflow runs completed. |No Dimensions |
|RunsFailed |Yes |Runs Failed |Count |Total |Number of workflow runs failed. |No Dimensions |
|RunsStarted |Yes |Runs Started |Count |Total |Number of workflow runs started. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|BillingUsageStorageConsumption |Yes |Billing Usage for Storage Consumption Executions |Count |Total |Number of storage consumption executions getting billed. |No Dimensions |
|RunFailurePercentage |Yes |Run Failure Percentage |Percent |Total |Percentage of workflow runs failed. |No Dimensions |
|RunLatency |Yes |Run Latency |Seconds |Average |Latency of completed workflow runs. |No Dimensions |
-|RunsCancelled |Yes |Runs Canceled |Count |Total |Number of workflow runs cancelled. |No Dimensions |
+|RunsCancelled |Yes |Runs Canceled |Count |Total |Number of workflow runs canceled. |No Dimensions |
|RunsCompleted |Yes |Runs Completed |Count |Total |Number of workflow runs completed. |No Dimensions |
|RunsFailed |Yes |Runs Failed |Count |Total |Number of workflow runs failed. |No Dimensions |
|RunsStarted |Yes |Runs Started |Count |Total |Number of workflow runs started. |No Dimensions |
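`RunFailurePercentage` is exposed directly above, but the same ratio can be derived from the run counters, which is handy when you want a custom window or denominator. A sketch under the assumption of a placeholder Logic Apps resource ID:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

workflow_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Logic/workflows/<workflow>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    workflow_id,
    metric_names=["RunsFailed", "RunsCompleted"],
    timespan=timedelta(days=1),
    aggregations=[MetricAggregationType.TOTAL],
)
totals = {
    m.name: sum(p.total or 0 for p in m.timeseries[0].data)
    for m in result.metrics
}
finished = totals["RunsFailed"] + totals["RunsCompleted"]
if finished:
    print(f"failure rate: {100 * totals['RunsFailed'] / finished:.1f}%")
```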
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|Active Cores |Yes |Active Cores |Count |Average |Number of active cores |Scenario, ClusterName |
-|Active Nodes |Yes |Active Nodes |Count |Average |Number of Active nodes. These are the nodes which are actively running a job. |Scenario, ClusterName |
+|Active Nodes |Yes |Active Nodes |Count |Average |Number of Active nodes. These are the nodes which are actively running a job. |Scenario, ClusterName |
|Cancel Requested Runs |Yes |Cancel Requested Runs |Count |Total |Number of runs where cancel was requested for this workspace. Count is updated when cancellation request has been received for a run. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
-|Cancelled Runs |Yes |Canceled Runs |Count |Total |Number of runs cancelled for this workspace. Count is updated when a run is successfully cancelled. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
+|Cancelled Runs |Yes |Canceled Runs |Count |Total |Number of runs canceled for this workspace. Count is updated when a run is successfully canceled. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
|Completed Runs |Yes |Completed Runs |Count |Total |Number of runs completed successfully for this workspace. Count is updated when a run has completed and output has been collected. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
|CpuCapacityMillicores |Yes |CpuCapacityMillicores |Count |Average |Maximum capacity of a CPU node in millicores. Capacity is aggregated in one minute intervals. |RunId, InstanceId, ComputeName |
|CpuMemoryCapacityMegabytes |Yes |CpuMemoryCapacityMegabytes |Count |Average |Maximum memory utilization of a CPU node in megabytes. Utilization is aggregated in one minute intervals. |RunId, InstanceId, ComputeName |
This latest update adds a new column and reorders the metrics to be alphabetical
|Preempted Nodes |Yes |Preempted Nodes |Count |Average |Number of preempted nodes. These nodes are the low priority nodes which are taken away from the available node pool. |Scenario, ClusterName |
|Preparing Runs |Yes |Preparing Runs |Count |Total |Number of runs that are preparing for this workspace. Count is updated when a run enters Preparing state while the run environment is being prepared. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
|Provisioning Runs |Yes |Provisioning Runs |Count |Total |Number of runs that are provisioning for this workspace. Count is updated when a run is waiting on compute target creation or provisioning. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
-|Queued Runs |Yes |Queued Runs |Count |Total |Number of runs that are queued for this workspace. Count is updated when a run is queued in compute target. Can occur when waiting for required compute nodes to be ready. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
+|Queued Runs |Yes |Queued Runs |Count |Total |Number of runs that are queued for this workspace. Count is updated when a run is queued in compute target. Can occur when waiting for required compute nodes to be ready. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
|Quota Utilization Percentage |Yes |Quota Utilization Percentage |Count |Average |Percent of quota utilized |Scenario, ClusterName, VmFamilyName, VmPriority |
|Started Runs |Yes |Started Runs |Count |Total |Number of runs running for this workspace. Count is updated when run starts running on required resources. |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
|Starting Runs |Yes |Starting Runs |Count |Total |Number of runs started for this workspace. Count is updated after request to create run and run info, such as the Run Id, has been populated |Scenario, RunType, PublishedPipelineId, ComputeType, PipelineStepType, ExperimentName |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|AclMatchedPackets |Yes |Acl Matched Packets |Count |Average |Count of the number of packets matching the current ACL entry. |FabricId, RegionName, AclSetName, AclEntrySequenceId, AclSetType |
|BgpPeerStatus |Yes |BGP Peer Status |Unspecified |Minimum |Operational state of the BGP peer. State is represented in numerical form. Idle : 1, Connect : 2, Active : 3, Opensent : 4, Openconfirm : 5, Established : 6 |FabricId, RegionName, IpAddress |
+|ComponentOperStatus |Yes |Component Operational State |Unspecified |Minimum |The current operational status of the component. |FabricId, RegionName, ComponentName |
|CpuUtilizationMax |Yes |Cpu Utilization Max |Percent |Average |Max cpu utilization. The maximum value of the percentage measure of the statistic over the time interval. |FabricId, RegionName, ComponentName |
|CpuUtilizationMin |Yes |Cpu Utilization Min |Percent |Average |Min cpu utilization. The minimum value of the percentage measure of the statistic over the time interval. |FabricId, RegionName, ComponentName |
|FanSpeed |Yes |Fan Speed |Unspecified |Average |Current fan speed. |FabricId, RegionName, ComponentName |
This latest update adds a new column and reorders the metrics to be alphabetical
|IfEthInJabberFrames |Yes |Ethernet Interface In Jabber Frames |Count |Average |Number of jabber frames received on the interface. Jabber frames are typically defined as oversize frames which also have a bad CRC. |FabricId, RegionName, InterfaceName |
|IfEthInMacControlFrames |Yes |Ethernet Interface In MAC Control Frames |Count |Average |MAC layer control frames received on the interface |FabricId, RegionName, InterfaceName |
|IfEthInMacPauseFrames |Yes |Ethernet Interface In MAC Pause Frames |Count |Average |MAC layer PAUSE frames received on the interface |FabricId, RegionName, InterfaceName |
+|IfEthInMaxsizeExceeded |Yes |Ethernet Interface In Maxsize Exceeded |Count |Average |The total number of well-formed frames received that were dropped for exceeding the maximum frame size on the interface. |FabricId, RegionName, InterfaceName |
|IfEthInOversizeFrames |Yes |Ethernet Interface In Oversize Frames |Count |Average |The total number of frames received that were longer than 1518 octets (excluding framing bits, but including FCS octets) and were otherwise well formed. |FabricId, RegionName, InterfaceName |
|IfEthOutMacControlFrames |Yes |Ethernet Interface Out MAC Control Frames |Count |Average |MAC layer control frames sent on the interface. |FabricId, RegionName, InterfaceName |
|IfEthOutMacPauseFrames |Yes |Ethernet Interface Out MAC Pause Frames |Count |Average |MAC layer PAUSE frames sent on the interface. |FabricId, RegionName, InterfaceName |
This latest update adds a new column and reorders the metrics to be alphabetical
|PowerSupplyOutputCurrent |Yes |Power Supply Output Current |Unspecified |Average |The output current supplied by the power supply (amps) |FabricId, RegionName, ComponentName |
|PowerSupplyOutputPower |Yes |Power Supply Output Power |Unspecified |Average |Output power supplied by the power supply (watts) |FabricId, RegionName, ComponentName |
|PowerSupplyOutputVoltage |Yes |Power Supply Output Voltage |Unspecified |Average |Output voltage supplied by the power supply (volts). |FabricId, RegionName, ComponentName |
+|TemperatureMax |Yes |Temperature Max |Unspecified |Average |Max temperature in degrees Celsius of the component. The maximum value of the statistic over the sampling period. |FabricId, RegionName, ComponentName |
## Microsoft.Maps/accounts <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|AssetQuotaUsedPercentage |Yes |Asset quota used percentage |Percent |Average |Asset used percentage in current media service account |No Dimensions |
|ChannelsAndLiveEventsCount |Yes |Live event count |Count |Average |The total number of live events in the current media services account |No Dimensions |
|ContentKeyPolicyCount |Yes |Content Key Policy count |Count |Average |How many content key policies are already created in current media service account |No Dimensions |
-|ContentKeyPolicyQuota |Yes |Content Key Policy quota |Count |Average |How many content key policies are allowed for current media service account |No Dimensions |
+|ContentKeyPolicyQuota |Yes |Content Key Policy quota |Count |Average |How many content key policies are allowed for current media service account |No Dimensions |
|ContentKeyPolicyQuotaUsedPercentage |Yes |Content Key Policy quota used percentage |Percent |Average |Content Key Policy used percentage in current media service account |No Dimensions |
|JobQuota |Yes |Job quota |Count |Average |The Job quota for the current media service account. |No Dimensions |
|JobsScheduled |Yes |Jobs Scheduled |Count |Average |The number of Jobs in the Scheduled state. Counts on this metric only reflect jobs submitted through the v3 API. Jobs submitted through the v2 (Legacy) API are not counted. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ApplicationGatewayTotalTime |No |Application Gateway Total Time |MilliSeconds |Average |Average time that it takes for a request to be processed and its response to be sent. This is calculated as average of the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that this usually includes the Application Gateway processing time, time that the request and response packets are traveling over the network and the time the backend server took to respond. |Listener |
+|ApplicationGatewayTotalTime |No |Application Gateway Total Time |MilliSeconds |Average |Time that it takes for a request to be processed and its response to be sent. This is the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that this usually includes the Application Gateway processing time, time that the request and response packets are traveling over the network and the time the backend server took to respond. |Listener |
|AvgRequestCountPerHealthyHost |No |Requests per minute per Healthy Host |Count |Average |Average request count per minute per healthy backend host in a pool |BackendSettingsPool |
|AzwafBotProtection |Yes |WAF Bot Protection Matches |Count |Total |Matched Bot Rules |Action, Category, Mode, CountryCode, PolicyName, PolicyScope |
|AzwafCustomRule |Yes |WAF Custom Rule Matches |Count |Total |Matched Custom Rules |Action, CustomRuleID, Mode, CountryCode, PolicyName, PolicyScope |
This latest update adds a new column and reorders the metrics to be alphabetical
|BytesReceived |Yes |Bytes Received |Bytes |Total |The total number of bytes received by the Application Gateway from the clients |Listener |
|BytesSent |Yes |Bytes Sent |Bytes |Total |The total number of bytes sent by the Application Gateway to the clients |Listener |
|CapacityUnits |No |Current Capacity Units |Count |Average |Capacity Units consumed |No Dimensions |
-|ClientRtt |No |Client RTT |MilliSeconds |Average |Average round trip time between clients and Application Gateway. This metric indicates how long it takes to establish connections and return acknowledgments |Listener |
+|ClientRtt |No |Client RTT |MilliSeconds |Average |Round trip time between clients and Application Gateway. This metric indicates how long it takes to establish connections and return acknowledgments |Listener |
|ComputeUnits |No |Current Compute Units |Count |Average |Compute Units consumed |No Dimensions |
|CpuUtilization |No |CPU Utilization |Percent |Average |Current CPU utilization of the Application Gateway |No Dimensions |
|CurrentConnections |Yes |Current Connections |Count |Total |Count of current connections established with Application Gateway |No Dimensions |
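`ApplicationGatewayTotalTime` includes network transit and backend response time, while `ClientRtt` isolates the client-side round trip, so comparing the two per listener gives a first cut at where latency accumulates. A hedged sketch (the gateway resource ID is a placeholder):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

gateway_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Network/applicationGateways/<gateway>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    gateway_id,
    metric_names=["ApplicationGatewayTotalTime", "ClientRtt"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
    filter="Listener eq '*'",  # one series per listener
)
for metric in result.metrics:
    for series in metric.timeseries:
        values = [p.average for p in series.data if p.average is not None]
        if values:
            listener = series.metadata_values.get("Listener")
            print(metric.name, listener, f"{sum(values) / len(values):.1f} ms")
```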
This latest update adds a new column and reorders the metrics to be alphabetical
|incoming.all.failedrequests |Yes |All Incoming Failed Requests |Count |Total |Total incoming failed requests for a notification hub |No Dimensions |
|incoming.all.requests |Yes |All Incoming Requests |Count |Total |Total incoming requests for a notification hub |No Dimensions |
|incoming.scheduled |Yes |Scheduled Push Notifications Sent |Count |Total |Scheduled Push Notifications Sent |No Dimensions |
-|incoming.scheduled.cancel |Yes |Scheduled Push Notifications Canceled |Count |Total |Scheduled Push Notifications Canceled |No Dimensions |
+|incoming.scheduled.cancel |Yes |Scheduled Push Notifications Canceled |Count |Total |Scheduled Push Notifications Canceled |No Dimensions |
|installation.all |Yes |Installation Management Operations |Count |Total |Installation Management Operations |No Dimensions |
|installation.delete |Yes |Delete Installation Operations |Count |Total |Delete Installation Operations |No Dimensions |
|installation.get |Yes |Get Installation Operations |Count |Total |Get Installation Operations |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|AvailabilityRate_Query |No |AvailabilityRate_Query |Percent |Average |User query success rate for this workspace. |IsUserQuery |
|Average_% Available Memory |Yes |% Available Memory |Count |Average |Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem |
|Average_% Available Swap Space |Yes |% Available Swap Space |Count |Average |Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem |
|Average_% Committed Bytes In Use |Yes |% Committed Bytes In Use |Count |Average |Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem |
This latest update adds a new column and reorders the metrics to be alphabetical
|Heartbeat |Yes |Heartbeat |Count |Total |Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, OSType, Version, SourceComputerId |
|Query Count |No |Query Count |Count |Count |Total number of user queries for this workspace. |IsUserQuery |
|Query Failure Count |No |Query Failure Count |Count |Count |Total number of failed user queries for this workspace. |IsUserQuery |
-|Query Success Rate |No |Query Success Rate |Percent |Average |User query success rate for this workspace. |IsUserQuery |
|Update |Yes |Update |Count |Average |Update. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, Product, Classification, UpdateState, Optional, Approved |

## Microsoft.Orbital/contactProfiles
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|InBitsRate |Yes |In Bits Rate |BitsPerSecond |Average |Ingress Bit Rate for the L2 connection |No Dimensions |
-|InBroadcastPktCount |Yes |In Broadcast Packet Count |Count |Average |Ingress Broadcast Packet Count for the L2 connection |No Dimensions |
-|InBytesPerVLAN |Yes |In Bytes Count Per Vlan |Count |Average |Ingress Subinterface Byte Count for the L2 connection |VLANID |
-|InInterfaceBytes |Yes |In Bytes Count |Count |Average |Ingress Bytes Count for the L2 connection |No Dimensions |
-|InMulticastPktCount |Yes |In Multicast Packet Count |Count |Average |Ingress Multicast Packet Count for the L2 connection |No Dimensions |
-|InPktErrorCount |Yes |In Packet Error Count |Count |Average |Ingress Packet Error Count for the L2 connection |No Dimensions |
-|InPktsRate |Yes |In Packets Rate |CountPerSecond |Average |Ingress Packet Rate for the L2 connection |No Dimensions |
-|InTotalPktCount |Yes |In Packet Count |Count |Average |Ingress Packet Count for the L2 connection |No Dimensions |
-|InUcastPktCount |Yes |In Unicast Packet Count |Count |Average |Ingress Unicast Packet Count for the L2 connection |No Dimensions |
-|InUCastPktsPerVLAN |Yes |In Unicast Packet Count Per Vlan |Count |Average |Ingress Subinterface Unicast Packet Count for the L2 connection |VLANID |
-|OutBitsRate |Yes |Out Bits Rate |BitsPerSecond |Average |Egress Bit Rate for the L2 connection |No Dimensions |
-|OutBroadcastPktCount |Yes |Out Broadcast Packet Count Per Vlan |Count |Average |Egress Broadcast Packet Count for the L2 connection |No Dimensions |
-|OutBytesPerVLAN |Yes |Out Bytes Count Per Vlan |Count |Average |Egress Subinterface Byte Count for the L2 connection |VLANID |
-|OutInterfaceBytes |Yes |Out Bytes Count |Count |Average |Egress Bytes Count for the L2 connection |No Dimensions |
-|OutMulticastPktCount |Yes |Out Multicast Packet Count |Count |Average |Egress Multicast Packet Count for the L2 connection |No Dimensions |
-|OutPktErrorCount |Yes |Out Packet Error Count |Count |Average |Egress Packet Error Count for the L2 connection |No Dimensions |
-|OutPktsRate |Yes |Out Packets Rate |CountPerSecond |Average |Egress Packet Rate for the L2 connection |No Dimensions |
-|OutUcastPktCount |Yes |Out Unicast Packet Count |Count |Average |Egress Unicast Packet Count for the L2 connection |No Dimensions |
-|OutUCastPktsPerVLAN |Yes |Out Unicast Packet Count Per Vlan |Count |Average |Egress Subinterface Unicast Packet Count for the L2 connection |VLANID |
+|InEdgeSiteBitsRate |Yes |In Edge Site Bit Rate |BitsPerSecond |Average |Ingress Edge Site Bit Rate for the L2 connection |No Dimensions |
+|InEdgeSiteBroadcastPkts |Yes |In Edge Site Broadcast Packet Count |Count |Average |Ingress Edge Site Broadcast Packet Count for the L2 connection |No Dimensions |
+|InEdgeSiteBytes |Yes |In Edge Site Byte Count |Count |Average |Ingress Edge Site Byte Count for the L2 connection |No Dimensions |
+|InEdgeSiteDiscards |Yes |In Edge Site Packet Discard Count |Count |Average |Ingress Edge Site Packet Discard Count for the L2 connection |No Dimensions |
+|InEdgeSiteMulticastPkts |Yes |In Edge Site Multicast Packet Count |Count |Average |Ingress Edge Site Multicast Packet Count for the L2 connection |No Dimensions |
+|InEdgeSitePktErrors |Yes |In Edge Site Packet Error Count |Count |Average |Ingress Edge Site Packet Error Count for the L2 connection |No Dimensions |
+|InEdgeSitePktsRate |Yes |In Edge Site Packet Rate |CountPerSecond |Average |Ingress Edge Site Packet Rate for the L2 connection |No Dimensions |
+|InEdgeSiteUnicastPkts |Yes |In Edge Site Unicast Packet Count |Count |Average |Ingress Edge Site Unicast Packet Count for the L2 connection |No Dimensions |
+|InGroundStationBitsRate |Yes |In Ground Station Bit Rate |BitsPerSecond |Average |Ingress Ground Station Bit Rate for the L2 connection |No Dimensions |
+|InGroundStationBroadcastPkts |Yes |In Ground Station Broadcast Packet Count |Count |Average |Ingress Ground Station Broadcast Packet Count for the L2 connection |No Dimensions |
+|InGroundStationBytes |Yes |In Ground Station Byte Count |Count |Average |Ingress Ground Station Byte Count for the L2 connection |No Dimensions |
+|InGroundStationDiscards |Yes |In Ground Station Packet Discard Count |Count |Average |Ingress Ground Station Packet Discard Count for the L2 connection |No Dimensions |
+|InGroundStationMulticastPkts |Yes |In Ground Station Multicast Packet Count |Count |Average |Ingress Ground Station Multicast Packet Count for the L2 connection |No Dimensions |
+|InGroundStationPktErrors |Yes |In Ground Station Packet Error Count |Count |Average |Ingress Ground Station Packet Error Count for the L2 connection |No Dimensions |
+|InGroundStationPktsRate |Yes |In Ground Station Packet Rate |CountPerSecond |Average |Ingress Ground Station Packet Rate for the L2 connection |No Dimensions |
+|InGroundStationUnicastPkts |Yes |In Ground Station Unicast Packet Count |Count |Average |Ingress Ground Station Unicast Packet Count for the L2 connection |No Dimensions |
+|OutEdgeSiteBitsRate |Yes |Out Edge Site Bit Rate |BitsPerSecond |Average |Egress Edge Site Bit Rate for the L2 connection |No Dimensions |
+|OutEdgeSiteBroadcastPkts |Yes |Out Edge Site Broadcast Packet Count |Count |Average |Egress Edge Site Broadcast Packet Count for the L2 connection |No Dimensions |
+|OutEdgeSiteBytes |Yes |Out Edge Site Byte Count |Count |Average |Egress Edge Site Byte Count for the L2 connection |No Dimensions |
+|OutEdgeSiteDiscards |Yes |Out Edge Site Packet Discard Count |Count |Average |Egress Edge Site Packet Discard Count for the L2 connection |No Dimensions |
+|OutEdgeSiteMulticastPkts |Yes |Out Edge Site Multicast Packet Count |Count |Average |Egress Edge Site Multicast Packet Count for the L2 connection |No Dimensions |
+|OutEdgeSitePktErrors |Yes |Out Edge Site Packet Error Count |Count |Average |Egress Edge Site Packet Error Count for the L2 connection |No Dimensions |
+|OutEdgeSitePktsRate |Yes |Out Edge Site Packet Rate |CountPerSecond |Average |Egress Edge Site Packet Rate for the L2 connection |No Dimensions |
+|OutEdgeSiteUnicastPkts |Yes |Out Edge Site Unicast Packet Count |Count |Average |Egress Edge Site Unicast Packet Count for the L2 connection |No Dimensions |
+|OutGroundStationBitsRate |Yes |Out Ground Station Bit Rate |BitsPerSecond |Average |Egress Ground Station Bit Rate for the L2 connection |No Dimensions |
+|OutGroundStationBroadcastPkts |Yes |Out Ground Station Broadcast Packet Count |Count |Average |Egress Ground Station Broadcast Packet Count for the L2 connection |No Dimensions |
+|OutGroundStationBytes |Yes |Out Ground Station Byte Count |Count |Average |Egress Ground Station Byte Count for the L2 connection |No Dimensions |
+|OutGroundStationDiscards |Yes |Out Ground Station Packet Discard Count |Count |Average |Egress Ground Station Packet Discard Count for the L2 connection |No Dimensions |
+|OutGroundStationMulticastPkts |Yes |Out Ground Station Multicast Packet Count |Count |Average |Egress Ground Station Multicast Packet Count for the L2 connection |No Dimensions |
+|OutGroundStationPktErrors |Yes |Out Ground Station Packet Error Count |Count |Average |Egress Ground Station Packet Error Count for the L2 connection |No Dimensions |
+|OutGroundStationPktsRate |Yes |Out Ground Station Packet Rate |CountPerSecond |Average |Egress Ground Station Packet Rate for the L2 connection |No Dimensions |
+|OutGroundStationUnicastPkts |Yes |Out Ground Station Unicast Packet Count |Count |Average |Egress Ground Station Unicast Packet Count for the L2 connection |No Dimensions |
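Because this update renames the whole L2-connection family (the old `In*`/`Out*` names give way to `InEdgeSite*`, `InGroundStation*`, and so on), enumerating the metric definitions is a quick way to see what a contact profile currently emits. A sketch with a placeholder resource ID:

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

profile_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Orbital/contactProfiles/<profile>"
)

client = MetricsQueryClient(DefaultAzureCredential())
for definition in client.list_metric_definitions(profile_id):
    # Prints each supported metric name with its unit.
    print(definition.name, definition.unit)
```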
## Microsoft.Orbital/spacecrafts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|DataMapCapacityUnits |Yes |Data Map Capacity Units |Count |Total |Indicates Data Map Capacity Units. |No Dimensions |
|DataMapStorageSize |Yes |Data Map Storage Size |Bytes |Total |Indicates the data map storage size. |No Dimensions |
-|ScanCancelled |Yes |Scan Canceled |Count |Total |Indicates the number of scans cancelled. |No Dimensions |
+|ScanCancelled |Yes |Scan Canceled |Count |Total |Indicates the number of scans canceled. |No Dimensions |
|ScanCompleted |Yes |Scan Completed |Count |Total |Indicates the number of scans completed successfully. |No Dimensions |
|ScanFailed |Yes |Scan Failed |Count |Total |Indicates the number of scans failed. |No Dimensions |
|ScanTimeTaken |Yes |Scan time taken |Seconds |Total |Indicates the total scan time in seconds. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|CompleteMessage |Yes |Completed Messages |Count |Total |Count of messages completed on a Queue/Topic. |EntityName |
|ConnectionsClosed |No |Connections Closed. |Count |Average |Connections Closed for Microsoft.ServiceBus. |EntityName |
|ConnectionsOpened |No |Connections Opened. |Count |Average |Connections Opened for Microsoft.ServiceBus. |EntityName |
-|CPUXNS |No |CPU (Deprecated) |Percent |Maximum |Service bus premium namespace CPU usage metric. This metric is deprecated. Please use the CPU metric (NamespaceCpuUsage) instead. |Replica |
+|CPUXNS |No |CPU (Deprecated) |Percent |Maximum |Service Bus premium namespace CPU usage metric. This metric is deprecated. Please use the CPU metric (NamespaceCpuUsage) instead. |Replica |
|DeadletteredMessages |No |Count of dead-lettered messages in a Queue/Topic. |Count |Average |Count of dead-lettered messages in a Queue/Topic. |EntityName |
|IncomingMessages |Yes |Incoming Messages |Count |Total |Incoming Messages for Microsoft.ServiceBus. |EntityName |
|IncomingRequests |Yes |Incoming Requests |Count |Total |Incoming Requests for Microsoft.ServiceBus. |EntityName |
This latest update adds a new column and reorders the metrics to be alphabetical
|SystemErrors |Yes |System Errors |Percent |Maximum |The percentage of system errors |No Dimensions |
|UserErrors |Yes |User Errors |Percent |Maximum |The percentage of user errors |No Dimensions |
+## Microsoft.SignalRService/SignalR/replicas
+<!-- Data source : naam-->
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|ConnectionCloseCount |Yes |Connection Close Count |Count |Total |The count of connections closed for various reasons. |Endpoint, ConnectionCloseCategory |
+|ConnectionCount |Yes |Connection Count |Count |Maximum |The number of user connections. |Endpoint |
+|ConnectionOpenCount |Yes |Connection Open Count |Count |Total |The count of new connections opened. |Endpoint |
+|ConnectionQuotaUtilization |Yes |Connection Quota Utilization |Percent |Maximum |The percentage of connections relative to the connection quota. |No Dimensions |
+|InboundTraffic |Yes |Inbound Traffic |Bytes |Total |The inbound traffic of the service |No Dimensions |
+|MessageCount |Yes |Message Count |Count |Total |The total number of messages. |No Dimensions |
+|OutboundTraffic |Yes |Outbound Traffic |Bytes |Total |The outbound traffic of the service |No Dimensions |
+|ServerLoad |No |Server Load |Percent |Maximum |SignalR server load. |No Dimensions |
+|SystemErrors |Yes |System Errors |Percent |Maximum |The percentage of system errors |No Dimensions |
+|UserErrors |Yes |User Errors |Percent |Maximum |The percentage of user errors |No Dimensions |
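For the new replica-scoped metrics, `ConnectionQuotaUtilization` is the natural capacity signal. A speculative sketch that warns when the peak over the last day nears the quota; the replica resource ID format below is an assumption, not confirmed by this table:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Assumed replica resource ID shape; verify against your deployment.
replica_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.SignalRService/SignalR/<service>/replicas/<replica>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    replica_id,
    metric_names=["ConnectionQuotaUtilization"],
    timespan=timedelta(days=1),
    aggregations=[MetricAggregationType.MAXIMUM],
)
peak = max(
    (p.maximum or 0 for p in result.metrics[0].timeseries[0].data),
    default=0,
)
if peak > 80:
    print(f"peak quota utilization {peak:.0f}% - consider scaling")
```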
## Microsoft.SignalRService/WebPubSub <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ServerLoad |No |Server Load |Percent |Maximum |SignalR server load. |No Dimensions |
|TotalConnectionCount |Yes |Connection Count |Count |Maximum |The number of user connections established to the service. It is aggregated by adding all the online connections. |No Dimensions |
+## Microsoft.SignalRService/WebPubSub/replicas
+<!-- Data source : naam-->
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|ConnectionCloseCount |Yes |Connection Close Count |Count |Total |The count of connections closed for various reasons. |ConnectionCloseCategory |
+|ConnectionOpenCount |Yes |Connection Open Count |Count |Total |The count of new connections opened. |No Dimensions |
+|ConnectionQuotaUtilization |Yes |Connection Quota Utilization |Percent |Maximum |The percentage of connections relative to the connection quota. |No Dimensions |
+|InboundTraffic |Yes |Inbound Traffic |Bytes |Total |The traffic originating from outside to inside of the service. It is aggregated by adding all the bytes of the traffic. |No Dimensions |
+|OutboundTraffic |Yes |Outbound Traffic |Bytes |Total |The traffic originating from inside to outside of the service. It is aggregated by adding all the bytes of the traffic. |No Dimensions |
+|ServerLoad |No |Server Load |Percent |Maximum |WebPubSub server load. |No Dimensions |
+|TotalConnectionCount |Yes |Connection Count |Count |Maximum |The number of user connections established to the service. It is aggregated by adding all the online connections. |No Dimensions |
## microsoft.singularity/accounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|ObjectsOperatedCount |Yes |Objects operated count |Count |Total |The number of objects operated in storage task |AccountName, TaskAssignmentId |
|ObjectsOperationFailedCount |Yes |Objects failed count |Count |Total |The number of objects failed in storage task |AccountName, TaskAssignmentId |
-|ObjectsTargetedCount |Yes |Objects targeted count |Count |Total |The number of objects targeted in storage task |AccountName, TaskAssignmentId |
+|ObjectsTargetedCount |Yes |Objects targeted count |Count |Total |The number of objects targeted in storage task |AccountName, TaskAssignmentId |
## Microsoft.Storage/storageAccounts/tableServices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|ObjectsOperatedCount |Yes |Objects operated count |Count |Total |The number of objects operated in storage task |AccountName, TaskAssignmentId |
|ObjectsOperationFailedCount |Yes |Objects failed count |Count |Total |The number of objects failed in storage task |AccountName, TaskAssignmentId |
-|ObjectsTargetedCount |Yes |Objects targeted count |Count |Total |The number of objects targeted in storage task |AccountName, TaskAssignmentId |
+|ObjectsTargetedCount |Yes |Objects targeted count |Count |Total |The number of objects targeted in storage task |AccountName, TaskAssignmentId |
## Microsoft.StorageCache/amlFilesystems <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|StorageTargetFreeWriteSpace |Yes |Storage Target Free Write Space |Bytes |Average |Write space available for changed files associated with a storage target. |StorageTarget |
|StorageTargetHealth |Yes |Storage Target Health |Count |Average |Boolean results of connectivity test between the Cache and Storage Targets. |No Dimensions |
|StorageTargetIOPS |Yes |Total StorageTarget IOPS |Count |Average |The rate of all file operations the Cache sends to a particular StorageTarget. |StorageTarget |
-|StorageTargetLatency |Yes |StorageTarget Latency |MilliSeconds |Average |The average round trip latency of all the file operations the Cache sends to a particular StorageTarget. |StorageTarget |
+|StorageTargetLatency |Yes |StorageTarget Latency |MilliSeconds |Average |The average round trip latency of all the file operations the Cache sends to a particular StorageTarget. |StorageTarget |
|StorageTargetMetadataReadIOPS |Yes |StorageTarget Metadata Read IOPS |CountPerSecond |Average |The rate of file operations that do not modify persistent state, and excluding the read operation, that the Cache sends to a particular StorageTarget. |StorageTarget |
|StorageTargetMetadataWriteIOPS |Yes |StorageTarget Metadata Write IOPS |CountPerSecond |Average |The rate of file operations that do modify persistent state and excluding the write operation, that the Cache sends to a particular StorageTarget. |StorageTarget |
-|StorageTargetReadAheadThroughput |Yes |StorageTarget Read Ahead Throughput |BytesPerSecond |Average |The rate the Cache opportunistically reads data from the StorageTarget. |StorageTarget |
+|StorageTargetReadAheadThroughput |Yes |StorageTarget Read Ahead Throughput |BytesPerSecond |Average |The rate the Cache opportunistically reads data from the StorageTarget. |StorageTarget |
|StorageTargetReadIOPS |Yes |StorageTarget Read IOPS |CountPerSecond |Average |The rate of file read operations the Cache sends to a particular StorageTarget. |StorageTarget |
|StorageTargetRecycleRate |Yes |Storage Target Recycle Rate |BytesPerSecond |Average |Cache space recycle rate associated with a storage target in the HPC Cache. This is the rate at which existing data is cleared from the cache to make room for new data. |StorageTarget |
|StorageTargetSpaceAllocation |Yes |Storage Target Space Allocation |Bytes |Average |Total space (read and write) allocated for a storage target. |StorageTarget |
This latest update adds a new column and reorders the metrics to be alphabetical
|StorageSyncRecalledNetworkBytesByApplication |Yes |Cloud tiering recall size by application |Bytes |Total |Size of data recalled by application |SyncGroupName, ServerName, ApplicationName |
|StorageSyncRecalledTotalNetworkBytes |Yes |Cloud tiering recall size |Bytes |Total |Size of data recalled |SyncGroupName, ServerName, ServerEndpointName |
|StorageSyncRecallThroughputBytesPerSecond |Yes |Cloud tiering recall throughput |BytesPerSecond |Average |Size of data recall throughput |SyncGroupName, ServerName, ServerEndpointName |
-|StorageSyncServerHeartbeat |Yes |Server Online Status |Count |Maximum |Metric that logs a value of 1 each time the registered server successfully records a heartbeat with the Cloud Endpoint |ServerName |
+|StorageSyncServerHeartbeat |Yes |Server Online Status |Count |Maximum |Metric that logs a value of 1 each time the registered server successfully records a heartbeat with the Cloud Endpoint |ServerName |
|StorageSyncSyncSessionAppliedFilesCount |Yes |Files Synced |Count |Total |Count of Files synced |SyncGroupName, ServerEndpointName, SyncDirection |
|StorageSyncSyncSessionPerItemErrorsCount |Yes |Files not syncing |Count |Average |Count of files failed to sync |SyncGroupName, ServerEndpointName, SyncDirection |
|StorageSyncTieredDataSizeBytes |Yes |Cloud tiering size of data tiered |Bytes |Average |Size of data tiered to Azure file share |SyncGroupName, ServerName, ServerEndpointName |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|BuiltinSqlPoolDataProcessedBytes |No |Data processed (bytes) |Bytes |Total |Amount of data processed by queries |No Dimensions |
-|BuiltinSqlPoolLoginAttempts |No |Login attempts |Count |Total |Count of login attempts that succeeded or failed |Result |
+|BuiltinSqlPoolLoginAttempts |No |Login attempts |Count |Total |Count of login attempts that succeeded or failed |Result |
|BuiltinSqlPoolRequestsEnded |No |Requests ended |Count |Total |Count of Requests that succeeded, failed, or were cancelled |Result |
|IntegrationActivityRunsEnded |No |Activity runs ended |Count |Total |Count of integration activities that succeeded, failed, or were cancelled |Result, FailureType, Activity, ActivityType, Pipeline |
|IntegrationLinkConnectionEvents |No |Link connection events |Count |Total |Number of Synapse Link connection events including start, stop and failure. |EventType, LinkConnectionName |
This latest update adds a new column and reorders the metrics to be alphabetical
|SQLStreamingInputEvents |No |Input events (preview) |Count |Total |This is a preview metric available in East US, West Europe. Number of input events. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
|SQLStreamingInputEventsSourcesPerSecond |No |Input sources received (preview) |Count |Total |This is a preview metric available in East US, West Europe. Number of input events sources per second. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
|SQLStreamingLateInputEvents |No |Late input events (preview) |Count |Total |This is a preview metric available in East US, West Europe. Number of input events which application time is considered late compared to arrival time, according to late arrival policy. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
-|SQLStreamingOutOfOrderEvents |No |Out of order events (preview) |Count |Total |This is a preview metric available in East US, West Europe. Number of Event Hubs Events (serialized messages) received by the Event Hubs Input Adapter, received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
+|SQLStreamingOutOfOrderEvents |No |Out of order events (preview) |Count |Total |This is a preview metric available in East US, West Europe. Number of Event Hub Events (serialized messages) received by the Event Hub Input Adapter, received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
|SQLStreamingOutputEvents |No |Output events (preview) |Count |Total |This is a preview metric available in East US, West Europe. Number of output events. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
|SQLStreamingOutputWatermarkDelaySeconds |No |Watermark delay (preview) |Count |Maximum |This is a preview metric available in East US, West Europe. Output watermark delay in seconds. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
-|SQLStreamingResourceUtilization |No |Resource % utilization (preview) |Percent |Maximum |This is a preview metric available in East US, West Europe.
- Resource utilization expressed as a percentage. High utilization indicates that the job is using close to the maximum allocated resources. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
+|SQLStreamingResourceUtilization |No |Resource % utilization (preview) |Percent |Maximum |This is a preview metric available in East US, West Europe. Resource utilization expressed as a percentage. High utilization indicates that the job is using close to the maximum allocated resources. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
|SQLStreamingRuntimeErrors |No |Runtime errors (preview) |Count |Total |This is a preview metric available in East US, West Europe. Total number of errors related to query processing (excluding errors found while ingesting events or outputting results). |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |

## Microsoft.Synapse/workspaces/bigDataPools
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|BigDataPoolAllocatedCores |No |vCores allocated |Count |Maximum |Allocated vCores for an Apache Spark Pool |SubmitterId |
-|BigDataPoolAllocatedMemory |No |Memory allocated (GB) |Count |Maximum |Allocated Memory for Apache Spark Pool (GB) |SubmitterId |
+|BigDataPoolAllocatedMemory |No |Memory allocated (GB) |Count |Maximum |Allocated Memory for Apache Spark Pool (GB) |SubmitterId |
|BigDataPoolApplicationsActive |No |Active Apache Spark applications |Count |Maximum |Total Active Apache Spark Pool Applications |JobState |
|BigDataPoolApplicationsEnded |No |Ended Apache Spark applications |Count |Total |Count of Apache Spark pool applications ended |JobType, JobResult |
This latest update adds a new column and reorders the metrics to be alphabetical
|Handles |Yes |Handle Count |Count |Average |The total number of handles currently open by the app process. For WebApps and FunctionApps. |Instance |
|HealthCheckStatus |Yes |Health check status |Count |Average |Health check status. For WebApps and FunctionApps. |Instance |
|Http101 |Yes |Http 101 |Count |Total |The count of requests resulting in an HTTP status code 101. For WebApps and FunctionApps. |Instance |
-|Http2xx |Yes |Http 2xx |Count |Total |The count of requests resulting in an HTTP status code ≥ 200 but < 300. For WebApps and FunctionApps. |Instance |
-|Http3xx |Yes |Http 3xx |Count |Total |The count of requests resulting in an HTTP status code ≥ 300 but < 400. For WebApps and FunctionApps. |Instance |
+|Http2xx |Yes |Http 2xx |Count |Total |The count of requests resulting in an HTTP status code >= 200 but < 300. For WebApps and FunctionApps. |Instance |
+|Http3xx |Yes |Http 3xx |Count |Total |The count of requests resulting in an HTTP status code >= 300 but < 400. For WebApps and FunctionApps. |Instance |
|Http401 |Yes |Http 401 |Count |Total |The count of requests resulting in HTTP 401 status code. For WebApps and FunctionApps. |Instance |
|Http403 |Yes |Http 403 |Count |Total |The count of requests resulting in HTTP 403 status code. For WebApps and FunctionApps. |Instance |
|Http404 |Yes |Http 404 |Count |Total |The count of requests resulting in HTTP 404 status code. For WebApps and FunctionApps. |Instance |
|Http406 |Yes |Http 406 |Count |Total |The count of requests resulting in HTTP 406 status code. For WebApps and FunctionApps. |Instance |
-|Http4xx |Yes |Http 4xx |Count |Total |The count of requests resulting in an HTTP status code ≥ 400 but < 500. For WebApps and FunctionApps. |Instance |
-|Http5xx |Yes |Http Server Errors |Count |Total |The count of requests resulting in an HTTP status code ≥ 500 but < 600. For WebApps and FunctionApps. |Instance |
+|Http4xx |Yes |Http 4xx |Count |Total |The count of requests resulting in an HTTP status code >= 400 but < 500. For WebApps and FunctionApps. |Instance |
+|Http5xx |Yes |Http Server Errors |Count |Total |The count of requests resulting in an HTTP status code >= 500 but < 600. For WebApps and FunctionApps. |Instance |
|HttpResponseTime |Yes |Response Time |Seconds |Average |The time taken for the app to serve requests, in seconds. For WebApps and FunctionApps. |Instance |
|IoOtherBytesPerSecond |Yes |IO Other Bytes Per Second |BytesPerSecond |Total |The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations. For WebApps and FunctionApps. |Instance |
|IoOtherOperationsPerSecond |Yes |IO Other Operations Per Second |BytesPerSecond |Total |The rate at which the app process is issuing I/O operations that aren't read or write operations. For WebApps and FunctionApps. |Instance |
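The `Http2xx`-`Http5xx` counters bucket responses by status class, so a server-error share falls out of their totals. Sketch only, with a placeholder site resource ID:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

site_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Web/sites/<app>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    site_id,
    metric_names=["Http2xx", "Http3xx", "Http4xx", "Http5xx"],
    timespan=timedelta(days=1),
    aggregations=[MetricAggregationType.TOTAL],
)
totals = {
    m.name: sum(p.total or 0 for p in m.timeseries[0].data)
    for m in result.metrics
}
responses = sum(totals.values())
if responses:
    print(f"5xx share: {100 * totals['Http5xx'] / responses:.2f}%")
```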
This latest update adds a new column and reorders the metrics to be alphabetical
|Handles |Yes |Handle Count |Count |Average |The total number of handles currently open by the app process. |Instance |
|HealthCheckStatus |Yes |Health check status |Count |Average |Health check status |Instance |
|Http101 |Yes |Http 101 |Count |Total |The count of requests resulting in an HTTP status code 101. |Instance |
-|Http2xx |Yes |Http 2xx |Count |Total |The count of requests resulting in an HTTP status code ≥ 200 but < 300. |Instance |
-|Http3xx |Yes |Http 3xx |Count |Total |The count of requests resulting in an HTTP status code ≥ 300 but < 400. |Instance |
+|Http2xx |Yes |Http 2xx |Count |Total |The count of requests resulting in an HTTP status code >= 200 but < 300. |Instance |
+|Http3xx |Yes |Http 3xx |Count |Total |The count of requests resulting in an HTTP status code >= 300 but < 400. |Instance |
|Http401 |Yes |Http 401 |Count |Total |The count of requests resulting in HTTP 401 status code. |Instance |
|Http403 |Yes |Http 403 |Count |Total |The count of requests resulting in HTTP 403 status code. |Instance |
|Http404 |Yes |Http 404 |Count |Total |The count of requests resulting in HTTP 404 status code. |Instance |
|Http406 |Yes |Http 406 |Count |Total |The count of requests resulting in HTTP 406 status code. |Instance |
-|Http4xx |Yes |Http 4xx |Count |Total |The count of requests resulting in an HTTP status code ≥ 400 but < 500. |Instance |
-|Http5xx |Yes |Http Server Errors |Count |Total |The count of requests resulting in an HTTP status code ≥ 500 but < 600. |Instance |
+|Http4xx |Yes |Http 4xx |Count |Total |The count of requests resulting in an HTTP status code >= 400 but < 500. |Instance |
+|Http5xx |Yes |Http Server Errors |Count |Total |The count of requests resulting in an HTTP status code >= 500 but < 600. |Instance |
|HttpResponseTime |Yes |Response Time |Seconds |Average |The time taken for the app to serve requests, in seconds. |Instance |
|IoOtherBytesPerSecond |Yes |IO Other Bytes Per Second |BytesPerSecond |Total |The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations. |Instance |
|IoOtherOperationsPerSecond |Yes |IO Other Operations Per Second |BytesPerSecond |Total |The rate at which the app process is issuing I/O operations that aren't read or write operations. |Instance |
This latest update adds a new column and reorders the metrics alphabetically.
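To see one of these metrics in use, here's a minimal Azure CLI sketch (the subscription, resource group, and app names are placeholders) that pulls the hourly `Http5xx` total for an App Service app:

```azurecli
# Query the Http5xx metric (Total aggregation, one-hour grain) for a web app.
# All IDs below are placeholders for illustration.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<app-name>" \
  --metric Http5xx \
  --aggregation Total \
  --interval PT1H
```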
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|BytesPerSecond |Yes |Bytes per Second. |BytesPerSecond |Average |Throughput speed of Bytes/second being utilized for a migrator. |No Dimensions |
+|BytesPerSecond |Yes |Bytes per Second. |BytesPerSecond |Average |Throughput speed of Bytes/second being utilised for a migrator. |No Dimensions |
|DirectoriesCreatedCount |Yes |Directories Created Count |Count |Total |This provides a running view of how many directories have been created as part of a migration. |No Dimensions |
|FileMigrationCount |Yes |Files Migration Count |Count |Total |This provides a running total of how many files have been migrated. |No Dimensions |
|InitialScanDataMigratedInBytes |Yes |Initial Scan Data Migrated in Bytes |Bytes |Total |This provides the view of the total bytes which have been transferred in a new migrator as a result of the initial scan of the On-Premises file system. Any data which is added to the migration after the initial scan migration is NOT included in this metric. |No Dimensions |
This latest update adds a new column and reorders the metrics alphabetically.
|TotalMigratedDataInBytes |Yes |Total Migrated Data in Bytes |Bytes |Total |This provides a view of the successfully migrated Bytes for a given migrator |No Dimensions |
|TotalTransactions |Yes |Total Transactions |Count |Total |This provides a running total of the Data Transactions for which the user could be billed. |No Dimensions |
+## Wandisco.Fusion/migrators/dataTransferAgents
+<!-- Data source : naam-->
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|BytesPerSecond |Yes |Bytes per Second. |BytesPerSecond |Average |Throughput speed of Bytes/second being utilised for a DTA. |No Dimensions |
+|DtaCPULoad |Yes |DTA CPU Load |Percent |Average |CPU consumption by the DTA process. |No Dimensions |
+|FileMigrationCount |Yes |Files Migration Count |Count |Total |This provides a running total of how many files have been migrated. |No Dimensions |
+|MigratedDataInBytes |Yes |Migrated Data in Bytes |Bytes |Total |This provides a view of the successfully migrated Bytes for a given DTA |No Dimensions |
+|NumberOfFailedPaths |Yes |Number of Failed Paths |Count |Total |A count of which paths have failed to migrate. |No Dimensions |
+|SystemCPULoad |Yes |System CPU Load |Percent |Average |Total CPU consumption. |No Dimensions |
+## Wandisco.Fusion/migrators/liveDataMigrations
+<!-- Data source : naam-->
This latest update adds a new column and reorders the metrics alphabetically.
- [Read about metrics in Azure Monitor](../data-platform.md)
- [Create alerts on metrics](../alerts/alerts-overview.md)
- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
-<!--Gen Date: Sun May 07 2023 12:43:57 GMT+0300 (Israel Daylight Time)-->
++
+<!--Gen Date: Sun May 28 2023 17:43:46 GMT+0300 (Israel Daylight Time)-->
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you do, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension, and you specify the Azure Monitor workspace where the data should be sent.

> [!NOTE]
-> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster. However, both agents do use the Azure Monitor agent.
+> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster. However, both processes use the Azure Monitor agent.
>
> For different methods to enable Container insights on your cluster, see [Enable Container insights](../containers/container-insights-onboard.md). For details on adding Prometheus collection to a cluster that already has Container insights enabled, see [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md).
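As a rough sketch of the enablement step this article describes (assuming the Azure CLI is installed and signed in; the cluster and workspace identifiers are placeholders, and exact flag availability can depend on CLI version):

```azurecli
# Turn on the managed Prometheus metrics add-on for an existing AKS cluster
# and direct the collected metrics to an Azure Monitor workspace.
# <cluster-name>, <resource-group>, and the workspace resource ID are placeholders.
az aks update \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --enable-azure-monitor-metrics \
  --azure-monitor-workspace-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Monitor/accounts/<azure-monitor-workspace>"
```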
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs
description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 05/07/2023 Last updated : 05/28/2023
Following is a list of the types of logs available for each resource type.
Some categories might be supported only for specific types of resources. See the resource-specific documentation if you feel you're missing a resource. For example, Microsoft.Sql/servers/databases categories aren't available for all types of databases. For more information, see [information on SQL Database diagnostic logging](/azure/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure). If you think something is missing, you can open a GitHub comment at the bottom of this article.
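As context for how the category names below are used, here's a minimal Azure CLI sketch (resource and workspace IDs are placeholders) that creates a diagnostic setting routing a single log category, the `AuditEvent` category of a key vault, to a Log Analytics workspace:

```azurecli
# Send the AuditEvent resource log category of a key vault to Log Analytics.
# Both resource IDs below are placeholders.
az monitor diagnostic-settings create \
  --name send-audit-to-law \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category": "AuditEvent", "enabled": true}]'
```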
-## Microsoft.AAD/DomainServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AccountLogon |AccountLogon |No |
-|AccountManagement |AccountManagement |No |
-|DetailTracking |DetailTracking |No |
-|DirectoryServiceAccess |DirectoryServiceAccess |No |
-|LogonLogoff |LogonLogoff |No |
-|ObjectAccess |ObjectAccess |No |
-|PolicyChange |PolicyChange |No |
-|PrivilegeUse |PrivilegeUse |No |
-|SystemSecurity |SystemSecurity |No |
-
-## microsoft.aadiam/tenants
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Signin |Signin |Yes |
-
-## Microsoft.AgFoodPlatform/farmBeats
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationAuditLogs |Application Audit Logs |Yes |
-|FarmManagementLogs |Farm Management Logs |Yes |
-|FarmOperationLogs |Farm Operation Logs |Yes |
-|InsightLogs |Insight Logs |Yes |
-|JobProcessedLogs |Job Processed Logs |Yes |
-|ModelInferenceLogs |Model Inference Logs |Yes |
-|ProviderAuthLogs |Provider Auth Logs |Yes |
-|SatelliteLogs |Satellite Logs |Yes |
-|SensorManagementLogs |Sensor Management Logs |Yes |
-|WeatherLogs |Weather Logs |Yes |
-
-## Microsoft.AnalysisServices/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
-|Service |Service |No |
-
-## Microsoft.ApiManagement/service
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayLogs |Logs related to ApiManagement Gateway |No |
-|WebSocketConnectionLogs |Logs related to Websocket Connections |Yes |
-
-## Microsoft.App/managedEnvironments
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppEnvSpringAppConsoleLogs |Spring App console logs |Yes |
-|ContainerAppConsoleLogs |Container App console logs |Yes |
-|ContainerAppSystemLogs |Container App system logs |Yes |
-
-## Microsoft.AppConfiguration/configurationStores
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|HttpRequest |HTTP Requests |Yes |
-
-## Microsoft.AppPlatform/Spring
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationConsole |Application Console |No |
-|BuildLogs |Build Logs |Yes |
-|ContainerEventLogs |Container Event Logs |Yes |
-|IngressLogs |Ingress Logs |Yes |
-|SystemLogs |System Logs |No |
-
-## Microsoft.Attestation/attestationProviders
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |AuditEvent message log category. |No |
-|NotProcessed |Requests which could not be processed. |Yes |
-|Operational |Operational message log category. |Yes |
-
-## Microsoft.Automation/automationAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |AuditEvent |Yes |
-|DscNodeStatus |DscNodeStatus |No |
-|JobLogs |JobLogs |No |
-|JobStreams |JobStreams |No |
-
-## Microsoft.AutonomousDevelopmentPlatform/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|Operational |Operational |Yes |
-|Request |Request |Yes |
-
-## Microsoft.AutonomousDevelopmentPlatform/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|Operational |Operational |Yes |
-|Request |Request |Yes |
-
-## microsoft.avs/privateClouds
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|vmwaresyslog |VMware Syslog |Yes |
-
-## microsoft.azuresphere/catalogs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit Logs |Yes |
-|DeviceEvents |Device Events |Yes |
-
-## Microsoft.Batch/batchaccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLog |Audit Logs |Yes |
-|ServiceLog |Service Logs |No |
-|ServiceLogs |Service Logs |Yes |
-
-## microsoft.botservice/botservices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BotRequest |Requests from the channels to the bot |Yes |
-
-## Microsoft.Cache/redis
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ConnectedClientList |Connected client list |Yes |
-
-## Microsoft.Cache/redisEnterprise/databases
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ConnectionEvents |Connection events (New Connection/Authentication/Disconnection) |Yes |
-
-## Microsoft.Cdn/cdnwebapplicationfirewallpolicies
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|WebApplicationFirewallLogs |Web Application Firewall Logs |No |
-
-## Microsoft.Cdn/profiles
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AzureCdnAccessLog |Azure Cdn Access Log |No |
-|FrontDoorAccessLog |FrontDoor Access Log |Yes |
-|FrontDoorHealthProbeLog |FrontDoor Health Probe Log |Yes |
-|FrontDoorWebApplicationFirewallLog |FrontDoor WebApplicationFirewall Log |Yes |
-
-## Microsoft.Cdn/profiles/endpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CoreAnalytics |Gets the metrics of the endpoint, e.g., bandwidth, egress, etc. |No |
-
-## Microsoft.Chaos/experiments
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ExperimentOrchestration |Experiment Orchestration Events |Yes |
-
-## Microsoft.ClassicNetwork/networksecuritygroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Network Security Group Rule Flow Event |Network Security Group Rule Flow Event |No |
-
-## Microsoft.CodeSigning/codesigningaccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|SignTransactions |Sign Transactions |Yes |
-
-## Microsoft.CognitiveServices/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|RequestResponse |Request and Response Logs |No |
-|Trace |Trace Logs |No |
-
-## Microsoft.Communication/CommunicationServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuthOperational |Operational Authentication Logs |Yes |
-|CallAutomationOperational |Operational Call Automation Logs |Yes |
-|CallDiagnostics |Call Diagnostics Logs |Yes |
-|CallRecordingSummary |Call Recording Summary Logs |Yes |
-|CallSummary |Call Summary Logs |Yes |
-|ChatOperational |Operational Chat Logs |No |
-|EmailSendMailOperational |Email Service Send Mail Logs |Yes |
-|EmailStatusUpdateOperational |Email Service Delivery Status Update Logs |Yes |
-|EmailUserEngagementOperational |Email Service User Engagement Logs |Yes |
-|JobRouterOperational |Operational Job Router Logs |Yes |
-|NetworkTraversalDiagnostics |Network Traversal Relay Diagnostic Logs |Yes |
-|NetworkTraversalOperational |Operational Network Traversal Logs |Yes |
-|RoomsOperational |Operational Rooms Logs |Yes |
-|SMSOperational |Operational SMS Logs |No |
-|Usage |Usage Records |No |
-
-## Microsoft.Compute/virtualMachines
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|SoftwareUpdateProfile |SoftwareUpdateProfile |Yes |
-|SoftwareUpdates |SoftwareUpdates |Yes |
-
-## Microsoft.ConfidentialLedger/ManagedCCF
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|applicationlogs |CCF Application Logs |Yes |
-
-## Microsoft.ConfidentialLedger/ManagedCCFs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|applicationlogs |CCF Application Logs |Yes |
-
-## Microsoft.ConnectedCache/CacheNodes
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Events |Events |Yes |
-
-## Microsoft.ConnectedCache/ispCustomers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Events |Events |Yes |
-
-## Microsoft.ConnectedVehicle/platformAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |MCVP Audit Logs |Yes |
-|Logs |MCVP Logs |Yes |
-
-## Microsoft.ContainerRegistry/registries
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ContainerRegistryLoginEvents |Login Events |No |
-|ContainerRegistryRepositoryEvents |RepositoryEvent logs |No |
-
-## Microsoft.ContainerService/fleets
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
-|cluster-autoscaler |Kubernetes Cluster Autoscaler |Yes |
-|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
-|csi-azurefile-controller |csi-azurefile-controller |Yes |
-|csi-snapshot-controller |csi-snapshot-controller |Yes |
-|guard |guard |Yes |
-|kube-apiserver |Kubernetes API Server |Yes |
-|kube-audit |Kubernetes Audit |Yes |
-|kube-audit-admin |Kubernetes Audit Admin Logs |Yes |
-|kube-controller-manager |Kubernetes Controller Manager |Yes |
-|kube-scheduler |Kubernetes Scheduler |Yes |
-
-## Microsoft.ContainerService/managedClusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
-|cluster-autoscaler |Kubernetes Cluster Autoscaler |No |
-|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
-|csi-azurefile-controller |csi-azurefile-controller |Yes |
-|csi-snapshot-controller |csi-snapshot-controller |Yes |
-|guard |guard |No |
-|kube-apiserver |Kubernetes API Server |No |
-|kube-audit |Kubernetes Audit |No |
-|kube-audit-admin |Kubernetes Audit Admin Logs |No |
-|kube-controller-manager |Kubernetes Controller Manager |No |
-|kube-scheduler |Kubernetes Scheduler |No |
-
-## Microsoft.CustomProviders/resourceproviders
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs for MiniRP calls |No |
-
-## Microsoft.D365CustomerInsights/instances
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit events |No |
-|Operational |Operational events |No |
-
-## Microsoft.Dashboard/grafana
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GrafanaLoginEvents |Grafana Login Events |Yes |
-
-## Microsoft.Databricks/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|accounts |Databricks Accounts |No |
-|capsule8Dataplane |Databricks Capsule8 Container Security Scanning Reports |Yes |
-|clamAVScan |Databricks Clam AV Scan |Yes |
-|clusterLibraries |Databricks Cluster Libraries |Yes |
-|clusters |Databricks Clusters |No |
-|databrickssql |Databricks DatabricksSQL |Yes |
-|dbfs |Databricks File System |No |
-|deltaPipelines |Databricks Delta Pipelines |Yes |
-|featureStore |Databricks Feature Store |Yes |
-|genie |Databricks Genie |Yes |
-|gitCredentials |Databricks Git Credentials |Yes |
-|globalInitScripts |Databricks Global Init Scripts |Yes |
-|iamRole |Databricks IAM Role |Yes |
-|instancePools |Instance Pools |No |
-|jobs |Databricks Jobs |No |
-|mlflowAcledArtifact |Databricks MLFlow Acled Artifact |Yes |
-|mlflowExperiment |Databricks MLFlow Experiment |Yes |
-|modelRegistry |Databricks Model Registry |Yes |
-|notebook |Databricks Notebook |No |
-|partnerHub |Databricks Partner Hub |Yes |
-|RemoteHistoryService |Databricks Remote History Service |Yes |
-|repos |Databricks Repos |Yes |
-|secrets |Databricks Secrets |No |
-|serverlessRealTimeInference |Databricks Serverless Real-Time Inference |Yes |
-|sqlanalytics |Databricks SQL Analytics |Yes |
-|sqlPermissions |Databricks SQLPermissions |No |
-|ssh |Databricks SSH |No |
-|unityCatalog |Databricks Unity Catalog |Yes |
-|webTerminal |Databricks Web Terminal |Yes |
-|workspace |Databricks Workspace |No |
-
-## Microsoft.DataCollaboration/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CollaborationAudit |Collaboration Audit |Yes |
-|Computations |Computations |Yes |
-|DataAssets |Data Assets |No |
-|Pipelines |Pipelines |No |
-|Proposals |Proposals |No |
-|Scripts |Scripts |No |
-
-## Microsoft.DataFactory/factories
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ActivityRuns |Pipeline activity runs log |No |
-|AirflowDagProcessingLogs |Airflow dag processing logs |Yes |
-|AirflowSchedulerLogs |Airflow scheduler logs |Yes |
-|AirflowTaskLogs |Airflow task execution logs |Yes |
-|AirflowWebLogs |Airflow web logs |Yes |
-|AirflowWorkerLogs |Airflow worker logs |Yes |
-|PipelineRuns |Pipeline runs log |No |
-|SandboxActivityRuns |Sandbox Activity runs log |Yes |
-|SandboxPipelineRuns |Sandbox Pipeline runs log |Yes |
-|SSISIntegrationRuntimeLogs |SSIS integration runtime logs |No |
-|SSISPackageEventMessageContext |SSIS package event message context |No |
-|SSISPackageEventMessages |SSIS package event messages |No |
-|SSISPackageExecutableStatistics |SSIS package executable statistics |No |
-|SSISPackageExecutionComponentPhases |SSIS package execution component phases |No |
-|SSISPackageExecutionDataStatistics |SSIS package execution data statistics |No |
-|TriggerRuns |Trigger runs log |No |
-
-## Microsoft.DataLakeAnalytics/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|ConfigurationChange |Configuration Change Event Logs |Yes |
-|JobEvent |Job Event Logs |Yes |
-|JobInfo |Job Info Logs |Yes |
-|Requests |Request Logs |No |
-
-## Microsoft.DataLakeStore/accounts
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|Requests |Request Logs |No |
-
-## Microsoft.DataProtection/BackupVaults
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AddonAzureBackupJobs |Addon Azure Backup Job Data |Yes |
-|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |Yes |
-|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |Yes |
-|CoreAzureBackup |Core Azure Backup Data |Yes |
-
-## Microsoft.DataShare/accounts
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ReceivedShareSnapshots |Received Share Snapshots |No |
-|SentShareSnapshots |Sent Share Snapshots |No |
-|Shares |Shares |No |
-|ShareSubscriptions |Share Subscriptions |No |
-
-## Microsoft.DBforMariaDB/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MariaDB Audit Logs |No |
-|MySqlSlowLogs |MariaDB Server Logs |No |
-
-## Microsoft.DBforMySQL/flexibleServers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MySQL Audit Logs |No |
-|MySqlSlowLogs |MySQL Slow Logs |No |
-
-## Microsoft.DBforMySQL/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MySQL Audit Logs |No |
-|MySqlSlowLogs |MySQL Server Logs |No |
-
-## Microsoft.DBforPostgreSQL/flexibleServers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLFlexDatabaseXacts |PostgreSQL remaining transactions |Yes |
-|PostgreSQLFlexQueryStoreRuntime |PostgreSQL Query Store Runtime |Yes |
-|PostgreSQLFlexQueryStoreWaitStats |PostgreSQL Query Store Wait Statistics |Yes |
-|PostgreSQLFlexSessions |PostgreSQL Sessions data |Yes |
-|PostgreSQLFlexTableStats |PostgreSQL Autovacuum and schema statistics |Yes |
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
-
-## Microsoft.DBForPostgreSQL/serverGroupsv2
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |Yes |
-
-## Microsoft.DBforPostgreSQL/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
-|QueryStoreRuntimeStatistics |PostgreSQL Query Store Runtime Statistics |No |
-|QueryStoreWaitStatistics |PostgreSQL Query Store Wait Statistics |No |
-
-## Microsoft.DBforPostgreSQL/serversv2
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
-
-## Microsoft.DesktopVirtualization/applicationgroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Checkpoint |Checkpoint |No |
-|Error |Error |No |
-|Management |Management |No |
-
-## Microsoft.DesktopVirtualization/hostpools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AgentHealthStatus |AgentHealthStatus |No |
-|AutoscaleEvaluationPooled |Do not use - internal testing |Yes |
-|Checkpoint |Checkpoint |No |
-|Connection |Connection |No |
-|ConnectionGraphicsData |Connection Graphics Data Logs Preview |Yes |
-|Error |Error |No |
-|HostRegistration |HostRegistration |No |
-|Management |Management |No |
-|NetworkData |Network Data Logs |Yes |
-|SessionHostManagement |Session Host Management Activity Logs |Yes |
-
-## Microsoft.DesktopVirtualization/scalingplans
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Autoscale |Autoscale logs |Yes |
-
-## Microsoft.DesktopVirtualization/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Checkpoint |Checkpoint |No |
-|Error |Error |No |
-|Feed |Feed |No |
-|Management |Management |No |
-
-## Microsoft.DevCenter/devcenters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataplaneAuditEvent |Dataplane audit logs |Yes |
-
-## Microsoft.Devices/IotHubs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|C2DCommands |C2D Commands |No |
-|C2DTwinOperations |C2D Twin Operations |No |
-|Configurations |Configurations |No |
-|Connections |Connections |No |
-|D2CTwinOperations |D2CTwinOperations |No |
-|DeviceIdentityOperations |Device Identity Operations |No |
-|DeviceStreams |Device Streams (Preview) |No |
-|DeviceTelemetry |Device Telemetry |No |
-|DirectMethods |Direct Methods |No |
-|DistributedTracing |Distributed Tracing (Preview) |No |
-|FileUploadOperations |File Upload Operations |No |
-|JobsOperations |Jobs Operations |No |
-|Routes |Routes |No |
-|TwinQueries |Twin Queries |No |
-
-## Microsoft.Devices/provisioningServices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeviceOperations |Device Operations |No |
-|ServiceOperations |Service Operations |No |
-
-## Microsoft.DigitalTwins/digitalTwinsInstances
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataHistoryOperation |DataHistoryOperation |Yes |
-|DigitalTwinsOperation |DigitalTwinsOperation |No |
-|EventRoutesOperation |EventRoutesOperation |No |
-|ModelsOperation |ModelsOperation |No |
-|QueryOperation |QueryOperation |No |
-|ResourceProviderOperation |ResourceProviderOperation |Yes |
-
-## Microsoft.DocumentDB/cassandraClusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CassandraAudit |CassandraAudit |Yes |
-|CassandraLogs |CassandraLogs |Yes |
-
-## Microsoft.DocumentDB/DatabaseAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CassandraRequests |CassandraRequests |No |
-|ControlPlaneRequests |ControlPlaneRequests |No |
-|DataPlaneRequests |DataPlaneRequests |No |
-|GremlinRequests |GremlinRequests |No |
-|MongoRequests |MongoRequests |No |
-|PartitionKeyRUConsumption |PartitionKeyRUConsumption |No |
-|PartitionKeyStatistics |PartitionKeyStatistics |No |
-|QueryRuntimeStatistics |QueryRuntimeStatistics |No |
-|TableApiRequests |TableApiRequests |Yes |
-
-## Microsoft.EventGrid/domains
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|DeliveryFailures |Delivery Failure Logs |No |
-|PublishFailures |Publish Failure Logs |No |
-
-## Microsoft.EventGrid/partnerNamespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|PublishFailures |Publish Failure Logs |No |
-
-## Microsoft.EventGrid/partnerTopics
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeliveryFailures |Delivery Failure Logs |No |
-
-## Microsoft.EventGrid/systemTopics
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeliveryFailures |Delivery Failure Logs |No |
-
-## Microsoft.EventGrid/topics
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|DeliveryFailures |Delivery Failure Logs |No |
-|PublishFailures |Publish Failure Logs |No |
-
-## Microsoft.EventHub/Namespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationMetricsLogs |Application Metrics Logs |Yes |
-|ArchiveLogs |Archive Logs |No |
-|AutoScaleLogs |Auto Scale Logs |No |
-|CustomerManagedKeyUserLogs |Customer Managed Key Logs |No |
-|EventHubVNetConnectionEvent |VNet/IP Filtering Connection Logs |No |
-|KafkaCoordinatorLogs |Kafka Coordinator Logs |No |
-|KafkaUserErrorLogs |Kafka User Error Logs |No |
-|OperationalLogs |Operational Logs |No |
-|RuntimeAuditLogs |Runtime Audit Logs |Yes |
-
-## Microsoft.HealthcareApis/services
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs |No |
-|DiagnosticLogs |Diagnostic logs |Yes |
-
-## Microsoft.HealthcareApis/workspaces/analyticsconnectors
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DiagnosticLogs |Diagnostic logs for Analytics Connector |Yes |
-
-## Microsoft.HealthcareApis/workspaces/dicomservices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs |Yes |
-|DiagnosticLogs |Diagnostic logs |Yes |
-
-## Microsoft.HealthcareApis/workspaces/fhirservices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |FHIR Audit logs |Yes |
-
-## Microsoft.HealthcareApis/workspaces/iotconnectors
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DiagnosticLogs |Diagnostic logs |Yes |
-
-## microsoft.insights/autoscalesettings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AutoscaleEvaluations |Autoscale Evaluations |No |
-|AutoscaleScaleActions |Autoscale Scale Actions |No |
-
-## microsoft.insights/components
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppAvailabilityResults |Availability results |No |
-|AppBrowserTimings |Browser timings |No |
-|AppDependencies |Dependencies |No |
-|AppEvents |Events |No |
-|AppExceptions |Exceptions |No |
-|AppMetrics |Metrics |No |
-|AppPageViews |Page views |No |
-|AppPerformanceCounters |Performance counters |No |
-|AppRequests |Requests |No |
-|AppSystemEvents |System events |No |
-|AppTraces |Traces |No |
-
-## Microsoft.Insights/datacollectionrules
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|LogErrors |Log Errors |Yes |
-|LogTroubleshooting |Log Troubleshooting |Yes |
-
-## microsoft.keyvault/managedhsms
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |Audit Event |No |
-
-## Microsoft.KeyVault/vaults
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |Audit Logs |No |
-|AzurePolicyEvaluationDetails |Azure Policy Evaluation Details |Yes |
-
-## Microsoft.Kusto/clusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Command |Command |No |
-|FailedIngestion |Failed ingestion |No |
-|IngestionBatching |Ingestion batching |No |
-|Journal |Journal |Yes |
-|Query |Query |No |
-|SucceededIngestion |Succeeded ingestion |No |
-|TableDetails |Table details |No |
-|TableUsageStatistics |Table usage statistics |No |
-
-## microsoft.loadtestservice/loadtests
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|OperationLogs |Azure Load Testing Operations |Yes |
-
-## Microsoft.Logic/IntegrationAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|IntegrationAccountTrackingEvents |Integration Account track events |No |
-
-## Microsoft.Logic/Workflows
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|WorkflowRuntime |Workflow runtime diagnostic events |No |
-
-## Microsoft.MachineLearningServices/registries
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|RegistryAssetReadEvent |Registry Asset Read Event |Yes |
-|RegistryAssetWriteEvent |Registry Asset Write Event |Yes |
-
-## Microsoft.MachineLearningServices/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AmlComputeClusterEvent |AmlComputeClusterEvent |No |
-|AmlComputeClusterNodeEvent |AmlComputeClusterNodeEvent |Yes |
-|AmlComputeCpuGpuUtilization |AmlComputeCpuGpuUtilization |No |
-|AmlComputeJobEvent |AmlComputeJobEvent |No |
-|AmlRunStatusChangedEvent |AmlRunStatusChangedEvent |No |
-|ComputeInstanceEvent |ComputeInstanceEvent |Yes |
-|DataLabelChangeEvent |DataLabelChangeEvent |Yes |
-|DataLabelReadEvent |DataLabelReadEvent |Yes |
-|DataSetChangeEvent |DataSetChangeEvent |Yes |
-|DataSetReadEvent |DataSetReadEvent |Yes |
-|DataStoreChangeEvent |DataStoreChangeEvent |Yes |
-|DataStoreReadEvent |DataStoreReadEvent |Yes |
-|DeploymentEventACI |DeploymentEventACI |Yes |
-|DeploymentEventAKS |DeploymentEventAKS |Yes |
-|DeploymentReadEvent |DeploymentReadEvent |Yes |
-|EnvironmentChangeEvent |EnvironmentChangeEvent |Yes |
-|EnvironmentReadEvent |EnvironmentReadEvent |Yes |
-|InferencingOperationACI |InferencingOperationACI |Yes |
-|InferencingOperationAKS |InferencingOperationAKS |Yes |
-|ModelsActionEvent |ModelsActionEvent |Yes |
-|ModelsChangeEvent |ModelsChangeEvent |Yes |
-|ModelsReadEvent |ModelsReadEvent |Yes |
-|PipelineChangeEvent |PipelineChangeEvent |Yes |
-|PipelineReadEvent |PipelineReadEvent |Yes |
-|RunEvent |RunEvent |Yes |
-|RunReadEvent |RunReadEvent |Yes |
-
-## Microsoft.MachineLearningServices/workspaces/onlineEndpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AmlOnlineEndpointConsoleLog |AmlOnlineEndpointConsoleLog |Yes |
-|AmlOnlineEndpointEventLog |AmlOnlineEndpointEventLog (preview) |Yes |
-|AmlOnlineEndpointTrafficLog |AmlOnlineEndpointTrafficLog (preview) |Yes |
-
-## Microsoft.ManagedNetworkFabric/networkDevices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppAvailabilityResults |Availability results |Yes |
-|AppBrowserTimings |Browser timings |Yes |
-
-## Microsoft.Media/mediaservices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|KeyDeliveryRequests |Key Delivery Requests |No |
-|MediaAccount |Media Account Health Status |Yes |
-
-## Microsoft.Media/mediaservices/liveEvents
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|LiveEventState |Live Event Operations |Yes |
-
-## Microsoft.Media/mediaservices/streamingEndpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StreamingEndpointRequests |Streaming Endpoint Requests |Yes |
-
-## Microsoft.Media/videoanalyzers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |Yes |
-|Diagnostics |Diagnostics Logs |Yes |
-|Operational |Operational Logs |Yes |
-
-## Microsoft.NetApp/netAppAccounts/capacityPools
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Autoscale |Capacity Pool Autoscaled |Yes |
-
-## Microsoft.NetApp/netAppAccounts/capacityPools/volumes
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ANFFileAccess |ANF File Access |Yes |
-
-## Microsoft.Network/applicationgateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationGatewayAccessLog |Application Gateway Access Log |No |
-|ApplicationGatewayFirewallLog |Application Gateway Firewall Log |No |
-|ApplicationGatewayPerformanceLog |Application Gateway Performance Log |No |
-
-## Microsoft.Network/azureFirewalls
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AZFWApplicationRule |Azure Firewall Application Rule |Yes |
-|AZFWApplicationRuleAggregation |Azure Firewall Network Rule Aggregation (Policy Analytics) |Yes |
-|AZFWDnsQuery |Azure Firewall DNS query |Yes |
-|AZFWFatFlow |Azure Firewall Fat Flow Log |Yes |
-|AZFWFlowTrace |Azure Firewall Flow Trace Log |Yes |
-|AZFWFqdnResolveFailure |Azure Firewall FQDN Resolution Failure |Yes |
-|AZFWIdpsSignature |Azure Firewall IDPS Signature |Yes |
-|AZFWNatRule |Azure Firewall Nat Rule |Yes |
-|AZFWNatRuleAggregation |Azure Firewall Nat Rule Aggregation (Policy Analytics) |Yes |
-|AZFWNetworkRule |Azure Firewall Network Rule |Yes |
-|AZFWNetworkRuleAggregation |Azure Firewall Application Rule Aggregation (Policy Analytics) |Yes |
-|AZFWThreatIntel |Azure Firewall Threat Intelligence |Yes |
-|AzureFirewallApplicationRule |Azure Firewall Application Rule (Legacy Azure Diagnostics) |No |
-|AzureFirewallDnsProxy |Azure Firewall DNS Proxy (Legacy Azure Diagnostics) |No |
-|AzureFirewallNetworkRule |Azure Firewall Network Rule (Legacy Azure Diagnostics) |No |
-
-## microsoft.network/bastionHosts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BastionAuditLogs |Bastion Audit Logs |No |
-
-## Microsoft.Network/expressRouteCircuits
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PeeringRouteLog |Peering Route Table Logs |No |
-
-## Microsoft.Network/frontdoors
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|FrontdoorAccessLog |Frontdoor Access Log |No |
-|FrontdoorWebApplicationFirewallLog |Frontdoor Web Application Firewall Log |No |
-
-## Microsoft.Network/loadBalancers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|LoadBalancerAlertEvent |Load Balancer Alert Events |No |
-|LoadBalancerProbeHealthStatus |Load Balancer Probe Health Status |No |
-
-## Microsoft.Network/networkManagers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NetworkGroupMembershipChange |Network Group Membership Change |Yes |
-
-## Microsoft.Network/networksecuritygroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NetworkSecurityGroupEvent |Network Security Group Event |No |
-|NetworkSecurityGroupFlowEvent |Network Security Group Rule Flow Event |No |
-|NetworkSecurityGroupRuleCounter |Network Security Group Rule Counter |No |
-
-## Microsoft.Network/networkSecurityPerimeters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NspCrossPerimeterInboundAllowed |Cross perimeter inbound access allowed by perimeter link. |Yes |
-|NspCrossPerimeterOutboundAllowed |Cross perimeter outbound access allowed by perimeter link. |Yes |
-|NspIntraPerimeterInboundAllowed |Inbound access allowed within same perimeter. |Yes |
-|NspIntraPerimeterOutboundAllowed |Outbound attempted to same perimeter. NOTE: To be deprecated in future. |Yes |
-|NspOutboundAttempt |Outbound attempted to same or different perimeter. |Yes |
-|NspPrivateInboundAllowed |Private endpoint traffic allowed. |Yes |
-|NspPublicInboundPerimeterRulesAllowed |Public inbound access allowed by NSP access rules. |Yes |
-|NspPublicInboundPerimeterRulesDenied |Public inbound access denied by NSP access rules. |Yes |
-|NspPublicInboundResourceRulesAllowed |Public inbound access allowed by PaaS resource rules. |Yes |
-|NspPublicInboundResourceRulesDenied |Public inbound access denied by PaaS resource rules. |Yes |
-|NspPublicOutboundPerimeterRulesAllowed |Public outbound access allowed by NSP access rules. |Yes |
-|NspPublicOutboundPerimeterRulesDenied |Public outbound access denied by NSP access rules. |Yes |
-|NspPublicOutboundResourceRulesAllowed |Public outbound access allowed by PaaS resource rules. |Yes |
-|NspPublicOutboundResourceRulesDenied |Public outbound access denied by PaaS resource rules |Yes |
-
-## Microsoft.Network/networkSecurityPerimeters/profiles
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NSPInboundAccessAllowed |NSP Inbound Access Allowed. |Yes |
-|NSPInboundAccessDenied |NSP Inbound Access Denied. |Yes |
-|NSPOutboundAccessAllowed |NSP Outbound Access Allowed. |Yes |
-|NSPOutboundAccessDenied |NSP Outbound Access Denied. |Yes |
-
-## microsoft.network/p2svpngateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
-|IKEDiagnosticLog |IKE Diagnostic Logs |No |
-|P2SDiagnosticLog |P2S Diagnostic Logs |No |
-
-## Microsoft.Network/publicIPAddresses
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DDoSMitigationFlowLogs |Flow logs of DDoS mitigation decisions |No |
-|DDoSMitigationReports |Reports of DDoS mitigations |No |
-|DDoSProtectionNotifications |DDoS protection notifications |No |
-
-## Microsoft.Network/trafficManagerProfiles
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ProbeHealthStatusEvents |Traffic Manager Probe Health Results Event |No |
-
-## microsoft.network/virtualnetworkgateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
-|IKEDiagnosticLog |IKE Diagnostic Logs |No |
-|P2SDiagnosticLog |P2S Diagnostic Logs |No |
-|RouteDiagnosticLog |Route Diagnostic Logs |No |
-|TunnelDiagnosticLog |Tunnel Diagnostic Logs |No |
-
-## Microsoft.Network/virtualNetworks
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|VMProtectionAlerts |VM protection alerts |No |
-
-## microsoft.network/vpngateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
-|IKEDiagnosticLog |IKE Diagnostic Logs |No |
-|RouteDiagnosticLog |Route Diagnostic Logs |No |
-|TunnelDiagnosticLog |Tunnel Diagnostic Logs |No |
-
-## Microsoft.NetworkFunction/azureTrafficCollectors
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ExpressRouteCircuitIpfix |Express Route Circuit IPFIX Flow Records |Yes |
-
-## Microsoft.NotificationHubs/namespaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|OperationalLogs |Operational Logs |No |
-
-## MICROSOFT.OPENENERGYPLATFORM/ENERGYSERVICES
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AirFlowTaskLogs |Air Flow Task Logs |Yes |
-|ElasticOperatorLogs |Elastic Operator Logs |Yes |
-|ElasticsearchLogs |Elasticsearch Logs |Yes |
-
-## Microsoft.OpenLogisticsPlatform/Workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|SupplyChainEntityOperations |Supply Chain Entity Operations |Yes |
-|SupplyChainEventLogs |Supply Chain Event logs |Yes |
-
-## Microsoft.OperationalInsights/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |No |
-
-## Microsoft.PlayFab/titles
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |AuditLogs |Yes |
-
-## Microsoft.PowerBI/tenants
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
-
-## Microsoft.PowerBI/tenants/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
-
-## Microsoft.PowerBIDedicated/capacities
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
-
-## microsoft.purview/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataSensitivityLogEvent |DataSensitivity |Yes |
-|ScanStatusLogEvent |ScanStatus |No |
-|Security |PurviewAccountAuditEvents |Yes |
-
-## Microsoft.RecoveryServices/Vaults
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AddonAzureBackupAlerts |Addon Azure Backup Alert Data |No |
-|AddonAzureBackupJobs |Addon Azure Backup Job Data |No |
-|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |No |
-|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |No |
-|AddonAzureBackupStorage |Addon Azure Backup Storage Data |No |
-|ASRReplicatedItems |Azure Site Recovery Replicated Items Details |Yes |
-|AzureBackupReport |Azure Backup Reporting Data |No |
-|AzureSiteRecoveryEvents |Azure Site Recovery Events |No |
-|AzureSiteRecoveryJobs |Azure Site Recovery Jobs |No |
-|AzureSiteRecoveryProtectedDiskDataChurn |Azure Site Recovery Protected Disk Data Churn |No |
-|AzureSiteRecoveryRecoveryPoints |Azure Site Recovery Recovery Points |No |
-|AzureSiteRecoveryReplicatedItems |Azure Site Recovery Replicated Items |No |
-|AzureSiteRecoveryReplicationDataUploadRate |Azure Site Recovery Replication Data Upload Rate |No |
-|AzureSiteRecoveryReplicationStats |Azure Site Recovery Replication Stats |No |
-|CoreAzureBackup |Core Azure Backup Data |No |
-
-## Microsoft.Relay/namespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|HybridConnectionsEvent |HybridConnections Events |No |
-|HybridConnectionsLogs |HybridConnectionsLogs |Yes |
-
-## Microsoft.Search/searchServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|OperationLogs |Operation Logs |No |
-
-## Microsoft.Security/antiMalwareSettings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ScanResults |AntimalwareScanResults |Yes |
-
-## Microsoft.Security/defenderForStorageSettings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ScanResults |AntimalwareScanResults |Yes |
-
-## microsoft.securityinsights/settings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Analytics |Analytics |Yes |
-|Automation |Automation |Yes |
-|DataConnectors |Data Collection - Connectors |Yes |
-
-## Microsoft.ServiceBus/Namespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationMetricsLogs |Application Metrics Logs (Unused) |Yes |
-|OperationalLogs |Operational Logs |No |
-|RuntimeAuditLogs |Runtime Audit Logs |Yes |
-|VNetAndIPFilteringLogs |VNet/IP Filtering Connection Logs |No |
-
-## Microsoft.SignalRService/SignalR
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AllLogs |Azure SignalR Service Logs. |No |
-
-## Microsoft.SignalRService/WebPubSub
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ConnectivityLogs |Connectivity logs for Azure Web PubSub Service. |Yes |
-|HttpRequestLogs |Http Request logs for Azure Web PubSub Service. |Yes |
-|MessagingLogs |Messaging logs for Azure Web PubSub Service. |Yes |
-
-## microsoft.singularity/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Activity |Activity Logs |Yes |
-|Execution |Execution Logs |Yes |
-
-## Microsoft.Sql/managedInstances
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DevOpsOperationsAudit |Devops operations Audit Logs |No |
-|ResourceUsageStats |Resource Usage Statistics |No |
-|SQLSecurityAuditEvents |SQL Security Audit Event |No |
-
-## Microsoft.Sql/managedInstances/databases
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Errors |Errors |No |
-|QueryStoreRuntimeStatistics |Query Store Runtime Statistics |No |
-|QueryStoreWaitStatistics |Query Store Wait Statistics |No |
-|SQLInsights |SQL Insights |No |
-
-## Microsoft.Sql/servers/databases
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AutomaticTuning |Automatic tuning |No |
-|Blocks |Blocks |No |
-|DatabaseWaitStatistics |Database Wait Statistics |No |
-|Deadlocks |Deadlocks |No |
-|DevOpsOperationsAudit |Devops operations Audit Logs |No |
-|DmsWorkers |Dms Workers |No |
-|Errors |Errors |No |
-|ExecRequests |Exec Requests |No |
-|QueryStoreRuntimeStatistics |Query Store Runtime Statistics |No |
-|QueryStoreWaitStatistics |Query Store Wait Statistics |No |
-|RequestSteps |Request Steps |No |
-|SQLInsights |SQL Insights |No |
-|SqlRequests |Sql Requests |No |
-|SQLSecurityAuditEvents |SQL Security Audit Event |No |
-|Timeouts |Timeouts |No |
-|Waits |Waits |No |
-
-## Microsoft.Storage/storageAccounts/blobServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StorageDelete |StorageDelete |Yes |
-|StorageRead |StorageRead |Yes |
-|StorageWrite |StorageWrite |Yes |
-
-## Microsoft.Storage/storageAccounts/fileServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StorageDelete |StorageDelete |Yes |
-|StorageRead |StorageRead |Yes |
-|StorageWrite |StorageWrite |Yes |
-
-## Microsoft.Storage/storageAccounts/queueServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StorageDelete |StorageDelete |Yes |
-|StorageRead |StorageRead |Yes |
-|StorageWrite |StorageWrite |Yes |
-
-## Microsoft.Storage/storageAccounts/tableServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StorageDelete |StorageDelete |Yes |
-|StorageRead |StorageRead |Yes |
-|StorageWrite |StorageWrite |Yes |
-
-## Microsoft.StorageCache/caches
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AscCacheOperationEvent |HPC Cache operation event |Yes |
-|AscUpgradeEvent |HPC Cache upgrade event |Yes |
-|AscWarningEvent |HPC Cache warning |Yes |
-
-## Microsoft.StorageMover/storageMovers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CopyLogsFailed |Copy logs - Failed |Yes |
-|JobRunLogs |Job run logs |Yes |
-
-## Microsoft.StreamAnalytics/streamingjobs
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Authoring |Authoring |No |
-|Execution |Execution |No |
-
-## Microsoft.Synapse/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BuiltinSqlReqsEnded |Built-in Sql Pool Requests Ended |No |
-|GatewayApiRequests |Synapse Gateway Api Requests |No |
-|IntegrationActivityRuns |Integration Activity Runs |Yes |
-|IntegrationPipelineRuns |Integration Pipeline Runs |Yes |
-|IntegrationTriggerRuns |Integration Trigger Runs |Yes |
-|SQLSecurityAuditEvents |SQL Security Audit Event |No |
-|SynapseLinkEvent |Synapse Link Event |Yes |
-|SynapseRbacOperations |Synapse RBAC Operations |No |
-
-## Microsoft.Synapse/workspaces/bigDataPools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BigDataPoolAppEvents |Big Data Pool Applications Execution Metrics |Yes |
-|BigDataPoolAppsEnded |Big Data Pool Applications Ended |No |
-|BigDataPoolBlockManagerEvents |Big Data Pool Block Manager Events |Yes |
-|BigDataPoolDriverLogs |Big Data Pool Driver Logs |Yes |
-|BigDataPoolEnvironmentEvents |Big Data Pool Environment Events |Yes |
-|BigDataPoolExecutorEvents |Big Data Pool Executor Events |Yes |
-|BigDataPoolExecutorLogs |Big Data Pool Executor Logs |Yes |
-|BigDataPoolJobEvents |Big Data Pool Job Events |Yes |
-|BigDataPoolSqlExecutionEvents |Big Data Pool Sql Execution Events |Yes |
-|BigDataPoolStageEvents |Big Data Pool Stage Events |Yes |
-|BigDataPoolTaskEvents |Big Data Pool Task Events |Yes |
-
-## Microsoft.Synapse/workspaces/kustoPools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Command |Synapse Data Explorer Command |Yes |
-|FailedIngestion |Synapse Data Explorer Failed Ingestion |Yes |
-|IngestionBatching |Synapse Data Explorer Ingestion Batching |Yes |
-|Query |Synapse Data Explorer Query |Yes |
-|SucceededIngestion |Synapse Data Explorer Succeeded Ingestion |Yes |
-|TableDetails |Synapse Data Explorer Table Details |Yes |
-|TableUsageStatistics |Synapse Data Explorer Table Usage Statistics |Yes |
-
-## Microsoft.Synapse/workspaces/scopePools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ScopePoolScopeJobsEnded |Scope Pool Scope Jobs Ended |Yes |
-|ScopePoolScopeJobsStateChange |Scope Pool Scope Jobs State Change |Yes |
-
-## Microsoft.Synapse/workspaces/sqlPools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DmsWorkers |Dms Workers |No |
-|ExecRequests |Exec Requests |No |
-|RequestSteps |Request Steps |No |
-|SqlRequests |Sql Requests |No |
-|SQLSecurityAuditEvents |Sql Security Audit Event |No |
-|Waits |Waits |No |
-
-## Microsoft.TimeSeriesInsights/environments
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Ingress |Ingress |No |
-|Management |Management |No |
-
-## Microsoft.TimeSeriesInsights/environments/eventsources
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Ingress |Ingress |No |
-|Management |Management |No |
-
-## microsoft.videoindexer/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|IndexingLogs |Indexing Logs |Yes |
-
-## Microsoft.Web/hostingEnvironments
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppServiceEnvironmentPlatformLogs |App Service Environment Platform Logs |No |
-
-## Microsoft.Web/sites
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppServiceAntivirusScanAuditLogs |Report Antivirus Audit Logs |No |
-|AppServiceAppLogs |App Service Application Logs |No |
-|AppServiceAuditLogs |Access Audit Logs |No |
-|AppServiceConsoleLogs |App Service Console Logs |No |
-|AppServiceFileAuditLogs |Site Content Change Audit Logs |No |
-|AppServiceHTTPLogs |HTTP logs |No |
-|AppServiceIPSecAuditLogs |IPSecurity Audit logs |No |
-|AppServicePlatformLogs |App Service Platform logs |No |
-|FunctionAppLogs |Function Application Logs |No |
-|WorkflowRuntime |Workflow Runtime Logs |Yes |
-
-## Microsoft.Web/sites/slots
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppServiceAntivirusScanAuditLogs |Report Antivirus Audit Logs |No |
-|AppServiceAppLogs |App Service Application Logs |No |
-|AppServiceAuditLogs |Access Audit Logs |No |
-|AppServiceConsoleLogs |App Service Console Logs |No |
-|AppServiceFileAuditLogs |Site Content Change Audit Logs |No |
-|AppServiceHTTPLogs |HTTP logs |No |
-|AppServiceIPSecAuditLogs |IPSecurity Audit logs |No |
-|AppServicePlatformLogs |App Service Platform logs |No |
-|FunctionAppLogs |Function Application Logs |No |
-
-## microsoft.workloads/sapvirtualinstances
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ChangeDetection |Change Detection |Yes |
+
+## Microsoft.AAD/DomainServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AccountLogon |AccountLogon |No |
+|AccountManagement |AccountManagement |No |
+|DetailTracking |DetailTracking |No |
+|DirectoryServiceAccess |DirectoryServiceAccess |No |
+|DNSServerAuditsDynamicUpdates |DNSServerAuditsDynamicUpdates - Preview |Yes |
+|DNSServerAuditsGeneral |DNSServerAuditsGeneral - Preview |Yes |
+|LogonLogoff |LogonLogoff |No |
+|ObjectAccess |ObjectAccess |No |
+|PolicyChange |PolicyChange |No |
+|PrivilegeUse |PrivilegeUse |No |
+|SystemSecurity |SystemSecurity |No |
+
+## microsoft.aadiam/tenants
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Signin |Signin |Yes |
+
+## Microsoft.AgFoodPlatform/farmBeats
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationAuditLogs |Application Audit Logs |Yes |
+|FarmManagementLogs |Farm Management Logs |Yes |
+|FarmOperationLogs |Farm Operation Logs |Yes |
+|InsightLogs |Insight Logs |Yes |
+|JobProcessedLogs |Job Processed Logs |Yes |
+|ModelInferenceLogs |Model Inference Logs |Yes |
+|ProviderAuthLogs |Provider Auth Logs |Yes |
+|SatelliteLogs |Satellite Logs |Yes |
+|SensorManagementLogs |Sensor Management Logs |Yes |
+|WeatherLogs |Weather Logs |Yes |
+
+## Microsoft.AnalysisServices/servers
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Engine |Engine |No |
+|Service |Service |No |
+
+## Microsoft.ApiManagement/service
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GatewayLogs |Logs related to ApiManagement Gateway |No |
+|WebSocketConnectionLogs |Logs related to Websocket Connections |Yes |
+
+## Microsoft.App/managedEnvironments
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppEnvSpringAppConsoleLogs |Spring App console logs |Yes |
+|ContainerAppConsoleLogs |Container App console logs |Yes |
+|ContainerAppSystemLogs |Container App system logs |Yes |
+
+## Microsoft.AppConfiguration/configurationStores
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |Yes |
+|HttpRequest |HTTP Requests |Yes |
+
+## Microsoft.AppPlatform/Spring
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationConsole |Application Console |No |
+|BuildLogs |Build Logs |Yes |
+|ContainerEventLogs |Container Event Logs |Yes |
+|IngressLogs |Ingress Logs |Yes |
+|SystemLogs |System Logs |No |
+
+## Microsoft.Attestation/attestationProviders
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditEvent |AuditEvent message log category. |No |
+|NotProcessed |Requests which could not be processed. |Yes |
+|Operational |Operational message log category. |Yes |
+
+## Microsoft.Automation/automationAccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditEvent |AuditEvent |Yes |
+|DscNodeStatus |DscNodeStatus |No |
+|JobLogs |JobLogs |No |
+|JobStreams |JobStreams |No |
+
+## Microsoft.AutonomousDevelopmentPlatform/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |Yes |
+|Operational |Operational |Yes |
+|Request |Request |Yes |
+
+## Microsoft.AutonomousDevelopmentPlatform/workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |Yes |
+|Operational |Operational |Yes |
+|Request |Request |Yes |
+
+## microsoft.avs/privateClouds
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|vmwaresyslog |VMware Syslog |Yes |
+
+## Microsoft.AzureDataTransfer/connections/flows
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|OperationalLogs |Operational Logs |Yes |
+
+## microsoft.azuresphere/catalogs
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |Audit Logs |Yes |
+|DeviceEvents |Device Events |Yes |
+
+## Microsoft.Batch/batchaccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLog |Audit Logs |Yes |
+|ServiceLog |Service Logs |No |
+|ServiceLogs |Service Logs |Yes |
+
+## microsoft.botservice/botservices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BotRequest |Requests from the channels to the bot |Yes |
+
+## Microsoft.Cache/redis
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ConnectedClientList |Connected client list |Yes |
+
+## Microsoft.Cache/redisEnterprise/databases
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ConnectionEvents |Connection events (New Connection/Authentication/Disconnection) |Yes |
+
+## Microsoft.Cdn/cdnwebapplicationfirewallpolicies
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|WebApplicationFirewallLogs |Web Application Firewall Logs |No |
+
+## Microsoft.Cdn/profiles
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AzureCdnAccessLog |Azure Cdn Access Log |No |
+|FrontDoorAccessLog |FrontDoor Access Log |Yes |
+|FrontDoorHealthProbeLog |FrontDoor Health Probe Log |Yes |
+|FrontDoorWebApplicationFirewallLog |FrontDoor WebApplicationFirewall Log |Yes |
+
+## Microsoft.Cdn/profiles/endpoints
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CoreAnalytics |Gets the metrics of the endpoint, e.g., bandwidth, egress, etc. |No |
+
+## Microsoft.Chaos/experiments
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ExperimentOrchestration |Experiment Orchestration Events |Yes |
+
+## Microsoft.ClassicNetwork/networksecuritygroups
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Network Security Group Rule Flow Event |Network Security Group Rule Flow Event |No |
+
+## Microsoft.CodeSigning/codesigningaccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SignTransactions |Sign Transactions |Yes |
+
+## Microsoft.CognitiveServices/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit Logs |No |
+|RequestResponse |Request and Response Logs |No |
+|Trace |Trace Logs |No |
+
+## Microsoft.Communication/CommunicationServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuthOperational |Operational Authentication Logs |Yes |
+|CallAutomationMediaSummary |Call Automation Events Summary Logs |Yes |
+|CallAutomationOperational |Operational Call Automation Logs |Yes |
+|CallDiagnostics |Call Diagnostics Logs |Yes |
+|CallRecordingOperational |Operational Call Recording Logs |Yes |
+|CallRecordingSummary |Call Recording Summary Logs |Yes |
+|CallSummary |Call Summary Logs |Yes |
+|CallSurvey |Call Survey Logs |Yes |
+|ChatOperational |Operational Chat Logs |No |
+|EmailSendMailOperational |Email Service Send Mail Logs |Yes |
+|EmailStatusUpdateOperational |Email Service Delivery Status Update Logs |Yes |
+|EmailUserEngagementOperational |Email Service User Engagement Logs |Yes |
+|JobRouterOperational |Operational Job Router Logs |Yes |
+|NetworkTraversalDiagnostics |Network Traversal Relay Diagnostic Logs |Yes |
+|NetworkTraversalOperational |Operational Network Traversal Logs |Yes |
+|RoomsOperational |Operational Rooms Logs |Yes |
+|SMSOperational |Operational SMS Logs |No |
+|Usage |Usage Records |No |
+
+## Microsoft.Compute/virtualMachines
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SoftwareUpdateProfile |SoftwareUpdateProfile |Yes |
+|SoftwareUpdates |SoftwareUpdates |Yes |
+
+## Microsoft.ConfidentialLedger/ManagedCCF
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|applicationlogs |CCF Application Logs |Yes |
+
+## Microsoft.ConfidentialLedger/ManagedCCFs
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|applicationlogs |CCF Application Logs |Yes |
+
+## Microsoft.ConnectedCache/CacheNodes
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Events |Events |Yes |
+
+## Microsoft.ConnectedCache/ispCustomers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Events |Events |Yes |
+
+## Microsoft.ConnectedVehicle/platformAccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |MCVP Audit Logs |Yes |
+|Logs |MCVP Logs |Yes |
+
+## Microsoft.ContainerRegistry/registries
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ContainerRegistryLoginEvents |Login Events |No |
+|ContainerRegistryRepositoryEvents |RepositoryEvent logs |No |
+
+## Microsoft.ContainerService/fleets
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
+|guard |guard |Yes |
+|kube-apiserver |Kubernetes API Server |Yes |
+|kube-audit |Kubernetes Audit |Yes |
+|kube-audit-admin |Kubernetes Audit Admin Logs |Yes |
+|kube-controller-manager |Kubernetes Controller Manager |Yes |
+|kube-scheduler |Kubernetes Scheduler |Yes |
+
+## Microsoft.ContainerService/managedClusters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
+|cluster-autoscaler |Kubernetes Cluster Autoscaler |No |
+|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
+|csi-azurefile-controller |csi-azurefile-controller |Yes |
+|csi-snapshot-controller |csi-snapshot-controller |Yes |
+|guard |guard |No |
+|kube-apiserver |Kubernetes API Server |No |
+|kube-audit |Kubernetes Audit |No |
+|kube-audit-admin |Kubernetes Audit Admin Logs |No |
+|kube-controller-manager |Kubernetes Controller Manager |No |
+|kube-scheduler |Kubernetes Scheduler |No |
+
+## Microsoft.CustomProviders/resourceproviders
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |Audit logs for MiniRP calls |No |
+
+## Microsoft.D365CustomerInsights/instances
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit events |No |
+|Operational |Operational events |No |
+
+## Microsoft.Dashboard/grafana
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GrafanaLoginEvents |Grafana Login Events |Yes |
+
+## Microsoft.Databricks/workspaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|accounts |Databricks Accounts |No |
+|capsule8Dataplane |Databricks Capsule8 Container Security Scanning Reports |Yes |
+|clamAVScan |Databricks Clam AV Scan |Yes |
+|clusterLibraries |Databricks Cluster Libraries |Yes |
+|clusters |Databricks Clusters |No |
+|databrickssql |Databricks DatabricksSQL |Yes |
+|dbfs |Databricks File System |No |
+|deltaPipelines |Databricks Delta Pipelines |Yes |
+|featureStore |Databricks Feature Store |Yes |
+|genie |Databricks Genie |Yes |
+|gitCredentials |Databricks Git Credentials |Yes |
+|globalInitScripts |Databricks Global Init Scripts |Yes |
+|iamRole |Databricks IAM Role |Yes |
+|instancePools |Instance Pools |No |
+|jobs |Databricks Jobs |No |
+|mlflowAcledArtifact |Databricks MLFlow Acled Artifact |Yes |
+|mlflowExperiment |Databricks MLFlow Experiment |Yes |
+|modelRegistry |Databricks Model Registry |Yes |
+|notebook |Databricks Notebook |No |
+|partnerHub |Databricks Partner Hub |Yes |
+|RemoteHistoryService |Databricks Remote History Service |Yes |
+|repos |Databricks Repos |Yes |
+|secrets |Databricks Secrets |No |
+|serverlessRealTimeInference |Databricks Serverless Real-Time Inference |Yes |
+|sqlanalytics |Databricks SQL Analytics |Yes |
+|sqlPermissions |Databricks SQLPermissions |No |
+|ssh |Databricks SSH |No |
+|unityCatalog |Databricks Unity Catalog |Yes |
+|webTerminal |Databricks Web Terminal |Yes |
+|workspace |Databricks Workspace |No |
+
+## Microsoft.DataCollaboration/workspaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CollaborationAudit |Collaboration Audit |Yes |
+|Computations |Computations |Yes |
+|DataAssets |Data Assets |No |
+|Pipelines |Pipelines |No |
+|Proposals |Proposals |No |
+|Scripts |Scripts |No |
+
+## Microsoft.DataFactory/factories
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ActivityRuns |Pipeline activity runs log |No |
+|AirflowDagProcessingLogs |Airflow dag processing logs |Yes |
+|AirflowSchedulerLogs |Airflow scheduler logs |Yes |
+|AirflowTaskLogs |Airflow task execution logs |Yes |
+|AirflowWebLogs |Airflow web logs |Yes |
+|AirflowWorkerLogs |Airflow worker logs |Yes |
+|PipelineRuns |Pipeline runs log |No |
+|SandboxActivityRuns |Sandbox Activity runs log |Yes |
+|SandboxPipelineRuns |Sandbox Pipeline runs log |Yes |
+|SSISIntegrationRuntimeLogs |SSIS integration runtime logs |No |
+|SSISPackageEventMessageContext |SSIS package event message context |No |
+|SSISPackageEventMessages |SSIS package event messages |No |
+|SSISPackageExecutableStatistics |SSIS package executable statistics |No |
+|SSISPackageExecutionComponentPhases |SSIS package execution component phases |No |
+|SSISPackageExecutionDataStatistics |SSIS package execution data statistics |No |
+|TriggerRuns |Trigger runs log |No |
+
+## Microsoft.DataLakeAnalytics/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit Logs |No |
+|ConfigurationChange |Configuration Change Event Logs |Yes |
+|JobEvent |Job Event Logs |Yes |
+|JobInfo |Job Info Logs |Yes |
+|Requests |Request Logs |No |
+
+## Microsoft.DataLakeStore/accounts
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit Logs |No |
+|Requests |Request Logs |No |
+
+## Microsoft.DataProtection/BackupVaults
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AddonAzureBackupJobs |Addon Azure Backup Job Data |Yes |
+|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |Yes |
+|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |Yes |
+|CoreAzureBackup |Core Azure Backup Data |Yes |
+
+## Microsoft.DataShare/accounts
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ReceivedShareSnapshots |Received Share Snapshots |No |
+|SentShareSnapshots |Sent Share Snapshots |No |
+|Shares |Shares |No |
+|ShareSubscriptions |Share Subscriptions |No |
+
+## Microsoft.DBforMariaDB/servers
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|MySqlAuditLogs |MariaDB Audit Logs |No |
+|MySqlSlowLogs |MariaDB Server Logs |No |
+
+## Microsoft.DBforMySQL/flexibleServers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|MySqlAuditLogs |MySQL Audit Logs |No |
+|MySqlSlowLogs |MySQL Slow Logs |No |
+
+## Microsoft.DBforMySQL/servers
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|MySqlAuditLogs |MySQL Audit Logs |No |
+|MySqlSlowLogs |MySQL Server Logs |No |
+
+## Microsoft.DBforPostgreSQL/flexibleServers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLFlexDatabaseXacts |PostgreSQL remaining transactions |Yes |
+|PostgreSQLFlexQueryStoreRuntime |PostgreSQL Query Store Runtime |Yes |
+|PostgreSQLFlexQueryStoreWaitStats |PostgreSQL Query Store Wait Statistics |Yes |
+|PostgreSQLFlexSessions |PostgreSQL Sessions data |Yes |
+|PostgreSQLFlexTableStats |PostgreSQL Autovacuum and schema statistics |Yes |
+|PostgreSQLLogs |PostgreSQL Server Logs |No |
+
+## Microsoft.DBForPostgreSQL/serverGroupsv2
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLLogs |PostgreSQL Server Logs |Yes |
+
+## Microsoft.DBforPostgreSQL/servers
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLLogs |PostgreSQL Server Logs |No |
+|QueryStoreRuntimeStatistics |PostgreSQL Query Store Runtime Statistics |No |
+|QueryStoreWaitStatistics |PostgreSQL Query Store Wait Statistics |No |
+
+## Microsoft.DBforPostgreSQL/serversv2
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLLogs |PostgreSQL Server Logs |No |
+
+## Microsoft.DesktopVirtualization/applicationgroups
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Checkpoint |Checkpoint |No |
+|Error |Error |No |
+|Management |Management |No |
+
+## Microsoft.DesktopVirtualization/hostpools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AgentHealthStatus |AgentHealthStatus |No |
+|AutoscaleEvaluationPooled |Do not use - internal testing |Yes |
+|Checkpoint |Checkpoint |No |
+|Connection |Connection |No |
+|ConnectionGraphicsData |Connection Graphics Data Logs Preview |Yes |
+|Error |Error |No |
+|HostRegistration |HostRegistration |No |
+|Management |Management |No |
+|NetworkData |Network Data Logs |Yes |
+|SessionHostManagement |Session Host Management Activity Logs |Yes |
+
+## Microsoft.DesktopVirtualization/scalingplans
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Autoscale |Autoscale logs |Yes |
+
+## Microsoft.DesktopVirtualization/workspaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Checkpoint |Checkpoint |No |
+|Error |Error |No |
+|Feed |Feed |No |
+|Management |Management |No |
+
+## Microsoft.DevCenter/devcenters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataplaneAuditEvent |Dataplane audit logs |Yes |
+|ResourceLifecycle |Resource lifecycle |Yes |
+
+## Microsoft.Devices/IotHubs
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|C2DCommands |C2D Commands |No |
+|C2DTwinOperations |C2D Twin Operations |No |
+|Configurations |Configurations |No |
+|Connections |Connections |No |
+|D2CTwinOperations |D2CTwinOperations |No |
+|DeviceIdentityOperations |Device Identity Operations |No |
+|DeviceStreams |Device Streams (Preview) |No |
+|DeviceTelemetry |Device Telemetry |No |
+|DirectMethods |Direct Methods |No |
+|DistributedTracing |Distributed Tracing (Preview) |No |
+|FileUploadOperations |File Upload Operations |No |
+|JobsOperations |Jobs Operations |No |
+|Routes |Routes |No |
+|TwinQueries |Twin Queries |No |
+
+## Microsoft.Devices/provisioningServices
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DeviceOperations |Device Operations |No |
+|ServiceOperations |Service Operations |No |
+
+## Microsoft.DigitalTwins/digitalTwinsInstances
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataHistoryOperation |DataHistoryOperation |Yes |
+|DigitalTwinsOperation |DigitalTwinsOperation |No |
+|EventRoutesOperation |EventRoutesOperation |No |
+|ModelsOperation |ModelsOperation |No |
+|QueryOperation |QueryOperation |No |
+|ResourceProviderOperation |ResourceProviderOperation |Yes |
+
+## Microsoft.DocumentDB/cassandraClusters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CassandraAudit |CassandraAudit |Yes |
+|CassandraLogs |CassandraLogs |Yes |
+
+## Microsoft.DocumentDB/DatabaseAccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CassandraRequests |CassandraRequests |No |
+|ControlPlaneRequests |ControlPlaneRequests |No |
+|DataPlaneRequests |DataPlaneRequests |No |
+|GremlinRequests |GremlinRequests |No |
+|MongoRequests |MongoRequests |No |
+|PartitionKeyRUConsumption |PartitionKeyRUConsumption |No |
+|PartitionKeyStatistics |PartitionKeyStatistics |No |
+|QueryRuntimeStatistics |QueryRuntimeStatistics |No |
+|TableApiRequests |TableApiRequests |Yes |
+
+## Microsoft.EventGrid/domains
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataPlaneRequests |Data plane operations logs |Yes |
+|DeliveryFailures |Delivery Failure Logs |No |
+|PublishFailures |Publish Failure Logs |No |
+
+## Microsoft.EventGrid/partnerNamespaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataPlaneRequests |Data plane operations logs |Yes |
+|PublishFailures |Publish Failure Logs |No |
+
+## Microsoft.EventGrid/partnerTopics
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DeliveryFailures |Delivery Failure Logs |No |
+
+## Microsoft.EventGrid/systemTopics
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DeliveryFailures |Delivery Failure Logs |No |
+
+## Microsoft.EventGrid/topics
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataPlaneRequests |Data plane operations logs |Yes |
+|DeliveryFailures |Delivery Failure Logs |No |
+|PublishFailures |Publish Failure Logs |No |
+
+## Microsoft.EventHub/Namespaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationMetricsLogs |Application Metrics Logs |Yes |
+|ArchiveLogs |Archive Logs |No |
+|AutoScaleLogs |Auto Scale Logs |No |
+|CustomerManagedKeyUserLogs |Customer Managed Key Logs |No |
+|EventHubVNetConnectionEvent |VNet/IP Filtering Connection Logs |No |
+|KafkaCoordinatorLogs |Kafka Coordinator Logs |No |
+|KafkaUserErrorLogs |Kafka User Error Logs |No |
+|OperationalLogs |Operational Logs |No |
+|RuntimeAuditLogs |Runtime Audit Logs |Yes |
+
+## Microsoft.HealthcareApis/services
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |Audit logs |No |
+|DiagnosticLogs |Diagnostic logs |Yes |
+
+## Microsoft.HealthcareApis/workspaces/analyticsconnectors
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DiagnosticLogs |Diagnostic logs for Analytics Connector |Yes |
+
+## Microsoft.HealthcareApis/workspaces/dicomservices
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |Audit logs |Yes |
+|DiagnosticLogs |Diagnostic logs |Yes |
+
+## Microsoft.HealthcareApis/workspaces/fhirservices
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |FHIR Audit logs |Yes |
+
+## Microsoft.HealthcareApis/workspaces/iotconnectors
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DiagnosticLogs |Diagnostic logs |Yes |
+
+## microsoft.insights/autoscalesettings
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AutoscaleEvaluations |Autoscale Evaluations |No |
+|AutoscaleScaleActions |Autoscale Scale Actions |No |
+
+## microsoft.insights/components
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppAvailabilityResults |Availability results |No |
+|AppBrowserTimings |Browser timings |No |
+|AppDependencies |Dependencies |No |
+|AppEvents |Events |No |
+|AppExceptions |Exceptions |No |
+|AppMetrics |Metrics |No |
+|AppPageViews |Page views |No |
+|AppPerformanceCounters |Performance counters |No |
+|AppRequests |Requests |No |
+|AppSystemEvents |System events |No |
+|AppTraces |Traces |No |
+
+## Microsoft.Insights/datacollectionrules
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|LogErrors |Log Errors |Yes |
+|LogTroubleshooting |Log Troubleshooting |Yes |
+
+## microsoft.keyvault/managedhsms
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditEvent |Audit Event |No |
+
+## Microsoft.KeyVault/vaults
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditEvent |Audit Logs |No |
+|AzurePolicyEvaluationDetails |Azure Policy Evaluation Details |Yes |
+
+## Microsoft.Kusto/clusters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Command |Command |No |
+|FailedIngestion |Failed ingestion |No |
+|IngestionBatching |Ingestion batching |No |
+|Journal |Journal |Yes |
+|Query |Query |No |
+|SucceededIngestion |Succeeded ingestion |No |
+|TableDetails |Table details |No |
+|TableUsageStatistics |Table usage statistics |No |
+
+## microsoft.loadtestservice/loadtests
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|OperationLogs |Azure Load Testing Operations |Yes |
+
+## Microsoft.Logic/IntegrationAccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|IntegrationAccountTrackingEvents |Integration Account track events |No |
+
+## Microsoft.Logic/Workflows
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|WorkflowRuntime |Workflow runtime diagnostic events |No |
+
+## Microsoft.MachineLearningServices/registries
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|RegistryAssetReadEvent |Registry Asset Read Event |Yes |
+|RegistryAssetWriteEvent |Registry Asset Write Event |Yes |
+
+## Microsoft.MachineLearningServices/workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AmlComputeClusterEvent |AmlComputeClusterEvent |No |
+|AmlComputeClusterNodeEvent |AmlComputeClusterNodeEvent |Yes |
+|AmlComputeCpuGpuUtilization |AmlComputeCpuGpuUtilization |No |
+|AmlComputeJobEvent |AmlComputeJobEvent |No |
+|AmlRunStatusChangedEvent |AmlRunStatusChangedEvent |No |
+|ComputeInstanceEvent |ComputeInstanceEvent |Yes |
+|DataLabelChangeEvent |DataLabelChangeEvent |Yes |
+|DataLabelReadEvent |DataLabelReadEvent |Yes |
+|DataSetChangeEvent |DataSetChangeEvent |Yes |
+|DataSetReadEvent |DataSetReadEvent |Yes |
+|DataStoreChangeEvent |DataStoreChangeEvent |Yes |
+|DataStoreReadEvent |DataStoreReadEvent |Yes |
+|DeploymentEventACI |DeploymentEventACI |Yes |
+|DeploymentEventAKS |DeploymentEventAKS |Yes |
+|DeploymentReadEvent |DeploymentReadEvent |Yes |
+|EnvironmentChangeEvent |EnvironmentChangeEvent |Yes |
+|EnvironmentReadEvent |EnvironmentReadEvent |Yes |
+|InferencingOperationACI |InferencingOperationACI |Yes |
+|InferencingOperationAKS |InferencingOperationAKS |Yes |
+|ModelsActionEvent |ModelsActionEvent |Yes |
+|ModelsChangeEvent |ModelsChangeEvent |Yes |
+|ModelsReadEvent |ModelsReadEvent |Yes |
+|PipelineChangeEvent |PipelineChangeEvent |Yes |
+|PipelineReadEvent |PipelineReadEvent |Yes |
+|RunEvent |RunEvent |Yes |
+|RunReadEvent |RunReadEvent |Yes |
+
+## Microsoft.MachineLearningServices/workspaces/onlineEndpoints
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AmlOnlineEndpointConsoleLog |AmlOnlineEndpointConsoleLog |Yes |
+|AmlOnlineEndpointEventLog |AmlOnlineEndpointEventLog (preview) |Yes |
+|AmlOnlineEndpointTrafficLog |AmlOnlineEndpointTrafficLog (preview) |Yes |
+
+## Microsoft.ManagedNetworkFabric/networkDevices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BfdStateUpdates |Bi-Directional Forwarding Detection Updates |Yes |
+|ComponentStateUpdates |Component State Updates |Yes |
+|InterfaceStateUpdates |Interface State Updates |Yes |
+|InterfaceVxlanUpdates |Interface Vxlan Updates |Yes |
+|NetworkInstanceBgpNeighborUpdates |BGP Neighbor Updates |Yes |
+|NetworkInstanceUpdates |Network Instance Updates |Yes |
+|SystemStateMessageUpdates |System State Message Updates |Yes |
+
+## Microsoft.Media/mediaservices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|KeyDeliveryRequests |Key Delivery Requests |No |
+|MediaAccount |Media Account Health Status |Yes |
+
+## Microsoft.Media/mediaservices/liveEvents
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|LiveEventState |Live Event Operations |Yes |
+
+## Microsoft.Media/mediaservices/streamingEndpoints
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StreamingEndpointRequests |Streaming Endpoint Requests |Yes |
+
+## Microsoft.Media/videoanalyzers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit Logs |Yes |
+|Diagnostics |Diagnostics Logs |Yes |
+|Operational |Operational Logs |Yes |
+
+## Microsoft.NetApp/netAppAccounts/capacityPools
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Autoscale |Capacity Pool Autoscaled |Yes |
+
+## Microsoft.NetApp/netAppAccounts/capacityPools/volumes
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ANFFileAccess |ANF File Access |Yes |
+
+## Microsoft.Network/applicationgateways
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationGatewayAccessLog |Application Gateway Access Log |No |
+|ApplicationGatewayFirewallLog |Application Gateway Firewall Log |No |
+|ApplicationGatewayPerformanceLog |Application Gateway Performance Log |No |
+
+## Microsoft.Network/azureFirewalls
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AZFWApplicationRule |Azure Firewall Application Rule |Yes |
+|AZFWApplicationRuleAggregation |Azure Firewall Application Rule Aggregation (Policy Analytics) |Yes |
+|AZFWDnsQuery |Azure Firewall DNS query |Yes |
+|AZFWFatFlow |Azure Firewall Fat Flow Log |Yes |
+|AZFWFlowTrace |Azure Firewall Flow Trace Log |Yes |
+|AZFWFqdnResolveFailure |Azure Firewall FQDN Resolution Failure |Yes |
+|AZFWIdpsSignature |Azure Firewall IDPS Signature |Yes |
+|AZFWNatRule |Azure Firewall Nat Rule |Yes |
+|AZFWNatRuleAggregation |Azure Firewall Nat Rule Aggregation (Policy Analytics) |Yes |
+|AZFWNetworkRule |Azure Firewall Network Rule |Yes |
+|AZFWNetworkRuleAggregation |Azure Firewall Network Rule Aggregation (Policy Analytics) |Yes |
+|AZFWThreatIntel |Azure Firewall Threat Intelligence |Yes |
+|AzureFirewallApplicationRule |Azure Firewall Application Rule (Legacy Azure Diagnostics) |No |
+|AzureFirewallDnsProxy |Azure Firewall DNS Proxy (Legacy Azure Diagnostics) |No |
+|AzureFirewallNetworkRule |Azure Firewall Network Rule (Legacy Azure Diagnostics) |No |
+
+## microsoft.network/bastionHosts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BastionAuditLogs |Bastion Audit Logs |No |
+
+## Microsoft.Network/expressRouteCircuits
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PeeringRouteLog |Peering Route Table Logs |No |
+
+## Microsoft.Network/frontdoors
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|FrontdoorAccessLog |Frontdoor Access Log |No |
+|FrontdoorWebApplicationFirewallLog |Frontdoor Web Application Firewall Log |No |
+
+## Microsoft.Network/loadBalancers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|LoadBalancerAlertEvent |Load Balancer Alert Events |No |
+|LoadBalancerProbeHealthStatus |Load Balancer Probe Health Status |No |
+
+## Microsoft.Network/networkManagers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NetworkGroupMembershipChange |Network Group Membership Change |Yes |
+
+## Microsoft.Network/networksecuritygroups
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NetworkSecurityGroupEvent |Network Security Group Event |No |
+|NetworkSecurityGroupFlowEvent |Network Security Group Rule Flow Event |No |
+|NetworkSecurityGroupRuleCounter |Network Security Group Rule Counter |No |
+
+## Microsoft.Network/networkSecurityPerimeters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NspCrossPerimeterInboundAllowed |Cross perimeter inbound access allowed by perimeter link. |Yes |
+|NspCrossPerimeterOutboundAllowed |Cross perimeter outbound access allowed by perimeter link. |Yes |
+|NspIntraPerimeterInboundAllowed |Inbound access allowed within same perimeter. |Yes |
+|NspIntraPerimeterOutboundAllowed |Outbound access attempted to the same perimeter. NOTE: To be deprecated in the future. |Yes |
+|NspOutboundAttempt |Outbound access attempted to the same or a different perimeter. |Yes |
+|NspPrivateInboundAllowed |Private endpoint traffic allowed. |Yes |
+|NspPublicInboundPerimeterRulesAllowed |Public inbound access allowed by NSP access rules. |Yes |
+|NspPublicInboundPerimeterRulesDenied |Public inbound access denied by NSP access rules. |Yes |
+|NspPublicInboundResourceRulesAllowed |Public inbound access allowed by PaaS resource rules. |Yes |
+|NspPublicInboundResourceRulesDenied |Public inbound access denied by PaaS resource rules. |Yes |
+|NspPublicOutboundPerimeterRulesAllowed |Public outbound access allowed by NSP access rules. |Yes |
+|NspPublicOutboundPerimeterRulesDenied |Public outbound access denied by NSP access rules. |Yes |
+|NspPublicOutboundResourceRulesAllowed |Public outbound access allowed by PaaS resource rules. |Yes |
+|NspPublicOutboundResourceRulesDenied |Public outbound access denied by PaaS resource rules. |Yes |
+
+## Microsoft.Network/networkSecurityPerimeters/profiles
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NSPInboundAccessAllowed |NSP Inbound Access Allowed. |Yes |
+|NSPInboundAccessDenied |NSP Inbound Access Denied. |Yes |
+|NSPOutboundAccessAllowed |NSP Outbound Access Allowed. |Yes |
+|NSPOutboundAccessDenied |NSP Outbound Access Denied. |Yes |
+
+## microsoft.network/p2svpngateways
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
+|IKEDiagnosticLog |IKE Diagnostic Logs |No |
+|P2SDiagnosticLog |P2S Diagnostic Logs |No |
+
+## Microsoft.Network/publicIPAddresses
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DDoSMitigationFlowLogs |Flow logs of DDoS mitigation decisions |No |
+|DDoSMitigationReports |Reports of DDoS mitigations |No |
+|DDoSProtectionNotifications |DDoS protection notifications |No |
+
+## Microsoft.Network/trafficManagerProfiles
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ProbeHealthStatusEvents |Traffic Manager Probe Health Results Event |No |
+
+## microsoft.network/virtualnetworkgateways
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
+|IKEDiagnosticLog |IKE Diagnostic Logs |No |
+|P2SDiagnosticLog |P2S Diagnostic Logs |No |
+|RouteDiagnosticLog |Route Diagnostic Logs |No |
+|TunnelDiagnosticLog |Tunnel Diagnostic Logs |No |
+
+## Microsoft.Network/virtualNetworks
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|VMProtectionAlerts |VM protection alerts |No |
+
+## microsoft.network/vpngateways
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
+|IKEDiagnosticLog |IKE Diagnostic Logs |No |
+|RouteDiagnosticLog |Route Diagnostic Logs |No |
+|TunnelDiagnosticLog |Tunnel Diagnostic Logs |No |
+
+## Microsoft.NetworkCloud/bareMetalMachines
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SecurityCritical |Security - Critical |Yes |
+|SecurityDebug |Security - Debug |Yes |
+|SecurityError |Security - Error |Yes |
+|SecurityInfo |Security - Info |Yes |
+|SecurityNotice |Security - Notice |Yes |
+|SecurityWarning |Security - Warning |Yes |
+|SyslogCritical |System - Critical |Yes |
+|SyslogDebug |System - Debug |Yes |
+|SyslogError |System - Error |Yes |
+|SyslogInfo |System - Info |Yes |
+|SyslogNotice |System - Notice |Yes |
+|SyslogWarning |System - Warning |Yes |
+
+## Microsoft.NetworkCloud/clusters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CustomerContainerLogs |Kubernetes Logs |Yes |
+|VMOrchestrationLogs |VM Orchestration Logs |Yes |
+
+## Microsoft.NetworkCloud/storageAppliances
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageApplianceAlert |Storage Appliance alerts |Yes |
+|StorageApplianceAudit |Storage Appliance logs |Yes |
+
+## Microsoft.NetworkFunction/azureTrafficCollectors
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ExpressRouteCircuitIpfix |Express Route Circuit IPFIX Flow Records |Yes |
+
+## Microsoft.NotificationHubs/namespaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|OperationalLogs |Operational Logs |No |
+
+## MICROSOFT.OPENENERGYPLATFORM/ENERGYSERVICES
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AirFlowTaskLogs |Air Flow Task Logs |Yes |
+|AuditEvent |Audit Event |Yes |
+|CRSCatalogLogs |CRS Catalog Service Logs |Yes |
+|CRSConversionLogs |CRS Conversion Service Logs |Yes |
+|DatasetLogs |Dataset Service Logs |Yes |
+|ElasticOperatorLogs |Elastic Operator Logs |Yes |
+|ElasticsearchLogs |Elasticsearch Logs |Yes |
+|EntitlementsLogs |Entitlements Service Logs |Yes |
+|FileLogs |File Service Logs |Yes |
+|IndexerLogs |Indexer Service Logs |Yes |
+|LegalLogs |Legal Service Logs |Yes |
+|NotificationLogs |Notification Service Logs |Yes |
+|PartitionLogs |Partition Service Logs |Yes |
+|PDSBackendLogs |PDSBackend Service Logs |Yes |
+|PDSFrontendLogs |PDSFrontend Service Logs |Yes |
+|RegisterLogs |Register Service Logs |Yes |
+|SchemaLogs |Schema Service Logs |Yes |
+|SearchLogs |Search Service Logs |Yes |
+|StorageLogs |Storage Service Logs |Yes |
+|UnitLogs |Unit Service Logs |Yes |
+|WellDeliveryLogs |WellDelivery Service Logs |Yes |
+|WorkflowLogs |Workflow Service Logs |Yes |
+
+## Microsoft.OpenLogisticsPlatform/Workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SupplyChainEntityOperations |Supply Chain Entity Operations |Yes |
+|SupplyChainEventLogs |Supply Chain Event logs |Yes |
+
+## Microsoft.OperationalInsights/workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |No |
+
+## Microsoft.PlayFab/titles
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |AuditLogs |Yes |
+
+## Microsoft.PowerBI/tenants
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Engine |Engine |No |
+
+## Microsoft.PowerBI/tenants/workspaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Engine |Engine |No |
+
+## Microsoft.PowerBIDedicated/capacities
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Engine |Engine |No |
+
+## microsoft.purview/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataSensitivityLogEvent |DataSensitivity |Yes |
+|ScanStatusLogEvent |ScanStatus |No |
+|Security |PurviewAccountAuditEvents |Yes |
+
+## Microsoft.RecoveryServices/Vaults
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AddonAzureBackupAlerts |Addon Azure Backup Alert Data |No |
+|AddonAzureBackupJobs |Addon Azure Backup Job Data |No |
+|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |No |
+|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |No |
+|AddonAzureBackupStorage |Addon Azure Backup Storage Data |No |
+|ASRReplicatedItems |Azure Site Recovery Replicated Items Details |Yes |
+|AzureBackupReport |Azure Backup Reporting Data |No |
+|AzureSiteRecoveryEvents |Azure Site Recovery Events |No |
+|AzureSiteRecoveryJobs |Azure Site Recovery Jobs |No |
+|AzureSiteRecoveryProtectedDiskDataChurn |Azure Site Recovery Protected Disk Data Churn |No |
+|AzureSiteRecoveryRecoveryPoints |Azure Site Recovery Recovery Points |No |
+|AzureSiteRecoveryReplicatedItems |Azure Site Recovery Replicated Items |No |
+|AzureSiteRecoveryReplicationDataUploadRate |Azure Site Recovery Replication Data Upload Rate |No |
+|AzureSiteRecoveryReplicationStats |Azure Site Recovery Replication Stats |No |
+|CoreAzureBackup |Core Azure Backup Data |No |
+
+## Microsoft.Relay/namespaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|HybridConnectionsEvent |HybridConnections Events |No |
+|HybridConnectionsLogs |HybridConnectionsLogs |Yes |
+
+## Microsoft.Search/searchServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|OperationLogs |Operation Logs |No |
+
+## Microsoft.Security/antiMalwareSettings
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ScanResults |AntimalwareScanResults |Yes |
+
+## Microsoft.Security/defenderForStorageSettings
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ScanResults |AntimalwareScanResults |Yes |
+
+## microsoft.securityinsights/settings
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Analytics |Analytics |Yes |
+|Automation |Automation |Yes |
+|DataConnectors |Data Collection - Connectors |Yes |
+
+## Microsoft.ServiceBus/Namespaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationMetricsLogs |Application Metrics Logs (Unused) |Yes |
+|OperationalLogs |Operational Logs |No |
+|RuntimeAuditLogs |Runtime Audit Logs |Yes |
+|VNetAndIPFilteringLogs |VNet/IP Filtering Connection Logs |No |
+
+## Microsoft.SignalRService/SignalR
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AllLogs |Azure SignalR Service Logs. |No |
+
+## Microsoft.SignalRService/SignalR/replicas
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AllLogs |Azure SignalR Service Logs. |Yes |
+
+## Microsoft.SignalRService/WebPubSub
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ConnectivityLogs |Connectivity logs for Azure Web PubSub Service. |Yes |
+|HttpRequestLogs |Http Request logs for Azure Web PubSub Service. |Yes |
+|MessagingLogs |Messaging logs for Azure Web PubSub Service. |Yes |
+
+## Microsoft.SignalRService/WebPubSub/replicas
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ConnectivityLogs |Connectivity logs for Azure Web PubSub Service. |Yes |
+|HttpRequestLogs |Http Request logs for Azure Web PubSub Service. |Yes |
+|MessagingLogs |Messaging logs for Azure Web PubSub Service. |Yes |
+
+## microsoft.singularity/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Activity |Activity Logs |Yes |
+|Execution |Execution Logs |Yes |
+
+## Microsoft.Sql/managedInstances
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DevOpsOperationsAudit |Devops operations Audit Logs |No |
+|ResourceUsageStats |Resource Usage Statistics |No |
+|SQLSecurityAuditEvents |SQL Security Audit Event |No |
+
+## Microsoft.Sql/managedInstances/databases
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Errors |Errors |No |
+|QueryStoreRuntimeStatistics |Query Store Runtime Statistics |No |
+|QueryStoreWaitStatistics |Query Store Wait Statistics |No |
+|SQLInsights |SQL Insights |No |
+
+## Microsoft.Sql/servers/databases
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AutomaticTuning |Automatic tuning |No |
+|Blocks |Blocks |No |
+|DatabaseWaitStatistics |Database Wait Statistics |No |
+|Deadlocks |Deadlocks |No |
+|DevOpsOperationsAudit |Devops operations Audit Logs |No |
+|DmsWorkers |Dms Workers |No |
+|Errors |Errors |No |
+|ExecRequests |Exec Requests |No |
+|QueryStoreRuntimeStatistics |Query Store Runtime Statistics |No |
+|QueryStoreWaitStatistics |Query Store Wait Statistics |No |
+|RequestSteps |Request Steps |No |
+|SQLInsights |SQL Insights |No |
+|SqlRequests |Sql Requests |No |
+|SQLSecurityAuditEvents |SQL Security Audit Event |No |
+|Timeouts |Timeouts |No |
+|Waits |Waits |No |
+
+## Microsoft.Storage/storageAccounts/blobServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageDelete |StorageDelete |Yes |
+|StorageRead |StorageRead |Yes |
+|StorageWrite |StorageWrite |Yes |
+
+## Microsoft.Storage/storageAccounts/fileServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageDelete |StorageDelete |Yes |
+|StorageRead |StorageRead |Yes |
+|StorageWrite |StorageWrite |Yes |
+
+## Microsoft.Storage/storageAccounts/queueServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageDelete |StorageDelete |Yes |
+|StorageRead |StorageRead |Yes |
+|StorageWrite |StorageWrite |Yes |
+
+## Microsoft.Storage/storageAccounts/tableServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageDelete |StorageDelete |Yes |
+|StorageRead |StorageRead |Yes |
+|StorageWrite |StorageWrite |Yes |
+
+## Microsoft.StorageCache/caches
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AscCacheOperationEvent |HPC Cache operation event |Yes |
+|AscUpgradeEvent |HPC Cache upgrade event |Yes |
+|AscWarningEvent |HPC Cache warning |Yes |
+
+## Microsoft.StorageMover/storageMovers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CopyLogsFailed |Copy logs - Failed |Yes |
+|JobRunLogs |Job run logs |Yes |
+
+## Microsoft.StreamAnalytics/streamingjobs
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Authoring |Authoring |No |
+|Execution |Execution |No |
+
+## Microsoft.Synapse/workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BuiltinSqlReqsEnded |Built-in Sql Pool Requests Ended |No |
+|GatewayApiRequests |Synapse Gateway Api Requests |No |
+|IntegrationActivityRuns |Integration Activity Runs |Yes |
+|IntegrationPipelineRuns |Integration Pipeline Runs |Yes |
+|IntegrationTriggerRuns |Integration Trigger Runs |Yes |
+|SQLSecurityAuditEvents |SQL Security Audit Event |No |
+|SynapseLinkEvent |Synapse Link Event |Yes |
+|SynapseRbacOperations |Synapse RBAC Operations |No |
+
+## Microsoft.Synapse/workspaces/bigDataPools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BigDataPoolAppEvents |Big Data Pool Applications Execution Metrics |Yes |
+|BigDataPoolAppsEnded |Big Data Pool Applications Ended |No |
+|BigDataPoolBlockManagerEvents |Big Data Pool Block Manager Events |Yes |
+|BigDataPoolDriverLogs |Big Data Pool Driver Logs |Yes |
+|BigDataPoolEnvironmentEvents |Big Data Pool Environment Events |Yes |
+|BigDataPoolExecutorEvents |Big Data Pool Executor Events |Yes |
+|BigDataPoolExecutorLogs |Big Data Pool Executor Logs |Yes |
+|BigDataPoolJobEvents |Big Data Pool Job Events |Yes |
+|BigDataPoolSqlExecutionEvents |Big Data Pool Sql Execution Events |Yes |
+|BigDataPoolStageEvents |Big Data Pool Stage Events |Yes |
+|BigDataPoolTaskEvents |Big Data Pool Task Events |Yes |
+
+## Microsoft.Synapse/workspaces/kustoPools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Command |Synapse Data Explorer Command |Yes |
+|FailedIngestion |Synapse Data Explorer Failed Ingestion |Yes |
+|IngestionBatching |Synapse Data Explorer Ingestion Batching |Yes |
+|Query |Synapse Data Explorer Query |Yes |
+|SucceededIngestion |Synapse Data Explorer Succeeded Ingestion |Yes |
+|TableDetails |Synapse Data Explorer Table Details |Yes |
+|TableUsageStatistics |Synapse Data Explorer Table Usage Statistics |Yes |
+
+## Microsoft.Synapse/workspaces/scopePools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ScopePoolScopeJobsEnded |Scope Pool Scope Jobs Ended |Yes |
+|ScopePoolScopeJobsStateChange |Scope Pool Scope Jobs State Change |Yes |
+
+## Microsoft.Synapse/workspaces/sqlPools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DmsWorkers |Dms Workers |No |
+|ExecRequests |Exec Requests |No |
+|RequestSteps |Request Steps |No |
+|SqlRequests |Sql Requests |No |
+|SQLSecurityAuditEvents |Sql Security Audit Event |No |
+|Waits |Waits |No |
+
+## Microsoft.TimeSeriesInsights/environments
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Ingress |Ingress |No |
+|Management |Management |No |
+
+## Microsoft.TimeSeriesInsights/environments/eventsources
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Ingress |Ingress |No |
+|Management |Management |No |
+
+## microsoft.videoindexer/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |Yes |
+|IndexingLogs |Indexing Logs |Yes |
+
+## Microsoft.Web/hostingEnvironments
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppServiceEnvironmentPlatformLogs |App Service Environment Platform Logs |No |
+
+## Microsoft.Web/sites
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppServiceAntivirusScanAuditLogs |Report Antivirus Audit Logs |No |
+|AppServiceAppLogs |App Service Application Logs |No |
+|AppServiceAuditLogs |Access Audit Logs |No |
+|AppServiceConsoleLogs |App Service Console Logs |No |
+|AppServiceFileAuditLogs |Site Content Change Audit Logs |No |
+|AppServiceHTTPLogs |HTTP logs |No |
+|AppServiceIPSecAuditLogs |IPSecurity Audit logs |No |
+|AppServicePlatformLogs |App Service Platform logs |No |
+|FunctionAppLogs |Function Application Logs |No |
+|WorkflowRuntime |Workflow Runtime Logs |Yes |
+
+## Microsoft.Web/sites/slots
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppServiceAntivirusScanAuditLogs |Report Antivirus Audit Logs |No |
+|AppServiceAppLogs |App Service Application Logs |No |
+|AppServiceAuditLogs |Access Audit Logs |No |
+|AppServiceConsoleLogs |App Service Console Logs |No |
+|AppServiceFileAuditLogs |Site Content Change Audit Logs |No |
+|AppServiceHTTPLogs |HTTP logs |No |
+|AppServiceIPSecAuditLogs |IPSecurity Audit logs |No |
+|AppServicePlatformLogs |App Service Platform logs |No |
+|FunctionAppLogs |Function Application Logs |No |
+
+## microsoft.workloads/sapvirtualinstances
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ChangeDetection |Change Detection |Yes |
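
Each category listed above is the value that goes in the `logs` array of a diagnostic setting for that resource type. As a minimal sketch of exporting one of these categories to a Log Analytics workspace (assuming the `azure-mgmt-monitor` and `azure-identity` packages; all resource IDs below are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

# Placeholder IDs; substitute your own subscription, resource, and workspace.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_URI = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.KeyVault/vaults/<vault-name>"
)
WORKSPACE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Send the Key Vault "AuditEvent" category (see Microsoft.KeyVault/vaults above)
# to the workspace; add more dicts to "logs" for additional categories.
client.diagnostic_settings.create_or_update(
    resource_uri=RESOURCE_URI,
    name="send-audit-to-workspace",
    parameters={
        "workspace_id": WORKSPACE_ID,
        "logs": [{"category": "AuditEvent", "enabled": True}],
    },
)
```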
## Next Steps
If you think something is missing, you can open a GitHub comment at the bottom of this article.
* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
-<!--Gen Date: Sun May 07 2023 12:43:57 GMT+0300 (Israel Daylight Time)-->
+<!--Gen Date: Sun May 28 2023 17:43:46 GMT+0300 (Israel Daylight Time)-->
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Configure a table for Basic logs if:
| Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) |
| Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) |
| Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) |
- | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) |
+ | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) |
| Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
| Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API](logs-ingestion-api-overview.md). |
| Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) |
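
A table is switched between the Analytics and Basic plans per table on the workspace. A minimal sketch of that call against the ARM tables endpoint (assuming the `azure-identity` and `requests` packages; the IDs and the `2022-10-01` API version are placeholders to check against the current REST reference):

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder IDs and API version; substitute your own values.
TABLE_URL = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights"
    "/workspaces/<workspace-name>/tables/ContainerAppConsoleLogs"
    "?api-version=2022-10-01"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.patch(
    TABLE_URL,
    headers={"Authorization": f"Bearer {token.token}"},
    json={"properties": {"plan": "Basic"}},  # use "Analytics" to switch back
)
resp.raise_for_status()
```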
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
Azure file shares backup is available in all regions, **except** for Germany Cen
| Maximum number of restores per day | 10 |
| Maximum number of individual files or folders per restore, if using ILR (item-level recovery) | 99 |
| Maximum recommended restore size per restore for large file shares | 15 TiB |
+| Maximum duration of a restore job | 15 days |
## Retention limits
batch Automatic Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/automatic-certificate-rotation.md
Title: Enable automatic certificate rotation in a Batch pool description: You can create a Batch pool with a managed identity and a certificate that will automatically be renewed. Previously updated : 07/16/2021 Last updated : 05/24/2023 # Enable automatic certificate rotation in a Batch pool
Request Body
"imageReference": { "publisher": "canonical", "offer": "ubuntuserver",
- "sku": "18.04-lts",
+ "sku": "20.04-lts",
"version": "latest" },
- "nodeAgentSkuId": "batch.node.ubuntu 18.04",
+ "nodeAgentSkuId": "batch.node.ubuntu 20.04",
"extensions": [ { "name": "KVExtensions",
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Autoscale compute nodes in an Azure Batch pool description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 04/12/2023 Last updated : 05/26/2023
new_pool = batch.models.PoolAddParameter(
image_reference=batchmodels.ImageReference( publisher="Canonical", offer="UbuntuServer",
- sku="18.04-LTS",
+ sku="20.04-LTS",
version="latest" ),
- node_agent_sku_id="batch.node.ubuntu 18.04"),
+ node_agent_sku_id="batch.node.ubuntu 20.04"),
vm_size="STANDARD_D1_v2", target_dedicated_nodes=0, target_low_priority_nodes=0
batch Batch Cli Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-cli-templates.md
Title: Run jobs end-to-end using templates description: With only CLI commands, you can create a pool, upload input data, create jobs and associated tasks, and download the resulting output data. Previously updated : 12/20/2021 Last updated : 05/26/2023 # Use Azure Batch CLI templates and file transfer
The following is an example of a template that creates a pool of Linux VMs with
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "18.04-LTS",
+ "sku": "20.04-LTS",
"version": "latest" },
- "nodeAgentSKUId": "batch.node.ubuntu 18.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 20.04"
}, "vmSize": "STANDARD_D3_V2", "targetDedicatedNodes": "[parameters('nodeCount')]",
batch Batch Parallel Node Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-parallel-node-tasks.md
Title: Run tasks concurrently to maximize usage of Batch compute nodes description: Learn how to increase efficiency and lower costs by using fewer compute nodes and parallelism in an Azure Batch pool. Previously updated : 04/10/2023 Last updated : 05/24/2023 ms.devlang: csharp
For more information on adding pools by using the REST API, see [Add a pool to a
"imageReference": { "publisher": "canonical", "offer": "ubuntuserver",
- "sku": "18.04-lts"
+ "sku": "20.04-lts"
},
- "nodeAgentSKUId": "batch.node.ubuntu 18.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 20.04"
}, "targetDedicatedComputeNodes":2, "taskSlotsPerNode":4,
batch Batch Powershell Cmdlets Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-powershell-cmdlets-get-started.md
Title: Get started with PowerShell description: A quick introduction to the Azure PowerShell cmdlets you can use to manage Batch resources. Previously updated : 01/21/2021 Last updated : 05/24/2023
When using many of these cmdlets, in addition to passing a BatchContext object,
When creating or updating a Batch pool, you specify a [configuration](nodes-and-pools.md#configurations). Pools should generally be configured with Virtual Machine Configuration, which lets you either specify one of the supported Linux or Windows VM images listed in the [Azure Virtual Machines Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/compute?filters=virtual-machine-images&page=1), or provide a custom image that you have prepared. Cloud Services Configuration pools provide only Windows compute nodes and do not support all Batch features.
-When you run **New-AzBatchPool**, pass the operating system settings in a PSVirtualMachineConfiguration or PSCloudServiceConfiguration object. For example, the following snippet creates a Batch pool with size Standard_A1 compute nodes in the virtual machine configuration, imaged with Ubuntu Server 18.04-LTS. Here, the **VirtualMachineConfiguration** parameter specifies the *$configuration* variable as the PSVirtualMachineConfiguration object. The **BatchContext** parameter specifies a previously defined variable *$context* as the BatchAccountContext object.
+When you run **New-AzBatchPool**, pass the operating system settings in a PSVirtualMachineConfiguration or PSCloudServiceConfiguration object. For example, the following snippet creates a Batch pool with size Standard_A1 compute nodes in the virtual machine configuration, imaged with Ubuntu Server 20.04-LTS. Here, the **VirtualMachineConfiguration** parameter specifies the *$configuration* variable as the PSVirtualMachineConfiguration object. The **BatchContext** parameter specifies a previously defined variable *$context* as the BatchAccountContext object.
```powershell
-$imageRef = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("UbuntuServer","Canonical","18.04-LTS")
+$imageRef = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("UbuntuServer","Canonical","20.04-LTS")
-$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageRef, "batch.node.ubuntu 18.04")
+$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageRef, "batch.node.ubuntu 20.04")
New-AzBatchPool -Id "mypspool" -VirtualMachineSize "Standard_a1" -VirtualMachineConfiguration $configuration -AutoScaleFormula '$TargetDedicated=4;' -BatchContext $context ```
batch Create Pool Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-availability-zones.md
Title: Create a pool across availability zones description: Learn how to create a Batch pool with zonal policy to help protect against failures. Previously updated : 08/06/2021 Last updated : 05/25/2023 ms.devlang: csharp
Request body
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "18.04-lts"
+ "sku": "20.04-lts"
}, "nodePlacementConfiguration": { "policy": "Zonal" }
- "nodeAgentSKUId": "batch.node.ubuntu 18.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 20.04"
}, "resizeTimeout": "PT15M", "targetDedicatedNodes": 5,
batch Create Pool Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-extensions.md
Title: Use extensions with Batch pools description: Extensions are small applications that facilitate post-provisioning configuration and setup on Batch compute nodes. Previously updated : 11/03/2021 Last updated : 05/26/2023
# Use extensions with Batch pools
Request Body
"imageReference": { "publisher": "canonical", "offer": "ubuntuserver",
- "sku": "18.04-lts",
+ "sku": "20.04-lts",
"version": "latest" },
- "nodeAgentSkuId": "batch.node.ubuntu 18.04",
+ "nodeAgentSkuId": "batch.node.ubuntu 20.04",
"extensions": [ { "name": "secretext",
batch Create Pool Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-public-ip.md
Title: Create a Batch pool with specified public IP addresses description: Learn how to create an Azure Batch pool that uses your own static public IP addresses. Previously updated : 12/20/2021 Last updated : 05/26/2023
# Create an Azure Batch pool with specified public IP addresses
Request body:
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "18.04-LTS"
+ "sku": "20.04-LTS"
},
- "nodeAgentSKUId": "batch.node.ubuntu 18.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 20.04"
}, "networkConfiguration": { "subnetId": "/subscriptions/<subId>/resourceGroups/<rgId>/providers/Microsoft.Network/virtualNetworks/<vNetId>/subnets/<subnetId>",
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
Title: "Tutorial: Run a parallel workload using the Python API"
description: Learn how to process media files in parallel using ffmpeg in Azure Batch with the Batch Python client library. ms.devlang: python Previously updated : 04/19/2023 Last updated : 05/25/2023
input_files = [
### Create a pool of compute nodes
-Next, the sample creates a pool of compute nodes in the Batch account with a call to `create_pool`. This defined function uses the Batch [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 18.04 LTS image published in the Azure Marketplace. Batch supports a wide range of VM images in the Azure Marketplace, as well as custom VM images.
+Next, the sample creates a pool of compute nodes in the Batch account with a call to `create_pool`. This defined function uses the Batch [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 20.04 LTS image published in the Azure Marketplace. Batch supports a wide range of VM images in the Azure Marketplace, as well as custom VM images.
The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure doesn't have enough capacity. The sample by default creates a pool containing only five Spot nodes in size *Standard_A1_v2*.
new_pool = batch.models.PoolAddParameter(
image_reference=batchmodels.ImageReference( publisher="Canonical", offer="UbuntuServer",
- sku="18.04-LTS",
+ sku="20.04-LTS",
version="latest" ),
- node_agent_sku_id="batch.node.ubuntu 18.04"),
+ node_agent_sku_id="batch.node.ubuntu 20.04"),
vm_size=_POOL_VM_SIZE, target_dedicated_nodes=_DEDICATED_POOL_NODE_COUNT, target_low_priority_nodes=_LOW_PRIORITY_POOL_NODE_COUNT,
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Title: Identify vulnerabilities in Azure Container Registry with Microsoft Defen
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities. Previously updated : 05/14/2023 Last updated : 05/28/2023
The triggers for an image scan are:
- Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster. Once a scan is triggered, scan results will typically appear in the Defender for Cloud recommendations after a few minutes, but in some cases it may take up to an hour.
+
## Prerequisites

Before you can scan your ACR images:

- You must enable one of the following plans on your subscription:
- - [Defender CSPM](concept-cloud-security-posture-management.md). When you enable this plan, ensure you enable the **Container registries vulnerability assessments (preview)** extension.
- - [Defender for Containers](defender-for-containers-enable.md).
+ - [Defender CSPM](concept-cloud-security-posture-management.md). When you enable this plan, ensure you enable the **Container registries vulnerability assessments (preview)** extension.
+ - [Defender for Containers](defender-for-containers-enable.md).
- >[!NOTE]
- > This feature is charged per image. Learn more about the [pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/)
+ >[!NOTE]
+ > This feature is charged per image. Learn more about the [pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
To find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
For a list of the types of images and container registries supported by Microsof
:::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/container-registry-details.png" alt-text="Screenshot showing select specific image to see vulnerabilities." lightbox="media/defender-for-containers-vulnerability-assessment-azure/container-registry-details.png"::: -- The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings. 1. Select a specific image to see the vulnerabilities.
To create a rule:
:::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Screenshot showing the scope list."::: 1. To view or delete the rule, select the ellipsis menu ("...").
-## View vulnerabilities for images running on your AKS clusters
+## View vulnerabilities for images running on your AKS clusters
Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
To provide findings for the recommendation, Defender for Cloud collects the inve
:::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/view-running-containers-vulnerability.png" alt-text="Screenshot of recommendations showing your running containers with the vulnerabilities associated with the images used by each container." lightbox="media/defender-for-containers-vulnerability-assessment-azure/view-running-containers-vulnerability.png":::
-
## FAQ

### How does Defender for Containers scan an image?
Defender for Containers pulls the image from the registry and runs it in an isol
Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
+### How can I identify pull events performed by the scanner?
+
+To identify pull events performed by the scanner, follow these steps:
+
+1. Search for pull events with the UserAgent of *AzureContainerImageScanner*.
+1. Extract the identity associated with this event.
+1. Use the extracted identity to identify pull events from the scanner.
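
As an illustration of step 1, if you export the registry's resource logs as JSON lines, a short script can filter for the scanner's user agent. This is a sketch only; the `userAgent` and `identity` field names are assumptions about your export format, so adjust them to match your log schema:

```python
import json

SCANNER_USER_AGENT = "AzureContainerImageScanner"

with open("registry-logs.jsonl") as log_file:
    for line in log_file:
        event = json.loads(line)
        # Keep only pull events whose user agent matches the scanner's.
        if SCANNER_USER_AGENT in event.get("userAgent", ""):
            # The identity on these events is what you'd use in step 2.
            print(event.get("identity"), event.get("operationName"))
```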
+
### What is the difference between Not Applicable Resources and Unverified Resources?

-- **Not applicable resources** are resources for which the recommendation can't give a definitive answer. The not applicable tab includes reasons for each resource that could not be assessed.
+- **Not applicable resources** are resources for which the recommendation can't give a definitive answer. The not applicable tab includes reasons for each resource that could not be assessed.
- **Unverified resources** are resources that have been scheduled to be assessed, but have not been assessed yet.

### Does Microsoft share any information with Qualys in order to perform image scans?
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
# Overview of Defender for DevOps
+> [!IMPORTANT]
+> Microsoft Defender for DevOps regularly makes changes and updates that require Defender for DevOps customers who have onboarded their GitHub environments in Defender for Cloud to grant permissions as part of the application deployed in their GitHub organization. These permissions are necessary to ensure all of the security features of Defender for DevOps operate normally and without issues.
+>
+> See the recent release note for [instructions on how to add these additional permissions](release-notes.md#defender-for-devops-github-application-update).
+ Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across multicloud environments including Azure, AWS, GCP, and on-premises resources. Defender for DevOps, a service available in Defender for Cloud, empowers security teams to manage DevOps security across multi-pipeline environments. Defender for DevOps uses a central console to empower security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, such as GitHub and Azure DevOps. Findings from Defender for DevOps can then be correlated with other contextual cloud security insights to prioritize remediation in code. Key capabilities in Defender for DevOps include:
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
By default, the limit is set to 5,000GB per month per storage account. Once this
Microsoft Defender for Storage enables you to secure your data at scale with granular controls. You can apply consistent security policies across all your storage accounts within a subscription or customize them for specific accounts to suit your business needs. You can also control your costs by choosing the level of protection you need for each resource. To get started, visit [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md).
-## Malware Scanning and hash reputation analysis
+## Understanding the differences between Malware Scanning and hash reputation analysis
-**Malware Scanning** is a paid add-on feature to Defender for Storage, currently available for Azure Blob Storage. It leverages MDAV (Microsoft Defender Antivirus) to do a full malware scan, with high efficacy. It is significantly more comprehensive than only file hash reputation analysis.
-
-The Activity Monitoring feature in Defender for Storage includes blob/file hash reputation analysis.
+Defender for Storage offers two capabilities to detect malicious content uploaded to storage accounts: **Malware Scanning** (paid add-on feature available only on the new plan) and **hash reputation analysis** (available in all plans).
-### Limitations of hash reputation analysis
+### Malware Scanning (paid add-on feature available only on the new plan)
-- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools don't scan the uploaded files; rather they analyze the telemetry generated from the Blobs Storage and Files services. Defender for Storage then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.
+**Malware Scanning** leverages Microsoft Defender Antivirus (MDAV) to scan blobs uploaded to Blob storage, providing a comprehensive analysis that includes deep file scans and hash reputation analysis. This feature provides an enhanced level of detection against potential threats.
-- **Hash reputation analysis isn't supported for all files protocols and operation types** - Some, but not all, of the telemetry logs contain the hash value of the related blob or file. In some cases, the telemetry doesn't contain a hash value. As a result, some operations can't be monitored for known malware uploads. Examples of such unsupported use cases include SMB file-shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
+### Hash reputation analysis (available in all plans)
-For blob storage, you can enable [Malware Scanning](defender-for-storage-malware-scan.md) to get fuller coverage and efficacy.
+**Hash reputation analysis** detects potential malware in Blob storage and Azure Files by comparing the hash values of newly uploaded blobs/files against the hashes of known malware identified by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). Not all file protocols and operation types are supported with this capability, leading to some operations not being monitored for potential malware uploads. Unsupported use cases include SMB file shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
+
+In summary, Malware Scanning, which is only available on the new plan for Blob storage, offers a more comprehensive approach to malware detection by analyzing the full content of files and incorporating hash reputation analysis in its scanning methodology.
## Common questions
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Malware Scanning doesn't block access or change permissions to the uploaded blob
1. **Scan throughput rate limit:** The malware scanning process operates in near real-time with a throughput capacity of 2 GB per minute for each storage account. If this limit is exceeded, the scanning speed will decrease, resulting in blobs being scanned later.
-1. **Blob scan limit:** The scanning can process a maximum of 2,000 files per minute. If this limit is exceeded, the scanning speed will decrease, resulting in blobs being scanned later.
+1. **Blob scan limit:** Malware Scanning can process up to 2,000 files per minute for each storage account. If the rate of file upload momentarily exceeds this threshold for a storage account, the system will attempt to scan the files in excess of the rate limit at a later time when the load is lower. If the rate of file upload consistently exceeds this threshold, some files will not be scanned.
1. **Blob size limit:** The maximum size limit for a blob to be scanned is 2 GB.
-1. **Request limit and exceeding limit procedure:** Azure Storage accounts have a maximum limit of 2,000 requests per minute. If this limit is exceeded, an automatic retry mechanism is initiated by Malware Scanning to manage the overflow of requests and ensure they are scanned for malware. This mechanism functions over 24 hours, evenly distributing the request traffic. However, if the volume of requests consistently surpasses this limit over an extended duration, some scans might not be performed.
-
### Blob uploads and index tag updates
-Upon uploading a blob to the storage account, the Malware Scanning will initiate an additional read operation and update the index tag. In most cases, these operations are an insignificant load for most applications.
+Upon uploading a blob to the storage account, Malware Scanning initiates an additional read operation and updates the index tag. In most cases, these operations don't generate significant load.
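
For example, once a scan completes you can read the verdict back from the blob's index tags with the `azure-storage-blob` package. This is a minimal sketch; the connection string, container, blob name, and tag key are placeholders or assumptions, so check a scanned blob in your environment for the exact tag key it writes:

```python
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    conn_str="<connection-string>",
    container_name="uploads",
    blob_name="invoice.pdf",
)

# Reading tags is the same kind of lightweight operation the scanner performs.
tags = blob.get_blob_tags()
print(tags.get("Malware Scanning scan result"))  # assumed tag key
```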
### Capping mechanism
defender-for-cloud Defender For Storage Threats Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-threats-alerts.md
Security alerts are triggered in the following scenarios:
Security alerts include details of the suspicious activity, relevant investigation steps, remediation actions, and security recommendations. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM/XDR tool. Learn more about [how to stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
-## Malware Scanning and Hash reputation analysis
+## Understanding the differences between Malware Scanning and hash reputation analysis
-Malware Scanning is a paid add-on feature to Defender for Storage, currently available for Azure Blob Storage. It leverages MDAV (Microsoft Defender Antivirus) to do a full malware scan, with high efficacy. It is significantly more comprehensive than only file hash reputation analysis.
+Defender for Storage offers two capabilities to detect malicious content uploaded to storage accounts: **Malware Scanning** (paid add-on feature available only on the new plan) and **hash reputation analysis** (available in all plans).
+### Malware Scanning (paid add-on feature available only on the new plan)
-- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools don't scan the uploaded files; rather they analyze the telemetry generated from the Blobs Storage and Files services. Defender for Storage then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.-- **Hash reputation analysis isn't supported for all files protocols and operation types** - Some, but not all, of the telemetry logs contain the hash value of the related blob or file. In some cases, the telemetry doesn't contain a hash value. As a result, some operations can't be monitored for known malware uploads. Examples of such unsupported use cases include SMB file-shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
+**Malware Scanning** leverages Microsoft Defender Antivirus (MDAV) to scan blobs uploaded to Blob storage, providing a comprehensive analysis that includes deep file scans and hash reputation analysis. This feature provides an enhanced level of detection against potential threats.
-For blob storage, you can [enable Malware Scanning](defender-for-storage-malware-scan.md) to get full coverage and efficacy.
+### Hash reputation analysis (available in all plans)
+
+**Hash reputation analysis** detects potential malware in Blob storage and Azure Files by comparing the hash values of newly uploaded blobs/files against the hashes of known malware identified by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). Not all file protocols and operation types are supported with this capability, leading to some operations not being monitored for potential malware uploads. Unsupported use cases include SMB file shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
+
+In summary, Malware Scanning, which is only available on the new plan for Blob storage, offers a more comprehensive approach to malware detection by analyzing the full content of files and incorporating hash reputation analysis in its scanning methodology.
## Next steps
defender-for-cloud Enable Pull Request Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-pull-request-annotations.md
With Microsoft Defender for Cloud, you can configure PR annotations in Azure Dev
**For Azure DevOps**: - An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+- [Have write access (owner/contributor) to the Azure subscription](https://learn.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-activate-role).
- [Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md). - [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md). - [Setup secret scanning in Azure DevOps](detect-exposed-secrets.md#setup-secret-scanning-in-azure-devops).
Once you've completed these steps, you can select the build pipeline you created
1. Select **Configure**.
- :::image type="content" source="media/tutorial-enable-pr-annotations/select-configure.png" alt-text="Screenshot that shows you where to select the configure button on the screen.":::
+ :::image type="content" source="media/tutorial-enable-pr-annotations/select-configure.png" alt-text="Screenshot that shows you how to configure PR annotations within the portal.":::
1. Toggle Pull request annotations to **On**.
Once you've completed these steps, you can select the build pipeline you created
1. (Optional) Select a category from the drop-down menu. > [!NOTE]
- > Only secret scan results are currently supported.
+ > Only secret scan results and Infrastructure-as-Code misconfigurations for ARM/Bicep templates are currently supported.
1. (Optional) Select a severity level from the drop-down menu.
- > [!NOTE]
- > Only high-level severity findings are currently supported.
- 1. Select **Save**.
-All annotations on your main branch will be displayed from now on based on your configurations with the relevant line of code.
+All annotations on your pull requests will be displayed from now on based on your configurations.
### Resolve security issues in Azure DevOps
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
The protections include:
- **Threat intelligence**. Defender for Endpoint generates alerts when it identifies attacker tools, techniques, and procedures. It uses data generated by Microsoft threat hunters and security teams, augmented by intelligence provided by partners.
-When you integrate Defender for Endpoint with Defender for Cloud, you'll gain access to the benefits from the following extra capabilities:
+When you integrate Defender for Endpoint with Defender for Cloud, you gain access to the benefits from the following extra capabilities:
- **Automated onboarding**. Defender for Cloud automatically enables the Defender for Endpoint sensor on all supported machines connected to Defender for Cloud.
Before you can enable the Microsoft Defender for Endpoint integration with Defen
- Ensure the machine is connected to Azure and the internet as required:
- - **Azure virtual machines (Windows or Linux)** - Configure the network settings described in configure device proxy and internet connectivity settings: [Windows](/microsoft-365/security/defender-endpoint/configure-proxy-internet) or [Linux](/microsoft-365/security/defender-endpoint/linux-static-proxy-configuration).
+ - **Azure virtual machines (Windows or Linux)** - Configure the network settings described in configure device proxy and internet connectivity settings: [Windows](/microsoft-365/security/defender-endpoint/configure-proxy-internet) or [Linux](/microsoft-365/security/defender-endpoint/linux-static-proxy-configuration).
- - **On-premises machines** - Connect your target machines to Azure Arc as explained in [Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
+ - **On-premises machines** - Connect your target machines to Azure Arc as explained in [Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
- Enable **Microsoft Defender for Servers**. See [Quickstart: Enable Defender for Cloud's enhanced security features](enable-enhanced-security.md).
You'll deploy Defender for Endpoint to your Windows machines in one of two ways
If you've already enabled the integration with **Defender for Endpoint**, you have complete control over when and whether to deploy the MDE unified solution to your **Windows** machines.
-To deploy the MDE unified solution, you'll need to use the [REST API call](#enable-the-mde-unified-solution-at-scale) or the Azure portal:
+To deploy the MDE unified solution, you need to use the [REST API call](#enable-the-mde-unified-solution-at-scale) or the Azure portal:
1. From Defender for Cloud's menu, select **Environment settings** and select the subscription with the Windows machines that you want to receive Defender for Endpoint.
To deploy the MDE unified solution, you'll need to use the [REST API call](#enab
1. Select **Fix** to see the components that aren't enabled. - :::image type="content" source="./media/integration-defender-for-endpoint/fix-defender-for-endpoint.png" alt-text="Screenshot of Fix button that enables Microsoft Defender for Endpoint support."::: 1. To enable the Unified solution for Windows Server 2012 R2 and 2016 machines, select **Enable**.
If you've already enabled the integration with **Defender for Endpoint for Windo
1. Select **Fix** to see the components that aren't enabled. - :::image type="content" source="./media/integration-defender-for-endpoint/fix-defender-for-endpoint.png" alt-text="Screenshot of Fix button that enables Microsoft Defender for Endpoint support."::: 1. To enable deployment to Linux machines, select **Enable**.
If you've already enabled the integration with **Defender for Endpoint for Windo
> [!NOTE] > The next time you return to this page of the Azure portal, the **Enable for Linux machines** button won't be shown. To disable the integration for Linux, you'll need to disable it for Windows too by clearing the checkbox for **Allow Microsoft Defender for Endpoint to access my data**, and selecting **Save**. - 1. To verify installation of Defender for Endpoint on a Linux machine, run the following shell command on your machines: `mdatp health`
For endpoints running Windows:
```powershell
powershell.exe -NoExit -ExecutionPolicy Bypass -WindowStyle Hidden (New-Object System.Net.WebClient).DownloadFile('http://127.0.0.1/1.exe', 'C:\\test-MDATP-test\\invoice.exe'); Start-Process 'C:\\test-MDATP-test\\invoice.exe'
```
+
:::image type="content" source="./media/integration-defender-for-endpoint/generate-edr-alert.png" alt-text="A command prompt window with the command to generate a test alert.":::

If the command is successful, you'll see a new alert on the workload protection dashboard and the Microsoft Defender for Endpoint portal. This alert might take a few minutes to appear.
Defender for Cloud automatically deploys the extension to machines running:
- Linux. > [!IMPORTANT]
-> If you delete the MDE.Windows/MDE.Linux extension, it will not remove Microsoft Defender for Endpoint. to 'offboard', see [Offboard Windows servers.](/microsoft-365/security/defender-endpoint/configure-server-endpoints).
+> If you delete the MDE.Windows/MDE.Linux extension, it will not remove Microsoft Defender for Endpoint. To offboard the machine, see [Offboard Windows servers](/microsoft-365/security/defender-endpoint/configure-server-endpoints#offboard-windows-servers).
### I enabled the solution but the `MDE.Windows`/`MDE.Linux` extension isn't showing on my machine
Licenses for Defender for Endpoint for servers are included with **Microsoft Def
### Do I need to buy a separate anti-malware solution to protect my machines? No. With MDE integration in Defender for Servers, you'll also get malware protection on your machines.+ - On Windows Server 2012 R2 with MDE unified solution integration enabled, Defender for Servers will deploy [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows) in *active mode*. - On newer Windows Server operating systems, Microsoft Defender Antivirus is part of the operating system and will be enabled in *active mode*. - On Linux, Defender for Servers will deploy MDE including the anti-malware component, and set the component in *passive mode*.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud
description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/23/2023 Last updated : 05/28/2023
# What's new in Microsoft Defender for Cloud?
Updates in May include:
- [Download a CSV report of your cloud security explorer query results (Preview)](#download-a-csv-report-of-your-cloud-security-explorer-query-results-preview) - [Release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM](#release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-cspm) - [Renaming container recommendations powered by Qualys](#renaming-container-recommendations-powered-by-qualys)
+- [Defender for DevOps GitHub Application update](#defender-for-devops-github-application-update)
### New alert in Defender for Key Vault
-Defender for Key Vault has the following new alert:
| Alert (alert type) | Description | MITRE tactics | Severity | |||:-:||
Vulnerability assessment (VA) solutions are essential to safeguard machines from
Microsoft Defender Vulnerability Management (MDVM) is now enabled as the default, built-in solution for all subscriptions protected by Defender for Servers that don't already have a VA solution selected.
-If a subscription has a VA solution enabled on any of its VMs, no changes will be made and MDVM will not be enabled by default on the remaining VMs in that subscription. You can choose to [enable a VA solution](deploy-vulnerability-assessment-defender-vulnerability-management.md) on the remaining VMs on your subscriptions.
+If a subscription has a VA solution enabled on any of its VMs, no changes are made and MDVM won't be enabled by default on the remaining VMs in that subscription. You can choose to [enable a VA solution](deploy-vulnerability-assessment-defender-vulnerability-management.md) on the remaining VMs on your subscriptions.
Learn how to [Find vulnerabilities and collect software inventory with agentless scanning (Preview)](enable-vulnerability-assessment-agentless.md).
Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentle
Learn more about [Microsoft Defender Vulnerability Management (MDVM)](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management).
+### Release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM
+
+We're announcing the release of Vulnerability Assessment for Linux images in Azure container registries powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM. This release includes daily scanning of images. Findings used in the Security Explorer and attack paths rely on MDVM Vulnerability Assessment instead of the Qualys scanner.
+
+The existing recommendation "Container registry images should have vulnerability findings resolved" is replaced by a new recommendation powered by MDVM:
+
+|Recommendation | Description | Assessment Key|
+|--|--|--|
+| Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | dbd0cb49-b563-45e7-9724-889e799fa648 <br> is replaced by c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
+
+Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentless-containers.md).
+
+Learn more about [Microsoft Defender Vulnerability Management (MDVM)](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management).
++
### Renaming container recommendations powered by Qualys

The current container recommendations in Defender for Containers will be renamed as follows:
The current container recommendations in Defender for Containers will be renamed
| Container registry images should have vulnerability findings resolved (powered by Qualys) | Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | dbd0cb49-b563-45e7-9724-889e799fa648 | | Running container images should have vulnerability findings resolved (powered by Qualys) | Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | 41503391-efa5-47ee-9282-4eff6131462c |
+### Defender for DevOps GitHub Application update
+
+Microsoft Defender for DevOps regularly makes changes and updates that require Defender for DevOps customers who have onboarded their GitHub environments in Defender for Cloud to grant permissions as part of the application deployed in their GitHub organization. These permissions are necessary to ensure all of the security features of Defender for DevOps operate normally and without issues.
+
+We suggest updating the permissions as soon as possible to ensure continued access to all available features of Defender for DevOps.
+
+Permissions can be granted in two different ways:
+
+- In your organization, select **GitHub Apps**. Locate your organization, and select **Review request**.
+
+- You'll get an automated email from GitHub Support. In the email, select **Review permission request** to accept or reject this change.
+
+After you follow either of these options, you're taken to the review screen. Select **Accept new permissions** to approve the request.
+
+If you require any assistance updating permissions, you can [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+You can also learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
## April 2023

Updates in April include:
defender-for-cloud Working With Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/working-with-log-analytics-agent.md
When you select a data collection tier in Microsoft Defender for Cloud, the secu
The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
-You maybe charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+You may be charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
### Information for Microsoft Sentinel users
Security events collection within the context of a single workspace can be confi
The **Common** and **Minimal** event sets were designed to address typical scenarios based on customer and industry standards for the unfiltered frequency of each event and their usage. -- **Minimal** - This set is intended to cover only events that might indicate a successful breach and important events with low volume. Most of the data volume of this set is successful user logon (event ID 4625), failed user logon events (event ID 4624), and process creation events (event ID 4688). Sign out events are important for auditing only and have relatively high volume, so they aren't included in this event set.
+- **Minimal** - This set is intended to cover only events that might indicate a successful breach and important events with low volume. Most of the data volume of this set is successful user logon (event ID 4624), failed user logon events (event ID 4625), and process creation events (event ID 4688). Sign out events are important for auditing only and have relatively high volume, so they aren't included in this event set.
- **Common** - This set is intended to provide a full user audit trail, including events with low volume. For example, this set contains both user logon events (event ID 4624) and user logoff events (event ID 4634). We include auditing actions like security group changes, key domain controller Kerberos operations, and other events that are recommended by industry organizations. Here's a complete breakdown of the Security and App Locker event IDs for each set:
Here's a complete breakdown of the Security and App Locker event IDs for each se
| | 6273,6278,6416,6423,6424,8001,8002,8003,8004,8005,8006,8007,8222,26401,30004 | > [!NOTE]
+>
> - If you are using Group Policy Object (GPO), it is recommended that you enable audit policies Process Creation Event 4688 and the *CommandLine* field inside event 4688. For more information about Process Creation Event 4688, see Defender for Cloud's [FAQ](./faq-data-collection-agents.yml#what-happens-when-data-collection-is-enabled-). For more information about these audit policies, see [Audit Policy Recommendations](/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations).
-> - To enable data collection for [Adaptive application controls](adaptive-application-controls.md), Defender for Cloud configures a local AppLocker policy in Audit mode to allow all applications. This will cause AppLocker to generate events which are then collected and leveraged by Defender for Cloud. It is important to note that this policy will not be configured on any machines on which there is already a configured AppLocker policy.
+> - To enable data collection for [Adaptive application controls](adaptive-application-controls.md), Defender for Cloud configures a local AppLocker policy in Audit mode to allow all applications. This will cause AppLocker to generate events which are then collected and leveraged by Defender for Cloud. It is important to note that this policy will not be configured on any machines on which there is already a configured AppLocker policy.
> - To collect Windows Filtering Platform [Event ID 5156](https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=5156), you need to enable [Audit Filtering Platform Connection](/windows/security/threat-protection/auditing/audit-filtering-platform-connection) (Auditpol /set /subcategory:"Filtering Platform Connection" /Success:Enable) >
To turn off monitoring components:
- For Defender plans that have monitoring settings, go to the settings of the Defender plan, turn off the extension, and select **Save**. > [!NOTE]
+>
> - Disabling extensions does not remove the extensions from the affected workloads. > - For information on removing the OMS extension, see [How do I remove OMS extensions installed by Defender for Cloud](./faq-data-collection-agents.yml#how-do-i-remove-oms-extensions-installed-by-defender-for-cloud-).
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Title: Configure import settings in the FHIR service - Azure Health Data Services description: This article describes how to configure import settings in the FHIR service.-+ Last updated 06/06/2022-+
# Configure bulk-import settings
Select **Enabled from selected virtual networks and IP addresses**. Under the Fi
| West Europe | 20.61.98.66 | | West US 2 | 40.64.135.77 |
-> [!NOTE]
-> The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure the ACR firewall](./convert-data.md#step-6-optional-configure-the-azure-container-registry-firewall-for-secure-access).
- #### Option 2.2 : Access storage account provisioned in same Azure region as FHIR service The configuration process for IP addresses in the same region is just like above except a specific IP address range in Classless Inter-Domain Routing (CIDR) format is used instead (i.e., 100.64.0.0/10). The reason why the IP address range (100.64.0.0 ΓÇô 100.127.255.255) must be specified is because an IP address for the FHIR service will be allocated each time an `$import` request is made.
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
In a Standard logic app workflow that starts with the Request trigger (but not a
* An authorization policy must include at least the **Issuer** claim, which has a value that starts with either `https://sts.windows.net/` or `https://login.microsoftonline.com/` (OAuth V2) as the Azure AD issuer ID.
- For example, suppose that your logic app has an authorization policy that requires two claim types, **Audience** and **Issuer**. This sample [payload section](../active-directory/develop/access-tokens.md#payload-claims) for a decoded access token includes both claim types where `aud` is the **Audience** value and `iss` is the **Issuer** value:
+ For example, suppose that your logic app has an authorization policy that requires two claim types, **Audience** and **Issuer**. This sample [payload section](../active-directory/develop/access-token-claims-reference.md#payload-claims) for a decoded access token includes both claim types where `aud` is the **Audience** value and `iss` is the **Issuer** value:
```json {
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
Title: Connect to SAP
description: Connect to an SAP server from a workflow in Azure Logic Apps. ms.suite: integration-- Previously updated : 04/03/2023 Last updated : 05/23/2023 tags: connectors
# Connect to SAP from workflows in Azure Logic Apps
-This how-to guide shows how to access your SAP server from a workflow in Azure Logic Apps using the [SAP connector](/connectors/sap/).
+This multipart how-to guide shows how to access your SAP server from a workflow in Azure Logic Apps using the SAP connector. You can use the SAP connector's operations to create automated workflows that run when triggered by events in your SAP server or in other systems and run actions to manage resources on your SAP server.
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* The logic app workflow from where you want to access your SAP server.
-
- * If you're using a deprecated version of the SAP connector, you have to [migrate to the current connector](#migrate-to-current-connector) before you can connect to your SAP server.
-
- * If you're running your logic app workflow in multi-tenant Azure, review the [multi-tenant prerequisites](#multi-tenant-azure-prerequisites).
-
- * If you're running your logic app workflow in a Premium-level [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), review the [ISE prerequisites](#ise-prerequisites).
-
-* The [SAP Application server](https://wiki.scn.sap.com/wiki/display/ABAP/ABAP+Application+Server) or [SAP Message server](https://help.sap.com/saphelp_nw70/helpdata/en/40/c235c15ab7468bb31599cc759179ef/frameset.htm) that you want to access from Azure Logic Apps.
-
- For information about the SAP servers that support this connector, review [SAP compatibility](#sap-compatibility).
-
-* Set up your SAP server and user account to allow using RFC.
-
- For more information, which includes the supported user account types and the minimum required authorization for each action type (RFC, BAPI, IDOC), review the following SAP note: [460089 - Minimum authorization profiles for external RFC programs](https://launchpad.support.sap.com/#/notes/460089).
-
-* Your SAP user account needs access to the `RFC_METADATA` function group and the respective function modules for the following operations:
-
- | Operations | Access to function modules |
- ||-|
- | RFC actions | `RFC_GROUP_SEARCH` and `DD_LANGU_TO_ISOLA` |
- | BAPI actions | `BAPI_TRANSACTION_COMMIT`, `BAPI_TRANSACTION_ROLLBACK`, `RPY_BOR_TREE_INIT`, `SWO_QUERY_METHODS`, and `SWO_QUERY_API_METHODS` |
- | IDOC actions | `IDOCTYPES_LIST_WITH_MESSAGES`, `IDOCTYPES_FOR_MESTYPE_READ`, `INBOUND_IDOCS_FOR_TID`, `OUTBOUND_IDOCS_FOR_TID`, `GET_STATUS_FROM_IDOCNR`, and `IDOC_RECORD_READ` |
- | **Read Table** action | Either `RFC BBP_RFC_READ_TABLE` or `RFC_READ_TABLE` |
- | Grant strict minimum access to SAP server for your SAP connection | `RFC_METADATA_GET` and `RFC_METADATA_GET_TIMESTAMP` |
-
-* To use the **When a message is received from SAP** trigger, complete the following tasks:
-
- * Set up your SAP gateway security permissions or Access Control List (ACL). In the **Gateway Monitor** (T-Code SMGW) dialog box, which shows the **secinfo** and **reginfo** files, open the **Goto** menu, and select **Expert Functions** > **External Security** > **Maintenance of ACL Files**.
-
- The following permission setting is required:
-
- `P TP=LOGICAPP HOST=<on-premises-gateway-server-IP-address> ACCESS=*`
-
- This line has the following format:
-
- `P TP=<trading-partner-identifier-(program-name)-or-*-for-all-partners> HOST=<comma-separated-list-with-external-host-IP-or-network-names-that-can-register-the-program> ACCESS=<*-for-all-permissions-or-a-comma-separated-list-of-permissions>`
-
- If you don't configure the SAP gateway security permissions, you might receive the following error:
+Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps, but this connector is currently in preview and subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference).
- `Registration of tp Microsoft.PowerBI.EnterpriseGateway from host <host-name> not allowed`
-
- For more information, review [SAP Note 1850230 - GW: "Registration of tp &lt;program ID&gt; not allowed"](https://userapps.support.sap.com/sap/support/knowledge/en/1850230).
-
- * Set up your SAP gateway security logging to help find Access Control List (ACL) issues. For more information, review the [SAP help topic for setting up gateway logging](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.31.25/en-US/48b2a710ca1c3079e10000000a42189b.html).
-
- * In the **Configuration of RFC Connections** (T-Code SM59) dialog box, create an RFC connection with the **TCP/IP** type. Make sure that the **Activation Type** is set to **Registered Server Program**. Set the RFC connection's **Communication Type with Target System** value to **Unicode**.
-
- * If you use this SAP trigger with the **IDOC Format** parameter set to **FlatFile** along with the [Flat File Decode action](logic-apps-enterprise-integration-flatfile.md), you have to use the `early_terminate_optional_fields` property in your flat file schema by setting the value to `true`.
-
- This requirement is necessary because the flat file IDoc data record that's sent by SAP on the tRFC call `IDOC_INBOUND_ASYNCHRONOUS` isn't padded to the full SDATA field length. Azure Logic Apps provides the flat file IDoc original data without padding as received from SAP. Also, when you combine this SAP trigger with the Flat File Decode action, the schema that's provided to the action must match.
-
- > [!NOTE]
- >
- > This SAP trigger uses the same URI location to both renew and unsubscribe from a webhook subscription. The renewal
- > operation uses the HTTP `PATCH` method, while the unsubscribe operation uses the HTTP `DELETE` method. This behavior
- > might make a renewal operation appear as an unsubscribe operation in your trigger's history, but the operation is
- > still a renewal because the trigger uses `PATCH` as the HTTP method, not `DELETE`.
-
-* The message content to send to your SAP server, such as a sample IDoc file. This content must be in XML format and include the namespace of the [SAP action](#actions) you want to use. You can [send IDocs with a flat file schema by wrapping them in an XML envelope](#send-flat-file-idocs).
-
-### SAP compatibility
+## SAP compatibility
The SAP connector is compatible with the following types of SAP systems:
The SAP connector supports the following message and data integration types from
The SAP connector uses the [SAP .NET Connector (NCo) library](https://support.sap.com/en/product/connectors/msnet.html).
-To use the available [SAP trigger](#triggers) and [SAP actions](#actions), you need to first authenticate your connection. You can authenticate your connection with a username and password. The SAP connector also supports [SAP Secure Network Communications (SNC)](https://help.sap.com/viewer/e73bba71770e4c0ca5fb2a3c17e8e229/7.31.25/en-US/e656f466e99a11d1a5b00000e835363f.html) for authentication. You can use SNC for SAP NetWeaver single sign-on (SSO), or for additional security capabilities from external products. If you use SNC, review the [SNC prerequisites](#snc-prerequisites) and the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
-
-### Network prerequisites
-
-The SAP system requires network connectivity from the host of the SAP .NET Connector (NCo) library. The multi-tenant host of the SAP .NET Connector (NCo) library is the on-premises data gateway. If you use an on-premises data gateway cluster, all nodes of the cluster require network connectivity to the SAP system. The ISE host of the SAP .NET Connector (NCo) library is within the ISE virtual network.
-
-The SAP system-required network connectivity includes the following servers and
-
-* SAP Application Server, Dispatcher service (for all Logon types)
-
- Your SAP system can include multiple SAP Application Servers. The host of the SAP .NET Connector (NCo) library requires access to each server and their services.
-
-* SAP Message Server, Message service (for Logon type Group)
-
- The Message Server and service will redirect to one or more Application Server's Dispatcher services. The host of the SAP .NET Connector (NCo) library requires access to each server and their services.
-
-* SAP Gateway Server, Gateway service
-
-* SAP Gateway Server, Gateway secured service
-
- The SAP system-required network connectivity also includes this server and service to use with the Secure Network Communications (SNC).
-
-Redirection of requests from Application Server, Dispatcher service to Gateway Server, Gateway service occurs automatically within the SAP .NET Connector (NCo) library. This redirection occurs even if only the Application Server, Dispatcher service information is provided in the connection parameters.
-
-If you're using a load balancer in front of your SAP system, all the services must be redirected to their respective servers.
-
-For more information about SAP services and ports, review the [TCP/IP Ports of All SAP Products](https://help.sap.com/viewer/ports).
-
-> [!NOTE]
-> Make sure you enabled network connectivity from the host of the SAP .NET Connector (NCo) library and that
-> the required ports are open on firewalls and network security groups. Otherwise, you get errors such as
-> **partner not reached** from component **NI (network interface)** and additional error text such as **WSAECONNREFUSED: Connection refused**.
-
-### Migrate to current connector
+To use the SAP connector operations, you have to first authenticate your connection and have the following options:
-The previous SAP Application Server and SAP Message server connectors were deprecated February 29, 2020. To migrate to the current SAP connector, follow these steps:
+* You can provide a username and password.
-1. Update your [on-premises data gateway](https://www.microsoft.com/download/details.aspx?id=53127) to the current version. For more information, review [Install an on-premises data gateway for Azure Logic Apps](logic-apps-gateway-install.md).
+* The SAP connector supports authentication with [SAP Secure Network Communications (SNC)](https://help.sap.com/viewer/e73bba71770e4c0ca5fb2a3c17e8e229/7.31.25/en-US/e656f466e99a11d1a5b00000e835363f.html).
-1. In your logic app workflow that uses the deprecated SAP connector, delete the **Send to SAP** action.
+You can use SNC for SAP NetWeaver single sign-on (SSO) or for security capabilities from external products. If you choose to use SNC, review the [SNC prerequisites](#snc-prerequisites) and the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
-1. Add the **Send message to SAP** action from the current SAP connector.
+## Connector technical reference
-1. Reconnect to your SAP system in the new action.
+The SAP connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-1. Save your logic app workflow. On the designer toolbar, select **Save**.
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label, and the ISE-native version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. <br><br>**Note**: Make sure to use the ISE-native version, not the managed version. <br><br>For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector (preview), which appears in the designer under the **Built-in** label and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
-## Multi-tenant Azure prerequisites
+<a name="connector-parameters"></a>
-These prerequisites apply if your logic app workflow runs in multi-tenant Azure. The managed SAP connector doesn't run natively in an [ISE](connect-virtual-network-vnet-isolated-environment-overview.md).
+### Connector parameters
-> [!TIP]
-> If you're using a Premium-level ISE, you can use the SAP ISE connector instead of the managed SAP connector.
-> For more information, review the [ISE prerequisites](#ise-prerequisites).
+Along with simple string and number inputs, the SAP connector accepts the following table parameters (`Type=ITAB` inputs):
-The managed SAP connector integrates with SAP systems through your [on-premises data gateway](logic-apps-gateway-connection.md). For example, in send message scenarios, when a message is sent from a logic app workflow to an SAP system, the data gateway acts as an RFC client and forwards the requests received from the logic app workflow to SAP. Likewise, in receive message scenarios, the data gateway acts as an RFC server that receives requests from SAP and forwards them to the logic app workflow.
+* Table direction parameters, both input and output, for older SAP releases.
+* Changing parameters, which replace the table direction parameters for newer SAP releases.
+* Hierarchical table parameters.
-1. [Download and install the on-premises data gateway](logic-apps-gateway-install.md) on a host computer or virtual machine that exists in the same virtual network as the SAP system to which you're connecting.
+## Known issues and limitations
-1. [Create an Azure gateway resource](logic-apps-gateway-connection.md#create-azure-gateway-resource) for your on-premises data gateway in the Azure portal. This gateway helps you securely access on-premises data and resources. Make sure to use a supported version of the gateway.
+### SAP managed connector
- > [!TIP]
- > If you experience an issue with your gateway, try [upgrading to the latest version](https://aka.ms/on-premises-data-gateway-installer),
- > which might include updates to resolve your problem.
+* The SAP connector currently doesn't support SAP router strings. The on-premises data gateway must exist on a virtual network where the gateway can directly reach the SAP system that you want to connect.
-1. [Download and install the latest SAP client library](#sap-client-library-prerequisites) on the same local computer as your on-premises data gateway.
+* In general, the SAP trigger doesn't support data gateway clusters. In some failover cases, the data gateway node that communicates with the SAP system might differ from the active node, which results in unexpected behavior.
-1. Configure the network host names and service names resolution for the host machine where you installed the on-premises data gateway.
+ * For send message scenarios, data gateway clusters in failover mode are supported.
-   If you intend to use the host names or service names for connections from Azure Logic Apps, you have to set up name resolution for each SAP Application, Message, and Gateway server along with their services:
-
- * Set up the network host name resolution in the **%windir%\System32\drivers\etc\hosts** file or in the DNS server that's available to your on-premises data gateway host machine.
-
- * Set up the service name resolution in the **%windir%\System32\drivers\etc\services** file.
-
- If you don't intend to use network host names or service names for the connection, you can use host IP addresses and service port numbers instead.
+   * Stateful [SAP actions](/connectors/sap/#actions) don't support data gateway clusters in load-balancing mode. Stateful communications must remain on the same data gateway cluster node. Either use the data gateway in non-cluster mode or in a cluster that's set up for failover only. For example, these stateful actions include the following:
- If you don't have a DNS entry for your SAP system, the following example shows a sample entry for the hosts file:
+ * All actions that specify a **Session ID** value
+ * **\[BAPI] Commit transaction**
+ * **\[BAPI] Rollback transaction**
+ * **\[BAPI - RFC] Close stateful session**
+ * **\[BAPI - RFC] Create stateful session**
- ```text
- 10.0.1.9 sapserver # SAP single-instance system host IP by simple computer name
- 10.0.1.9 sapserver.contoso.com # SAP single-instance system host IP by fully qualified DNS name
- ```
+* In the action named **\[BAPI] Call method in SAP**, the auto-commit feature won't commit the BAPI changes if at least one warning exists in the **CallBapiResponse** object returned by the action. To commit BAPI changes despite any warnings, follow these steps:
-   A sample set of entries for the services file is:
+ 1. Create a session explicitly using the action named **\[BAPI - RFC] Create stateful session**.
+ 1. In the action named **\[BAPI] Call method in SAP**, disable the auto-commit feature.
+ 1. Call the action named **\[BAPI] Commit transaction** instead.
- ```text
- sapdp00 3200/tcp # SAP system instance 00 dialog (application) service port
- sapgw00 3300/tcp # SAP system instance 00 gateway service port
- sapmsDV6 3601/tcp # SAP system ID DV6 message service port
- ```
+### SAP built-in connector
-## ISE prerequisites
+The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
-An ISE provides access to resources that are protected by an Azure virtual network and offers other ISE-native connectors that let logic app workflows directly access on-premises resources without using the on-premises data gateway.
+## Prerequisites
-1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md).
+* An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-1. [Download and install the latest SAP client library](#sap-client-library-prerequisites) on your local computer. You should have the following assembly (.dll) files:
+* The [SAP Application server](https://wiki.scn.sap.com/wiki/display/ABAP/ABAP+Application+Server) or [SAP Message server](https://help.sap.com/saphelp_nw70/helpdata/en/40/c235c15ab7468bb31599cc759179ef/frameset.htm) that you want to access from Azure Logic Apps.
- * libicudecnumber.dll
+ * Set up your SAP server and user account to allow using RFC.
- * rscp4n.dll
+ For more information, which includes the supported user account types and the minimum required authorization for each action type (RFC, BAPI, IDoc), review the following SAP note: [460089 - Minimum authorization profiles for external RFC programs](https://launchpad.support.sap.com/#/notes/460089).
- * sapnco.dll
+ * Your SAP user account needs access to the `RFC_METADATA` function group and the respective function modules for the following operations:
- * sapnco_utils.dll
+ | Operations | Access to function modules |
+ ||-|
+ | RFC actions | `RFC_GROUP_SEARCH` and `DD_LANGU_TO_ISOLA` |
+ | BAPI actions | `BAPI_TRANSACTION_COMMIT`, `BAPI_TRANSACTION_ROLLBACK`, `RPY_BOR_TREE_INIT`, `SWO_QUERY_METHODS`, and `SWO_QUERY_API_METHODS` |
+ | IDoc actions | `IDOCTYPES_LIST_WITH_MESSAGES`, `IDOCTYPES_FOR_MESTYPE_READ`, `INBOUND_IDOCS_FOR_TID`, `OUTBOUND_IDOCS_FOR_TID`, `GET_STATUS_FROM_IDOCNR`, and `IDOC_RECORD_READ` |
+   | [**Read SAP table**](/connectors/sap/#read-sap-table) action | Either `BBP_RFC_READ_TABLE` or `RFC_READ_TABLE` |
+   | Grant strict minimum access to the SAP server for your SAP connection | `RFC_METADATA_GET` and `RFC_METADATA_GET_TIMESTAMP` |
-1. Create a .zip file that includes these assembly files at the root folder. Upload the package to your blob container in Azure Storage.
+* The logic app workflow from where you want to access your SAP server.
- > [!TIP]
- > Don't use a subfolder inside the .zip file. Only assemblies in the archive's root folder are deployed with the SAP connector in your ISE.
- >
- > If you use SNC, also include the SNC assemblies and binaries in the same .zip file at the root.
- > For more information, review the [SNC prerequisites (ISE)](#snc-prerequisites-ise).
+ * For a Consumption workflow in multi-tenant Azure Logic Apps, see [Multi-tenant prerequisites](#multi-tenant-prerequisites).
-1. In either the Azure portal or Azure Storage Explorer, browse to the container location where you uploaded the .zip file.
+ * For a Standard workflow in single-tenant Azure Logic Apps, see [Single-tenant prerequisites](#single-tenant-prerequisites).
-1. Copy the URL of the container location. Make sure to include the Shared Access Signature (SAS) token so that the request is authorized. Otherwise, deployment for the SAP ISE connector fails.
+ * For a Consumption workflow in a Premium-level [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), see [ISE prerequisites](#ise-prerequisites).
-1. Install and deploy the SAP connector in your ISE. For more information, review [Add ISE connectors](add-artifacts-integration-service-environment-ise.md#add-ise-connectors-environment).
+ > [!NOTE]
+ >
+ > When you use a Premium-level ISE, use the ISE-native SAP connector, not the SAP managed connector,
+ > which doesn't natively run in an ISE. For more information, review the [ISE prerequisites](#ise-prerequisites).
- 1. In the [Azure portal](https://portal.azure.com), find and open your ISE.
+* To use either the SAP managed connector trigger named **When a message is received from SAP** or the SAP built-in trigger named **Register SAP RFC server for trigger**, complete the following tasks:
- 1. On the ISE menu, select **Managed connectors** &gt; **Add**. From the connectors list, find and select **SAP**.
+ * Set up your SAP gateway security permissions or Access Control List (ACL). In the **Gateway Monitor** (T-Code SMGW) dialog box, which shows the **secinfo** and **reginfo** files, open the **Goto** menu, and select **Expert Functions** > **External Security** > **Maintenance of ACL Files**.
- 1. On the **Add a new managed connector** pane, in the **SAP package** box, paste the URL for the .zip file that has the SAP assemblies. Again, make sure to include the SAS token.
+ The following permission setting is required:
- 1. Select **Create** to finish creating your ISE connector.
+ `P TP=LOGICAPP HOST=<on-premises-gateway-server-IP-address> ACCESS=*`
-1. If your SAP instance and ISE are in different virtual networks, you also need to [peer those networks](../virtual-network/tutorial-connect-virtual-networks-portal.md) so they're connected. Also review the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
+ This line has the following format:
-1. Get the IP addresses for the SAP Application, Message, and Gateway servers that you plan to use for connecting from your logic app workflow. Network name resolution isn't available for SAP connections in an ISE.
+ `P TP=<trading-partner-identifier-(program-name)-or-*-for-all-partners> HOST=<comma-separated-list-with-external-host-IP-or-network-names-that-can-register-the-program> ACCESS=<*-for-all-permissions-or-a-comma-separated-list-of-permissions>`
-1. Get the port numbers for the SAP Application, Message, and Gateway services that you plan to use for connections from your logic app workflow. Service name resolution isn't available for SAP connections in an ISE.
+ If you don't configure the SAP gateway security permissions, you might receive the following error:
-### SAP client library prerequisites
+ **Registration of tp Microsoft.PowerBI.EnterpriseGateway from host <*host-name*> not allowed**
-The following list describes the prerequisites for the SAP client library that you're using with the connector:
+ For more information, review [SAP Note 1850230 - GW: "Registration of tp &lt;program ID&gt; not allowed"](https://userapps.support.sap.com/sap/support/knowledge/en/1850230).
-* Make sure that you install the latest version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). Earlier versions of SAP NCo might experience the following issues:
+ * Set up your SAP gateway security logging to help find Access Control List (ACL) issues. For more information, review the [SAP help topic for setting up gateway logging](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.31.25/en-US/48b2a710ca1c3079e10000000a42189b.html).
- * When more than one IDoc message is sent at the same time, this condition blocks all later messages that are sent to the SAP destination, causing messages to time out.
+ * In the **Configuration of RFC Connections** (T-Code SM59) dialog box, create an RFC connection with the **TCP/IP** type. Make sure that the **Activation Type** is set to **Registered Server Program**. Set the RFC connection's **Communication Type with Target System** value to **Unicode**.
- * Session activation might fail due to a leaked session. This condition might block calls sent by SAP to the logic app workflow trigger.
+  * If you use this SAP trigger with the **IDOC Format** parameter set to **FlatFile** along with the [Flat File Decode action](logic-apps-enterprise-integration-flatfile.md), you have to set the `early_terminate_optional_fields` property in your flat file schema to `true`.
- * The on-premises data gateway (June 2021 release) depends on the `SAP.Middleware.Connector.RfcConfigParameters.Dispose()` method in SAP NCo to free up resources.
+    This requirement is necessary because the flat file IDoc data record that SAP sends on the tRFC call `IDOC_INBOUND_ASYNCHRONOUS` isn't padded to the full SDATA field length. Azure Logic Apps provides the flat file IDoc original data without padding, exactly as received from SAP. Also, when you combine this SAP trigger with the **Flat File Decode** action, the schema that's provided to the action must match this unpadded data.
- * After you upgrade the SAP server environment, you get the following exception message: 'The only destination &lt;some-GUID&gt; available failed when retrieving metadata from &lt;SAP-system-ID&gt; -- see log for details'.
+ > [!NOTE]
+ >
+ > In Consumption and Standard workflows, the SAP managed trigger named **When a message is received from SAP**
+ > uses the same URI location to both renew and unsubscribe from a webhook subscription. The renewal operation
+ > uses the HTTP `PATCH` method, while the unsubscribe operation uses the HTTP `DELETE` method. This behavior
+ > might make a renewal operation appear as an unsubscribe operation in your trigger's history, but the operation
+ > is still a renewal because the trigger uses `PATCH` as the HTTP method, not `DELETE`.
+ >
+ > In Standard workflows, the SAP built-in trigger named **Register SAP RFC server for trigger** uses the Azure
+ > Functions trigger instead, and shows only the actual callbacks from SAP.
-* You must have the 64-bit version of the SAP client library installed, because the data gateway only runs on 64-bit systems. Installing the unsupported 32-bit version results in a "bad image" error.
+* The message content to send to your SAP server, such as a sample IDoc file. This content must be in XML format and include the namespace of the [SAP action](/connectors/sap/#actions) that you want to use. You can [send IDocs with a flat file schema by wrapping them in an XML envelope](sap-create-example-scenario-workflows.md#send-flat-file-idocs).
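+
+For illustration only, the following PowerShell sketch shows what such message content can look like for a flat file IDoc, based on the XML envelope format described in the linked article. The variable name is hypothetical, and the IDoc data is a truncated placeholder, not a working IDoc:
+
+```powershell
+# A minimal sketch, not a complete payload: wrap flat file IDoc content in the
+# XML envelope for the SendIdoc operation, as the linked article describes.
+# Replace the placeholder IDoc data with your own flat file IDoc content.
+$messageContent = @"
+<SendIdoc xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/">
+  <idocData>EDI_DC ...</idocData>
+</SendIdoc>
+"@
+```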
-* From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows:
+<a name="network-prerequisites"></a>
- * For a logic app workflow that runs in an ISE, follow the [ISE prerequisites](#ise-prerequisites) instead.
+### Network connectivity prerequisites
- * For a logic app workflow that runs in multi-tenant Azure and uses your on-premises data gateway, copy the DLL files to the on-premises data gateway installation folder, for example, "C:\Program Files\On-Premises Data Gateway".
+The SAP system requires network connectivity from the host of the SAP .NET Connector (NCo) library:
- > [!NOTE]
- > If your SAP connection fails with the error message, **Please check your account info and/or permissions and try again**,
- > make sure you copied the assembly (.dll) files to the data gateway installation folder, for example, "C:\Program Files\On-Premises Data Gateway".
- >
- > You can troubleshoot further issues using the [.NET assembly binding log viewer](/dotnet/framework/tools/fuslogvw-exe-assembly-binding-log-viewer).
- > This tool lets you check that your assembly files are in the correct location.
-
- * Optionally, when you install the SAP client library, select the **Global Assembly Cache registration** option.
+* For Consumption logic app workflows in multi-tenant Azure Logic Apps, the on-premises data gateway hosts the SAP .NET Connector (NCo) library. If you use an on-premises data gateway cluster, all nodes of the cluster require network connectivity to the SAP system.
-Note the following relationships between the SAP client library, the .NET Framework, the .NET runtime, and the gateway:
+* For Standard logic app workflows in single-tenant Azure Logic Apps, the logic app resource hosts the SAP .NET Connector (NCo) library. So, the logic app resource itself must enable virtual network integration, and that virtual network must have network connectivity to the SAP system.
-* Both the Microsoft SAP Adapter and the gateway host service use .NET Framework 4.7.2.
+* For Consumption logic app workflows in an ISE, the ISE virtual network hosts the SAP .NET Connector (NCo) library.
-* The SAP NCo for .NET Framework 4.0 works with processes that use .NET runtime 4.0 to 4.8.
+The required network connectivity to your SAP system includes the following servers and services:
-* The SAP NCo for .NET Framework 2.0 works with processes that use .NET runtime 2.0 to 3.5, but no longer works with the latest gateway.
+* SAP Application Server, Dispatcher service (for all Logon types)
-### SNC prerequisites
+  Your SAP system can include multiple SAP Application Servers. The host of the SAP .NET Connector (NCo) library requires access to each server and its services.
-If you use an on-premises data gateway with optional SNC, which is only supported in multi-tenant Azure, you must configure these additional settings. If you're using an ISE, review the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
+* SAP Message Server, Message service (for Logon type Group)
-If you're using SNC with SSO, make sure the data gateway service is running as a user who is mapped against the SAP user. To change the default account, select **Change account**, and enter the user credentials.
+  The Message Server and its service redirect to the Dispatcher services of one or more Application Servers. The host of the SAP .NET Connector (NCo) library requires access to each server and its services.
-![Screenshot that shows Azure portal with on-premises data gateway settings and Service Settings page with button to change gateway service account selected.](./media/logic-apps-using-sap-connector/gateway-account.png)
+* SAP Gateway Server, Gateway service
-If you're enabling SNC through an external security product, copy the SNC library and its files to the same computer where your data gateway is installed. Some examples of SNC products include [sapseculib](https://help.sap.com/saphelp_nw74/helpdata/en/7a/0755dc6ef84f76890a77ad6eb13b13/frameset.htm), Kerberos, and NTLM. For more information about enabling SNC for the data gateway, review [Enable Secure Network Communications (SNC)](#enable-secure-network-communications).
+* SAP Gateway Server, Gateway secured service
-> [!TIP]
-> The version of your SNC library and its dependencies must be compatible with your SAP environment.
->
-> * You must use `sapgenpse.exe` specifically as the SAPGENPSE utility.
-> * If you use an on-premises data gateway, also copy these same binary files to the installation folder there, for example, "C:\Program Files\On-Premises Data Gateway".
-> * If PSE is provided in your connection, you don't need to copy and set up PSE and SECUDIR for your on-premises data gateway.
-> * You can also use your on-premises data gateway to troubleshoot any library compatibility issues.
+   Network connectivity to this server and its service is also required when you use Secure Network Communications (SNC).
-#### SNC prerequisites (ISE)
+The SAP .NET Connector (NCo) library automatically redirects requests from the Application Server, Dispatcher service to the Gateway Server, Gateway service. This redirection occurs even if only the Application Server, Dispatcher service information is provided in the connection parameters.
-The ISE version of the SAP connector supports SNC X.509. You can enable SNC for your SAP ISE connections with the following steps:
+If you're using a load balancer in front of your SAP system, you must redirect all the services to their respective servers.
+For more information about SAP services and ports, review the [TCP/IP Ports of All SAP Products](https://help.sap.com/viewer/ports).
-> [!IMPORTANT]
-> Before you redeploy an existing SAP connector to use SNC, you must delete all connections to the old connector.
-> Multiple logic app workflows can use the same connection to SAP. As such, you must delete any SAP connections from all
-> your logic app workflows in the ISE. Then, you must delete the old connector.
+> [!NOTE]
>
-> When you delete an old connector, you can still keep the logic app workflows that use this connector. After you redeploy
-> the connector, you can then authenticate the new connection in your SAP triggers and actions in these logic app workflows.
+> Make sure you enabled network connectivity from the host of the SAP .NET Connector (NCo) library and that the required
+> ports are open on firewalls and network security groups. Otherwise, you get errors such as **partner not reached**
+> from the **NI (network interface)** component and error text such as **WSAECONNREFUSED: Connection refused**.
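+
+To verify reachability from the NCo host, you can test the required ports. The following sketch is only an example: it assumes SAP instance number 00 and system ID DV6 on a hypothetical host named **sapserver.contoso.com**. Substitute your own host name, instance number, and ports:
+
+```powershell
+# Hypothetical SAP host and service ports for instance 00 and system ID DV6:
+# 3200 = Dispatcher (dialog), 3300 = Gateway, 3601 = Message service.
+Test-NetConnection -ComputerName 'sapserver.contoso.com' -Port 3200
+Test-NetConnection -ComputerName 'sapserver.contoso.com' -Port 3300
+Test-NetConnection -ComputerName 'sapserver.contoso.com' -Port 3601
+```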
-First, if you've already deployed the SAP connector without the SNC or SAPGENPSE libraries, delete all the connections and the connector.
+<a name="sap-client-library-prerequisites"></a>
-1. Sign in to the [Azure portal](https://portal.azure.com).
+### SAP NCo client library prerequisites
-1. Delete all connections to your SAP connector from your logic app workflows.
+To use the SAP connector, you'll need the SAP NCo client library named [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). The following list describes the prerequisites for the SAP NCo client library that you're using with the SAP connector:
- 1. Open your logic app resource in the Azure portal.
+* Version:
- 1. In your logic app's menu, under **Development Tools**, select **API connections**.
+  * SAP Connector (NCo 3.1) isn't currently supported because dual-version capability is unavailable.
- 1. On the **API connections** page, select your SAP connection.
+  * For Consumption logic app workflows that use the on-premises data gateway, make sure that you install the latest 64-bit version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). The data gateway runs only on 64-bit systems. Installing the unsupported 32-bit version results in a **"bad image"** error. For one way to verify the installed assembly version, see the sketch after this list.
- 1. On the connection's page menu, select **Delete**.
+ Earlier versions of SAP NCo might experience the following issues:
- 1. Accept the confirmation prompt to delete the connection.
+ * When more than one IDoc message is sent at the same time, this condition blocks all later messages that are sent to the SAP destination, causing messages to time out.
- 1. Wait for the portal notification that the connection has been deleted.
+ * Session activation might fail due to a leaked session. This condition might block calls sent by SAP to the logic app workflow trigger.
-1. Or, delete connections to your SAP connector from the API connections in your ISE.
+ * The on-premises data gateway (June 2021 release and newer releases) depends on the `SAP.Middleware.Connector.RfcConfigParameters.Dispose()` method in SAP NCo to free up resources.
- 1. Open your ISE resource in the Azure portal.
+ * After you upgrade the SAP server environment, you get the following exception message: **"The only destination &lt;some-GUID&gt; available failed when retrieving metadata from &lt;SAP-system-ID&gt; -- see log for details"**.
- 1. In your ISE's menu, under **Settings**, select **API connections**.
+  * For Standard logic app workflows, you can use the 32-bit or 64-bit version of the SAP NCo client library, but make sure that you install the version that matches the configuration in your Standard logic app resource. To check this version, follow these steps:
- 1. On the **API connections** page, select your SAP connection.
+ 1. In the [Azure portal](https://portal.azure.com), open your Standard logic app.
- 1. On the connection's page menu, select **Delete**.
+ 1. On the logic app resource menu, under **Settings**, select **Configuration**.
- 1. Accept the confirmation prompt to delete the connection.
+ 1. On the **Configuration** pane, under **Platform settings**, check whether the **Platform** value is set to 64-bit or 32-bit.
- 1. Wait for the portal notification that the connection has been deleted.
+ 1. Make sure to install the matching version of the [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0](https://support.sap.com/en/product/connectors/msnet.html).
-Next, delete the SAP connector from your ISE. You must delete all connections to this connector in all your logic apps before you can delete the connector. If you haven't already deleted all connections, review the previous set of steps.
+    1. To use the SAP connector, you need the following files from the SAP NCo client library. Have them ready to upload to your logic app resource:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+ - **libicudecnumber.dll**
+ - **rscp4n.dll**
+ - **sapnco.dll**
+ - **sapnco_utils.dll**
-1. Open your ISE resource in the Azure portal again.
+* From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows. Or, optionally, if you're using only the SAP managed connector, you can select **Global Assembly Cache registration** when you install the SAP NCo client library. The ISE zip archive and the SAP built-in connector currently don't support GAC registration.
-1. In your ISE's menu, under **Settings**, select **Managed connectors**.
+ * For a Consumption workflow that runs in multi-tenant Azure Logic Apps and uses your on-premises data gateway, copy the assembly (.dll) files to the on-premises data gateway installation folder, for example, **C:\Program Files\On-Premises Data Gateway**.
-1. On the **Managed connectors** page, select the checkbox for your SAP connector.
+    Make sure that you copy the assembly files to the data gateway's *installation folder*. Otherwise, your SAP connection might fail with the error message, **Please check your account info and/or permissions and try again**. You can troubleshoot further issues using the [.NET assembly binding log viewer](/dotnet/framework/tools/fuslogvw-exe-assembly-binding-log-viewer). This tool lets you check that your assembly files are in the correct location. For an example that copies these files, see the sketch after this list.
-1. In the toolbar, select **Delete**.
+ * For Standard workflows, copy the assembly (.dll) files to a location from where you can upload them to your logic app resource or project where you're building your workflow, either in the Azure portal or locally in Visual Studio Code, respectively.
-1. Accept the confirmation prompt to delete the connector.
+ * For a Consumption workflow in an ISE, follow the [ISE prerequisites](#ise-prerequisites) instead.
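+
+For example, the following sketch shows one way to confirm the installed NCo assembly version and copy the assembly files for the on-premises data gateway scenario. The SAP NCo installation path is an assumption; adjust both paths to match your environment, and run the commands from an elevated PowerShell session:
+
+```powershell
+# Assumption: the SAP NCo client library is installed in this folder;
+# adjust the path to match your installation.
+$ncoFolder = 'C:\Program Files\SAP\SAP_DotNetConnector3_Net40_x64'
+$gatewayFolder = 'C:\Program Files\On-Premises Data Gateway'
+
+# Confirm the NCo assembly version, for example, 3.0.25.0.
+[System.Reflection.AssemblyName]::GetAssemblyName(
+    (Join-Path -Path $ncoFolder -ChildPath 'sapnco.dll')).Version
+
+# Copy the assembly (.dll) files to the data gateway installation folder.
+Copy-Item -Path (Join-Path -Path $ncoFolder -ChildPath '*.dll') -Destination $gatewayFolder
+```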
-1. Wait for the portal notification that the connector has been deleted.
+The following relationships exist between the SAP NCo client library, the .NET Framework, the .NET runtime, and the data gateway:
-Next, deploy or redeploy the SAP connector in your ISE:
+* The Microsoft SAP Adapter and the gateway host service both use .NET Framework 4.7.2.
-1. Prepare a new zip archive file to use in your SAP connector deployment. You must include the SNC library and the SAPGENPSE utility.
+* The SAP NCo for .NET Framework 4.0 works with processes that use .NET runtime 4.0 to 4.8.
- 1. Copy all SNC, SAPGENPSE, and NCo libraries to the root folder of your zip archive. Don't put these binaries in subfolders.
+* The SAP NCo for .NET Framework 2.0 works with processes that use .NET runtime 2.0 to 3.5, but no longer works with the latest gateway.
- 1. You must use the 64-bit SNC library. There's no 32-bit support.
+<a name="snc-prerequisites"></a>
-   1. Your SNC library and its dependencies must be compatible with your SAP environment. For how to check compatibility, review the [ISE prerequisites](#ise-prerequisites).
+### SNC prerequisites
-1. Follow the deployment steps in [ISE prerequisites](#ise-prerequisites) with your new zip archive.
+### [Consumption](#tab/consumption)
-Last, create new connections that use SNC in all your logic apps that use the SAP connector. For each connection, follow these steps:
+<a name="snc-prerequisites-consumption"></a>
-1. Open your workflow in the workflow designer again.
+For Consumption workflows in multi-tenant Azure Logic Apps that use the on-premises data gateway, and optionally SNC, you must also configure the following settings.
-1. Create or edit a step that uses the SAP connector.
+* Make sure that your SNC library version and its dependencies are compatible with your SAP environment. To troubleshoot any library compatibility issues, you can use your on-premises data gateway and data gateway logs.
-1. Enter required information about your SAP connection.
+* For the SAPGENPSE utility, you must specifically use **sapgenpse.exe**.
- :::image type="content" source=".\media\logic-apps-using-sap-connector\ise-connector-settings.png" alt-text="Screenshot of the workflow designer, showing SAP connection settings." lightbox=".\media\logic-apps-using-sap-connector\ise-connector-settings.png":::
+* If you provide a Personal Security Environment (PSE) with your connection, you don't need to copy and set up the PSE and SECUDIR for your on-premises data gateway.
- > [!NOTE]
- > The fields **SAP Username** and **SAP Password** are optional. If you don't provide a username and password,
- > the connector uses the client certificate provided in a later step for authentication.
+* If you enable SNC through an external security product, such as [sapseculib](https://help.sap.com/saphelp_nw74/helpdata/en/7a/0755dc6ef84f76890a77ad6eb13b13/frameset.htm), Kerberos, or NTLM, make sure that the SNC library exists on the same computer as your data gateway installation. For this task, copy the SNC library's binary files to the same folder as the data gateway installation on your local computer. For example, **C:\Program Files\On-Premises Data Gateway**.
-1. Enable SNC.
+   > [!NOTE]
+   >
+   > On the computer with the data gateway installation and SNC library, don't set the
+   > environment variables for **SNC_LIB** and **SNC_LIB_64**. Otherwise, these variables
+   > take precedence over the SNC library value passed through the connector. A quick
+   > check for these variables appears after this list.
- 1. For **Use SNC**, select the checkbox.
+* To use SNC with single sign-on (SSO), make sure the data gateway service runs as a user who's mapped to an SAP user. To change the default account for the gateway service, select **Change account**, and enter the user credentials.
- 1. For **SNC Library**, enter the name of your SNC library. For example, `sapcrypto.dll`.
+ ![Screenshot that shows the on-premises data gateway installer and Service Settings page with button to change gateway service account selected.](./media/logic-apps-using-sap-connector/gateway-account.png)
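+
+As the preceding note describes, the **SNC_LIB** and **SNC_LIB_64** environment variables shouldn't be set on the data gateway host. The following sketch checks the machine-level values, which should both return nothing. You can check the user level the same way by passing `'User'` instead:
+
+```powershell
+# Both calls should return nothing on the data gateway host. If either call
+# returns a path, remove that variable so it can't override the SNC library
+# value that's passed through the connector.
+[Environment]::GetEnvironmentVariable('SNC_LIB', 'Machine')
+[Environment]::GetEnvironmentVariable('SNC_LIB_64', 'Machine')
+```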
- 1. For **SNC Partner Name**, enter the backend's SNC name. For example, `p:CN=DV3, OU=LA, O=MS, C=US`.
+For more information about enabling SNC, review [Enable Secure Network Communications (SNC)](#enable-secure-network-communications).
- 1. For **SNC Certificate**, enter your SNC client's public certificate in base64-encoded format. Don't include the PEM header or footer. Don't enter the private certificate here because the PSE might contain multiple private certificates, but this **SNC Certificate** parameter identifies the certificates that must be used for this connection. For more information, review the following note.
+### [Standard](#tab/standard)
- 1. Optionally, enter SNC settings for **SNC My Name**, **SNC Quality of Protection** as needed.
+<a name="snc-prerequisites-standard"></a>
- :::image type="content" source=".\media\logic-apps-using-sap-connector\ise-connector-settings-snc.png" alt-text="Screenshot that shows the workflow designer and the SNC configuration settings for a new SAP connection." lightbox=".\media\logic-apps-using-sap-connector\ise-connector-settings-snc.png":::
+The SAP built-in connector supports only SNC X.509 authentication, not single sign-on (SSO) authentication. Make sure that you install the SNC and common crypto library assemblies as part of your [single-tenant prerequisites](#single-tenant-prerequisites) and [network connectivity prerequisites](#network-prerequisites). For more information about enabling SNC, review [Enable Secure Network Communications (SNC)](#enable-secure-network-communications).
-1. Configure PSE settings. For **PSE**, enter your SNC PSE as a base64-encoded binary.
+For SNC from SAP, you'll need to download the following files and have them ready to upload to your logic app resource. You can find these files in the **CommonCryptoLib.sar** package, which is available from the [**SAP for Me, Software Download Center**](https://me.sap.com/softwarecenter) (SAP sign-in required). For more information, see [Download **CommonCryptoLib.sar**](#download-common-crypto).
- * The PSE must contain the private client certificate, which thumbprint matches the public client certificate that you provided in the previous step.
+- **sapcrypto.dll**
+- **sapgenpse.exe**
+- **slcryptokernal.dll**
- * The PSE may contain additional client certificates.
+> [!NOTE]
+>
+> If you use a different SNC implementation, these library files might have different names.
+> In any case, **sapgenpse.exe** is required to use SNC with the SAP built-in connector.
- * The PSE must have no PIN. If needed, set the PIN to empty using the SAPGENPSE utility.
+<a name="download-common-crypto"></a>
- For certificate rotation, follow these steps:
+#### Download CommonCryptoLib.sar
- 1. Update the base64-encoded binary PSE for all connections that use SAP ISE X.509 in your ISE.
+You can find the required assemblies and other files for SNC from SAP in the **CommonCryptoLib.sar** package, which is available from the [**SAP for Me, Software Download Center**](https://me.sap.com/softwarecenter) (SAP sign-in required). You can use any currently supported **CommonCryptoLib** library implementation, based on compatible versions specific to your SAP environment. However, Microsoft recommends that you use the latest version of the **CommonCryptoLib** library available from SAP, assuming that version is compatible with your SAP environment.
- 1. Import the new certificates into your copy of the PSE.
+To download the current **CommonCryptoLib** package, follow these steps:
- 1. Encode the PSE file as a base64-encoded binary.
+1. Sign in to the [**SAP for Me, Software Download Center**](https://me.sap.com/softwarecenter).
- 1. Edit the API connection for your SAP connector, and save the new PSE file there.
+1. On the **Download Software** page, select the **Installation & Upgrades** tab, expand **By Alphabetical Index (A-Z)**, and select **C** > **SAP Cryptographic Software** > **Downloads** tab > **SapCryptoLib** > **Downloads** tab > **CommonCryptoLib 8** > **Downloads** tab.
- The connector detects the PSE change and updates its own copy during the next connection request.
+1. From the **Items Available to Download** list, select **Windows on x64 64Bit** or **Windows Server on IA32 x32 Bit**, whichever matches your Standard logic app's platform configuration.
- To convert a binary PSE file into base64-encoded format, follow these steps:
-
- 1. Use a PowerShell script, for example:
-
- ```powershell
- Param ([Parameter(Mandatory=$true)][string]$psePath, [string]$base64OutputPath)
- $base64String = [convert]::ToBase64String((Get-Content -path $psePath -Encoding byte))
- if ($base64OutputPath -eq $null)
- {
- Write-Output $base64String
- }
- else
- {
- Set-Content -Path $base64OutputPath -Value $base64String
- Write-Output "Output written to $base64OutputPath"
- }
- ```
+ Microsoft recommends the 64-bit version.
- 1. Save the script as a `pseConvert.ps1` file, and then invoke the script, for example:
+1. From the list, select the highest patch level.
- ```output
- .\pseConvert.ps1 -psePath "C:\Temp\SECUDIR\request.pse" -base64OutputPath "connectionInput.txt"
- Output written to connectionInput.txt
- ```
+ The current patch number varies based on the selected Windows version.
-   If the output path parameter isn't provided, the script's output to the console includes line breaks. Remove the line breaks from the base64-encoded string before you use it as the connection input parameter.
+1. If you don't have the [`SAPCAR` utility](https://help.sap.com/docs/Convergent_Charging/d1d04c0d65964a9b91589ae7afc1bd45/467291d0dc104d19bba073a0380dc6b4.html) to extract the .sar file, follow these steps:
- > [!NOTE]
- > If you're using more than one SNC client certificate for your ISE, you must provide the same PSE for all connections.
- > The PSE must contain the client private certificate for each and all of the connections.
- > You must set the client public certificate parameter to match the specific private certificate for each connection used in your ISE.
+ 1. In the [**SAP for Me, Software Download Center**](https://me.sap.com/softwarecenter), on the **Download Software** page, select the **Support Packages & Patches** tab, expand **By Alphabetical Index (A-Z)**, and select **S** > **SAPCAR** > **Downloads** tab.
-1. Select **Create** to create your connection. If the parameters are correct, the connection is created. If there's a problem with the parameters, the connection creation dialog displays an error message.
+ 1. From the **Items Available to Download** list, select your operating system, and the **sapcar.exe** file for the **SAPCAR** utility.
> [!TIP]
- > To troubleshoot connection parameter issues, you can use an on-premises data gateway and the gateway's local logs.
-
-1. On the workflow designer toolbar, select **Save** to save your changes.
-
-## Send IDoc messages to SAP server
-
-Follow these examples to create a logic app workflow that sends an IDoc message to an SAP server and returns a response:
+ >
+ > If you're unfamiliar with the **SAPCAR** utility, review the following SAP blog post,
+ > [Easily extract SAR files](https://blogs.sap.com/2004/11/18/easily-extract-sar-files/).
-1. [Create a logic app workflow that is triggered by an HTTP request.](#add-http-request-trigger)
+   The following batch file is an improved version of the approach in that blog post, extracting an archive into a subdirectory that has the same name as the archive:
-1. [Create an action in your workflow to send a message to SAP.](#create-sap-action-to-send-message)
+ ```text
+ @echo off
+ cd %~dp1
+ mkdir %~n1
+ sapcar.exe -xvf %~nx1 -R %~n1
+ pause
+ ```
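+
+   For example, if you save this batch file as **ExtractSar.bat** (a hypothetical name) in the same folder as **sapcar.exe** and the downloaded archive, running `ExtractSar.bat CommonCryptoLib.sar` extracts the archive's contents into a subfolder named **CommonCryptoLib**.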
-1. [Create an HTTP response action in your workflow.](#create-http-response-action)
+1. Include all the extracted .dll and .exe files. The following list shows these files for the current SAP **CommonCryptoLib** package:
-1. [Create a remote function call (RFC) request-response pattern, if you're using an RFC to receive replies from SAP ABAP.](#create-rfc-request-response)
+ - **sapcrypto.dll**
+ - **sapgenpse.exe**
+ - **slcryptokernal.dll**
-1. [Test your logic app.](#test-your-logic-app-workflow)
+### [ISE](#tab/ise)
-<a name="add-http-request-trigger"></a>
+<a name="snc-prerequisites-ise"></a>
-### Add an HTTP request trigger
+The ISE-versioned SAP connector supports SNC X.509, not single sign-on (SSO) authentication. If you previously used the SAP connector without SNC, you can enable SNC for ISE-native SAP connections.
-To have your logic app workflow receive IDocs from SAP over XML HTTP, you can use the [Request trigger](../connectors/connectors-native-reqres.md).
+Before you redeploy an SAP connector to use SNC, or if you deployed the SAP connector without the SNC or SAPGENPSE libraries, you must delete all previously existing SAP connections and then the SAP connector. Multiple logic app workflows can use the same SAP connection. So, make sure that you delete any previously existing SAP connections from all logic app workflows in your ISE.
-> [!TIP]
-> To receive IDocs over Common Programming Interface Communication (CPIC) as plain XML or as a flat file, review the section, [Receive message from SAP](#receive-message-sap).
+After you delete the SAP connections, you must delete the SAP connector from your ISE. You can still keep the logic app workflows that use this connector. After you redeploy the connector, you can then authenticate the new connection in your workflows' SAP operations.
-This section continues with the Request trigger, so first, create a logic app workflow with an endpoint in Azure to send *HTTP POST* requests to your workflow. When your logic app workflow receives these HTTP requests, the trigger fires and runs the next step in your workflow.
+1. To delete existing SAP connections, follow either path:
-1. In the [Azure portal](https://portal.azure.com), create a blank logic app, which opens the workflow designer.
+ * In the [Azure portal](https://portal.azure.com), open each logic app resource and workflow to delete the SAP connections.
-1. In the search box, enter `http request` as your filter. From the **Triggers** list, select **When a HTTP request is received**.
+ 1. Open your logic app workflow in the designer.
- ![Screenshot that shows the workflow designer with a new Request trigger being added to the logic app workflow.](./media/logic-apps-using-sap-connector/add-http-trigger-logic-app.png)
+ 1. On your logic app menu, under **Development Tools**, select **API connections**.
-1. Save your logic app workflow, which generates an endpoint URL that can receive requests. On the designer toolbar, select **Save**. The endpoint URL now appears in your trigger.
+ 1. On the **API connections** page, select your SAP connection.
- ![Screenshot that shows the workflow designer with the Request trigger with generated POST URL being copied.](./media/logic-apps-using-sap-connector/generate-http-endpoint-url.png)
+ 1. On the connection's page menu, select **Delete**.
-### Create SAP action to send message
+ 1. Accept the confirmation prompt to delete the connection.
-Next, create an action to send your IDoc message to SAP when your [Request trigger](#add-http-request-trigger) fires. By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. Learn more about the [Safe Typing option](#safe-typing).
+ 1. Wait for the portal notification that the connection has been deleted.
-1. In the workflow designer, under the trigger, select **New step**.
+ * In the [Azure portal](https://portal.azure.com), open your ISE resource to delete the SAP connections.
- ![Screenshot that shows the workflow designer with the workflow being edited to add a new step.](./media/logic-apps-using-sap-connector/add-sap-action-logic-app.png)
+ 1. On your ISE menu, under **Settings**, select **API connections**.
-1. In the search box, enter `send message sap` as your filter. From the **Actions** list, select **Send message to SAP**.
+ 1. On the **API connections** page, select your SAP connection.
- ![Screenshot that shows the workflow designer with the selected "Send message to SAP" action.](./media/logic-apps-using-sap-connector/select-sap-send-action.png)
+ 1. On the connection's page menu, select **Delete**.
- Or, you can select the **Enterprise** tab, and select the SAP action.
+ 1. Accept the confirmation prompt to delete the connection.
- ![Screenshot that shows the workflow designer with the selected "Send message to SAP" action under Enterprise tab.](./media/logic-apps-using-sap-connector/select-sap-send-action-ent-tab.png)
+ 1. Wait for the portal notification that the connection has been deleted.
-1. If your connection already exists, continue to the next step. If you're prompted to create a new connection, provide the following information to connect to your on-premises SAP server.
+1. Delete the SAP connector from your ISE. You must delete all connections to this connector in all your logic app workflows before you can delete the connector. If you haven't already deleted all connections, review the previous steps.
- 1. Provide a name for the connection.
+ 1. In the [Azure portal](https://portal.azure.com), open your ISE resource.
- 1. If you're using the data gateway, follow these steps:
+ 1. On your ISE menu, under **Settings**, select **Managed connectors**.
- 1. In the **Data Gateway** section, under **Subscription**, first select the Azure subscription for the data gateway resource that you created in the Azure portal for your data gateway installation.
+ 1. On the **Managed connectors** page, select the checkbox for the SAP connector.
- 1. Under **Connection Gateway**, select your data gateway resource in Azure.
+ 1. On the toolbar, select **Delete**.
- 1. For the **Logon Type** property, follow the step based on whether the property is set to **Application Server** or **Group**.
+ 1. Accept the confirmation prompt to delete the connector.
- * For **Application Server**, these properties, which usually appear optional, are required:
+ 1. Wait for the portal notification that the connector has been deleted.
- ![Screenshot that shows how to create SAP Application server connection.](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
+1. Deploy or redeploy the SAP connector in your ISE.
- * For **Group**, these properties, which usually appear optional, are required:
+ 1. Prepare a new zip archive file to use in your SAP connector deployment. You must include the SNC library and the SAPGENPSE utility.
- ![Screenshot that shows how to create SAP Message server connection.](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
+ * You must use the 64-bit SNC library. There's no 32-bit support.
- In the SAP server, the Logon Group is maintained by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317).
+      * Your SNC library and dependencies must be compatible with your SAP environment. For how to check compatibility, review the [ISE prerequisites](#ise-prerequisites).
- By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. Learn more about the [Safe Typing option](#safe-typing).
+   1. Copy all SNC, SAPGENPSE, and NCo libraries to the root folder of your zip archive. Don't put these binaries in a subfolder. For one way to package the archive, see the sketch after these steps.
- 1. When you're finished, select **Create**.
+ 1. For your new zip archive, follow the deployment steps in [ISE prerequisites](#ise-prerequisites).
- Azure Logic Apps sets up and tests your connection to make sure that the connection works properly.
+1. For each workflow that uses the ISE-native SAP connector, [create a new SAP connection that enables SNC](#enable-secure-network-communications).
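+
+For the zip archive in the preceding deployment steps, the following sketch shows one way to package the binaries so that they all stay at the archive's root. The source folder is a placeholder for wherever you collected the SNC, SAPGENPSE, and NCo files:
+
+```powershell
+# Assumption: this folder contains only the SNC, SAPGENPSE, and NCo binaries,
+# with no subfolders. Selecting files only keeps the archive root flat.
+$files = Get-ChildItem -Path 'C:\Temp\sap-ise-package' -File
+Compress-Archive -Path $files.FullName -DestinationPath 'C:\Temp\sap-ise-connector.zip'
+```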
- > [!NOTE]
- > If you receive the following error, there is a problem with your installation of the SAP NCo client library:
- >
- > **Test connection failed. Error 'Failed to process request. Error details: 'could not load file or assembly 'sapnco, Version=3.0.0.42, Culture=neutral, PublicKeyToken 50436dca5c7f7d23' or one of its dependencies. The system cannot find the file specified.'.'**
- >
- > Make sure to [install the required version of the SAP NCo client library and meet all other prerequisites](#sap-client-library-prerequisites).
-
-1. Now find and select an action from your SAP server.
-
- 1. In the **SAP Action** box, select the folder icon. From the file list, find and select the SAP Message you want to use. To navigate the list, use the arrows.
-
- This example selects an IDoc with the **Orders** type.
-
- ![Screenshot that shows finding and selecting an IDoc action.](./media/logic-apps-using-sap-connector/SAP-app-server-find-action.png)
-
- If you can't find the action you want, you can manually enter a path, for example:
-
- ![Screenshot that shows manually providing a path to an IDoc action.](./media/logic-apps-using-sap-connector/SAP-app-server-manually-enter-action.png)
-
- > [!TIP]
- > Provide the value for **SAP Action** through the expression editor.
- > That way, you can use the same action for different message types.
-
- For more information about IDoc operations, review [Message schemas for IDoc operations](/biztalk/adapters-and-accelerators/adapter-sap/message-schemas-for-idoc-operations).
-
- 1. Click inside the **Input Message** box so that the dynamic content list appears. From that list, under **When a HTTP request is received**, select the **Body** field.
-
- This step includes the body content from your Request trigger and sends that output to your SAP server.
-
- ![Screenshot that shows selecting the "Body" property in the Request trigger.](./media/logic-apps-using-sap-connector/SAP-app-server-action-select-body.png)
-
- When you're finished, your SAP action looks like this example:
-
- ![Screenshot that shows completing the SAP action.](./media/logic-apps-using-sap-connector/SAP-app-server-complete-action.png)
-
-1. Save your logic app workflow. On the designer toolbar, select **Save**.
-
-### Send flat file IDocs
-
-You can use IDocs with a flat file schema if you wrap them in an XML envelope. To send a flat file IDoc, use the generic instructions to [create an SAP action to send your IDoc message](#create-sap-action-to-send-message) with the following changes.
-
-1. For the **Send message to SAP** action, use the SAP action URI `http://Microsoft.LobServices.Sap/2007/03/Idoc/SendIdoc`.
-
-1. Format your input message with an XML envelope.
-
-For example, review the following XML payload:
-
-```xml
-<SendIdoc xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/">
- <idocData>EDI_DC 3000000001017945375750 30INVOIC011BTSVLINV30KUABCABCFPPC LDCA X004010810 4 SAPMSX LSEDI ABCABCFPPC 000d3ae4-723e-1edb-9ca4-cc017365c9fd 20210217054521INVOICINVOIC01ZINVOIC2RE 20210217054520
-E2EDK010013000000001017945375000001E2EDK01001000000010 ABCABC1.00000 0060 INVO9988298128 298.000 298.000 LB Z4LR EN 0005065828 L
-E2EDKA1 3000000001017945375000002E2EDKA1 000000020 RS ABCABCFPPC 0005065828 ABCABCABC ABCABC Inc. Limited Risk Distributor ABCABC 1950 ABCABCABCA Blvd ABCABAABCAB L5N8L9 CA ABCABC E ON V-ABCABC LDCA
-E2EDKA1 3000000001017945375000003E2EDKA1 000000020 AG 0005065828 ABCABCFPPC ABCABC ABCABC ABCABC - FPP ONLY 88 ABCABC Crescent ABCABAABCAB L5R 4A2 CA ABCABC 111 111 1111 E ON ABCABCFPPC EN
-E2EDKA1 3000000001017945375000004E2EDKA1 000000020 RE 0005065828 ABCABCFPPC ABCABC ABCABC ABCABC - FPP ONLY 88 ABCABC Crescent ABCABAABCAB L5R 4A2 CA ABCABC 111 111 1111 E ON ABCABCFPPC EN
-E2EDKA1 3000000001017945375000005E2EDKA1 000000020 RG 0005065828 ABCABCFPPC ABCABC ABCABC ABCABC - FPP ONLY 88 ABCABC Crescent ABCABAABCAB L5R 4A2 CA ABCABC 111 111 1111 E ON ABCABCFPPC EN
-E2EDKA1 3000000001017945375000006E2EDKA1 000000020 WE 0005001847 41 ABCABC ABCABC INC (ABCABC) DC A. ABCABCAB 88 ABCABC CRESCENT ABCABAABCAB L5R 4A2 CA ABCABC 111-111-1111 E ON ABCABCFPPC EN
-E2EDKA1 3000000001017945375000007E2EDKA1 000000020 Z3 0005533050 ABCABCABC ABCABC Inc. ABCA Bank Swift Code -ABCABCABCAB Sort Code - 1950 ABCABCABCA Blvd. Acc No -1111111111 ABCABAABCAB L5N8L9 CA ABCABC E ON ABCABCFPPC EN
-E2EDKA1 3000000001017945375000008E2EDKA1 000000020 BK 1075 ABCABCABC ABCABC Inc 1950 ABCABCABCA Blvd ABCABAABCAB ON L5N 8L9 CA ABCABC (111) 111-1111 (111) 111-1111 ON
-E2EDKA1 3000000001017945375000009E2EDKA1 000000020 CR 1075 CONTACT ABCABCABC 1950 ABCABCABCA Blvd ABCABAABCAB ON L5N 8L9 CA ABCABC (111) 111-1111 (111) 111-1111 ON
-E2EDK02 3000000001017945375000010E2EDK02 000000020 0099988298128 20210217
-E2EDK02 3000000001017945375000011E2EDK02 000000020 00140-N6260-S 20210205
-E2EDK02 3000000001017945375000012E2EDK02 000000020 0026336270425 20210217
-E2EDK02 3000000001017945375000013E2EDK02 000000020 0128026580537 20210224
-E2EDK02 3000000001017945375000014E2EDK02 000000020 01740-N6260-S
-E2EDK02 3000000001017945375000015E2EDK02 000000020 900IAC
-E2EDK02 3000000001017945375000016E2EDK02 000000020 901ZSH
-E2EDK02 3000000001017945375000017E2EDK02 000000020 9078026580537 20210217
-E2EDK03 3000000001017945375000018E2EDK03 000000020 02620210217
-E2EDK03 3000000001017945375000019E2EDK03 000000020 00120210224
-E2EDK03 3000000001017945375000020E2EDK03 000000020 02220210205
-E2EDK03 3000000001017945375000021E2EDK03 000000020 01220210217
-E2EDK03 3000000001017945375000022E2EDK03 000000020 01120210217
-E2EDK03 3000000001017945375000023E2EDK03 000000020 02420210217
-E2EDK03 3000000001017945375000024E2EDK03 000000020 02820210418
-E2EDK03 3000000001017945375000025E2EDK03 000000020 04820210217
-E2EDK17 3000000001017945375000026E2EDK17 000000020 001DDPDelivered Duty Paid
-E2EDK17 3000000001017945375000027E2EDK17 000000020 002DDPdestination
-E2EDK18 3000000001017945375000028E2EDK18 000000020 00160 0 Up to 04/18/2021 without deduction
-E2EDK28 3000000001017945375000029E2EDK28 000000020 CA BOFACATT Bank of ABCABAB ABCABC ABCABAB 50127217 ABCABCABC ABCABC Inc.
-E2EDK28 3000000001017945375000030E2EDK28 000000020 CA 026000082 ABCAbank ABCABC ABCABAB 201456700OLD ABCABCABC ABCABC Inc.
-E2EDK28 3000000001017945375000031E2EDK28 000000020 GB ABCAGB2L ABCAbank N.A ABCABA E14, 5LB GB63ABCA18500803115593 ABCABCABC ABCABC Inc. GB63ABCA18500803115593
-E2EDK28 3000000001017945375000032E2EDK28 000000020 CA 020012328 ABCABANK ABCABC ABCABAB ON M5J 2M3 2014567007 ABCABCABC ABCABC Inc.
-E2EDK28 3000000001017945375000033E2EDK28 000000020 CA 03722010 ABCABABC ABCABABC Bank of Commerce ABCABAABCAB 64-04812 ABCABCABC ABCABC Inc.
-E2EDK28 3000000001017945375000034E2EDK28 000000020 IE IHCC In-House Cash Center IHCC1075 ABCABCABC ABCABC Inc.
-E2EDK28 3000000001017945375000035E2EDK28 000000020 CA 000300002 ABCAB Bank of ABCABC ABCABAB 0021520584OLD ABCABCABC ABCABC Inc.
-E2EDK28 3000000001017945375000036E2EDK28 000000020 US USCC US Cash Center (IHC) city USCC1075 ABCABCABC ABCABC Inc.
-E2EDK29 3000000001017945375000037E2EDK29 000000020 0064848944US A CAD CA ABCABC CA United States US CA A Air Air
-E2EDKT1 3000000001017945375000038E2EDKT1 000000020 ZJ32E EN
-E2EDKT2 3000000001017945375000039E2EDKT2 000038030 GST/HST877845941RT0001 *
-E2EDKT2 3000000001017945375000040E2EDKT2 000038030 QST1021036966TQ0001 *
-E2EDKT1 3000000001017945375000041E2EDKT1 000000020 Z4VL
-E2EDKT2 3000000001017945375000042E2EDKT2 000041030 0.000 *
-E2EDKT1 3000000001017945375000043E2EDKT1 000000020 Z4VH
-E2EDKT2 3000000001017945375000044E2EDKT2 000043030 *
-E2EDK14 3000000001017945375000045E2EDK14 000000020 008LDCA
-E2EDK14 3000000001017945375000046E2EDK14 000000020 00710
-E2EDK14 3000000001017945375000047E2EDK14 000000020 00610
-E2EDK14 3000000001017945375000048E2EDK14 000000020 015Z4F2
-E2EDK14 3000000001017945375000049E2EDK14 000000020 0031075
-E2EDK14 3000000001017945375000050E2EDK14 000000020 021M
-E2EDK14 3000000001017945375000051E2EDK14 000000020 0161075
-E2EDK14 3000000001017945375000052E2EDK14 000000020 962M
-E2EDP010013000000001017945375000053E2EDP01001000000020 000011 2980.000 EA 298.000 LB MOUSE 298.000 Z4TN 4260
-E2EDP02 3000000001017945375000054E2EDP02 000053030 00140-N6260-S 00000120210205 DFUE
-E2EDP02 3000000001017945375000055E2EDP02 000053030 0026336270425 00001120210217
-E2EDP02 3000000001017945375000056E2EDP02 000053030 0168026580537 00001020210224
-E2EDP02 3000000001017945375000057E2EDP02 000053030 9100000 00000120210205 DFUE
-E2EDP02 3000000001017945375000058E2EDP02 000053030 911A 00000120210205 DFUE
-E2EDP02 3000000001017945375000059E2EDP02 000053030 912PP 00000120210205 DFUE
-E2EDP02 3000000001017945375000060E2EDP02 000053030 91300 00000120210205 DFUE
-E2EDP02 3000000001017945375000061E2EDP02 000053030 914CONTACT ABCABCABC 00000120210205 DFUE
-E2EDP02 3000000001017945375000062E2EDP02 000053030 963 00000120210205 DFUE
-E2EDP02 3000000001017945375000063E2EDP02 000053030 965 00000120210205 DFUE
-E2EDP02 3000000001017945375000064E2EDP02 000053030 9666336270425 00000120210205 DFUE
-E2EDP02 3000000001017945375000065E2EDP02 000053030 9078026580537 00001020210205 DFUE
-E2EDP03 3000000001017945375000066E2EDP03 000053030 02920210217
-E2EDP03 3000000001017945375000067E2EDP03 000053030 00120210224
-E2EDP03 3000000001017945375000068E2EDP03 000053030 01120210217
-E2EDP03 3000000001017945375000069E2EDP03 000053030 02520210217
-E2EDP03 3000000001017945375000070E2EDP03 000053030 02720210217
-E2EDP03 3000000001017945375000071E2EDP03 000053030 02320210217
-E2EDP03 3000000001017945375000072E2EDP03 000053030 02220210205
-E2EDP19 3000000001017945375000073E2EDP19 000053030 001418VVZ
-E2EDP19 3000000001017945375000074E2EDP19 000053030 002RJR-00001 AB ABCABCABC Mouse FORBUS BLUETOOTH
-E2EDP19 3000000001017945375000075E2EDP19 000053030 0078471609000
-E2EDP19 3000000001017945375000076E2EDP19 000053030 003889842532685
-E2EDP19 3000000001017945375000077E2EDP19 000053030 011CN
-E2EDP26 3000000001017945375000078E2EDP26 000053030 00459064.20
-E2EDP26 3000000001017945375000079E2EDP26 000053030 00352269.20
-E2EDP26 3000000001017945375000080E2EDP26 000053030 01052269.20
-E2EDP26 3000000001017945375000081E2EDP26 000053030 01152269.20
-E2EDP26 3000000001017945375000082E2EDP26 000053030 0126795.00
-E2EDP26 3000000001017945375000083E2EDP26 000053030 01552269.20
-E2EDP26 3000000001017945375000084E2EDP26 000053030 00117.54
-E2EDP26 3000000001017945375000085E2EDP26 000053030 00252269.20
-E2EDP26 3000000001017945375000086E2EDP26 000053030 940 2980.000
-E2EDP26 3000000001017945375000087E2EDP26 000053030 939 2980.000
-E2EDP05 3000000001017945375000088E2EDP05 000053030 + Z400MS List Price 52269.20 17.54 1 EA CAD 2980
-E2EDP05 3000000001017945375000089E2EDP05 000053030 + XR1 Tax Jur Code Level 6795.00 13.000 52269.20
-E2EDP05 3000000001017945375000090E2EDP05 000053030 + Tax Subtotal1 6795.00 2.28 1 EA CAD 2980
-E2EDP05 3000000001017945375000091E2EDP05 000053030 + Taxable Amount + TaxSubtotal1 59064.20 19.82 1 EA CAD 2980
-E2EDP04 3000000001017945375000092E2EDP04 000053030 CX 13.000 6795.00 7000000000
-E2EDP04 3000000001017945375000093E2EDP04 000053030 CX 0 0 7001500000
-E2EDP04 3000000001017945375000094E2EDP04 000053030 CX 0 0 7001505690
-E2EDP28 3000000001017945375000095E2EDP28 000053030 00648489440000108471609000 CN CN ABCAB ZZ 298.000 298.000 LB US 400 United Stat KY
-E2EDPT1 3000000001017945375000096E2EDPT1 000053030 0001E EN
-E2EDPT2 3000000001017945375000097E2EDPT2 000096040 AB ABCABCABC Mouse forBus Bluetooth EN/XC/XD/XX Hdwr Black For Bsnss *
-E2EDS01 3000000001017945375000098E2EDS01 000000020 0011
-E2EDS01 3000000001017945375000099E2EDS01 000000020 01259064.20 CAD
-E2EDS01 3000000001017945375000100E2EDS01 000000020 0056795.00 CAD
-E2EDS01 3000000001017945375000101E2EDS01 000000020 01159064.20 CAD
-E2EDS01 3000000001017945375000102E2EDS01 000000020 01052269.20 CAD
-E2EDS01 3000000001017945375000103E2EDS01 000000020 94200000 CAD
-E2EDS01 3000000001017945375000104E2EDS01 000000020 9440.00 CAD
-E2EDS01 3000000001017945375000105E2EDS01 000000020 9450.00 CAD
-E2EDS01 3000000001017945375000106E2EDS01 000000020 94659064.20 CAD
-E2EDS01 3000000001017945375000107E2EDS01 000000020 94752269.20 CAD
-E2EDS01 3000000001017945375000108E2EDS01 000000020 EXT
-Z2XSK010003000000001017945375000109Z2XSK01000000108030 Z400 52269.20
-Z2XSK010003000000001017945375000110Z2XSK01000000108030 XR1 13.000 6795.00 CX
-</idocData>
-</SendIdoc>
-```
+#### Certificate rotation
-### Create HTTP response action
+1. For all connections in your ISE that use SAP X.509, update the base64-encoded binary PSE.
-Now add a response action to your logic app's workflow and include the output from the SAP action. That way, your logic app workflow returns the results from your SAP server to the original requestor.
+1. In your copy of the PSE, import the new certificates.
-1. In the workflow designer, under the SAP action, select **New step**.
+1. Encode the PSE file as a base64-encoded binary.
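+
+   For example, the following minimal PowerShell sketch encodes a PSE file, assuming a hypothetical file path and using the same approach as the conversion script later in this article:
+
+   ```powershell
+   # Windows PowerShell syntax; in PowerShell 7 and later, use -AsByteStream instead of -Encoding Byte.
+   $pseBytes = Get-Content -Path 'C:\Temp\SECUDIR\request.pse' -Encoding Byte
+   [convert]::ToBase64String($pseBytes) | Set-Content -Path 'C:\Temp\pse-base64.txt'
+   ```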
-1. In the search box, enter `response` as your filter. From the **Actions** list, select **Response**.
+1. Edit your SAP connection information, and save the new PSE file there.
-1. Click inside the **Body** box so that the dynamic content list appears. From that list, under **Send message to SAP**, select the **Body** field.
+ The connector detects the PSE change and updates its own copy during the next connection request.
- ![Screenshot that shows finishing the SAP action.](./media/logic-apps-using-sap-connector/select-sap-body-for-response-action.png)
+
-1. Save your logic app workflow.
+### Azure Logic Apps environment prerequisites
-### Create RFC request-response
+### [Consumption](#tab/consumption)
-If you need to receive replies from SAP ABAP through a remote function call (RFC) to Azure Logic Apps, you must implement a request and response pattern. To receive IDocs in your logic app workflow, make the workflow's first action a [Response action](../connectors/connectors-native-reqres.md#add-a-response-action) with a status code of `200 OK` and no content. This recommended step immediately completes the SAP Logical Unit of Work (LUW) asynchronous transfer over tRFC, which makes the SAP CPIC conversation available again. You can then add further actions in your logic app workflow to process the received IDoc without blocking further transfers.
+<a name="multi-tenant-prerequisites"></a>
-> [!NOTE]
-> The SAP trigger receives IDocs over tRFC, which doesn't have a response parameter by design.
+For a Consumption workflow in multi-tenant Azure Logic Apps, the SAP managed connector integrates with SAP systems through an [on-premises data gateway](logic-apps-gateway-connection.md). For example, in scenarios where your workflow sends a message to the SAP system, the data gateway acts as an RFC client and forwards the requests received from your workflow to SAP. Likewise, in scenarios where your workflow receives a message from SAP, the data gateway acts as an RFC server that receives requests from SAP and forwards them to your workflow.
-To implement a request and response pattern, you must first discover the RFC schema using the [`generate schema` command](#generate-schemas-for-artifacts-in-sap). The generated schema has two possible root nodes:
+1. On a host computer or virtual machine that exists in the same virtual network as the SAP system to which you're connecting, [download and install the on-premises data gateway](logic-apps-gateway-install.md).
-* The request node, which is the call that you receive from SAP.
-* The response node, which is your reply back to SAP.
+ The data gateway helps you securely access on-premises data and resources. Make sure to use a supported version of the gateway. If you experience an issue with your gateway, try [upgrading to the latest version](https://aka.ms/on-premises-data-gateway-installer), which might include updates to resolve your problem.
-In the following example, a request and response pattern is generated from the `STFC_CONNECTION` RFC module. The request XML is parsed to extract the `<ECHOTEXT>` node value that SAP sends. The response inserts the current timestamp as a dynamic value. You receive a similar response when you send a `STFC_CONNECTION` RFC from a logic app workflow to SAP.
+1. In the Azure portal, [create an Azure gateway resource](logic-apps-gateway-connection.md#create-azure-gateway-resource) for your on-premises data gateway installation.
-```xml
-<STFC_CONNECTIONResponse xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
- <ECHOTEXT>@{first(xpath(xml(triggerBody()?['Content']), '/*[local-name()="STFC_CONNECTION"]/*[local-name()="REQUTEXT"]/text()'))}</ECHOTEXT>
- <RESPTEXT>Azure Logic Apps @{utcNow()}</RESPTEXT>
-</STFC_CONNECTIONResponse>
-```
+1. On the same local computer as your on-premises data gateway installation, [download and install the latest SAP NCo client library](#sap-client-library-prerequisites).
-### Test your logic app workflow
+1. For the host computer with your on-premises data gateway installation, configure the network host names and service names resolution.
-1. If your logic app isn't already enabled, on your logic app menu, select **Overview**. On the toolbar, select **Enable**.
+   * To use the host names or service names for connections from Azure Logic Apps, you have to set up name resolution for each SAP Application, Message, and Gateway server along with their services:
-1. On the designer toolbar, select **Run**. This step manually starts your logic app workflow.
+ * In the **%windir%\System32\drivers\etc\hosts** file or in the DNS server that's available to the host computer for your on-premises data gateway installation, set up the network host name resolution.
-1. Trigger your logic app workflow by sending an HTTP POST request to the URL in your Request trigger. Include your message content with your request. To send the request, you can use a tool such as [Postman](https://www.getpostman.com/apps).
-
- For this article, the request sends an IDoc file, which must be in XML format and include the namespace for the SAP action you're using, for example:
-
- ```xml
- <?xml version="1.0" encoding="UTF-8" ?>
- <Send xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/2/ORDERS05//720/Send">
- <idocData>
- <...>
- </idocData>
- </Send>
- ```
+ * In the **%windir%\System32\drivers\etc\services** file, set up the service name resolution.
-1. After you send your HTTP request, wait for the response from your logic app workflow.
+ * If you don't intend to use network host names or service names for the connection, you can use host IP addresses and service port numbers instead.
- > [!NOTE]
- > Your logic app workflow might time out if all the steps required for the response don't finish within the [request timeout limit](logic-apps-limits-and-config.md).
- > If this condition happens, requests might get blocked. To help you diagnose problems, learn how you can [check and monitor your logic apps](monitor-logic-apps.md).
+ * If you don't have a DNS entry for your SAP system, the following example shows a sample entry for the hosts file:
-You've now created a logic app workflow that can communicate with your SAP server. With the SAP connection set up for your workflow, you can explore other available SAP actions, such as BAPI and RFC.
+ ```text
+ 10.0.1.9 sapserver # SAP single-instance system host IP by simple computer name
+ 10.0.1.9 sapserver.contoso.com # SAP single-instance system host IP by fully qualified DNS name
+ ```
-<a name="receive-message-sap"></a>
+ The following list shows a sample set of entries for the services files:
-## Receive message from SAP
+ ```text
+ sapdp00 3200/tcp # SAP system instance 00 dialog (application) service port
+ sapgw00 3300/tcp # SAP system instance 00 gateway service port
+ sapmsDV6 3601/tcp # SAP system ID DV6 message service port
+ ```
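+
+   To confirm that the entries resolve correctly from the gateway host, you can optionally run a quick PowerShell check. This sketch assumes the sample host name and gateway port shown earlier:
+
+   ```powershell
+   # Checks that 'sapserver' resolves and that the SAP gateway service port accepts TCP connections.
+   Test-NetConnection -ComputerName 'sapserver' -Port 3300
+   ```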
-This example uses a logic app workflow that triggers when the app receives a message from an SAP system.
+### [Standard](#tab/standard)
-### Add an SAP trigger
+<a name="single-tenant-prerequisites"></a>
-1. In the Azure portal, create a blank logic app, which opens the workflow designer.
+For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP *built-in* connector to directly access resources that are protected by an Azure virtual network. You can also use other built-in connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway.
-1. In the search box, enter `when message received sap` as your filter. From the **Triggers** list, select **When a message is received from SAP**.
+1. To use the SAP connector, you need to download the following files and have them ready to upload to your Standard logic app resource. For more information, see [SAP NCo client library prerequisites](#sap-client-library-prerequisites):
- ![Screenshot that shows adding an SAP trigger.](./media/logic-apps-using-sap-connector/add-sap-trigger-logic-app.png)
+ - **libicudecnumber.dll**
+ - **rscp4n.dll**
+ - **sapnco.dll**
+ - **sapnco_utils.dll**
- Or, you can select the **Enterprise** tab, and then select the trigger:
+1. To use SNC with SAP, you need to download the following files and have them ready to upload to your logic app resource. For more information, see [SNC prerequisites](#snc-prerequisites-standard):
- ![Screenshot that shows adding an SAP trigger from the Enterprise tab.](./media/logic-apps-using-sap-connector/add-sap-trigger-ent-tab.png)
+ - **sapcrypto.dll**
+ - **sapgenpse.exe**
+ - **slcryptokernal.dll**
-1. If your connection already exists, continue with the next step so you can set up your SAP trigger. However, if you're prompted for connection details, provide the information so that you can create a connection to your on-premises SAP server now.
+#### Upload assemblies to Azure portal
- 1. Provide a name for the connection.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
- 1. If you're using the data gateway, follow these steps:
+1. On the logic app menu, under **Workflows**, select **Assemblies**.
- 1. In the **Data Gateway** section, under **Subscription**, first select the Azure subscription for the data gateway resource that you created in the Azure portal for your data gateway installation.
+1. On the **Assemblies** page toolbar, select **Add**.
- 1. Under **Connection Gateway**, select your data gateway resource in Azure.
+1. After the **Add Assembly** pane opens, for **Assembly Type**, select **Client/SDK Assembly (.NET Framework)**.
- 1. Continue providing information about the connection. For the **Logon Type** property, follow the step based on whether the property is set to **Application Server** or **Group**:
+1. Under **Upload Files**, add the previously described required files that you downloaded:
- * For **Application Server**, these properties, which usually appear optional, are required:
+ **SAP NCo**
+ - **libicudecnumber.dll**
+ - **rscp4n.dll**
+ - **sapnco.dll**
+ - **sapnco_utils.dll**
- ![Screenshot that shows creating a connection to SAP Application server.](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
+ **CommonCryptoLib**
+ - **sapcrypto.dll**
+ - **sapgenpse.exe**
+ - **slcryptokernal.dll**
- * For **Group**, these properties, which usually appear optional, are required:
+1. When you're ready, select **Upload Files**.
- ![Screenshot that shows creating a connection to SAP Message server](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
+ If the assembly file is 4 MB or smaller, you can either browse and select or drag and drop the file. For files larger than 4 MB, follow these steps instead:
- By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. Learn more about the [Safe Typing option](#safe-typing).
+ 1. On the logic app menu, under **Development Tools**, select **Advanced Tools**.
- 1. When you're finished, select **Create**.
+ 1. On the **Advanced Tools** page, select **Go**.
- Azure Logic Apps sets up and tests your connection to make sure that the connection works properly.
+ 1. On the **Kudu** toolbar, from the **Debug console** menu, select **CMD**.
-1. Based on your SAP system configuration, provide the [required parameters](#parameters), and add any other parameters available for this trigger, for example:
+ 1. Open the following folders: **site** > **wwwroot**
- * To receive IDocs as plain XML, in the **When a message is received from SAP** trigger, which supports the SAP plain XML format, add and set the **IDOC Format** parameter to **SapPlainXml**.
+ 1. On the folder structure toolbar, select the plus (**+**) sign, and then select **New folder**.
- * To receive IDocs as a flat file using the same SAP trigger, add and set the **IDOC Format** parameter to **FlatFile**. When you also use the [Flat File Decode action](logic-apps-enterprise-integration-flatfile.md), in your flat file schema, you have to use the `early_terminate_optional_fields` property and set the value to `true`.
+ 1. Create the following folder and subfolders: **lib** > **builtinOperationSdks** > **net472**
- This requirement is necessary because the flat file IDoc data record that's sent by SAP on the tRFC call `IDOC_INBOUND_ASYNCHRONOUS` isn't padded to the full SDATA field length. Azure Logic Apps provides the flat file IDoc original data without padding as received from SAP. Also, when you combine this SAP trigger with the Flat File Decode action, the schema that's provided to the action must match.
+ 1. In the **net472** folder, upload the assembly files larger than 4 MB.
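+
+   The Kudu debug console also offers a PowerShell flavor. As a minimal sketch, assuming the console starts in the site root, the following command creates the same folder path:
+
+   ```powershell
+   # -Force creates any missing parent folders along the path.
+   New-Item -ItemType Directory -Path 'site\wwwroot\lib\builtinOperationSdks\net472' -Force
+   ```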
- * To [filter the messages that you receive from your SAP server, specify a list of SAP actions](#filter-with-sap-actions).
+#### SAP trigger requirements
- For example, select an SAP action from the file picker:
+The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
- ![Screenshot that shows adding an SAP action to logic app workflow.](./media/logic-apps-using-sap-connector/select-SAP-action-trigger.png)
+### [ISE](#tab/ise)
- Or, you can manually specify an action:
+<a name="ise-prerequisites"></a>
- ![Screenshot that shows manually entering the SAP action that you want to use.](./media/logic-apps-using-sap-connector/manual-enter-SAP-action-trigger.png)
+For a Consumption workflow in an ISE, the ISE provides access to resources that are protected by an Azure virtual network and offers other ISE-native connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway.
- Here's an example that shows how the action appears when you set up the trigger to receive more than one message.
+> [!IMPORTANT]
+>
+> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic),
+> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard
+> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure
+> Logic Apps and provide the same capabilities plus more.
+>
+> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing
+> before this date are supported through August 31, 2024. For more information, see the following resources:
+>
+> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)
+> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
+> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
+> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md)
+> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)
+> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
- ![Screenshot that shows a trigger example that receives multiple messages.](./media/logic-apps-using-sap-connector/example-trigger.png)
+1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md).
- For more information about the SAP action, review [Message schemas for IDoc operations](/biztalk/adapters-and-accelerators/adapter-sap/message-schemas-for-idoc-operations)
+1. On your local computer, [download and install the latest SAP NCo client library](#sap-client-library-prerequisites). You should have the following assembly (.dll) files:
-1. Now save your logic app workflow so you can start receiving messages from your SAP system. On the designer toolbar, select **Save**.
+ * **libicudecnumber.dll**
+ * **rscp4n.dll**
+ * **sapnco.dll**
+ * **sapnco_utils.dll**
- Your logic app workflow is now ready to receive messages from your SAP system.
+1. Create a .zip file that includes these assembly files in its root folder, and upload the package to your blob container in Azure Storage. For a sample compression command, see the sketch after the following note.
> [!NOTE]
- > The SAP trigger in these steps is a webhook-based trigger, not a polling trigger. If you're using the data gateway,
- > the trigger is called from the data gateway only when a message exists, so no polling is necessary.
-
-1. In your logic app workflow's trigger history, check that the trigger registration succeeds.
-
-If you receive a **500 Bad Gateway** or **400 Bad Request** error with a message similar to **service 'sapgw00' unknown**, the network service name resolution to port number is failing, for example:
-
-```json
-{
- "body": {
- "error": {
- "code": 500,
- "source": "EXAMPLE-FLOW-NAME.eastus.environments.microsoftazurelogicapps.net",
- "clientRequestId": "00000000-0000-0000-0000-000000000000",
- "message": "BadGateway",
- "innerError": {
- "error": {
- "code": "UnhandledException",
- "message": "\nERROR service 'sapgw00' unknown\nTIME Wed Nov 11 19:37:50 2020\nRELEASE 721\nCOMPONENT NI (network interface)\nVERSION 40\nRC -3\nMODULE ninti.c\nLINE 933\nDETAIL NiPGetServByName: 'sapgw00' not found\nSYSTEM CALL getaddrinfo\nCOUNTER 1\n\nRETURN CODE: 20"
- }
- }
- }
- }
-}
-```
-
-* **Option 1:** In your API connection and trigger configuration, replace your gateway service name with its port number. In the example error, `sapgw00` needs to be replaced with a real port number, for example, `3300`. This is the only available option for ISE.
-
-* **Option 2:** If you're using the on-premises data gateway, you can add the gateway service name to the port mapping in `%windir%\System32\drivers\etc\services` and then restart the on-premises data gateway service, for example:
-
- ```text
- sapgw00 3300/tcp
- ```
-
-You might get a similar error when SAP Application server or Message server name resolves to the IP address. For ISE, you must specify the IP address for your SAP Application server or Message server. For the on-premises data gateway, you can instead add the name to the IP address mapping in `%windir%\System32\drivers\etc\hosts`, for example:
-
-```text
-10.0.1.9 SAPDBSERVER01 # SAP System Server VPN IP by computer name
-10.0.1.9 SAPDBSERVER01.someguid.xx.xxxxxxx.cloudapp.net # SAP System Server VPN IP by fully qualified computer name
-```
-
-### Parameters
-
-Along with simple string and number inputs, the SAP connector accepts the following table parameters (`Type=ITAB` inputs):
-
-* Table direction parameters, both input and output, for older SAP releases.
-
-* Changing parameters, which replace the table direction parameters for newer SAP releases.
-
-* Hierarchical table parameters.
-
-### Filter with SAP actions
-
-You can optionally filter the messages that your logic app workflow receives from your SAP server by providing a list, or array, with a single or multiple SAP actions. By default, this array is empty, which means that your logic app receives all the messages from your SAP server without filtering.
+ >
+ > Don't use a subfolder inside the .zip file. Only assemblies in the archive's root folder
+ > are deployed with the SAP connector in your ISE.
+ >
+ > If you use SNC, also include the SNC assemblies and binaries in the same .zip file at the root.
+ > For more information, review the [SNC prerequisites for ISE](#snc-prerequisites-ise).
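+
+   For example, the following minimal sketch compresses the assemblies so that they sit at the archive's root. The folder and file names are placeholders:
+
+   ```powershell
+   # Zipping the folder's contents, not the folder itself, keeps the assemblies at the archive root.
+   Compress-Archive -Path 'C:\Temp\sap-nco\*' -DestinationPath 'C:\Temp\sap-nco.zip'
+   ```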
-When you set up the array filter, the trigger only receives messages from the specified SAP action types and rejects all other messages from your SAP server. However, this filter doesn't affect whether the typing of the received payload is weak or strong.
+1. In either the Azure portal or Azure Storage Explorer, browse to the container location where you uploaded the .zip file.
-Any SAP action filtering happens at the level of the SAP Adapter for your on-premises data gateway. For more information, review [how to send test IDocs to Azure Logic Apps from SAP](#test-sending-idocs-from-sap).
+1. Copy the URL for the container location. Make sure that you include the Shared Access Signature (SAS) token so that access is authorized. Otherwise, deployment for the SAP ISE connector fails.
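+
+   If you need to generate a SAS URL, one option is the Az.Storage PowerShell module. The following sketch is only an example; the account name, key, container, and blob names are placeholders:
+
+   ```powershell
+   # Builds a read-only SAS URL for the uploaded .zip file that expires in two hours.
+   $context = New-AzStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey '<account-key>'
+   New-AzStorageBlobSASToken -Container 'sap-assemblies' -Blob 'sap-nco.zip' `
+       -Permission r -ExpiryTime (Get-Date).AddHours(2) -Context $context -FullUri
+   ```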
-If you can't send IDoc packets from SAP to your logic app workflow's trigger, review the Transactional RFC (tRFC) call rejection message in the SAP tRFC (T-Code SM58) dialog box. In the SAP interface, you might get the following error messages, which are clipped due to the substring limits on the **Status Text** field.
+1. In your ISE, install and deploy the SAP connector. For more information, review [Add ISE connectors](add-artifacts-integration-service-environment-ise.md#add-ise-connectors-environment).
-#### The RequestContext on the IReplyChannel was closed without a reply being sent
+ 1. In the [Azure portal](https://portal.azure.com), find and open your ISE.
-This error message means that an unexpected failure happened: the catch-all handler for the channel terminated the channel due to an error and rebuilt the channel so that it can process other messages.
+ 1. On the ISE menu, select **Managed connectors** &gt; **Add**. From the connectors list, find and select **SAP**.
-To acknowledge that your logic app workflow received the IDoc, [add a Response action](../connectors/connectors-native-reqres.md#add-a-response-action) that returns a `200 OK` status code. Leave the body empty and don't change or add to the headers. The IDoc is transported through tRFC, which doesn't allow for a response payload.
+ 1. On the **Add a new managed connector** pane, in the **SAP package** box, paste the URL for the .zip file that has the SAP assemblies. Again, make sure to include the SAS token.
-To reject the IDoc instead, respond with any HTTP status code other than `200 OK`. The SAP Adapter then returns an exception back to SAP on your behalf. You should only reject the IDoc to signal transport errors back to SAP, such as a misrouted IDoc that your application can't process. You shouldn't reject an IDoc for application-level errors, such as issues with the data contained in the IDoc. If you delay transport acceptance for application-level validation, you might experience negative performance due to blocking your connection from transporting other IDocs.
+ 1. Select **Create** to finish creating your ISE connector.
-If you're receiving this error message and experience systemic failures calling Azure Logic Apps, check that you've configured the network settings for your on-premises data gateway service for your specific environment. For example, if your network environment requires the use of a proxy to call Azure endpoints, you need to configure your on-premises data gateway service to use your proxy. For more information, review [Proxy Configuration](/dotnet/framework/network-programming/proxy-configuration).
+1. If your SAP instance and ISE are in different virtual networks, you also need to [peer those networks](../virtual-network/tutorial-connect-virtual-networks-portal.md) so they're connected. Review the [SNC prerequisites for ISE](#snc-prerequisites-ise).
-If you're receiving this error message and experience intermittent failures calling Azure Logic Apps, you might need to increase the retry count, the retry interval, or both.
+1. Get the IP addresses for the SAP Application, Message, and Gateway servers that you plan to use for connecting from your workflow. Network name resolution isn't available for SAP connections in an ISE.
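+
+   If you only know a server's host name, you can look up the IP address with a short PowerShell call, for example, using the sample host name from earlier:
+
+   ```powershell
+   # Returns the IP addresses that the host name resolves to.
+   [System.Net.Dns]::GetHostAddresses('sapserver') | ForEach-Object { $_.ToString() }
+   ```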
-1. Check SAP settings in your on-premises data gateway service configuration file, `Microsoft.PowerBI.EnterpriseGateway.exe.config`.
+1. Get the port numbers for the SAP Application, Message, and Gateway services that you plan to use for connecting from your workflow. Service name resolution isn't available for SAP connections in an ISE.
- 1. Under the `configuration` root node, add a `configSections` element, if none exists.
- 1. Under the `configSections` node, add a `section` element with the following attributes, if none exist: `name="SapAdapterSection" type="Microsoft.Adapters.SAP.Common.SapAdapterSection, Microsoft.Adapters.SAP.Common"`
+
- > [!IMPORTANT]
- > Don't change the attributes in existing `section` elements, if such elements already exist.
+<a name="enable-secure-network-communications"></a>
- Your `configSections` element looks like the following version, if no other section or section group is declared in the gateway service configuration:
+### Enable Secure Network Communications (SNC)
- ```xml
- <configSections>
- <section name="SapAdapterSection" type="Microsoft.Adapters.SAP.Common.SapAdapterSection, Microsoft.Adapters.SAP.Common"/>
- </configSections>
- ```
+### [Consumption](#tab/consumption)
- 1. Under the `configuration` root node, add an `SapAdapterSection` element, if none exists.
- 1. Under the `SapAdapterSection` node, add a `Broker` element with the following attributes, if none exist: `WebhookRetryDefaultDelay="00:00:00.10" WebhookRetryMaximumCount="2"`
+For a Consumption workflow that runs in multi-tenant Azure Logic Apps, you can enable SNC for authentication, which applies only when you use the data gateway. Before you start, make sure that you met all the necessary [prerequisites](logic-apps-using-sap-connector.md?tabs=multi-tenant#prerequisites) and [SNC prerequisites](logic-apps-using-sap-connector.md?tabs=multi-tenant#snc-prerequisites).
- > [!IMPORTANT]
- > Change the attributes for the `Broker` element, even if the element already exists.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and workflow in the designer.
- The `SapAdapterSection` element looks like the following version, if no other element or attribute is declared in the SAP adapter configuration:
+1. Add or edit an SAP managed connector operation.
- ```xml
- <SapAdapterSection>
- <Broker WebhookRetryDefaultDelay="00:00:00.10" WebhookRetryMaximumCount="2" />
- </SapAdapterSection>
- ```
+1. In the SAP connection information box, provide the following [required information](/connectors/sap/#default-connection). The **Authentication Type** that you select changes the available options.
- The retry count setting looks like `WebhookRetryMaximumCount="2"`. The retry interval setting looks like `WebhookRetryDefaultDelay="00:00:00.10"` where the timespan format is `HH:mm:ss.ff`.
+ ![Screenshot showing SAP connection settings for Consumption.](media\logic-apps-using-sap-connector\sap-connection-consumption.png)
> [!NOTE]
- > For more information about the configuration file,
- > review [Configuration file schema for .NET Framework](/dotnet/framework/configure-apps/file-schema/).
+ >
+ > The **SAP Username** and **SAP Password** fields are optional. If you don't provide a username
+ > and password, the connector uses the client certificate provided in a later step for authentication.
-1. Save your changes. Restart your on-premises data gateway.
+1. To enable SNC, in the SAP connection information box, provide the following required information instead:
-#### The segment or group definition E2EDK36001 was not found in the IDoc meta
+ ![Screenshot showing SAP connection settings for SNC enabled for Consumption.](./media/logic-apps-using-sap-connector/sap-connection-snc-consumption.png)
-This error message means that an expected failure happened, for example, the failure to generate an IDoc XML payload because SAP hasn't released its segments. As a result, the segment type metadata required for conversion is missing.
+ | Parameter | Description |
+ |--| |
+ | **Use SNC** | Select the checkbox. |
+ | **SNC Library** | Enter one of the following values: <br><br>- The name for your SNC library, for example, **sapsnc.dll** <br>- The relative path to the NCo installation location, for example, **.\security\sapsnc.dll** <br>- The absolute path to the NCo installation location, for example, **c:\security\sapsnc.dll** |
+ | **SNC SSO** | Select either **Logon using the SNC identity** or **Logon with the username/password provided on RFC level**. <br><br>Typically, the SNC identity is used to authenticate the caller. You can choose to authenticate with a username and password instead, but this parameter value is still encrypted. |
+ | **SNC My Name** | In most cases, you can omit this value. The installed SNC solution usually knows its own SNC name. In the case where your solution supports multiple identities, you might have to specify the identity to use for this particular destination or server. |
+ | **SNC Partner Name** | Enter the name for the backend SNC, for example, **p:CN=DV3, OU=LA, O=MS, C=US**. |
+ | **SNC Quality of Protection** | Select the quality of service to use for SNC communication with this particular destination or server. The default value is defined by the backend system. The maximum value is defined by the security product used for SNC. |
+   | **SNC Certificate** | Enter your SNC client's public certificate in base64-encoded format. <br><br>**Note**: - Don't include the PEM header or footer. <br><br>- Don't enter the private certificate here because your SNC Personal Security Environment (PSE) might contain multiple private certificates. However, this **SNC Certificate** parameter identifies the certificate that this connection must use. For more information, review the next parameter. |
+   | **PSE** | Optional: Enter your SNC Personal Security Environment (PSE) as a base64-encoded binary. <br><br>- The PSE must contain the private client certificate where the thumbprint matches the public client certificate that you provided in the previous parameter. <br><br>- Although the PSE might contain multiple client certificates, to use different client certificates, create separate workflows instead. <br><br>- If you're using more than one SNC client certificate for your logic app resource, you must provide the same PSE for all connections. The PSE must contain the client private certificate for all the connections. You must set the client public certificate parameter to match the specific private certificate for each connection. |
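+
+   To produce the base64-encoded value for the **SNC Certificate** parameter without the PEM header and footer, you can encode a binary (DER-format) certificate file. A minimal sketch, assuming a hypothetical certificate file path:
+
+   ```powershell
+   # Base64 of a DER-encoded certificate equals the PEM body without the header and footer lines.
+   # Windows PowerShell syntax; in PowerShell 7 and later, use -AsByteStream instead of -Encoding Byte.
+   $certBytes = Get-Content -Path 'C:\Temp\snc-client.cer' -Encoding Byte
+   [convert]::ToBase64String($certBytes)
+   ```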
-To have these segments released by SAP, contact the ABAP engineer for your SAP system.
+1. To finish creating your connection, select **Create**.
-### Asynchronous request-reply for triggers
+ If the parameters are correct, the connection is created. If there's a problem with the parameters, the connection creation dialog displays an error message. To troubleshoot connection parameter issues, you can use the on-premises data gateway installation and the gateway's local logs.
-The SAP connector supports Azure's [asynchronous request-reply pattern](/azure/architecture/patterns/async-request-reply) for Azure Logic Apps triggers. You can use this pattern to create successful requests that would have otherwise failed with the default synchronous request-reply pattern.
+### [Standard](#tab/standard)
-> [!TIP]
-> In logic app workflows that have multiple response actions, all response actions must use the same request-reply pattern.
-> For example, if your logic app workflow uses a switch control with multiple possible response actions, you must configure
-> all the response actions to use the same request-reply pattern, either synchronous or asynchronous.
+For a Standard workflow that runs in single-tenant Azure Logic Apps, you can enable SNC for authentication. Before you start, make sure that you met all the necessary [prerequisites](logic-apps-using-sap-connector.md?tabs=single-tenant#prerequisites) and [SNC prerequisites for single-tenant](logic-apps-using-sap-connector.md?tabs=single-tenant#snc-prerequisites).
-If you enable asynchronous response for your response action, your logic app workflow can respond with a `202 Accepted` reply after accepting a request for processing. The reply contains a location header that you can use to retrieve the final state of your request.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
-To configure an asynchronous request-reply pattern for your logic app workflow using the SAP connector, follow these steps:
+1. To specify your SNC Personal Security Environment (PSE) and PSE password, follow these steps:
-1. Open your logic app in the workflow designer.
+ 1. On your logic app resource menu, under **Settings**, select **Configuration**.
-1. Confirm that the SAP connector is the trigger for your logic app workflow.
+   1. On the **Application settings** tab, check whether the settings named **SAP_PSE** and **SAP_PSE_Password** already exist. If they don't exist, you have to add both settings. To add a new setting, select **New application setting**, provide the following required information, and select **OK** for each setting:
-1. Open your logic app workflow's **Response** action. In the action's title bar, select the menu (**...**) &gt; **Settings**.
+ | Name | Value | Description |
+ ||-|-|
+   | **SAP_PSE** | <*PSE-value*> | Enter your SNC Personal Security Environment (PSE) as a base64-encoded binary. <br><br>- The PSE must contain the private client certificate where the thumbprint matches the public client certificate that you provide for the connection. <br><br>- Although the PSE might contain multiple client certificates, to use different client certificates, create separate workflows instead. <br><br>- The PSE must have no PIN. If necessary, set the PIN to empty using the SAPGENPSE utility. <br><br>- If you're using more than one SNC client certificate for your Standard logic app resource, you must provide the same PSE for all connections. The PSE must contain the client private certificate for all the connections. You must set the client public certificate parameter to match the specific private certificate for each connection. |
+ | **SAP_PSE_Password** | <*PSE-password*> | The password, also known as PIN, for your PSE |
-1. In the **Settings** for your response action, turn on the toggle under **Asynchronous Response**, and then select **Done**.
+1. Now, either create or open the workflow you want to use in the designer. On your logic app resource menu, under **Workflows**, select **Workflows**.
-1. Save the changes to your logic app workflow.
+1. In the designer, add or edit an SAP *built-in* connector operation.
-## Find extended error logs
+1. In the SAP connection information box, provide the following [required information](/azure/logic-apps/connectors/built-in/reference/sap/#authentication). The **Authentication Type** that you select changes the available options.
-For full error messages, check your SAP Adapter's extended logs. You can also [enable an extended log file for the SAP connector](#extended-sap-logging-in-on-premises-data-gateway).
+ ![Screenshot showing SAP built-in connection settings for Standard workflow with Basic authentication.](media\logic-apps-using-sap-connector\sap-connection-standard.png)
-* For on-premises data gateway releases from April 2020 and earlier, logs are disabled by default.
+1. To enable SNC, in the SAP connection information box, provide the [required information instead](/azure/logic-apps/connectors/built-in/reference/sap/#authentication).
-* For on-premises data gateway releases from June 2020 and later, you can [enable gateway logs in the app settings](/data-integration/gateway/service-gateway-tshoot#collect-logs-from-the-on-premises-data-gateway-app).
+ ![Screenshot showing SAP built-in connection settings for Standard workflow with SNC enabled.](media\logic-apps-using-sap-connector\sap-connection-snc-standard.png)
- * The default logging level is **Warning**.
+ | Parameter | Description |
+ |--| |
+ | **Authentication Type** | Select **Logon Using SNC**. |
+ | **SNC Partner Name** | Enter the name for the backend SNC, for example, **p:CN=DV3, OU=LA, O=MS, C=US**. |
+ | **SNC Quality of Protection** | Select the quality of service to use for SNC communication with this particular destination or server. The default value is defined by the backend system. The maximum value is defined by the security product used for SNC. |
+ | **SNC Type** | Select the SNC authentication to use. |
+ | **SNC Certificate** | Enter your SNC client's public certificate in base64-encoded format. <br><br>**Note**: - Don't include the PEM header or footer. <br><br>- Don't enter the private certificate here because the PSE might contain multiple private certificates. However, this **SNC Certificate** parameter identifies the certificates that this connection must use. |
- * If you enable **Additional logging** in the **Diagnostics** settings of the on-premises data gateway app, the logging level is increased to **Informational**.
+1. To finish creating your connection, select **Create**.
- * To increase the logging level to **Verbose**, update the following setting in your configuration file. Typically, the configuration file is located at `C:\Program Files\On-premises data gateway\Microsoft.PowerBI.DataMovement.Pipeline.GatewayCore.dll.config`.
-
- ```xml
- <setting name="SapTraceLevel" serializeAs="String">
- <value>Verbose</value>
- </setting>
- ```
+### [ISE](#tab/ise)
-### Extended SAP logging in on-premises data gateway
+For a Consumption workflow that runs in an ISE, you can enable SNC for authentication. Before you start, make sure that you met all the necessary [prerequisites](#prerequisites) and [SNC prerequisites for ISE](#snc-prerequisites-ise).
-If you use an [on-premises data gateway for Azure Logic Apps](logic-apps-gateway-install.md), you can configure an extended log file for the SAP connector. You can use your on-premises data gateway to redirect Event Tracing for Windows (ETW) events into rotating log files that are included in your gateway's logging .zip files.
+1. In the [Azure portal](https://portal.azure.com), open your ISE resource and logic app workflow in the designer.
-You can [export all of your gateway's configuration and service logs](/data-integration/gateway/service-gateway-tshoot#collect-logs-from-the-on-premises-data-gateway-app) to a .zip file from the gateway app's settings.
+1. Add or edit an *ISE-versioned* SAP connector operation. Make sure that the SAP connector operation displays the **ISE** label.
-> [!NOTE]
-> Extended logging might affect your logic app workflow's performance when always enabled. As a best practice,
-> turn off extended log files after you're finished with analyzing and troubleshooting an issue.
+1. In the SAP connection information box, provide the following [required information](/connectors/sap/#default-connection). The **Authentication Type** that you select changes the available options.
-#### Capture ETW events
+ ![Screenshot showing SAP connection settings for ISE.](media\logic-apps-using-sap-connector\sap-connection-ise.png)
-Optionally, advanced users can capture ETW events directly. You can then [consume your data in Azure Diagnostics in Event Hubs](../azure-monitor/agents/diagnostics-extension-stream-event-hubs.md) or [collect your data to Azure Monitor Logs](../azure-monitor/agents/diagnostics-extension-logs.md). For more information, review the [best practices for collecting and storing data](/azure/architecture/best-practices/monitoring#collecting-and-storing-data). You can use [PerfView](https://github.com/Microsoft/perfview/blob/master/README.md) to work with the resulting ETL files, or you can write your own program. This walkthrough uses PerfView:
-
-1. In the PerfView menu, select **Collect** &gt; **Collect** to capture the events.
+ > [!NOTE]
+ >
+ > The **SAP Username** and **SAP Password** fields are optional. If you don't provide a username
+ > and password, the connector uses the client certificate provided in a later step for authentication.
-1. In the **Additional Provider** field, enter `*Microsoft-LobAdapter` to specify the SAP provider to capture SAP Adapter events. If you don't specify this information, your trace only includes general ETW events.
+1. To enable SNC, in the SAP connection information box, provide the following required information instead:
-1. Keep the other default settings. If you want, you can change the file name or location in the **Data File** field.
+ ![Screenshot showing SAP connection settings with SNC enabled for ISE.](./media\logic-apps-using-sap-connector\sap-connection-snc-ise.png)
-1. Select **Start Collection** to begin your trace.
+ | Parameter | Description |
+ |--|-|
+ | **Use SNC** | Select the checkbox. |
+ | **SNC Library** | Enter the name for your SNC library, for example, **sapcrypto.dll**. |
+ | **SNC Partner Name** | Enter the name for the backend SNC, for example, **p:CN=DV3, OU=LA, O=MS, C=US**. |
+ | **SNC My Name** and **SNC Quality of Protection** | Optional: Enter these values, as necessary. |
+ | **SNC Certificate** | Enter your SNC client's public certificate in base64-encoded format with the following guidance: <br><br>- Don't include the PEM header or footer. <br><br>- Don't enter the private certificate here because the Personal Security Environment (PSE) might contain multiple private certificates. However, this **SNC Certificate** parameter identifies the certificates that this connection must use. For more information, review the next parameter. |
+   | **PSE** (Personal Security Environment) | Enter your SNC PSE as a base64-encoded binary with the following guidance: <br><br>- The PSE must contain the private client certificate where the thumbprint matches the public client certificate that you provided in the previous step. <br><br>- Although the PSE might contain multiple client certificates, to use different client certificates, create separate logic apps instead. <br><br>- The PSE must have no PIN. If necessary, set the PIN to empty using the SAPGENPSE utility. <br><br>- If you're using more than one SNC client certificate for your ISE, you must provide the same PSE for all connections. The PSE must contain the client private certificate for all the connections. You must set the client public certificate parameter to match the specific private certificate for each connection used in your ISE. |
-1. After you've reproduced your issue or collected enough analysis data, select **Stop Collection**.
+1. To finish creating your connection, select **Create**.
-1. To share your data with another party, such as Azure support engineers, compress the ETL file.
+ If the parameters are correct, the connection is created. If there's a problem with the parameters, the connection creation dialog displays an error message. To troubleshoot connection parameter issues, you can use an on-premises data gateway and the gateway's local logs.
-1. To view the content of your trace:
+
- 1. In PerfView, select **File** &gt; **Open** and select the ETL file you just generated.
+### Convert a binary PSE file into base64-encoded format
-   1. In the PerfView sidebar, open the **Events** section under your ETL file.
+1. Use a PowerShell script, for example:
- 1. Under **Filter**, filter by `Microsoft-LobAdapter` to only view relevant events and gateway processes.
+ ```powershell
+   # Converts a binary PSE file to a base64-encoded string.
+   Param ([Parameter(Mandatory=$true)][string]$psePath, [string]$base64OutputPath)
+
+   # Read the PSE as raw bytes (Windows PowerShell syntax; in PowerShell 7+, use -AsByteStream).
+   $base64String = [convert]::ToBase64String((Get-Content -Path $psePath -Encoding Byte))
+
+   # A [string] parameter defaults to an empty string rather than $null, so test for both.
+   if ([string]::IsNullOrEmpty($base64OutputPath))
+   {
+       Write-Output $base64String
+   }
+   else
+   {
+       Set-Content -Path $base64OutputPath -Value $base64String
+       Write-Output "Output written to $base64OutputPath"
+   }
+ ```
-### Test your workflow
+1. Save the script as a **pseConvert.ps1** file, and then invoke the script, for example:
-1. To trigger your logic app workflow, send a message from your SAP system.
+ ```output
+ .\pseConvert.ps1 -psePath "C:\Temp\SECUDIR\request.pse" -base64OutputPath "connectionInput.txt"
+ Output written to connectionInput.txt
+ ```
-1. On the logic app menu, select **Overview**. Review the **Runs history** for any new runs for your logic app workflow.
+   If you don't provide the output path parameter, the script's console output contains line breaks. Remove the line breaks from the base64-encoded string before you use it as the connection input parameter.
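+
+   For example, the following minimal sketch removes line breaks from the captured string, assuming that you pasted the console output into the **connectionInput.txt** file from the earlier example:
+
+   ```powershell
+   # Reads the whole file as one string, removes CR and LF characters, and writes it back.
+   (Get-Content -Path 'connectionInput.txt' -Raw) -replace '\r|\n', '' |
+       Set-Content -Path 'connectionInput.txt'
+   ```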
-1. Open the most recent run, which shows the message sent from your SAP system in the trigger outputs section.
+<a name="test-sending-idocs-from-sap"></a>
-### Test sending IDocs from SAP
+### Set up and test sending IDocs to your workflow from SAP
-To send IDocs from SAP to your logic app workflow, you need the following minimum configuration:
+Follow these steps only for testing your SAP configuration with your logic app workflow. Production environments require additional configuration.
-> [!IMPORTANT]
-> Use these steps only when you test your SAP configuration with your logic app workflow. Production environments require additional configuration.
+To send IDocs from SAP to your workflow, you need the following minimum configuration:
1. [Create an RFC destination.](#create-rfc-destination)

1. [Create an ABAP connection.](#create-abap-connection)

1. [Create a receiver port.](#create-receiver-port)

1. [Create a sender port.](#create-sender-port)

1. [Create a logical system partner.](#create-logical-system-partner)

1. [Create a partner profile.](#create-partner-profiles)

1. [Test sending messages.](#test-sending-messages)

#### Create RFC destination
-This destination will identify your logic app workflow for the receiver port.
+This destination identifies your logic app workflow as the receiver port.
-1. To open the **Configuration of RFC Connections** settings, in your SAP interface, use the **sm59** transaction code (T-Code) with the **/n** prefix.
+1. In SAP, open the **Configuration of RFC Connections** settings. You can use the **sm59** transaction code (T-Code) with the **/n** prefix.
1. Select **TCP/IP Connections** > **Create**.

1. Create a new RFC destination with the following settings:
- 1. For your **RFC Destination**, enter a name.
+ 1. For **RFC Destination**, enter a name.
- 1. On the **Technical Settings** tab, for **Activation Type**, select **Registered Server Program**.
+ 1. On the **Technical Settings** tab, for **Activation Type**, select **Registered Server Program**.
- 1. For your **Program ID**, enter a value. In the SAP server, your logic app workflow's trigger is registered by using this identifier.
+ 1. For **Program ID**, enter a value. In your SAP server, your workflow's trigger is registered using this identifier.
- > [!IMPORTANT]
- > The SAP **Program ID** is case-sensitive. Make sure you consistently use the same case format for your **Program ID**
- > when you configure your logic app workflow and SAP server. Otherwise, you might receive the following errors in the
- > tRFC Monitor (T-Code SM58) when you attempt to send an IDoc to SAP:
- >
- > * **Function IDOC_INBOUND_ASYNCHRONOUS not found**
- > * **Non-ABAP RFC client (partner type ) not supported**
- >
- > For more information from SAP, review the following notes (login required):
- >
- > * [https://launchpad.support.sap.com/#/notes/2399329](https://launchpad.support.sap.com/#/notes/2399329)
- > * [https://launchpad.support.sap.com/#/notes/353597](https://launchpad.support.sap.com/#/notes/353597)
+ > [!IMPORTANT]
+ >
+ > The SAP **Program ID** is case-sensitive. Make sure that you consistently use the same case format
+ > for your **Program ID** when you configure your workflow and SAP server. Otherwise, you might
+ > receive the following errors in the tRFC Monitor (T-Code SM58) when you attempt to send an IDoc to SAP:
+ >
+ > * **Function IDOC_INBOUND_ASYNCHRONOUS not found**
+ > * **Non-ABAP RFC client (partner type ) not supported**
+ >
+ > For more information from SAP, review the following notes (login required):
+ >
+ > * [https://launchpad.support.sap.com/#/notes/2399329](https://launchpad.support.sap.com/#/notes/2399329)
+ > * [https://launchpad.support.sap.com/#/notes/353597](https://launchpad.support.sap.com/#/notes/353597)
- 1. On the **Unicode** tab, for **Communication Type with Target System**, select **Unicode**.
+ 1. On the **Unicode** tab, for **Communication Type with Target System**, select **Unicode**.
- > [!NOTE]
- > SAP .NET Client libraries support only Unicode character encoding. If you get the error
- > `Non-ABAP RFC client (partner type ) not supported` when sending IDoc from SAP to
- > Azure Logic Apps, check that the **Communication Type with Target System** value is set to **Unicode**.
+ > [!NOTE]
+ >
+ > SAP .NET Client libraries support only Unicode character encoding. If you get the error
+ > **Non-ABAP RFC client (partner type) not supported** when you send an IDoc from SAP to
+ > Azure Logic Apps, check that the **Communication Type with Target System** value is set to **Unicode**.
1. Save your changes.
-1. Register your new **Program ID** with Azure Logic Apps by creating a logic app workflow that starts with the SAP trigger named **When a message is received from SAP**.
+1. Register your new **Program ID** with Azure Logic Apps by creating a logic app workflow that starts with the SAP managed trigger named **When a message is received**.
- This way, when you save your workflow, Azure Logic Apps registers the **Program ID** on the SAP Gateway.
+ That way, when you save your workflow, Azure Logic Apps registers the **Program ID** on the SAP Gateway.
-1. In your workflow's trigger history, the on-premises data gateway SAP Adapter logs, and the SAP Gateway trace logs, check the registration status. In the SAP Gateway monitor dialog box (T-Code SMGW), under **Logged-On Clients**, the new registration should appear as **Registered Server**.
+1. In your workflow's trigger history, the on-premises data gateway SAP Adapter logs, if applicable, and the SAP Gateway trace logs, check the registration status.
-1. To test your connection, in the SAP interface, under your new **RFC Destination**, select **Connection Test**.
+ In the SAP Gateway monitor box (T-Code SMGW), under **Logged-On Clients**, the new registration appears as **Registered Server**.
+
+1. To test your connection, under your new **RFC Destination**, select **Connection Test**.
#### Create ABAP connection
-This destination will identify your SAP system for the sender port.
+This destination identifies your SAP system as the sender port.
-1. To open the **Configuration of RFC Connections** settings, in your SAP interface, use the **sm59*** transaction code (T-Code) with the **/n** prefix.
+1. In SAP, open the **Configuration of RFC Connections** settings. You can use the **sm59** transaction code (T-Code) with the **/n** prefix.
1. Select **ABAP Connections** > **Create**.

1. For **RFC Destination**, enter the identifier for your test SAP system.
-1. By leaving the target host empty in the Technical Settings, you are creating a local connection to the SAP system itself.
+1. In **Technical Settings**, leave the target host empty to create a local connection to the SAP system.
1. Save your changes.
#### Create receiver port
-1. To open the **Ports In IDOC processing** settings, in your SAP interface, use the **we21** transaction code (T-Code) with the **/n** prefix.
+1. In SAP, open the **Ports In IDOC processing** settings. You can use the **we21** transaction code (T-Code) with the **/n** prefix.
1. Select **Ports** > **Transactional RFC** > **Create**.
#### Create sender port
-1. To open the **Ports In IDOC processing** settings, in your SAP interface, use the **we21** transaction code (T-Code) with the **/n** prefix.
+1. In SAP, open the **Ports In IDOC processing** settings. You can use the **we21** transaction code (T-Code) with the **/n** prefix.
1. Select **Ports** > **Transactional RFC** > **Create**.
-1. In the settings box that opens, select **own port name**. For your test port, enter a **Name** that starts with **SAP**. All sender port names must start with the letters **SAP**, for example, **SAPTEST**. Save your changes.
+1. In the settings box that opens, select **own port name**.
+
+1. For your test port, enter a **Name** that starts with **SAP**. Save your changes.
+
+ All sender port names must start with the letters **SAP**, for example, **SAPTEST**.
1. In the settings for your new sender port, for **RFC destination**, enter the identifier for [your ABAP connection](#create-abap-connection).
#### Create logical system partner
-1. To open the **Change View "Logical Systems": Overview** settings, in your SAP interface, use the **bd54** transaction code (T-Code).
+1. In SAP, open the **Change View "Logical Systems": Overview** settings. You can use the **bd54** transaction code (T-Code).
-1. Accept the warning message that appears: **Caution: The table is cross-client**
+1. Accept the following warning message that appears: **Caution: The table is cross-client**
1. Above the list that shows your existing logical systems, select **New Entries**.
#### Create partner profiles
-For production environments, you must create two partner profiles. The first profile is for the sender, which is your organization and SAP system. The second profile is for the receiver, which is your logic app.
+For production environments, you must create two partner profiles. The first profile is for the sender, which is your organization and SAP system. The second profile is for the receiver, which is your logic app resource and workflow.
-1. To open the **Partner profiles** settings, in your SAP interface, use the **we20** transaction code (T-Code) with the **/n** prefix.
+1. In SAP, open the **Partner profiles** settings. You can use the **we20** transaction code (T-Code) with the **/n** prefix.
1. Under **Partner Profiles**, select **Partner Type LS** > **Create**.

1. Create a new partner profile with the following settings:
- * For **Partner No.**, enter [your logical system partner's identifier](#create-logical-system-partner).
-
- * For **Partn. Type**, enter **LS**.
+ | Setting | Description |
+ ||-|
+ | **Partner No.** | Enter [your logical system partner's identifier](#create-logical-system-partner). |
+ | **Partn. Type** | Enter **LS**. |
+ | **Agent** | Enter the identifier for the SAP user account to use when you register program identifiers for Azure Logic Apps or other non-SAP systems. |
- * For **Agent**, enter the identifier for the SAP user account to use when you register program identifiers for Azure Logic Apps or other non-SAP systems.
+1. Save your changes.
-1. Save your changes. If you haven't [created the logical system partner](#create-logical-system-partner), you get the error, **Enter a valid partner number**.
+ If you haven't [created the logical system partner](#create-logical-system-partner), you get the error, **Enter a valid partner number**.
1. In your partner profile's settings, under **Outbound parmtrs.**, select **Create outbound parameter**.
* Enter your [receiver port's identifier](#create-receiver-port).
- * Enter an IDoc size for **Pack. Size**. Or, to [send IDocs one at a time from SAP](#receive-idoc-packets-from-sap), select **Pass IDoc Immediately**.
+ * Enter an IDoc size for **Pack. Size**. Or, to [send IDocs one at a time from SAP](sap-create-example-scenario-workflows.md#receive-idoc-packets-sap), select **Pass IDoc Immediately**.
1. Save your changes.

#### Test sending messages
-1. To open the **Test Tool for IDoc Processing** settings, in your SAP interface, use the **we19** transaction code (T-Code) with the **/n** prefix.
+1. In SAP, open the **Test Tool for IDoc Processing** settings. You can use the **we19** transaction code (T-Code) with the **/n** prefix.
-1. Under **Template for test**, select **Via message type**, and enter your message type, for example, **CREMAS**. Select **Create**.
+1. Under **Template for test**, select **Via message type**. Enter your message type, for example, **CREMAS**. Select **Create**.
1. Confirm the **Which IDoc type?** message by selecting **Continue**.
1. Select **Standard Outbound Processing**.
-1. To start outbound IDoc processing, select **Continue**. When the tool finishes processing, the **IDoc sent to SAP system or external program** message appears.
-
-1. To check for processing errors, use the **sm58** transaction code (T-Code) with the **/n** prefix.
-
-## Receive IDoc packets from SAP
-
-You can set up SAP to [send IDocs in packets](https://help.sap.com/viewer/8f3819b0c24149b5959ab31070b64058/7.4.16/4ab38886549a6d8ce10000000a42189c.html), which are batches or groups of IDocs. To receive IDoc packets, the SAP connector, and specifically the trigger, doesn't need extra configuration. However, to process each item in an IDoc packet after the trigger receives the packet, some additional steps are required to split the packet into individual IDocs.
-
-Here's an example that shows how to extract individual IDocs from a packet by using the [`xpath()` function](./workflow-definition-language-functions-reference.md#xpath):
-
-1. Before you start, you need a logic app workflow with an SAP trigger. If you don't already have this trigger in your logic app workflow, follow the previous steps in this topic to [set up a logic app workflow with an SAP trigger](#receive-message-from-sap).
-
- > [!IMPORTANT]
- > The SAP **Program ID** is case-sensitive. Make sure you consistently use the same case format for your **Program ID**
- > when you configure your logic app workflow and SAP server. Otherwise, you might receive the following errors in the
- > tRFC Monitor (T-Code SM58) when you attempt to send an IDoc to SAP:
- >
- > * **Function IDOC_INBOUND_ASYNCHRONOUS not found**
- > * **Non-ABAP RFC client (partner type) not supported**
- >
- > For more information from SAP, review the following notes (login required)
- > [https://launchpad.support.sap.com/#/notes/2399329](https://launchpad.support.sap.com/#/notes/2399329)
- > and [https://launchpad.support.sap.com/#/notes/353597](https://launchpad.support.sap.com/#/notes/353597).
+1. To start outbound IDoc processing, select **Continue**.
- For example:
+ When the tool finishes processing, the **IDoc sent to SAP system or external program** message appears.
- ![Screenshot that shows adding an SAP trigger to logic app workflow.](./media/logic-apps-using-sap-connector/first-step-trigger.png)
-
-1. [Add a Response action to your logic app workflow](../connectors/connectors-native-reqres.md#add-a-response-action) to reply immediately with the status of your SAP request. As a best practice, add this action immediately after your trigger to free up the communication channel with your SAP server. Choose one of the following status codes (`statusCode`) to use in your response action (a sketch of the action follows this list):
-
- * **202 Accepted**, which means the request has been accepted for processing but the processing isn't complete yet.
-
- * **204 No Content**, which means the server has successfully fulfilled the request and there's no additional content to send in the response payload body.
-
- * **200 OK**. This status code always contains a payload, even if the server generates a payload body of zero length.
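-
-   The following minimal sketch shows what the Response action can look like in the underlying workflow definition language. The action name, the `202` status code, and the empty `runAfter` object are illustrative assumptions, not required values:
-
-   ```json
-   "Response": {
-     "type": "Response",
-     "kind": "Http",
-     "inputs": {
-       "statusCode": 202
-     },
-     "runAfter": {}
-   }
-   ```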
-
-1. Get the root namespace from the XML IDoc that your logic app workflow receives from SAP. To extract this namespace from the XML document, add a step that creates a local string variable and stores that namespace by using an `xpath()` expression:
-
- `xpath(xml(triggerBody()?['Content']), 'namespace-uri(/*)')`
-
- ![Screenshot that shows getting the root namespace from IDoc.](./media/logic-apps-using-sap-connector/get-namespace.png)
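-
-   For reference, the resulting **Initialize variable** action can look similar to the following sketch in the workflow definition language. The action name and variable name are illustrative assumptions:
-
-   ```json
-   "Initialize_RootNamespace": {
-     "type": "InitializeVariable",
-     "inputs": {
-       "variables": [
-         {
-           "name": "RootNamespace",
-           "type": "string",
-           "value": "@{xpath(xml(triggerBody()?['Content']), 'namespace-uri(/*)')}"
-         }
-       ]
-     },
-     "runAfter": {}
-   }
-   ```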
-
-1. To extract an individual IDoc, add a step that creates an array variable and stores the IDoc collection by using another `xpath()` expression:
-
- `xpath(xml(triggerBody()?['Content']), '/*[local-name()="Receive"]/*[local-name()="idocData"]')`
-
- ![Screenshot that shows getting an array of items.](./media/logic-apps-using-sap-connector/get-array.png)
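-
-   For reference, the array variable can look similar to the following sketch. The action name, variable name, and empty `runAfter` object are illustrative assumptions:
-
-   ```json
-   "Initialize_IDocList": {
-     "type": "InitializeVariable",
-     "inputs": {
-       "variables": [
-         {
-           "name": "IDocList",
-           "type": "array",
-           "value": "@xpath(xml(triggerBody()?['Content']), '/*[local-name()=\"Receive\"]/*[local-name()=\"idocData\"]')"
-         }
-       ]
-     },
-     "runAfter": {}
-   }
-   ```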
-
- The array variable makes each IDoc available for your logic app workflow to process individually by enumerating over the collection. In this example, the logic app workflow transfers each IDoc to an SFTP server by using a loop:
-
- ![Screenshot that shows sending an IDoc to an SFTP server.](./media/logic-apps-using-sap-connector/loop-batch.png)
-
- Each IDoc must include the root namespace, which is the reason why the file content is wrapped inside a `<Receive></Receive>` element along with the root namespace before sending the IDoc to the downstream app, or SFTP server in this case.
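-
-   The following sketch shows the shape of such a loop in the workflow definition language. The `Compose` action is only a placeholder for the wrapping and transfer steps that this example describes, and all names are illustrative assumptions:
-
-   ```json
-   "For_each_IDoc": {
-     "type": "Foreach",
-     "foreach": "@variables('IDocList')",
-     "actions": {
-       "Compose_current_IDoc": {
-         "type": "Compose",
-         "inputs": "@item()",
-         "runAfter": {}
-       }
-     },
-     "runAfter": {
-       "Initialize_IDocList": [ "Succeeded" ]
-     }
-   }
-   ```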
-
-You can use the quickstart template for this pattern by selecting this template in the workflow designer when you create a new logic app workflow.
-
-![Screenshot that shows selecting a batch logic app template.](./media/logic-apps-using-sap-connector/select-batch-logic-app-template.png)
-
-## Generate schemas for artifacts in SAP
-
-This example uses a logic app workflow that you can trigger with an HTTP request. To generate the schemas for the specified IDoc and BAPI, the SAP action **Generate schema** sends a request to an SAP system.
-
-This SAP action returns an [XML schema](#sample-xml-schemas), not the contents or data of the XML document itself. Schemas returned in the response are uploaded to an integration account by using the Azure Resource Manager connector. Schemas contain the following parts:
-
-* The request message's structure. Use this information to form your BAPI `get` list.
-
-* The response message's structure. Use this information to parse the response.
-
-To send the request message, use the generic SAP action **Send message to SAP**, or the targeted **\[BAPI] Call method in SAP** action.
-
-### Sample XML schemas
-
-If you're learning how to generate an XML schema for use in creating a sample document, review the following samples. These examples show how you can work with many types of payloads, including:
+1. To check for processing errors, use the **sm58** transaction code (T-Code) with the **/n** prefix.
-* [RFC requests](#xml-samples-for-rfc-requests)
+## Create workflows for common SAP scenarios
-* [BAPI requests](#xml-samples-for-bapi-requests)
+For how-to guidance on creating workflows for common SAP integration workloads, see the following documentation:
-* [IDoc requests](#xml-samples-for-idoc-requests)
+* [Receive message from SAP](sap-create-example-scenario-workflows.md#receive-messages-sap)
+* [Receive IDoc packets from SAP](sap-create-example-scenario-workflows.md#receive-idoc-packets-sap)
+* [Send IDocs to SAP](sap-create-example-scenario-workflows.md#send-idocs-sap)
+* [Generate schemas for artifacts in SAP](sap-generate-schemas-for-artifacts.md)
-* Simple or complex XML schema data types
+## Create workflows for advanced SAP scenarios
-* Table parameters
+* [Change language headers for sending data to SAP](sap-create-example-scenario-workflows.md#change-language-headers)
+* [Confirm transaction separately and avoid duplicate IDocs](sap-create-example-scenario-workflows.md#confirm-transaction-explicitly)
-* Optional XML behaviors
+## Find extended error logs (Managed connector only)
-You can begin your XML schema with an optional XML prolog. The SAP connector works with or without the XML prolog.
+If you're using the SAP managed connector, you can find full error messages by checking your SAP Adapter's extended logs. You can also [enable an extended log file for the SAP connector](#set-up-extended-sap-logging).
-```xml
-<?xml version="1.0" encoding="utf-8"?>
-```
+* For on-premises data gateway releases from April 2020 and earlier, logs are disabled by default.
-#### XML samples for RFC requests
+* For on-premises data gateway releases from June 2020 and later, you can [enable gateway logs in the app settings](/data-integration/gateway/service-gateway-tshoot#collect-logs-from-the-on-premises-data-gateway-app).
-The following example is a basic RFC call. The RFC name is `STFC_CONNECTION`. This request uses the default namespace `xmlns=`; however, you can assign and use namespace aliases such as `xmlns:exampleAlias=`. The namespace value is the namespace for all RFCs in SAP for Microsoft services. There's a simple input parameter in the request, `<REQUTEXT>`.
+ * The default logging level is **Warning**.
-```xml
-<STFC_CONNECTION xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
- <REQUTEXT>exampleInput</REQUTEXT>
-</STFC_CONNECTION>
-```
+ * If you enable **Additional logging** in the **Diagnostics** settings of the on-premises data gateway app, the logging level is increased to **Informational**.
-The following example is an RFC call with a table parameter. This example call and its group of test RFCs are available as part of all SAP systems. The table parameter's name is `TCPICDAT`. The table line type is `ABAPTEXT`, and this element repeats for each row in the table. This example contains a single line, called `LINE`. Requests with a table parameter can contain any number of fields, where the number is a positive integer (*n*).
-
-```xml
-<STFC_WRITE_TO_TCPIC xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
- <RESTART_QNAME>exampleQName</RESTART_QNAME>
- <TCPICDAT>
- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
- <LINE>exampleFieldInput1</LINE>
- </ABAPTEXT>
- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
- <LINE>exampleFieldInput2</LINE>
- </ABAPTEXT>
- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
- <LINE>exampleFieldInput3</LINE>
- </ABAPTEXT>
- </TCPICDAT>
-</STFC_WRITE_TO_TCPIC>
-```
+ * To increase the logging level to **Verbose**, update the following setting in your configuration file. Typically, the configuration file is located at `C:\Program Files\On-premises data gateway\Microsoft.PowerBI.DataMovement.Pipeline.GatewayCore.dll.config`.
-> [!NOTE]
-> Observe the result of RFC **STFC_WRITE_TO_TCPIC** with the SAP Logon's Data Browser (T-Code SE16). Use the table name **TCPIC**.
-
-The following example is an RFC call with a table parameter that has an anonymous field. An anonymous field is a field that has no name assigned. Complex types are declared under a separate namespace, in which the declaration sets a new default for the current node and all its child elements. The example uses the hex code `x002F` as an escape character for the symbol */*, because this symbol is reserved in the SAP field name.
-
-```xml
-<RFC_XML_TEST_1 xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
- <IM_XML_TABLE>
- <RFC_XMLCNT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
- <_x002F_AnonymousField>exampleFieldInput</_x002F_AnonymousField>
- </RFC_XMLCNT>
- </IM_XML_TABLE>
-</RFC_XML_TEST_1>
-```
+ ```xml
+ <setting name="SapTraceLevel" serializeAs="String">
+ <value>Verbose</value>
+ </setting>
+ ```
-The following example includes prefixes for the namespaces. You can declare all prefixes at once, or you can declare any number of prefixes as attributes of a node. The RFC namespace alias `ns0` is used for the root element and its parameters, which are basic types.
+<a name="set-up-extended-sap-logging"></a>
-> [!NOTE]
-> Complex types are declared under a different namespace for RFC types with
-> the alias `ns3` instead of the regular RFC namespace with the alias `ns0`.
-
-```xml
-<ns0:BBP_RFC_READ_TABLE xmlns:ns0="http://Microsoft.LobServices.Sap/2007/03/Rfc/" xmlns:ns3="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
- <ns0:DELIMITER>0</ns0:DELIMITER>
- <ns0:QUERY_TABLE>KNA1</ns0:QUERY_TABLE>
- <ns0:ROWCOUNT>250</ns0:ROWCOUNT>
- <ns0:ROWSKIPS>0</ns0:ROWSKIPS>
- <ns0:FIELDS>
- <ns3:RFC_DB_FLD>
- <ns3:FIELDNAME>KUNNR</ns3:FIELDNAME>
- </ns3:RFC_DB_FLD>
- </ns0:FIELDS>
-</ns0:BBP_RFC_READ_TABLE>
-```
+## Set up extended SAP logging in on-premises data gateway (Managed connector only)
-#### XML samples for BAPI requests
+If you use an [on-premises data gateway for Azure Logic Apps](logic-apps-gateway-install.md), you can configure an extended log file for the SAP connector. You can use your on-premises data gateway to redirect Event Tracing for Windows (ETW) events into rotating log files that are included in your gateway's logging .zip files.
-The following XML samples are example requests to [call the BAPI method](#actions).
+You can [export all of your gateway's configuration and service logs](/data-integration/gateway/service-gateway-tshoot#collect-logs-from-the-on-premises-data-gateway-app) to a .zip file from the gateway app's settings.
> [!NOTE]
-> SAP makes business objects available to external systems by describing them in response to RFC `RPY_BOR_TREE_INIT`,
-> which Azure Logic Apps issues with no input filter. Logic Apps inspects the output table `BOR_TREE`. The `SHORT_TEXT` field
-> is used for names of business objects. Business objects not returned by SAP in the output table aren't accessible to
-> Azure Logic Apps.
->
-> If you use custom business objects, you must make sure to publish and release these business objects in SAP. Otherwise,
-> SAP doesn't list your custom business objects in the output table `BOR_TREE`. You can't access your custom business
-> objects in Logic Apps until you expose the business objects from SAP.
-
-The following example gets a list of banks using the BAPI method `GETLIST`. This sample contains the business object for a bank, `BUS1011`.
-
-```xml
-<GETLIST xmlns="http://Microsoft.LobServices.Sap/2007/03/Bapi/BUS1011">
- <BANK_CTRY>US</BANK_CTRY>
- <MAX_ROWS>10</MAX_ROWS>
-</GETLIST>
-```
-
-The following example creates a bank object using the `CREATE` method. This example uses the same business object as the previous example, `BUS1011`. When you use the `CREATE` method to create a bank, be sure to commit your changes because this method isn't committed by default.
-
-> [!TIP]
-> Be sure that your XML document follows any validation rules configured in your SAP system. For example, in this sample document, the bank key (`<BANK_KEY>`) needs to be a bank routing number, also known as an ABA number, in the USA.
-
-```xml
-<CREATE xmlns="http://Microsoft.LobServices.Sap/2007/03/Bapi/BUS1011">
- <BANK_ADDRESS>
- <BANK_NAME xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">ExampleBankName</BANK_NAME>
- <REGION xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">ExampleRegionName</REGION>
- <STREET xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">ExampleStreetAddress</STREET>
- <CITY xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">Redmond</CITY>
- </BANK_ADDRESS>
- <BANK_COUNTRY>US</BANK_COUNTRY>
- <BANK_KEY>123456789</BANK_KEY>
-</CREATE>
-```
-
-The following example gets details for a bank using the bank routing number, the value for `<BANK_KEY>`.
-
-```xml
-<GETDETAIL xmlns="http://Microsoft.LobServices.Sap/2007/03/Bapi/BUS1011">
- <BANK_COUNTRY>US</BANK_COUNTRY>
- <BANK_KEY>123456789</BANK_KEY>
-</GETDETAIL>
-```
-
-#### XML samples for IDoc requests
-
-To generate a plain SAP IDoc XML schema, use the **SAP Logon** application and the `WE60` T-Code. Access the SAP documentation through the GUI and generate XML schemas in XSD format for your IDoc types and extensions. For an explanation of generic SAP formats and payloads, and their built-in dialogs, review the [SAP documentation](https://help.sap.com/viewer/index).
-
-This example declares the root node and namespaces. The URI in the sample code, `http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700/Send`, declares the following configuration:
-
-* `/IDoc` is the root node for all IDocs.
-
-* `/3` is the record types version for common segment definitions.
-
-* `/ORDERS05` is the IDoc type.
-
-* `//` is an empty segment because there's no IDoc extension.
-
-* `/700` is the SAP version.
-
-* `/Send` is the action to send the information to SAP.
-
-```xml
-<ns0:Send xmlns:ns0="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700/Send" xmlns:ns3="http://schemas.microsoft.com/2003/10/Serialization" xmlns:ns1="http://Microsoft.LobServices.Sap/2007/03/Types/Idoc/Common/" xmlns:ns2="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700">
- <ns0:idocData>
-```
-
-You can repeat the `idocData` node to send a batch of IDocs in a single call. In the example below, there's one control record, `EDI_DC40`, and multiple data records.
-
-```xml
-<...>
- <ns0:idocData>
- <ns2:EDI_DC40>
- <ns1:TABNAM>EDI_DC40</ns1:TABNAM>
- <...>
- <ns1:ARCKEY>Cor1908207-5</ns1:ARCKEY>
- </ns2:EDI_DC40>
- <ns2:E2EDK01005>
- <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDK01005</ns2:DATAHEADERCOLUMN_SEGNAM>
- <ns2:CURCY>USD</ns2:CURCY>
- </ns2:E2EDK01005>
- <ns2:E2EDK03>
- <...>
- </ns0:idocData>
-```
-
-The following example is a sample IDoc control record, which uses the prefix `EDI_DC`. You must update the values to match your SAP installation and IDoc type. For example, your IDoc client code may not be `800`. Contact your SAP team to make sure you're using the correct values for your SAP installation.
-
-```xml
-<ns2:EDI_DC40>
- <ns1:TABNAM>EDI_DC40</ns1:TABNAM>
- <ns1:MANDT>800</ns1:MANDT>
- <ns1:DIRECT>2</ns1:DIRECT>
- <ns1:IDOCTYP>ORDERS05</ns1:IDOCTYP>
- <ns1:CIMTYP></ns1:CIMTYP>
- <ns1:MESTYP>ORDERS</ns1:MESTYP>
- <ns1:STD>X</ns1:STD>
- <ns1:STDVRS>004010</ns1:STDVRS>
- <ns1:STDMES></ns1:STDMES>
- <ns1:SNDPOR>SAPENI</ns1:SNDPOR>
- <ns1:SNDPRT>LS</ns1:SNDPRT>
- <ns1:SNDPFC>AG</ns1:SNDPFC>
- <ns1:SNDPRN>ABAP1PXP1</ns1:SNDPRN>
- <ns1:SNDLAD></ns1:SNDLAD>
- <ns1:RCVPOR>BTSFILE</ns1:RCVPOR>
- <ns1:RCVPRT>LI</ns1:RCVPRT>
-</ns2:EDI_DC40>
-```
-
-The following example is a sample data record with plain segments. This example uses the SAP date format. Strong-typed documents can use native XML date formats, such as `2020-12-31 23:59:59`.
-
-```xml
-<ns2:E2EDK01005>
- <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDK01005</ns2:DATAHEADERCOLUMN_SEGNAM>
- <ns2:CURCY>USD</ns2:CURCY>
- <ns2:BSART>OR</ns2:BSART>
- <ns2:BELNR>1908207-5</ns2:BELNR>
- <ns2:ABLAD>CC</ns2:ABLAD>
- </ns2:E2EDK01005>
- <ns2:E2EDK03>
- <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDK03</ns2:DATAHEADERCOLUMN_SEGNAM>
- <ns2:IDDAT>002</ns2:IDDAT>
- <ns2:DATUM>20160611</ns2:DATUM>
- </ns2:E2EDK03>
-```
-
-The following example is a data record with grouped segments. The record includes a group parent node, `E2EDKT1002GRP`, and multiple child nodes, including `E2EDKT1002` and `E2EDKT2001`.
-
-```xml
-<ns2:E2EDKT1002GRP>
- <ns2:E2EDKT1002>
- <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDKT1002</ns2:DATAHEADERCOLUMN_SEGNAM>
- <ns2:TDID>ZONE</ns2:TDID>
- </ns2:E2EDKT1002>
- <ns2:E2EDKT2001>
- <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDKT2001</ns2:DATAHEADERCOLUMN_SEGNAM>
- <ns2:TDLINE>CRSD</ns2:TDLINE>
- </ns2:E2EDKT2001>
-</ns2:E2EDKT1002GRP>
-```
-
-The recommended method is to create an IDoc identifier for use with tRFC. You can set this transaction identifier, `tid`, using the [Send IDoc operation](/connectors/sap/#send-idoc) in the SAP connector API.
-
-The following example is an alternative method to set the transaction identifier, or `tid`. In this example, the last data record segment node and the IDoc data node are closed. Then, the GUID, `guid`, is used as the tRFC identifier to detect duplicates.
-
-```xml
- </E2STZUM002GRP>
- </idocData>
- <guid>8820ea40-5825-4b2f-ac3c-b83adc34321c</guid>
-</Send>
-```
-
-### Add the Request trigger
-
-1. In the Azure portal, create a blank logic app, which opens the workflow designer.
-
-1. In the search box, enter `http request` as your filter. From the **Triggers** list, select **When a HTTP request is received**.
-
- ![Screenshot that shows adding the Request trigger.](./media/logic-apps-using-sap-connector/add-http-trigger-logic-app.png)
-
-1. Now save your logic app so you can generate an endpoint URL for your logic app workflow. On the designer toolbar, select **Save**.
-
- The endpoint URL now appears in your trigger, for example:
-
- ![Screenshot that shows generating the endpoint URL.](./media/logic-apps-using-sap-connector/generate-http-endpoint-url.png)
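-
-   In the underlying workflow definition language, the Request trigger looks similar to the following minimal sketch, where the empty `inputs` object omits an optional request schema:
-
-   ```json
-   "triggers": {
-     "manual": {
-       "type": "Request",
-       "kind": "Http",
-       "inputs": {}
-     }
-   }
-   ```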
-
-### Add an SAP action to generate schemas
-
-1. In the workflow designer, under the trigger, select **New step**.
-
- ![Screenshot that shows adding a new step to logic app workflow.](./media/logic-apps-using-sap-connector/add-sap-action-logic-app.png)
-
-1. In the search box, enter `generate schemas sap` as your filter. From the **Actions** list, select **Generate schemas**.
-
- ![Screenshot that shows adding the "Generate schemas" action to workflow.](./media/logic-apps-using-sap-connector/select-sap-schema-generator-action.png)
-
- Or, you can select the **Enterprise** tab, and select the SAP action.
-
- ![Screenshot that shows selecting the "Generate schemas" action from the Enterprise tab.](./media/logic-apps-using-sap-connector/select-sap-schema-generator-ent-tab.png)
-
-1. If your connection already exists, continue with the next step so you can set up your SAP action. However, if you're prompted for connection details, provide the information so that you can create a connection to your on-premises SAP server now.
-
- 1. Provide a name for the connection.
-
- 1. In the **Data Gateway** section, under **Subscription**, first select the Azure subscription for the data gateway resource that you created in the Azure portal for your data gateway installation.
-
- 1. Under **Connection Gateway**, select your data gateway resource in Azure.
-
- 1. Continue providing information about the connection. For the **Logon Type** property, follow the step based on whether the property is set to **Application Server** or **Group**:
-
- * For **Application Server**, these properties, which usually appear optional, are required:
-
- ![Screenshot that shows creating a connection for SAP Application server](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
-
- * For **Group**, these properties, which usually appear optional, are required:
-
- ![Screenshot that shows creating a connection for SAP Message server](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
-
- 1. When you're finished, select **Create**.
-
- Azure Logic Apps sets up and tests your connection to make sure that the connection works properly.
-
-1. Provide the path to the artifact for which you want to generate the schema.
-
- You can select the SAP action from the file picker:
-
- ![Screenshot that shows selecting an SAP action.](./media/logic-apps-using-sap-connector/select-SAP-action-schema-generator.png)
-
- Or, you can manually enter the action:
-
- ![Screenshot that shows manually entering an SAP action.](./media/logic-apps-using-sap-connector/manual-enter-SAP-action-schema-generator.png)
-
- To generate schemas for more than one artifact, provide the SAP action details for each artifact, for example:
-
- ![Screenshot that shows selecting "Add new item".](./media/logic-apps-using-sap-connector/schema-generator-array-pick.png)
-
- ![Screenshot that shows two items.](./media/logic-apps-using-sap-connector/schema-generator-example.png)
-
- For more information about the SAP action, review [Message schemas for IDoc operations](/biztalk/adapters-and-accelerators/adapter-sap/message-schemas-for-idoc-operations).
-
-1. Save your logic app workflow. On the designer toolbar, select **Save**.
-
-By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. Learn more about the [Safe Typing option](#safe-typing).
-
-### Test your workflow
-
-1. On the designer toolbar, select **Run** to trigger a run for your logic app workflow.
-
-1. Open the run, and check the outputs for the **Generate schemas** action.
+> Extended logging might affect your workflow's performance if it's always enabled. As a best practice,
+> turn off extended log files after you finish analyzing and troubleshooting an issue.
- The outputs show the generated schemas for the specified list of messages.
+### Capture ETW events
-### Upload schemas to an integration account
+As an optional advanced logging task, you can directly capture ETW events, and then [consume the data in Azure Diagnostics in Event Hubs](../azure-monitor/agents/diagnostics-extension-stream-event-hubs.md) or [collect your data to Azure Monitor Logs](../azure-monitor/agents/diagnostics-extension-logs.md). For more information, review the [best practices for collecting and storing data](/azure/architecture/best-practices/monitoring#collecting-and-storing-data).
-Optionally, you can download or store the generated schemas in repositories, such as blob storage or an integration account. Integration accounts provide a first-class experience with other XML actions, so this example shows how to upload schemas to an integration account for the same logic app workflow by using the Azure Resource Manager connector.
+To work with the resulting ETL files, you can use [PerfView](https://github.com/Microsoft/perfview/blob/master/README.md), or you can write your own program. The following walkthrough uses PerfView:
-> [!NOTE]
->
-> Schemas use base64-encoded format. To upload schemas to an integration account, you must decode them first
-> by using the `base64ToString()` function. The following example shows the code for the `properties` element:
->
-> ```json
-> "properties": {
-> "Content": "@base64ToString(items('For_each')?['Content'])",
-> "ContentType": "application/xml",
-> "SchemaType": "Xml"
-> }
-> ```
+1. In the PerfView menu, select **Collect** &gt; **Collect** to capture the events.
-1. In the workflow designer, under the trigger, select **New step**.
+1. In the **Additional Provider** parameter, enter `*Microsoft-LobAdapter` to specify the SAP provider to capture SAP Adapter events. If you don't specify this information, your trace only includes general ETW events.
-1. In the search box, enter `resource manager` as your filter. Select **Create or update a resource**.
+1. Keep the other default settings. If you want, you can change the file name or location in the **Data File** parameter.
- ![Screenshot that shows selecting an Azure Resource Manager action.](./media/logic-apps-using-sap-connector/select-azure-resource-manager-action.png)
+1. Select **Start Collection** to begin your trace.
-1. Enter the details for the action, including your Azure subscription, Azure resource group, and integration account. To add SAP tokens to the fields, click inside the boxes for those fields, and select from the dynamic content list that appears.
+1. After you've reproduced your issue or collected enough analysis data, select **Stop Collection**.
- 1. Open the **Add new parameter** list, and select the **Location** and **Properties** fields.
+1. To share your data with another party, such as Azure support engineers, compress the ETL file.
- 1. Provide details for these new fields as shown in this example.
+1. To view the content of your trace:
- ![Screenshot that shows entering details for the Azure Resource Manager action.](./media/logic-apps-using-sap-connector/azure-resource-manager-action.png)
+ 1. In PerfView, select **File** &gt; **Open** and select the ETL file you just generated.
- The SAP **Generate schemas** action generates schemas as a collection, so the designer automatically adds a **For each** loop to the action. Here's an example that shows how this action appears:
+ 1. In the PerfView sidebar, select the **Events** section under your ETL file.
- ![Screenshot that shows the Azure Resource Manager action with a "for each" loop.](./media/logic-apps-using-sap-connector/azure-resource-manager-action-foreach.png)
+ 1. Under **Filter**, filter by `Microsoft-LobAdapter` to only view relevant events and gateway processes.
-1. Save your logic app workflow. On the designer toolbar, select **Save**.
+<a name="test-workflow-logging"></a>
### Test your workflow
-1. On the designer toolbar, select **Run** to manually trigger your logic app workflow.
+Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
-1. After a successful run, go to the integration account, and check that the generated schemas exist.
+### [Consumption](#tab/consumption)
-<a name="enable-secure-network-communications"></a>
+1. If your Consumption logic app resource isn't already enabled, on your logic app menu, select **Overview**. On the toolbar, select **Enable**.
-## Enable Secure Network Communications (SNC)
+1. On the designer toolbar, select **Run Trigger** > **Run** to manually start your workflow.
-Before you start, make sure that you met the previously listed [prerequisites](#prerequisites), which apply only when you use the data gateway, and your logic app workflow runs in multi-tenant Azure:
+1. To trigger your workflow, send a message from your SAP system.
-* Make sure the on-premises data gateway is installed on a computer that's in the same network as your SAP system.
+1. Return to your logic app's **Overview** pane. Under **Runs history**, find any new runs for your workflow.
-* For SSO, the data gateway is running as a user that's mapped to an SAP user.
+1. Open the most recent run, which shows a manual run. Find and review the trigger outputs section.
-* The SNC library that provides the additional security functions is installed on the same machine as the data gateway. Some examples include [sapseculib](https://help.sap.com/saphelp_nw74/helpdata/en/7a/0755dc6ef84f76890a77ad6eb13b13/frameset.htm), Kerberos, and NTLM.
+### [Standard](#tab/standard)
- To enable SNC for your requests to or from the SAP system, select the **Use SNC** check box in the SAP connection and provide these properties:
+1. If your Standard logic app resource is stopped or disabled, from your workflow, go to the logic app resource level, and select **Overview**. On the toolbar, select **Start**.
- ![Screenshot that shows setting up SNC in an SAP connection.](./media/logic-apps-using-sap-connector/configure-sapsnc.png)
+1. Return to the workflow level. On the workflow menu, select **Overview**. On the toolbar, select **Run** > **Run** to manually start your workflow.
- | Property | Description |
- |-| |
- | **SNC Library Path** | The SNC library name, a path relative to the NCo installation location, or an absolute path. Examples are `sapsnc.dll`, `.\security\sapsnc.dll`, or `c:\security\sapsnc.dll`. |
- | **SNC SSO** | When you connect through SNC, the SNC identity is typically used for authenticating the caller. Another option is to override so that user and password information can be used for authenticating the caller, but the line is still encrypted. |
- | **SNC My Name** | In most cases, you can omit this property. The installed SNC solution usually knows its own SNC name. Only for solutions that support multiple identities, you might need to specify the identity to be used for this particular destination or server. |
- | **SNC Partner Name** | The name for the back-end SNC. |
- | **SNC Quality of Protection** | The quality of service to be used for SNC communication of this particular destination or server. The default value is defined by the back-end system. The maximum value is defined by the security product used for SNC. |
- |||
+1. To trigger your workflow, send a message from your SAP system.
- > [!NOTE]
- > Don't set the environment variables SNC_LIB and SNC_LIB_64 on the machine where you have the data gateway
- > and the SNC library. If set, they take precedence over the SNC library value passed through the connector.
-
-## Safe typing
+1. Return to your workflow's **Overview** pane. Under **Run History**, find any new runs for your workflow.
+By default, when you create your SAP connection, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. If you choose **Safe Typing**, the DATS type and TIMS type in SAP are treated as strings rather than as their XML equivalents, `xs:date` and `xs:time`, where `xmlns:xs="http://www.w3.org/2001/XMLSchema"`. Safe typing affects the behavior for all schema generation, for the payload that's sent, for the response that's received, and for the trigger.
+1. Open the most recent run, which shows a manual run. Find and review the trigger outputs section.
-When strong typing is used (**Safe Typing** isn't enabled), the schema maps the DATS and TIMS types to more straightforward XML types:
-
-```xml
-<xs:element minOccurs="0" maxOccurs="1" name="UPDDAT" nillable="true" type="xs:date"/>
-<xs:element minOccurs="0" maxOccurs="1" name="UPDTIM" nillable="true" type="xs:time"/>
-```
+### [ISE](#tab/ise)
-When you send messages using strong typing, the DATS and TIMS response complies with the matching XML type format:
+See the steps for [SAP logging for Consumption logic app workflows](?tabs=consumption#test-workflow-logging).
-```xml
-<DATE>9999-12-31</DATE>
-<TIME>23:59:59</TIME>
-```
-
-When **Safe Typing** is enabled, the schema maps the DATS and TIMS types to XML string fields with length restrictions only, for example:
-
-```xml
-<xs:element minOccurs="0" maxOccurs="1" name="UPDDAT" nillable="true">
- <xs:simpleType>
- <xs:restriction base="xs:string">
- <xs:maxLength value="8" />
- </xs:restriction>
- </xs:simpleType>
-</xs:element>
-<xs:element minOccurs="0" maxOccurs="1" name="UPDTIM" nillable="true">
- <xs:simpleType>
- <xs:restriction base="xs:string">
- <xs:maxLength value="6" />
- </xs:restriction>
- </xs:simpleType>
-</xs:element>
-```
-
-When messages are sent with **Safe Typing** enabled, the DATS and TIMS response looks like this example:
-
-```xml
-<DATE>99991231</DATE>
-<TIME>235959</TIME>
-```
+ ## Send SAP telemetry for on-premises data gateway to Azure Application Insights
-With the August 2021 update for the on-premises data gateway, SAP connector operations can send telemetry data from the SAP client library and traces from the Microsoft SAP Adapter to [Application Insights](../azure-monitor/app/app-insights-overview.md), which is a capability in Azure Monitor. This telemetry primarily includes the following data:
+With the August 2021 update for the on-premises data gateway, SAP connector operations can send telemetry data from the SAP NCo client library and traces from the Microsoft SAP Adapter to [Application Insights](../azure-monitor/app/app-insights-overview.md), which is a capability in Azure Monitor. This telemetry primarily includes the following data:
* Metrics and traces based on SAP NCo metrics and monitors.

* Traces from Microsoft SAP Adapter.
-### Metrics and traces from SAP client library
+### Metrics and traces from SAP NCo client library
*Metrics* are numeric values that might or might not vary over a time period, based on the usage and availability of resources on the on-premises data gateway. You can use these metrics to better understand system health and to create alerts about the following activities:
-* Whether system health is declining.
-
+* System health decline.
* Unusual events.
+* Heavy system load.
-* Heavy load on your system.
-
-This information is sent to the Application Insights table, `customMetrics`. By default, metrics are sent at 30-second intervals.
+This information is sent to the Application Insights table named **customMetrics**. By default, metrics are sent at 30-second intervals.
SAP NCo metrics and traces are based on SAP NCo metrics, specifically the following NCo classes:

* RfcDestinationMonitor
* RfcConnectionMonitor
* RfcServerMonitor
* RfcRepositoryMonitor

For more information about the metrics that each class provides, review the [SAP NCo documentation (sign-in required)](https://support.sap.com/en/product/connectors/msnet.html#section_512604546).
-*Traces* include text information that is used with metrics. This information is sent to the Application Insights table named `traces`. By default, traces are sent at 10-minute intervals.
+*Traces* include text information that is used with metrics. This information is sent to the Application Insights table named **traces**. By default, traces are sent at 10-minute intervals.
### Set up SAP telemetry for Application Insights
To enable sending SAP telemetry to Application Insights, follow these steps:
1. Download the NuGet package for **Microsoft.ApplicationInsights.EventSourceListener.dll** from this location: [https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/2.14.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/2.14.0).
-1. Add the downloaded file to your on-premises data gateway installation directory, for example, "C:\Program Files\On-Premises Data Gateway".
+1. Add the downloaded file to your on-premises data gateway installation directory, for example, **C:\Program Files\On-Premises Data Gateway**.
1. In your on-premises data gateway installation directory, check that the **Microsoft.ApplicationInsights.dll** file has the same version number as the **Microsoft.ApplicationInsights.EventSourceListener.dll** file that you added. The gateway currently uses version 2.14.0.
The following screenshot shows the Azure portal with Application Insights, which is open to the **Logs** pane:
- [![Screenshot showing the Azure portal with Application Insights open to the "Logs" pane for creating queries.](./media/logic-apps-using-sap-connector/application-insights-query-panel.png)](./media/logic-apps-using-sap-connector/application-insights-query-panel.png#lightbox)
+ [![Screenshot shows Azure portal with Application Insights open to the "Logs" pane for creating queries.](./media/logic-apps-using-sap-connector/application-insights-query-panel.png)](./media/logic-apps-using-sap-connector/application-insights-query-panel.png#lightbox)
1. On the **Logs** pane, you can create a [query](/azure/data-explorer/kusto/query/) using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/concepts/) that's based on your specific requirements.
The following screenshot shows the example query's metrics results table:
- [![Screenshot showing Application Insights with the metrics results table.](./media/logic-apps-using-sap-connector/application-insights-metrics.png)](./media/logic-apps-using-sap-connector/application-insights-metrics.png#lightbox)
+ [![Screenshot shows Application Insights with the metrics results table.](./media/logic-apps-using-sap-connector/application-insights-metrics.png)](./media/logic-apps-using-sap-connector/application-insights-metrics.png#lightbox)
* **MaxUsedCount** is "The maximal number of client connections that were simultaneously used by the monitored destination", as described in the [SAP NCo documentation (sign-in required)](https://support.sap.com/en/product/connectors/msnet.html#section_512604546). You can use this value to understand the number of simultaneously open connections.
- * The **valueCount** column shows **2** for each reading because metrics are generated at 30-second intervals, and Application Insights aggregates these metrics by the minute.
+ * The **valueCount** column shows **2** for each reading because metrics are generated at 30-second intervals. Application Insights aggregates these metrics by the minute.
* The **DestinationName** column contains a character string that is a Microsoft SAP Adapter internal name.
You can also create metric charts or alerts using those capabilities in Application Insights, for example:
-[![Screenshot showing Application Insights with the results in chart format.](./media/logic-apps-using-sap-connector/application-insights-metrics-chart.png)](./media/logic-apps-using-sap-connector/application-insights-metrics-chart.png#lightbox)
+[![Screenshot shows Application Insights with the results in chart format.](./media/logic-apps-using-sap-connector/application-insights-metrics-chart.png)](./media/logic-apps-using-sap-connector/application-insights-metrics-chart.png#lightbox)
### Traces from Microsoft SAP Adapter
The following screenshot shows the example query's traces results table:
-[![Screenshot showing Application Insights with the traces results table.](./media/logic-apps-using-sap-connector/application-insights-traces.png)](./media/logic-apps-using-sap-connector/application-insights-traces.png#lightbox)
-
-## Advanced scenarios
-
-### Change language headers
-
-When you connect to SAP from Logic Apps, the default language for the connection is English. You can set the language for your connection by using the [standard HTTP header `Accept-Language`](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4) with your inbound requests.
-
-> [!TIP]
-> Most web browsers add an `Accept-Language` header based on the user's settings. The web browser applies this header when you create a new SAP connection in the workflow designer. So, either update your web browser's settings to use your preferred language, or create your SAP connection by using Azure Resource Manager instead of the workflow designer.
-
-For example, you can send a request with the `Accept-Language` header to your logic app workflow by using the **Request** trigger. All the actions in your logic app workflow receive the header. Then, SAP uses the specified languages in its system messages, such as BAPI error messages.
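-
-The following sketch shows one way that a workflow action can read the language header that the caller sent to the Request trigger. The `Compose` action and its name are illustrative assumptions:
-
-```json
-"Compose_caller_language": {
-  "type": "Compose",
-  "inputs": "@triggerOutputs()['headers']?['Accept-Language']",
-  "runAfter": {}
-}
-```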
-
-The SAP connection parameters for a logic app workflow don't have a language property. So, if you use the `Accept-Language` header, you might get the following error: **Please check your account info and/or permissions and try again.** In this case, check the SAP component's error logs instead. The error actually happens in the SAP component that uses the header, so you might get one of these error messages:
-
-* `"SAP.Middleware.Connector.RfcLogonException: Select one of the installed languages"`
-
-* `"SAP.Middleware.Connector.RfcAbapMessageException: Select one of the installed languages"`
-
-### Confirm transaction explicitly
-
-When you send transactions to SAP from Azure Logic Apps, this exchange happens in two steps as described in the SAP document, [Transactional RFC Server Programs](https://help.sap.com/doc/saphelp_nwpi71/7.1/22/042ad7488911d189490000e829fbbd/content.htm?no_cache=true). By default, the **Send to SAP** action handles both the steps for the function transfer and for the transaction confirmation in a single call. The SAP connector gives you the option to decouple these steps. You can send an IDoc and, rather than automatically confirming the transaction, use the explicit **\[IDOC] Confirm transaction ID** action.
-
-This capability to decouple the transaction ID confirmation is useful when you don't want to duplicate transactions in SAP, for example, in scenarios where failures might happen due to causes such as network issues. When the **Send to SAP** action separately confirms the transaction ID, the SAP system completes the transaction only once.
-
-Here's an example that shows this pattern:
-
-1. Create a blank logic app workflow, and add the Request trigger.
-
-1. From the SAP connector, add the **\[IDOC] Send document to SAP** action. Provide the details for the IDoc that you send to your SAP system.
-
-1. To explicitly confirm the transaction ID in a separate step, in the **Confirm TID** field, select **No**. For the optional **Transaction ID GUID** field, you can either manually specify the value or have the connector automatically generate and return this GUID in the response from the **\[IDOC] Send document to SAP** action.
-
- ![Screenshot that shows the "[IDOC] Send document to SAP" action properties](./media/logic-apps-using-sap-connector/send-idoc-action-details.png)
-
-1. To explicitly confirm the transaction ID, add the **\[IDOC] Confirm transaction ID** action, making sure to [avoid sending duplicate IDocs to SAP](#avoid-sending-duplicate-idocs). Click inside the **Transaction ID** box so that the dynamic content list appears. From that list, select the **Transaction ID** value that's returned from the **\[IDOC] Send document to SAP** action.
-
- ![Screenshot that shows the "Confirm transaction ID" action](./media/logic-apps-using-sap-connector/explicit-transaction-id.png)
-
- After this step runs, the current transaction is marked complete at both ends, on the SAP connector side and on SAP system side.
-
-#### Avoid sending duplicate IDocs
-
-If you experience an issue with duplicate IDocs being sent to SAP from your logic app workflow, follow these steps to create a string variable to serve as your IDoc transaction identifier. Creating this transaction identifier helps prevent duplicate network transmissions when there are issues such as temporary outages, network issues, or lost acknowledgments.
-
-> [!NOTE]
-> SAP systems forget a transaction identifier after a specified time, or 24 hours by default.
-> As a result, SAP never fails to confirm a transaction identifier if the ID or GUID is unknown.
-> If confirmation for a transaction identifier fails, this failure indicates that communication
-> with the SAP system failed before SAP was able to acknowledge the confirmation.
-
-1. In the workflow designer, add the action **Initialize variable** to your logic app workflow.
-
-1. In the editor for the action **Initialize variable**, configure the following settings, and then save your changes. A sketch of the resulting action appears after these steps.
-
- 1. For **Name**, enter a name for your variable. For example, `IDOCtransferID`.
-
- 1. For **Type**, select **String** as the variable type.
-
- 1. For **Value**, select the text box **Enter initial value** to open the dynamic content menu.
-
- 1. Select the **Expressions** tab. In the list of functions, enter the function `guid()`.
-
- 1. Select **OK** to save your changes. The **Value** field is now set to the `guid()` function, which generates a GUID.
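-
-   After these steps, the **Initialize variable** action in the underlying workflow definition looks similar to the following sketch, where the action name is an illustrative assumption:
-
-   ```json
-   "Initialize_variable": {
-     "type": "InitializeVariable",
-     "inputs": {
-       "variables": [
-         {
-           "name": "IDOCtransferID",
-           "type": "string",
-           "value": "@{guid()}"
-         }
-       ]
-     },
-     "runAfter": {}
-   }
-   ```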
-
-1. After the **Initialize variable** action, add the action **\[IDOC] Send document to SAP**.
-
-1. In the editor for the action **\[IDOC] Send document to SAP**, configure the following settings. Then, save your changes.
-
- 1. For **IDOC type**, select your message type, and for **Input IDOC message**, specify your message.
-
- 1. For **SAP release version**, select your SAP configuration's values.
-
- 1. For **Record types version**, select your SAP configuration's values.
-
- 1. For **Confirm TID**, select **No**.
-
- 1. Select **Add new parameter list** > **Transaction ID GUID**.
-
- 1. Select the text box to open the dynamic content menu. Under the **Variables** tab, select the name of the variable that you created, for example, `IDOCtransferID`.
-
-1. On the title bar of the action **\[IDOC] Send document to SAP**, select **...** > **Settings**.
-
- For **Retry Policy**, it's recommended to select **Default** &gt; **Done**. However, you can instead configure a custom policy for your specific needs. For custom policies, it's recommended to configure at least one retry to overcome temporary network outages.
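-
-   For example, a custom fixed-interval policy appears as a fragment like the following sketch inside the action's `inputs` in the workflow definition. The count and interval values are illustrative assumptions:
-
-   ```json
-   "retryPolicy": {
-     "type": "fixed",
-     "count": 2,
-     "interval": "PT30S"
-   }
-   ```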
-
-1. After the action **\[IDOC] Send document to SAP**, add the action **\[IDOC] Confirm transaction ID**.
-
-1. In the editor for the action **\[IDOC] Confirm transaction ID**, configure the following settings. Then, save your changes.
-
-1. For **Transaction ID**, enter the name of your variable again. For example, `IDOCtransferID`.
-
-1. Optionally, validate the deduplication in your test environment.
-
- 1. Repeat the **\[IDOC] Send document to SAP** action with the same **Transaction ID** GUID that you used in the previous step.
-
- 1. To validate which IDoc number got assigned after each call to the **\[IDOC] Send document to SAP** action, use the **\[IDOC] Get IDOC list for transaction** action with the same **Transaction ID** and the **Receive** direction.
-
- If the same, single IDoc number is returned for both calls, the IDoc was deduplicated.
-
- When you send the same IDoc twice, you can validate that SAP is able to identify the duplication of the tRFC call and resolve the two calls to a single inbound IDoc message.
-
-## Known issues and limitations
-
-Here are the currently known issues and limitations for the managed (non-ISE) SAP connector:
-
-* In general, the SAP trigger doesn't support data gateway clusters. In some failover cases, the data gateway node that communicates with the SAP system might differ from the active node, which results in unexpected behavior.
-
- * For send scenarios, data gateway clusters in failover mode are supported.
-
- * Data gateway clusters in load-balancing mode aren't supported by stateful [SAP actions](#actions). These actions include **\[BAPI - RFC] Create stateful session**, **\[BAPI] commit transaction**, **\[BAPI] Rollback transaction**, **\[BAPI - RFC] Close stateful session**, and all actions that specify a **Session ID** value. Stateful communications must remain on the same data gateway cluster node.
-
- * For stateful SAP actions, use the data gateway either in non-cluster mode or in a cluster that's set up for failover only.
-
-* The SAP connector currently doesn't support SAP router strings. The on-premises data gateway must exist on the same LAN as the SAP system that you want to connect to.
-
-* In the **\[BAPI] Call method in SAP** action, the auto-commit feature won't commit the BAPI changes if at least one warning exists in the **CallBapiResponse** object returned by the action. To commit BAPI changes despite any warnings, create a session explicitly with the **\[BAPI - RFC] Create stateful session** action, disable the auto-commit feature in the **\[BAPI] Call method in SAP** action, and call the **\[BAPI] Commit transaction** action instead.
-
-* For [logic apps in an ISE](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE-labeled version uses the [ISE message limits](logic-apps-limits-and-config.md#message-size-limits) instead.
-
-## Connector reference
-
-For more information about the SAP connector, review the [connector reference](/connectors/sap/). You can find details about limits, parameters, and returns for the SAP connector, triggers, and actions.
-
-### Triggers
-
- :::column span="1":::
- [**When a message is received from SAP**](/connectors/sap/#when-a-message-is-received)
- :::column-end:::
- :::column span="3":::
- When a message is received from SAP, do something.
- :::column-end:::
-
-### Actions
-
- :::column span="1":::
- [**[BAPI - RFC] Close stateful session**](/connectors/sap/#[bapirfc]-close-stateful-session-(preview))
- :::column-end:::
- :::column span="3":::
- Close an existing stateful connection session to your SAP system.
- :::column-end:::
- :::column span="1":::
- [**[BAPI - RFC] Create stateful session**](/connectors/sap/#[bapirfc]-create-stateful-session-(preview))
- :::column-end:::
- :::column span="3":::
- Create a stateful connection session to your SAP system.
- :::column-end:::
- :::column span="1":::
- [**[BAPI] Call method in SAP**](/connectors/sap/#[bapi]-call-method-in-sap-(preview))
- :::column-end:::
- :::column span="3":::
- Call the BAPI method in your SAP system.
- \
- \
- You must use the following parameters with your call:
- \
- \
- **Business Object** (`businessObject`), which is a searchable drop-down menu.
- \
- \
- **Method** (`method`), which populates the available methods after you've selected a **Business Object**. The available methods vary depending on the selected **Business Object**.
- \
- \
-   **Input BAPI parameters** (`body`), in which you provide the XML document that contains the BAPI method input parameter values for the call, or the URI of the storage blob that contains your BAPI parameters.
- \
- \
- For detailed examples of how to use the **[BAPI] Call method in SAP** action, review the [XML samples of BAPI requests](#xml-samples-for-bapi-requests).
-   \
-   \
- If you're using the workflow designer to edit your BAPI request, you can use the following search functions:
- \
- \
- Select an object in the designer to view a drop-down menu of available methods.
- \
- \
- Filter business object types by keyword using the searchable list provided by the BAPI API call.
- :::column-end:::
- :::column span="1":::
- [**[BAPI] Commit transaction**](/connectors/sap/#[bapi]-commit-transaction-(preview))
- :::column-end:::
- :::column span="3":::
- Commit the BAPI transaction for the session.
- :::column-end:::
- :::column span="1":::
- [**[BAPI] Rollback transaction**](/connectors/sap/#[bapi]-roll-back-transaction-(preview))
- :::column-end:::
- :::column span="3":::
- Roll back the BAPI transaction for the session.
- :::column-end:::
- :::column span="1":::
-   [**[IDOC - RFC] Confirm transaction ID**](/connectors/sap/#[idocrfc]-confirm-transaction-id-(preview))
- :::column-end:::
- :::column span="3":::
- Send the transaction identifier confirmation to SAP.
- :::column-end:::
- :::column span="1":::
- [**[IDOC] Get IDOC list for transaction**](/connectors/sap/#[idoc]-get-idoc-list-for-transaction-(preview))
- :::column-end:::
- :::column span="3":::
- Get a list of IDocs for the transaction by session identifier or transaction identifier.
- :::column-end:::
- :::column span="1":::
- [**[IDOC] Get IDOC status**](/connectors/sap/#[idoc]-get-idoc-status-(preview))
- :::column-end:::
- :::column span="3":::
- Get the status of an IDoc.
- :::column-end:::
- :::column span="1":::
- [**[IDOC] Send document to SAP**](/connectors/sap/#[idoc]-send-document-to-sap-(preview))
- :::column-end:::
- :::column span="3":::
-   Send the IDoc message to your SAP server.
- \
- \
- You must use the following parameters with your call:
- \
- \
- **IDOC type with optional extension** (`idocType`), which is a searchable drop-down menu.
- \
- \
-   **Input IDOC message** (`body`), in which you provide the XML document that contains the IDoc payload, or the URI of the storage blob that contains your IDoc XML document. This document must comply with either the SAP IDoc XML schema according to the WE60 IDoc Documentation, or the generated schema for the matching SAP IDoc action URI.
- \
- \
-   The optional **SAP release version** parameter (`releaseVersion`) populates its available values after you select the IDoc type; the available values depend on the selected IDoc type.
- \
- \
- For detailed examples of how to use the Send IDoc action, review the [walkthrough for sending IDoc messages to your SAP server](#send-idoc-messages-to-sap-server).
- \
- \
- For how to use optional parameter **Confirm TID** (`confirmTid`), review the [walkthrough for confirming the transaction explicitly](#confirm-transaction-explicitly).
- :::column-end:::
- :::column span="1":::
- [**[RFC] Add RFC to transaction**](/connectors/sap/#[rfc]-add-rfc-to-transaction-(preview))
- :::column-end:::
- :::column span="3":::
- Add an RFC call to your transaction.
- :::column-end:::
- :::column span="1":::
- [**[RFC] Call function in SAP**](/connectors/sap/#[rfc]-call-function-in-sap-(preview))
- :::column-end:::
- :::column span="3":::
- Call an RFC operation (sRFC, tRFC, or qRFC) on your SAP system.
- :::column-end:::
- :::column span="1":::
- [**[RFC] Commit transaction**](/connectors/sap/#[rfc]-commit-transaction-(preview))
- :::column-end:::
- :::column span="3":::
- Commit the RFC transaction for the session and/or queue.
- :::column-end:::
- :::column span="1":::
- [**[RFC] Create transaction**](/connectors/sap/#[rfc]-create-transaction-(preview))
- :::column-end:::
- :::column span="3":::
- Create a new transaction by identifier and/or queue name. If the transaction exists, get the details.
- :::column-end:::
- :::column span="1":::
- [**[RFC] Get transaction**](/connectors/sap/#[rfc]-get-transaction-(preview))
- :::column-end:::
- :::column span="3":::
- Get the details of a transaction by identifier and/or queue name. Create a new transaction if none exists.
- :::column-end:::
- :::column span="1":::
- [**Generate schemas**](/connectors/sap/#generate-schemas)
- :::column-end:::
- :::column span="3":::
- Generate schemas for the SAP artifacts for IDoc, BAPI, or RFC.
- :::column-end:::
- :::column span="1":::
- [**Read SAP table**](/connectors/sap/#read-sap-table-(preview))
- :::column-end:::
- :::column span="3":::
- Read an SAP table.
- :::column-end:::
- :::column span="1":::
- [**Send message to SAP**](/connectors/sap/#send-message-to-sap)
- :::column-end:::
- :::column span="3":::
- Send any message type (RFC, BAPI, IDoc) to SAP.
- :::column-end:::
+[![Screenshot shows Application Insights with the traces results table.](./media/logic-apps-using-sap-connector/application-insights-traces.png)](./media/logic-apps-using-sap-connector/application-insights-traces.png#lightbox)
## Next steps
-* [Connect to on-premises systems](logic-apps-gateway-connection.md) from Azure Logic Apps
-* Learn how to validate, transform, and use other message operations with the [Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md)
-* [Managed connectors for Azure Logic Apps](../connectors/managed.md)
-* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
+* [Create example workflows for common SAP scenarios](sap-create-example-scenario-workflows.md)
logic-apps Sap Create Example Scenario Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/sap-create-example-scenario-workflows.md
+
+ Title: Create common SAP workflows
+description: Build workflows for common SAP scenarios in Azure Logic Apps.
+
+ms.suite: integration
++++ Last updated : 05/23/2023++
+# Create workflows for common SAP integration scenarios in Azure Logic Apps
++
+This how-to guide shows how to create example logic app workflows for some common SAP integration scenarios using Azure Logic Apps and the SAP connector.
+
+Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps. This connector is currently in preview and subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](logic-apps-using-sap-connector.md#connector-technical-reference).
+
+## Prerequisites
+
+- Before you start, make sure to [review and meet the SAP connector requirements](logic-apps-using-sap-connector.md#prerequisites) for your specific scenario.
+
+- The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
+
+<a name="receive-messages-sap"></a>
+
+## Receive messages from SAP
+
+The following example logic app workflow triggers when the workflow's SAP trigger receives a message from an SAP server.
+
+<a name="add-sap-trigger"></a>
+
+### Add an SAP trigger
+
+Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and blank workflow in the designer.
+
+1. In the designer, [follow these general steps to add the SAP managed connector trigger named **When a message is received**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+1. If prompted, provide the following [connection information](/connectors/sap/#default-connection) for your on-premises SAP server. When you're done, select **Create**. Otherwise, continue with the next step to set up your SAP trigger.
+
+ | Parameter | Required | Description |
+ |--|-|-|
+ | **Connection name** | Yes | Enter a name for the connection. |
+ | **Data Gateway** | Yes | 1. For **Subscription**, select the Azure subscription for the data gateway resource that you created in the Azure portal for your data gateway installation. <br><br>2. For **Connection Gateway**, select your data gateway resource in Azure. |
+ | **Client** | Yes | The SAP client ID to use for connecting to your SAP server |
+ | **Authentication Type** | Yes | The authentication type to use for your connection, which must be **Basic** (username and password). To create an SNC connection, see [Enable Secure Network Communications (SNC)](logic-apps-using-sap-connector.md?tabs=single-tenant#enable-secure-network-communications). |
+ | **SAP Username** | Yes | The username for your SAP server |
+ | **SAP Password** | Yes | The password for your SAP server |
+ | **Logon Type** | Yes | Select either **Application Server** or **Group** (Message Server), and then configure the corresponding required parameters, even though they appear optional: <br><br>**Application Server**: <br>- **AS Host**: The host name for your SAP Application Server <br>- **AS Service**: The service name or port number for your SAP Application Server <br>- **AS System Number**: Your SAP server's system number, which ranges from 00 to 99 <br><br>**Group**: <br>- **MS Server Host**: The host name for your SAP Message Server <br>- **MS Service Name or Port Number**: The service name or port number for your SAP Message Server <br>- **MS System ID**: The system ID for your SAP server <br>- **MS Logon Group**: The logon group for your SAP server. On your SAP server, you can find or edit the **Logon Group** value by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317). |
+ | **Safe Typing** | No | This option is available for backward compatibility and only checks the string length. By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. Learn more about the [Safe Typing setting](#safe-typing). |
+ | **Use SNC** | No | To create an SNC connection, see [Enable Secure Network Communications (SNC)](logic-apps-using-sap-connector.md?tabs=single-tenant#enable-secure-network-communications). |
+
+ For other optional available connection parameters, see [Default connection information](/connectors/sap/#default-connection).
+
+ After Azure Logic Apps sets up and tests your connection, the trigger information box appears. For more information about any connection problems that might happen, see [Troubleshoot connections](#troubleshoot-connections).
+
+1. Based on your SAP server configuration and scenario, provide the necessary parameter values for the [**When a message is received** trigger](/connectors/sap/#when-a-message-is-received), and add any other available trigger parameters that you want to use in your scenario.
+
+ > [!NOTE]
+ >
+ > This SAP trigger is a webhook-based trigger, not a polling trigger, and doesn't include options to specify
+ > a polling schedule. For example, when you use the managed SAP connector with the on-premises data gateway,
+ > the trigger is called from the data gateway only when a message arrives, so no polling is necessary.
+
+ | Parameter | Required | Description |
+ |--|-|-|
+ | **GatewayHost** | Yes | The registration gateway host for the SAP RFC server |
+ | **GatewayService** | Yes | The registration gateway service for the SAP RFC server |
+ | **ProgramId** | Yes | The registration gateway program ID for the SAP RFC server. <br><br>**Note**: This value is case-sensitive. Make sure that you consistently use the same case format for the **Program ID** value when you configure your logic app workflow and SAP server. Otherwise, when you attempt to send an IDoc to SAP, the tRFC Monitor (T-Code SM58) might show the following errors (links require SAP login): <br><br>- [**Function IDOC_INBOUND_ASYNCHRONOUS not found** (2399329)](https://launchpad.support.sap.com/#/notes/2399329)<br>- [**Non-ABAP RFC client (partner type) not supported** (353597)](https://launchpad.support.sap.com/#/notes/353597) |
+ | **DegreeOfParallelism** | No | The number of calls to process in parallel. To add this parameter and change the value, from the **Add new parameter** list, select **DegreeOfParallelism**, and enter the new value. |
+ | **SapActions** | No | Filter the messages that you receive from your SAP server based on a [list of SAP actions](#filter-with-sap-actions). To add this parameter, from the **Add new parameter** list, select **SapActions**. In the new **SapActions** section, for the **SapActions - 1** parameter, use the file picker to select an SAP action or manually specify an action. For more information about the SAP action, see [Message schemas for IDoc operations](/biztalk/adapters-and-accelerators/adapter-sap/message-schemas-for-idoc-operations). |
+ | **IDoc Format** | No | The format to use for receiving IDocs. To add this parameter, from the **Add new parameter** list, select **IDoc Format**. <br><br>- To receive IDocs as SAP plain XML, from the **IDoc Format** list, select **SapPlainXml**. <br><br>- To receive IDocs as a flat file, from the **IDoc Format** list, select **FlatFile**. <br><br>- **Note**: If you also use the [Flat File Decode action](logic-apps-enterprise-integration-flatfile.md) in your workflow, in your flat file schema, you have to use the **early_terminate_optional_fields** property and set the value to **true**. This requirement is necessary because the flat file IDoc data record that's sent by SAP on the tRFC call named `IDOC_INBOUND_ASYNCHRONOUS` isn't padded to the full SDATA field length. Azure Logic Apps provides the flat file IDoc original data without padding as received from SAP. Also, when you combine this SAP trigger with the Flat File Decode action, the schema that's provided to the action must match. |
+ | **Receive IDOCS with unreleased segments** | No | Receive IDocs with or without unreleased segments. To add this parameter and change the value, from the **Add new parameter** list, select **Receive IDOCS with unreleased segments**, and select **Yes** or **No**. |
+ | **SncPartnerNames** | No | The list of SNC partners that have permissions to call the trigger at the SAP client library level. Only the listed partners are authorized by the SAP server's SNC connection. To add this parameter, from the **Add new parameter** list, select **SncPartnerNames**. Make sure to enter each name separated by a vertical bar (**\|**). |
+
+   The following example shows an SAP managed trigger with a basic configuration in a Consumption workflow:
+
+ ![Screenshot shows basically configured SAP managed connector trigger in Consumption workflow.](./media/sap-create-example-scenario-workflows/trigger-sap-managed-consumption.png)
+
+ The following example shows an SAP managed trigger where you can filter messages by selecting SAP actions:
+
+ ![Screenshot shows selecting an SAP action to filter messages in a Consumption workflow.](./media/sap-create-example-scenario-workflows/trigger-sap-select-action-managed-consumption.png)
+
+ Or, by manually specifying an action:
+
+ ![Screenshot shows manually entering the SAP action to filter messages in a Consumption workflow.](./media/sap-create-example-scenario-workflows/trigger-sap-manual-enter-action-managed-consumption.png)
+
+   The following example shows how the trigger appears when you set it up to receive more than one message:
+
+ ![Screenshot shows example trigger that receives multiple messages in a Consumption workflow.](./media/sap-create-example-scenario-workflows/trigger-sap-multiple-message-managed-consumption.png)
+
+1. Save your workflow so you can start receiving messages from your SAP server. On the designer toolbar, select **Save**.
+
+ Your workflow is now ready to receive messages from your SAP server.
+
+1. After the trigger fires and runs your workflow, review the workflow's trigger history to confirm that trigger registration succeeded.
+
+### [Standard](#tab/standard)
+
+The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
+
+> [!NOTE]
+>
+> The SAP built-in trigger is a non-polling, Azure Functions-based trigger, not a SOAP-based,
+> webhook trigger like the SAP managed trigger. So, the trigger doesn't include options to specify
+> a polling schedule. The trigger is called only when a message arrives, so no polling is necessary.
+>
+> To send a response following the SAP built-in trigger, make sure to add the
+> [**Respond to SAP server** action](/azure/logic-apps/connectors/built-in/reference/sap/#respond-to-sap-server.-(preview))
+> to your workflow, rather than use the **Response** action, which applies only to workflows that start with the **Request**
+> trigger named **When a HTTP request is received** and follow the Request-Response pattern.
+
+1. In Visual Studio Code, open your Standard logic app and a blank workflow in the designer.
+
+1. In the designer, [follow these general steps to find and add the SAP built-in trigger named **Register SAP RFC server for trigger**](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+1. If prompted, provide the following connection information for your on-premises SAP server. When you're done, select **Create**. Otherwise, continue with the next step to set up your SAP trigger.
+
+ | Parameter | Required | Description |
+ |--|-|-|
+ | **Connection name** | Yes | Enter a name for the connection. |
+ | **Client** | Yes | The SAP client ID to use for connecting to your SAP server |
+ | **Authentication Type** | Yes | The authentication type to use for your connection. To create an SNC connection, see [Enable Secure Network Communications (SNC)](logic-apps-using-sap-connector.md?tabs=single-tenant#enable-secure-network-communications). |
+ | **SAP Username** | Yes | The username for your SAP server |
+ | **SAP Password** | Yes | The password for your SAP server |
+ | **Logon Type** | Yes | Select either **Application Server** or **Group**, and then configure the corresponding required parameters, even though they appear optional: <br><br>**Application Server**: <br>- **Server Host**: The host name for your SAP Application Server <br>- **Service**: The service name or port number for your SAP Application Server <br>- **System Number**: Your SAP server's system number, which ranges from 00 to 99 <br><br>**Group**: <br>- **Server Host**: The host name for your SAP Message Server <br>- **Service Name or Port Number**: The service name or port number for your SAP Message Server <br>- **System ID**: The system ID for your SAP server <br>- **Logon Group**: The logon group for your SAP server. On your SAP server, you can find or edit the **Logon Group** value by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317). |
+ | **Language** | Yes | The language to use for sending data to your SAP server. The value is either **Default** (English) or one of the [permitted values](/azure/logic-apps/connectors/built-in/reference/sap/#parameters-21). <br><br>**Note**: The SAP built-in connector saves this parameter value as part of the SAP connection parameters. For more information, see [Change language headers for sending data to SAP](#change-language-headers). |
+
+ After Azure Logic Apps sets up and tests your connection, the SAP trigger information box appears. For more information about any connection problems that might happen, see [Troubleshoot connections](#troubleshoot-connections).
+
+1. Based on your SAP server configuration and scenario, provide the following trigger information, and add any available trigger parameters that you want to use in your scenario.
+
+ | Parameter | Required | Description |
+ |--|-|-|
+ | **IDoc Format** | Yes | The format to use for receiving IDocs. <br><br>- To receive IDocs as SAP plain XML, from the **IDoc Format** list, select **SapPlainXml**. <br><br>- To receive IDocs as a flat file, from the **IDoc Format** list, select **FlatFile**. <br><br>**Note**: If you also use the [Flat File Decode action](logic-apps-enterprise-integration-flatfile.md) in your workflow, in your flat file schema, you have to use the **early_terminate_optional_fields** property and set the value to **true**. This requirement is necessary because the flat file IDoc data record that's sent by SAP on the tRFC call named `IDOC_INBOUND_ASYNCHRONOUS` isn't padded to the full SDATA field length. Azure Logic Apps provides the flat file IDoc original data without padding as received from SAP. Also, when you combine this SAP trigger with the Flat File Decode action, the schema that's provided to the action must match. |
+ | **SAP RFC Server Degree of Parallelism** | Yes | The number of calls to process in parallel |
+ | **Allow Unreleased Segment** | Yes | Receive IDocs with or without unreleased segments. From the list, select **Yes** or **No**. |
+ | **SAP Gateway Host** | Yes | The registration gateway host for the SAP RFC server |
+ | **SAP Gateway Service** | Yes | The registration gateway service for the SAP RFC server |
+ | **SAP RFC Server Program ID** | Yes | The registration gateway program ID for the SAP RFC server. <br><br>**Note**: This value is case-sensitive. Make sure that you consistently use the same case format for the **Program ID** value when you configure your logic app workflow and SAP server. Otherwise, when you attempt to send an IDoc to SAP, the tRFC Monitor (T-Code SM58) might show the following errors (links require SAP login): <br><br>- [**Function IDOC_INBOUND_ASYNCHRONOUS not found** (2399329)](https://launchpad.support.sap.com/#/notes/2399329)<br>- [**Non-ABAP RFC client (partner type) not supported** (353597)](https://launchpad.support.sap.com/#/notes/353597) |
+ | **SAP SNC partners names** | No | The list of SNC partners that have permissions to call the trigger at the SAP client library level. Only the listed partners are authorized by the SAP server's SNC connection. To add this parameter, from the **Add new parameter** list, select **SAP SNC partners names**. Make sure to enter each name separated by a vertical bar (**\|**). |
+
+   The following example shows an SAP built-in trigger with a basic configuration in a Standard workflow:
+
+ ![Screenshot shows basically configured SAP built-in connector trigger in Standard workflow.](./media/sap-create-example-scenario-workflows/trigger-sap-built-in-standard.png)
+
+1. Save your workflow so you can start receiving messages from your SAP server. On the designer toolbar, select **Save**.
+
+ Your workflow is now ready to receive messages from your SAP server.
+++
+<a name="receive-idoc-packets-sap"></a>
+
+## Receive IDoc packets from SAP
+
+To receive IDoc packets, which are batches or groups of IDocs, the SAP trigger doesn't need extra configuration. You can set up SAP to [send IDocs in packets](https://help.sap.com/viewer/8f3819b0c24149b5959ab31070b64058/7.4.16/4ab38886549a6d8ce10000000a42189c.html). However, to process each item in an IDoc packet after the trigger receives the packet, your workflow has to implement a few more steps that split the packet into individual IDocs.
+
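+For reference, the trigger delivers an IDoc packet as a single XML document: a root element named `Receive` that contains one `idocData` element per IDoc in the packet. The following minimal sketch assumes an ORDERS05 packet with two IDocs; the namespace and record names are illustrative only, as the actual values depend on your IDoc type, version, and release:
+
+```xml
+<!-- Hypothetical trigger output for an IDoc packet (values are illustrative). -->
+<Receive xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//720/Receive">
+  <idocData>
+    <!-- Control record and data records for the first IDoc -->
+    <EDI_DC40>...</EDI_DC40>
+  </idocData>
+  <idocData>
+    <!-- Control record and data records for the second IDoc -->
+    <EDI_DC40>...</EDI_DC40>
+  </idocData>
+</Receive>
+```
+
+For a payload like this sketch, the `namespace-uri(/*)` expression in the following steps returns the `xmlns` value on the root element, and the `idocData` expression returns an array that contains the two `idocData` nodes.
+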
+The following example workflow shows how to extract individual IDocs from a packet by using the [`xpath()` function](workflow-definition-language-functions-reference.md#xpath):
+
+1. Before you start, you need a Consumption or Standard logic app workflow with an SAP trigger. If your workflow doesn't already start with this trigger, follow the previous steps in this guide to [add the SAP trigger that can receive messages to your workflow](#receive-messages-sap).
+
+1. To immediately reply to your SAP server with the SAP request status, add the following response action, based on whether you use an SAP managed trigger or SAP built-in trigger:
+
+ - SAP managed trigger: For this trigger, [add a Response action to your workflow](../connectors/connectors-native-reqres.md#add-a-response-action).
+
+ In the Response action, use one of the following status codes (`statusCode`):
+
+ | Status code | Description |
+ |-|-|
+ | **202 Accepted** | The request was accepted for processing, but processing isn't complete yet. |
+ | **204 No Content** | The server successfully fulfilled the request, and there's no additional content to send in the response payload body. |
+      | **200 OK** | A response with this status code always contains a payload body, even if the server generates a payload body of zero length. |
+
+ - SAP built-in trigger: For this trigger, add the [**Respond to SAP server** action](/azure/logic-apps/connectors/built-in/reference/sap/#respond-to-sap-server.-(preview)) to your workflow.
+
+ > [!NOTE]
+ >
+ > As a best practice, add the response action immediately after the trigger to free up the communication channel with your SAP server.
+
+1. Get the root namespace from the XML IDoc that your workflow receives from SAP.
+
+ 1. To extract this namespace from the XML document and store the namespace in a local string variable, add the **Initialize variable** action.
+
+ 1. Rename the action's title to **Get namespace for root node in received IDoc**.
+
+ 1. Provide a name for the variable, and set the type to **String**.
+
+ 1. In the action's **Value** parameter, select inside the edit box, open the expression or function editor, and create the following expression using the [`xpath()` function](workflow-definition-language-functions-reference.md#xpath):
+
+ `xpath(xml(triggerBody()?['Content']), 'namespace-uri(/*)')`
+
+ **Consumption workflow**
+
+ ![Screenshot shows the expression to get the root node namespace from received IDoc for a Consumption workflow.](./media/sap-create-example-scenario-workflows/get-namespace-expression-consumption.png)
+
+ **Standard workflow**
+
+ ![Screenshot shows the expression to get the root node namespace from received IDoc for a Standard workflow.](./media/sap-create-example-scenario-workflows/get-namespace-expression-standard.png)
+
+      When you're done, the expression resolves and appears in the following format:
+
+ ![Screenshot shows the resolved expression that gets the root node namespace from received IDoc.](./media/sap-create-example-scenario-workflows/get-namespace-expression-resolved.png)
+
+1. To extract an individual IDoc by storing the IDoc collection in a local array variable, follow these steps:
+
+ 1. Add another **Initialize variable** action.
+
+ 1. Rename the action's title to **Get array with IDoc data elements**.
+
+ 1. Provide a name for the variable, and set the type to **Array**.
+
+ The array variable makes each IDoc available for your workflow to process individually by enumerating over the collection.
+
+ 1. In the action's **Value** parameter, select inside the edit box, open the expression or function editor, and create the following `xpath()` expression:
+
+ `xpath(xml(triggerBody()?['Content']), '/*[local-name()="Receive"]/*[local-name()="idocData"]')`
+
+      When you're done, the expression resolves and appears in the following format:
+
+ **Consumption workflow**
+
+ ![Screenshot shows the expression to get an array of IDocs for a Consumption workflow.](./media/sap-create-example-scenario-workflows/get-array-idoc-expression-resolved-consumption.png)
+
+   In this example, the following workflow transfers each IDoc to an SFTP server by using a **Control** action named **For each** and the SFTP-SSH action named **Create file**. Each IDoc must include the root namespace, which is why the file content is wrapped inside a `<Receive></Receive>` element along with the root namespace before the workflow sends the IDoc to the downstream app, an SFTP server in this case. (A sketch of this wrapped format appears after these tabbed steps.)
+
+ ![Screenshot shows sending an IDoc to an SFTP server from a Consumption workflow.](./media/sap-create-example-scenario-workflows/get-idoc-loop-batch-consumption.png)
+
+ > [!NOTE]
+ > For Consumption workflows, this pattern is available as a quickstart template, which you can select
+ > from the template gallery when you create a Consumption logic app resource and blank workflow. Or,
+   > when the workflow designer is open, on the designer toolbar, select **Templates**.
+ >
+ > ![Screenshot that shows selecting the template for getting an IDoc batch.](./media/sap-create-example-scenario-workflows/get-idoc-batch-sap-template-consumption.png)
+
+ **Standard workflow**
+
+ ![Screenshot shows the expression to get an array of IDocs for a Standard workflow.](./media/sap-create-example-scenario-workflows/get-array-idoc-expression-resolved-standard.png)
+
+   In this example, the following workflow transfers each IDoc to an SFTP server by using a **Control** action named **For each** and the SFTP-SSH action named **Create file**. Each IDoc must include the root namespace, which is why the file content is wrapped inside a `<Receive></Receive>` element along with the root namespace before the workflow sends the IDoc to the downstream app, an SFTP server in this case. (A sketch of this wrapped format appears after these tabbed steps.)
+
+ ![Screenshot shows sending an IDoc to an SFTP server from a Standard workflow.](./media/sap-create-example-scenario-workflows/get-idoc-loop-batch-standard.png)
+++
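+
+As mentioned in the previous steps, each extracted IDoc must be re-wrapped before your workflow sends it downstream. The following minimal sketch shows the wrapped file content, assuming the root namespace captured by the earlier `xpath()` step; the namespace value is illustrative only:
+
+```xml
+<!-- Hypothetical wrapped content for one extracted IDoc: a single idocData
+     node inside a Receive element that restores the original root namespace. -->
+<Receive xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//720/Receive">
+  <idocData>
+    <EDI_DC40>...</EDI_DC40>
+  </idocData>
+</Receive>
+```
+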
+<a name="filter-with-sap-actions"></a>
+
+## Filter received messages with SAP actions
+
+If you use the SAP managed connector or ISE-versioned SAP connector, under the trigger in your workflow, set up a way to explicitly filter out any unwanted actions from your SAP server, based on the root node namespace in the received XML payload. You can provide a list (array) with one or more SAP actions. By default, this array is empty, which means that your workflow receives all the messages from your SAP server without filtering. When you set up the array filter, the trigger receives messages only from the specified SAP action types and rejects all other messages from your SAP server. However, this filter doesn't affect whether the typing of the received payload is weak or strong. Any SAP action filtering happens at the level of the SAP Adapter for your on-premises data gateway. For more information, review [how to test sending IDocs to Azure Logic Apps from SAP](logic-apps-using-sap-connector.md#test-sending-idocs-from-sap).
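+
+For example, the filter matches against the root node namespace of each received message. The following minimal sketch assumes an ORDERS05 IDoc; the action URI shown is illustrative, as the exact value depends on your IDoc type, version, and release:
+
+```xml
+<!-- A hypothetical received message. A SapActions filter entry with the same
+     URI as this root namespace accepts the message; any other root namespace
+     is rejected by the trigger. -->
+<Receive xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//720/Receive">
+  <idocData>...</idocData>
+</Receive>
+```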
+
+## Set up asynchronous request-reply pattern for triggers
+
+The SAP managed connector supports Azure's [asynchronous request-reply pattern](/azure/architecture/patterns/async-request-reply) for Azure Logic Apps triggers. You can use this pattern to create successful requests that would otherwise fail with the default synchronous request-reply pattern.
+
+> [!NOTE]
+>
+> In workflows with multiple **Response** actions, all **Response** actions must use the same request-reply pattern.
+> For example, if your workflow uses a switch control with multiple possible **Response** actions, you must set up
+> all the **Response** actions to use the same request-reply pattern, either synchronous or asynchronous.
+
+If you enable an asynchronous response for your **Response** action, your workflow can respond with a **202 Accepted** reply after accepting a request for processing. The reply contains a location header that you can use to retrieve the final state of your request.
+
+To configure an asynchronous request-reply pattern for your workflow using the SAP connector, follow these steps:
+
+1. In the designer, open your logic app workflow. Confirm that your workflow starts with an SAP trigger.
+
+1. In your workflow, find the **Response** action, and open that action's **Settings**.
+
+1. Based on whether you have a Consumption or Standard workflow, follow the corresponding steps:
+
+ - Consumption: Under **Asynchronous Response**, turn the setting from **Off** to **On**, and select **Done**.
+ - Standard: Expand **Networking**, and under **Asynchronous Response**, turn the setting from **Off** to **On**.
+
+1. Save your workflow.
+
+<a name="send-idocs-sap"></a>
+
+## Send IDocs to SAP
+
+To create a logic app workflow that sends an IDoc to an SAP server and returns a response, follow these steps:
+
+1. [Create a logic app workflow that's triggered by an HTTP request.](#add-request-trigger)
+1. [Add an SAP action to your workflow for sending an IDoc to SAP.](#add-sap-action-send-idoc)
+1. [Add a response action to your workflow.](#add-response-action)
+1. [Create a remote function call (RFC) request-response pattern, if you're using an RFC to receive replies from SAP ABAP.](#create-rfc-request-response-pattern)
+1. [Test your workflow.](#test-workflow)
+
+<a name="add-request-trigger"></a>
+
+### Add the Request trigger
+
+To have your workflow receive IDocs from SAP over XML HTTP, you can use the [Request built-in trigger](../connectors/connectors-native-reqres.md). This trigger creates an endpoint with a URL where your SAP server can send HTTP POST requests to your workflow. When your workflow receives these requests, the trigger fires and runs the next step in your workflow.
+
+To receive IDocs over Common Programming Interface Communication (CPIC) as plain XML or as a flat file, see [Receive messages from SAP](#receive-messages-sap).
+
+Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), create a Consumption logic app resource and a blank workflow in the designer.
+
+1. In the designer, [follow these general steps to find and add the Request built-in trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+ ![Screenshot shows the Request trigger for a Consumption workflow.](./media/sap-create-example-scenario-workflows/add-request-trigger-consumption.png)
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+ This step generates an endpoint URL where your trigger can receive requests from your SAP server, for example:
+
+ ![Screenshot shows the Request trigger's generated endpoint URL for receiving requests in a Consumption workflow.](./media/sap-create-example-scenario-workflows/generate-http-endpoint-url-consumption.png)
+
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), create a Standard logic app resource and a blank workflow in the designer.
+
+1. In the designer, [follow these general steps to find and add the Request built-in trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ ![Screenshot shows the Request trigger for a Standard workflow.](./media/sap-create-example-scenario-workflows/add-request-trigger-standard.png)
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+ This step generates an endpoint URL where your trigger can receive requests from your SAP server, for example:
+
+ ![Screenshot shows the Request trigger's generated endpoint URL for receiving requests in a Standard workflow.](./media/sap-create-example-scenario-workflows/generate-http-endpoint-url-standard.png)
+++
+<a name="add-sap-action-send-idoc"></a>
+
+### Add an SAP action to send an IDoc
+
+Next, create an action to send your IDoc to SAP when the workflow's request trigger fires. Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+
+### [Consumption](#tab/consumption)
+
+1. In the workflow designer, under the Request trigger, select **New step**.
+
+1. In the designer, [follow these general steps to find and add the SAP managed action named **Send message to SAP**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+1. If prompted, provide the following [connection information](/connectors/sap/#default-connection) for your on-premises SAP server. When you're done, select **Create**. Otherwise, continue with the next step to set up the SAP action.
+
+ | Parameter | Required | Description |
+ |--|-|-|
+ | **Connection name** | Yes | Enter a name for the connection. |
+ | **Data Gateway** | Yes | 1. For **Subscription**, select the Azure subscription for the data gateway resource that you created in the Azure portal for your data gateway installation. <br><br>2. For **Connection Gateway**, select your data gateway resource in Azure. |
+ | **Client** | Yes | The SAP client ID to use for connecting to your SAP server |
+ | **Authentication Type** | Yes | The authentication type to use for your connection, which must be **Basic** (username and password). To create an SNC connection, see [Enable Secure Network Communications (SNC)](logic-apps-using-sap-connector.md?tabs=single-tenant#enable-secure-network-communications). |
+ | **SAP Username** | Yes | The username for your SAP server |
+ | **SAP Password** | Yes | The password for your SAP server |
+ | **Logon Type** | Yes | Select either **Application Server** or **Group** (Message Server), and then configure the corresponding required parameters, even though they appear optional: <br><br>**Application Server**: <br>- **AS Host**: The host name for your SAP Application Server <br>- **AS Service**: The service name or port number for your SAP Application Server <br>- **AS System Number**: Your SAP server's system number, which ranges from 00 to 99 <br><br>**Group**: <br>- **MS Server Host**: The host name for your SAP Message Server <br>- **MS Service Name or Port Number**: The service name or port number for your SAP Message Server <br>- **MS System ID**: The system ID for your SAP server <br>- **MS Logon Group**: The logon group for your SAP server. On your SAP server, you can find or edit the **Logon Group** value by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317). |
+ | **Safe Typing** | No | This option is available for backward compatibility and only checks the string length. By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. Learn more about the [Safe Typing setting](#safe-typing). |
+ | **Use SNC** | No | To create an SNC connection, see [Enable Secure Network Communications (SNC)](logic-apps-using-sap-connector.md?tabs=single-tenant#enable-secure-network-communications). |
+
+ For other optional available connection parameters, see [Default connection information](/connectors/sap/#default-connection).
+
+ After Azure Logic Apps sets up and tests your connection, the SAP action information box appears. For more information about any connection problems that might happen, see [Troubleshoot connections](#troubleshoot-connections).
+
+ ![Screenshot shows a Consumption workflow with the SAP managed action named Send message to SAP.](./media/sap-create-example-scenario-workflows/sap-send-message-consumption.png)
+
+1. In the **Send message to SAP** action, find and select an available SAP action on your SAP server to send the IDoc.
+
+ The **Send message to SAP** action is generic and can send a message for BAPI, IDoc, RFC, or tRFC, but you must first select the message type and SAP action to use.
+
+ 1. In the **SAP Action** parameter's edit box, select the folder icon. From the list that opens, select **BAPI**, **IDOC**, **RFC**, or **TRFC**. This example selects **IDOC**. If you select a different type, the available SAP actions change based on your selection.
+
+ > [!NOTE]
+ >
+ > If you get a **Bad Gateway (500)** error or **Bad request (400)** error, see [500 Bad Gateway or 400 Bad Request error](#bad-gateway-request).
+
+ ![Screenshot shows selecting IDOC for a Consumption workflow.](./media/sap-create-example-scenario-workflows/sap-send-message-select-idoc-type-consumption.png)
+
+ 1. Browse the SAP action types folders using the arrows to find and select the SAP action that you want to use.
+
+ This example selects **ORDERS** > **ORDERS05** > **720** > **Send**.
+
+ ![Screenshot shows finding an Orders action for a Consumption workflow.](./media/sap-create-example-scenario-workflows/sap-send-message-find-orders-action-consumption.png)
+
+ If you can't find the action you want, you can manually enter a path, for example:
+
+ ![Screenshot shows manually entering a path to an Orders action type for a Consumption workflow.](./media/sap-create-example-scenario-workflows/sap-manually-enter-action-consumption.png)
+
+ > [!TIP]
+ >
+ > For the **SAP Action** parameter, you can use the expression editor to provide the parameter value.
+ > That way, you can use the same SAP action for different message types.
+
+ For more information about IDoc messages, review [Message schemas for IDoc operations](/biztalk/adapters-and-accelerators/adapter-sap/message-schemas-for-idoc-operations).
+
+ 1. In the **Send message to SAP** action, include the body output from the Request trigger.
+
+ 1. In the **Input Message** parameter, select inside the edit box to open the dynamic content list.
+
+ 1. From the dynamic content list, under **When a HTTP request is received**, select **Body**. The **Body** field contains the body output from the Request trigger.
+
+ > [!NOTE]
+ > If the **Body** field doesn't appear in the list, next to the **When a HTTP request is received** label, select **See more**.
+
+ ![Screenshot shows selecting the Request trigger's output named Body for Consumption workflow.](./media/sap-create-example-scenario-workflows/sap-send-message-select-body-consumption.png)
+
+ The **Send message to SAP** action now includes the body content from the Request trigger and sends that output to your SAP server, for example:
+
+ ![Screenshot shows completed SAP action for Consumption workflow.](./media/sap-create-example-scenario-workflows/sap-send-message-complete-consumption.png)
+
+1. Save your workflow.
+
+### [Standard](#tab/standard)
+
+1. In the workflow designer, under the Request trigger, select the plus sign (**+**) > **Add an action**.
+
+1. In the designer, [follow these general steps to find and add the SAP built-in action named **[IDoc] Send document to SAP**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+   Rather than provide a generic action that sends messages of different types, the preview SAP built-in connector provides individual actions for BAPI, IDoc, RFC, and so on. For example, these actions include **[BAPI] Call method in SAP** and **[RFC] Call function in SAP**.
+
+1. If prompted, provide the following connection information for your on-premises SAP server. When you're done, select **Create**. Otherwise, continue with the next step to set up the SAP action.
+
+ | Parameter | Required | Description |
+ |--|-|-|
+ | **Connection name** | Yes | Enter a name for the connection. |
+ | **Client** | Yes | The SAP client ID to use for connecting to your SAP server |
+ | **Authentication Type** | Yes | The authentication type to use for your connection. To create an SNC connection, see [Enable Secure Network Communications (SNC)](logic-apps-using-sap-connector.md?tabs=single-tenant#enable-secure-network-communications). |
+ | **SAP Username** | Yes | The username for your SAP server |
+ | **SAP Password** | Yes | The password for your SAP server |
+ | **Logon Type** | Yes | Select either **Application Server** or **Group**, and then configure the corresponding required parameters, even though they appear optional: <br><br>**Application Server**: <br>- **Server Host**: The host name for your SAP Application Server <br>- **Service**: The service name or port number for your SAP Application Server <br>- **System Number**: Your SAP server's system number, which ranges from 00 to 99 <br><br>**Group**: <br>- **Server Host**: The host name for your SAP Message Server <br>- **Service Name or Port Number**: The service name or port number for your SAP Message Server <br>- **System ID**: The system ID for your SAP server <br>- **Logon Group**: The logon group for your SAP server. On your SAP server, you can find or edit the **Logon Group** value by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317). |
+ | **Language** | Yes | The language to use for sending data to your SAP server. The value is either **Default** (English) or one of the [permitted values](/azure/logic-apps/connectors/built-in/reference/sap/#parameters-21). <br><br>**Note**: The SAP built-in connector saves this parameter value as part of the SAP connection parameters. For more information, see [Change language headers for sending data to SAP](#change-language-headers). |
+
+ After Azure Logic Apps sets up and tests your connection, the SAP action information box appears. For more information about any connection problems that might happen, see [Troubleshoot connections](#troubleshoot-connections).
+
+ ![Screenshot shows a Standard workflow with the SAP built-in action named [IDoc] Send document to SAP.](./media/sap-create-example-scenario-workflows/sap-send-idoc-standard.png)
+
+1. In the **[IDoc] Send document to SAP** action, provide the information required for the action to send an IDoc to your SAP server, for example:
+
+   1. For the **IDoc Format** parameter, select **SapPlainXml**.
+
+ 1. In the **Plain XML IDoc** parameter, select inside the edit box, and open the dynamic content list (lightning icon).
+
+ 1. From the dynamic content list, under **When a HTTP request is received**, select **Body**. The **Body** field contains the body output from the Request trigger.
+
+ > [!NOTE]
+ > If the **Body** field doesn't appear in the list, next to the **When a HTTP request is received** label, select **See more**.
+
+ ![Screenshot shows selecting the Request trigger's output named Body for Standard workflow.](./media/sap-create-example-scenario-workflows/sap-send-idoc-select-body-standard.png)
+
+ The **[IDoc] Send document to SAP** action now includes the body content from the Request trigger and sends that output to your SAP server, for example:
+
+ ![Screenshot shows completed SAP action for Standard workflow.](./media/sap-create-example-scenario-workflows/sap-send-idoc-complete-standard.png)
+
+1. Save your workflow.
+++
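+
+Whether you use the managed **Send message to SAP** action or the built-in **[IDoc] Send document to SAP** action, the input message is a plain XML IDoc document. The following minimal sketch assumes an ORDERS05 IDoc; the namespace, version segment, and record names are illustrative only, as a real payload must follow the generated schema or the WE60 IDoc documentation:
+
+```xml
+<!-- Hypothetical input message. The root element and namespace correspond
+     to the selected SAP action, for example, ORDERS > ORDERS05 > 720 > Send. -->
+<Send xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//720/Send">
+  <idocData>
+    <EDI_DC40>
+      <!-- IDoc control record: sender and receiver ports, partner details, and so on -->
+    </EDI_DC40>
+    <!-- Data records for the ORDERS05 IDoc type -->
+  </idocData>
+</Send>
+```
+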
+<a name="send-flat-file-idocs"></a>
+
+#### Send flat file IDocs to SAP server
+
+To send an IDoc using a flat file schema, you can wrap the IDoc in an XML envelope and [follow the general steps to add an SAP action to send an IDoc](#add-sap-action-send-idoc), but with the following changes:
+
+##### Wrap IDoc with XML envelope
+
+1. In the SAP action that you use to send the message, use the following URI:
+
+ **`http://Microsoft.LobServices.Sap/2007/03/Idoc/SendIdoc`**
+
+1. Format your input message with an XML envelope.
+
+The following example shows a sample XML payload:
+
+```xml
+<SendIdoc xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/">
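+<!-- The idocData element contains the flat-file IDoc text: a control record
+     (EDI_DC) followed by data records (E2..., Z2...), one per line. -->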
+ <idocData>EDI_DC 3000000001017945375750 30INVOIC011BTSVLINV30KUABCABCFPPC LDCA X004010810 4 SAPMSX LSEDI ABCABCFPPC 000d3ae4-723e-1edb-9ca4-cc017365c9fd 20210217054521INVOICINVOIC01ZINVOIC2RE 20210217054520
+E2EDK010013000000001017945375000001E2EDK01001000000010 ABCABC1.00000 0060 INVO9988298128 298.000 298.000 LB Z4LR EN 0005065828 L
+E2EDKA1 3000000001017945375000002E2EDKA1 000000020 RS ABCABCFPPC 0005065828 ABCABCABC ABCABC Inc. Limited Risk Distributor ABCABC 1950 ABCABCABCA Blvd ABCABAABCAB L5N8L9 CA ABCABC E ON V-ABCABC LDCA
+E2EDKA1 3000000001017945375000003E2EDKA1 000000020 AG 0005065828 ABCABCFPPC ABCABC ABCABC ABCABC - FPP ONLY 88 ABCABC Crescent ABCABAABCAB L5R 4A2 CA ABCABC 111 111 1111 E ON ABCABCFPPC EN
+E2EDKA1 3000000001017945375000004E2EDKA1 000000020 RE 0005065828 ABCABCFPPC ABCABC ABCABC ABCABC - FPP ONLY 88 ABCABC Crescent ABCABAABCAB L5R 4A2 CA ABCABC 111 111 1111 E ON ABCABCFPPC EN
+E2EDKA1 3000000001017945375000005E2EDKA1 000000020 RG 0005065828 ABCABCFPPC ABCABC ABCABC ABCABC - FPP ONLY 88 ABCABC Crescent ABCABAABCAB L5R 4A2 CA ABCABC 111 111 1111 E ON ABCABCFPPC EN
+E2EDKA1 3000000001017945375000006E2EDKA1 000000020 WE 0005001847 41 ABCABC ABCABC INC (ABCABC) DC A. ABCABCAB 88 ABCABC CRESCENT ABCABAABCAB L5R 4A2 CA ABCABC 111-111-1111 E ON ABCABCFPPC EN
+E2EDKA1 3000000001017945375000007E2EDKA1 000000020 Z3 0005533050 ABCABCABC ABCABC Inc. ABCA Bank Swift Code -ABCABCABCAB Sort Code - 1950 ABCABCABCA Blvd. Acc No -1111111111 ABCABAABCAB L5N8L9 CA ABCABC E ON ABCABCFPPC EN
+E2EDKA1 3000000001017945375000008E2EDKA1 000000020 BK 1075 ABCABCABC ABCABC Inc 1950 ABCABCABCA Blvd ABCABAABCAB ON L5N 8L9 CA ABCABC (111) 111-1111 (111) 111-1111 ON
+E2EDKA1 3000000001017945375000009E2EDKA1 000000020 CR 1075 CONTACT ABCABCABC 1950 ABCABCABCA Blvd ABCABAABCAB ON L5N 8L9 CA ABCABC (111) 111-1111 (111) 111-1111 ON
+E2EDK02 3000000001017945375000010E2EDK02 000000020 0099988298128 20210217
+E2EDK02 3000000001017945375000011E2EDK02 000000020 00140-N6260-S 20210205
+E2EDK02 3000000001017945375000012E2EDK02 000000020 0026336270425 20210217
+E2EDK02 3000000001017945375000013E2EDK02 000000020 0128026580537 20210224
+E2EDK02 3000000001017945375000014E2EDK02 000000020 01740-N6260-S
+E2EDK02 3000000001017945375000015E2EDK02 000000020 900IAC
+E2EDK02 3000000001017945375000016E2EDK02 000000020 901ZSH
+E2EDK02 3000000001017945375000017E2EDK02 000000020 9078026580537 20210217
+E2EDK03 3000000001017945375000018E2EDK03 000000020 02620210217
+E2EDK03 3000000001017945375000019E2EDK03 000000020 00120210224
+E2EDK03 3000000001017945375000020E2EDK03 000000020 02220210205
+E2EDK03 3000000001017945375000021E2EDK03 000000020 01220210217
+E2EDK03 3000000001017945375000022E2EDK03 000000020 01120210217
+E2EDK03 3000000001017945375000023E2EDK03 000000020 02420210217
+E2EDK03 3000000001017945375000024E2EDK03 000000020 02820210418
+E2EDK03 3000000001017945375000025E2EDK03 000000020 04820210217
+E2EDK17 3000000001017945375000026E2EDK17 000000020 001DDPDelivered Duty Paid
+E2EDK17 3000000001017945375000027E2EDK17 000000020 002DDPdestination
+E2EDK18 3000000001017945375000028E2EDK18 000000020 00160 0 Up to 04/18/2021 without deduction
+E2EDK28 3000000001017945375000029E2EDK28 000000020 CA BOFACATT Bank of ABCABAB ABCABC ABCABAB 50127217 ABCABCABC ABCABC Inc.
+E2EDK28 3000000001017945375000030E2EDK28 000000020 CA 026000082 ABCAbank ABCABC ABCABAB 201456700OLD ABCABCABC ABCABC Inc.
+E2EDK28 3000000001017945375000031E2EDK28 000000020 GB ABCAGB2L ABCAbank N.A ABCABA E14, 5LB GB63ABCA18500803115593 ABCABCABC ABCABC Inc. GB63ABCA18500803115593
+E2EDK28 3000000001017945375000032E2EDK28 000000020 CA 020012328 ABCABANK ABCABC ABCABAB ON M5J 2M3 2014567007 ABCABCABC ABCABC Inc.
+E2EDK28 3000000001017945375000033E2EDK28 000000020 CA 03722010 ABCABABC ABCABABC Bank of Commerce ABCABAABCAB 64-04812 ABCABCABC ABCABC Inc.
+E2EDK28 3000000001017945375000034E2EDK28 000000020 IE IHCC In-House Cash Center IHCC1075 ABCABCABC ABCABC Inc.
+E2EDK28 3000000001017945375000035E2EDK28 000000020 CA 000300002 ABCAB Bank of ABCABC ABCABAB 0021520584OLD ABCABCABC ABCABC Inc.
+E2EDK28 3000000001017945375000036E2EDK28 000000020 US USCC US Cash Center (IHC) city USCC1075 ABCABCABC ABCABC Inc.
+E2EDK29 3000000001017945375000037E2EDK29 000000020 0064848944US A CAD CA ABCABC CA United States US CA A Air Air
+E2EDKT1 3000000001017945375000038E2EDKT1 000000020 ZJ32E EN
+E2EDKT2 3000000001017945375000039E2EDKT2 000038030 GST/HST877845941RT0001 *
+E2EDKT2 3000000001017945375000040E2EDKT2 000038030 QST1021036966TQ0001 *
+E2EDKT1 3000000001017945375000041E2EDKT1 000000020 Z4VL
+E2EDKT2 3000000001017945375000042E2EDKT2 000041030 0.000 *
+E2EDKT1 3000000001017945375000043E2EDKT1 000000020 Z4VH
+E2EDKT2 3000000001017945375000044E2EDKT2 000043030 *
+E2EDK14 3000000001017945375000045E2EDK14 000000020 008LDCA
+E2EDK14 3000000001017945375000046E2EDK14 000000020 00710
+E2EDK14 3000000001017945375000047E2EDK14 000000020 00610
+E2EDK14 3000000001017945375000048E2EDK14 000000020 015Z4F2
+E2EDK14 3000000001017945375000049E2EDK14 000000020 0031075
+E2EDK14 3000000001017945375000050E2EDK14 000000020 021M
+E2EDK14 3000000001017945375000051E2EDK14 000000020 0161075
+E2EDK14 3000000001017945375000052E2EDK14 000000020 962M
+E2EDP010013000000001017945375000053E2EDP01001000000020 000011 2980.000 EA 298.000 LB MOUSE 298.000 Z4TN 4260
+E2EDP02 3000000001017945375000054E2EDP02 000053030 00140-N6260-S 00000120210205 DFUE
+E2EDP02 3000000001017945375000055E2EDP02 000053030 0026336270425 00001120210217
+E2EDP02 3000000001017945375000056E2EDP02 000053030 0168026580537 00001020210224
+E2EDP02 3000000001017945375000057E2EDP02 000053030 9100000 00000120210205 DFUE
+E2EDP02 3000000001017945375000058E2EDP02 000053030 911A 00000120210205 DFUE
+E2EDP02 3000000001017945375000059E2EDP02 000053030 912PP 00000120210205 DFUE
+E2EDP02 3000000001017945375000060E2EDP02 000053030 91300 00000120210205 DFUE
+E2EDP02 3000000001017945375000061E2EDP02 000053030 914CONTACT ABCABCABC 00000120210205 DFUE
+E2EDP02 3000000001017945375000062E2EDP02 000053030 963 00000120210205 DFUE
+E2EDP02 3000000001017945375000063E2EDP02 000053030 965 00000120210205 DFUE
+E2EDP02 3000000001017945375000064E2EDP02 000053030 9666336270425 00000120210205 DFUE
+E2EDP02 3000000001017945375000065E2EDP02 000053030 9078026580537 00001020210205 DFUE
+E2EDP03 3000000001017945375000066E2EDP03 000053030 02920210217
+E2EDP03 3000000001017945375000067E2EDP03 000053030 00120210224
+E2EDP03 3000000001017945375000068E2EDP03 000053030 01120210217
+E2EDP03 3000000001017945375000069E2EDP03 000053030 02520210217
+E2EDP03 3000000001017945375000070E2EDP03 000053030 02720210217
+E2EDP03 3000000001017945375000071E2EDP03 000053030 02320210217
+E2EDP03 3000000001017945375000072E2EDP03 000053030 02220210205
+E2EDP19 3000000001017945375000073E2EDP19 000053030 001418VVZ
+E2EDP19 3000000001017945375000074E2EDP19 000053030 002RJR-00001 AB ABCABCABC Mouse FORBUS BLUETOOTH
+E2EDP19 3000000001017945375000075E2EDP19 000053030 0078471609000
+E2EDP19 3000000001017945375000076E2EDP19 000053030 003889842532685
+E2EDP19 3000000001017945375000077E2EDP19 000053030 011CN
+E2EDP26 3000000001017945375000078E2EDP26 000053030 00459064.20
+E2EDP26 3000000001017945375000079E2EDP26 000053030 00352269.20
+E2EDP26 3000000001017945375000080E2EDP26 000053030 01052269.20
+E2EDP26 3000000001017945375000081E2EDP26 000053030 01152269.20
+E2EDP26 3000000001017945375000082E2EDP26 000053030 0126795.00
+E2EDP26 3000000001017945375000083E2EDP26 000053030 01552269.20
+E2EDP26 3000000001017945375000084E2EDP26 000053030 00117.54
+E2EDP26 3000000001017945375000085E2EDP26 000053030 00252269.20
+E2EDP26 3000000001017945375000086E2EDP26 000053030 940 2980.000
+E2EDP26 3000000001017945375000087E2EDP26 000053030 939 2980.000
+E2EDP05 3000000001017945375000088E2EDP05 000053030 + Z400MS List Price 52269.20 17.54 1 EA CAD 2980
+E2EDP05 3000000001017945375000089E2EDP05 000053030 + XR1 Tax Jur Code Level 6795.00 13.000 52269.20
+E2EDP05 3000000001017945375000090E2EDP05 000053030 + Tax Subtotal1 6795.00 2.28 1 EA CAD 2980
+E2EDP05 3000000001017945375000091E2EDP05 000053030 + Taxable Amount + TaxSubtotal1 59064.20 19.82 1 EA CAD 2980
+E2EDP04 3000000001017945375000092E2EDP04 000053030 CX 13.000 6795.00 7000000000
+E2EDP04 3000000001017945375000093E2EDP04 000053030 CX 0 0 7001500000
+E2EDP04 3000000001017945375000094E2EDP04 000053030 CX 0 0 7001505690
+E2EDP28 3000000001017945375000095E2EDP28 000053030 00648489440000108471609000 CN CN ABCAB ZZ 298.000 298.000 LB US 400 United Stat KY
+E2EDPT1 3000000001017945375000096E2EDPT1 000053030 0001E EN
+E2EDPT2 3000000001017945375000097E2EDPT2 000096040 AB ABCABCABC Mouse forBus Bluetooth EN/XC/XD/XX Hdwr Black For Bsnss *
+E2EDS01 3000000001017945375000098E2EDS01 000000020 0011
+E2EDS01 3000000001017945375000099E2EDS01 000000020 01259064.20 CAD
+E2EDS01 3000000001017945375000100E2EDS01 000000020 0056795.00 CAD
+E2EDS01 3000000001017945375000101E2EDS01 000000020 01159064.20 CAD
+E2EDS01 3000000001017945375000102E2EDS01 000000020 01052269.20 CAD
+E2EDS01 3000000001017945375000103E2EDS01 000000020 94200000 CAD
+E2EDS01 3000000001017945375000104E2EDS01 000000020 9440.00 CAD
+E2EDS01 3000000001017945375000105E2EDS01 000000020 9450.00 CAD
+E2EDS01 3000000001017945375000106E2EDS01 000000020 94659064.20 CAD
+E2EDS01 3000000001017945375000107E2EDS01 000000020 94752269.20 CAD
+E2EDS01 3000000001017945375000108E2EDS01 000000020 EXT
+Z2XSK010003000000001017945375000109Z2XSK01000000108030 Z400 52269.20
+Z2XSK010003000000001017945375000110Z2XSK01000000108030 XR1 13.000 6795.00 CX
+</idocData>
+</SendIdoc>
+```
+
+<a name="add-response-action"></a>
+
+### Add a response action
+
+Now, set up your workflow to return the results from your SAP server to the original requestor. For this task, follow these steps:
+
+### [Consumption](#tab/consumption)
+
+1. In the workflow designer, under the SAP action, select **New step**.
+
+1. In the designer, [follow these general steps to find and add the Request built-in action named **Response**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+1. In the **Response** action, for the **Body** parameter, select inside the edit box to open the dynamic content list.
+
+1. From the dynamic content list, under **Send message to SAP**, select **Body**. The **Body** field contains the body output from the SAP action.
+
+ ![Screenshot shows selecting SAP action output named Body for Consumption workflow.](./media/sap-create-example-scenario-workflows/response-action-select-sap-body-consumption.png)
+
+1. Save your workflow.
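+
+In code view, the finished **Response** action looks similar to the following minimal sketch. The referenced action name **Send_message_to_SAP** is an assumption based on this example's SAP action, not a required value:
+
+```json
+"Response": {
+   "type": "Response",
+   "kind": "Http",
+   "inputs": {
+      "statusCode": 200,
+      "body": "@body('Send_message_to_SAP')"
+   },
+   "runAfter": {
+      "Send_message_to_SAP": [ "Succeeded" ]
+   }
+}
+```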
+
+### [Standard](#tab/standard)
+
+1. In the workflow designer, under the SAP action, select the plus sign (**+**) > **Add an action**.
+
+1. In the designer, [follow these general steps to find and add the Request built-in action named **Response**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+ > [!NOTE]
+ >
+ > If you use the SAP built-in trigger, which is an Azure Functions-based trigger, not a webhook trigger, add the
+ > [**Respond to SAP server** action](/azure/logic-apps/connectors/built-in/reference/sap/#respond-to-sap-server.-(preview))
+ > to your workflow and include the output from the SAP action.
+
+1. In the **Response** action, for the **Body** parameter, select inside the edit box to open the dynamic content list.
+
+1. From the dynamic content list, under **[IDoc] Send document to SAP**, find and select **Body**. The **Body** field contains the body output from the SAP action.
+
+ ![Screenshot showing selecting the SAP action output named Body for Standard workflow.](./media/sap-create-example-scenario-workflows/response-action-select-sap-body-standard.png)
+
+1. Save your workflow.
+++
+<a name="create-rfc-request-response-pattern"></a>
+
+### Create a remote function call (RFC) request-response pattern
+
+For Consumption workflows that use the SAP managed connector or the ISE-versioned SAP connector, if you have to receive replies through a remote function call (RFC) from SAP ABAP to Azure Logic Apps, you must implement a request and response pattern. To receive IDocs in your workflow when you use the [Request trigger](../connectors/connectors-native-reqres.md), make sure that the workflow's first action is a [Response action](../connectors/connectors-native-reqres.md#add-response) that returns the **200 OK** status code without any content. This recommended step immediately completes the SAP Logical Unit of Work (LUW) asynchronous transfer over tRFC, which makes the SAP CPIC conversation available again. You can then add more actions to your workflow to process the received IDoc without blocking later transfers.
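+
+For example, in code view, such a first action might look like the following sketch: a minimal **Response** action that returns **200 OK** with no content. The action name is illustrative only:
+
+```json
+"Acknowledge_IDoc_receipt": {
+   "type": "Response",
+   "kind": "Http",
+   "inputs": {
+      "statusCode": 200
+   },
+   "runAfter": {}
+}
+```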
+
+> [!NOTE]
+>
+> The SAP trigger receives IDocs over tRFC, which by design doesn't have a response parameter.
+
+To implement a request and response pattern, you must first discover the RFC schema using the [`generate schema` command](sap-generate-schemas-for-artifacts.md). The generated schema has two possible root nodes:
+
+* The request node, which is the call that you receive from SAP
+* The response node, which is your reply back to SAP
+
+In the following example, the `STFC_CONNECTION` RFC module generates a request and response pattern. The request XML is parsed to extract the `<REQUTEXT>` node value, which SAP echoes back in the response's `<ECHOTEXT>` element. The response inserts the current timestamp as a dynamic value. You receive a similar response when you send an `STFC_CONNECTION` RFC from a logic app workflow to SAP.
+
+```xml
+<STFC_CONNECTIONResponse xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
+ <ECHOTEXT>@{first(xpath(xml(triggerBody()?['Content']), '/*[local-name()="STFC_CONNECTION"]/*[local-name()="REQUTEXT"]/text()'))}</ECHOTEXT>
+ <RESPTEXT>Azure Logic Apps @{utcNow()}</RESPTEXT>
+</STFC_CONNECTIONResponse>
+```
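+
+In this expression, the `xpath()` function runs against the XML from the trigger body's `Content` field to find the `REQUTEXT` element's text, and `first()` selects the first matching node, which the response then returns to SAP inside `<ECHOTEXT>`.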
+
+<a name="test-workflow"></a>
+
+### Test your workflow
+
+### [Consumption](#tab/consumption)
+
+1. If your Consumption logic app resource isn't already enabled, on your logic app menu, select **Overview**. On the toolbar, select **Enable**.
+
+1. On the designer toolbar, select **Run Trigger** > **Run** to manually start your workflow.
+
+1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a tool such as [Postman](https://www.getpostman.com/apps).
+
+ For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
+
+ ```xml
+ <?xml version="1.0" encoding="UTF-8" ?>
+ <Send xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/2/ORDERS05//720/Send">
+ <idocData>
+ <...>
+ </idocData>
+ </Send>
+ ```
+
+1. After you send your HTTP request, wait for the response from your workflow.
+
+ > [!NOTE]
+ >
+ > Your workflow might time out if all the steps required for the response don't finish within the [request timeout limit](logic-apps-limits-and-config.md).
+ > If this condition happens, requests might get blocked. To help you diagnose problems, learn how you can [check and monitor your logic app workflows](monitor-logic-apps.md).
+
+You've now created a workflow that can communicate with your SAP server. Now that you've set up an SAP connection for your workflow, you can try experimenting with BAPI and RFC.
+
+### [Standard](#tab/standard)
+
+1. If your Standard logic app resource is stopped or disabled, from your workflow, go to the logic app resource level, and select **Overview**. On the toolbar, select **Start**.
+
+1. Return to the workflow level. On the workflow menu, select **Overview**. On the toolbar, select **Run** > **Run** to manually start your workflow.
+
+1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a tool such as [Postman](https://www.getpostman.com/apps).
+
+ For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
+
+ ```xml
+ <?xml version="1.0" encoding="UTF-8" ?>
+ <Send xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/2/ORDERS05//720/Send">
+ <idocData>
+ <...>
+ </idocData>
+ </Send>
+ ```
+
+1. After you send the HTTP request, wait for the response from your workflow.
+
+ > [!NOTE]
+ >
+ > Your workflow might time out if all the steps required for the response don't finish within the [request timeout limit](logic-apps-limits-and-config.md).
+ > If this condition happens, requests might get blocked. To help you diagnose problems, learn [how to check and monitor your logic app workflows](monitor-logic-apps.md).
+
+You've now created a workflow that can communicate with your SAP server. Now that you've set up an SAP connection for your workflow, you can try experimenting with BAPI and RFC.
+++
+<a name="safe-typing"></a>
+
+## Safe typing
+
+By default, when you create a connection for an SAP managed operation, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. If you choose **Safe Typing**, the DATS type and TIMS type in SAP are treated as strings rather than as their XML equivalents, `xs:date` and `xs:time`, where `xmlns:xs="http://www.w3.org/2001/XMLSchema"`. Safe typing affects the behavior for all schema generation, for the payload that's sent and the response that's received in send message operations, and for the trigger.
+
+When strong typing is used (**Safe Typing** isn't enabled), the schema maps the DATS and TIMS types to more straightforward XML types:
+
+```xml
+<xs:element minOccurs="0" maxOccurs="1" name="UPDDAT" nillable="true" type="xs:date"/>
+<xs:element minOccurs="0" maxOccurs="1" name="UPDTIM" nillable="true" type="xs:time"/>
+```
+
+When you send messages using strong typing, the DATS and TIMS response complies with the matching XML type format:
+
+```xml
+<DATE>9999-12-31</DATE>
+<TIME>23:59:59</TIME>
+```
+
+When **Safe Typing** is enabled, the schema maps the DATS and TIMS types to XML string fields with length restrictions only, for example:
+
+```xml
+<xs:element minOccurs="0" maxOccurs="1" name="UPDDAT" nillable="true">
+ <xs:simpleType>
+ <xs:restriction base="xs:string">
+ <xs:maxLength value="8" />
+ </xs:restriction>
+ </xs:simpleType>
+</xs:element>
+<xs:element minOccurs="0" maxOccurs="1" name="UPDTIM" nillable="true">
+ <xs:simpleType>
+ <xs:restriction base="xs:string">
+ <xs:maxLength value="6" />
+ </xs:restriction>
+ </xs:simpleType>
+</xs:element>
+```
+
+When messages are sent with **Safe Typing** enabled, the DATS and TIMS response looks like this example:
+
+```xml
+<DATE>99991231</DATE>
+<TIME>235959</TIME>
+```
+
+<a name="advanced-scenarios"></a>
+
+## Advanced scenarios
+
+<a name="change-language-headers"></a>
+
+### Change language headers for sending data to SAP
+
+When you connect to SAP from Azure Logic Apps, English is the default language used by the SAP connection for sending data to your SAP server. However, the SAP managed connector and SAP built-in connector handle changing and saving the language used in different ways.
+
+* When you create a connection with the SAP built-in connector, the connection parameters let you specify and save the language parameter value as part of the SAP connection parameters.
+
+* When you create a connection with the SAP managed connector, the connection parameters don't include a language parameter. So, during connection creation, you can't specify or save the language to use for sending data to your SAP server. Instead, at both workflow design time and run time, the connector uses your web browser's locale from each request that's sent to your server. For example, if your browser is set to Portuguese, Azure Logic Apps creates and tests the SAP connection with Portuguese, but doesn't save the connection with that language.
+
+ However, you can set the language for your connection by using the [standard HTTP header `Accept-Language`](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4) with your inbound requests. Most web browsers add an `Accept-Language` header based on your locale settings. The web browser applies this header when you create a new SAP connection in the workflow designer. So, you can either update your web browser's settings to use your preferred language, or you can create your SAP connection using Azure Resource Manager instead of the workflow designer.
+
+  For example, you can send a request with the `Accept-Language` header to your logic app workflow by using the Request trigger named **When a HTTP request is received**. All the actions in your workflow receive the header. Then, SAP uses the specified language in its system messages, such as BAPI error messages. If you don't pass an `Accept-Language` header at run time, English is used by default.
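+
+  As a sketch, the following HTTP action in another workflow calls your workflow's endpoint URL with a German **Accept-Language** header. The action name, URL, and body values are placeholders, not values from this example:
+
+  ```json
+  "Call_SAP_workflow_in_German": {
+     "type": "Http",
+     "inputs": {
+        "method": "POST",
+        "uri": "https://<request-trigger-endpoint-URL>",
+        "headers": {
+           "Accept-Language": "de-DE"
+        },
+        "body": "<message-content>"
+     },
+     "runAfter": {}
+  }
+  ```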
+
+ If you use the `Accept-Language` header, you might get the following error: **Please check your account info and/or permissions and try again.** In this case, check the SAP component's error logs instead. The error actually happens in the SAP component that uses the header, so you might get one of these error messages:
+
+ * **"SAP.Middleware.Connector.RfcLogonException: Select one of the installed languages"**
+
+ * **"SAP.Middleware.Connector.RfcAbapMessageException: Select one of the installed languages"**
+
+<a name="confirm-transaction-explicitly"></a>
+
+## Confirm transaction separately and explicitly
+
+When you send transactions to SAP from Azure Logic Apps, this exchange happens in two steps as described in the SAP document, [Transactional RFC Server Programs](https://help.sap.com/doc/saphelp_nwpi71/7.1/22/042ad7488911d189490000e829fbbd/content.htm?no_cache=true).
+
+By default, the SAP managed connector action named [**Send message to SAP**](/connectors/sap/#send-message-to-sap) handles both steps, transferring the function and confirming the transaction, in a single call. You also have the option to decouple these steps, which is useful for scenarios where you don't want to duplicate transactions in SAP, for example, when failures happen due to network issues.
+
+You can send an IDoc without automatically confirming the transaction using the SAP managed connector action named [**[IDOC] Send document to SAP**](/connectors/sap/#[idoc]-send-document-to-sap-(preview)). You can then explicitly confirm the transaction using the SAP managed connector action named [**[IDOC - RFC] Confirm transaction Id**](/connectors/sap/#[idocrfc]-confirm-transaction-id-(preview)). When your workflow separately confirms the transaction in a different step, the SAP system completes the transaction only once.
+
+In Standard workflows, the SAP built-in connector also has actions that separately handle the transfer and confirmation steps, specifically, [**[IDoc] Send document to SAP**](/azure/logic-apps/connectors/built-in/reference/sap/#[idoc]-send-document-to-sap-(preview)) and [**[IDOC - RFC] Confirm transaction Id**](/azure/logic-apps/connectors/built-in/reference/sap/#[idocrfc]-confirm-transaction-id-(preview)).
+
+The following example workflow shows this pattern:
+
+1. Create and open a Consumption or Standard logic app with a blank workflow in the designer. Add the Request trigger.
+
+1. To help avoid sending duplicate IDocs to SAP, [follow these alternative steps to create and use an IDoc transaction ID in your SAP actions](#create-transaction-id-variable).
+
+1. Add the SAP action named **[IDOC] Send document to SAP** to your workflow. Provide the information for the IDoc that you send to your SAP system plus the following values:
+
+ | Parameter | Value | Description |
+ |--|-|-|
+ | **Confirm TID** | **No** | Don't automatically confirm the transaction ID, which explicitly happens in a separate step. |
+ | **Transaction Id GUID** | <*IDoc-transaction-ID*> | If this parameter doesn't automatically appear, open the **Add new parameters** list, and select the parameter. <br><br>You can either manually specify this value, or the connector can automatically generate this GUID as an output from the **[IDOC] Send document to SAP** action. This example leaves this parameter empty to automatically generate the GUID. |
+
+ **Consumption workflow**
+
+ ![Screenshot shows Consumption workflow with the action named IDOC Send document to SAP.](./media/sap-create-example-scenario-workflows/sap-send-idoc-with-id-consumption.png)
+
+ **Standard workflow**
+
+ ![Screenshot shows Standard workflow with the action named IDOC Send document to SAP.](./media/sap-create-example-scenario-workflows/sap-send-idoc-with-id-standard.png)
+
+1. On the SAP action named **[IDOC] Send document to SAP**, open **Settings** to review the **Retry Policy**.
+
+ The **Default** option is the recommended policy, but you can select a custom policy for your specific needs. If you choose to use a custom policy, set up at least one retry to overcome temporary network outages.
+
+1. Now, add the SAP action named **[IDOC - RFC] Confirm transaction Id**.
+
+ 1. In the **Transaction ID** parameter, select inside the edit box to open the dynamic content list.
+
+ 1. From the list, under **[IDOC] Send document to SAP**, select the **Transaction Id** value, which is the output from the previous SAP action.
+
+ **Consumption workflow**
+
+ ![Screenshot shows Consumption workflow with action named Confirm transaction ID, which includes GUID output from previous action.](./media/sap-create-example-scenario-workflows/sap-confirm-id-consumption.png)
+
+ **Standard workflow**
+
+ ![Screenshot shows Standard workflow with action named Confirm transaction ID, which includes GUID output from previous action.](./media/sap-create-example-scenario-workflows/sap-confirm-id-standard.png)
+
+   After this step runs, the current transaction is marked complete at both ends, on the SAP connector side and on the SAP system side.
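+
+   In code view, this decoupled pattern looks roughly like the following sketch. The action names, operation paths, query parameter names, and the **TransactionId** output name are placeholders rather than the SAP connector's exact contract, which varies by connector version:
+
+   ```json
+   "Send_IDoc_without_confirmation": {
+      "type": "ApiConnection",
+      "inputs": {
+         "host": {
+            "connection": {
+               "name": "@parameters('$connections')['sap']['connectionId']"
+            }
+         },
+         "method": "post",
+         "path": "/<send-idoc-operation>",
+         "queries": {
+            "confirmTid": "No"
+         },
+         "body": "<IDoc-XML-payload>"
+      },
+      "runAfter": {}
+   },
+   "Confirm_transaction_Id": {
+      "type": "ApiConnection",
+      "inputs": {
+         "host": {
+            "connection": {
+               "name": "@parameters('$connections')['sap']['connectionId']"
+            }
+         },
+         "method": "post",
+         "path": "/<confirm-transaction-operation>",
+         "queries": {
+            "tid": "@body('Send_IDoc_without_confirmation')?['TransactionId']"
+         }
+      },
+      "runAfter": {
+         "Send_IDoc_without_confirmation": [ "Succeeded" ]
+      }
+   }
+   ```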
+
+<a name="create-transaction-id-variable"></a>
+
+### Avoid sending duplicate IDocs with a transaction ID variable
+
+If you experience a problem with your workflow sending duplicate IDocs to SAP, you can create a string variable that serves as an IDoc transaction identifier. You can then use this identifier to help prevent duplicate network transmissions in conditions such as temporary outages, network issues, or lost acknowledgments.
+
+1. In the designer, after you add the Request trigger, and before you add the SAP action named **[IDOC] Send document to SAP**, add the action named **Initialize variable** to your workflow.
+
+1. Rename the action to **Create IDoc transaction ID**.
+
+1. In the action information box, provide the following parameter values:
+
+ | Parameter | Value | Description |
+ |--|-|-|
+ | **Name** | <*variable-name*> | A name for your variable, for example, **IDocTransactionID** |
+ | **Type** | **String** | The variable type |
+ | **Value** | `guid()` | Select inside the edit box, open the expression or function editor, and enter **guid()**. Save your changes. <br><br>The **Value** parameter is now set to the **guid()** function, which generates a GUID.|
+
+ **Consumption workflow**
+
+ ![Screenshot shows Consumption workflow with the action named Create transaction ID.](./media/sap-create-example-scenario-workflows/idoc-create-transaction-id-consumption.png)
+
+ **Standard workflow**
+
+ ![Screenshot shows Standard workflow with the action named Create transaction ID.](./media/sap-create-example-scenario-workflows/idoc-create-transaction-id-standard.png)
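+
+   In code view, this action looks similar to the following sketch, assuming the variable name from this example:
+
+   ```json
+   "Create_IDoc_transaction_ID": {
+      "type": "InitializeVariable",
+      "inputs": {
+         "variables": [
+            {
+               "name": "IDocTransactionID",
+               "type": "string",
+               "value": "@{guid()}"
+            }
+         ]
+      },
+      "runAfter": {}
+   }
+   ```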
+
+ > [!NOTE]
+ >
+ > SAP systems forget a transaction identifier after a specified time, or 24 hours by default. As a result, SAP never fails
+ > to confirm a transaction identifier if the ID or GUID is unknown. If confirmation for a transaction identifier fails,
+> this failure indicates that communication with the SAP system failed before SAP was able to acknowledge the confirmation.
+
+1. Add the SAP action named **[IDOC] Send document to SAP** to your workflow. Provide the information for the IDoc that you send to your SAP system plus the following values:
+
+ | Parameter | Value | Description |
+ |--|-|-|
+ | **Confirm TID** | **No** | Don't automatically confirm the transaction ID, which explicitly happens in a separate step. |
+ | **Transaction Id GUID** | <*IDoc-transaction-ID*> | If this parameter doesn't automatically appear, open the **Add new parameters** list, and select the parameter. To select the transaction ID variable that you created, follow these steps: <br><br> 1. In the **Transaction Id GUID** parameter, select inside the edit box to open the dynamic content list. <br><br>2. From the list, under **Variables**, select the variable that you previously created, which is **IDocTransactionID** in this example. |
+
+ **Consumption workflow**
+
+ ![Screenshot shows Consumption workflow with action named IDOC Send document to SAP.](./media/sap-create-example-scenario-workflows/sap-send-idoc-with-var-consumption.png)
+
+ **Standard workflow**
+
+ ![Screenshot shows Standard workflow with action named IDOC Send document to SAP.](./media/sap-create-example-scenario-workflows/sap-send-idoc-with-var-standard.png)
+
+1. For the SAP managed action named **[IDOC] Send document to SAP**, open **Settings** to review the **Retry Policy**.
+
+ The **Default** option is the recommended policy, but you can select a custom policy for your specific needs. If you choose to use a custom policy, set up at least one retry to overcome temporary network outages.
+
+ > [!NOTE]
+ >
+ > Only managed connector actions currently have the Retry Policy setting, not built-in, service provider-based connectors.
+
+1. Now, add the SAP action named **[IDOC - RFC] Confirm transaction Id**.
+
+ 1. In the **Transaction ID** parameter, select inside the edit box to open the dynamic content list.
+
+   1. From the list, under **Variables**, select the variable that you previously created, which is **IDocTransactionID** in this example.
+
+ **Consumption workflow**
+
+ ![Screenshot shows Consumption workflow with action named Confirm transaction ID using a variable.](./media/sap-create-example-scenario-workflows/sap-confirm-with-var-consumption.png)
+
+ **Standard workflow**
+
+ ![Screenshot shows Standard workflow with action named Confirm transaction ID using a variable.](./media/sap-create-example-scenario-workflows/sap-confirm-with-var-standard.png)
+
+1. Optionally, validate the deduplication in your test environment.
+
+ 1. Add another SAP action named **[IDOC] Send document to SAP**. In the **Transaction ID** parameter, select the **Transaction ID** GUID that you used in the previous step.
+
+ 1. To validate which IDoc number gets assigned after each call to the action named **[IDOC] Send document to SAP**, add the action named **[IDOC] Get IDOC list for transaction** to your workflow with the same **Transaction ID** and the **Receive** direction.
+
+ If the same IDoc number is returned for both calls, the IDoc was deduplicated.
+
+If you send the same IDoc twice, you can validate that SAP identifies the duplicate tRFC call and resolves the two calls to a single inbound IDoc message.
+
+<a name="troubleshoot"></a>
+
+## Troubleshoot problems
+
+<a name="troubleshoot-connections"></a>
+
+### Connection problems
+
+During connection creation, if you receive the following error, a problem exists with your installation of the SAP NCo client library:
+
+**Test connection failed. Error 'Failed to process request. Error details: 'could not load file or assembly 'sapnco, Version=3.0.0.42, Culture=neutral, PublicKeyToken 50436dca5c7f7d23' or one of its dependencies. The system cannot find the file specified.'.'**
+
+Make sure to [install the required version of the SAP NCo client library and meet all other prerequisites](logic-apps-using-sap-connector.md#sap-client-library-prerequisites).
+
+<a name="bad-gateway-request"></a>
+
+### 500 Bad Gateway or 400 Bad Request error
+
+If you receive a **500 Bad Gateway** or **400 Bad Request** error with a message similar to **service 'sapgw00' unknown**, the resolution from the network service name to a port number is failing, for example:
+
+```json
+{
+ "body": {
+ "error": {
+ "code": 500,
+ "source": "EXAMPLE-FLOW-NAME.eastus.environments.microsoftazurelogicapps.net",
+ "clientRequestId": "00000000-0000-0000-0000-000000000000",
+ "message": "BadGateway",
+ "innerError": {
+ "error": {
+ "code": "UnhandledException",
+ "message": "\nERROR service 'sapgw00' unknown\nTIME Wed Nov 11 19:37:50 2020\nRELEASE 721\nCOMPONENT NI (network interface)\nVERSION 40\nRC -3\nMODULE ninti.c\nLINE 933\nDETAIL NiPGetServByName: 'sapgw00' not found\nSYSTEM CALL getaddrinfo\nCOUNTER 1\n\nRETURN CODE: 20"
+ }
+ }
+ }
+ }
+}
+```
+
+* **Option 1:** In your API connection and trigger configuration, replace your gateway service name with its port number. In the example error, `sapgw00` needs to be replaced with a real port number, for example, `3300`. This is the only available option for ISE.
+
+* **Option 2:** If you're using the on-premises data gateway, you can add the gateway service name to the port mapping in `%windir%\System32\drivers\etc\services` and then restart the on-premises data gateway service, for example:
+
+ ```text
+ sapgw00 3300/tcp
+ ```
+
+You might get a similar error when the SAP Application server or Message server name fails to resolve to an IP address. For ISE, you must specify the IP address for your SAP Application server or Message server. For the on-premises data gateway, you can instead add the name to the IP address mapping in `%windir%\System32\drivers\etc\hosts`, for example:
+
+```text
+10.0.1.9 SAPDBSERVER01 # SAP System Server VPN IP by computer name
+10.0.1.9 SAPDBSERVER01.someguid.xx.xxxxxxx.cloudapp.net # SAP System Server VPN IP by fully qualified computer name
+```
+
+<a name="errors-sending-idoc-packets"></a>
+
+## Errors sending IDoc packets from SAP to your trigger
+
+If you can't send IDoc packets from SAP to your trigger, review the Transactional RFC (tRFC) call rejection message in the SAP tRFC (T-Code SM58) dialog box. In the SAP interface, you might get the following error messages, which are clipped due to the substring limits on the **Status Text** field.
+
+### The segment or group definition E2EDK36001 was not found in the IDoc meta
+
+This error message indicates an expected failure that happens alongside other failures, for example, when an IDoc XML payload can't be generated because SAP hasn't released its segments. As a result, the segment type metadata required for conversion is missing.
+
+To have these segments released by SAP, contact the ABAP engineer for your SAP system.
+
+### The RequestContext on the IReplyChannel was closed without a reply being sent
+
+For the SAP managed connector and ISE-versioned SAP connector, this error message indicates an unexpected failure that happens when the catch-all handler for the channel terminates the channel due to an error and rebuilds the channel to process other messages.
+
+> [!NOTE]
+>
+> The SAP managed trigger and ISE-versioned SAP triggers are webhooks that use the SOAP-based SAP adapter. However, the SAP built-in trigger is an Azure Functions-based trigger that doesn't use a SOAP SAP adapter and doesn't get this error message.
+
+- To acknowledge that your workflow received the IDoc, [add a Response action](../connectors/connectors-native-reqres.md#add-a-response-action) that returns a **200 OK** status code. Leave the body empty and don't change or add to the headers. The IDoc is transported through tRFC, which doesn't allow for a response payload.
+
+- To reject the IDoc instead, respond with any HTTP status code other than **200 OK**. The SAP Adapter then returns an exception back to SAP on your behalf. You should only reject the IDoc to signal transport errors back to SAP, such as a misrouted IDoc that your application can't process. You shouldn't reject an IDoc for application-level errors, such as issues with the data contained in the IDoc. If you delay transport acceptance for application-level validation, you might experience negative performance due to blocking your connection from transporting other IDocs.
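+
+  For example, the following sketch shows a **Response** action that rejects an IDoc by returning an HTTP 500 status code. The action name and error text are illustrative only:
+
+  ```json
+  "Reject_misrouted_IDoc": {
+     "type": "Response",
+     "kind": "Http",
+     "inputs": {
+        "statusCode": 500,
+        "body": "Unable to process this IDoc at this endpoint."
+     },
+     "runAfter": {}
+  }
+  ```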
+
+- If you receive this error message and experience systemic failures calling Azure Logic Apps, check that you've configured the network settings for your on-premises data gateway service for your specific environment. For example, if your network environment requires the use of a proxy to call Azure endpoints, you need to configure your on-premises data gateway service to use your proxy. For more information, review [Proxy Configuration](/dotnet/framework/network-programming/proxy-configuration).
+
+- If you receive this error message and experience intermittent failures calling Azure Logic Apps, you might need to increase your retry count or retry interval by following these steps:
+
+ 1. Check the SAP settings in your on-premises data gateway service configuration file named **Microsoft.PowerBI.EnterpriseGateway.exe.config**.
+
+ 1. Under the `configuration` root node, add a `configSections` element, if none exist.
+
+ 1. Under the `configSections` node, add a `section` element with the following attributes, if none exist: `name="SapAdapterSection" type="Microsoft.Adapters.SAP.Common.SapAdapterSection, Microsoft.Adapters.SAP.Common"`
+
+ > [!IMPORTANT]
+ >
+ > Don't change the attributes in existing `section` elements, if such elements already exist.
+
+ Your `configSections` element looks like the following version, if no other section or section group is declared in the gateway service configuration:
+
+ ```xml
+ <configSections>
+ <section name="SapAdapterSection" type="Microsoft.Adapters.SAP.Common.SapAdapterSection, Microsoft.Adapters.SAP.Common"/>
+ </configSections>
+ ```
+
+ 1. Under the `configuration` root node, add an `SapAdapterSection` element, if none exists.
+
+ 1. Under the `SapAdapterSection` node, add a `Broker` element with the following attributes, if none exist: `WebhookRetryDefaultDelay="00:00:00.10" WebhookRetryMaximumCount="2"`
+
+ > [!IMPORTANT]
+ > Change the attributes for the `Broker` element, even if the element already exists.
+
+ The `SapAdapterSection` element looks like the following version, if no other element or attribute is declared in the SAP adapter configuration:
+
+ ```xml
+ <SapAdapterSection>
+ <Broker WebhookRetryDefaultDelay="00:00:00.10" WebhookRetryMaximumCount="2" />
+ </SapAdapterSection>
+ ```
+
+ The retry count setting looks like `WebhookRetryMaximumCount="2"`. The retry interval setting looks like `WebhookRetryDefaultDelay="00:00:00.10"` where the timespan format is `HH:mm:ss.ff`.
+
+ > [!NOTE]
+ > For more information about the configuration file, review [Configuration file schema for .NET Framework](/dotnet/framework/configure-apps/file-schema/).
+
+ 1. Save your changes.
+
+ 1. If you're using the on-premises data gateway, restart your gateway.
+
+## Next steps
+
+- [Generate schemas for artifacts in SAP](sap-generate-schemas-for-artifacts.md)
logic-apps Sap Generate Schemas For Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/sap-generate-schemas-for-artifacts.md
+
+ Title: SAP artifact schemas
+description: Sample SAP artifacts for workflows in Azure Logic Apps
+
+ms.suite: integration
++++ Last updated : 05/23/2023++
+# Generate schemas for SAP artifacts in Azure Logic Apps
+
+This how-to guide shows how to create an example logic app workflow that generates schemas for SAP artifacts. The workflow starts with a **Request** trigger that can receive HTTP POST requests from your SAP server. The workflow then generates schemas for the specified IDoc and BAPI by using the SAP action named **Generate schemas** that sends a request to your SAP server. To send this request, you can use either the generic SAP managed connector action named **Send message to SAP**, or you can use the specific SAP managed or built-in action named **[BAPI] Call method in SAP**. This SAP action returns an [XML schema](#sample-xml-schemas), not the contents or data of the XML document itself. Schemas returned in the response are uploaded to an integration account by using the Azure Resource Manager connector. Schemas contain the following parts:
+
+| Component | Description |
+|--|-|
+| Request message structure | Use this information to form your BAPI `get` list. |
+| Response message structure | Use this information to parse the response. |
+
+Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps. This connector is currently in preview and subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](logic-apps-using-sap-connector.md#connector-technical-reference).
+
+## Prerequisites
+
+- Before you start, make sure to [review and meet the SAP connector requirements](logic-apps-using-sap-connector.md#prerequisites) for your specific scenario.
+
+- If you want to upload your generated schemas to a repository, such as an [integration account](logic-apps-enterprise-integration-create-integration-account.md), make sure that the repository already exists.
+
+## Generate schemas for an SAP artifact
+
+The following example logic app workflow triggers when the workflow's Request trigger receives a request from an SAP server. The workflow then runs an SAP action that generates schemas for the specified SAP artifact.
+
+### Add the Request trigger
+
+To have your workflow receive requests from your SAP server over HTTP, you can use the [Request built-in trigger](../connectors/connectors-native-reqres.md). This trigger creates an endpoint with a URL where your SAP server can send HTTP POST requests to your workflow. When your workflow receives these requests, the trigger fires and runs the next step in your workflow.
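+
+In code view, a basic Request trigger looks similar to the following sketch, where the empty schema accepts any request body:
+
+```json
+"triggers": {
+   "manual": {
+      "type": "Request",
+      "kind": "Http",
+      "inputs": {
+         "schema": {}
+      }
+   }
+}
+```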
+
+Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), create a Consumption logic app resource and blank workflow, which opens in the designer.
+
+1. In the designer, [follow these general steps to find and add the Request built-in trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+ ![Screenshot shows the Request trigger for a Consumption workflow.](./media/sap-generate-schemas-for-artifacts/add-request-trigger-consumption.png)
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+ This step generates an endpoint URL where your trigger can receive requests from your SAP server, for example:
+
+ ![Screenshot shows the Request trigger's generated endpoint URL for receiving requests in a Consumption workflow.](./media/sap-generate-schemas-for-artifacts/generate-http-endpoint-url-consumption.png)
+
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), create a Standard logic app resource and a blank workflow, which opens in the designer.
+
+1. In the designer, [follow these general steps to find and add the Request built-in trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ ![Screenshot shows the Request trigger for a Standard workflow.](./media/sap-generate-schemas-for-artifacts/add-request-trigger-standard.png)
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+ This step generates an endpoint URL where your trigger can receive requests from your SAP server, for example:
+
+ ![Screenshot shows the Request trigger's generated endpoint URL for receiving requests in a Standard workflow.](./media/sap-generate-schemas-for-artifacts/generate-http-endpoint-url-standard.png)
+++
+### Add an SAP action to generate schemas
+
+Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+
+### [Consumption](#tab/consumption)
+
+1. In the workflow designer, under the Request trigger, select **New step**.
+
+1. In the designer, [follow these general steps to find and add the SAP managed action named **Generate schemas**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+ For more information about this SAP managed action, see [Generate schemas](/connectors/sap/#generate-schemas).
+
+1. If prompted, provide the [connection information](/connectors/sap/#default-connection) for your on-premises SAP server. When you're done, select **Create**. Otherwise, continue with the next step to set up the SAP action.
+
+ By default, when you create a connection for an SAP managed operation, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. Learn more about the [Safe Typing setting](sap-create-example-scenario-workflows.md#safe-typing). For other optional available connection parameters, see [Default connection information](/connectors/sap/#default-connection).
+
+ After Azure Logic Apps sets up and tests your connection, the action information box appears. For more information about any connection problems that might happen, see [Troubleshoot connections](sap-create-example-scenario-workflows.md#troubleshoot-connections).
+
+ ![Screenshot shows Consumption workflow and SAP managed action named Generate schemas.](./media/sap-generate-schemas-for-artifacts/sap-generate-schemas-consumption.png)
+
+1. In the [**Generate schemas** action](/connectors/sap/#generate-schemas), provide a path to the artifact for which you want to generate the schema by selecting an available SAP action on your SAP server.
+
+ 1. In the **Body ActionUri** parameter's edit box, select the folder icon. From the list that opens, select **BAPI**, **IDOC**, **RFC**, or **TRFC**. This example selects **IDOC**. If you select a different type, the available SAP actions change based on your selection.
+
+ > [!NOTE]
+ >
+ > If you get a **Bad Gateway (500)** error or **Bad request (400)** error, see [500 Bad Gateway or 400 Bad Request error](sap-create-example-scenario-workflows.md#bad-gateway-request).
+
+ ![Screenshot shows Consumption workflow, Generate schemas action, and selecting IDOC.](./media/sap-generate-schemas-for-artifacts/sap-generate-schemas-select-idoc-consumption.png)
+
+ 1. Browse the SAP action types folders using the arrows to find and select the SAP action that you want to use.
+
+ This example selects **ORDERS** > **ORDERS05** > **720** > **Send**.
+
+ ![Screenshot shows Consumption workflow, Generate schemas action, and finding an Orders action.](./media/sap-generate-schemas-for-artifacts/sap-generate-schemas-select-artifact-consumption.png)
+
+ If you can't find the action you want, you can manually enter a path, for example:
+
+ ![Screenshot shows Consumption workflow and manually entering a path to an SAP action.](./media/sap-generate-schemas-for-artifacts/sap-generate-schemas-manual-consumption.png)
+
+ > [!TIP]
+ >
+ > For the **Body ActionUri** parameter, you can use the expression editor to provide the parameter value.
+ > That way, you can use the same SAP action for different message types.
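+
+   For example, the following sketch builds the action URI with an expression, assuming the URI format that appears in this article's IDoc examples and a hypothetical `IDocType` variable:
+
+   ```json
+   "@{concat('http://Microsoft.LobServices.Sap/2007/03/Idoc/2/', variables('IDocType'), '//720/Send')}"
+   ```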
+
+ For more information about this SAP action, see [Message schemas for IDoc operations](/biztalk/adapters-and-accelerators/adapter-sap/message-schemas-for-idoc-operations).
+
+ 1. To generate schemas for more than one artifact, in the **Body ActionUri** section, select **Add new item**.
+
+ ![Screenshot shows selecting the option to add a new item.](./media/sap-generate-schemas-for-artifacts/sap-generate-schemas-add-item-consumption.png)
+
+ 1. For each artifact, provide the SAP action that you want to use for schema generation, for example:
+
+ ![Screenshot shows multiple SAP actions to use for generating multiple schemas.](./media/sap-generate-schemas-for-artifacts/sap-generate-schemas-multiples-consumption.png)
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+### [Standard](#tab/standard)
+
+1. In the workflow designer, under the Request trigger, select the plus sign (**+**) > **Add an action**.
+
+1. In the designer, [follow these general steps to find and add the SAP built-in action named **Generate Schema**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+ For more information about this SAP built-in action, see [Generate Schema](/azure/logic-apps/connectors/built-in/reference/sap/#generate-schema-(preview)).
+
+1. If prompted, provide the following connection information for your on-premises SAP server. When you're done, select **Create**. Otherwise, continue with the next step to set up the SAP action.
+
+ | Parameter | Required | Description |
+ |--|-|-|
+ | **Connection name** | Yes | Enter a name for the connection. |
+ | **Client** | Yes | The SAP client ID to use for connecting to your SAP server |
+ | **Authentication Type** | Yes | The authentication type to use for your connection. To create an SNC connection, see [Enable Secure Network Communications (SNC)](logic-apps-using-sap-connector.md?tabs=single-tenant#enable-secure-network-communications). |
+ | **SAP Username** | Yes | The username for your SAP server |
+ | **SAP Password** | Yes | The password for your SAP server |
+ | **Logon Type** | Yes | Select either **Application Server** or **Group**, and then configure the corresponding required parameters, even though they appear optional: <br><br>**Application Server**: <br>- **Server Host**: The host name for your SAP Application Server <br>- **Service**: The service name or port number for your SAP Application Server <br>- **System Number**: Your SAP server's system number, which ranges from 00 to 99 <br><br>**Group**: <br>- **Server Host**: The host name for your SAP Message Server <br>- **Service Name or Port Number**: The service name or port number for your SAP Message Server <br>- **System ID**: The system ID for your SAP server <br>- **Logon Group**: The logon group for your SAP server. On your SAP server, you can find or edit the **Logon Group** value by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317). |
+ | **Language** | Yes | The language to use for sending data to your SAP server. The value is either **Default** (English) or one of the [permitted values](/azure/logic-apps/connectors/built-in/reference/sap/#parameters-21). <br><br>**Note**: The SAP built-in connector saves this parameter value as part of the SAP connection parameters. For more information, see [Change language headers for sending data to SAP](sap-create-example-scenario-workflows.md#change-language-headers). |
+
+ After Azure Logic Apps sets up and tests your connection, the action information box appears. For more information about any connection problems that might happen, see [Troubleshoot connections](sap-create-example-scenario-workflows.md#troubleshoot-connections).
+
+ ![Screenshot shows Standard workflow and SAP built-in action named Generate Schema.](./media/sap-generate-schemas-for-artifacts/sap-generate-schemas-standard.png)
+
+ > [!NOTE]
+ >
+ > If you get a **Bad Gateway (500)** error or **Bad request (400)** error, see [500 Bad Gateway or 400 Bad Request error](sap-create-example-scenario-workflows.md#bad-gateway-request).
+
+1. In the [**Generate Schema** action](/azure/logic-apps/connectors/built-in/reference/sap/#generate-schema-(preview)), provide the following information about the artifact for which to generate the schema.
+
+ This action's parameters change based on the **Operation Type** value that you select.
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+   | **Operation Type** | Yes | **BAPI**, **IDoc**, **RFC**, or **tRFC** | The operation type to use for the schema generation. This example selects and continues with **IDoc**. |
+ | **IDoc Type** | Yes | <*IDoc-type*> | Select the IDoc type to use for schema generation. This example selects **ORDERS05**. |
+ | **Release** | Yes | <*IDoc-release-number*> | Select the IDoc release number. This example selects **720**. |
+ | **Version** | Yes | <*IDoc-release-version*> | Select the IDoc release version. This example selects **3**. |
+ | **Direction** | Yes | <*request-direction*> | Select the direction for the request. This example selects **Send**. |
+
+ ![Screenshot shows Standard workflow, Generate Schema action, and IDoc artifact information.](./media/sap-generate-schemas-for-artifacts/sap-generate-schemas-select-idoc-standard.png)
+
+   For more information about schemas for this SAP action, see [Message schemas for IDoc operations](/biztalk/adapters-and-accelerators/adapter-sap/message-schemas-for-idoc-operations).
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+++
+<a name="test-workflow"></a>
+
+### Test your workflow for schema generation
+
+Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+
+### [Consumption](#tab/consumption)
+
+1. If your Consumption logic app resource isn't already enabled, on your logic app menu, select **Overview**. On the toolbar, select **Enable**.
+
+1. On the designer toolbar, select **Run Trigger** > **Run** to manually start your workflow.
+
+1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. To send the request, use a tool such as [Postman](https://www.getpostman.com/apps).
+
+ For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
+
+ ```xml
+ <?xml version="1.0" encoding="UTF-8" ?>
+ <Send xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/2/ORDERS05//720/Send">
+ <idocData>
+ <...>
+ </idocData>
+ </Send>
+ ```
+
+1. After you send your HTTP request, wait for the response from your workflow.
+
+ > [!NOTE]
+ >
+ > Your workflow might time out if all the steps required for the response don't finish within the [request timeout limit](logic-apps-limits-and-config.md).
+ > If this condition happens, requests might get blocked. To help you diagnose problems, learn how you can [check and monitor your logic app workflows](monitor-logic-apps.md).
+
+1. On your logic app's **Overview** pane, under **Runs history**, find and open the workflow run.
+
+1. Find the **Generate schemas** action, and review the action's outputs.
+
+ The outputs show the generated schemas for the specified messages.
+
+For more information about reviewing workflow run history, see [Monitor logic app workflows](monitor-logic-apps.md?tabs=consumption).
+
+### [Standard](#tab/standard)
+
+1. If your Standard logic app resource is stopped or disabled, from your workflow, go to the logic app resource level, and select **Overview**. On the toolbar, select **Start**.
+
+1. Return to the workflow level. On the workflow menu, select **Overview**. On the toolbar, select **Run** > **Run** to manually start your workflow.
+
+1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. To send the request, use a tool such as [Postman](https://www.getpostman.com/apps).
+
+ For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
+
+ ```xml
+ <?xml version="1.0" encoding="UTF-8" ?>
+ <Send xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/2/ORDERS05//720/Send">
+ <idocData>
+ <...>
+ </idocData>
+ </Send>
+ ```
+
+1. After you send the HTTP request, wait for the response from your workflow.
+
+ > [!NOTE]
+ >
+ > Your workflow might time out if all the steps required for the response don't finish within the [request timeout limit](logic-apps-limits-and-config.md).
+ > If this condition happens, requests might get blocked. To help you diagnose problems, learn [how to check and monitor your logic app workflows](monitor-logic-apps.md).
+
+1. On your workflow's **Overview** pane, under **Run History**, find and open the workflow run.
+
+1. Find the **Generate Schema** action, and review the action's outputs.
+
+ The outputs show the generated schemas for the specified messages.
+
+For more information about reviewing workflow run history, see [Monitor logic app workflows](monitor-logic-apps.md?tabs=standard).
+++
+## Upload schemas to an integration account
+
+Optionally, you can download or store the generated schemas in repositories, such as an [integration account](logic-apps-enterprise-integration-create-integration-account.md) or Azure storage account, for example, in a blob container. Integration accounts provide a first-class experience with XML actions for workflows in Azure Logic Apps. You have the option to upload generated schemas to an existing integration account within the same workflow that generates those schemas by using the Azure Resource Manager action named **Create or update a resource**.
+
+> [!NOTE]
+>
+> Schemas use base64-encoded format. To upload schemas to an integration account, you must decode them first
+> by using the `base64ToString()` function. The following example shows the code for the `properties` element:
+>
+> ```json
+> "properties": {
+> "Content": "@base64ToString(items('For_each')?['Content'])",
+> "ContentType": "application/xml",
+> "SchemaType": "Xml"
+> }
+> ```
+
+For this task, you'll need an [integration account](logic-apps-enterprise-integration-create-integration-account.md), if you don't already have one. Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps to upload schemas to an integration account from your workflow after schema generation.
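+
+In code view, the upload pattern looks roughly like the following sketch. The **For each** loop and the `base64ToString()` decoding come from the note earlier in this section; the connection reference, resource path, and region values are placeholders:
+
+```json
+"For_each": {
+   "type": "Foreach",
+   "foreach": "@body('Generate_schemas')",
+   "actions": {
+      "Create_or_update_a_resource": {
+         "type": "ApiConnection",
+         "inputs": {
+            "host": {
+               "connection": {
+                  "name": "@parameters('$connections')['arm']['connectionId']"
+               }
+            },
+            "method": "put",
+            "path": "/<integration-account-schema-resource-path>",
+            "body": {
+               "location": "<integration-account-region>",
+               "properties": {
+                  "Content": "@base64ToString(items('For_each')?['Content'])",
+                  "ContentType": "application/xml",
+                  "SchemaType": "Xml"
+               }
+            }
+         },
+         "runAfter": {}
+      }
+   },
+   "runAfter": {}
+}
+```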
+
+### [Consumption](#tab/consumption)
+
+1. In the workflow designer, under the SAP managed action named **Generate schemas**, select **New step**.
+
+1. [Follow these general steps to find and add the Azure Resource Manager managed action named **Create or update a resource**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). If you're prompted to sign in with your credentials, go ahead and continue.
+
+ After Azure Logic Apps sets up and tests your connection, the action information box appears.
+
+ ![Screenshot shows Consumption workflow and an Azure Resource Manager action named Create or update a resource.](./media/sap-generate-schemas-for-artifacts/generate-schemas-azure-resource-manager-action-consumption.png)
+
+1. In the **Create or update a resource** action, provide the [required information](/connectors/arm/#create-or-update-a-resource).
+
+ 1. To include any outputs from previous steps in the workflow, select inside the parameter where you want to include the output, open the dynamic content list, and select the output to include.
+
+ 1. From the **Add new parameter** list, select the **Location** and **Properties** parameters.
+
+ 1. Provide the values for these added parameters, for example:
+
+ ![Screenshot shows Consumption workflow and Azure Resource Manager action with added parameters named Location and Properties.](./media/sap-generate-schemas-for-artifacts/generate-schemas-azure-resource-manager-action-complete-consumption.png)
+
+ The **Generate schemas** action generates schemas as a collection, so the designer automatically adds a **For each** loop around the Azure Resource Manager action, for example:
+
+ ![Screenshot shows Consumption workflow and for each loop with included Azure Resource Manager action.](./media/sap-generate-schemas-for-artifacts/generate-schemas-azure-resource-manager-for-each-consumption.png)
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+### [Standard](#tab/standard)
+
+1. In the workflow designer, under the SAP built-in action named **Generate Schema**, select the plus sign (**+**) > **Add an action**.
+
+1. [Follow these general steps to find and add the Azure Resource Manager managed action named **Create or update a resource**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). If you're prompted to sign in with your credentials, go ahead and continue.
+
+ After Azure Logic Apps sets up and tests your connection, the action information box appears.
+
+ ![Screenshot shows Standard workflow and an Azure Resource Manager action named Create or update a resource.](./media/sap-generate-schemas-for-artifacts/generate-schemas-azure-resource-manager-action-standard.png)
+
+1. In the **Create or update a resource** action, provide the [required information](/connectors/arm/#create-or-update-a-resource).
+
+ 1. To include any outputs from previous steps in the workflow, select inside the parameter where you want to include the output, open the dynamic content list, and select the output to include.
+
+ 1. From the **Add new parameter** list, select the **Location** and **Properties** parameters.
+
+ 1. Provide the values for these added parameters, for example:
+
+ ![Screenshot shows Standard workflow and Azure Resource Manager action with added parameters named Location and Properties.](./media/sap-generate-schemas-for-artifacts/generate-schemas-azure-resource-manager-action-complete-standard.png)
+
+ The **Generate Schema** action generates schemas as a collection, so the designer automatically adds a **For each** loop around the Azure Resource Manager action, for example:
+
+ ![Screenshot shows Standard workflow and for each loop with included Azure Resource Manager action.](./media/sap-generate-schemas-for-artifacts/generate-schemas-azure-resource-manager-for-each-standard.png)
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+++
+### Test your workflow
+
+1. Based on whether you have a Consumption or Standard logic app workflow, [follow the general steps to manually test and run your workflow](#test-workflow).
+
+1. After a successful run, go to the integration account, and check that the generated schemas exist.
+
+## Sample XML schemas
+
+If you're learning how to generate an XML schema for use in creating a sample document, review the following samples. These examples show how you can work with many types of payloads, including:
+
+* [RFC requests](#xml-samples-for-rfc-requests)
+* [BAPI requests](#xml-samples-for-bapi-requests)
+* [IDoc requests](#xml-samples-for-idoc-requests)
+* Simple or complex XML schema data types
+* Table parameters
+* Optional XML behaviors
+
+You can begin your XML schema with an optional XML prolog. The SAP connector works with or without the XML prolog.
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+```
+
+### XML samples for RFC requests
+
+The following example shows a basic RFC call where the RFC name is `STFC_CONNECTION`. This request uses the default namespace `xmlns=`. However, you can assign and use namespace aliases such as `xmlns:exampleAlias=`. The namespace value is the namespace for all the RFCs in SAP for Microsoft services. The request has a simple input parameter named `<REQUTEXT>`.
+
+```xml
+<STFC_CONNECTION xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
+ <REQUTEXT>exampleInput</REQUTEXT>
+</STFC_CONNECTION>
+```
+
+The following example shows an RFC call with a table parameter. This example call and group of test RFCs are available in all SAP systems. The table parameter is named `TCPICDAT`. The table line type is `ABAPTEXT`, and this element repeats for each row in the table. This example contains a single line, which is named `LINE`. Requests with a table parameter can contain any number of fields, where the number is a positive integer (*n*).
+
+```xml
+<STFC_WRITE_TO_TCPIC xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
+ <RESTART_QNAME>exampleQName</RESTART_QNAME>
+ <TCPICDAT>
+ <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
+ <LINE>exampleFieldInput1</LINE>
+ </ABAPTEXT>
+ <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
+ <LINE>exampleFieldInput2</LINE>
+ </ABAPTEXT>
+ <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
+ <LINE>exampleFieldInput3</LINE>
+ </ABAPTEXT>
+ </TCPICDAT>
+</STFC_WRITE_TO_TCPIC>
+```
+
+> [!TIP]
+>
+> To review the result from RFC **STFC_WRITE_TO_TCPIC**, use the SAP Logon's Data Browser (T-Code SE16) and the table named **TCPIC**.
+
+The following example shows an RFC call with a table parameter that has an anonymous field, which is a field without an assigned name. Complex types are declared under a separate namespace where the declaration sets a new default for the current node and all its child elements. The example uses the hex code `x002F` as an escape character for the symbol */* because this symbol is reserved in the SAP field name.
+
+```xml
+<RFC_XML_TEST_1 xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
+ <IM_XML_TABLE>
+ <RFC_XMLCNT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
+ <_x002F_AnonymousField>exampleFieldInput</_x002F_AnonymousField>
+ </RFC_XMLCNT>
+ </IM_XML_TABLE>
+</RFC_XML_TEST_1>
+```
+
+The following example includes prefixes for the namespaces. You can declare all prefixes at once, or you can declare any number of prefixes as attributes of a node. The RFC namespace alias `ns0` is used for the root node and the parameters of the basic type.
+
+> [!NOTE]
+>
+> Complex types are declared under a different namespace for RFC types with
+> the alias `ns3` instead of the regular RFC namespace with the alias `ns0`.
+
+```xml
+<ns0:BBP_RFC_READ_TABLE xmlns:ns0="http://Microsoft.LobServices.Sap/2007/03/Rfc/" xmlns:ns3="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
+ <ns0:DELIMITER>0</ns0:DELIMITER>
+ <ns0:QUERY_TABLE>KNA1</ns0:QUERY_TABLE>
+ <ns0:ROWCOUNT>250</ns0:ROWCOUNT>
+ <ns0:ROWSKIPS>0</ns0:ROWSKIPS>
+ <ns0:FIELDS>
+ <ns3:RFC_DB_FLD>
+ <ns3:FIELDNAME>KUNNR</ns3:FIELDNAME>
+ </ns3:RFC_DB_FLD>
+ </ns0:FIELDS>
+</ns0:BBP_RFC_READ_TABLE>
+```
+
+#### XML samples for BAPI requests
+
+The following XML samples are example requests to [call the BAPI method](/connectors/sap/#[bapi]-call-method-in-sap-(preview)).
+
+> [!NOTE]
+> SAP makes business objects available to external systems by describing them in response to RFC `RPY_BOR_TREE_INIT`,
+> which Azure Logic Apps issues without an input filter. Azure Logic Apps inspects the output table `BOR_TREE`.
+> The `SHORT_TEXT` field is used for names of business objects. Business objects not returned by SAP in the output
+> table aren't accessible to Azure Logic Apps.
+>
+> If you use custom business objects, make sure to publish and release these business objects in SAP. Otherwise,
+> SAP doesn't list your custom business objects in the output table `BOR_TREE`. You can't access your custom
+> business objects in Azure Logic Apps until you expose the business objects from SAP.
+
+The following example gets a list of banks using the BAPI method `GETLIST`. This sample contains the business object for a bank named `BUS1011`.
+
+```xml
+<GETLIST xmlns="http://Microsoft.LobServices.Sap/2007/03/Bapi/BUS1011">
+ <BANK_CTRY>US</BANK_CTRY>
+ <MAX_ROWS>10</MAX_ROWS>
+</GETLIST>
+```
+
+The following example creates a bank object using the `CREATE` method. This example uses the same business object, `BUS1011`, as the previous example. When you use the `CREATE` method to create a bank, make sure to commit your changes because this method doesn't commit changes by default.
+
+> [!TIP]
+>
+> Make sure that your XML document follows any validation rules configured in your SAP system. For example, for this
+> sample document, in the USA, the bank key (`<BANK_KEY>`) must be a bank routing number, also known as an ABA number.
+
+```xml
+<CREATE xmlns="http://Microsoft.LobServices.Sap/2007/03/Bapi/BUS1011">
+ <BANK_ADDRESS>
+ <BANK_NAME xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">ExampleBankName</BANK_NAME>
+ <REGION xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">ExampleRegionName</REGION>
+ <STREET xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">ExampleStreetAddress</STREET>
+ <CITY xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">Redmond</CITY>
+ </BANK_ADDRESS>
+ <BANK_COUNTRY>US</BANK_COUNTRY>
+ <BANK_KEY>123456789</BANK_KEY>
+</CREATE>
+```
+
+The following example gets details for a bank using the bank routing number, which is the value for `<BANK_KEY>`.
+
+```xml
+<GETDETAIL xmlns="http://Microsoft.LobServices.Sap/2007/03/Bapi/BUS1011">
+ <BANK_COUNTRY>US</BANK_COUNTRY>
+ <BANK_KEY>123456789</BANK_KEY>
+</GETDETAIL>
+```
+
+### XML samples for IDoc requests
+
+To generate a plain SAP IDoc XML schema, use the **SAP Logon** application and the `WE60` T-Code. Access the SAP documentation through the user interface, and generate XML schemas in XSD format for your IDoc types and extensions. For more information about generic SAP formats and payloads, and their built-in dialogs, review the [SAP documentation](https://help.sap.com/viewer/index).
+
+This example declares the root node and namespaces. The URI in the sample code, `http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700/Send`, declares the following configuration:
+
+* `/IDoc` is the root node for all IDocs.
+
+* `/3` is the record types version for common segment definitions.
+
+* `/ORDERS05` is the IDoc type.
+
+* `//` is an empty segment because there's no IDoc extension.
+
+* `/700` is the SAP version.
+
+* `/Send` is the action to send the information to SAP.
+
+```xml
+<ns0:Send xmlns:ns0="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700/Send" xmlns:ns3="http://schemas.microsoft.com/2003/10/Serialization" xmlns:ns1="http://Microsoft.LobServices.Sap/2007/03/Types/Idoc/Common/" xmlns:ns2="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05//700">
+ <ns0:idocData>
+```
+
+You can repeat the `idocData` node to send a batch of IDocs in a single call. In the following example, there's one control record named `EDI_DC40`, and multiple data records.
+
+```xml
+<...>
+ <ns0:idocData>
+ <ns2:EDI_DC40>
+ <ns1:TABNAM>EDI_DC40</ns1:TABNAM>
+ <...>
+ <ns1:ARCKEY>Cor1908207-5</ns1:ARCKEY>
+ </ns2:EDI_DC40>
+ <ns2:E2EDK01005>
+    <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDK01005</ns2:DATAHEADERCOLUMN_SEGNAM>
+ <ns2:CURCY>USD</ns2:CURCY>
+ </ns2:E2EDK01005>
+ <ns2:E2EDK03>
+ <...>
+ </ns0:idocData>
+```
+
+The following example shows a sample IDoc control record, which uses a prefix named `EDI_DC`. You must update the values to match your SAP installation and IDoc type. For example, your IDoc client code might not be `800`. Contact your SAP team to make sure you're using the correct values for your SAP installation.
+
+```xml
+<ns2:EDI_DC40>
+    <ns1:TABNAM>EDI_DC40</ns1:TABNAM>
+    <ns1:MANDT>800</ns1:MANDT>
+    <ns1:DIRECT>2</ns1:DIRECT>
+    <ns1:IDOCTYP>ORDERS05</ns1:IDOCTYP>
+    <ns1:CIMTYP></ns1:CIMTYP>
+    <ns1:MESTYP>ORDERS</ns1:MESTYP>
+    <ns1:STD>X</ns1:STD>
+    <ns1:STDVRS>004010</ns1:STDVRS>
+    <ns1:STDMES></ns1:STDMES>
+    <ns1:SNDPOR>SAPENI</ns1:SNDPOR>
+    <ns1:SNDPRT>LS</ns1:SNDPRT>
+    <ns1:SNDPFC>AG</ns1:SNDPFC>
+    <ns1:SNDPRN>ABAP1PXP1</ns1:SNDPRN>
+    <ns1:SNDLAD></ns1:SNDLAD>
+    <ns1:RCVPOR>BTSFILE</ns1:RCVPOR>
+    <ns1:RCVPRT>LI</ns1:RCVPRT>
+```
+
+The following example shows a sample data record with plain segments. This example uses the SAP date format. Strongly typed documents can use native XML date formats, such as `2020-12-31 23:59:59`.
+
+```xml
+<ns2:E2EDK01005>
+ <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDK01005</ns2:DATAHEADERCOLUMN_SEGNAM>
+ <ns2:CURCY>USD</ns2:CURCY>
+ <ns2:BSART>OR</ns2:BSART>
+ <ns2:BELNR>1908207-5</ns2:BELNR>
+ <ns2:ABLAD>CC</ns2:ABLAD>
+</ns2:E2EDK01005>
+<ns2:E2EDK03>
+    <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDK03</ns2:DATAHEADERCOLUMN_SEGNAM>
+    <ns2:IDDAT>002</ns2:IDDAT>
+    <ns2:DATUM>20160611</ns2:DATUM>
+</ns2:E2EDK03>
+```
+
+The following example shows a data record with grouped segments. The record includes a group parent node named `E2EDKT1002GRP`, and multiple child nodes, including `E2EDKT1002` and `E2EDKT2001`.
+
+```xml
+<ns2:E2EDKT1002GRP>
+ <ns2:E2EDKT1002>
+ <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDKT1002</ns2:DATAHEADERCOLUMN_SEGNAM>
+ <ns2:TDID>ZONE</ns2:TDID>
+ </ns2:E2EDKT1002>
+ <ns2:E2EDKT2001>
+ <ns2:DATAHEADERCOLUMN_SEGNAM>E2EDKT2001</ns2:DATAHEADERCOLUMN_SEGNAM>
+ <ns2:TDLINE>CRSD</ns2:TDLINE>
+ </ns2:E2EDKT2001>
+</ns2:E2EDKT1002GRP>
+```
+
+The recommended method is to create a transaction identifier (`tid`) for the IDoc to use with tRFC. You can set this transaction identifier by using the [Send IDoc operation](/connectors/sap/#send-idoc) in the SAP managed connector.
+
+The following example shows an alternative method to set the transaction identifier, or `tid`. In this example, the last data record segment node and the IDoc data node are closed. Then, the GUID, `guid`, is used as the tRFC identifier to detect duplicates.
+
+```xml
+ </E2STZUM002GRP>
+ </idocData>
+ <guid>8820ea40-5825-4b2f-ac3c-b83adc34321c</guid>
+</Send>
+```
+
+## Next steps
+
+* [Create example workflows for common SAP scenarios](sap-create-example-scenario-workflows.md)
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
Title: "Text processing with batch endpoints"
+ Title: "Deploy and run language models in batch endpoints"
-description: Learn how to use batch deployments to process text and output results.
+description: Learn how to use batch deployments to process text with large language models.
[!INCLUDE [cli v2](../../includes/machine-learning-dev-v2.md)]
-Batch Endpoints can be used to deploy expensive models, like language models, over text data. In this tutorial you'll learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace.
+Batch Endpoints can be used to deploy expensive models, like language models, over text data. In this tutorial, you learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace. The tutorial also shows how to optimize inference by using the HuggingFace `optimum` and `accelerate` libraries.
## About this sample
-The model we are going to work with was built using the popular library transformers from HuggingFace along with [a pre-trained model from Facebook with the BART architecture](https://huggingface.co/facebook/bart-large-cnn). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation](https://arxiv.org/abs/1910.13461). This model has the following constraints which are important to keep in mind for deployment:
+The model we are going to work with was built using the popular library transformers from HuggingFace along with [a pre-trained model from Facebook with the BART architecture](https://huggingface.co/facebook/bart-large-cnn). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation](https://arxiv.org/abs/1910.13461). This model has the following constraints, which are important to keep in mind for deployment:
* It can work with sequences up to 1024 tokens.
* It is trained for summarization of text in English.
model = ml_client.models.create_or_update(
We are going to create a batch endpoint named `text-summarization-batch` where to deploy the HuggingFace model to run text summarization on text files in English.
-1. Decide on the name of the endpoint. The name of the endpoint will end-up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+1. Decide on the name of the endpoint. The name of the endpoint ends up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
# [Azure CLI](#tab/cli)
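   A minimal sketch of this step, assuming the Azure CLI `ml` extension is installed and your workspace is configured as the default:

   ```azurecli
   # Hypothetical variable; batch endpoint names must be unique per Azure region.
   ENDPOINT_NAME="text-summarization-batch"

   # Create the batch endpoint under that name.
   az ml batch-endpoint create --name $ENDPOINT_NAME
   ```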
We are going to create a batch endpoint named `text-summarization-batch` where t
## Creating the deployment
-Let's create the deployment that will host the model:
+Let's create the deployment that hosts the model:
-1. We need to create a scoring script that can read the CSV files provided by the batch deployment and return the scores of the model with the summary. The following script does the following:
+1. We need to create a scoring script that can read the CSV files provided by the batch deployment and return the scores of the model with the summary. The following script performs these actions:
   > [!div class="checklist"] > * Indicates an `init` function that detects the hardware configuration (CPU vs GPU) and loads the model accordingly. Both the model and the tokenizer are loaded in global variables. We are not using a `pipeline` object from HuggingFace to account for the limitation in the sequence lengths of the model we are currently using.
- > * Notice that we are doing performing model optimizations to improve the performance using `optimum` and accelerate libraries. If the model or hardware doesn't support it, we will run the deployment without such optimizations.
+   > * Notice that we are performing **model optimizations** to improve the performance using the `optimum` and `accelerate` libraries. If the model or hardware doesn't support it, we will run the deployment without such optimizations.
   > * Indicates a `run` function that is executed for each mini-batch the batch deployment provides. > * The `run` function reads the entire batch using the `datasets` library. The text we need to summarize is on the column `text`. > * The `run` method iterates over each of the rows of the text and runs the prediction. Since this is a very expensive model, running the prediction over entire files will result in an out-of-memory exception. Notice that the model is not executed with the `pipeline` object from `transformers`. This is done to account for long sequences of text and the limitation of 1024 tokens in the underlying model we are using.
Let's create the deployment that will host the model:
# [Azure CLI](#tab/cli)
- The environment definition will be included in the deployment file.
+ The environment definition is included in the deployment file.
__deployment.yml__
Let's create the deployment that will host the model:
   > [!IMPORTANT] > The environment `torch200-transformers-gpu` we've created requires a CUDA 11.8 compatible hardware device to run Torch 2.0 and Ubuntu 20.04. If your GPU device doesn't support this version of CUDA, you can check the alternative `torch113-conda.yaml` conda environment (also available on the repository), which runs Torch 1.13 over Ubuntu 18.04 with CUDA 10.1. However, acceleration using the `optimum` and `accelerate` libraries won't be supported on this configuration.
-1. Each deployment runs on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) or [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). In this example, our model can benefit from GPU acceleration, which is why we will use a GPU cluster.
+1. Each deployment runs on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) or [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). In this example, our model can benefit from GPU acceleration, which is why we use a GPU cluster.
# [Azure CLI](#tab/cli)
Let's create the deployment that will host the model:
> [!NOTE]
- > You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
+ > You are not charged for compute at this point as the cluster remains at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
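   A hedged sketch of creating such a GPU cluster with the Azure CLI; the cluster name, VM size, and instance counts are illustrative assumptions:

   ```azurecli
   # Create a GPU compute cluster that scales down to zero nodes when idle,
   # so compute charges only accrue while batch scoring jobs run.
   az ml compute create --name gpu-cluster \
       --type AmlCompute \
       --size Standard_NC6s_v3 \
       --min-instances 0 \
       --max-instances 2
   ```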
1. Now, let's create the deployment.
Let's create the deployment that will host the model:
   > [!IMPORTANT] > You will notice in this deployment a high value in `timeout` in the parameter `retry_settings`. The reason is the nature of the model we are running. This is a very expensive model and inference on a single row may take up to 60 seconds. The `timeout` parameter controls how much time the Batch Deployment should wait for the scoring script to finish processing each mini-batch. Since our model runs predictions row by row, processing a long file may take time. Also notice that the number of files per batch is set to 1 (`mini_batch_size=1`). This is again related to the nature of the work we are doing. Processing one file at a time per batch is expensive enough to justify it. You will notice this being a pattern in NLP processing.
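   As a hedged sketch, these settings can be supplied when creating the deployment; the `--set` overrides below use the `retry_settings.timeout` and `mini_batch_size` properties mentioned above, and the file name and values are assumptions:

   ```azurecli
   # Create the batch deployment, allowing up to 10 minutes per mini-batch
   # and processing a single file per mini-batch.
   az ml batch-deployment create --file deployment.yml \
       --endpoint-name $ENDPOINT_NAME \
       --set retry_settings.timeout=600 mini_batch_size=1
   ```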
-1. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+1. Although you can invoke a specific deployment inside of an endpoint, you usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is called the "default" deployment. This gives you the possibility of changing the default deployment, and hence changing the model serving the deployment, without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
# [Azure CLI](#tab/cli)
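   A minimal sketch of that instruction, assuming `$DEPLOYMENT_NAME` holds the name of the deployment you created earlier:

   ```azurecli
   # Route callers of the endpoint to this deployment by default,
   # without changing the contract with the user invoking the endpoint.
   az ml batch-endpoint update --name $ENDPOINT_NAME \
       --set defaults.deployment_name=$DEPLOYMENT_NAME
   ```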
For testing our endpoint, we are going to use a sample of the dataset [BillSum:
> [!TIP]
- > Notice that by indicating a local path as an input, the data will be uploaded to Azure Machine Learning default's storage account.
+   > Notice that by indicating a local path as an input, the data is uploaded to the Azure Machine Learning workspace's default storage account.
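   A hedged sketch of such an invocation with a local folder (the folder name is an assumption):

   ```azurecli
   # Invoke the endpoint with a local folder as input; the folder is uploaded
   # to the workspace's default storage account before the job starts.
   az ml batch-endpoint invoke --name $ENDPOINT_NAME --input ./data
   ```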
4. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
As mentioned in some of the notes along this tutorial, processing text may have
> [!div class="checklist"] > * Some NLP models may be very expensive in terms of memory and compute time. If this is the case, consider decreasing the number of files included on each mini-batch. In the example above, the number was taken to the minimum, 1 file per batch. While this may not be your case, take into consideration how many files your model can score at a time. Keep in mind that the relationship between the size of the input and the memory footprint of your model may not be linear for deep learning models. > * If your model can't even handle one file at a time (like in this example), consider reading the input data in rows/chunks. Implement batching at the row level if you need to achieve higher throughput or hardware utilization.
-> * Set the `timeout` value of your deployment accordly to how expensive your model is and how much data you expect to process. Remember that the `timeout` indicates the time the batch deployment would wait for your scoring script to run for a given batch. If your batch have many files or files with many rows, this will impact the right value of this parameter.
+> * Set the `timeout` value of your deployment according to how expensive your model is and how much data you expect to process. Remember that the `timeout` indicates the time the batch deployment would wait for your scoring script to run for a given batch. If your batch has many files or files with many rows, this affects the right value for this parameter.
## Considerations for MLflow models that process text The same considerations mentioned above apply to MLflow models. However, since you are not required to provide a scoring script for your MLflow model deployment, some of the recommendations mentioned may require a different approach. * MLflow models in Batch Endpoints support reading tabular data as input data, which may contain long sequences of text. See [File's types support](how-to-mlflow-batch.md#files-types-support) for details about which file types are supported.
-* Batch deployments will call your MLflow model's predict function with the content of an entire file in as Pandas dataframe. If your input data contains many rows, chances are that running a complex model (like the one presented in this tutorial) will result in an out-of-memory exception. If this is your case, you can consider:
+* Batch deployments call your MLflow model's predict function with the content of an entire file as a Pandas dataframe. If your input data contains many rows, chances are that running a complex model (like the one presented in this tutorial) results in an out-of-memory exception. If this is your case, you can consider:
* Customize how your model runs predictions and implement batching. To learn how to customize MLflow model's inference, see [Logging custom models](how-to-log-mlflow-models.md?#logging-custom-models). * Author a scoring script and load your model using `mlflow.<flavor>.load_model()`. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script) for details.
machine-learning How To Use Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints.md
job = ml_client.batch_endpoints.invoke(
Batch endpoints support reading files or folders from different locations. To learn more about the supported types and how to specify them read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md). > [!TIP]
-> Local data folders/files can be used when executing batch endpoints from the Azure Machine Learning CLI or Azure Machine Learning SDK for Python. However, that operation will result in the local data to be uploaded to the default Azure Machine Learning Data Store of the workspace you are working on.
-
-> [!IMPORTANT]
-> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 dataset.
-
+> **Using the REST API:** Batch endpoints provide an open and durable API to invoke the endpoints and create jobs. See [Create jobs and input data for batch endpoints (REST)](how-to-access-data-batch-endpoints-jobs.md?tabs=rest) to learn how to use it.
## Accessing outputs from batch jobs
Batch endpoints can handle multiple deployments under the same endpoint, allowin
You can add, remove, and update deployments without affecting the endpoint itself.

### Add non-default deployments

To add a new deployment to an existing endpoint, use the code:
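A minimal sketch, assuming a deployment definition in `deployment.yml` and an existing endpoint name in `$ENDPOINT_NAME`:

```azurecli
# Create an additional deployment under the same endpoint. Without
# --set-default, the endpoint's current default deployment is unchanged.
az ml batch-deployment create --file deployment.yml \
    --endpoint-name $ENDPOINT_NAME
```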
networking Azure Network Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-network-latency.md
Title: Azure network round-trip latency statistics description: Learn about round-trip latency statistics between Azure regions. -+ Previously updated : 06/30/2022- Last updated : 05/28/2023+ # Azure network round-trip latency statistics
-Azure continuously monitors the latency (speed) of core areas of its network using internal monitoring tools as well as measurements collected by [ThousandEyes](https://thousandeyes.com), a third-party synthetic monitoring service.
+Azure continuously monitors the latency (speed) of core areas of its network using internal monitoring tools and measurements collected by [ThousandEyes](https://thousandeyes.com), a third-party synthetic monitoring service.
## How are the measurements collected?
-The latency measurements are collected from ThousandEyes agents, hosted in Azure cloud regions worldwide, that continuously send network probes between themselves in 1-minute intervals. The monthly latency statistics are derived from averaging the collected samples for the month.
+The latency measurements are collected from ThousandEyes agents, hosted in Azure cloud regions worldwide, that continuously send network probes between themselves in 1-minute intervals. The monthly latency statistics are derived from averaging the collected samples for the month.
## June 2022 round-trip latency figures
-The monthly Percentile P50 round trip times between Azure regions for the past 30 days (ending on June 30, 2022) are shown below. The following measurements are powered by [ThousandEyes](https://thousandeyes.com).
+The monthly Percentile P50 round trip times between Azure regions for the past 30 days (ending on June 30, 2022) are shown in the following chart. The measurements are powered by [ThousandEyes](https://thousandeyes.com).
:::image type="content" source="media/azure-network-latency/azure-network-latency-thmb-july-2022.png" alt-text="Chart of the inter-region latency statistics as of June 30, 2022." lightbox="media/azure-network-latency/azure-network-latency-july-2022.png":::
networking Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure networking description: Sample Azure Resource Graph queries for Azure networking showing use of resource types and tables to access Azure networking related resources and properties. Previously updated : 07/07/2022 Last updated : 05/28/2023 --++ # Azure Resource Graph sample queries for Azure networking
-This page is a collection of [Azure Resource Graph](../../governance/resource-graph/overview.md)
-sample queries for Azure networking. For a complete list of Azure Resource Graph samples, see
-[Resource Graph samples by Category](../../governance/resource-graph/samples/samples-by-category.md)
-and [Resource Graph samples by Table](../../governance/resource-graph/samples/samples-by-table.md).
+This page is a collection of [Azure Resource Graph](../../governance/resource-graph/overview.md) sample queries for Azure networking. For a complete list of Azure Resource Graph samples, see [Resource Graph samples by Category](../../governance/resource-graph/samples/samples-by-category.md) and [Resource Graph samples by Table](../../governance/resource-graph/samples/samples-by-table.md).
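For example, a hedged sketch of running one such query from the Azure CLI (this assumes the `resource-graph` CLI extension is installed):

```azurecli
# Count virtual networks per subscription with a Resource Graph query.
az graph query -q "Resources | where type =~ 'microsoft.network/virtualnetworks' | summarize count() by subscriptionId"
```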
## Sample queries
and [Resource Graph samples by Table](../../governance/resource-graph/samples/sa
## Next steps

- Learn more about the [query language](../../governance/resource-graph/concepts/query-language.md).
- Learn more about how to [explore resources](../../governance/resource-graph/concepts/explore-resources.md).
- See samples of [Starter language queries](../../governance/resource-graph/samples/starter.md).
- See samples of [Advanced language queries](../../governance/resource-graph/samples/advanced.md).
operator-nexus Howto Baremetal Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md
This article describes how to perform lifecycle management operations on Bare Me
This command will `power-off` the specified `bareMetalMachineName`. ```azurecli
- az networkcloud baremetalmachine power-off \
- --name "bareMetalMachineName" \
- --resource-group "resourceGroupName"
+az networkcloud baremetalmachine power-off \
+ --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
``` ## Start the BMM
This command will `power-off` the specified `bareMetalMachineName`.
This command will `start` the specified `bareMetalMachineName`. ```azurecli
- az networkcloud baremetalmachine start \
- --name "bareMetalMachineName" \
- --resource-group "resourceGroupName"
+az networkcloud baremetalmachine start \
+ --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
``` ## Restart the BMM
This command will `start` the specified `bareMetalMachineName`.
This command will `restart` the specified `bareMetalMachineName`. ```azurecli
- az networkcloud baremetalmachine restart \
- --name "bareMetalMachineName" \
- --resource-group "resourceGroupName"
+az networkcloud baremetalmachine restart \
+ --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
``` ## Make a BMM unschedulable (cordon)
On executing the `cordon` command, with the value `True` for the `evacuate`
parameter, the workloads that are running on the BMM are `stopped` and the BMM is set to `pending` state. ```azurecli
- az networkcloud baremetalmachine cordon \
- --evacuate "True" \
- --name "bareMetalMachineName" \
- --resource-group "resourceGroupName"
+az networkcloud baremetalmachine cordon \
+ --evacuate "True" \
+ --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
``` Setting `evacuate` to `"True"` removes workloads from that node, while `"False"` only prevents the scheduling of new workloads.
You can make a BMM `schedulable` (usable) by executing the [`uncordon`](#make-a-
state on the BMM are `restarted` when the BMM is `uncordoned`. ```azurecli
- az networkcloud baremetalmachine uncordon \
- --name "bareMetalMachineName" \
- --resource-group "resourceGroupName"
+az networkcloud baremetalmachine uncordon \
+ --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
``` ## Reimage a BMM
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
Previously updated : 03/07/2023 Last updated : 05/26/2023
You can use the following options to configure your DNS settings for private end
> Existing Private DNS Zones tied to a single service should not be associated with two different Private Endpoints as it will not be possible to properly resolve two different A-Records that point to the same service. However, Private DNS Zones tied to multiple services would not face this resolution constraint. ## Azure services DNS zone configuration+ Azure creates a canonical name DNS record (CNAME) on the public DNS. The CNAME record redirects the resolution to the private domain name. You can override the resolution with the private IP address of your private endpoints. Your applications don't need to change the connection URL. When resolving to a public DNS service, the DNS server will resolve to your private endpoints. The process doesn't affect your existing applications.
For Azure services, use the recommended zone names as described in the following
| Azure Bot Service (Microsoft.BotService/botServices) / Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com | | Azure Bot Service (Microsoft.BotService/botServices) / Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com | | Azure Health Data Services (Microsoft.HealthcareApis/workspaces) / healthcareworkspace | privatelink.workspace.azurehealthcareapis.com </br> privatelink.fhir.azurehealthcareapis.com </br> privatelink.dicom.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
-| Azure Databricks (Microsoft.Databricks/workspaces) / databricks_ui_api, browser_authentication | privatelink.azuredatabricks.net | azuredatabricks.net
+| Azure Databricks (Microsoft.Databricks/workspaces) / databricks_ui_api, browser_authentication | privatelink.azuredatabricks.net | azuredatabricks.net |
+| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) / global | privatelink-global.wvd.microsoft.com | wvd.microsoft.com |
+| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces and Microsoft.DesktopVirtualization/hostpools) / feed, connection | privatelink.wvd.microsoft.com | wvd.microsoft.com |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hubs-compatible-endpoint)
For Azure services, use the recommended zone names as described in the following
| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net | | Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.us | azurehdinsight.us | | Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.ml.azure.us<br/>privatelink.notebooks.usgovcloudapi.net | api.ml.azure.us<br/>notebooks.usgovcloudapi.net<br/>instances.azureml.us<br/>aznbcontent.net<br/>inference.ml.azure.us |
+| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) / global | privatelink-global.wvd.azure.us | wvd.azure.us |
+| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces and Microsoft.DesktopVirtualization/hostpools) / feed, connection | privatelink.wvd.azure.us | wvd.azure.us |
>[!Note] >In the above text, `{region}` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following lists for regions codes:
For Azure services, use the recommended zone names as described in the following
| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.chinacloudapi.cn | redis.cache.chinacloudapi.cn | | Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.cn | azurehdinsight.cn | | Azure Data Explorer (Microsoft.Kusto) | privatelink.{regionName}.kusto.windows.cn | {regionName}.kusto.windows.cn |
+| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) / global | privatelink-global.wvd.azure.cn | wvd.azure.cn |
+| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces and Microsoft.DesktopVirtualization/hostpools) / feed, connection | privatelink.wvd.azure.cn | wvd.azure.cn |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hubs-compatible-endpoint)
private-multi-access-edge-compute-mec Partner Programs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/partner-programs.md
Networking ISV partners include software vendors that provide network functions
|Firewall |SD-WAN | |||
-| [Palo Alto Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/paloaltonetworks.vmseries-ngfw-vm-edge-panos-10-1-5?exp=ubp8&tab=Overview) | [NetFoundry](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/netfoundryinc.application-ziti-private-edge?exp=ubp8&tab=Overview) |
-| | [Nuage Networks by Nokia](https://aka.ms/nokianuage)|
+| [Palo Alto Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/paloaltonetworks.vmseries-ngfw-vm-edge-panos-10-2-4?tab=Overview) | [NetFoundry](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/netfoundryinc.application-ziti-private-edge?exp=ubp8&tab=Overview) |
| | [VMware SD-WAN by Velocloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vmware-inc.vmware_sdwan_edge_zones?exp=ubp8&tab=Overview) | | | [Versa Networks](https://aka.ms/versa) | ### SIM & RAN
-SIM partners provide wireless authentication technologies and embedded cellular modules. RAN partners deliver various hardware equipment (such as radios and antennas) necessary to deploy private mobile networks. The following partners have completed interop tests with Azure private MEC. Please contact the partner or your Microsoft representative for more details:
+SIM partners provide wireless authentication technologies and embedded cellular modules. RAN partners deliver various hardware equipment (such as radios and antennas) necessary to deploy private mobile networks. The following partners have completed interop tests with Azure private MEC. Contact the partner or your Microsoft representative for more details:
|SIM |RAN (hardware)| |||
SIM partners provide wireless authentication technologies and embedded cellular
| Idemia | ASOCS | | JCI | Commscope | | Transatel | Compal |
-| | Ericsson |
| | Foxconn | | | Fujitsu | | | Inventec |
Our application ISV partners include:
## Next steps - To partner with Microsoft and deploy Azure private MEC solutions: - [Join the Azure private MEC Managed Solution Providers program](https://aka.ms/privateMECmsp) to get started if you're an operator and system integrator managed service providers.
- - [Contact the Azure private MEC team](https://aka.ms/privateMEC_ISV) if you are a Platform partner, such as a network function or hardware vendor.
+ - [Contact the Azure private MEC team](https://aka.ms/privateMEC_ISV) if you're a Platform partner, such as a network function or hardware vendor.
- Onboard your applications to the Azure Marketplace, and then [pre-register for the forthcoming Azure private MEC ISV or developer program](https://aka.ms/privateMECpartnerprogram).
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
Get started quickly with the [SAP on Azure Deployment Automation Framework](depl
- An [Azure CLI](/cli/azure/install-azure-cli) installation on your local computer. - An [Azure PowerShell](/powershell/azure/install-az-ps#update-the-azure-powershell-module) installation on your local computer. - A Service Principal to use for the control plane deployment
+- Ability to create an Azure DevOps project if you want to use Azure DevOps for deployment.
Some of the prerequisites may already be installed in your deployment environment. Both Cloud Shell and the deployer have Terraform and the Azure CLI installed.
-## Clone the repository
-Clone the repository and prepare the execution environment by using the following steps:
+## Use SAP on Azure Deployment Automation Framework from Azure DevOps Services
+
+Azure DevOps streamlines the deployment process by providing pipelines that you can run to perform both the infrastructure deployment and the configuration and SAP installation activities.
+You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application.
+
+### Sign up for Azure DevOps Services
+
+To use Azure DevOps Services, you need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign in_ or create a new account.
+
+Follow the guidance in [Configure Azure DevOps for SDAF](configure-devops.md) to configure Azure DevOps for the SAP on Azure Deployment Automation Framework.
+
+## Creating the SAP on Azure Deployment Automation Framework environment without Azure DevOps
+
+You can run the SAP on Azure Deployment Automation Framework from a virtual machine in Azure. The following steps describe how to create the environment.
+
+Clone the repository and prepare the execution environment by using the following steps on a Linux virtual machine in Azure:
+
+Ensure the virtual machine has the following prerequisites installed:
+ - git
+
+Ensure that the virtual machine is using either a system-assigned or user-assigned identity with permissions on the subscription to create resources.
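A hedged sketch of meeting this prerequisite with a system-assigned identity; the VM name, resource group, and scope are illustrative assumptions:

```azurecli
# Enable a system-assigned managed identity on the deployment VM.
az vm identity assign --resource-group myResourceGroup --name deployer-vm

# Grant the identity permissions to create resources in the subscription.
az role assignment create \
    --assignee <principal-id-of-the-vm-identity> \
    --role Contributor \
    --scope "/subscriptions/<subscription-id>"
```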
+ - Create a directory called `Azure_SAP_Automated_Deployment` for your automation framework deployment. ```bash mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
-git clone https://github.com/Azure/sap-automation-bootstrap.git config
git clone https://github.com/Azure/sap-automation.git sap-automation

git clone https://github.com/Azure/sap-automation-samples.git samples
+git clone https://github.com/Azure/sap-automation-bootstrap.git config
+
+cd sap-automation/deploy/scripts
+
+./configure_deployer.sh
``` + > [!TIP] > The deployer already clones the required repositories.
sap Proximity Placement Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/proximity-placement-scenarios.md
Title: Azure proximity placement groups for SAP applications | Microsoft Docs
-description: Describes SAP deployment scenarios with Azure proximity placement groups
+ Title: Configuration options for optimal network latency with SAP applications | Microsoft Docs
+description: Describes SAP deployment scenarios to achieve optimal network latency
-tags: azure-resource-manager
- Last updated 12/18/2022 -
-# Azure proximity placement groups for optimal network latency with SAP applications
+# Configuration options for optimal network latency with SAP applications
> [!IMPORTANT]
-> In November 2021 we made significant changes in the way how proximity placement groups should be used with SAP workload in zonal deployments.
+> In November 2021 we made significant changes in the way how proximity placement groups should be used with SAP workload in zonal deployments.
-SAP applications based on the SAP NetWeaver or SAP S/4HANA architecture are sensitive to network latency between the SAP application tier and the SAP database tier. This sensitivity is the result of most of the business logic running in the application layer. Because the SAP application layer runs the business logic, it'ssues queries to the database tier at a high frequency, at a rate of thousands or tens of thousands per second. In most cases, the nature of these queries is simple. They can often be run on the database tier in 500 microseconds or less.
+SAP applications based on the SAP NetWeaver or SAP S/4HANA architecture are sensitive to network latency between the SAP application tier and the SAP database tier. This sensitivity is the result of most of the business logic running in the application layer. Because the SAP application layer runs the business logic, it issues queries to the database tier at a high frequency, at a rate of thousands or tens of thousands per second. In most cases, the nature of these queries is simple. They can often be run on the database tier in 500 microseconds or less.
The time spent on the network to send such a query from the application tier to the database tier and receive the result sent back has a major impact on the time it takes to run business processes. This sensitivity to network latency is why you might want to achieve certain minimum network latency in SAP deployment projects. See [SAP Note #1100926 - FAQ: Network performance](https://launchpad.support.sap.com/#/notes/1100926/E) for guidelines on how to classify the network latency. In many Azure regions, the number of datacenters has grown. At the same time, customers, especially for high-end SAP systems, are using more special VM families like M- or Mv2 family, or in rare cases HANA Large Instances. These Azure virtual machine types aren't always available in each of the datacenters that collect into an Azure region. These facts can create opportunities to optimize network latency between the SAP application layer and the SAP DBMS layer.
-To give you a possibility to optimize network latency, Azure offers [proximity placement groups](../../virtual-machines/co-location.md). Proximity placement groups can be used to force grouping of different VM types under a single network spine that provides sufficient low network latency between these different VM types weren't yet provided so far. In the process of deploying the first VM into such a proximity placement group, the VM gets bound to a specific network spine. As all the other VMs that are going to be deployed into the same proximity placement group, those VMs get grouped under the same network spine. As appealing as this prospect sounds, the usage of the construct introduces some restrictions and pitfalls as well:
+Azure provides different deployment options for SAP workloads, enabling you to optimize network latency. Each option is described in detail in the following sections:
+
+- [Proximity Placement Groups](#proximity-placement-groups)
+- [Virtual Machine Scale Set with Flexible Orchestration](#virtual-machine-scale-set-with-flexible-orchestration)
+
+## Proximity Placement Groups
+
+Proximity placement groups enable the grouping of different VM types under a single network spine, ensuring low network latency between them. When the first VM is deployed into a proximity placement group, that VM gets bound to a specific network spine. All the other VMs deployed into the same proximity placement group get grouped under the same network spine. As appealing as this prospect sounds, the usage of the construct introduces some restrictions and pitfalls as well:
- You can't assume that all Azure VM types are available in every and all Azure datacenters or under each and every network spine. As a result, the combination of different VM types within one proximity placement group can be severely restricted. These restrictions occur because the host hardware that is needed to run a certain VM type might not be present in the datacenter or under the network spine to which the proximity placement group was assigned - As you resize parts of the VMs that are within one proximity placement group, you can't automatically assume that in all cases the new VM type is available in the same datacenter or under the network spine the proximity placement group got assigned to
To give you a possibility to optimize network latency, Azure offers [proximity p
> - Only on granularity of a single SAP system and not for a whole system landscape or a complete SAP landscape > - In a way to keep the different VM types and the number of VMs within a proximity placement group to a minimum - The scenarios where you used proximity placement groups so far were: - Deploying SAP workload with availability sets. Where the SAP database tier, the SAP application tier and ASCS/SCS VMs were grouped in three different availability sets. In such a case, you wanted to make sure that the availability sets weren't spread across the complete Azure region since this could, dependent on the Azure region, result in network latency that could impact SAP workload negatively - You wanted to deploy the critical resources of your SAP workload across different Availability Zones and on the other hand wanted to make sure that the VMs of the application tier in each of the zones would be spread across different fault domains by using availability sets. In this case, as later described in the document, proximity placement groups are the glue needed - You used proximity placement groups to group VMs together to achieve optimal network latency between the services hosted in the VMs
-As for deployment scenario #1, in many regions, especially regions without Availability Zones and most regions with Availability Zones, the network latency independent on where the VMs land is acceptable. Though there are some regions of Azure that can't provide a sufficiently good experience without collocating the three different availability sets without the usage of proximity placement groups.
-As of the deployment scenario #2, we are going to recommend a different way of using proximity placement groups in the following sections of this document.
+As for deployment scenario #1, in many regions, especially regions without Availability Zones and most regions with Availability Zones, the network latency is acceptable independent of where the VMs land. Though some Azure regions can't provide a sufficiently good experience unless the three different availability sets are collocated by using proximity placement groups.
+As of the deployment scenario #2, we're going to recommend a different way of using proximity placement groups in the following sections of this document.
+### What are proximity placement groups?
-## What are proximity placement groups?
An Azure proximity placement group is a logical construct. When a proximity placement group is defined, it's bound to an Azure region and an Azure resource group. When VMs are deployed, a proximity placement group is referenced by: - The first Azure VM deployed under a network spine with many Azure compute units and low network latency. Such a network spine often matches a single Azure datacenter. You can think of the first virtual machine as a "scope VM" that is deployed into a compute scale unit based on Azure allocation algorithms that are eventually combined with deployment parameters.
To reduce risk of the above, it's recommended to use the intent option when crea
A single [Azure resource group](../../azure-resource-manager/management/manage-resources-portal.md) can have multiple proximity placement groups assigned to it. But a proximity placement group can be assigned to only one Azure resource group.
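A hedged sketch of creating a proximity placement group with the intent option; resource names, region, and VM sizes are assumptions:

```azurecli
# Declare the VM sizes you intend to deploy so that allocation selects
# a network spine that can host all of them.
az ppg create --resource-group myResourceGroup \
    --name sap-sid1-ppg \
    --location westeurope \
    --intent-vm-sizes Standard_M64s Standard_E16ds_v5
```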
+### Proximity placement groups with SAP systems that use only Azure VMs
+
+In this section, we're going through the deployment architectures used so far and new recommendations
-## Proximity placement groups with SAP systems that use only Azure VMs
-In this section, we are going through the deployment architectures used so far and new recommendations
+#### Proximity placement groups with zonal deployments
-### Proximity placement groups with zonal deployments
For deployments that don't use HANA Large Instances, it's important to provide a reasonably low network latency between the SAP application tier and the DBMS tier. To enable such a reasonably low network latency for a limited set of scenarios, an Azure proximity placement group can be defined for such an SAP system.
-Avoid bundling several SAP production or non-production systems into a single proximity placement group. Avoid bundles of SAP systems because the more systems you group in a proximity placement group, the higher the chances:
+Avoid bundling several SAP production or nonproduction systems into a single proximity placement group. Avoid bundles of SAP systems because the more systems you group in a proximity placement group, the higher the chances:
-- That you require a VM type that is not available under the network spine into which the proximity placement group was assigned to.-- That resources of non-mainstream VMs, like M-Series VMs, could eventually be unfulfilled when you need to expand the number of VMs into a proximity placement group over time.
+- That you require a VM type that isn't available under the network spine into which the proximity placement group was assigned to.
+- That resources of nonmainstream VMs, like M-Series VMs, could eventually be unfulfilled when you need to expand the number of VMs into a proximity placement group over time.
The proximity placement group usage that we recommended so far, looks like in this graphic
The proximity placement group usage that we recommended so far, looks like in th
You created a proximity placement group (PPG) in each of the two Availability Zones you deployed your SAP system into. All the VMs of a particular zone are part of the individual proximity placement group of that particular zone. You started in each zone with deploying the DBMS VM to scope the PPG and then deployed the ASCS VM into the same zone and PPG. In a third step, you created an Azure availability set, assigned the availability set to the scoped PPG and deployed the SAP application layer into it. The advantage of this configuration was that all the components were nicely aligned underneath the same network spine. The large disadvantage is that your flexibility in resizing virtual machines can be limited. - Based on many improvements deployed by Microsoft into the Azure regions to reduce network latency within an Azure Availability Zone, the new deployment guidance for zonal deployments, looks like: ![New Proximity placement groups with zones](./media/sap-proximity-placement-scenarios/vm-ppg-zone.png)
-The difference to the recommendation given so far is that the database VMs in the two zones are no more a part of the proximity placement groups. The proximity placement groups per zone are now scoped with the deployment of the VM running the SAP ASCS/SCS instances. This also means that for the regions where Availability Zones are collected by multiple datacenters, the ASCS/SCS instance, and the application tier could run under one network spine and the database VMs could run under another network spine. Though with the network improvements made, the network latency between the SAP application tier and the DBMS tier still should be sufficient for sufficiently good performance and throughput. The advantage of this new configuration is that you have more flexibility in resizing VMs or moving to new VM types with either the DBMS layer or/and the application layer of the SAP system.
+The difference to the recommendation given so far is that the database VMs in the two zones are no longer part of the proximity placement groups. The proximity placement groups per zone are now scoped with the deployment of the VM running the SAP ASCS/SCS instances. This also means that for the regions where Availability Zones are collected by multiple datacenters, the ASCS/SCS instance and the application tier could run under one network spine and the database VMs could run under another network spine. With the network improvements made, the network latency between the SAP application tier and the DBMS tier should still be low enough for good performance and throughput. The advantage of this new configuration is that you have more flexibility in resizing VMs or moving to new VM types with either the DBMS layer and/or the application layer of the SAP system.
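A hedged sketch of this pattern for one zone (all names are illustrative, and the proximity placement group is assumed to exist already): the ASCS/SCS VM scopes the group, the application tier availability set is attached to it, and the database VM is deployed with only the zone constraint:

```azurecli
# Deploy the ASCS/SCS VM first to scope the proximity placement group.
az vm create --resource-group myResourceGroup --name sap-ascs-vm \
    --image Ubuntu2204 --size Standard_E4ds_v5 \
    --zone 1 --ppg sap-zone1-ppg --generate-ssh-keys

# Attach the application tier availability set to the same group.
az vm availability-set create --resource-group myResourceGroup \
    --name sap-app-avset --ppg sap-zone1-ppg

# Database VMs stay outside the group; only the zone constraint applies.
az vm create --resource-group myResourceGroup --name sap-db-vm \
    --image Ubuntu2204 --size Standard_M64s --zone 1 --generate-ssh-keys
```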
For the special case of using Azure NetApp Files (ANF) for the DBMS environment and the ANF related new functionality of [Azure NetApp Files application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md) and its necessity for proximity placement groups, check the document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md).
+#### Proximity placement groups with availability set deployments
-### Proximity placement groups with availability set deployments
In this case, the purpose is to use proximity placement groups to collocate the VMs that are deployed through different availability sets. In this usage scenario, you aren't using a controlled deployment across different Availability Zones in a region. Instead you want to deploy the SAP system by using availability sets. As a result, you have at least an availability set for the DBMS VMs, ASCS/SCS VMs, and the application tier VMs. Since you can't specify at deployment time of a VM an availability set AND an Availability Zone, you can't control where the VMs in the different availability sets are going to be allocated. This could result in some Azure regions that the network latency between different VMs, still could be too high to give a sufficiently good performance experience. So the resulting architecture would look like: - ![Proximity placement groups with AvSets](./media/sap-proximity-placement-scenarios/vm-ppg-avsets.png)
-In this graphic, a single proximity placement group would be assigned to a single SAP system. This PPG gets assigned to the three availability sets. The proximity placement group is then scoped by deploying the first database tier VMs into the DBMS availability set. This architecture recommendation will collocate all VMs under the same network spine. It's introducing the restrictions mentioned earlier in this article. Therefore, the proximity placement group architecture should be used sparsely.
+In this graphic, a single proximity placement group would be assigned to a single SAP system. This PPG gets assigned to the three availability sets. The proximity placement group is then scoped by deploying the first database tier VMs into the DBMS availability set. This architecture recommendation collocates all VMs under the same network spine. It introduces the restrictions mentioned earlier in this article. Therefore, the proximity placement group architecture should be used sparingly.
+### Proximity placement groups and HANA Large Instances
If some of your SAP systems rely on [HANA Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) for the database layer, you can experience significant improvements in network latency between the HANA Large Instances unit and Azure VMs when you're using HANA Large Instances units that are deployed in [Revision 4 rows or stamps](../../virtual-machines/workloads/sap/hana-network-architecture.md#networking-architecture-for-hana-large-instance). One improvement is that HANA Large Instances units are deployed with a proximity placement group. You can use that proximity placement group to deploy your application layer VMs. As a result, those VMs are deployed in the same datacenter that hosts your HANA Large Instances unit. To determine whether your HANA Large Instances unit is deployed in a Revision 4 stamp or row, check the article [Azure HANA Large Instances control through Azure portal](../../virtual-machines/workloads/sap/hana-li-portal.md#look-at-attributes-of-single-hli-unit). In the attributes overview of your HANA Large Instances unit, you can also determine the name of the proximity placement group, because it was created when your HANA Large Instances unit was deployed. The name that appears in the attributes overview is the name of the proximity placement group that you should deploy your application layer VMs into.
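As a minimal sketch, assuming a hypothetical tenant resource group (`hlitenantrg`) and a proximity placement group name (`hliunitppg`) taken from that attributes overview, the lookup and an application layer VM deployment could look like this:

```azurepowershell-interactive
# Hypothetical names: look up the proximity placement group that was created
# together with the HANA Large Instances unit, then deploy an application
# layer VM into it so that the VM lands in the same datacenter as the unit.
$ppg = Get-AzProximityPlacementGroup -ResourceGroupName "hlitenantrg" -Name "hliunitppg"
New-AzVm -ResourceGroupName "hlitenantrg" -Name "appinstance0" -Location $ppg.Location -OpenPorts 80,3389 -ProximityPlacementGroup $ppg.Name -Size "Standard_E16s_v4"
```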
+As compared to SAP systems that use only Azure virtual machines, when you use HANA Large Instances, you have less flexibility in deciding how many [Azure resource groups](../../azure-resource-manager/management/manage-resources-portal.md) to use. All the HANA Large Instances units of a [HANA Large Instances tenant](../../virtual-machines/workloads/sap/hana-know-terms.md) are grouped in a single resource group, as described in [this article](../../virtual-machines/workloads/sap/hana-li-portal.md#display-of-hana-large-instance-units-in-the-azure-portal). Unless you deploy into different tenants to separate, for example, production and nonproduction systems or other systems, all your HANA Large Instances units will be deployed in one HANA Large Instances tenant. This tenant has a one-to-one relationship with a resource group. But a separate proximity placement group is defined for each of the single units.
As a result, the relationships among Azure resource groups and proximity placement groups for a single tenant will be as shown here:

![Proximity placement groups and HANA Large Instances](./media/sap-proximity-placement-scenarios/ppg-for-hana-large-instance-units.png)
+### Example of deployment with proximity placement groups
+ Following are some PowerShell commands that you can use to deploy your VMs with Azure proximity placement groups. The first step, after you sign in to [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/), is to check whether you're in the Azure subscription that you want to use for the deployment:
+```azurepowershell-interactive
Get-AzContext
+# If you need to change to a different subscription, you can do so by running this command:
Set-AzContext -Subscription "PPG test subscription"
+# Create a new Azure resource group by running this command:
New-AzResourceGroup -Name "ppgexercise" -Location "westus2"
+# Create the new proximity placement group by running this command:
New-AzProximityPlacementGroup -ResourceGroupName "ppgexercise" -Name "collocate" -Location "westus2"
+# Deploy your first VM into the proximity placement group by using a command like this one:
New-AzVm -ResourceGroupName "ppgexercise" -Name "ppgscopevm" -Location "westus2" -OpenPorts 80,3389 -ProximityPlacementGroup "collocate" -Size "Standard_E16s_v4"
+```
The preceding command deploys a Windows-based VM. After this VM deployment succeeds, the network spine scope of the proximity placement group is defined within the Azure region. All subsequent VM deployments that reference the proximity placement group, as shown in the preceding command, will be deployed under the same network spine, as long as the VM type can be hosted on hardware placed under that network spine, and capacity for that VM type is available.
+### Combine availability sets and Availability Zones with proximity placement groups
+
+One of the problems with using Availability Zones for SAP system deployments is that you can't deploy the SAP application tier by using availability sets within the specific Availability Zone. You want the SAP application tier to be deployed in the same zones as the SAP ASCS/SCS VMs. Referencing both an Availability Zone and an availability set when deploying a single VM isn't possible so far. But if you deploy a VM by specifying only an Availability Zone, you lose the ability to make sure the application layer VMs are spread across different update and fault domains.
By using proximity placement groups, you can bypass this restriction. Here's the deployment sequence:
Instead of deploying the first VM as demonstrated in the previous section, you reference an Availability Zone and the proximity placement group when you deploy the VM:
+```azurepowershell-interactive
New-AzVm -ResourceGroupName "ppgexercise" -Name "centralserviceszone1" -Location "westus2" -OpenPorts 80,3389 -Zone "1" -ProximityPlacementGroup "collocate" -Size "Standard_E8s_v4"
+```
A successful deployment of this virtual machine would host the ASCS/SCS instance of the SAP system in one Availability Zone. The scope of the proximity placement group is fixed to one of the network spines in the Availability Zone you defined.
In the next step, you need to create the availability sets you want to use for the application tier of your SAP system.
Define and create the proximity placement group. The command for creating the availability set requires an additional reference to the proximity placement group ID (not the name). You can get the ID of the proximity placement group by using this command:
+```azurepowershell-interactive
Get-AzProximityPlacementGroup -ResourceGroupName "ppgexercise" -Name "collocate"
+```
When you create the availability set, you need to consider additional parameters when you're using managed disks (default unless specified otherwise) and proximity placement groups:
+```azurepowershell-interactive
New-AzAvailabilitySet -ResourceGroupName "ppgexercise" -Name "ppgavset" -Location "westus2" -ProximityPlacementGroupId "/subscriptions/my very long ppg id string" -sku "aligned" -PlatformUpdateDomainCount 3 -PlatformFaultDomainCount 2
+```
Ideally, you should use three fault domains. But the number of supported fault domains can vary from region to region. In this case, the maximum number of fault domains possible for the specific region is two. To deploy your application layer VMs, you need to add a reference to your availability set name and the proximity placement group name, as shown here:
+```azurepowershell-interactive
New-AzVm -ResourceGroupName "ppgexercise" -Name "appinstance1" -Location "westus2" -OpenPorts 80,3389 -AvailabilitySetName "ppgavset" -ProximityPlacementGroup "collocate" -Size "Standard_E16s_v4"
+```
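As noted above, the number of fault domains an availability set can use varies by region. Before you choose a value for `-PlatformFaultDomainCount`, you can query the regional maximum for `Aligned` (managed disk) availability sets; a small sketch, assuming the Az.Compute module is installed:

```azurepowershell-interactive
# Query the capabilities (including MaximumPlatformFaultDomainCount) of
# 'Aligned' availability sets in the target region.
(Get-AzComputeResourceSku | Where-Object {
    $_.ResourceType -eq "availabilitySets" -and
    $_.Name -eq "Aligned" -and
    $_.Locations.Contains("westus2")
}).Capabilities
```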
The result of this deployment is:

- A Central Services for your SAP system that's located in a specific Availability Zone or Availability Zones.
- An SAP application layer that's located through availability sets in the same network spine as the SAP Central Services (ASCS/SCS) VM or VMs.

> [!NOTE]
> Because you deploy one set of DBMS and ASCS/SCS VMs into one zone and the second set into another zone to create a high availability configuration, you'll need a different proximity placement group for each of the zones. The same is true for any availability set that you use.
+### Change proximity placement group configurations of an existing system
+If you implemented proximity placement groups following the recommendations given so far, and you want to adjust to the new configuration, you can do so with the methods described in these articles:
+- [Deploy VMs to proximity placement groups using Azure CLI](../../virtual-machines/linux/proximity-placement-groups.md).
+- [Deploy VMs to proximity placement groups using PowerShell](../../virtual-machines/windows/proximity-placement-groups.md).
You can also use these commands when you're getting allocation errors because you can't move to a new VM type with an existing VM in the proximity placement group.
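As a sketch of such a change, and assuming the VM and proximity placement group names from the earlier examples, moving an existing VM into a proximity placement group could look like this (the VM must be deallocated first, and reallocation can fail if no capacity for the VM type exists under the target network spine):

```azurepowershell-interactive
# Deallocate the VM, attach it to the proximity placement group, and restart it.
$ppg = Get-AzProximityPlacementGroup -ResourceGroupName "ppgexercise" -Name "collocate"
Stop-AzVM -ResourceGroupName "ppgexercise" -Name "appinstance1" -Force
$vm = Get-AzVM -ResourceGroupName "ppgexercise" -Name "appinstance1"
Update-AzVM -ResourceGroupName "ppgexercise" -VM $vm -ProximityPlacementGroupId $ppg.Id
Start-AzVM -ResourceGroupName "ppgexercise" -Name "appinstance1"
```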
+## Virtual Machine Scale Set with Flexible orchestration
+
+To avoid the limitations associated with proximity placement groups, it's advised to deploy SAP workloads across availability zones using a flexible scale set with FD=1. This deployment strategy ensures that VMs deployed in each zone aren't restricted to a single datacenter or network spine, and that all SAP system components, such as databases, ASCS/ERS, and the application tier, are scoped within a zone. With all SAP system components being scoped at the zonal level, the network latency between the different components of a single SAP system must be sufficient to ensure satisfactory performance and throughput. The key benefit of this new deployment option with a flexible scale set with FD=1 is that it provides greater flexibility in resizing VMs or switching to new VM types for all layers of the SAP system. Also, the scale set allocates VMs across multiple fault domains within a single zone, which is ideal for running multiple VMs of the application tier in each zone. For more information, see the [virtual machine scale set for SAP workload](./virtual-machine-scale-set-sap-deployment-guide.md) guide.
+
+![SAP workload deployment in flexible scale set](./media/sap-proximity-placement-scenarios/sap-deployment-flexible-scale-set.png)
+
+In a nonproduction or non-HA environment, it's possible to deploy all SAP system components, including the database, ASCS, and application tier, within a single zone using a flexible scale set with FD=1.
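A minimal sketch of such a deployment, with hypothetical names and reusing the resource group from the earlier examples, could create the flexible scale set with FD=1 across three zones and then place a VM into one of the zones:

```azurepowershell-interactive
# Create a flexible scale set with platformFaultDomainCount=1 that spans
# three availability zones, then deploy a VM into zone 1 as part of it.
$vmssConfig = New-AzVmssConfig -Location "westus2" -OrchestrationMode "Flexible" -PlatformFaultDomainCount 1 -Zone @("1","2","3")
$vmss = New-AzVmss -ResourceGroupName "ppgexercise" -VMScaleSetName "sapworkloadvmss" -VirtualMachineScaleSet $vmssConfig
New-AzVm -ResourceGroupName "ppgexercise" -Name "ascszone1" -Location "westus2" -Zone "1" -VmssId $vmss.Id -Size "Standard_E8s_v4"
```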
## Next steps

Check out the documentation:

- [SAP workloads on Azure: planning and deployment checklist](./deployment-checklist.md)
sap Sap High Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-architecture-scenarios.md
Title: Azure VMs HA architecture and scenarios for SAP NetWeaver | Microsoft Docs description: High-availability architecture and scenarios for SAP NetWeaver on Azure Virtual Machines
vm-windows Previously updated : 12/16/2022 Last updated : 05/26/2023

# High-availability architecture and scenarios for SAP NetWeaver
[Logo_Linux]:media/virtual-machines-shared-sap-shared/Linux.png
[Logo_Windows]:media/virtual-machines-shared-sap-shared/Windows.png
[sap-ascs-ha-multi-sid-wsfc-file-share]:sap-ascs-ha-multi-sid-wsfc-file-share.md
[sap-ascs-ha-multi-sid-wsfc-shared-disk]:sap-ascs-ha-multi-sid-wsfc-shared-disk.md
[sap-higher-availability]:sap-higher-availability-architecture-scenarios.md
[sap-ha-partner-information]:https://scn.sap.com/docs/DOC-8541
[azure-sla]:https://azure.microsoft.com/support/legal/sla/
[azure-storage-redundancy]:/azure/storage/common/storage-redundancy
[azure-storage-managed-disks-overview]:../../virtual-machines/managed-disks-overview.md
## Terminology definitions

**High availability**: Refers to a set of technologies that minimize IT disruptions by providing business continuity of IT services through redundant, fault-tolerant, or failover-protected components inside the *same* data center. In our case, the data center resides within one Azure region.

**Disaster recovery**: Also refers to the minimizing of IT services disruption and their recovery, but across *various* data centers that might be hundreds of miles away from one another. In our case, the data centers might reside in various Azure regions within the same geopolitical region or in locations as established by you as a customer.

## Overview of high availability
+SAP high availability in Azure can be separated into three types:
+* **Azure infrastructure high availability**:
+
+ For example, high availability can include compute (VMs), network, or storage and its benefits for increasing the availability of SAP applications.
+* **Utilizing Azure infrastructure VM restart to protect SAP applications**:
+
+ If you decide not to use functionalities such as Windows Server Failover Clustering (WSFC) or Pacemaker on Linux, Azure VM restart is utilized. It restores functionality in the SAP systems if there's any planned or unplanned downtime of the Azure physical server infrastructure and the overall underlying Azure platform.
+* **SAP application high availability**:
+ To achieve full SAP system high availability, you must protect all critical SAP system components. For example:
+
+ * Redundant SAP application servers.
+ * Unique components. An example might be a single point of failure (SPOF) component, such as an SAP ASCS/SCS instance or a database management system (DBMS).
SAP high availability in Azure differs from SAP high availability in an on-premises physical or virtual environment.
+There's no sapinst-integrated SAP high-availability configuration for Linux as there is for Windows. For information about SAP high availability on-premises for Linux, see [High availability partner information][sap-ha-partner-information].
## Azure infrastructure high availability

### SLA for single-instance virtual machines
+There's currently a single-VM SLA of 99.9% with premium storage. To get an idea about what the availability of a single VM might be, you can build the product of the various available [Azure Service Level Agreements][azure-sla].
The basis for the calculation is 30 days per month, or 43,200 minutes. For example, a 0.05% downtime corresponds to 21.6 minutes. As usual, the availability of the various services is calculated in the following way:
+(Availability Service #1/100) x (Availability Service #2/100) x (Availability Service #3/100) x …
For example:
+(99.95/100) x (99.9/100) x (99.9/100) = 0.9975 or an overall availability of 99.75%.
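As a quick sanity check, the same arithmetic in PowerShell, using the values from the example above:

```azurepowershell-interactive
# Composite availability and the resulting potential downtime per 30-day month
$availability = (99.95/100) * (99.9/100) * (99.9/100)   # ~0.9975
$downtimeMinutes = (1 - $availability) * 43200          # ~108 minutes
"Overall availability: {0:P2}; potential downtime: {1:N0} minutes/month" -f $availability, $downtimeMinutes
```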
### Multiple instances of virtual machines in the same availability set
+
+For all virtual machines that have two or more instances deployed in the same *availability set*, we guarantee that you have virtual machine connectivity to at least one instance at least 99.95% of the time.
When two or more VMs are part of the same availability set, each virtual machine in the availability set is assigned an *update domain* and a *fault domain* by the underlying Azure platform.
+* **Update domains** guarantee that multiple VMs aren't rebooted at the same time during the planned maintenance of an Azure infrastructure. Only one VM is rebooted at a time.
+* **Fault domains** guarantee that VMs are deployed on hardware components that don't share a common power source and network switch. When servers, a network switch, or a power source undergo an unplanned downtime, only one VM is affected.
+For more information, see [manage the availability of virtual machines in Azure using availability set](../../virtual-machines/availability-set-overview.md).
+### Azure Availability Zones
+Azure is in the process of rolling out a concept of [Azure Availability Zones](../../availability-zones/az-overview.md) throughout different [Azure Regions](https://azure.microsoft.com/global-infrastructure/regions/). In Azure regions where Availability Zones are offered, the regions have multiple data centers, which are independent in their supply of power, cooling, and network. The reason for offering different zones within a single Azure region is to enable you to deploy applications across two or three of the Availability Zones offered. Assuming that issues in power sources and/or network affect one Availability Zone infrastructure only, your application deployment within an Azure region is still fully functional, eventually with reduced capacity, because some VMs in one zone might be lost. But VMs in the other two zones are still up and running. The Azure regions that offer zones are listed in [Azure Availability Zones](../../availability-zones/az-overview.md).
+When using Availability Zones, there are some things to consider:
+* You can't deploy Azure availability sets within an Availability Zone. The only way to combine availability sets and Availability Zones is with [proximity placement groups](../../virtual-machines/co-location.md). For more information, see the article [Combine availability sets and availability zones with proximity placement groups](./proximity-placement-scenarios.md#combine-availability-sets-and-availability-zones-with-proximity-placement-groups).
+* You can't use the [Basic Load Balancer](../../load-balancer/load-balancer-overview.md) to create failover cluster solutions based on Windows Failover Cluster Services or Linux Pacemaker. Instead, you need to use the [Azure Standard Load Balancer SKU](../../load-balancer/load-balancer-standard-availability-zones.md); a sketch follows this list.
+* Azure Availability Zones don't give any guarantees of a certain distance between the different zones within one region.
+* The network latency between different Azure Availability Zones can differ from Azure region to Azure region. There are cases where you as a customer can reasonably run the SAP application layer deployed across different zones, because the network latency from one zone to the active DBMS VM is still acceptable from a business process impact. But there are also customer scenarios where the latency between the active DBMS VM in one zone and an SAP application instance in a VM in another zone is too intrusive and not acceptable for the SAP business processes. As a result, the deployment architectures need to be different, with an active/active architecture for the application, or an active/passive architecture if latency is too high.
+* Using [Azure managed disks](https://azure.microsoft.com/services/managed-disks/) is mandatory for deploying into Azure Availability Zones.
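Here's the sketch referenced above, with hypothetical names, of an internal Standard SKU load balancer for a clustered ASCS/SCS instance; the probe port and IP addresses are placeholders rather than prescribed values:

```azurepowershell-interactive
# Hypothetical names: internal Standard SKU load balancer for an ASCS/SCS
# cluster spanning Availability Zones. Standard SKU internal frontends are
# zone redundant by default; the Basic SKU can't be used for zonal clusters.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "sapharg" -Name "sapvnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "sapsubnet"
$fe     = New-AzLoadBalancerFrontendIpConfig -Name "ascsfe" -PrivateIpAddress "10.0.0.7" -SubnetId $subnet.Id
$be     = New-AzLoadBalancerBackendAddressPoolConfig -Name "ascsbe"
$probe  = New-AzLoadBalancerProbeConfig -Name "ascshp" -Protocol Tcp -Port 62000 -IntervalInSeconds 5 -ProbeCount 2
$rule   = New-AzLoadBalancerRuleConfig -Name "ascsrule" -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -Protocol All -FrontendPort 0 -BackendPort 0 -EnableFloatingIP -IdleTimeoutInMinutes 30
New-AzLoadBalancer -ResourceGroupName "sapharg" -Name "ascsilb" -Location "westus2" -Sku "Standard" -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -LoadBalancingRule $rule
```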
+### Virtual Machine Scale Set with Flexible Orchestration
+In Azure, a Virtual Machine Scale Set with Flexible orchestration offers a means of achieving high availability for SAP workloads, much like other deployment frameworks such as availability sets and availability zones. With a flexible scale set, VMs can be distributed across availability zones and fault domains, making it a suitable option for deploying highly available SAP workloads.
+A virtual machine scale set with flexible orchestration offers the flexibility to create the scale set within a region or to span it across availability zones. When you create a flexible scale set within a region with platformFaultDomainCount>1 (FD>1), the VMs deployed in the scale set are distributed across the specified number of fault domains in the same region. Creating the flexible scale set across availability zones with platformFaultDomainCount=1 (FD=1) distributes the VMs across different zones, and the scale set also [distributes VMs across different fault domains within each zone on a best effort basis](../../virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md). **For SAP workloads, only flexible scale sets with FD=1 are supported.**
+The advantage of using flexible scale sets with FD=1 for cross-zonal deployment, instead of a traditional availability zone deployment, is that the VMs deployed with the scale set are distributed across different fault domains within the zone in a best-effort manner. To avoid the limitations associated with using a [proximity placement group](./proximity-placement-scenarios.md#combine-availability-sets-and-availability-zones-with-proximity-placement-groups) to keep VMs under each network spine, it's advised to deploy SAP workloads across availability zones using a flexible scale set with FD=1. This deployment strategy ensures that VMs deployed in each zone aren't restricted to a single datacenter or network spine, and that all SAP system components, such as databases, ASCS/ERS, and the application tier, are scoped at the zonal level.
+
+So, for new SAP workload deployments across availability zones, we advise using a flexible scale set with FD=1. For more information, see the [virtual machine scale set for SAP workload](./virtual-machine-scale-set-sap-deployment-guide.md) guide.
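As a quick check after deployment, and reusing the hypothetical scale set name from the earlier sketch, you can confirm the FD=1 setting and the zone span:

```azurepowershell-interactive
# Confirm the fault domain count and zone span of the flexible scale set
$vmss = Get-AzVmss -ResourceGroupName "ppgexercise" -VMScaleSetName "sapworkloadvmss"
$vmss.PlatformFaultDomainCount   # expect 1 for SAP workloads
$vmss.Zones                      # expect the zones you specified, for example 1, 2, 3
```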
### Planned and unplanned maintenance of virtual machines

Two types of Azure platform events can affect the availability of your virtual machines:

* **Planned maintenance** events are periodic updates made by Microsoft to the underlying Azure platform. The updates improve overall reliability, performance, and security of the platform infrastructure that your virtual machines run on.
* **Unplanned maintenance** events occur when the hardware or physical infrastructure underlying your virtual machine has failed in some way. It might include local network failures, local disk failures, or other rack level failures. When such a failure is detected, the Azure platform automatically migrates your virtual machine from the unhealthy physical server that hosts your virtual machine to a healthy physical server. Such events are rare, but they might also cause your virtual machine to reboot.
+For more information, see [maintenance of virtual machines in Azure](../../virtual-machines/maintenance-and-updates.md).
### Azure Storage redundancy

The data in your storage account is always replicated to ensure durability and high availability, meeting the Azure Storage SLA even in the face of transient hardware failures. Because Azure Storage keeps three images of the data by default, the use of RAID 5 or RAID 1 across multiple Azure disks is unnecessary.
For more information, see [Azure Storage replication][azure-storage-redundancy].

### Azure Managed Disks
+
+Managed Disks is a resource type in Azure Resource Manager and is the recommended storage option instead of virtual hard disks (VHDs) that are stored in Azure storage accounts. Managed disks automatically align with the Azure availability set of the virtual machine they're attached to. They increase the availability of your virtual machine and the services that are running on it.
For more information, see [Azure Managed Disks overview][azure-storage-managed-disks-overview]. We recommend that you use managed disks because they simplify the deployment and management of your virtual machines.
+## Comparison of different deployment types for SAP workload
+Here's a quick summary of the various deployment types that are available for SAP workloads.
+| Features | Virtual Machine Scale Set with Flexible Orchestration (FD=1) | Availability Zone | Availability Set |
+|--|--|--|--|
+| Deployment behavior | Instances land across 1, 2 or 3 availability zones and are distributed across different racks within each zone on a best effort basis | Instances land across 1, 2 or 3 availability zones | Instances land within a region and are distributed across different fault/update domains |
+| Assign VM and managed disks to specific Availability zone | Yes | Yes | No |
+| Fault domain - Max spreading (Azure will maximally spread instances) | Yes | No | Yes, based on the number of fault domains defined during creation. |
+| Compute to storage fault domain alignment | No | No | Yes |
+| Capacity Reservation | Yes (assign capacity reservation at VM level) | Yes | No |
+> [!NOTE]
+>
+> * Update domains have been deprecated in Flexible Orchestration mode. For more information, see [Migrate deployments and resources to Virtual Machine Scale Sets in Flexible orchestration](../../virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-migration-resources.md)
+> * For more information on compute to storage fault domain alignment, see [Choosing the right number of fault domains for Virtual Machine Scale Set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md) and [How do availability sets work?](../../virtual-machines/availability-set-overview.md#how-do-availability-sets-work)
+## High availability deployment options for SAP workload
+When deploying a high availability SAP workload on Azure, it's important to take into account the various deployment types available, and how they can be applied across different Azure regions (such as across zones, in a single zone, or in a region with no zones). The following table illustrates several high availability options for SAP systems in Azure regions.
+| System type | Across different zones in a region | In a single zone of a region | In a region with no zones |
+| - | | | |
+| High Availability SAP system | [Flexible scale set with FD=1](./virtual-machine-scale-set-sap-deployment-guide.md) | [Availability Sets with Proximity Placement Groups](./proximity-placement-scenarios.md#proximity-placement-groups-with-availability-set-deployments) | [Availability Sets](./sap-high-availability-architecture-scenarios.md#multiple-instances-of-virtual-machines-in-the-same-availability-set) |
+| | [Availability Sets and Availability Zones with Proximity Placement Groups](./proximity-placement-scenarios.md#combine-availability-sets-and-availability-zones-with-proximity-placement-groups) | [Flexible scale set with FD=1](./virtual-machine-scale-set-sap-deployment-guide.md) (select only one zone) | [Flexible scale set with FD=1](./virtual-machine-scale-set-sap-deployment-guide.md) (no zones are defined) |
+| | [Availability Zones](./high-availability-zones.md) | [Availability Sets](./sap-high-availability-architecture-scenarios.md#multiple-instances-of-virtual-machines-in-the-same-availability-set) | |
+* **Deployment across different zones in a region:** For the highest availability, SAP systems should be deployed across different zones in a region. This ensures that if one zone is unavailable, the SAP system continues to be available in another zone. If you're deploying a new SAP workload across availability zones, it's advised to use the flexible virtual machine scale set with FD=1 deployment option. It allows you to deploy multiple VMs across different zones in a region without worrying about capacity constraints or placement groups. The scale set framework makes sure that the VMs deployed with the scale set are distributed across different fault domains within the zone in a best effort manner. All the highly available SAP components, like SAP ASCS/ERS and SAP databases, are distributed across different zones, whereas multiple application servers in each zone are distributed across different fault domains on a best effort basis.
+* **Deployment in a single zone of a region:** To deploy your high-availability SAP system regionally in a location with multiple availability zones, and if it's essential for all components of the system to be in a single zone, it's advised to use the Availability Sets with Proximity Placement Groups deployment option. This approach allows you to group all SAP system components in a single availability zone, ensuring that the virtual machines within the availability set are spread across different fault and update domains. While this deployment aligns compute to storage fault domains, proximity isn't guaranteed. However, as this deployment option is regional, it doesn't support Azure Site Recovery for zone-to-zone disaster recovery. Moreover, this option restricts the entire SAP deployment to one datacenter, which may lead to capacity limitations if you need to change the SKU size or scale out application instances.
+* **Deployment in a region with no zones:** If you're deploying your SAP system in a region that doesn't have any zones, it's advised to use Availability sets. This option provides redundancy and fault tolerance by placing VMs in different fault domains and update domains.
+> [!IMPORTANT]
>
+> The deployment options for Azure regions are only suggestions. The most suitable deployment strategy for your SAP system depends on your particular requirements and environment.
+## Utilizing Azure infrastructure high availability to protect SAP applications
-![Figure 1: High-availability SAP application server][sap-ha-guide-figure-2000]
+If you decide not to use functionalities such as WSFC or Pacemaker on Linux (supported for SUSE Linux Enterprise Server 12 and later, and Red Hat Enterprise Linux 7 and later), Azure VM restart is utilized. It restores functionality in the SAP system during planned and unplanned downtime of the Azure physical server infrastructure and the overall underlying Azure platform.
-_**Figure 1:** High-availability SAP application server_
+For more information about the approach, see [Utilize Azure infrastructure VM restart to achieve higher availability of the SAP system][sap-higher-availability].
-You must place all virtual machines that host SAP application server instances in the same Azure availability set. An Azure availability set ensures that:
+## High availability of SAP applications on Azure IaaS
-* All virtual machines are not part of the same update domain.
- An update domain ensures that the virtual machines aren't updated at the same time during planned maintenance downtime.
+To achieve full SAP system high availability, you must protect all critical SAP system components. For example:
- The basic functionality, which builds on different update and fault domains within an Azure scale unit, was already introduced in the [update domains](./planning-guide.md#update-domains) section.
+* Redundant SAP application servers.
+* Unique components. An example might be a single point of failure (SPOF) component, such as an SAP ASCS/SCS instance or a database management system (DBMS).
-* All virtual machines are not part of the same fault domain.
- A fault domain ensures that virtual machines are deployed so that no single point of failure affects the availability of all virtual machines.
+The next sections discuss how to achieve high availability for all three critical SAP system components.
-The number of update and fault domains that can be used by an Azure availability set within an Azure scale unit is finite. If you keep adding VMs to a single availability set, two or more VMs will eventually end up in the same fault or update domain.
+### High-availability architecture for SAP application servers
-If you deploy a few SAP application server instances in their dedicated VMs, assuming that we have five update domains, the following picture emerges. The actual maximum number of update and fault domains within an availability set might change in the future:
+> ![Windows logo.][Logo_Windows] Windows and ![Linux logo.][Logo_Linux] Linux
-![Figure 2: High availability of SAP application servers in an Azure availability set][planning-guide-figure-3000]
-_**Figure 2:** High availability of SAP application servers in an Azure availability set_
+You usually don't need a specific high-availability solution for the SAP application server and dialog instances. You achieve high availability by redundancy, configuring multiple dialog instances in separate Azure virtual machines. You should have at least two SAP application instances installed in two Azure virtual machines.
-For more information, see [Manage the availability of Windows virtual machines in Azure][azure-virtual-machines-manage-availability].
+Depending on the deployment type (flexible scale set with FD=1, availability zone, or availability set), distribute your SAP application server instances accordingly to achieve redundancy.
-For more information, see the [Azure availability sets](./planning-guide.md#availability-sets) section of the Azure virtual machines planning and implementation for SAP NetWeaver document.
+* **Flexible scale set with platformFaultDomainCount=1 (FD=1):** SAP application servers deployed with a flexible scale set (FD=1) are distributed across different availability zones, and the scale set also distributes VMs across different fault domains within each zone on a best-effort basis. This ensures that if one zone is unavailable, the SAP application servers deployed in another zone continue to be available.
+* **Availability zone:** SAP application servers deployed across availability zones are spread across different zones to achieve redundancy. This ensures that if one zone is unavailable, the SAP application servers deployed in another zone continue to be available. For more information, see [SAP workload configurations with Azure Availability Zones](./high-availability-zones.md).
+* **Availability set:** SAP application servers deployed in an availability set are distributed across different [fault domains](./planning-guide.md#fault-domains) and [update domains](./planning-guide.md#update-domains). Placing VMs in different update domains ensures that they aren't updated at the same time during planned maintenance downtime, whereas placing them in different fault domains protects them from hardware failures or power interruptions within a datacenter. The number of fault and update domains that an Azure availability set can use within an Azure scale unit is finite. If you keep adding VMs to a single availability set, two or more VMs eventually end up in the same fault or update domain. For more information, see the [Azure availability sets](./planning-guide.md#availability-sets) section of the Azure virtual machines planning and implementation for SAP NetWeaver document. A CLI sketch of an availability set deployment follows this list.
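+
+As a hedged illustration, the following Azure CLI sketch creates an availability set with explicit fault and update domain counts and places an application server VM into it. The names, domain counts, image alias, and VM size are placeholder assumptions; the maximum domain counts vary by region.
+
+```azurecli
+# Create an availability set; fault/update domain counts are illustrative
+# and limited by what the target region supports
+az vm availability-set create \
+  --resource-group rg-sap-demo \
+  --name avset-sap-app \
+  --platform-fault-domain-count 2 \
+  --platform-update-domain-count 5
+
+# Place each SAP application server VM into the availability set
+az vm create \
+  --resource-group rg-sap-demo \
+  --name vm-sap-app01 \
+  --availability-set avset-sap-app \
+  --image SLES \
+  --size Standard_E16s_v5 \
+  --admin-username azureuser \
+  --generate-ssh-keys
+```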
-**Unmanaged disks only:** Because the Azure storage account is a potential single point of failure, it's important to have at least two Azure storage accounts, in which at least two virtual machines are distributed. In an ideal setup, the disks of each virtual machine that is running an SAP dialog instance would be deployed in a different storage account.
+**Unmanaged disks only:** When you use unmanaged disks with an availability set, the Azure storage account becomes a single point of failure. It's therefore important to have at least two Azure storage accounts, with at least two virtual machines distributed between them. In an ideal setup, the disks of each virtual machine that runs an SAP dialog instance would be deployed in a different storage account.
> [!IMPORTANT]
> We strongly recommend that you use Azure managed disks for your SAP high-availability installations. Because managed disks automatically align with the availability set of the virtual machine they are attached to, they increase the availability of your virtual machine and the services that are running on it.
->
### High-availability architecture for an SAP ASCS/SCS instance on Windows

> ![Windows logo.][Logo_Windows] Windows
->
-
-You can use a WSFC solution to protect the SAP ASCS/SCS instance. The solution has two variants:
-* **Cluster the SAP ASCS/SCS instance by using clustered shared disks**: For more information about this architecture, see [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk][sap-high-availability-guide-wsfc-shared-disk].
+You can use a WSFC solution to protect the SAP ASCS/SCS instance. Depending on the cluster share configuration (file share or shared disk), refer to the appropriate solution for your storage type. A CLI sketch of creating an Azure shared disk follows the lists below.
-* **Cluster the SAP ASCS/SCS instance by using file share**: For more information about this architecture, see [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using file share][sap-high-availability-guide-wsfc-file-share].
+* **Cluster share - File share**
+
+ * [High Availability of SAP ASCS/SCS instance using SMB on Azure Files](./high-availability-guide-windows-azure-files-smb.md).
+ * [High Availability of SAP ASCS/SCS instance using SMB on Azure NetApp Files](./high-availability-guide-windows-netapp-files-smb.md).
+ * [High Availability of SAP ASCS/SCS instance using Scale Out File Server (SOFS)](./sap-high-availability-guide-wsfc-file-share.md).
-* **Cluster the SAP ASCS/SCS instance by using ANF SMB share**: For more information about this architecture, see Cluster [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using ANF SMB file share](./high-availability-guide-windows-netapp-files-smb.md).
+* **Cluster share - Shared disk**
+
+ * [High availability of SAP ASCS/SCS instance using Azure shared disk](./sap-high-availability-guide-wsfc-shared-disk.md).
+ * [High availability of SAP ASCS/SCS instance using SIOS](./sap-high-availability-guide-wsfc-shared-disk.md).
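+
+As a sketch of the shared disk variant, the following Azure CLI commands create a premium SSD with `maxShares` set to 2 and attach it to both cluster nodes. The resource names, disk size, and node VMs are placeholder assumptions, not a complete WSFC setup.
+
+```azurecli
+# Create a premium SSD that both WSFC nodes can attach concurrently
+az disk create \
+  --resource-group rg-sap-demo \
+  --name disk-ascs-shared \
+  --size-gb 256 \
+  --sku Premium_LRS \
+  --max-shares 2
+
+# Attach the shared disk to each cluster node VM
+az vm disk attach --resource-group rg-sap-demo --vm-name vm-ascs-node1 --name disk-ascs-shared
+az vm disk attach --resource-group rg-sap-demo --vm-name vm-ascs-node2 --name disk-ascs-shared
+```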
### High-availability architecture for an SAP ASCS/SCS instance on Linux

> ![Linux logo.][Logo_Linux] Linux
->
-> For more information about clustering the SAP ASCS/SCS instance by using the SLES cluster framework, see [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications][sap-suse-ascs-ha]. For alternative HA architecture on SLES, which doesn't require highly available NFS see [High-availability guide for SAP NetWeaver on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications][sap-suse-ascs-ha-anf].
-For more information about clustering the SAP ASCS/SCS instance by using the Red Hat cluster framework, see [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux](./high-availability-guide-rhel.md)
+On Linux, the configuration of SAP ASCS/SCS instance clustering depends on the operating system distribution and the type of storage being used. Implement the solution that suits your specific OS cluster framework.
+* **SUSE Linux Enterprise Server (SLES)**
+
+ * [High Availability of SAP ASCS/SCS instance using NFS with simple mount](./high-availability-guide-suse-nfs-simple-mount.md).
+ * [High Availability of SAP ASCS/SCS instance using NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md).
+ * [High Availability of SAP ASCS/SCS instance using NFS on Azure NetApp Files](./high-availability-guide-suse-netapp-files.md).
+ * [High Availability of SAP ASCS/SCS instance using NFS Server](./high-availability-guide-suse-nfs.md).
+
+* **Red Hat Enterprise Linux (RHEL)**
+
+ * [High Availability of SAP ASCS/SCS instance using NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md).
+ * [High Availability of SAP ASCS/SCS instance using NFS on Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md).
### SAP NetWeaver multi-SID configuration for a clustered SAP ASCS/SCS instance
-> ![Windows logo.][Logo_Windows] Windows
->
-> Multi-SID is supported with WSFC, using file share and shared disk.
->
-> For more information about multi-SID high-availability architecture on Windows, see:
+> ![Windows logo.][Logo_Windows] Windows
-* [SAP ASCS/SCS instance multi-SID high availability for Windows Server Failover Clustering and file share][sap-ascs-ha-multi-sid-wsfc-file-share]
+Multi-SID is supported with WSFC, using file share and shared disk. For more information about multi-SID high-availability architecture on Windows, see:
-* [SAP ASCS/SCS instance multi-SID high availability for Windows Server Failover Clustering and shared disk][sap-ascs-ha-multi-sid-wsfc-shared-disk]
+* File share: [SAP ASCS/SCS instance multi-SID high availability for Windows Server Failover Clustering and file share][sap-ascs-ha-multi-sid-wsfc-file-share].
+* Shared disk: [SAP ASCS/SCS instance multi-SID high availability for Windows Server Failover Clustering and shared disk][sap-ascs-ha-multi-sid-wsfc-shared-disk].
> ![Linux logo.][Logo_Linux] Linux
->
-> Multi-SID clustering is supported on Linux Pacemaker clusters for SAP ASCS/ERS, limited to **five** SAP SIDs on the same cluster.
-> For more information about multi-SID high-availability architecture on Linux, see:
-
-* [HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide](./high-availability-guide-suse-multi-sid.md)
-* [HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide](./high-availability-guide-rhel-multi-sid.md)
-
-### High-availability DBMS instance
-
-The DBMS also is a single point of contact in an SAP system. You need to protect it by using a high-availability solution. The following figure shows a SQL Server Always On high-availability solution in Azure, with Windows Server Failover Clustering and the Azure internal load balancer. SQL Server Always On replicates DBMS data and log files by using its own DBMS replication. In this case, you don't need cluster shared disk, which simplifies the entire setup.
-
-![Figure 3: Example of a high-availability SAP DBMS, with SQL Server Always On][sap-ha-guide-figure-2003]
-_**Figure 3:** Example of a high-availability SAP DBMS, with SQL Server Always On_
+Multi-SID clustering is supported on Linux Pacemaker clusters for SAP ASCS/ERS, limited to **five** SAP SIDs on the same cluster. For more information about multi-SID high-availability architecture on Linux, see:
-For more information about clustering SQL Server DBMS in Azure by using the Azure Resource Manager deployment model, see these articles:
+* SUSE Linux Enterprise Server (SLES): [HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide](./high-availability-guide-suse-multi-sid.md).
+* Red Hat Enterprise Linux (RHEL): [HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide](./high-availability-guide-rhel-multi-sid.md).
-* [Configure an Always On availability group in Azure virtual machines manually by using Resource Manager](/azure/azure-sql/virtual-machines/windows/availability-group-overview)
+### High availability of the DBMS instance
-* [Configure an Azure internal load balancer for an Always On availability group in Azure][virtual-machines-windows-portal-sql-alwayson-int-listener]
+In an SAP system, the DBMS also serves as a single point of failure, so it's important to protect the database by implementing a high-availability solution. The high-availability solution for the DBMS varies based on the database used for the SAP system. Follow the guidelines in the following table for your database; a sketch of one option, SAP HANA System Replication, follows the table.
-For more information about clustering SAP HANA DBMS in Azure by using the Azure Resource Manager deployment model, see [High availability of SAP HANA on Azure virtual machines (VMs)][sap-hana-ha].
+| Database | HA recommendation |
+| --- | --- |
+| SAP HANA | [HANA System Replication (HSR)](sap-hana-availability-across-regions.md) |
+| Oracle | [Oracle Data Guard](../../virtual-machines/workloads/oracle/oracle-reference-architecture.md#disaster-recovery-for-oracle-databases) |
+| IBM DB2 | [High availability disaster recovery (HADR)](dbms-guide-ha-ibm.md) |
+| Microsoft SQL | [Microsoft SQL Always On](dbms-guide-sqlserver.md#sql-server-always-on) |
+| SAP ASE | [ASE HADR Always On](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/installation-procedure-for-sybase-16-3-patch-level-3-always-on/ba-p/368199) |
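+
+As one hedged example from the table, the following sketch outlines enabling SAP HANA System Replication between two nodes. It assumes a running HANA system with instance number 00, placeholder site names SITE_A/SITE_B, a primary host named hana-primary, and commands run as the <sid>adm user; consult the linked guidance for the full procedure.
+
+```bash
+# On the primary node: enable system replication
+hdbnsutil -sr_enable --name=SITE_A
+
+# On the secondary node: stop HANA, register it against the primary, restart
+sapcontrol -nr 00 -function StopSystem HDB
+hdbnsutil -sr_register --remoteHost=hana-primary --remoteInstance=00 \
+  --replicationMode=sync --operationMode=logreplay --name=SITE_B
+sapcontrol -nr 00 -function StartSystem HDB
+
+# Verify the replication state on either node
+hdbnsutil -sr_state
+```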
sap Sap Higher Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-higher-availability-architecture-scenarios.md
[getting-started]:get-started.md
[sap-higher-availability]:sap-higher-availability-architecture-scenarios.md
-[sap-high-availability-architecture-scenarios-sap-app-ha]:sap-high-availability-architecture-scenarios.md#baed0eb3-c662-4405-b114-24c10a62954e
+[sap-high-availability-architecture-scenarios-sap-app-ha]:sap-high-availability-architecture-scenarios.md
[planning-guide]:planning-guide.md
[planning-guide-1.2]:planning-guide.md#e55d1e22-c2c8-460b-9897-64622a34fdff
sentinel Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-built-in.md
Title: Detect threats with built-in analytics rules in Microsoft Sentinel | Microsoft Docs
description: Learn how to use out-of-the-box threat detection rules, based on built-in templates, that notify you when something suspicious happens.
Previously updated : 11/09/2021
Last updated : 05/28/2023

# Detect threats out-of-the-box
-After you've [connected your data sources](quickstart-onboard.md) to Microsoft Sentinel, you'll want to be notified when something suspicious occurs. That's why Microsoft Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules.
+After you've [set up Microsoft Sentinel to collect data from all over your organization](connect-data-sources.md), you'll need to dig through all that data to detect security threats to your environment. But don't worry&mdash;Microsoft Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules to do all that work for you. These rules are known as **analytics rules**.
-Rule templates were designed by Microsoft's team of security experts and analysts based on known threats, common attack vectors, and suspicious activity escalation chains. Rules created from these templates will automatically search across your environment for any activity that looks suspicious. Many of the templates can be customized to search for activities, or filter them out, according to your needs. The alerts generated by these rules will create incidents that you can assign and investigate in your environment.
+Microsoft's team of security experts and analysts designed these analytics rule templates based on known threats, common attack vectors, and suspicious activity escalation chains. Rules created from these templates automatically search across your environment for any activity that looks suspicious. Many of the templates can be customized to search for activities, or filter them out, according to your needs. The alerts generated by these rules create incidents that you can assign and investigate in your environment.
This article helps you understand how to detect threats with Microsoft Sentinel:
## View built-in detections
-To view all analytics rules and detections in Microsoft Sentinel, go to **Analytics** > **Rule templates**. This tab contains all the Microsoft Sentinel built-in rules, as well as the **Threat Intelligence** rule type.
+To view all analytics rules and detections in Microsoft Sentinel, go to **Analytics** > **Rule templates**. This tab contains all the Microsoft Sentinel built-in rules, according to the types displayed in the following table.
:::image type="content" source="media/tutorial-detect-built-in/view-oob-detections.png" alt-text="Screenshot shows built-in detection rules to find threats with Microsoft Sentinel.":::
Built-in detections include:
| Rule type | Description |
| --- | --- |
| **Microsoft security** | Microsoft security templates automatically create Microsoft Sentinel incidents from the alerts generated in other Microsoft security solutions, in real time. You can use Microsoft security rules as a template to create new rules with similar logic. <br><br>For more information about security rules, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md). |
-| <a name="fusion"></a>**Fusion**<br>(some detections in Preview) | Microsoft Sentinel uses the Fusion correlation engine, with its scalable machine learning algorithms, to detect advanced multistage attacks by correlating many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template. <br><br>The Fusion engine can also correlate alerts produced by [scheduled analytics rules](#scheduled) with those from other systems, producing high-fidelity incidents as a result. |
-| **Machine learning (ML) behavioral analytics** | ML behavioral analytics templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. <br><br>Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type. |
-| **Threat Intelligence** | Take advantage of threat intelligence produced by Microsoft to generate high fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. This unique rule is not customizable, but when enabled, will automatically match Common Event Format (CEF) logs, Syslog data or Windows DNS events with domain, IP and URL threat indicators from Microsoft Threat Intelligence. Certain indicators will contain additional context information through MDTI (**Microsoft Defender Threat Intelligence**).<br><br>For more information on how to enable this rule, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).<br>For more details on MDTI, see [What is Microsoft Defender Threat Intelligence](/../defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti)
-| <a name="anomaly"></a>**Anomaly** | Anomaly rule templates use machine learning to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While the configurations of out-of-the-box rules can't be changed or fine-tuned, you can duplicate a rule and then change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use customizable anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Microsoft Sentinel](work-with-anomaly-rules.md). |
+| <a name="fusion"></a>**Fusion**<br>(some detections in Preview) | Microsoft Sentinel uses the Fusion correlation engine, with its scalable machine learning algorithms, to detect advanced multistage attacks by correlating many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template. <br><br>The Fusion engine can also correlate alerts produced by [scheduled analytics rules](#scheduled) with alerts from other systems, producing high-fidelity incidents as a result. |
+| **Machine learning (ML) behavioral analytics** | ML behavioral analytics templates are based on proprietary Microsoft machine learning algorithms, so you can't see the internal logic of how they work and when they run. <br><br>Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type. |
+| **Threat Intelligence** | Take advantage of threat intelligence produced by Microsoft to generate high fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. This unique rule is not customizable, but when enabled, automatically matches Common Event Format (CEF) logs, Syslog data or Windows DNS events with domain, IP and URL threat indicators from Microsoft Threat Intelligence. Certain indicators contain additional context information through MDTI (**Microsoft Defender Threat Intelligence**).<br><br>For more information on how to enable this rule, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).<br>For more details on MDTI, see [What is Microsoft Defender Threat Intelligence](/../defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti) |
+| <a name="anomaly"></a>**Anomaly** | Anomaly rule templates use machine learning to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While the configurations of out-of-the-box rules can't be changed or fine-tuned, you can duplicate a rule, and then change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use customizable anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Microsoft Sentinel](work-with-anomaly-rules.md). |
| <a name="scheduled"></a>**Scheduled** | Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules. <br><br>Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. For more information, see [Advanced multistage attack detection](configure-fusion-rules.md#configure-scheduled-analytics-rules-for-fusion-detections).<br><br>**Tip**: Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule. <br><br>We recommend being mindful of when you enable a new or edited analytics rule to ensure that the rules will get the new stack of incidents in time. For example, you might want to run a rule in sync with when your SOC analysts begin their workday, and enable the rules then. |
| <a name="nrt"></a>**Near-real-time (NRT)**<br>(Preview) | NRT rules are a limited set of scheduled rules, designed to run once every minute, in order to supply you with information as up-to-the-minute as possible. <br><br>They function mostly like scheduled rules and are configured similarly, with some limitations. For more information, see [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md). |
This procedure describes how to use built-in analytics rule templates.
> - You can also **push rules to Microsoft Sentinel via [API](/rest/api/securityinsights/) and [PowerShell](https://www.powershellgallery.com/packages/Az.SecurityInsights/0.1.0)**, although doing so requires additional effort.
>
> When using API or PowerShell, you must first export the rules to JSON before enabling the rules. API or PowerShell may be helpful when enabling rules in multiple instances of Microsoft Sentinel with identical settings in each instance.
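
As a minimal sketch of the API route, the following `az rest` call creates a scheduled analytics rule directly against the Security Insights REST API. The subscription, resource group, workspace, rule GUID, query, and the `api-version` value are all placeholder assumptions; check the API reference for the version supported in your environment.

```azurecli
# Create or update a scheduled analytics rule via the REST API
# (all identifiers and the KQL query below are placeholders)
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/providers/Microsoft.SecurityInsights/alertRules/<rule-guid>?api-version=2022-11-01" \
  --body '{
    "kind": "Scheduled",
    "properties": {
      "displayName": "Example scheduled rule",
      "enabled": true,
      "severity": "Medium",
      "query": "SigninLogs | where ResultType != 0",
      "queryFrequency": "PT1H",
      "queryPeriod": "PT1H",
      "triggerOperator": "GreaterThan",
      "triggerThreshold": 0,
      "suppressionEnabled": false,
      "suppressionDuration": "PT1H"
    }
  }'
```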
->
+
+### Access permissions for analytics rules
+
+When you create an analytics rule, an access